- 1 School of Foreign Languages, Southeast University, Nanjing, China
- 2 Technical College for the Deaf, Tianjin University of Technology, Tianjin, China
- 3 National Research Center for Language and Well-Being, Shanghai Jiao Tong University, Shanghai, China
- 4 College of Language Intelligence, Sichuan International Studies University, Chongqing, China
Extensive research has demonstrated that facial occlusion significantly affects individuals’ emotion recognition abilities. However, whether facial occlusion exacerbates the difficulty of emotion recognition for deaf individuals remains elusive. This study employed eye-tracking technology to investigate the mechanisms underlying emotion perception in deaf individuals under different facial occlusion conditions. We compared the percentage of gaze fixation on the eye and mouth regions in deaf and hearing participants as they judged different emotions (positive, neutral, negative) under three occlusion conditions (no occlusion, sunglasses, mask). The behavioral and eye-tracking results reveal that, first, facial occlusion by sunglasses and masks significantly impairs emotion perception and social communication for deaf individuals. Second, the eye area is more crucial for recognizing negative emotions, while the mouth area is critical for recognizing positive emotions. Third, deaf individuals exhibit a “happiness superiority effect,” responding more readily to positive emotions and showing an avoidance bias toward negative emotions. In addition, the visual attention allocation strategies of deaf individuals tend to be relatively fixed and less adaptable to task demands. Overall, these findings support the integrative hypothesis of visual function in deaf individuals and provide insights for enhancing facial emotion recognition and optimizing social interaction strategies for the deaf community.
1 Introduction
Facial expressions, the primary means by which emotions are conveyed, serve as essential external manifestations of emotional states. These expressions provide critical clues about an individual’s internal state, significantly impacting social abilities and forming a crucial foundation for establishing complete and effective social interactions (Liu and Ge, 2014). Studies on fixation patterns for emotional faces have shown that, when viewing different emotional images, fixation proportions can differ significantly across facial regions; that is, different areas of interest may play dominant roles in conveying different emotions. For instance, numerous studies indicate that the eyes are the most important area for recognizing negative emotions, including sadness and fear, while the mouth is more closely associated with positive emotions, such as happiness (Beaudry et al., 2014; Bombari et al., 2013; Schurgin et al., 2014). However, Blais et al. (2012) demonstrated that the mouth region provided critical cues for eight static and dynamic facial expressions (six basic emotions, pain, and neutral expressions), suggesting that although specific facial regions strongly influence the recognition of specific emotions, emotion recognition relies on other facial regions as well. Additionally, Nummenmaa and Calvo (2015) found that happy faces were recognized more quickly and accurately than other expressions in emotion classification tasks. This finding aligns with the importance of positive emotions in social interactions, as happy expressions typically convey positive social signals, making them easier to recognize and respond to.
Currently, the use of facial occlusions, such as sunglasses and masks, has become increasingly prevalent due to health concerns (Rabadan et al., 2022), esthetic preferences (Harris, 1991), and occupational requirements. However, such occlusions can significantly impair the recognition of emotions, reducing the identifiability of facial expressions and the accuracy of emotion recognition (Carbon, 2020; Ruba and Pollak, 2020). Sunglasses, which obscure the eyes, the crucial feature for expressing and recognizing emotions, can diminish the overall communicative effectiveness of facial expressions, affecting the accuracy and speed of emotion recognition. Studies by Kramer and Ritchie (2016) and Graham and Ritchie (2019) support these findings, indicating that sunglasses impair observers’ ability to match faces and reduce trust in the observed individuals. Masks, another common type of facial occlusion, obscure the lower part of the face, including the mouth, an area crucial for conveying and recognizing emotions. Since the outbreak of the COVID-19 pandemic, the frequency of mask-wearing has increased significantly. In addition, some professionals, including healthcare workers and construction workers, often wear masks and protective gear to protect against viruses, dust, or chemicals. Recent studies confirm that masks have a negative impact on the ability to recognize emotions (Parada-Fernandez et al., 2022; Pazhoohi et al., 2021). Grahlow et al. (2022) found that the ability to identify emotions in masked faces was impaired across all six emotional conditions. Mask-wearing reduces the perceived intensity of emotions, thereby diminishing individuals’ ability and confidence in recognizing facial expressions (Kastendieck et al., 2022; Pazhoohi et al., 2021). Malak and Yildirim (2022) conducted an eye-tracking investigation which revealed that the presence of masks significantly diminishes the accuracy of recognizing fearful facial expressions, without exerting a similar impact on the perception of angry or neutral faces. The neural underpinnings of these findings were further examined by Prete et al. (2022) through electroencephalogram (EEG) analysis. This research posited that masked faces could act as anxiety-provoking stimuli, thereby impairing the efficiency and efficacy of emotional recognition processes. Such impairment was evidenced by increased N170 amplitudes, a marker of early-stage face processing, as well as decreased P2 amplitudes, which are associated with subsequent higher-level cognitive processing stages. These insights are substantiated by complementary behavioral and functional magnetic resonance imaging (fMRI) studies from Abutalebi et al. (2024), suggesting that the use of masks imposes additional demands on the mechanisms responsible for facial emotion recognition, affecting both the speed of this process and the allocation of neural resources involved.
Due to the absence of auditory input, the deaf community faces distinct challenges, such as increased communication difficulties, missed information, and heightened social disconnection in masked conditions (Gutierrez-Sigut et al., 2022; Mansutti et al., 2023). Despite these challenges, little research has examined the impact of facial occlusions on emotion recognition in deaf individuals. Generally, three key hypotheses explain the effects of auditory impairment on visual function in the deaf: the deficiency hypothesis, the compensation hypothesis, and the integration hypothesis (Ambert-Dahan et al., 2017; Alencar et al., 2019; Dye and Bavelier, 2010). The “deficiency hypothesis” posits that auditory deprivation may disrupt the interplay among senses, leading to deficits in emotion perception (Ambert-Dahan et al., 2017; Esposito et al., 2017; Ludlow et al., 2010). Lau et al. (2022) assessed 59 deaf signers’ ratings of arousal, valence, invariant characteristics, and trait-like features of facial images under masked and unmasked conditions. Their findings revealed that while deaf signers perceive facial expressions more intensely than hearing individuals, they are also more inhibited by face masks. Amadeo et al. (2022) emphasized that face masks particularly impair the recognition of low-intensity expressions of happiness in deaf individuals. Furthermore, even when core linguistic comprehension in sign language remains intact, the ability to attribute emotions and attitudes is compromised when the lower face is obscured (Giovanelli et al., 2023). The “compensation hypothesis” posits that deaf individuals develop enhanced visual functions to compensate for the loss of auditory information (Alencar et al., 2019). This compensation may manifest as faster reaction times or an increased perceptual range in peripheral vision (Buckley et al., 2010). Supporting this idea, studies have shown that deaf individuals often outperform hearing controls in target face matching tasks, suggesting a general visual processing advantage (Megreya and Bindemann, 2017).
In contrast, Rodger et al. (2021) found that individuals with early-onset severe deafness and hearing individuals performed similarly in recognizing six basic facial expressions, and that dynamic stimuli did not provide an advantage over static expressions for the deaf. Dye and Bavelier (2010) proposed the “integration hypothesis,” which encompasses both the deficits and the compensatory phenomena in the visual abilities of deaf individuals. Specifically, the changes in visual function for deaf individuals are dual-faceted: auditory deprivation can impair the development of the alerting network but can also enhance basic orienting functions, such as movement and engagement (Teresa Daza and Phillips-Silver, 2013), and spatial environment perception (Bell et al., 2019). Further supporting the complexity of these changes, attention network tests and brain network analyses by Ma et al. (2023) found that weakened fronto-occipital connectivity in the brains of deaf individuals altered alerting functions, but that deaf individuals might also acquire supplementary compensatory cognitive resources as a result of cortical inefficiency. Despite these insights, there is currently no definitive conclusion on how auditory loss affects the facial expression recognition abilities of deaf individuals.
Taken together, significant progress has been made in studying facial expression recognition, particularly the emotion perception abilities of hearing individuals. However, most research has focused on emotion recognition under unoccluded conditions, neglecting common real-life scenarios involving facial occlusion, such as wearing sunglasses and masks. The impact of facial occlusion on emotion recognition has not been sufficiently studied. Several studies found that different types of facial occlusion (sunglasses, masks) significantly influenced the accuracy of emotion recognition, with the lowest recognition rates for images with masks, followed by sunglasses (Kim et al., 2022; Noyes et al., 2021). This impact might be more pronounced for the deaf population, who rely heavily on visual information for emotion perception (Amadeo et al., 2022). In light of this, the present study uses eye-tracking technology, with hearing individuals as a control group, to examine the effects of different emotion types (positive, neutral, negative) and facial occlusion conditions (sunglasses, mask, original face) on facial emotion recognition in deaf individuals. We formulate two specific hypotheses. First, facial occlusions (such as masks and sunglasses) are expected to impair emotion recognition to a greater extent in deaf individuals than in hearing individuals, given that deaf individuals rely more heavily on visual facial cues for emotional interpretation. Second, these occlusion effects are predicted to be emotion-specific, with sunglasses most severely disrupting the recognition of negative emotions and masks most impairing the recognition of positive emotions.
2 Methods
2.1 Participants
The experiment employed a 2 group (deaf people, hearing people) × 3 emotion (positive, neutral, negative) × 3 occlusion (sunglasses, mask, original face) multifactorial design. Group was a between-subjects variable, while emotion and occlusion were within-subjects variables. A total of 63 participants were recruited, including 30 deaf individuals (two males were excluded, one due to heterogeneity concerns and one due to incomplete data, leaving 17 males and 11 females) and 33 hearing individuals (13 males, 20 females). Based on the final sample size of 61, a statistical power of 0.95, and an α level of 0.05, a sensitivity analysis in G*Power 3.1 (Faul et al., 2009) yielded a detectable effect size of 0.15 for the current study.
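For readers without access to G*Power, the R sketch below illustrates a rough sensitivity analysis with the pwr package. It covers only the between-subjects group contrast and omits the repeated-measures corrections that G*Power applies, so it will not reproduce the value of 0.15 reported above; it illustrates the procedure rather than the authors’ computation.

```r
# Approximate sensitivity analysis: smallest detectable effect for the
# between-subjects comparison of two groups with N = 61, alpha = .05,
# power = .95. G*Power's repeated-measures module additionally accounts for
# the number of within-subject measurements and their correlation, which is
# why the reported effect size (0.15) differs from this simplified estimate.
library(pwr)

sens <- pwr.f2.test(u = 1,        # numerator df: 2 groups - 1
                    v = 61 - 2,   # denominator df: N - number of groups
                    sig.level = 0.05,
                    power = 0.95)

cat("Detectable effect size f =", round(sqrt(sens$f2), 3), "\n")  # Cohen's f
```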
All participants were undergraduate or master’s students from Tianjin University of Technology. They reported normal or corrected-to-normal vision, no color blindness or color weakness, and no history of mental health issues or cognitive disorders. Before the experiment, participants gave written informed consent and filled out a basic information survey. As shown in Table 1, first, a total of 28 deaf participants were included, of whom 22 used hearing aids, one had received a cochlear implant, and five had no experience using hearing devices. Second, all deaf participants experienced severe to profound hearing loss with an average onset at 1.5 years of age; mean hearing loss was 101.9 dB (SD = 32.6 dB, range: 60–207 dB) for the right ear and 97.4 dB (SD = 15.8 dB, range: 65–120 dB) for the left ear. Third, all deaf participants were native signers of Chinese Sign Language, while some of them frequently used spoken Chinese in their daily lives. Since the study aimed to explore the overall impact of hearing loss on facial emotion recognition under occluded conditions, we did not strictly control the language modality of the deaf participants.1 Meanwhile, none of the hearing participants in this study were sign language users. Upon completing the experiment, participants received compensation for their involvement. The study was approved by the Ethical Committee of Tianjin University of Technology.
2.2 Stimuli
This study sought to improve the authenticity of the stimuli by employing the approach introduced by Kim et al. (2022) rather than covering the face with bubbles or segmenting specific facial regions. The experimental materials were selected from the Chinese Affective Picture System (CAPS; Gong et al., 2011), comprising 90 images across three emotional categories (positive, neutral, and negative). Given the limitations of the dataset, a sufficient number of images depicting a single negative emotion to match the other categories was unavailable. To maximize ecological validity within these constraints, multiple discrete negative emotions were included (sadness: n = 9, anger: n = 8, fear: n = 7, disgust: n = 3, surprise: n = 3), with controls for several confounding factors, including arousal and gender. All selected images had arousal ratings above 4 on a 9-point scale, ensuring comparable emotional salience across categories. Statistical analysis confirmed no significant differences in arousal levels between emotional categories (one-way ANOVA: F[2,87] = 0.73, p = 0.48). The mean arousal ratings were 5.05 (SD = 0.19) for positive expressions (happiness), 5.03 (SD = 0.32) for negative expressions, and 4.96 (SD = 0.44) for neutral expressions (calmness). To address potential gender effects, balanced gender representation (15 male and 15 female faces) was maintained within each primary emotion category. Using Adobe Photoshop 2018, the original 90 facial images were modified by superimposing standardized sunglasses and masks, resulting in 90 images featuring sunglasses and another 90 featuring masks, as illustrated in Figure 1a. This process generated a total of 270 facial images: 3 (emotion: positive, neutral, negative) × 3 (occlusion: sunglasses, mask, original face) × 30 (individuals). This methodological approach ensures rigorous control over variables that could influence participants’ emotion recognition accuracy and reaction times, thereby enhancing the internal validity of our study.
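The arousal check reported above can be illustrated with a short R sketch. The data frame below is simulated from the reported means and standard deviations purely as a stand-in for the per-image CAPS ratings; the column names are hypothetical.

```r
# Simulated placeholder for the 90 CAPS images (30 per category) with
# 9-point arousal ratings drawn from the reported means and SDs.
set.seed(1)
stimuli <- data.frame(
  category = rep(c("positive", "neutral", "negative"), each = 30),
  arousal  = c(rnorm(30, mean = 5.05, sd = 0.19),   # positive (happiness)
               rnorm(30, mean = 4.96, sd = 0.44),   # neutral (calmness)
               rnorm(30, mean = 5.03, sd = 0.32))   # negative
)

# One-way ANOVA testing whether mean arousal differs across categories;
# a non-significant F (the paper reports F[2,87] = 0.73, p = 0.48) indicates
# comparable emotional salience across the three categories.
summary(aov(arousal ~ category, data = stimuli))
```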

Figure 1. Stimuli and experimental procedure. (a) Stimuli: G, sunglasses; M, mask; O, original face. (b) Experimental procedure: Neg, negative; Neu, neutral; Pos, positive. (c) Areas of Interest (AOI): Three sets of AOIs were created to analyze the ocular exploration. Blue area: eye, Yellow area: mouth, Red area: the whole face. Adapted with permission from Gong et al. (2011).
2.3 Procedure
The experiment was conducted in a quiet laboratory using a Hasee K610D-i7 D4 laptop with a 15.6-inch screen, a resolution of 1920 × 1080, and a refresh rate of 60 Hz. A Tobii Nano Pro eye tracker with a sampling rate of 60 Hz and a chin rest to stabilize participants’ heads were also used. E-Prime 3.0 software was used for eye-tracking calibration, stimulus presentation, and data recording. Participants performed an emotion judgment task, which required them to rapidly identify the emotional attribute of the stimulus image and respond by pressing a designated key within a specified time frame. Before the experiment, participants underwent a 9-point calibration to ensure precise recording of their eye movements. During the experiment, participants were seated 60 cm from the screen.
The experimental procedure is depicted in Figure 1b. First, a white fixation cross (+) was displayed centrally on a black computer screen for 1,000 ms. Next, the stimulus image was shown in the center of the screen for up to 5,000 ms, and participants were required to press a key to complete the emotion judgment task (negative: 1, positive: 2, neutral: 3). Following the participant’s keypress response, the facial stimulus immediately disappeared from the screen. If no response was made within the 5,000-ms time window, the face stimulus was automatically removed and the trial terminated. In both scenarios, the experiment subsequently advanced to the next trial, reverting to a black background with a white cross displayed at the center. A total of 270 trials were presented in random order without controlling for the sequence of occluded versus unoccluded stimuli. The entire experiment took approximately 40 min. Prior to the formal experiment, participants completed 30 training trials to familiarize themselves with the procedure.
2.4 Data analysis
The experiment collected both behavioral and eye-tracking data from the participants. Behavioral data included reaction times for correct responses and accuracy rates for each participant. RT outliers were identified and removed using a ±2 SD cutoff from each participant’s mean reaction time. Eye-tracking data consisted of the percentage of time spent gazing at the eye region (blue area) and the mouth region (yellow area) relative to the total face region (red area), as illustrated in Figure 1c. Statistical analyses were conducted using R, with visualizations generated through the ggplot2 package (Wickham, 2016). We conducted four separate three-way mixed-design analyses of variance (ANOVAs) to evaluate the effects of participant group, occlusion, emotion, and their interactions. The dependent variables were reaction time, accuracy rate, and the proportion of time spent gazing at the mouth and eye regions. For all parameters, the assumption of homogeneity of variance was verified using Levene’s tests, and p-values were corrected for multiple comparisons via the Bonferroni correction.
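A minimal R sketch of this pipeline is provided below. The trial-level data frame `trials` and its column names (`subject`, `group`, `occlusion`, `emotion`, `rt`, `correct`, `fix_eye`, `fix_mouth`, `fix_face`) are hypothetical, and the afex/emmeans calls are one plausible implementation consistent with the description above, not the authors’ actual code.

```r
# Sketch of the preprocessing and ANOVA pipeline described in Section 2.4.
# Assumes a hypothetical trial-level data frame `trials`; the paper specifies
# only that analyses were run in R (ggplot2 for figures), so the packages
# below are our choice.
library(dplyr)
library(afex)     # aov_ez() for mixed-design ANOVAs
library(emmeans)  # Bonferroni-corrected follow-up comparisons

# 1. Trim RT outliers: keep correct-response RTs within +/- 2 SD of each
#    participant's own mean.
rt_clean <- trials %>%
  filter(correct == 1) %>%
  group_by(subject) %>%
  filter(abs(rt - mean(rt)) <= 2 * sd(rt)) %>%
  ungroup()

# 2. Aggregate to one value per participant x occlusion x emotion cell.
rt_cells <- rt_clean %>%
  group_by(subject, group, occlusion, emotion) %>%
  summarise(rt = mean(rt), .groups = "drop")

aoi_cells <- trials %>%
  group_by(subject, group, occlusion, emotion) %>%
  summarise(accuracy  = mean(correct),
            eye_pct   = mean(fix_eye / fix_face),    # % fixation time on eyes
            mouth_pct = mean(fix_mouth / fix_face),  # % fixation time on mouth
            .groups = "drop")

# 3. Three-way mixed ANOVA (group between-subjects; occlusion and emotion
#    within-subjects), shown for reaction time; the same call is repeated for
#    accuracy, eye_pct, and mouth_pct.
rt_anova <- aov_ez(id = "subject", dv = "rt", data = rt_cells,
                   between = "group", within = c("occlusion", "emotion"))

# 4. Bonferroni-corrected pairwise comparisons for the occlusion x emotion
#    interaction (occlusion contrasts within each level of emotion).
pairs(emmeans(rt_anova, ~ occlusion | emotion), adjust = "bonferroni")
```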
3 Results
3.1 Behavioral results
Table 2 presents the mean and standard deviation of reaction times and accuracy rates for deaf and hearing participants under different occlusion types (sunglasses, mask, original face) and emotion types (positive, neutral, negative).
3.1.1 Results of ANOVAs on reaction times
With participant group, occlusion, and emotion as independent variables, and reaction times as the dependent variable, a three-way mixed-design analysis of variance (ANOVA) was conducted. Results showed significant main effects of occlusion, F(2,118) = 59.034, p < 0.001, ηp² = 0.500, and emotion, F(2,118) = 8.065, p < 0.001, ηp² = 0.120. Moreover, the interaction between occlusion and emotion was significant, F(4,236) = 18.254, p < 0.001, ηp² = 0.236.
Given the significant interaction between occlusion and emotion, simple effects analyses were conducted to examine the effect of occlusion at each level of emotion. Results indicated that the effect of occlusion was significant across all emotional conditions: for positive emotions, F(2,59) = 61.139, p < 0.001, ηp² = 0.675; for neutral emotions, F(2,59) = 5.797, p = 0.005, ηp² = 0.164; for negative emotions, F(2,59) = 10.910, p < 0.001, ηp² = 0.270. Bonferroni-corrected pairwise comparisons revealed that, across all emotional valences (positive, neutral, negative), reaction times were significantly longer for masked faces than for both the sunglasses and original face conditions. As shown in Figure 2, this pattern was most pronounced for positive emotions (MD(M−G) = 289 ms, MD(M−O) = 274 ms). Moreover, the effect of emotion was significant under both the sunglasses, F(2,59) = 17.090, p < 0.001, ηp² = 0.367, and original face conditions, F(2,59) = 14.316, p < 0.001, ηp² = 0.327. Bonferroni-corrected pairwise comparisons further revealed that, under both the sunglasses and original face conditions, reaction times for positive emotions were significantly shorter than those for negative and neutral emotions, suggesting that participants identified positive emotion faces more quickly.

Figure 2. Reaction times illustrating the occlusion (G, sunglasses; M, mask; O, original face) × emotion (Neg, negative; Neu, neutral; Pos, positive) interaction.
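Interaction plots like this one can be produced from participant-level cell means; the ggplot2 sketch below assumes the hypothetical `rt_cells` data frame from the Section 2.4 sketch and is purely illustrative, not the authors’ plotting code.

```r
# Mean RT with standard-error bars for the occlusion x emotion interaction,
# computed from the hypothetical per-participant cell means `rt_cells`.
library(dplyr)
library(ggplot2)

rt_summary <- rt_cells %>%
  group_by(occlusion, emotion) %>%
  summarise(mean_rt = mean(rt),
            se_rt   = sd(rt) / sqrt(n()),
            .groups = "drop")

ggplot(rt_summary, aes(x = occlusion, y = mean_rt,
                       group = emotion, colour = emotion)) +
  geom_line() +
  geom_point(size = 2) +
  geom_errorbar(aes(ymin = mean_rt - se_rt, ymax = mean_rt + se_rt),
                width = 0.1) +
  labs(x = "Occlusion (G = sunglasses, M = mask, O = original face)",
       y = "Reaction time (ms)", colour = "Emotion") +
  theme_minimal()
```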
3.1.2 Results of ANOVAs on accuracy
The ANOVA on accuracy revealed significant main effects of participant group, F(1,59) = 6.325, p = 0.015, ηp² = 0.097, occlusion, F(2,118) = 195.678, p < 0.001, ηp² = 0.768, and emotion, F(2,118) = 12.243, p < 0.001, ηp² = 0.172. All pairwise interactions between the factors reached or approached significance: the interaction between participant group and occlusion was significant, F(2,118) = 3.910, p = 0.023, ηp² = 0.062; the interaction between group and emotion was marginally significant, F(2,118) = 2.923, p = 0.058, ηp² = 0.047; and the interaction between occlusion and emotion was highly significant, F(4,236) = 86.601, p < 0.001, ηp² = 0.595.
Simple effects analyses were conducted to examine the interaction between participant group and occlusion. As shown in Figure 3, under the sunglasses condition, the simple effect of group was significant, F(1,59) = 8.664, p = 0.005, ηp² = 0.128; under the original face condition, the simple effect was also significant, F(1,59) = 6.517, p = 0.013, ηp² = 0.099, with deaf participants showing lower accuracy rates than hearing participants. For both deaf and hearing participants, the simple effects across the three occlusion conditions were significant, F(2,59) = 58.112, p < 0.001, ηp² = 0.663, and F(2,59) = 101.745, p < 0.001, ηp² = 0.775, respectively. Bonferroni-corrected pairwise comparisons further revealed that recognition accuracy was lowest under the mask condition, followed by the sunglasses condition, with the original face condition yielding the highest accuracy.

Figure 3. Accuracy rates illustrating the group (DP: deaf people, HP: hearing people) × occlusion (G, sunglasses; M, mask; O, original face) interaction.
Simple effects analyses of the interaction between group and emotion revealed that, compared to hearing individuals, deaf individuals had a significantly lower recognition rate for negative emotions, F(1,60) = 14.657, p < 0.001, ηp² = 0.196, while no significant differences were found between the two groups in the recognition of neutral and positive emotions. When analyzing the main effect of emotion with group as a moderating variable, both deaf participants, F(2,60) = 7.960, p < 0.001, ηp² = 0.210, and hearing participants, F(2,60) = 5.762, p = 0.005, ηp² = 0.161, showed significant differences in accuracy across emotion conditions. Pairwise comparisons further clarified these patterns: deaf participants demonstrated significantly lower accuracy in recognizing negative emotions compared to neutral and positive emotions. In contrast, hearing participants exhibited the lowest accuracy for positive emotions and the highest accuracy for neutral emotions.
As shown in Figure 4, simple effects analyses of the occlusion and emotion interaction showed significant effects of emotion under the sunglasses, F(2,59) = 34.763, p < 0.001, ηp² = 0.541, mask, F(2,59) = 35.986, p < 0.001, ηp² = 0.550, and original face conditions, F(2,59) = 3.933, p = 0.025, ηp² = 0.118. Pairwise comparisons revealed that under sunglasses, the recognition rate for negative emotions was significantly lower than for the other two emotion types. Under the mask condition, the recognition rate for positive emotions was the lowest, followed by negative emotions, with neutral emotions recognized most accurately. Under the original face condition, neutral emotions had the lowest recognition rate, while positive emotions had the highest. From the perspective of emotion, the simple effect of occlusion was significant for negative emotions, F(2,59) = 158.177, p < 0.001, ηp² = 0.843, and positive emotions, F(2,59) = 124.861, p < 0.001, ηp² = 0.809, but not for neutral emotions. Pairwise comparisons showed that, for negative emotions, recognition accuracy was significantly higher for original faces than under the sunglasses and mask conditions, while for positive emotions, accuracy was highest under sunglasses and lowest under masks. These results underscore the importance of the mouth in recognizing positive emotions and the eyes in recognizing negative emotions.

Figure 4. Accuracy rates illustrating the occlusion (G, sunglasses; M, mask; O, original face) × emotion (Neg: negative, Neu: neutral, Pos: positive) interaction.
3.2 Results of eye-tracking
Table 3 presents the percentage of time spent fixating on the eye and mouth regions for both deaf and hearing participants under various occlusion and emotion conditions.

Table 3. Descriptive statistics for percentage of fixation time on the eye and mouth areas of interest.
3.2.1 Fixation results of the eye area
Among the three independent factors, significant main effects were observed for occlusion, F(2,118) = 422.833, p < 0.001, ηp² = 0.878, and emotion, F(2,118) = 14.748, p < 0.001, ηp² = 0.200. Specifically, time spent fixating on the eye region ranked from longest to shortest as follows: mask condition > original face > sunglasses condition; neutral emotion > negative emotion > positive emotion. Regarding interaction effects, the interaction between occlusion and emotion was highly significant, F(4,236) = 68.511, p < 0.001, ηp² = 0.537, while the other interactions were not significant.
Simple effects analyses of the interaction between occlusion and emotion revealed significant effects of emotion within each occlusion type, as illustrated in Figure 5. Under the mask condition, the simple effect of emotion was significant, F(2,59) = 72.543, p < 0.001, ηp² = 0.711, and the visual attention participants devoted to the eye region ranked positive emotions > neutral emotions > negative emotions. Under the original face condition, the simple effect of emotion was also significant, F(2,59) = 22.290, p < 0.001, ηp² = 0.430; pairwise comparisons revealed that attention to the eye region ranked neutral emotions > negative emotions > positive emotions. Besides, although the eyes were obscured under the sunglasses condition, deaf individuals still allocated some attention to the eye region, with the longest fixation times occurring for negative emotions. Examining occlusion within each emotion condition, the simple effects on the percentage of fixation on the eye region were also significant: for negative emotions, F(2,59) = 356.914, p < 0.001, ηp² = 0.924; for neutral emotions, F(2,59) = 358.660, p < 0.001, ηp² = 0.924; and for positive emotions, F(2,59) = 431.512, p < 0.001, ηp² = 0.936. Pairwise comparisons further demonstrated that in all three emotion conditions, participants’ visual attention to the eye region was highest under the mask condition, followed by the original face condition, and lowest under the sunglasses condition.

Figure 5. Mean (± standard error) of the proportion of time spent in the eye area illustrating the occlusion (G, sunglasses; M, mask; O, original face) × emotion (Neg, negative; Neu, neutral; Pos, positive) interaction.
3.2.2 Fixation results of the mouth area
The results showed a significant main effect of group, F(1,59) = 7.788, p = 0.007, ηp² = 0.117, with hearing participants spending significantly more time fixating on the mouth region than deaf participants. The main effect of occlusion was also significant, F(2,118) = 203.839, p < 0.001, ηp² = 0.776, with the longest fixation times observed under the sunglasses condition, followed by the original face condition; interestingly, participants still allocated some visual attention to the mouth region even under the mask condition. The main effect of emotion was significant, F(2,118) = 13.161, p < 0.001, ηp² = 0.182. Post-hoc comparisons revealed that, for all participants, fixation times on the mouth region ranked from longest to shortest as follows: positive emotions, negative emotions, neutral emotions. Regarding interaction effects, the interaction between group and occlusion was significant, F(2,118) = 7.690, p < 0.001, ηp² = 0.115, as was the interaction between occlusion and emotion, F(4,236) = 7.679, p < 0.001, ηp² = 0.115.
As illustrated in Figure 6, simple effects analyses of the interaction between group and occlusion revealed significant group differences in the percentage of time spent fixating on the mouth region under both the sunglasses and original face conditions, F(1,59) = 8.824, p = 0.004, ηp² = 0.130, and F(1,59) = 4.678, p = 0.035, ηp² = 0.073, with deaf participants spending significantly less time fixating on the mouth region than hearing participants. However, under the mask condition, deaf participants spent significantly more time fixating on the mouth region than hearing participants, F(1,59) = 4.922, p = 0.030, ηp² = 0.077. Both deaf participants, F(2,59) = 41.779, p < 0.001, ηp² = 0.586, and hearing participants, F(2,59) = 100.329, p < 0.001, ηp² = 0.773, showed significant differences in fixation times on the mouth region across the three types of stimuli. The fixation times ranked from longest to shortest were: sunglasses condition > original face > mask condition.

Figure 6. Mean (± standard error) of the proportion of time spent in the mouth area illustrating the group (DP, deaf people; HP, hearing people) × occlusion (G, sunglasses; M, mask; O, original face) interaction.
There were significant differences in the amount of time participants spent fixating on the mouth region for the various emotions under different occlusion conditions (see Figure 7). Specifically, the simple effects of emotion were significant under both the sunglasses, F(2,59) = 11.305, p < 0.001, ηp² = 0.277, and original face conditions, F(2,59) = 8.492, p < 0.001, ηp² = 0.224. Post-hoc comparisons showed that, regardless of occlusion condition, participants allocated the most visual attention to the mouth region for positive emotions, followed by negative emotions, and the least for neutral emotions. This indicates that the mouth is a crucial area for recognizing positive emotions. Furthermore, the simple effects of occlusion on the percentage of time spent fixating on the mouth region were also significant across emotion conditions: negative emotions, F(2,59) = 151.549, p < 0.001, ηp² = 0.837; neutral emotions, F(2,59) = 101.421, p < 0.001, ηp² = 0.775; and positive emotions, F(2,59) = 120.698, p < 0.001, ηp² = 0.804. For all three emotion conditions, participants allocated more visual attention to the mouth region under the sunglasses condition than under the original face and mask conditions.

Figure 7. Mean (± standard error) of the proportion of time spent in the mouth area illustrating the occlusion (G, sunglasses; M, mask; O, original face) × emotion (Neg, negative; Neu, neutral; Pos, positive) interaction.
4 Discussion
This study has explored the mechanisms underlying emotion recognition in deaf individuals under different facial occlusion conditions. Behavioral and eye-tracking results reveal the following four key findings: First, both sunglasses and masks significantly impair emotion perception in deaf individuals, though they can enhance it under certain conditions. Second, deaf individuals allocate more attention to the eye region when processing negative emotions compared to positive ones, yet their ability to recognize negative emotions is poorer. Third, masks have a greater impact on emotion recognition than sunglasses, with the mouth region being particularly crucial for identifying positive emotions. Finally, these results suggest that deaf individuals may experience both visual deficits and compensatory enhancements under different conditions, supporting the integration theory of visual function in the deaf population. Overall, these findings highlight the significant challenges faced by deaf individuals in recognizing negative emotions and processing emotion recognition under facial occlusion conditions.
The first finding underscores the significant impact of facial occlusions, particularly masks, on emotion recognition among deaf individuals. Compared to original face conditions, deaf individuals had lower accuracy and slower reaction times in emotion recognition under occlusion conditions. This aligns with previous research emphasizing the unique challenges faced by the deaf community in interpreting emotional cues, especially when visual access to facial features is restricted (Amadeo et al., 2022; Gutierrez-Sigut et al., 2022). However, occluding specific facial regions does not necessarily impair emotion recognition; rather, it may enhance the emotion recognition in certain circumstances. Our results showed that, under mask conditions, deaf individuals spent less time fixating on the eyes when recognizing negative emotions compared to positive ones. This suggests that the occlusion of the mouth region may facilitate the recognition of negative emotions. Covering the mouth may reduce distractions, enabling deaf individuals to more effectively capture and process subtle changes in eye expressions. Correspondingly, behavioral results also showed that under sunglasses conditions, participants recognized positive emotions with the highest accuracy and the fastest reaction times. This indicates that the occlusion of the eyes might reduce visual cognitive load, making the interpretation of mouth-related positive emotions quicker and more accurate for deaf individuals. These findings align with the notion that covering facial regions less relevant to emotion recognition may help filter out irrelevant or misleading information (Leach et al., 2016). It also suggests that facial occlusion may serve as an effective strategy to help deaf individuals more accurately understand others’ emotional states in complex social interactions.
The second finding indicates that the eye region is the most critical area for recognizing negative emotions, and that deaf individuals exhibit a happiness superiority effect. Under both the sunglasses and original face conditions, deaf individuals allocated more attentional resources to the eye region for negative emotions than for positive ones, yet they still recognized positive emotions better than negative ones, suggesting an advantage in recognizing positive emotions. This result replicates the happiness superiority effect (Amadeo et al., 2022), whereby happy faces are typically recognized most quickly in emotion classification tasks (Nummenmaa and Calvo, 2015). This advantage in recognizing positive emotions may be related to a more favorable current social environment. Social improvements in areas such as family-centered care (Hlayisi and Sekoto, 2023), peer support (Wang et al., 2023), and technological support (Chapman et al., 2023) have afforded deaf individuals greater exposure to positive emotional expressions in social interactions. On the other hand, this interpretation also aligns with recent evidence suggesting that deaf individuals may have insufficient social experience with expressing and recognizing negative emotions, which can increase the difficulty of recognizing them (Tsou et al., 2021). Additionally, the lower sensitivity to negative emotions among deaf individuals may also be related to their tendency to avoid conflicts and disputes in social environments (Yuen et al., 2022).
The third finding highlights the crucial role of the mouth in conveying positive emotions (Beaudry et al., 2014; Bombari et al., 2013; Schurgin et al., 2014). However, the occlusion caused by masks severely restricts deaf individuals’ ability to derive emotional information from the mouth area, thereby compelling them to rely more heavily on other facial features for emotion recognition, particularly the eyes. In particular, participants performed worse under mask conditions than under sunglasses conditions. Moreover, the proportion of fixations on the mouth region under mask conditions was significantly lower than the proportion of fixations on the eye region under sunglasses conditions. This suggests that occlusion of the mouth has a greater impact on facial emotion recognition than occlusion of the eyes. Likewise, while core linguistic comprehension remained intact, the ability to attribute emotions and attitudes was compromised when the lower face was obscured (Giovanelli et al., 2023). This may be attributed to the fact that masks cover a larger portion of the face than sunglasses (Carbon, 2020). Under occlusion conditions, hearing individuals can rely on background sounds, voice tones, and intonation to compensate for the lack of facial expressions (Leitzke and Pollak, 2016). However, the deaf individuals in the current study exhibited underdeveloped spoken language abilities due to hearing impairment or the technical shortcomings of hearing devices. They might have difficulty utilizing auditory cues and rely more on visual cues to compensate for communication deficits. Therefore, it is crucial to find effective solutions to address this issue, such as developing transparent masks or enhancing other visual cues.
Taken together, the current findings demonstrate that deaf individuals may encounter visual deficits and compensatory enhancement under different conditions, aligning with the integration theory of visual function in deaf individuals (Dye and Bavelier, 2010). On the one hand, the behavioral and eye movement data indicate that deaf individuals have poorer overall facial emotion recognition abilities than hearing individuals, consistent with previous research indicating that deaf individuals may experience delays in facial emotion recognition (Wang et al., 2011). While deaf individuals had slightly longer total reaction times than hearing individuals, a notable difference was that they also spent less time fixating on both the eye and mouth areas. This suggests that hearing individuals invest more attentional resources in these key facial features, whereas deaf individuals tend to distribute their attention more widely, including to peripheral areas. This may be due to the absence of auditory input, which leads deaf individuals to alter the scope of their visual attention allocation, resulting in a more widespread distribution of visual attentional resources (Smith et al., 1998). Furthermore, deaf individuals are less inclined to adjust their visual attention strategy based on task demands, whereas hearing individuals can flexibly allocate their visual resources according to experimental requirements.
On the other hand, we found that deaf individuals exhibited a compensatory enhancement in their visual perception for emotion recognition. Despite overall lower accuracy rates, deaf individuals had faster reaction times for positive emotions. Although deaf individuals showed more off-target fixations than hearing individuals, their recognition of positive emotions was not affected. This discrepancy might be due to the broader peripheral visual span and higher motion detection ability in deaf individuals (Shiell et al., 2014; Stevens and Neville, 2006), enabling them to detect information from the mouth area through peripheral vision while focusing on the eyes. Early hearing loss results in a lack of auditory stimuli during cortical development, which could lead to a reorganization of other modalities, such as vision. This additional recruitment of auditory cortices by visual stimuli, often referred to as “cross-modal reorganization,” might enhance deaf individuals’ ability to process visual information, potentially conferring a visual advantage in the recognition of certain stimuli (Benetti et al., 2021; Finney et al., 2001; Zhang et al., 2021). In other words, deaf individuals might develop stronger attention and faster visual processing speeds through heightened sensitivity to visual cues, allowing them to quickly identify and react to certain emotional signals as a compensation for their hearing loss.
This study also has several limitations that future research should address. First, the stimuli used in this experiment were static images, which differ from real-life social interactions. Given that some studies suggest deaf individuals may have advantages in dynamic emotion recognition, future research could employ dynamic audiovisual stimuli encompassing a broader range of emotional types to enhance ecological validity and generalizability. Second, our study categorized emotions into three types (positive, neutral, and negative) without further differentiation. Future research should include a wider variety of facial emotion types to thoroughly investigate the impact of facial occlusion on emotion recognition in deaf individuals. Third, the group of deaf participants in the current study was not homogeneous, as differences existed in language modality (sign language vs. oral language). Sign language, as a visual language, conveys speech information through the location and movement of gestures, and the frequency of sign language use can lead to varying patterns of visual attention distribution among deaf individuals (Giovanelli et al., 2023; Wang et al., 2020). Therefore, future studies should carefully control for this factor to more accurately verify the effects of these variables. Finally, facial emotion recognition is not the sole channel for identifying emotions; vocal cues, body language, and social context are also commonly used to analyze and interpret emotions. Future research could develop more effective assistive tools and training methods to enhance emotion recognition abilities across different contexts. One promising option is the use of transparent masks; however, Atcherson et al. (2021) found that transparent masks show greater sound attenuation, resonant peaks, and sound deflection than non-transparent masks. Therefore, innovative solutions and technologies must be explored further to enhance social interactions within the deaf community, thereby improving their quality of life and fostering greater social participation.
5 Conclusion
This study utilized eye-tracking technology to investigate differences in facial emotion perception between deaf and hearing individuals under various facial occlusion conditions. Both behavioral and eye-tracking data reveal that deaf individuals exhibited weaker emotion recognition abilities than hearing individuals across most facial occlusion conditions, but performed better in recognizing positive emotions. This suggests that deaf individuals experience both deficits and compensation in visual function, occurring under different conditions, which supports the integration theory proposed by Dye and Bavelier. Additionally, facial occlusion (e.g., masks and sunglasses) significantly impacts emotion recognition performance in deaf individuals. Future studies should place more emphasis on the role of facial visual cues in emotion perception for deaf individuals and implement effective measures to improve this situation, such as promoting the use of transparent masks, enhancing emotional education for deaf children, and developing assistive technologies based on visual and tactile cues. These efforts could support the mental health and social integration of deaf individuals, contributing to the creation of a more inclusive and accessible social environment.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving humans were approved by Ethical Committee of Tianjin University of Technology. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
YC: Conceptualization, Funding acquisition, Methodology, Supervision, Validation, Writing – review & editing. SC: Formal analysis, Writing – original draft, Writing – review & editing. ZZ: Data curation, Methodology, Writing – original draft. KC: Funding acquisition, Supervision, Validation, Writing – review & editing.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. This work was supported by the National Social Science Fund of P. R. China (grant no. 19AZD037) and the Chongqing Social Science Planning Project (grant no. 2023NDYB167).
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Footnotes
1. ^We thank the reviewer for raising the concern regarding the heterogeneity of the deaf group. Although all deaf participants are prelingually deaf and L1 users of Chinese Sign Language, the relative use of spoken Chinese versus Chinese Sign Language could influence face processing abilities, which should be controlled in future research.
References
Abutalebi, J., Gallo, F., Fedeli, D., Houdayer, E., Zangrillo, F., Emedoli, D., et al. (2024). On the brain struggles to recognize basic facial emotions with face masks: an fMRI study. Front. Psychol. 15:1339592. doi: 10.3389/fpsyg.2024.1339592
Alencar, C. D., Butler, B. E., and Lomber, S. G. (2019). What and how the deaf brain sees. J. Cogn. Neurosci. 31, 1091–1109. doi: 10.1162/jocn_a_01425
Amadeo, M. B., Escelsior, A., Amore, M., Serafini, G., Da Silva, B. P., and Gori, M. (2022). Face masks affect perception of happy faces in deaf people. Sci. Rep. 12:12424. doi: 10.1038/s41598-022-16138-x
Ambert-Dahan, E., Giraud, A., Mecheri, H., Sterkers, O., Mosnier, I., and Samson, S. (2017). Emotional recognition of dynamic facial expressions before and after cochlear implantation in adults with progressive deafness. Hear. Res. 354, 64–72. doi: 10.1016/j.heares.2017.08.007
Atcherson, S. R., McDowell, B. R., and Howard, M. P. (2021). Acoustic effects of non-transparent and transparent face coverings. J. Acoust. Soc. Am. 149, 2249–2254. doi: 10.1121/10.0003962
Beaudry, O., Roy-Charland, A., Perron, M., Cormier, I., and Tapp, R. (2014). Featural processing in recognition of emotional facial expressions. Cognit. Emot. 28, 416–432. doi: 10.1080/02699931.2013.833500
Bell, L., Wagels, L., Neuschaefer-Rube, C., Fels, J., Gur, R. E., and Konrad, K. (2019). The cross-modal effects of sensory deprivation on spatial and temporal processes in vision and audition: a systematic review on behavioral and neuroimaging research since 2000. Neural Plast. 1:9603469. doi: 10.1155/2019/9603469
Benetti, S., Zonca, J., Ferrari, A., Rezk, M., Rabini, G., and Collignon, O. (2021). Visual motion processing recruits regions selective for auditory motion in early deaf individuals. NeuroImage 230:117816. doi: 10.1016/j.neuroimage.2021.117816
Blais, C., Roy, C., Fiset, D., Arguin, M., and Gosselin, F. (2012). The eyes are not the window to basic emotions. Neuropsychologia 50, 2830–2838. doi: 10.1016/j.neuropsychologia.2012.08.010
Bombari, D., Schmid, P. C., Mast, M. S., Birri, S., Mast, F. W., and Lobmaier, J. S. (2013). Emotion recognition: the role of featural and configural face information. Q. J. Exp. Psychol. 66, 2426–2442. doi: 10.1080/17470218.2013.789065
Buckley, D., Codina, C., Bhardwaj, P., and Pascalis, O. (2010). Action video game players and deaf observers have larger Goldmann visual fields. Vis. Res. 50, 548–556. doi: 10.1016/j.visres.2009.11.018
Carbon, C. (2020). Wearing face masks strongly confuses counterparts in reading emotions. Front. Psychol. 11:566886. doi: 10.3389/fpsyg.2020.566886
Chapman, M., Dammeyer, J., Jepsen, K. S. K., and Liebst, L. S. (2023). Deaf identity, social relationships, and social support: toward a microsociological perspective. J. Deaf. Stud. Deaf. Educ. 29, 82–91. doi: 10.1093/deafed/enad019
Dye, M. W. G., and Bavelier, D. (2010). Attentional enhancements and deficits in deaf populations: an integrative review. Restor. Neurol. Neurosci. 28, 181–192. doi: 10.3233/RNN-2010-0501
Esposito, A., Esposito, A. M., Cirillo, I., Panfilo, L., Scibelli, F., Maldonato, M., et al. (2017). Differences between hearing and deaf subjects in decoding foreign emotional faces. In 2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom). (pp. 000175–000180). IEEE.
Faul, F., Erdfelder, E., Buchner, A., and Lang, A. G. (2009). Statistical power analyses using G*power 3.1: tests for correlation and regression analyses. Behav. Res. Methods 41, 1149–1160. doi: 10.3758/BRM.41.4.1149
Finney, E. M., Fine, I., and Dobkins, K. R. (2001). Visual stimuli activate auditory cortex in the deaf. Nat. Neurosci. 4, 1171–1173. doi: 10.1038/nn763
Giovanelli, E., Gianfreda, G., Gessa, E., Valzolgher, C., Lamano, L., Lucioli, T., et al. (2023). The effect of face masks on sign language comprehension: performance and metacognitive dimensions. Conscious. Cogn. 109:103490. doi: 10.1016/j.concog.2023.103490
Gong, X., Huang, Y., Wang, Y., and Luo, Y. (2011). Revision of the Chinese facial expression picture system. Chin. Ment. Health J. 25, 40–46.
Graham, D. L., and Ritchie, K. L. (2019). Making a spectacle of yourself: the effect of glasses and sunglasses on face perception. Perception 48, 461–470. doi: 10.1177/0301006619844680
Grahlow, M., Rupp, C. I., and Derntl, B. (2022). The impact of face masks on emotion recognition performance and perception of threat. PLoS One 17:e0262840. doi: 10.1371/journal.pone.0262840
Gutierrez-Sigut, E., Lamarche, V. M., Rowley, K., Lago, E. F., Pardo-Guijarro, M. J., Saenz, I., et al. (2022). How do face masks impact communication amongst deaf/HoH people? Cogn. Res. Princ. Implic. 7:81. doi: 10.1186/s41235-022-00431-4
Harris, M. B. (1991). Sex differences in stereotypes of spectacles. J. Appl. Soc. Psychol. 21, 1659–1680.
Hlayisi, V., and Sekoto, L. V. (2023). Understanding identity construction among deaf adolescents and young adults: implications for the delivery of person and family-centered care in audiological rehabilitation. Front. Rehab. Sci. 4:1228116. doi: 10.3389/fresc.2023.1228116
Kastendieck, T., Zillmer, S., and Hess, U. (2022). (un)mask yourself! Effects of face masks on facial mimicry and emotion perception during the COVID-19 pandemic. Cognit. Emot. 36, 59–69. doi: 10.1080/02699931.2021.1950639
Kim, G., Seong, S. H., Hong, S., and Choi, E. (2022). Impact of face masks and sunglasses on emotion recognition in south Koreans. PLoS One 17:e0263466. doi: 10.1371/journal.pone.0263466
Kramer, R. S. S., and Ritchie, K. L. (2016). Disguising superman: how glasses affect unfamiliar face matching. Appl. Cogn. Psychol. 30, 841–845. doi: 10.1002/acp.3261
Lau, W. K., Chalupny, J., Grote, K., and Huckauf, A. (2022). How sign language expertise can influence the effects of face masks on non-linguistic characteristics. Cogn. Res. Princ. Implic. 7:53. doi: 10.1186/s41235-022-00405-6
Leach, A., Ammar, N., England, D. N., Remigio, L. M., Kleinberg, B., and Verschuere, B. J. (2016). Less is more? Detecting lies in veiled witnesses. Law Hum. Behav. 40, 401–410. doi: 10.1037/lhb0000189
Leitzke, B. T., and Pollak, S. D. (2016). Developmental changes in the primacy of facial cues for emotion recognition. Dev. Psychol. 52, 572–581. doi: 10.1037/a0040067
Liu, H., and Ge, L. (2014). The impact of facial expression recognition on social interaction ability. Chinese. J. Clin. Psychol. 22, 413–417. doi: 10.16128/j.cnki.1005-3611.2014.03.054
Ludlow, A., Heaton, P., Rosset, D., Hills, P., and Deruelle, C. (2010). Emotion recognition in children with profound and severe deafness: do they have a deficit in perceptual processing? J. Clin. Exp. Neuropsychol. 32, 923–928. doi: 10.1080/13803391003596447
Ma, H., Zeng, T., Jiang, L., Zhang, M., Li, H., Su, R., et al. (2023). Altered resting-state network connectivity patterns for predicting attentional function in deaf individuals: an EEG study. Hear. Res. 429:108696. doi: 10.1016/j.heares.2023.108696
Malak, C., and Yildirim, F. (2022). Masking emotions: How does perceived gaze direction affect emotion recognition in masked faces?–an eye-tracking study [Preprint]. PsyArXiv. Available at: https://osf.io/preprints/psyarxiv/3fd5h_v1
Mansutti, I., Achil, I., Gastaldo, C. R., Pires, C. T., and Palese, A. (2023). Individuals with hearing impairment/deafness during the COVID-19 pandemic: a rapid review on communication challenges and strategies. J. Clin. Nurs. 32, 4454–4472. doi: 10.1111/jocn.16572
Megreya, A. M., and Bindemann, M. (2017). A visual processing advantage for young-adolescent deaf observers: evidence from face and object matching tasks. Sci. Rep. 7:41133. doi: 10.1038/srep41133
Noyes, E., Davis, J. P., Petrov, N., Gray, K. L. H., and Ritchie, K. L. (2021). The effect of face masks and sunglasses on identity and expression recognition with super-recognizers and typical observers. R. Soc. Open Sci. 8:201169. doi: 10.1098/rsos.201169
Nummenmaa, L., and Calvo, M. G. (2015). Dissociation between recognition and detection advantage for facial expressions: a Meta-analysis. Emotion 15, 243–256. doi: 10.1037/emo0000042
Parada-Fernandez, P., Herrero-Fernandez, D., Jorge, R., and Comesana, P. (2022). Wearing mask hinders emotion recognition, but enhances perception of attractiveness. Pers. Individ. Differ. 184:111195. doi: 10.1016/j.paid.2021.111195
Pazhoohi, F., Forby, L., and Kingstone, A. (2021). Facial masks affect emotion recognition in the general population and individuals with autistic traits. PLoS One 16:e0257740. doi: 10.1371/journal.pone.0257740
Prete, G., D'Anselmo, A., and Tommasi, L. (2022). A neural signature of exposure to masked faces after 18 months of COVID-19. Neuropsychologia 174:108334. doi: 10.1016/j.neuropsychologia.2022.108334
Rabadan, V., Ricou, C., Latinus, M., Aguillon-Hernandez, N., and Wardak, C. (2022). Facial mask disturbs ocular exploration but not pupil reactivity. Front. Neurosci. 16:1033243. doi: 10.3389/fnins.2022.1033243
Rodger, H., Lao, J., Stoll, C., Richoz, A., Pascalis, O., Dye, M., et al. (2021). The recognition of facial expressions of emotion in deaf and hearing individuals. Heliyon 7:e07018. doi: 10.1016/j.heliyon.2021.e07018
Ruba, A., and Pollak, S. (2020). Children’s emotion inferences from masked faces: implications for social interactions during COVID-19. PLoS One 15:e0243708. doi: 10.1371/journal.pone.0243708
Schurgin, M. W., Nelson, J., Iida, S., Ohira, H., Chiao, J. Y., and Franconeri, S. L. (2014). Eye movements during emotion recognition in faces. J. Vis. 14:14. doi: 10.1167/14.13.14
Shiell, M. M., Champoux, F., and Zatorre, R. J. (2014). Enhancement of visual motion detection thresholds in early deaf people. PLoS One 9:e90498. doi: 10.1371/journal.pone.0090498
Smith, L. B., Quittner, A. L., Osberger, M. J., and Miyamoto, R. (1998). Audition and visual attention: the developmental trajectory in deaf and hearing populations. Dev. Psychol. 34, 840–850.
Stevens, C., and Neville, H. (2006). Neuroplasticity as a double-edged sword: deaf enhancements and dyslexic deficits in motion processing. J. Cogn. Neurosci. 18, 701–714. doi: 10.1162/jocn.2006.18.5.701
Teresa Daza, M., and Phillips-Silver, J. (2013). Development of attention networks in deaf children: support for the integrative hypothesis. Res. Dev. Disabil. 34, 2661–2668. doi: 10.1016/j.ridd.2013.05.012
Tsou, Y., Li, B., Eichengreen, A., Frijns, J. H. M., and Rieffe, C. (2021). Emotions in deaf and hard-of-hearing and typically hearing children. J. Deaf. Stud. Deaf. Educ. 26, 469–482. doi: 10.1093/deafed/enab022
Wang, C., Fu, W., Geng, K., and Wang, Y. (2023). The relationship between deaf adolescents' empathy and subjective well-being in China during COVID-19 pandemic: the inconsistent role of peer support and teacher support. Child Indic. Res. 16, 1913–1940. doi: 10.1007/s12187-023-10046-w
Wang, Y., Su, Y., Fang, P., and Zhou, Q. (2011). Facial expression recognition: can preschoolers with cochlear implants and hearing aids catch it? Res. Dev. Disabil. 32, 2583–2588. doi: 10.1016/j.ridd.2011.06.019
Wang, J., Zhu, Y., Chen, Y., Mamat, A., Yu, M., Zhang, J., et al. (2020). An eye-tracking study on audiovisual speech perception strategies adopted by Normal-hearing and deaf adults under different language familiarities. J. Speech Lang. Hear. Res. 63, 2245–2254. doi: 10.1044/2020_JSLHR-19-00223
Yuen, S., Li, B., Tsou, Y., Meng, Q., Wang, L., Liang, W., et al. (2022). Family systems and emotional functioning in deaf or hard-of-hearing preschool children. J. Deaf. Stud. Deaf. Educ. 27, 125–136. doi: 10.1093/deafed/enab044
Keywords: deaf individuals, emotion recognition, occlusion, integrative hypothesis of visual function, eye-tracking
Citation: Chen Y, Cao S, Zhou Z and Cheng K (2025) Seeing emotions: an eye-tracking study of emotion recognition in deaf individuals amid facial occlusions. Front. Psychol. 16:1496259. doi: 10.3389/fpsyg.2025.1496259
Edited by:
Alessia Celeghin, University of Turin, Italy
Reviewed by:
Giulia Prete, University of Studies G. d’Annunzio Chieti and Pescara, Italy
Maria Arioli, University of Bergamo, Italy
Copyright © 2025 Chen, Cao, Zhou and Cheng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Kaiwen Cheng, kevincheng@sisu.edu.cn