Original Research Article
PERVALE-S: a new cognitive task to assess deaf people’s ability to perceive basic and social emotions
- 1Laboratorio de Inteligencia Emocional, Departamento de Psicología, Universidad de Cádiz, Cadiz, Spain
- 2Centro de Educación Especial para Sordos, Junta de Andalucía, Jerez de la Frontera, Spain
A poorly understood aspect of deaf people (DP) is how they process emotional information. Verbal ability is key to improving emotional knowledge, yet DP cannot distinguish the intonation, intensity, and rhythm of spoken language due to their lack of hearing. Some DP have acquired both lip-reading abilities and sign language, but others have developed only sign language. PERVALE-S was developed to assess the ability of DP to perceive both basic and social emotions. It presents different sets of visual stimuli of a real deaf person expressing both basic and social emotions, according to the normative standard of emotional expressions in Spanish Sign Language. Emotional expression stimuli were presented at two levels of intensity (1: low; 2: high) because DP do not perceive objects in the same way as hearing people (HP) do. Participants then had to click on the most suitable emotional expression. PERVALE-S contains video instructions (given by a sign language interpreter) to improve DP's understanding of how to use the software; DP had to watch these videos before answering the items. To test PERVALE-S, a sample of 56 individuals was recruited (18 signers, 8 lip-readers, and 30 HP). Participants also completed an adapted personality test (High School Personality Questionnaire) and a fluid intelligence (Gf) measure (Raven's Standard Progressive Matrices). Moreover, all deaf participants were rated by four teachers for the deaf. Results: there were no significant differences between deaf and hearing participants in PERVALE-S performance. Confusion matrices revealed that embarrassment, envy, and jealousy were perceived least accurately. Age was related only to the social emotion tasks (not to the basic emotion tasks). Emotional perception ability was related mainly to warmth and conscientiousness, and negatively related to tension, while Gf was related only to the social emotion tasks. There were no gender differences.
Perceiving emotions is an important ability in building emotional intelligence (EI). As has been stated many times, the first branch of the 1997 Mayer and Salovey EI model is defined as perceiving emotions accurately in oneself and others (Mayer et al., 2004). Perception is a cognitive process, which has traditionally been divided into two interdependent directions: top–down and bottom–up (Kosslyn and Smith, 2006). According to Galotti (2008), data-driven or bottom–up processing occurs when an interpretation emerges from the data. Perceiving emotional expressions "accurately" must thus be largely data-driven, because it should reflect precision in the interpersonal relationships where emotions play an important role (Roberts et al., 2006; MacCann and Roberts, 2008; Grunes et al., 2013). In bottom–up processing of emotional stimuli, the interpretation of an emotional expression scene needs to be determined mostly by information from the senses rather than by expectations. Nevertheless, in many situations, knowledge or expectations are involved in emotional perception; this process is called schema-driven or top–down processing. Top–down perception processes encompass the mental abilities that bring both observation and external stimuli under a priori concepts during exploratory understanding (Goldstein, 2008). As Bruner (1973) summarized, people constantly perceive "beyond the information given," adding assumptions and supplemental information derived from past experience to the evidence of the senses in order to understand the emotional world. Accordingly, the accurate perception of emotions should encompass both top–down and bottom–up processes.
To understand how deaf people (DP) perceive emotions, it is necessary to develop an instrument that considers how DP process information (input) from emotional stimuli. While hearing people (HP) perceive emotions through different channels (iconic and echoic sense organs), DP rely mostly on iconic emotional inputs. Moreover, there are differences in the perception of iconic emotional inputs between DP and HP. Strong evidence of this difference comes from the activation of the neural circuits for recognizing emotions. Signers (DP) have to identify other factors in facial expressions (beyond the emotion itself) to compensate for their hearing impairment. Indeed, HP activate the right superior temporal sulcus (STS), while DP show bilateral STS activation, during emotional perception tasks (McCullough et al., 2005). Such an instrument should distinguish both basic and social emotional expressions according to Spanish codes for DP, as we explain in the Section "Materials and Methods."
The distinction between basic and social emotions has been based on the remarkably consistent findings on facial expressions provided by two experienced research teams, led by Ekman and Izard (see Ekman, 2006, for a deeper explanation). According to Ekman (1992), basic emotions have nine characteristics that distinguish them from other emotions: (1) a distinctive universal signal; (2) a distinctive physiology; (3) universals in antecedent events; (4) presence in primates; (5) coherence among emotional responses; (6) a quick onset; (7) a short duration; (8) automatic appraisal; and (9) unexpected occurrence. Social emotions, in contrast, are defined as affective states that depend on the social context and arise when people interact with each other, and they are related to the self as well (Lamm and Singer, 2010).
As emotion perception in the brain differs between DP and HP, the cognitive process involved (top–down or bottom–up) could differ as well. For example, Rieffe et al. (2003) indicated that deaf children understand the emergence of emotions differently from their hearing peers. The authors proposed that hearing children are more interested in why an emotion arose, while deaf children seemed more attentive to achieving a desired emotional state, without reasoning much about why that state occurred (Rieffe et al., 2003). Another study investigated how DP processed a particular story. The authors found that DP retained fewer details than hearing peers, and that DP tended to make interpretations different from those of HP (Cambra et al., 2010). Meanwhile, Dyck and Denver (2003) and Dyck et al. (2004) found differences between DP and HP in the processing of iconic emotional information: DP showed deficits on every single scale in comparison with HP, even when compared within the same age intervals (Dyck and Denver, 2003; Dyck et al., 2004). Thus, iconic perception is critical to understanding how DP perceive their environments (Ziv et al., 2013). Moreover, the development of social emotions involves more cognitive effort and time than that of basic emotions (Arsenio, 2003).
Evidence of how DP and HP differ in the iconic perception of emotions comes from a study by Letourneau and Mitchell (2011). The authors tested 12 HP and 12 DP who were beginners in American Sign Language to examine whether specialized experience can alter typically observed gaze patterns. Participants had to "judge the emotion and identity of expressive faces (including whole faces, and isolated top and bottom halves), while accuracy and fixations were recorded" (Letourneau and Mitchell, 2011, p. 563). All participants recognized faces more accurately from top than from bottom halves, and emotional expressions more accurately from bottom than from top halves. HP paid more attention to the bottom half when they had to evaluate an emotion. In contrast, DP fixated equally on the top and bottom halves regardless of task demands (identity or emotion). The authors suggested that DP could thereby maximize their ability to gather information from expressive faces.
Another important deficit in emotional perception ability (EPA) in DP concerns prosody, or intonation (Most et al., 1999). Meaningful emotional information is conveyed in prosody, but DP are unable to access it. However, what happens when DP learn to speak? One way is through cochlear implants. Wiefferink et al. (2013) compared the performance of children with cochlear implants and hearing children on emotion recognition tasks. They found that (a) children with cochlear implants were less competent on the emotion recognition tasks than their hearing peers, and (b) despite the implants, some aspects of emotion recognition remained affected in these children, including their ability to understand emotions conveyed non-verbally. A possible explanation is that children with cochlear implants may not have effectively developed prosody. Nonetheless, these children were evidently able to understand language, albeit with "metallic" sounds (Kermit, 2009).
Another way for DP to learn to speak is through lip-reading and spoken language. A few years ago, several European countries (including Spain) passed laws promoting the development of language abilities in DP, especially bilingualism (sign language and oral language). Lip-reading and oral language can be learned simultaneously or successively (Acosta-Rodríguez, 2003). Being able to lip-read could be more advantageous than only signing. Ludlow et al. (2010) showed that deaf children who only signed had problems identifying emotions, and that this inability affected their social skills negatively. Both the delay in language acquisition and/or the absence of oral language leave DP with fewer opportunities for talking about their personal experiences, and therefore fewer chances to develop normal emotional perception. Indeed, earlier acquisition of language in DP (signing and/or lip-reading) improved their scores on Theory of Mind (ToM) tasks (a top–down form of perception) interpreting what emotions others were experiencing (Meristo et al., 2007; Glickman, 2009; Morgan et al., 2014). However, other studies found no differences between DP and HP in emotional, social, and communicative development (Masataka, 1996; Peterson and Slaughter, 2006; Meronen and Ahonen, 2008).
The relationship between EPA and social adaptation has been described many times (see Kumschick et al., 2014, for instance). Hosie et al. (2000) examined how hearing and deaf children understand display rules, and how they express and conceal emotions. Regarding display-rule knowledge, the authors found no differences between deaf children and their hearing peers. However, deaf children reported having difficulties concealing happiness and anger. The difficulty in concealing these two emotions might disadvantage deaf children in social situations where both emotions need to be regulated properly. An interesting study in which parents of deaf and hearing children rated their children's academic and social functioning found that hearing children and their parents rated the children's friendships more positively than did deaf children and their parents. However, deaf children and deaf parents rated the children's social skills more positively. The authors suggested that, as these children used sign language at home, the social skills learned at home generalized to the school context (Marschark et al., 2012). Similar findings have been observed in other countries, such as Argentina (Ipiña et al., 2010) and Brazil (Prietch and Filgueiras, 2013).
The next point to discuss is the theoretical framework for studying emotion perception in DP and how to measure it. Many emotional perception measures have been developed and validated based on emotional knowledge (EK), especially tools intended for use with children under social adaptation criteria (Denham, 1986; Morris et al., 2013; Mestre et al., 2014). EK is a theoretical construct that mainly includes recognizing and labeling expressions of emotions and understanding their course from causes to consequences (Morgan et al., 2009). However, some of the reviewed studies on the perception of emotions in DP (especially children) used ToM tasks as criteria (as autism studies do), rather than social adaptation criteria. Dyck and Denver (2003) pointed out that children with language impairment have difficulties, among others, in: (a) second-order ToM tasks (understanding false beliefs; these do not appear until children are older, at around 3–5 years of age; see Hughes and Leekam, 2004, for a review); (b) recognizing non-verbal emotional expressions; and (c) matching facial expressions (especially with emotional intonation, given the hearing impairment).
In contrast, EK includes a set of factors interrelated with emotional perception, such as age, gender, and verbal intelligence, reflecting the understanding that at certain ages cognitive and emotional development are mutually dependent (Morris et al., 2013). This is precisely the problem in assessing verbal intelligence. A tool built to assess EPA (or EK) in DP should encompass the entire EK construct, so attention must be paid to the special characteristics discussed above. Most existing instruments assume that DP can understand written language; in fact, however, there is a high illiteracy rate among the deaf population (Massone and Baez, 2009). Could a fluid intelligence (Gf) test be used instead of a verbal one? Using a Gf test would also allow us to examine the role of Gf in emotional perception. Another question is the appropriate age at which to administer the tool; to test our tool, we used a sample ranging in age from 12 to 30 years, because we were interested in identifying standard performance rather than in how deaf children would score.
Another factor often ignored in emotional perception among DP is personality. In principle, DP should not have personality traits different from those of HP. However, HP tend to misunderstand this reality and to attribute traits such as stubbornness or distrust to the deaf (Rodríguez-Ortiz, 2005). In line with this idea, some studies have pointed out that emotional perception (measured as part of the MSCEIT; Mayer et al., 2004) has a low-to-moderate relationship with some personality traits, especially Neuroticism (negatively) and Openness and Conscientiousness (both positively; see Mayer et al., 1990; Day and Carroll, 2004; Lopes et al., 2012). Other studies have shown that personality traits and mood states are involved in emotional perception processing (for instance, Rusting, 1998): some personality traits foster certain mood states that then influence emotional perception. For example, Knyazev et al. (2008) found that personality could systematically influence how people perceive the facial expressions of others. They pointed out that some traits, such as agreeableness and conscientiousness, predisposed people to perceive faces in a friendly way, whereas anxiety and aggressiveness led people to overrate or misread the intentions of others.
Finally, an emotional perception instrument for DP should consider how emotional stimuli are to be presented. Regarding emotional facial expressions, Fernandez-Dols (2013) strongly recommended restoring "a balance between the top–down and bottom–up strategies" for displaying emotional expression stimuli. An emotional expression "is a continuous flow of muscular movements from bodies moving in a three-dimensional world which produces events with flexible and context-dependent meanings" (Fernandez-Dols, 2013, p. 6). Following this advice (and that of others: Elfenbein, 2013; Hassin et al., 2013), we included videos of emotional expressions. The videos used as emotional stimuli featured a lip-reading deaf person.
To address the issues explained above, we designed a pilot study to assess DP's EPA using software created for DP. Following the suggestions of Leon et al. (2011) on how to proceed in pilot studies, we preferred to state objectives rather than research questions and hypotheses. There are several purposes for conducting a pilot study, one of which is to assess procedures. Another relevant point is that a "pilot study does not provide a meaningful effect size estimate for planning subsequent studies due to the imprecision inherent in data from small samples" (Leon et al., 2011, p. 626).
The objectives of the pilot study were:
(a) To analyze the emotional perception achievement among groups (signers, lip-reading deaf, and hearing), which involves making a confusion matrix1 of both basic and social emotions, according to group performance;
(b) To check whether the instrument fits the EK construct (it should be related to age and intelligence; gender should not be involved, given the sample's age range); and
(c) Based on EPA and personality studies, we expected to find:
(c.1) Regarding personality: positive relationships of EPA with verbal intelligence and some personality traits (conscientiousness and warmth), but negative relationships between EPA and the following traits: tension, excitability, dominance, apprehension, and self-sufficiency.
(c.2) Regarding adaptation criteria: positive relationships between EPA and all adaptation criteria, except unrest, for which a negative relationship was expected.
Materials and Methods
A total of 56 individuals were recruited and classified according to the language used (signers, lip-reading DP, and HP). Participants were required to have normal or corrected-to-normal vision and no learning disabilities (including illiteracy). We decided not to include children with cochlear implants because most of them were under 12 years old. All parents of participants under 18 years old signed a consent form for their children's inclusion in the pilot study. Deaf participants (n = 26) took part in the research under the supervision of the director of the Centro de Educación Especial para Sordos (Special Education Center for the Deaf), Jerez de la Frontera (southern Spain). Hearing participants were recruited from the same geographical area and had characteristics similar to those of the deaf sample (age range: 12–30 years; 60% male). We decided not to include two deaf participants because their parents were also deaf. Participants received a brief report of their outcomes. Table 1 summarizes the demographic characteristics of the participants.
The ethics committee of the deaf school (November 25, 2013) approved the pilot study after we presented the project to them. All participants and parents of under-18 children signed both an agreement of collaboration and a consent to participate in this research (December 30, 2013).
PERVALE-S (Test de Tareas Cognitivas de Percepción y Valoración de Emociones en Sordos; Test of Cognitive Tasks for Perceiving and Valuing Emotions in the Deaf; Herrero et al., 2009).
PERVALE-S is an improved version of software specially designed to assess both basic and social EPA in DP. The instrument presents both basic and social emotional expressions. Although there is no unanimity, we considered the following to be basic emotions: fear, joy, sadness, disgust, anger, and surprise (the last being especially controversial). As social emotions, we decided to include anxiety, jealousy, envy, and embarrassment. However, it is necessary to explain briefly how jealousy and envy differ (we did not include guilt, so embarrassment should not be confused with guilt), and the two emotions also have different expressions in the Spanish sign code for DP. According to Salovey and Rodin (1988), jealousy and envy could differ quantitatively, besides other semantic considerations. However, other authors have demonstrated that the two emotions differ qualitatively. Parrott and Smith (1993) introduced new methodologies to clarify how the two emotions differ. In their experiments, they pointed out that envy was "characterized by feelings of inferiority, longing, resentment, and disapproval of the emotion," whereas jealousy "was characterized by fear of loss, distrust, anxiety, and anger" (Parrott and Smith, 1993, p. 906). The social emotion items of the instrument display distinct stimuli based on the Spanish-deaf expressions for the two emotions.
The answer scale of the first version of the instrument had to be changed from five levels to three (1: a little; 2: a lot; and 3: without emotion), because DP had problems differentiating more than three levels of intensity of an emotional expression stimulus. In the previous study, we also discovered that DP had difficulties discriminating between adverbs of similar frequency (e.g., the difference in meaning between "very" and "quite a bit" was not apparent to the deaf sample). Another important difference between PERVALE-S and a regular EK tool is the stimuli presented. To identify the expressed emotion more accurately, DP need to watch the upper body (especially arm movements) rather than just the facial expression. Thus, the emotional stimuli presented should include "simultaneous or successive facial movements linked to affective reactions"—involving mostly bottom–up perception processes—and "appraisals, social motives, or strategies of regulation, but also to cognitive processes or cultural conventions"—involving especially top–down perception processes (Fernandez-Dols and Crivelli, 2013, p. 27). Showing just the face might not be enough for DP: emotional expression stimuli for the deaf must show moving arms and faces, all in a videotaped emotional expression (Herrero et al., 2009).
The final version has several videos inserted into the program interface. There are three types of videos (see Figure 1): (A) an instruction video at the top left of the interface, with simultaneous oral and Spanish Sign Language (SSL) explanations of how to use the program. In this video, the instructor asks participants to view all stimuli before starting to answer, because there are two stimuli (with low and high emotional expression intensity) for each basic emotion (fear, sadness, surprise, anger, joy, and disgust) and for each social emotion (anxiety, jealousy, envy, and embarrassment). In addition, there is one stimulus without emotional expression in each of the basic and social emotion sections. The correct answer for each item was obtained via consensus among six deaf sign language interpreters. The basic emotion section contains 13 items and the social emotion section contains nine; (B) a central, larger video with the stimulus to be answered below it; and (C) a smaller video, presented when the yellow circle is clicked, in which an interpreter explains what the displayed emotion is (using SSL, for users who want further information about the emotion label). If the participant's answer matches both the emotion and the intensity, s/he obtains one point; if the answer matches the emotion but not the intensity, it is scored 0.5 points. The neutral item is scored 1.0 if the participant gives the correct answer (no emotion shown) and 0.0 for any other answer. Therefore, possible scores range from 0 to 13 points in the basic emotion section and from 0 to 9 points in the social emotion section, for a maximum total score of 22.0 points. Outcomes are presented as percentages to facilitate interpretation. Finally, the software generates an Excel file with the answers given by the participants.
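The scoring rule described above can be sketched in a few lines of code. This is an illustrative reconstruction only, not the actual PERVALE-S software; the function and variable names are hypothetical.

```python
# Illustrative sketch of the PERVALE-S scoring rule: full credit for
# matching emotion and intensity, half credit for the emotion alone,
# and all-or-nothing scoring for the neutral item.

def score_item(answer_emotion, answer_intensity,
               key_emotion, key_intensity, neutral=False):
    """Return 1.0, 0.5, or 0.0 for a single item."""
    if neutral:
        return 1.0 if answer_emotion == "no emotion" else 0.0
    if answer_emotion == key_emotion:
        return 1.0 if answer_intensity == key_intensity else 0.5
    return 0.0

def section_percentage(item_scores, max_points):
    """Report a section score as a percentage of its maximum
    (13 points for basic emotions, 9 for social emotions)."""
    return 100.0 * sum(item_scores) / max_points
```

A participant who matched every basic emotion but missed every intensity would thus score 50% on that section.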
FIGURE 1. A sample item for both basic and social emotional expression. Both examples have been translated into English.
The internal consistency obtained was moderately high: Cronbach's alpha was 0.73 for the total scale, 0.69 for the basic emotion scale, and 0.68 for the social emotion scale. However, Zumbo et al. (2007) recommended omega coefficients based on polychoric correlations as a better reliability estimator for items of a categorical nature; omega values can be interpreted similarly to alpha coefficients. Following this suggestion, the calculated PERVALE-S omega values were 0.78 for the total scale, 0.75 for the basic emotion section, and 0.73 for the social emotion section. The intraclass correlation among expert referees (n = 6) for the software was 0.89 (p < 0.001), indicating a high level of agreement on each instrument item. Lastly, the correlations between the sections and the total PERVALE-S score were calculated: r = 0.715 for the basic emotion section and r = 0.88 for the social emotion section (both p < 0.001).
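For reference, the Cronbach's alpha coefficient reported above follows the classical formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below is a generic illustration of that formula, not the study's actual analysis code, and any data passed in would be hypothetical.

```python
# Generic Cronbach's alpha: `items` is a list of per-item score lists,
# each of equal length (one entry per participant). Uses population
# variances, matching the classical formula.
from statistics import pvariance

def cronbach_alpha(items):
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    # Total score per participant across all items.
    totals = [sum(participant) for participant in zip(*items)]
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))
```

Note that this is the raw-score alpha; the omega coefficients recommended by Zumbo et al. (2007) additionally require a polychoric correlation matrix and a factor model, which is beyond this sketch.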
Fluid Intelligence (Gf)
Raven's Standard Progressive Matrices test (RSPM; Raven et al., 1993) was administered to the participants. The RSPM comprises 60 problems divided into five sets (A–E) of increasing difficulty; each set starts with easy problems and ends with more difficult ones. Each item contains a matrix of geometric designs with one cell removed, and six or eight alternatives are given to fill the missing cell, only one of which fits correctly. All participants were tested individually, without a time limit, and the RSPM instructions were given to DP in SSL by an interpreter. We used the achievement rate [(number of correct answers/60) × 100] as an index; errors were not discounted, as they are in some other intelligence tests. The RSPM showed good internal consistency in this study (Cronbach's alpha = 0.872).
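The achievement-rate index above is simple arithmetic; as a minimal sketch (the function name is hypothetical):

```python
def achievement_rate(n_correct, n_items=60):
    """RSPM index used in the study: percentage of correct answers
    out of 60, with no penalty for errors."""
    return 100.0 * n_correct / n_items
```

For example, 45 correct answers yield an index of 75.0.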
Personality

Because the participants were aged 12 years and older, we administered a Spanish adaptation of the HSPQ (High School Personality Questionnaire; Cattell and Cattell, 1995). The questionnaire also contained videos providing instructions on how to answer the test and explanations of each of the 14 HSPQ scales. All deaf participants were instructed to watch the videos before answering the HSPQ; HP received a brief verbal explanation of how to proceed. In line with our experience with frequency adverbs, explained above, we used an answer scale with three levels (1: a little; 2: medium; and 3: much). The test assesses 14 personality factors, which are summarized in Table 2.
Adaptation School Criteria for Deaf
We designed a questionnaire for the educators at the Special Education Center for the Deaf, which all the DP in our study attended or had attended. We asked four educators at the school to rate each deaf participant from 1 ("nothing related to him/her") to 6 ("completely related to him/her"). These professionals (teachers and counselors) had known each DP in our study for at least 2 years. Five questions were answered independently by each professional, concerning: (1) adaptation to school rules; (2) impulsiveness control; (3) academic achievement; (4) peer acceptance; and (5) degree of conflict with others. The intraclass correlations among the educators for each question were 0.92, 0.91, 0.84, 0.87, and 0.88, respectively, indicating a high degree of agreement among raters.
All measures were administered in the following order: HSPQ, RSPM, and PERVALE-S. Signers and lip-reading DP below 17 years old completed all the tests at the Center for Deaf Education (Jerez, Spain); HP and DP aged 18 and above completed them at the Emotional Intelligence Lab of the University of Cadiz (Puerto Real, Spain). An interpreter was always present with the deaf sample, even when her help was not required. All measures were administered individually. HP completed the same version of each measure, preceded by a verbal explanation. Participants received a brief report of their scores on Gf and emotional perception.
Emotional Perception Achievement Among Groups
Our primary objective was to investigate whether DP perform worse than HP using an appropriate emotional perception tool developed for DP. Table 3 shows the scores obtained in each section of the PERVALE-S by linguistic group.
Lip-reading DP performed worse, especially in the social emotion section. Figure 2 provides extra information about performance by PERVALE-S section and linguistic group.
However, the non-parametric equivalent of the ANOVA, the Kruskal–Wallis test, did not reveal any significant differences among the three groups: (a) total score, χ2 = 3.81 (p = 0.15); (b) basic emotion section, χ2 = 1.09 (p = 0.58); and (c) social emotion section, χ2 = 3.56 (p = 0.17). We were also interested in displaying both basic and social confusion matrices by linguistic group. Given the results of the Kruskal–Wallis test, we present both matrices for the entire sample, rather than by linguistic group (see Tables 4 and 5).
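The Kruskal–Wallis H statistic used here is the rank-based analog of the one-way ANOVA F statistic. The sketch below shows the textbook computation; it is illustrative only (in practice one would also obtain p-values from the chi-squared distribution with k - 1 degrees of freedom, e.g. via a statistics package), and the input data are hypothetical, not the study's scores.

```python
# Textbook Kruskal-Wallis H statistic: pool all scores, rank them
# (averaging ranks for ties), then compare group rank sums.

def kruskal_wallis_h(groups):
    """groups: list of lists of scores, one list per group."""
    pooled = sorted((value, gi) for gi, g in enumerate(groups)
                    for value in g)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n_total:
        # Find the run of tied values and assign the average rank.
        j = i
        while j < n_total and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0  # average of 1-based ranks i+1..j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return (12.0 / (n_total * (n_total + 1))
            * sum(rs ** 2 / len(g) for rs, g in zip(rank_sums, groups))
            - 3 * (n_total + 1))
```

With two groups of two completely separated scores, for example `[[1, 2], [3, 4]]`, this yields H = 2.4.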
In order to identify the influences of age and Gf, we also analyzed their relationship with PERVALE-S scores. For signers (n = 18; mean age = 21.06, SD = 6.31), the age by task section correlations were rage-basic = 0.25 (p > 0.05), rage-social = 0.48 (p < 0.01); and the age with Gf (M = 76.11, SD = 20.46) correlation was r = 0.69 (p < 0.001). For lip-readers (n = 8; mean age = 16.86, SD = 5.73), the age by task section correlations were rage-basic = -0.09 (p > 0.05), rage-social = 0.61 (p < 0.001), and the age-Gf correlation was r = 0.08 (p > 0.05). Finally, for HP (n = 30; mean age = 23.4, SD = 4.48), the age by task section correlations were rage-basic = 0.01 (p > 0.05), and rage-social = 0.25 (p > 0.05).
Regarding the main errors by language code, we found different error percentages within the signer group when identifying emotions: joy, fear, surprise, and "no emotion" (22.2% of the signer sample each), and jealousy and envy (38.9%). Different percentages emerged among lip-readers: joy and surprise (42.9%), fear (57.1%), anxiety (28.6%), and envy, jealousy, and embarrassment (57.1%). The percentages among HP were 40% for fear, 16.7% for surprise and sadness, and 46.7% for jealousy and envy. Post hoc Kruskal–Wallis analyses confirmed significant differences between groups on the following items: joy-1 (p = 0.015), anger-2 (p = 0.02), and no emotion in the basic emotion section (p = 0.03).
Next, we checked whether PERVALE-S fits the EK construct and whether EPA is related to certain personality traits. Table 6 shows the correlations between the PERVALE-S sections and age, sex, Gf, and personality traits.
TABLE 6. Correlations between PERVALE-S and age, sex, fluid intelligence (Gf), and HSPQ measures (N = 55, after eliminating case 41).
Our last objective was to test the expected relationship between PERVALE-S sections and adaptation school criteria. Table 7 shows the rho correlations.
The results suggest that personality and Gf are not part of the EK construct. Gf was not related to any adaptation criterion. Among personality traits, we found some significant relationships: adaptation to school rules and impulsiveness control were negatively related to tension (Q4), rtension-adaptation = -0.50, rtension-impulsiveness = 0.62, p < 0.05. Academic achievement was positively related to sensitivity (I), r = 0.54, p < 0.01, and negatively related to apprehension, r = -0.48, self-sufficiency, r = -0.50, and tension (Q4), r = -0.43 (the last three p < 0.05). Peer acceptance was positively related to sensitivity (I), r = 0.42, p < 0.05. Unrest was positively related to dominance (E), r = 0.44, and tension (Q4), r = 0.51 (both p < 0.05).
However, Mann–Whitney analyses did not show any significant gender differences in any of the ratings.
This pilot study had some difficulties recruiting lip-readers, because most of them did not belong to the deaf school anymore. In addition, we could recruit only 18 signers for the pilot study; only-signer DP are increasingly uncommon now due to the advent of cochlear implants and the inclusion of verbal language education in the school curriculum. However, we decided not to include DP with cochlear implants because most of them were below 12 years old. Finally, we included a sample of HP to compare their performance with the deaf sample.
Testing PERVALE-S among Linguistic Groups
The main goal of the pilot study was to test the new version of PERVALE-S. Generally, participants performed better in the basic emotion section than in the social one. Emotional development studies have pointed out that social emotion development takes more cognitive effort and time than basic emotions (Arsenio, 2003). Indeed, EPA—the first branch of the EI ability model (Mayer and Salovey, 2007)—forms the basis on which other branches (using, understanding, and managing) grow. The ability to perceive basic emotions (matching the correct emotion) takes about 6–8 years to develop, while social emotions require about 12 years (Zeidner et al., 2003; Mestre et al., 2007). This corroborates our finding that age correlated with the social emotion section but not with the basic emotion section (r = 0.447, p < 0.01).
Regarding the first objective of this pilot study, the correct-answer rate in each section was around 70%, an appropriate score for future standards, except among lip-readers in the social emotion section, who were also younger than the other participants. Even so, the non-parametric ANOVA did not reveal any differences between linguistic groups. However, analyzing the influence of age by group revealed some relationships, especially among signers, for whom age was related to both sections. In the lip-reading group, age was related only to the social emotion section; both age and sample size probably compromised this finding in the lip-readers group. No age-by-section relationships were found in the hearing group. Cautiously, our results are comparable to those of previous studies (Dyck and Denver, 2003; González et al., 2011). Thus, DP probably need more time than HP to identify the emotion expressed, perhaps due to fewer opportunities to gain experience in matching emotional expressions (Alegria and Lechat, 2005). A complementary explanation is the lack of auditory input: DP need to compensate for their hearing loss to enhance the accuracy of their emotional perception, and for this purpose they need more time to reach similar performance standards (Letourneau and Mitchell, 2011). Gf could be a mediating variable in this relationship, acting as a catalyst for emotional learning (Farrelly and Austin, 2007; Belke et al., 2008; Ono et al., 2011).
There is an interesting theoretical debate about the facial expression of emotion and the meaning of motor expression (Nelson and Russell, 2013). However, it is also necessary to identify the underlying determinants and production mechanisms in order to fully capture the nature of this process, and to choose fundamental assumptions and predictions regarding the patterning of facial expressions for different emotions (Scherer et al., 2013). The PERVALE-S items were based on the idiosyncratic emotional expressions of deaf people from the south of Spain. For this purpose, we worked with six expert deaf interpreters and confirmed that they agreed highly on the right answer for each item (intraclass correlation = 0.89, p < 0.001). We also report the advice given by the interpreters about this idiosyncratic emotional culture in DP (Lederberg et al., 2013); thus, the PERVALE-S items might not be appropriate for other cultures, although HP performed similarly to DP.
The confusion matrices showed that many mistakes were made in the social emotion section. In the basic emotion section, the deaf participants had difficulty identifying joyfulness (42.9% of lip-readers and 22.2% of signers), which is unusual, since it is the easiest emotion for children to identify (Mestre et al., 2011). However, difficulty identifying surprise and fear is normal (57.1% for fear and 42.9% for surprise among lip-readers; Ekman, 1999; Jessen et al., 2012). HP erred mainly on fear when it was expressed with low intensity (40%). The three groups made similar mistakes in the social emotion section. Jealousy and envy were the least likely to be identified correctly (error rates from 36.7 to 57.1%). This outcome is interesting because it bolsters an idea pointed out by Salovey and Rodin (1988), who reported that jealousy seemed more intense than envy and that both emotions seemed to be experienced in practically the same way; hence, the difference is quantitative rather than qualitative (for more information, see Salovey, 1991). In other words, jealousy and envy require considerable cognitive development to be perceived correctly (Oatley and Johnson-Laird, 2014). Indeed, the relationship found between Gf and the social emotion section in this study (r = 0.28, p < 0.05) hints at a mediating role of intelligence in the perception of social or complex emotions in DP.
Does PERVALE-S Fit into the EK Construct?
Emotional knowledge is a classic topic in the study of emotional perception in children; however, it can also be assessed at other ages, for instance with sections A and E of the MSCEIT (Mayer et al., 2003; for cross-cultural validation, see Karim and Weisz, 2010). Critics have recently discussed whether it is possible to measure the EI framework in its entirety (Maul, 2012a,b). PERVALE-S is an instrument designed specifically to measure the first branch (perceiving).
Gender has been reported as an important variable in EI (Castro-Schilo and Kee, 2010) and in emotional perception (Chiaburu and Gray, 2008). In this study, gender was not significant, possibly because of the sample's age. Future studies should confirm our finding of no gender differences.
Another question is how other EK measures have been computed. Take, for instance, the EMT (Emotion Matching Task; Morgan et al., 2009). This instrument has been validated in Spain (see Alonso-Alberca et al., 2012) with good reliability and promising validation processes. Nevertheless, some aspects of this task are still under debate. Some EMT items are scored the same way across different population subsets. For example, in the first part of the EMT there are two ambiguous items (numbers 8 and 11) for which the answers “sad” and “anger” have the same value (both are scored as “1”). Both emotions had similar accuracy rates, although sadness and anger have different facial expressions (Ekman, 1999). In our opinion, this decision favors the psychometric properties of the instrument. In contrast, we believe that an emotional perception instrument should encompass different levels of achievement, as in an exam (Seal et al., 2009). Verbal intelligence might be used as a screening factor in future studies on emotion perception in DP, while Gf (e.g., Raven's measure) should be tested as a mediating variable in EK scores, even among HP. It is too early to recommend PERVALE-S as a tool for assessing emotional perception in DP. As cochlear implants become increasingly prevalent, the next step would be to test the instrument on a sample with cochlear implants; prosody would then be another factor to include in the study of emotional perception or EK.
Age and intelligence were related to PERVALE-S, but the social emotion section should be redesigned, removing at least the envy items following Salovey and Rodin's (1988) suggestion that the difference between envy and jealousy is quantitative. Another topic under debate is whether deaf newborns should receive cochlear implants; the findings of this pilot study should not be taken as support for the defenders of cochlear implant devices for the deaf. However, we assume that the technology will improve tremendously in the future and that “metallic sounds” will be controlled. If DP indeed decode emotions similarly to HP, then another barrier to social adaptation will disappear.
The Role of Personality Traits
In fact, our personality test adapted for the deaf might be considered a semantic differential with three levels of agreement (1: “a little,” 2: “medium,” 3: “much”). We used the HSPQ traits and created a pptx file with embedded videos. All participants understood how to complete it. This was another challenge, because we did not assume that the deaf participants would understand the 144-item HSPQ.
We found that participants who scored lower in the social emotion section perceived themselves as more dominant (r = -0.276, p < 0.05). Dominance and EPA are linked in the subordination hypothesis (Snodgrass, 1992), which states that the increased capacity of women (and compliant people) to perceive emotions is due to the traditional social subordination to which they have been or are being subjected (for further reading, see Elfenbein et al., 2002). The key idea of this hypothesis is that it is more important for subordinates to understand the emotions of those to whom they are subordinate (Keltner et al., 2003).
Regarding the basic emotion section, this subscale was positively related to consciousness (r = 0.347, p < 0.01) and negatively related to tension (r = -0.265, p < 0.05). Consciousness (and unconscious emotion processes) is attracting interest from emotional perception researchers (see Barret et al., 2005). People who pay attention to their tasks (such as PERVALE-S) also tend to score higher on this kind of performance test (Matsumoto et al., 2000; Barret et al., 2005; Matsumoto, 2006). In contrast, neuroticism or tension predicts lower performance in emotional perception tasks (Matsumoto et al., 2000). Finally, the HSPQ trait of self-sufficiency was negatively related to overall PERVALE-S performance (r = -0.265, p < 0.05): participants who prefer to work in groups rather than alone obtained better scores on the instrument. EI abilities are related to social interactions (Lopes et al., 2004, 2011; Grant et al., 2014), which is consistent with the relationship found in the present study. This may be because individuals who perceive themselves as easy-going are more interested in developing their social competence (Gross, 1998).
In order to determine the predictive validity of the instrument, we asked four teachers who were familiar with the deaf participants to rate them along five social adaptation criteria. This strategy has been used successfully to determine the predictive validity of EI (Mestre et al., 2006; Lopes et al., 2011, 2012). The level of agreement for each criterion was appropriate. Hearing participants were excluded from the subsequent multiple regression analysis, so the final sample for this part of the study was 26 deaf participants. Stepwise multiple regression analysis revealed that both basic emotion section scores and age were related to impulsiveness control and to unrest. Traditionally, impulsiveness has been related to being able to regulate one's emotions in school (Gross and Thompson, 2007), and the teachers consulted agreed with this point of view. The deaf participants who were rated as better able to control their impulses also scored higher in the basic emotion section. Unrest or conflict was also negatively related to basic emotion scores and age after the regression analysis. This relationship between emotional competence and unrest has been described previously, especially among males (Mestre et al., 2006; Lopes et al., 2012). Thus, finding this relationship in our study might be due to more than half the sample being male (68%). However, the Mann–Whitney analysis did not reveal this gender influence.
Despite these findings, we recommend prudence in the interpretation of the current results. PERVALE-S is still under review and predictive validity investigations are necessary. Currently, the instrument is being used to train emotional competence among DP.
Limitations and Strengths
The sample size and the lack of similar studies using an emotional perception instrument adapted for DP are notable limitations. The research design did not allow causal inferences, despite the multiple regression analyses used. Nevertheless, this was a pilot study whose main objective was to test the new instrument and derive standard scores for future investigations with DP. Developing an instrument for the deaf was a challenge, as was adapting the HSPQ.
Removing the influence of verbal intelligence from the pilot study and developing an emotional perception instrument for DP minimized the traditional differences between DP and HP in emotional perception assessment. This pilot study provides an interesting contribution to the literature on emotional perception and deafness. In addition, it is a first step toward assessing EI in DP. The main problem in assessing EI is verbal intelligence (especially in the third branch, understanding emotions; see Beck et al., 2012), and verbal intelligence is a measure of crystallized intelligence (Mackintosh, 2003). Because of these influences, some emotionally competent people have obtained low scores on EI performance tests such as the MSCEIT (Amitay and Mongrain, 2007; Nafukho, 2010). Therefore, the development of instruments such as PERVALE-S allows the influence of verbal ability to be separated out in non-linguistic EI studies.
Emotional knowledge and many emotional perception tests have not included social emotion stimuli. This might be due to problems with using static stimuli to simulate existential emotional states, such as anxiety, jealousy, and embarrassment (Krumhuber et al., 2013). Some social emotions are existential, for instance anxiety, jealousy, embarrassment, and envy (Lazarus and Lazarus, 1996). Indeed, participants often made errors on these items at high rates. Avoiding social emotion stimuli fails to encompass the whole emotional sphere and how to face it (Summerfeldt et al., 2006). Moreover, the relationship between Gf and social emotion should be explored with a bigger sample. Previous studies have suggested a role for non-linguistic abilities in EI (see Albanese et al., 2010). Another strength is that PERVALE-S fits current perception theories (top-down versus bottom-up).
Generalizability and Heuristics
It was difficult to recruit the deaf sample due to the changing situation of DP. For example, Spanish deaf students now receive an inclusive education instead of attending a special school (Kelman and Branco, 2009). Furthermore, there is ongoing debate about whether cochlear implants should be given to all deaf newborns (Kermit, 2009). Some Spanish deaf communities are awaiting studies that support their position against cochlear implantation (Herrero et al., 2009), but this pilot study is not an argument for or against the cochlear implant. Our view is that the deaf community calls for greater interest in its issues from HP rather than vice versa. Moreover, DP should not perceive the cochlear implant as a menace to their own culture or way of life (Wang et al., 2011). However, this research itself might help to change some HP stereotypes about the affectivity of DP, given that their emotional performance was similar to that of HP.
Finally, we considered cognitive information processing in DP (iconic inputs only) when the new version of PERVALE-S was developed (e.g., reducing the answer scale from five levels to three); however, we are unsure whether this software could be generalized to other deaf communities, given the particular Southern SSL dialect used. Still, hearing participants seemed to understand most of the basic emotions and committed the same errors as the deaf participants in the social emotion section (envy and jealousy). The software is easily translatable for any interested researcher, and the videos can be replaced.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
- ^ According to Wikipedia, “a confusion matrix, also known as a contingency table or an error matrix, is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one (in unsupervised learning it is usually called a matching matrix). Each column of the matrix represents the instances in a predicted class, while each row represents the instances in an actual class. The name stems from the fact that it makes it easy to see if the system is confusing two classes (i.e., commonly mislabeling one as another).”
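The footnote's definition can be made concrete with a short sketch of how per-emotion error percentages, like those reported in the results, are read off a confusion matrix. The response pairs below are invented for illustration and are not the study's data.

```python
# Toy illustration of a confusion matrix: rows are the emotion actually
# expressed, columns the emotion chosen by a participant. All response
# data below are hypothetical, not the study's.

from collections import Counter

emotions = ["joy", "fear", "envy", "jealousy"]

# (expressed, chosen) pairs from hypothetical participants.
responses = [
    ("joy", "joy"), ("joy", "joy"), ("joy", "surprise"),
    ("fear", "fear"), ("fear", "surprise"),
    ("envy", "jealousy"), ("envy", "envy"),
    ("jealousy", "envy"), ("jealousy", "jealousy"), ("jealousy", "envy"),
]

counts = Counter(responses)  # cell counts of the confusion matrix

for expressed in emotions:
    total = sum(c for (e, _), c in counts.items() if e == expressed)
    errors = sum(c for (e, chosen), c in counts.items()
                 if e == expressed and chosen != expressed)
    print(f"{expressed:9s} error rate: {100 * errors / total:.1f}%")
```

Note how the off-diagonal cells immediately reveal which pairs are confused, for example envy being chosen when jealousy was expressed, mirroring the envy–jealousy confusion reported in the social emotion section.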
Acosta-Rodríguez, V. M. (2003). Un estudio de la sordera como construcción social: visiones externas versus visiones internas. Rev. Logop. Foniatría Audiol. 23, 178–194. doi: 10.1016/S0214-4603(03)75762-3
Albanese, O., De Stasio, S., Di Chiacchio, C., Fiorilli, C., and Pons, F. (2010). Emotion comprehension: the impact of nonverbal intelligence. J. Genet. Psychol. 171, 101–15. doi: 10.1080/00221320903548084
Alonso-Alberca, N., Vergara, A. I., Fernandez-Berrocal, P., Johnson, S. R., and Izard, C. E. (2012). The adaptation and validation of the Emotion Matching Task for preschool children in Spain. Int. J. Behav. Dev. 36, 489–494. doi: 10.1177/0165025412462154
Belke, E., Humphreys, G. W., Watson, D. G., Meyer, A. S., and Telling, A. L. (2008). Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load. Percept. Psychophys. 70, 1444–1458. doi: 10.3758/PP.70.8.1444
Cambra, C., Leal, A., and Silvestre, N. (2010). Graphical representations of a television series: a study with deaf and hearing adolescents. Span. J. Psychol. 13, 765–776. doi: 10.1017/S1138741600002420
Castro-Schilo, L., and Kee, D. W. (2010). Gender differences in the relationship between emotional intelligence and right hemisphere lateralization for facial processing. Brain Cogn. 73, 62–67. doi: 10.1016/j.bandc.2010.03.003
Day, A. L., and Carroll, S. A. (2004). Using an ability-based measure of emotional intelligence to predict individual performance, group performance, and group citizenship behaviours. Pers. Indiv. Differ. 36, 1443–1458. doi: 10.1016/S0191-8869(03)00240-X
Dyck, M. J., Farrugia, C., Shochet, I. M., and Holmes-Brown, M. (2004). Emotion recognition/understanding ability in hearing or vision-impaired children: do sounds, sights, or words make the difference? J. Child Psychol. Psychiatry 45, 789–800. doi: 10.1111/j.1469-7610.2004.00272.x
Elfenbein, H. A., Marsh, A. A., and Ambady, N. (2002). “Emotional intelligence and the recognition of emotion from facial expression,” in The Wisdom in Feeling: Psychological Processes in Emotional Intelligence, eds L. F. Barret and P. Salovey (New York, NY: Guilford Press), 37–59.
Farrelly, D., and Austin, E. J. (2007). Ability EI as an intelligence? Associations of the MSCEIT with performance on emotion processing and social tasks and with cognitive ability. Cogn. Emot. 21, 1043–1063. doi: 10.1080/02699930601069404
González, M. T. D., Guil, F. G., López, L., Salmerón, R., and García, G. (2011). Evaluación neuropsicológica en niños sordos: resultados preliminares obtenidos con la batería AWARD neuropsychological. Electron. J. Res. Educ. Psychol. 9, 849–868.
Grant, L., Kinman, G., and Alexander, K. (2014). What’s all this talk about emotion? Developing emotional intelligence in social work students. Soc. Work Educ. 33, 874–889. doi: 10.1080/02615479.2014.891012
Grunes, P., Gudmundsson, A., and Irmer, B. (2013). To what extent is the Mayer and Salovey (1997) model of emotional intelligence a useful predictor of leadership style and perceived leadership outcomes in Australian educational institutions? Educ. Manag. Adm. Leadersh. 42, 112–135. doi: 10.1177/1741143213499255
Herrero, J., Mestre, J. M., Guil, R., and Muñoz, J. (2009). PERVALE-S. Test de Tareas Cognitivas de Percepción y Valoración de Emociones en sordos/Cognitive Task Test of Perception and Valoration of Emotions in Deaf People. Cadiz: Registro General de Patentes y de la Propiedad Intelectual, Junta de Andalucía.
Hosie, J. A., Russell, P. A., Gray, C. D., Scott, C., Hunter, N., Banks, J. S., et al. (2000). Knowledge of display rules in prelingually deaf and hearing children. J. Child Psychol. Psychiatry 41, 389–398. doi: 10.1111/1469-7610.00623
Hughes, C., and Leekam, S. (2004). What are the links between theory of mind and social relations? Review, reflections and new directions for studies of typical and atypical development. Soc. Dev. 13, 590–619. doi: 10.1111/j.1467-9507.2004.00285.x
Ipiña, M. J., Molina, L., Guzmán, R., and Reyna, C. (2010). Comparación del desempeño social en niños con sordera profunda y audición normal, según distintos informantes. Electron. J. Res. Educ. Psychol. 8, 1077–1098.
Karim, J., and Weisz, R. (2010). Cross-cultural research on the reliability and validity of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Cross Cult. Res. 44, 374–404. doi: 10.1177/1069397110377603
Kermit, P. (2009). Deaf or deaf? Questioning alleged antinomies in the bioethical discourses on cochlear implantation and suggesting an alternative approach to deafness. Scand. J. Disabil. Res. 11, 159–174. doi: 10.1080/15017410902830744
Knyazev, G. G., Bocharov, A. V., Levin, E. A., Savostyanov, A. N., and Slobodskoj-Plusnin, J. Y. (2008). Anxiety and oscillatory responses to emotional facial expressions. Brain Res. 1227, 174–188. doi: 10.1016/j.brainres.2008.06.108
Kumschick, I. R., Beck, L., Eid, M., Witte, G., Klann-Delius, G., Heuser, I., et al. (2014). READING and FEELING: the effects of a literature-based intervention designed to increase emotional competence in second and third graders. Front. Psychol. 5:1448. doi: 10.3389/fpsyg.2014.01448
Letourneau, S. M., and Mitchell, T. V. (2011). Gaze patterns during identity and emotion judgments in hearing adults and deaf users of American Sign Language. Perception 40, 563–575. doi: 10.1068/p6858
Lopes, P. N., Brackett, M. A., Nezlek, J. B., Schütz, A., Sellin, I., and Salovey, P. (2004). Emotional intelligence and social interaction. Pers. Soc. Psychol. Bull. 30, 1018–1034. doi: 10.1177/0146167204264762
Lopes, P. N., Mestre, J. M., Guil, R., Kremenitzer, J. P., and Salovey, P. (2012). The role of knowledge and skills for managing emotions in adaptation to school: social behavior and misconduct in the classroom. Am. Educ. Res. J. 49, 710–742. doi: 10.3102/0002831212443077
Lopes, P. N., Nezlek, J. B., Extremera, N., Hertel, J., Fernández-Berrocal, P., Schütz, A., et al. (2011). Emotion regulation and the quality of social interaction: does the ability to evaluate emotional situations and identify effective responses matter? J. Pers. 79, 429–467. doi: 10.1111/j.1467-6494.2010.00689.x
Ludlow, A., Heaton, P., Rosset, D., Hills, P., and Deruelle, C. (2010). Emotion recognition in children with profound and severe deafness: do they have a deficit in perceptual processing? J. Clin. Exp. Neuropsychol. 32, 923–928. doi: 10.1080/13803391003596447
Marschark, M., Bull, R., Sapere, P., Nordmann, E., Skene, W., Lukomski, J., et al. (2012). Do you see what I see? School perspectives of deaf children, hearing children, and their parents. Eur. J. Spec. Needs Educ. 27, 483–497. doi: 10.1080/08856257.2012.719106
Matsumoto, D., LeRoux, J., Wilson-Cohn, C., Raroque, J., Kooken, K., Ekman, P., et al. (2000). A new test to measure emotion recognition ability: Matsumoto and Ekman’s Japanese and Caucasian Brief Affect Recognition Test (JACBERT). J. Nonverbal Behav. 24, 179–209. doi: 10.1023/A:1006668120583
Mayer, J. D., DiPaolo, M., and Salovey, P. (1990). Perceiving affective content in ambiguous visual stimuli: a component of emotional intelligence. J. Pers. Assess. 54, 772–781. doi: 10.1080/00223891.1990.9674037
McCullough, S., Emmorey, K., and Sereno, M. (2005). Neural organization for recognition of grammatical and emotional facial expressions in deaf asl signers and hearing nonsigners. Brain Res. Cogn. Brain Res. 22, 193–203. doi: 10.1016/j.cogbrainres.2004.08.012
Meristo, M., Falkman, K. M., Hjelmquist, E., Tedoldi, M., Surian, L., and Siegal, M. (2007). Language access and theory of mind reasoning: evidence from deaf children in bilingual and oralist environments. Dev. Psychol. 43, 1156–1169. doi: 10.1037/0012-1649.43.5.1156
Mestre, J. M., Guil, R., Martinez-Cabañas, F., Larran, C., and Gonzalez de la Torre, G. (2011). Validación de una prueba para evaluar la capacidad de percibir, expresar y valorar emociones en niños de la etapa infantil. Rev. Electrón. Interuniversitaria Formación Profesorado 38, 37–54.
Mestre, J. M., Núñez-Vázquez, I., and Guil, R. (2007). “Aspectos psicoevolutivos, psicosociales y diferenciales de la inteligencia emocional,” In Manual de Inteligencia Emocional, eds J. M. Mestre and P. Fernández-Berrocal (Madrid: Pirámide), 151–170.
Mestre, J. M., Pérez-Alarcón, I., Larrán, C., Guil, R., and Hidalgo, V. (2014). “PERCEXPVAL: un instrumento para medir la capacidad para percibir y valorar emociones básicas en niños de la etapa infantil (3-6 años). Resultados recientes en su comparación con el Emotion Match Task (EMT),” In Avances en el Estudio de la Motivación y de la Emoción, eds A. Acosta, J. L. Megías, and J. Lupiáñez (Granada: Universidad de Granada), 181–187.
Morgan, G., Meristo, M., Mann, W., Hjelmquist, E., Surian, L., and Siegal, M. (2014). Mental state language and quality of conversational experience in deaf and hearing children. Cogn. Dev. 29, 41–49. doi: 10.1016/j.cogdev.2013.10.002
Morgan, J. K., Izard, C. E., and King, K. A. (2009). Construct validity of the emotion matching task: preliminary evidence for convergent and criterion validity of a new emotion knowledge measure for young children. Soc. Dev. 19, 52–70. doi: 10.1111/j.1467-9507.2008.00529.x
Morris, C., Denham, S. A., Bassett, H. H., and Curby, T. W. (2013). Relations among teachers’ emotion socialization beliefs and practices, and preschoolers’ emotional competence. Early Educ. Dev. 24, 979–999. doi: 10.1080/10409289.2013.825186
Most, T., Weisel, A., and Tur-Kaspa, H. (1999). Contact with students with hearing impairments and the evaluation of speech intelligibility and personal qualities. J. Spec. Educ. 33, 103–111. doi: 10.1177/002246699903300204
Ono, M., Sachau, D. A., Deal, W. P., Englert, D. R., and Taylor, M. D. (2011). Cognitive ability, emotional intelligence, and the big five personality dimensions as predictors of criminal investigator performance. Crim. Justice Behav. 38, 471–491. doi: 10.1177/0093854811399406
Peterson, C. C., and Slaughter, V. P. (2006). Telling the story of theory of mind: deaf and hearing children’s narratives and mental state understanding. Br. J. Dev. Psychol. 24, 151–179. doi: 10.1348/026151005X60022
Prietch, S. S., and Filgueiras, L. V. L. (2013). “Developing Emotion-Libras 2.0 –an Instrument to Measure the Emotional Quality of Deaf Persons while using Technology,” in Emerging Research and Trends in Interactivity and the Human-Computer Interface, eds K. Blashki and P. Isaías (Hershey, PA: IGI Global).
Roberts, R. D., Schulze, R., O’Brien, K., MacCann, C., Reid, J., and Maul, A. (2006). Exploring the validity of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) with established emotions measures. Emotion 6, 663–669. doi: 10.1037/1528-3542.6.4.663
Rodríguez-Ortiz, I. R. (2005). El Desarrollo Socioemocional de las Personas sordas (The Social-Emotional Development of Deaf People). Abstract dissertation at CIVE, Congreso International Virtual de Educacion (International Congress of Virtual Education), Palma de Mallorca.
Scherer, K. R., Mortillaro, M., and Mehu, M. (2013). Understanding the mechanisms underlying the production of facial expression of emotion: a componential perspective. Emot. Rev. 5, 47–53. doi: 10.1177/1754073912451504
Seal, C. R., Sass, M. D., Bailey, J. R., and Liao-Troth, M. (2009). Integrating the emotional intelligence construct: the relationship between emotional ability and emotional competence. Organ. Manag. J. 6, 204–214. doi: 10.1057/omj.2009.28
Summerfeldt, L. J., Kloosterman, P. H., Antony, M. M., and Parker, J. D. A. (2006). Social anxiety, emotional intelligence, and interpersonal adjustment. J. Psychopathol. Behav. Assess. 28, 57–68. doi: 10.1007/s10862-006-4542-1
Wang, Y., Su, Y., Fang, P., and Zhou, Q. (2011). Facial expression recognition: can preschoolers with cochlear implants and hearing aids catch it? Res. Dev. Disabil. 32, 2583–2588. doi: 10.1016/j.ridd.2011.06.019
Wiefferink, C. H., Rieffe, C., Ketelaar, L., De Raeve, L., and Frijns, J. H. M. (2013). Emotion understanding in deaf children with a cochlear implant. J. Deaf Stud. Deaf Educ. 18, 175–186. doi: 10.1093/deafed/ens042
Keywords: emotional perception ability, deaf, assessing emotional perception, emotional knowledge in deaf people, emotional perception measure, adaptation criteria in deaf people
Citation: Mestre JM, Larrán C, Herrero J, Guil R and de la Torre GG (2015) PERVALE-S: a new cognitive task to assess deaf people’s ability to perceive basic and social emotions. Front. Psychol. 6:1148. doi: 10.3389/fpsyg.2015.01148
Received: 28 February 2015; Accepted: 23 July 2015;
Published: 07 August 2015.
Edited by: Pablo Fernández-Berrocal, University of Malaga, Spain
Reviewed by: Ai-Girl Tan, Nanyang Technological University, Singapore
Catherine S. Daus, Southern Illinois University Edwardsville, USA
Heribert Harald Freudenthaler, University of Graz, Austria
Copyright © 2015 Mestre, Larrán, Herrero, Guil and de la Torre. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: José M. Mestre, Laboratorio de Inteligencia Emocional, Departamento de Psicología, Universidad de Cádiz, Campus Universitario de Puerto Real, Puerto Real 11519, Cadiz, Spain, firstname.lastname@example.org