<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Integr. Neurosci.</journal-id>
<journal-title>Frontiers in Integrative Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Integr. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5145</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnint.2020.00001</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Limitations of Standard Accessible Captioning of Sounds and Music for Deaf and Hard of Hearing People: An EEG Study</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Revuelta</surname> <given-names>Pablo</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/459907/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Ortiz</surname> <given-names>Tom&#x000E1;s</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/902350/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Luc&#x000ED;a</surname> <given-names>Mar&#x000ED;a J.</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/776360/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Ruiz</surname> <given-names>Bel&#x000E9;n</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/460065/overview"/>
</contrib> 
<contrib contrib-type="author">
<name><surname>S&#x000E1;nchez-Pena</surname> <given-names>Jos&#x000E9; Manuel</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/873082/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Computer Science, Oviedo University</institution>, <addr-line>Oviedo</addr-line>, <country>Spain</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Psychiatry, Complutense University of Madrid</institution>, <addr-line>Madrid</addr-line>, <country>Spain</country></aff>
<aff id="aff3"><sup>3</sup><institution>Spanish Center for Captioning and Audiodescription, Carlos III University of Madrid</institution>, <addr-line>Legan&#x000E9;s</addr-line>, <country>Spain</country></aff>
<aff id="aff4"><sup>4</sup><institution>Department of Computer Science, Carlos III University of Madrid</institution>, <addr-line>Legan&#x000E9;s</addr-line>, <country>Spain</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Richard B. Reilly, Trinity College Dublin, Ireland</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Ines Kozuh, University of Maribor, Slovenia; Srdjan Vlajkovic, The University of Auckland, New Zealand</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Mar&#x000ED;a J. Luc&#x000ED;a <email>maluciam&#x00040;inf.uc3m.es</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>18</day>
<month>02</month>
<year>2020</year>
</pub-date>
<pub-date pub-type="collection">
<year>2020</year>
</pub-date>
<volume>14</volume>
<elocation-id>1</elocation-id>
<history>
<date date-type="received">
<day>22</day>
<month>10</month>
<year>2019</year>
</date>
<date date-type="accepted">
<day>06</day>
<month>01</month>
<year>2020</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2020 Revuelta, Ortiz, Luc&#x000ED;a, Ruiz and S&#x000E1;nchez-Pena.</copyright-statement>
<copyright-year>2020</copyright-year>
<copyright-holder>Revuelta, Ortiz, Luc&#x000ED;a, Ruiz and S&#x000E1;nchez-Pena</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>Captioning is the process of transcribing speech and acoustical information into text to help deaf and hard of hearing people access the audio track of audiovisual media. In addition to the verbal transcription, it includes information such as sound effects, speaker identification, or music tagging. However, it covers only a limited part of the acoustic information available in the soundtrack, and hence an important amount of emotional information is lost when attending only to standard-compliant captions. In this article, we show, by means of behavioral and EEG measurements, how the emotional information conveyed by the sounds and music used by the creator of an audiovisual work is perceived differently by a normal hearing group and a hearing loss group when standard captioning is applied. Audio and captions activate similar processing areas, respectively, in each group, although not with the same intensity. Moreover, captions require higher activation of voluntary attentional circuits, as well as language-related areas. Captions transcribing musical information increase attentional activity instead of emotional processing.</p></abstract>
<kwd-group>
<kwd>emotion</kwd>
<kwd>hearing impairment</kwd>
<kwd>audiovisual</kwd>
<kwd>EEG</kwd>
<kwd>captions</kwd>
<kwd>ERP</kwd>
</kwd-group>
<counts>
<fig-count count="3"/>
<table-count count="4"/>
<equation-count count="0"/>
<ref-count count="44"/>
<page-count count="9"/>
<word-count count="6065"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>It is widely accepted that music produces emotional responses, and that this is one of its defining features (Gabrielsson, <xref ref-type="bibr" rid="B17">2001</xref>). Indeed, there is increasing scientific evidence on the consistency of emotional responses across listeners to the same musical features (Vieillard et al., <xref ref-type="bibr" rid="B46">2008</xref>) and on the immediacy (less than 1 s) of the emotional response (Paquette et al., <xref ref-type="bibr" rid="B30">2013</xref>).</p>
<p>However, in Spain around 5% of the population over 6 years old presents some degree of hearing loss, according to official national statistics (INE, <xref ref-type="bibr" rid="B22">2008</xref>). This amounts to around 2.25 million people who encounter limitations when accessing audiovisual soundtrack content through television, cinema, or the Internet, among other information channels.</p>
<p>In order to help hard of hearing people benefit from the rights established in the UN Convention on the Rights of Persons with Disabilities (UN, <xref ref-type="bibr" rid="B45">2006</xref>) concerning access to television programs, films, theater, and other cultural activities, focus was placed on captioning. Captioning is the reference assistive tool for hearing impairment, and special regulations were issued to guarantee both its application [in Spain, the General Law of Audiovisual Communication (BOE, <xref ref-type="bibr" rid="B500">2010</xref>) requires captioning for at least 90% of all public television broadcasts] and its quality, considering factors such as visual aspects, synchronism, presentation speed, speaker identification, and accuracy (AENOR, <xref ref-type="bibr" rid="B1">2012</xref>).</p>
<p>Regarding the benefits of captioning, no significant differences have been found to date in immersion, transportation, presence, or enjoyment when watching audiovisual works either dubbed or captioned (see, for example, D&#x02019;Ydewalle and Van Rensbergen, <xref ref-type="bibr" rid="B10">1989</xref>; Kim and Biocca, <xref ref-type="bibr" rid="B24">1997</xref>; Green and Brock, <xref ref-type="bibr" rid="B18">2000</xref>; Rheinberg et al., <xref ref-type="bibr" rid="B37">2003</xref>; Wissmath et al., <xref ref-type="bibr" rid="B47">2009</xref>). This seems to be related to the automated text processing involved in reading captions, as shown in D&#x02019;Ydewalle et al. (<xref ref-type="bibr" rid="B11">1991</xref>), D&#x02019;Ydewalle and De Bruycker (<xref ref-type="bibr" rid="B9">2007</xref>), and Perego et al. (<xref ref-type="bibr" rid="B35">2015</xref>).</p>
<p>However, captioning has some shortcomings. Pre-lingual deafness is associated with lower language skills and reading ability, and thus with lower caption understanding, yet it is precisely pre-lingually, profoundly deaf participants who depend on alternative methods of information assimilation such as captions (Gulliver and Ghinea, <xref ref-type="bibr" rid="B19">2003</xref>). When captions are added, attention is drawn to them, reducing the amount of video information assimilated, although captions provide a greater level of context (Gulliver and Ghinea, <xref ref-type="bibr" rid="B19">2003</xref>) and improve comprehension when added to sign language interpreter videos (Debevc et al., <xref ref-type="bibr" rid="B13">2015</xref>).</p>
<p>Another issue arises with non-verbal sounds: national regulations establish verbal cues, so sound effects and music must be subtitled in the upper right of the screen, formatted in brackets, e.g., (Applause), (Phone). In the case of music, information on the type of music, the sensation transmitted, and the identification of the piece (title, author) must be included, e.g., (Rock music), (Horror music), (Adagio, Albinoni). This verbal representation does not convey the emotional information of sounds and music (Pehrs et al., <xref ref-type="bibr" rid="B33">2014</xref>). Although many experiments have been conducted with captions, none of them deals with the representation of non-verbal information such as music.</p>
<p>Our main hypothesis is that captions cannot elicit the same emotional and behavioral reactions as sound or music. Moreover, captions should produce &#x0201C;lower emotional effects&#x0201D; than their auditory correlates, as they require conscious and selective attention (Gulliver and Ghinea, <xref ref-type="bibr" rid="B19">2003</xref>).</p>
<p>Among the many ways these limitations can be measured and quantified, in this study we chose event-related potential (ERP) measurements before the emotional motor response by means of EEG. The focus of the present study is to examine what happens just before a motor response (associated with emotion detection) while watching videos with audio or captions in two groups of participants: normal hearing subjects and deaf or hard of hearing subjects.</p>
<p>The decision to use the ERPs prior to motor response is based on the following findings. The emotional and cognitive networks involved in decision making can be tracked by ERPs (Olofsson et al., <xref ref-type="bibr" rid="B27">2008</xref>; Imbir et al., <xref ref-type="bibr" rid="B21">2015</xref>). Negative ERPs close to the motor response are present in anticipatory processes and reflect the emotional and cognitive processing of stimuli, such as the Readiness Potential (Pedersen et al., <xref ref-type="bibr" rid="B32">1998</xref>), the Movement Preceding Negativity (Brunia and van Boxtel, <xref ref-type="bibr" rid="B7">2001</xref>), the Negative Shift Potential (Ortiz et al., <xref ref-type="bibr" rid="B29">1993</xref>; Duncan et al., <xref ref-type="bibr" rid="B15">2009</xref>), or the Decision Preceding Negativity (DPN; Bianchin and Angrilli, <xref ref-type="bibr" rid="B4">2011</xref>). The DPN is the last salient slow negative potential before a willed risky decision (Bianchin and Angrilli, <xref ref-type="bibr" rid="B4">2011</xref>) and is associated with emotional processes. Before the motor response, researchers have found a negative wave around 150 ms that is associated with neurophysiological processes related to decision making (Shibasaki et al., <xref ref-type="bibr" rid="B41">1980</xref>; Ortiz et al., <xref ref-type="bibr" rid="B29">1993</xref>).</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and Methods</title>
<sec id="s2-1">
<title>Participants</title>
<p>Two groups of participants were recruited.</p>
<p>In one group, 16 participants with self-reported normal hearing were recruited, eight females and eight males, aged between 20 and 60 (mean: 39.83, <italic>SD</italic>: 12.24), 24.1% with a high school degree, 27.6% with a college degree, and 48.3% with post-graduate studies.</p>
<p>In the other group, 13 participants with self-reported hearing loss were recruited, seven females and six males, aged between 20 and 60 (mean: 39.4, <italic>SD</italic>: 12.21), 38.5% with a high school degree, 38.5% with a college degree, and 23.1% with post-graduate studies.</p>
<p>The self-reported hearing losses were classified according to the Audiometric Classification of Hearing Impairments of the International Bureau for Audiophonology (BIAP, <xref ref-type="bibr" rid="B5">1996</xref>): mild hearing loss (between 20 and 40 dB), moderate hearing loss (between 41 and 70 dB; speech is perceived if the voice is loud, and the subject understands better what is being said if they can see their interlocutor), severe hearing loss (between 71 and 90 dB; speech is perceived if the voice is loud and close to the ear, and loud noises are perceived), very severe hearing loss (between 91 and 119 dB; speech is not perceived, only very loud noises), and total hearing loss (over 120 dB).</p>
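The BIAP categories above can be expressed as a simple threshold lookup. The sketch below is illustrative only (the function name is ours, and the handling of values falling exactly between published ranges, e.g., 40.5 dB, is an assumption):

```python
def biap_category(loss_db):
    """Map an average hearing loss in dB to its BIAP (1996) category,
    following the ranges summarized in the text: mild 20-40 dB,
    moderate 41-70 dB, severe 71-90 dB, very severe 91-119 dB,
    total over 120 dB."""
    if loss_db < 20:
        return "normal"
    if loss_db <= 40:
        return "mild"
    if loss_db <= 70:
        return "moderate"
    if loss_db <= 90:
        return "severe"
    if loss_db < 120:
        return "very severe"
    return "total"
```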
<p>Four participants had moderate hearing loss and used hearing aids; four had severe loss and used hearing aids, three of them also having a cochlear implant; and one participant had total loss and a cochlear implant.</p>
<p>All of them signed an informed consent approved by the Bioethical Committee of the Carlos III University of Madrid and filled out a survey concerning demographic information, level of studies, and degree of hearing loss.</p>
</sec>
<sec id="s2-2">
<title>Materials</title>
<sec id="s2-2-1">
<title>Stimuli</title>
<p>The visual stimuli used were extracted from the &#x0201C;Samsara&#x0201D; documentary in order to select neutral sequences without story or associated dramaturgy. The &#x0201C;Samsara&#x0201D; documentary<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref> is composed of sequences of soft images of nature and human society from 25 countries, with a musical background but without dialog or written messages. Forty video extracts of 10-s length were selected based on the absence of shot changes during the 10 s. The original soundtrack was removed, and a 2-s fade-in and fade-out were applied to soften the transitions. An auditory stimulus was then added to each fragment. These stimuli were taken from an audio database: the fragments and the instant at which they appeared were assigned randomly (between seconds 2 and 8, to avoid the fades). A caption corresponding to each auditory stimulus was added to each muted fragment. The captions were generated by a specialist at the Spanish Center for Captioning and Audio Description (CESyA) following the Spanish regulation (AENOR, <xref ref-type="bibr" rid="B1">2012</xref>).</p>
<p>A final video was built combining the 40 audio fragments and the corresponding 40 captioned fragments. These 80 fragments were randomly sorted, and the final video was split into five sets, allowing 20 s of rest between each set.</p>
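The ordering described above (40 audio fragments plus their 40 captioned counterparts, shuffled and split into five sets) can be sketched as follows. This is a minimal illustration, not the authors' actual editing workflow; names and structure are ours, and the 20-s rest between sets is assumed to be handled at playback time:

```python
import random

def build_sequence(n_fragments=40, n_sets=5, seed=None):
    """Build a randomized presentation order: each source fragment
    appears once in an 'audio' version and once in a 'caption' version
    (80 items total), shuffled and split into n_sets equal sets."""
    rng = random.Random(seed)
    items = [(i, cond) for i in range(n_fragments)
             for cond in ("audio", "caption")]
    rng.shuffle(items)
    set_size = len(items) // n_sets  # 80 / 5 = 16 items per set
    return [items[k * set_size:(k + 1) * set_size] for k in range(n_sets)]
```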
</sec>
<sec id="s2-2-2">
<title>Hardware</title>
<p>Two computers were involved in the experiment: one triggered the video and sent temporal marks to the EEG amplifier to locate the stimuli, with its screen and speakers facing the participant, and the other registered the EEG data. The latter allowed high-density (128-channel) EEG recordings, obtained using a custom-designed Neuroscan electrode cap and an ATI EEG system (Advantek SRL). Impedances were kept under 5 k&#x003A9;. Additional channels were included to monitor eye movement (right and left lateral canthi and superior and inferior orbits of the left eye). The reference electrodes were placed on the mastoids, and the ground electrode was placed on the forehead. Data were processed to an average reference following acquisition, with a band-pass filter of 0.05&#x02013;30 Hz and a sample rate of 512 Hz. An artifact rejection criterion of 100 &#x003BC;V was used to exclude eye blinks. Individual subject averages were visually inspected to ensure that clean recordings were obtained. Eye and muscle movement artifacts were identified off-line on a trial-by-trial basis through visual inspection and removed prior to data averaging and ERP analysis. Noisy channels were sparingly replaced with linear interpolations from clean channels (around 6 &#x000B1; 3.5 channels per recording and subject). From the remaining artifact-free trials, averages were computed for each participant and each condition. The analysis epochs for ERPs covered the 500 ms before the motor response. EEG analysis was carried out on frequent (non-target) trials to avoid contamination by motor-related neural activity associated with making a response. The ERPs obtained were averaged separately for each condition and each subject. A Bayesian Model Averaging (BMA) analysis over all electrodes was performed by opening a time window of &#x02212;20 to +20 ms around the highest negative amplitude peak measured at the Cz electrode.</p>
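The core of the epoching pipeline described above (band-pass filtering, extraction of the 500 ms before each button press, amplitude-based rejection, and averaging) can be sketched numerically. This is a minimal sketch under stated assumptions, not the authors' analysis code: it uses a second-order Butterworth filter as a stand-in for the unspecified filter design, and it automates the rejection step that the authors complemented with visual inspection:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 512  # Hz, sample rate reported in the text

def preprocess(eeg, press_samples, reject_uv=100.0, epoch_ms=500):
    """Band-pass 0.05-30 Hz, extract the epoch_ms preceding each
    button press, reject epochs exceeding +/-reject_uv microvolts,
    and average the surviving epochs.
    `eeg` is an array of shape (n_channels, n_samples) in microvolts;
    `press_samples` are button-press positions in samples."""
    b, a = butter(2, [0.05, 30.0], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, eeg, axis=-1)
    n = int(epoch_ms * FS / 1000)  # 500 ms -> 256 samples at 512 Hz
    epochs = [filtered[:, p - n:p] for p in press_samples if p >= n]
    kept = [e for e in epochs if np.abs(e).max() < reject_uv]
    return np.mean(kept, axis=0) if kept else None
```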
</sec>
</sec>
<sec id="s2-3">
<title>Procedure</title>
<p>Participants attended individual sessions. They were first asked to fill in a survey including questions about their age, gender, education level, and type and degree of hearing loss and hearing aids, if applicable.</p>
<p>They were then asked to sit in an armchair facing a 17&#x02033; screen with speakers, placed 1.5 m in front of them. They were asked to remove their hearing aids but to keep their glasses on if needed. The 128-channel EEG cap was fixed to their head, and a press button was placed under their left hand. Participants were told that they were going to watch a video and were asked to press the button held in their left hand whenever they felt any emotion while watching it. The lights in the room were turned off, and the corresponding video was launched. Normal hearing participants watched the video with soundtrack (audio and captioned sequences), while participants with hearing loss watched the video without soundtrack (muted and captioned sequences). The press button was connected to one of the computers, and each press was transmitted and registered as a mark in the EEG track.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec id="s3-1">
<title>Behavioral Response</title>
<p>We registered the total number of times each participant pressed the button with their left hand, indicating that they were feeling an emotion. The scores were registered for three conditions: Audio (presses occurring during audio fragment display), Caption (presses occurring during captioned fragment display), and Mute (presses occurring during muted fragment display).</p>
<p>Nonparametric Mann&#x02013;Whitney tests were used to compare the number of button presses across the different conditions and groups. The Mann&#x02013;Whitney statistic was selected because the Shapiro&#x02013;Wilk test rejected normality in some conditions, and the sample sizes were not large enough (fewer than 20) to assume a normal distribution in the remaining conditions. The results are shown in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
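The test-selection logic described above can be sketched with SciPy. This is an illustrative reconstruction, not the authors' analysis script; the function name and the fallback to a t-test for large normal samples are our assumptions:

```python
from scipy.stats import shapiro, mannwhitneyu, ttest_ind

def compare_conditions(scores_a, scores_b, alpha=0.05):
    """Compare two samples of button-press counts: if Shapiro-Wilk
    rejects normality in either sample, or either sample has fewer
    than 20 observations, use a two-sided Mann-Whitney U test;
    otherwise a two-sample t-test would apply."""
    normal = all(shapiro(s)[1] > alpha for s in (scores_a, scores_b))
    small = min(len(scores_a), len(scores_b)) < 20
    if not normal or small:
        res = mannwhitneyu(scores_a, scores_b, alternative="two-sided")
        return "mann-whitney", res.pvalue
    return "t-test", ttest_ind(scores_a, scores_b).pvalue
```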
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption><p>Comparisons of number of button presses for each group in each condition and between groups for Caption condition.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th align="center">Hearing</th>
<th align="center">Hearing loss</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Audio</td>
<td align="center">19.87 &#x000B1; 9.63</td>
<td/>
</tr>
<tr>
<td align="left">Mute</td>
<td/>
<td align="center">7 &#x000B1; 4.1</td>
</tr>
<tr>
<td align="left">Caption</td>
<td align="center">9.81 &#x000B1; 9.69</td>
<td align="center">7.15 &#x000B1; 4.86</td>
</tr>
<tr>
<td align="left"><italic>P</italic>-value*</td>
<td align="center">0.00528 (&#x0003C;0.05)</td>
<td align="center">0.52218 (&#x0003E;0.05)</td>
</tr>
<tr>
<td align="left"><italic>P</italic>-value**</td>
<td align="center" colspan="2">0.79486</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>*P-values of standardized Mann&#x02013;Whitney statistics comparing the Audio/Caption conditions in the Hearing group and the Mute/Caption conditions in the Hearing Loss group. **P-value of the standardized Mann&#x02013;Whitney statistic for the Caption condition between the Hearing and Hearing Loss groups</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>A Mann&#x02013;Whitney test was conducted to compare the number of button presses in the Caption and Audio conditions in the normal hearing group. The standardized results show a significant difference (<italic>p</italic> = 0.00528) between the scores for the Audio (19.87 &#x000B1; 9.63) and Caption (9.81 &#x000B1; 9.69) conditions, suggesting that the auditory stimuli produced roughly twice as many emotional reactions as the caption stimuli did. A second Mann&#x02013;Whitney test was conducted to compare the number of button presses in the Caption and Mute conditions in the hearing loss group. There was no significant difference (<italic>p</italic> = 0.52218) between the scores for the Mute (7 &#x000B1; 4.1) and Caption (7.15 &#x000B1; 4.86) conditions, suggesting that captions do not produce emotional reactions beyond those of the visual stimuli.</p>
<p>Finally, a Mann&#x02013;Whitney test was conducted to compare the number of button presses in the Caption condition between the normal hearing group and the hearing loss group. There was no significant difference (<italic>p</italic> = 0.79486) between the scores for the hearing group (9.81 &#x000B1; 9.69) and the hearing loss group (7.15 &#x000B1; 4.86), suggesting that the emotional reaction to captions is similar in both groups.</p>
</sec>
<sec id="s3-2">
<title>ERP Waves Before Emotional Response Onset</title>
<p><xref ref-type="fig" rid="F1">Figure 1</xref> shows the response onset-synchronized cerebral responses recorded from the vertex Cz electrode for the task. Prior to the emotional response (button press), two negative waves were found: an early negative shift around 300 ms before response onset (labeled NS300 from now on) and a later negative wave around 100 ms before response onset (labeled NS100).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Grand averages of stimulus-synchronized cerebral waveforms (Cz) prior to motor response onset. Calibration signal at the left indicates &#x000B1;10 &#x003BC;V for the cerebral responses.</p></caption>
<graphic xlink:href="fnint-14-00001-g0001.tif"/>
</fig>
</sec>
<sec id="s3-3">
<title>Source Localization</title>
<p>NS300 and NS100 show significantly greater activation in both groups in the middle and inferior temporal lobes for all video fragments.</p>
<p>Regarding the NS300 maps (<xref ref-type="fig" rid="F2">Figure 2</xref> and <xref ref-type="table" rid="T2">Tables 2</xref>, <xref ref-type="table" rid="T3">3</xref>), NS300 (with an average amplitude of &#x02212;1.8 &#x003BC;V and an <italic>SD</italic> of 0.67) shows high activation in the left temporal pole (TP) for the Audio and Mute conditions in the hearing group and the hearing loss group, respectively. The difference between the groups concerns the magnitude obtained with Hotelling&#x02019;s T<sup>2</sup> test, which shows (given an equal number of samples in all maps) greater activation of these cerebral areas in the hearing loss group (&#x0003E;1,400) compared with the hearing group (&#x02245;500). In addition, high activation appears in the inferior frontal lobe only in the hearing loss group. For the Caption condition, activation is found in the right temporal lobe in both groups, with greater activation in the hearing loss group (&#x0003E;2,500) compared with the hearing group (&#x02245;1,200).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Negative shift around 300 ms (NS300) mean electrical maps with <bold>(A)</bold> Mute/Audio condition and <bold>(B)</bold> Caption condition in each group. SPMs were computed based on a voxel-by-voxel Hotelling T<sup>2</sup> test against zero. Maximal intensity projection areas are displayed in yellow/red color. Averaging [Bayesian Model Averaging (BMA)] analysis was made by opening a time window of &#x02212;20 to +20 ms starting from the highest negative amplitude peak measured in Cz electrode.</p></caption>
<graphic xlink:href="fnint-14-00001-g0002.tif"/>
</fig>
<table-wrap id="T2" position="float">
<label>Table 2</label>
<caption><p>Hearing loss group NS300/NS100 wave summary.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th align="left">AAL</th>
<th align="center">BA</th>
<th align="center"><italic>X</italic></th>
<th align="center"><italic>Y</italic></th>
<th align="center"><italic>Z</italic></th>
<th align="center">Activation (T<sup>2</sup>)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><bold>NS100 wave</bold></td>
</tr>
<tr>
<td align="left">Mute</td>
<td align="left">Temporal Mid R</td>
<td align="center">21</td>
<td align="center">65</td>
<td align="center">&#x02212;39</td>
<td align="center">&#x02212;16</td>
<td align="center">917.45</td>
</tr>
<tr>
<td/>
<td align="left">Temporal Inf R</td>
<td align="center">20</td>
<td align="center">66</td>
<td align="center">&#x02212;35</td>
<td align="center">&#x02212;20</td>
<td align="center">736.67</td>
</tr>
<tr>
<td align="left">Caption</td>
<td align="left">Temporal Mid R</td>
<td align="center">21</td>
<td align="center">66</td>
<td align="center">&#x02212;22</td>
<td align="center">&#x02212;13</td>
<td align="center">828.67</td>
</tr>
<tr>
<td/>
<td align="left">Temporal Inf R</td>
<td align="center">20</td>
<td align="center">54</td>
<td align="center">&#x02212;7</td>
<td align="center">&#x02212;31</td>
<td align="center">726.78</td>
</tr>
<tr>
<td/>
<td align="left">Temporal Pole Mid R</td>
<td align="center">20</td>
<td align="center">46</td>
<td align="center">8</td>
<td align="center">&#x02212;23</td>
<td align="center">573.28</td>
</tr>
<tr>
<td align="left"><bold>NS300 wave</bold></td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td align="left">Mute</td>
<td align="left">Temporal Mid L</td>
<td align="center">21</td>
<td align="center">&#x02212;66</td>
<td align="center">&#x02212;31</td>
<td align="center">&#x02212;8</td>
<td align="center">1,624.87</td>
</tr>
<tr>
<td/>
<td align="left">Temporal inf L</td>
<td align="center">20</td>
<td align="center">&#x02212;66</td>
<td align="center">&#x02212;31</td>
<td align="center">&#x02212;14</td>
<td align="center">1,484.78</td>
</tr>
<tr>
<td/>
<td align="left">Temporal Pole Sup L</td>
<td align="center">38</td>
<td align="center">&#x02212;46</td>
<td align="center">10</td>
<td align="center">&#x02212;21</td>
<td align="center">1,434.96</td>
</tr>
<tr>
<td/>
<td align="left">Frontal Inf Tri L</td>
<td align="center">45</td>
<td align="center">&#x02212;54</td>
<td align="center">28</td>
<td align="center">4</td>
<td align="center">1,347.67</td>
</tr>
<tr>
<td align="left">Caption</td>
<td align="left">Temporal Inf R</td>
<td align="center">20</td>
<td align="center">66</td>
<td align="center">&#x02212;38</td>
<td align="center">&#x02212;12</td>
<td align="center">2,902.56</td>
</tr>
<tr>
<td/>
<td align="left">Temporal Mid R</td>
<td align="center">21</td>
<td align="center">66</td>
<td align="center">&#x02212;22</td>
<td align="center">&#x02212;13</td>
<td align="center">2,579.89</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note. NS300/NS100 wave summary of maximal intensity projection areas statistically significant based on a voxel-by-voxel Hotelling T<sup>2</sup> test against zero (<italic>p</italic> &#x0003C; 0.001), with each specific brain area localization. Abbreviations: AAL, Automated Anatomical Labeling corresponding to Probabilistic Brain Atlas; BA, Brodmann areas; X, Y, Z, coordinates in three spatial axes according to the MNI coordinate system of the maximum point T<sup>2</sup> in the anatomical structure</italic>.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap id="T3" position="float">
<label>Table 3</label>
<caption><p>Hearing group NS300/NS100 waves summary.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th align="left">AAL</th>
<th align="center">BA</th>
<th align="center"><italic>X</italic></th>
<th align="center"><italic>Y</italic></th>
<th align="center"><italic>Z</italic></th>
<th align="center">Activation (T<sup>2</sup>)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><bold>NS100 wave</bold></td>
</tr>
<tr>
<td align="left">Audio</td>
<td align="left">Parietal Sup R</td>
<td align="center">7</td>
<td align="center">22</td>
<td align="center">&#x02212;62</td>
<td align="center">64</td>
<td align="center">1,304.19</td>
</tr>
<tr>
<td/>
<td align="left">Temporal Mid R</td>
<td align="center">21</td>
<td align="center">66</td>
<td align="center">&#x02212;23</td>
<td align="center">&#x02212;13</td>
<td align="center">549.56</td>
</tr>
<tr>
<td/>
<td align="left">Temporal Inf R</td>
<td align="center">20</td>
<td align="center">50</td>
<td align="center">&#x02212;2</td>
<td align="center">&#x02212;36</td>
<td align="center">437.68</td>
</tr>
<tr>
<td align="left">Caption</td>
<td align="left">Temporal Mid R</td>
<td align="center">21</td>
<td align="center">66</td>
<td align="center">&#x02212;23</td>
<td align="center">&#x02212;13</td>
<td align="center">1,808.11</td>
</tr>
<tr>
<td/>
<td align="left">Temporal Inf R</td>
<td align="center">20</td>
<td align="center">66</td>
<td align="center">&#x02212;38</td>
<td align="center">&#x02212;12</td>
<td align="center">1,843.91</td>
</tr>
<tr>
<td align="left"><bold>NS300 wave</bold></td>
</tr>
<tr>
<td align="left">Audio</td>
<td align="left">Temporal Pole Sup L</td>
<td align="center">38</td>
<td align="center">&#x02212;12</td>
<td align="center">6</td>
<td align="center">&#x02212;20</td>
<td align="center">525.79</td>
</tr>
<tr>
<td/>
<td align="left">Temporal Inf L</td>
<td align="center">20</td>
<td align="center">&#x02212;50</td>
<td align="center">&#x02212;2</td>
<td align="center">&#x02212;36</td>
<td align="center">513.56</td>
</tr>
<tr>
<td/>
<td align="left">Temporal Mid L</td>
<td align="center">21</td>
<td align="center">&#x02212;50</td>
<td align="center">5</td>
<td align="center">&#x02212;23</td>
<td align="center">506.34</td>
</tr>
<tr>
<td align="left">Caption</td>
<td align="left">Temporal Mid R</td>
<td align="center">21</td>
<td align="center">66</td>
<td align="center">&#x02212;35</td>
<td align="center">&#x02212;13</td>
<td align="center">1,265.89</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note. Summary of NS100/NS300 maximal intensity projection areas that are statistically significant according to a voxel-by-voxel Hotelling T<sup>2</sup> test against zero (<italic>p</italic> &#x0003C; 0.001), with the localization of each specific brain area</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>Regarding the NS100 maps (<xref ref-type="fig" rid="F3">Figure 3</xref> and <xref ref-type="table" rid="T2">Tables 2</xref>, <xref ref-type="table" rid="T3">3</xref>), activation is found in the right temporal lobe in both groups for the Audio, Mute, and Caption conditions. For the Audio and Mute conditions, NS100 (average amplitude of &#x02212;2.4 &#x003BC;V, <italic>SD</italic> = 1.27) shows significantly greater activation in the hearing loss group. Activation in the right parietal lobe for the Audio condition appears only in the hearing group. For the caption stimuli, greater activation is found in the hearing group (T<sup>2</sup> &#x02245; 1,800) than in the hearing loss group (T<sup>2</sup> &#x02245; 800).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>NS100 mean electrical maps for <bold>(A)</bold> the Mute/Audio condition and <bold>(B)</bold> the Caption condition in each group. SPMs were computed based on a voxel-by-voxel Hotelling T<sup>2</sup> test against zero. Maximal intensity projection areas are displayed in yellow/red. The averaging (BMA) analysis was performed over a time window of &#x02212;20 to +20 ms around the highest negative amplitude peak measured at the Cz electrode.</p></caption>
<graphic xlink:href="fnint-14-00001-g0003.tif"/>
</fig>
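The voxel-by-voxel Hotelling T<sup>2</sup> test against zero used for the SPMs above can be sketched as follows. This is a minimal illustration with NumPy/SciPy, under the assumption of a single voxel with per-subject multivariate source estimates; the data values are hypothetical, not the study's BMA estimates.

```python
import numpy as np
from scipy.stats import f


def hotelling_t2_against_zero(X):
    """One-sample Hotelling T^2 test of H0: mean vector == 0.

    X: (n_subjects, p) array of per-subject measurements for one voxel
       (e.g., the components of the estimated source vector).
    Returns (T2, p_value) using the exact F relation
    F = (n - p) / (p * (n - 1)) * T2  ~  F(p, n - p).
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mean = X.mean(axis=0)
    S = np.cov(X, rowvar=False)              # unbiased sample covariance (p x p)
    t2 = n * mean @ np.linalg.solve(S, mean)  # n * mean' S^-1 mean
    f_stat = (n - p) / (p * (n - 1)) * t2
    p_value = f.sf(f_stat, p, n - p)
    return t2, p_value


# Hypothetical data: 13 subjects, 3 components, with a clearly nonzero mean,
# so the test should reject H0 at p < 0.001 as in Tables 2 and 3.
rng = np.random.default_rng(0)
X = rng.normal(loc=[2.0, -1.5, 1.0], scale=0.5, size=(13, 3))
t2, p_value = hotelling_t2_against_zero(X)
```

In the study this statistic would be computed independently at every voxel, and the maximal intensity projection displays the significant ones.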
</sec>
<sec id="s3-4">
<title>NS Latencies</title>
<p>In the hearing group, the average NS100 latency measured was 165.85 ms (&#x000B1;6.14) for the Audio condition and 161.91 ms (&#x000B1;8.22) for the Caption condition. In the case of NS300, the results were 377.79 ms (&#x000B1;50.57) for Audio and 370.08 ms (&#x000B1;43.99) for Caption. No significant differences were found in the NS100 or NS300 latencies between the Audio and Caption conditions in this group.</p>
<p>In the hearing loss group, the average NS100 wave latency was 116.15 ms (&#x000B1;19.30) for the Mute condition and 125.84 ms (&#x000B1;18.17) for the Caption condition. Regarding the NS300 wave, the latency was 338.46 ms (&#x000B1;7.77) for the Mute condition and 334.15 ms (&#x000B1;6.15) for the Caption condition. No significant differences were found in NS100 or NS300 latencies between the Mute and Caption conditions in this group.</p>
<p>Significant differences between groups were found in the Caption condition. <xref ref-type="table" rid="T4">Table 4</xref> shows the Mann&#x02013;Whitney test results comparing NS100 and NS300 latencies in the Caption condition between the hearing and hearing loss groups. For NS100, a significant difference (<italic>p</italic> = 0.0001) was found between the hearing group (165.85 &#x000B1; 6.14) and the hearing loss group (125.85 &#x000B1; 18.17). For NS300, a significant difference (<italic>p</italic> = 0.00078) was likewise found between the hearing group (377.79 &#x000B1; 50.57) and the hearing loss group (334.15 &#x000B1; 6.15). These results suggest that the reaction time is shorter in the hearing loss group for the Caption condition.</p>
<table-wrap id="T4" position="float">
<label>Table 4</label>
<caption><p>Comparisons of NS100 and NS300 latencies between groups for the Caption condition.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left"/>
<th align="center">NS100</th>
<th align="center">NS300</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Hearing</td>
<td align="center">165.85 &#x000B1; 6.14</td>
<td align="center">377.79 &#x000B1; 50.57</td>
</tr>
<tr>
<td align="left">Hearing loss</td>
<td align="center">125.85 &#x000B1; 18.17</td>
<td align="center">334.15 &#x000B1; 6.15</td>
</tr>
<tr>
<td align="left"><italic>P</italic>-value</td>
<td align="center">0.0001</td>
<td align="center">0.00078</td>
</tr>
<tr>
<td/>
<td align="center">&#x0003C;0.05</td>
<td align="center">&#x0003C;0.05</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>P-values of standardized Mann&#x02013;Whitney statistics</italic>.</p>
</table-wrap-foot>
</table-wrap>
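The between-group comparison in Table 4 relies on the standardized Mann&#x02013;Whitney statistic (normal approximation). A minimal stdlib-only Python sketch, using hypothetical latency values rather than the study data, might look like:

```python
from statistics import NormalDist


def mann_whitney_z(a, b):
    """Two-sided Mann-Whitney test via the standardized (normal
    approximation) z statistic, using midranks for tied values.
    Tie correction of the variance is omitted for simplicity."""
    combined = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1] == combined[i]:
            j += 1
        # average of the 1-based ranks i+1 .. j+1 (midrank for ties)
        ranks[combined[i]] = (i + j + 2) / 2
        i = j + 1
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[x] for x in a)         # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2           # Mann-Whitney U for sample a
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u1 - mu) / sigma                 # standardized statistic
    p = 2 * NormalDist().cdf(-abs(z))     # two-sided p-value
    return z, p


# Hypothetical NS100 latencies (ms), roughly in the ranges reported:
hearing = [160.2, 165.1, 170.4, 168.0, 162.3, 166.9, 171.2, 163.8]
hearing_loss = [110.5, 130.2, 125.0, 118.7, 140.1, 122.4, 128.9, 115.3]
z, p = mann_whitney_z(hearing, hearing_loss)
```

With the two groups fully separated, as here, the standardized statistic is large and the two-sided p-value falls well below the 0.05 threshold used in Table 4.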
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>An important result from our study is the existence of two differentiated negative ERPs, labeled NS100 and NS300, distinguished by their latency and by the cerebral activation produced prior to the motor response (button press), with no significant differences between the two groups. Negative ERPs close to the motor response, present in anticipatory processes, reflect the emotional and cognitive processing of stimuli and neurophysiological processes related to decision making (Shibasaki et al., <xref ref-type="bibr" rid="B41">1980</xref>; Ortiz et al., <xref ref-type="bibr" rid="B29">1993</xref>; Duncan et al., <xref ref-type="bibr" rid="B15">2009</xref>; Bianchin and Angrili, <xref ref-type="bibr" rid="B4">2011</xref>). Several authors state that the negative component immediately preceding the motor response (NS100 in our case) is related to the cognitive processes needed to activate the motor programs associated with decision making and to the efficiency of executive behavior (Bianchin and Angrili, <xref ref-type="bibr" rid="B4">2011</xref>). These results indicate clear cognitive and emotional processing in both groups prior to the motor response.</p>
<p>Comparing the hearing and hearing loss groups&#x02019; brain activation under the Audio and Mute conditions, respectively, we found that both groups activate the same left temporal areas (inferior, middle, and pole), meaning that the images activate the same areas, and probably the same emotional and cognitive processes. The inferior and middle temporal areas are associated with visual and auditory processing, object and face recognition, and word meaning (Pehrs et al., <xref ref-type="bibr" rid="B34">2017</xref>). The temporal pole (TP) is part of the association cortex and is involved in multimodal sensory integration (Olson et al., <xref ref-type="bibr" rid="B28">2007</xref>; Skipper et al., <xref ref-type="bibr" rid="B43">2011</xref>), and it has been implicated in various higher-order functions of socioemotional cognition and empathic behavior (Altmann et al., <xref ref-type="bibr" rid="B2">2012</xref>; Aust et al., <xref ref-type="bibr" rid="B3">2013</xref>; Carlson et al., <xref ref-type="bibr" rid="B8">2014</xref>; Parkinson and Wheatley, <xref ref-type="bibr" rid="B31">2014</xref>). To test whether the TP acts as a semantic hub integrating complex social cues, naturalistic stimuli have been employed: empathy-evoking movie sequences depicting protagonists undergoing emotional experiences. Sharing such experiences with filmed protagonists requires continuous neural multisensory integration of visual, auditory, and contextual information (Raz et al., <xref ref-type="bibr" rid="B36">2014</xref>).</p>
<p>The difference observed between the normal hearing and hearing loss groups under the Audio and Mute conditions, respectively, regarding the NS300 maps was the greater activation of these cerebral areas in the hearing loss group. Numerous studies have associated an increase in the amplitude of evoked potentials with greater cognitive effort, more complex processing, and higher processing intensity (Moreno et al., <xref ref-type="bibr" rid="B26">2016</xref>; Romero-Rivas et al., <xref ref-type="bibr" rid="B38">2016</xref>; Sanchez-Lopez et al., <xref ref-type="bibr" rid="B40">2016</xref>).</p>
<p>In addition, the hearing loss group showed activation in the left inferior frontal pole in the Mute condition. This NS300 frontal activity could be associated with greater voluntary emotional and attentional resources and with the integration of cognitive and emotional processes when perceiving external stimuli (B&#x000F6;cker et al., <xref ref-type="bibr" rid="B6">2001</xref>; De Marino et al., <xref ref-type="bibr" rid="B12">2006</xref>).</p>
<p>Regarding the Caption condition, comparing the hearing and hearing loss groups, we found that both groups activated the right inferior and middle temporal areas (activity centered on visual processing and word recognition), but, again, the hearing loss group activated these areas with higher intensity. The shorter reaction time in the hearing loss group may be related to this higher activation.</p>
<p>As for the laterality of the processes, we found the highest NS300 activity in the left hemisphere in both groups for the Audio and Mute conditions, which seems to be related to attentional positive emotional processes. Other studies have demonstrated that the valence of emotions is represented bilaterally in the brain, with positive emotions emerging differentially in the left hemisphere and negative emotions in the right hemisphere (Silberman and Weingartner, <xref ref-type="bibr" rid="B42">1986</xref>). In contrast, NS100 showed activation in the right hemisphere in both groups for the Audio and Mute conditions.</p>
</sec>
<sec id="s5">
<title>Conclusions</title>
<p>Auditory stimuli produced a significant number of emotional reactions, in addition to those produced by the visual components, which captions did not produce. Regarding the EEG measures, two new EEG waves were found, labeled negative shifts around 100 ms (NS100) and 300 ms (NS300) prior to the emotional response onset. These waves were present with different intensities or in different areas in the normal hearing and hearing loss groups, demonstrating that:</p>
<list list-type="simple">
<list-item><label>&#x02212;</label><p>Both groups mobilized temporal perception and processing areas.</p></list-item>
<list-item><label>&#x02212;</label><p>The deaf group mobilized these areas with much higher intensity and adding voluntary cerebral resources (frontal areas) when watching muted videos without captions.</p></list-item>
<list-item><label>&#x02212;</label><p>Hearing people mobilized these resources with moderate intensity levels and activated perception integration areas (Parietal Sup) when watching the same videos with audio.</p></list-item>
<list-item><label>&#x02212;</label><p>The presence of captions increased and focused the activation of visual and word processing areas in both groups.</p></list-item>
</list>
<p>On the one hand, these results indicate that when a subject with hearing loss watches a video without captions, a higher voluntary attentional effort is needed prior to the motor response than in the normal hearing group. According to previous works, this greater energy is related to higher brain resource consumption as a consequence of hearing loss. If we add captions to the video, this attentional effort increases and focuses on visual and word processing. Other studies with deaf and hard of hearing participants showed that captions cause a shift in attention from video information to captioned information (resulting in an increased amount of information assimilated from the captions to the detriment of the information assimilated from the video sources), especially in participants with profound and severe hearing loss who rely on captions as the principal source of context information.</p>
<p>On the other hand, these results show that auditory stimuli produce a significantly greater number of emotional reactions than captions. Studies on the emotional response to music have shown that less than 2 s of music can produce basic emotional states such as happiness, sadness, or fear, and that the emotional response depends on musical parameters such as mode, tempo, register, dynamics, articulation, or timbre. These parameters have no literal translation, and thus captions transcribing information associated with non-verbal sounds generate additional attentional activity instead of an emotional response.</p>
<p>Finally, we found that, when watching captioned videos, all the cerebral activations are produced in the right hemisphere, whereas when captions are not present, both hemispheres interact. The explanation for this finding remains open for further research, since the generally assumed approach, which holds that the right hemisphere is in charge of processing negative emotions and the left hemisphere positive emotions, does not shed light on these results.</p>
<p>With this study, we want to contribute, from a scientific basis, to the enrichment of the audio-visual experience of deaf and hard of hearing people. Our main conclusion is that if we want deaf people to feel the emotion produced by sounds in a manner similar to hearing people, we need to provide other, non-verbal representations of sound, exploring stimuli other than literal captions that trigger more direct emotional reactions. There is increasing research on the correspondences between the sense of hearing and the sense of touch and, thus, on the potential of vibrotactile technologies to produce a musical experience (Vieillard et al., <xref ref-type="bibr" rid="B46">2008</xref>; Russo et al., <xref ref-type="bibr" rid="B39">2012</xref>; Hopkins et al., <xref ref-type="bibr" rid="B20">2016</xref>). Different devices have already been designed that apply tactile vibrations to the skin (fingertip, back, forefoot&#x02026;), reproducing musical features such as rhythm to enhance the musical experience of deaf and hard of hearing people. Another example of a different creative representation is the enriched captioning of the Russian film &#x0201C;Night Watch&#x0201D;<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref>, which embeds the captions very creatively into the visual composition of the scene, combining fonts, colors, animations, and other artistic resources in a radical application of the Design-for-All paradigm (Design for All Foundation, <xref ref-type="bibr" rid="B14">2019</xref>). The Design-for-All paradigm is much more than an accessibility guideline. It states that designing products with the diversity of cases the public may present in mind makes the work not only accessible to a wider group but also more consistent, homogeneous, and even enhanced for everyone, independently of eventual disabilities. Thus, we encourage creative designers, art creators, and other professionals working with the public to integrate this paradigm into their works.</p>
</sec>
<sec id="s6">
<title>Data Availability Statement</title>
<p>The datasets generated for this study are available on request to the corresponding author.</p>
</sec>
<sec id="s7">
<title>Ethics Statement</title>
<p>The studies involving human participants were reviewed and approved by Carlos III University of Madrid Ethics Committee. The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="s8">
<title>Author Contributions</title>
<p>TO and PR contributed to all aspects of the work. BR and JS-P contributed to the conception and design of the study. ML contributed to the data analysis, discussion, and article writing. All authors contributed to manuscript revision, read and approved the submitted version.</p>
</sec>
<sec id="s9">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>We would like to thank the volunteers who participated in the experiment.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="web"><person-group person-group-type="author"><collab>AENOR</collab></person-group>. (<year>2012</year>). <article-title>UNE 153010:2012. Subtitulado para personas sordas y personas con discapacidad auditiva</article-title>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.une.org/encuentra-tu-norma/busca-tu-norma/norma/?c=N0049426">https://www.une.org/encuentra-tu-norma/busca-tu-norma/norma/?c=N0049426</ext-link>. Accessed February 4, 2020.</citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Altmann</surname> <given-names>U.</given-names></name> <name><surname>Bohrn</surname> <given-names>I. C.</given-names></name> <name><surname>Lubrich</surname> <given-names>O.</given-names></name> <name><surname>Menninghaus</surname> <given-names>W.</given-names></name> <name><surname>Jacobs</surname> <given-names>A. M.</given-names></name></person-group> (<year>2012</year>). <article-title>The power of emotional valence-from cognitive to affective processes in reading</article-title>. <source>Front. Hum. Neurosci.</source> <volume>6</volume>:<fpage>192</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2012.00192</pub-id><pub-id pub-id-type="pmid">22754519</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aust</surname> <given-names>S.</given-names></name> <name><surname>Alkan H&#x000E4;rtwig</surname> <given-names>E.</given-names></name> <name><surname>Koelsch</surname> <given-names>S.</given-names></name> <name><surname>Heekeren</surname> <given-names>H. R.</given-names></name> <name><surname>Heuse</surname> <given-names>I.</given-names></name> <name><surname>Bajbouj</surname> <given-names>M.</given-names></name></person-group> (<year>2013</year>). <article-title>How emotional abilities modulate the influence of early life stress on hippocampal functioning</article-title>. <source>Soc. Cogn. Affect. Neurosci.</source> <volume>9</volume>, <fpage>1038</fpage>&#x02013;<lpage>1045</lpage>. <pub-id pub-id-type="doi">10.1093/scan/nst078</pub-id><pub-id pub-id-type="pmid">23685776</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bianchin</surname> <given-names>M.</given-names></name> <name><surname>Angrili</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Decision preceding negativity in the iowa gambling task: an ERP study</article-title>. <source>Brain Cogn.</source> <volume>75</volume>, <fpage>273</fpage>&#x02013;<lpage>280</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandc.2011.01.005</pub-id><pub-id pub-id-type="pmid">21306813</pub-id></citation></ref>
<ref id="B5"><citation citation-type="web"><person-group person-group-type="author"><collab>BIAP</collab></person-group>. (<year>1996</year>). <article-title>BIAP recommendation 02/1: audiometric classification of hearing impairments</article-title>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.biap.org/en/recommandations/recommendations/tc-02-classification/213-rec-02-1-en-audiometric-classification-of-hearing-impairments/file">https://www.biap.org/en/recommandations/recommendations/tc-02-classification/213-rec-02-1-en-audiometric-classification-of-hearing-impairments/file</ext-link>. Accessed February 4, 2020.</citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>B&#x000F6;cker</surname> <given-names>K. B. E.</given-names></name> <name><surname>Bass</surname> <given-names>J. M. P.</given-names></name> <name><surname>Kenemas</surname> <given-names>J. L.</given-names></name> <name><surname>Verbaten</surname> <given-names>M. N.</given-names></name></person-group> (<year>2001</year>). <article-title>Stimulus-preceding negativity induced by fear: a manifestation of affective anticipation</article-title>. <source>Int. J. Psychophysiol.</source> <volume>43</volume>, <fpage>77</fpage>&#x02013;<lpage>90</lpage>. <pub-id pub-id-type="doi">10.1016/s0167-8760(01)00180-5</pub-id><pub-id pub-id-type="pmid">11742686</pub-id></citation></ref>
<ref id="B500"><citation citation-type="web"><person-group person-group-type="author"><collab>BOE</collab></person-group>. (<year>2010</year>). <article-title>Ley 7/2010, de 31 de marzo, General de la Comunicaci&#x000F3;n Audiovisual</article-title>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.boe.es/buscar/pdf/2010/BOE-A-2010-5292-consolidado.pdf">https://www.boe.es/buscar/pdf/2010/BOE-A-2010-5292-consolidado.pdf</ext-link>. Accessed February 4, 2020.</citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brunia</surname> <given-names>C. H.</given-names></name> <name><surname>van Boxtel</surname> <given-names>G. J.</given-names></name></person-group> (<year>2001</year>). <article-title>Wait and see</article-title>. <source>Int. J. Psychophysiol.</source> <volume>43</volume>, <fpage>59</fpage>&#x02013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1016/s0167-8760(01)00179-9</pub-id><pub-id pub-id-type="pmid">11742685</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Carlson</surname> <given-names>T. A.</given-names></name> <name><surname>Simmons</surname> <given-names>R. A.</given-names></name> <name><surname>Kriegeskorte</surname> <given-names>N.</given-names></name> <name><surname>Slevc</surname> <given-names>L. R.</given-names></name></person-group> (<year>2014</year>). <article-title>The emergence of semantic meaning in the ventral temporal pathway</article-title>. <source>J. Cogn. Neurosci.</source> <volume>26</volume>, <fpage>120</fpage>&#x02013;<lpage>131</lpage>. <pub-id pub-id-type="doi">10.1162/jocn_a_00458</pub-id><pub-id pub-id-type="pmid">23915056</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Debevc</surname> <given-names>M.</given-names></name> <name><surname>Milo&#x00161;evic</surname> <given-names>D.</given-names></name> <name><surname>Ko&#x0017E;uh</surname> <given-names>I.</given-names></name></person-group> (<year>2015</year>). <article-title>A comparison of comprehension processes in sign language interpreter videos with or without captions</article-title>. <source>PLoS One</source> <volume>10</volume>:<fpage>e0127577</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0127577</pub-id><pub-id pub-id-type="pmid">26010899</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Marino</surname> <given-names>B.</given-names></name> <name><surname>Kumaran</surname> <given-names>D.</given-names></name> <name><surname>Seymour</surname> <given-names>B.</given-names></name> <name><surname>Dolan</surname> <given-names>R. J.</given-names></name></person-group> (<year>2006</year>). <article-title>Frames, biases and rational decision-making in the human brain</article-title>. <source>Science</source> <volume>313</volume>, <fpage>684</fpage>&#x02013;<lpage>687</lpage>. <pub-id pub-id-type="doi">10.1126/science.1128356</pub-id><pub-id pub-id-type="pmid">16888142</pub-id></citation></ref>
<ref id="B14"><citation citation-type="web"><person-group person-group-type="author"><collab>Design for All Foundation</collab></person-group>. (<year>2019</year>). Design for all is design tailored to human diversity. Available online at: <ext-link ext-link-type="uri" xlink:href="http://designforall.org/design.php">http://designforall.org/design.php</ext-link>. Accessed February 4, 2020.</citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Duncan</surname> <given-names>C. C.</given-names></name> <name><surname>Barry</surname> <given-names>R. J.</given-names></name> <name><surname>Connoly</surname> <given-names>J. F.</given-names></name> <name><surname>Fischer</surname> <given-names>C.</given-names></name> <name><surname>Michie</surname> <given-names>P. T.</given-names></name> <name><surname>N&#x000E4;&#x000E4;t&#x000E4;nen</surname> <given-names>R.</given-names></name> <etal/></person-group>. (<year>2009</year>). <article-title>Event-related potentials in clinical research: guidelines for eliciting, recording and quantifying mismatch negativity, P300 and N400</article-title>. <source>Clin. Neurophysiol.</source> <volume>120</volume>, <fpage>1883</fpage>&#x02013;<lpage>1908</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2009.07.045</pub-id><pub-id pub-id-type="pmid">19796989</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>D&#x02019;Ydewalle</surname> <given-names>G.</given-names></name> <name><surname>De Bruycker</surname> <given-names>W.</given-names></name></person-group> (<year>2007</year>). <article-title>Eye movements of children and adults while reading television subtitles</article-title>. <source>Eur. Psychol.</source> <volume>12</volume>, <fpage>196</fpage>&#x02013;<lpage>205</lpage>. <pub-id pub-id-type="doi">10.1027/1016-9040.12.3.196</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>D&#x02019;Ydewalle</surname> <given-names>G.</given-names></name> <name><surname>Praet</surname> <given-names>C.</given-names></name> <name><surname>Verfaillie</surname> <given-names>K.</given-names></name> <name><surname>Van Rensbergen</surname> <given-names>J.</given-names></name></person-group> (<year>1991</year>). <article-title>Watching subtitled television automatic reading behaviour</article-title>. <source>Commun. Res.</source> <volume>18</volume>, <fpage>650</fpage>&#x02013;<lpage>666</lpage>. <pub-id pub-id-type="doi">10.1177/009365091018005005</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>D&#x02019;Ydewalle</surname> <given-names>G.</given-names></name> <name><surname>Van Rensbergen</surname> <given-names>J.</given-names></name></person-group> (<year>1989</year>). <article-title>13 developmental studies of text-picture interactions in the perception of animated cartoons with text</article-title>. <source>Adv. Psychol.</source> <volume>58</volume>, <fpage>233</fpage>&#x02013;<lpage>248</lpage>. <pub-id pub-id-type="doi">10.1016/s0166-4115(08)62157-3</pub-id></citation></ref>
<ref id="B17"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Gabrielsson</surname> <given-names>A.</given-names></name></person-group> (<year>2001</year>). &#x0201C;<article-title>Emotions in strong experiences with music</article-title>,&#x0201D; in <source>Music and Emotion: Theory and Research</source>, eds <person-group person-group-type="editor"><name><surname>Juslin</surname> <given-names>P.</given-names></name> <name><surname>Sloboda</surname> <given-names>J. A.</given-names></name></person-group> (<publisher-loc>Oxford, UK</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>), <fpage>431</fpage>&#x02013;<lpage>449</lpage>.</citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Green</surname> <given-names>M. C.</given-names></name> <name><surname>Brock</surname> <given-names>T. C.</given-names></name></person-group> (<year>2000</year>). <article-title>The role of transportation in the persuasiveness of public narratives</article-title>. <source>J. Pers. Soc. Psychol.</source> <volume>79</volume>, <fpage>701</fpage>&#x02013;<lpage>721</lpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.79.5.701</pub-id><pub-id pub-id-type="pmid">11079236</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gulliver</surname> <given-names>S.</given-names></name> <name><surname>Ghinea</surname> <given-names>G.</given-names></name></person-group> (<year>2003</year>). <article-title>How level and type of deafness affect user perception of multimedia video clips</article-title>. <source>UAIS</source> <volume>2</volume>:<fpage>374</fpage>. <pub-id pub-id-type="doi">10.1007/s10209-003-0067-5</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hopkins</surname> <given-names>C.</given-names></name> <name><surname>Mate-Cid</surname> <given-names>S.</given-names></name> <name><surname>Fulford</surname> <given-names>R.</given-names></name> <name><surname>Seiffert</surname> <given-names>G.</given-names></name> <name><surname>Ginsborg</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>Vibrotactile presentation of musical notes to the glabrous skin for adults with normal hearing or a hearing impairment: thresholds, dynamic range and high-frequency perception</article-title>. <source>PLoS One</source> <volume>11</volume>:<fpage>e0155807</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0155807</pub-id><pub-id pub-id-type="pmid">29958604</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Imbir</surname> <given-names>K.</given-names></name> <name><surname>Jarymowicz</surname> <given-names>M.</given-names></name> <name><surname>Spustek</surname> <given-names>T.</given-names></name> <name><surname>K&#x000FA;s</surname> <given-names>R.</given-names></name> <name><surname>&#x0017B;ygierewicz</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>Origin of emotion effects on ERP correlates of emotional word processing: the emotion duality approach</article-title>. <source>PLoS One</source> <volume>10</volume>:<fpage>e0126129</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0126129</pub-id><pub-id pub-id-type="pmid">25955719</pub-id></citation></ref>
<ref id="B22"><citation citation-type="web"><person-group person-group-type="author"><collab>INE</collab></person-group>. (<year>2008</year>). <source>Panor&#x000E1;mica de la Discapacidad en Espa&#x000F1;a</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://www.ine.es/revistas/cifraine/1009.pdf">http://www.ine.es/revistas/cifraine/1009.pdf</ext-link>. Accessed February 4, 2020.</citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>T.</given-names></name> <name><surname>Biocca</surname> <given-names>F.</given-names></name></person-group> (<year>1997</year>). <article-title>Telepresence via television: two dimensions of telepresence may have different connections to memory and persuasion</article-title>. <source>J. Comput. Med. Commun.</source> <volume>3</volume>:<fpage>2</fpage>. <pub-id pub-id-type="doi">10.1111/j.1083-6101.1997.tb00073.x</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moreno</surname> <given-names>E. M.</given-names></name> <name><surname>Casado</surname> <given-names>P.</given-names></name> <name><surname>Mart&#x000ED;n-Loeches</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Tell me sweet little lies: an event-related potentials study on the processing of social lies</article-title>. <source>Cogn. Affect. Behav. Neurosci.</source> <volume>16</volume>, <fpage>616</fpage>&#x02013;<lpage>625</lpage>. <pub-id pub-id-type="doi">10.3758/s13415-016-0418-3</pub-id><pub-id pub-id-type="pmid">27007770</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Olofsson</surname> <given-names>J. K.</given-names></name> <name><surname>Nordin</surname> <given-names>S.</given-names></name> <name><surname>Sequeira</surname> <given-names>H.</given-names></name> <name><surname>Posich</surname> <given-names>J.</given-names></name></person-group> (<year>2008</year>). <article-title>Affective picture processing: an integrative review of ERP findings</article-title>. <source>Biol. Psychol.</source> <volume>77</volume>, <fpage>247</fpage>&#x02013;<lpage>265</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2007.11.006</pub-id><pub-id pub-id-type="pmid">18164800</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Olson</surname> <given-names>I. R.</given-names></name> <name><surname>Plotzker</surname> <given-names>A.</given-names></name> <name><surname>Ezzyat</surname> <given-names>Y.</given-names></name></person-group> (<year>2007</year>). <article-title>The enigmatic temporal pole: a review of findings on social and emotional processing</article-title>. <source>Brain</source> <volume>130</volume>, <fpage>1718</fpage>&#x02013;<lpage>1731</lpage>. <pub-id pub-id-type="doi">10.1093/brain/awm052</pub-id><pub-id pub-id-type="pmid">17392317</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ortiz</surname> <given-names>T.</given-names></name> <name><surname>Goodin</surname> <given-names>D. S.</given-names></name> <name><surname>Aminoff</surname> <given-names>M. J.</given-names></name></person-group> (<year>1993</year>). <article-title>Neural processing in a three-choice reaction-time task: a study using cerebral evoked-potentials</article-title>. <source>J. Neurophysiol.</source> <volume>69</volume>, <fpage>1499</fpage>&#x02013;<lpage>1512</lpage>. <pub-id pub-id-type="doi">10.1152/jn.1993.69.5.1499</pub-id><pub-id pub-id-type="pmid">8509828</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Paquette</surname> <given-names>S.</given-names></name> <name><surname>Peretz</surname> <given-names>I.</given-names></name> <name><surname>Belin</surname> <given-names>P.</given-names></name></person-group> (<year>2013</year>). <article-title>The &#x0201C;musical emotional bursts&#x0201D;: a validated set of musical affect bursts to investigate auditory affective processing</article-title>. <source>Front. Psychol.</source> <volume>4</volume>:<fpage>509</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2013.00509</pub-id><pub-id pub-id-type="pmid">23964255</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Parkinson</surname> <given-names>C.</given-names></name> <name><surname>Wheatley</surname> <given-names>T.</given-names></name></person-group> (<year>2014</year>). <article-title>Relating anatomical and social connectivity: white matter microstructure predicts emotional empathy</article-title>. <source>Cereb. Cortex</source> <volume>24</volume>, <fpage>614</fpage>&#x02013;<lpage>625</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhs347</pub-id><pub-id pub-id-type="pmid">23162046</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pedersen</surname> <given-names>J. R.</given-names></name> <name><surname>Johannsen</surname> <given-names>P.</given-names></name> <name><surname>Back</surname> <given-names>C. K.</given-names></name> <name><surname>Kofoed</surname> <given-names>B.</given-names></name> <name><surname>Saermark</surname> <given-names>K.</given-names></name> <name><surname>Gjedde</surname> <given-names>A.</given-names></name></person-group> (<year>1998</year>). <article-title>Origin of human motor readiness field linked to left middle frontal gyrus by MEG and PET</article-title>. <source>NeuroImage</source> <volume>8</volume>, <fpage>214</fpage>&#x02013;<lpage>220</lpage>. <pub-id pub-id-type="doi">10.1006/nimg.1998.0362</pub-id><pub-id pub-id-type="pmid">9740763</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pehrs</surname> <given-names>C.</given-names></name> <name><surname>Deserno</surname> <given-names>L.</given-names></name> <name><surname>Bakels</surname> <given-names>J. H.</given-names></name> <name><surname>Schlochtermeier</surname> <given-names>L. H.</given-names></name> <name><surname>Kappelhoff</surname> <given-names>H.</given-names></name> <name><surname>Jacobs</surname> <given-names>A. M.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>How music alters a kiss: superior temporal gyrus controls fusiform-amygdalar effective connectivity</article-title>. <source>Soc. Cogn. Affect. Neurosci.</source> <volume>9</volume>, <fpage>1770</fpage>&#x02013;<lpage>1778</lpage>. <pub-id pub-id-type="doi">10.1093/scan/nst169</pub-id><pub-id pub-id-type="pmid">24298171</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pehrs</surname> <given-names>C.</given-names></name> <name><surname>Zaki</surname> <given-names>J.</given-names></name> <name><surname>Schlochtermeier</surname> <given-names>L. H.</given-names></name> <name><surname>Jacobs</surname> <given-names>A. M.</given-names></name> <name><surname>Kuchinke</surname> <given-names>L.</given-names></name> <name><surname>Koelsch</surname> <given-names>S.</given-names></name></person-group> (<year>2017</year>). <article-title>The temporal pole top-down modulates the ventral visual stream during social cognition</article-title>. <source>Cereb. Cortex</source> <volume>27</volume>, <fpage>777</fpage>&#x02013;<lpage>792</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhv226</pub-id><pub-id pub-id-type="pmid">26604273</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perego</surname> <given-names>E.</given-names></name> <name><surname>Del Missier</surname> <given-names>F.</given-names></name> <name><surname>Bottiroli</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Dubbing versus subtitling in young and older adults: cognitive and evaluative aspects</article-title>. <source>Perspectives</source> <volume>23</volume>, <fpage>1</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1080/0907676x.2014.912343</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Raz</surname> <given-names>G.</given-names></name> <name><surname>Jacob</surname> <given-names>Y.</given-names></name> <name><surname>Gonen</surname> <given-names>T.</given-names></name> <name><surname>Winetraub</surname> <given-names>Y.</given-names></name> <name><surname>Flash</surname> <given-names>T.</given-names></name> <name><surname>Soreq</surname> <given-names>E.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Cry for her or cry with her: context-dependent dissociation of two modes of cinematic empathy in network cohesion dynamics</article-title>. <source>Soc. Cogn. Affect. Neurosci.</source> <volume>9</volume>, <fpage>30</fpage>&#x02013;<lpage>38</lpage>. <pub-id pub-id-type="doi">10.1093/scan/nst052</pub-id><pub-id pub-id-type="pmid">23615766</pub-id></citation></ref>
<ref id="B37"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Rheinberg</surname> <given-names>F.</given-names></name> <name><surname>Vollmeyer</surname> <given-names>R.</given-names></name> <name><surname>Engeser</surname> <given-names>S.</given-names></name></person-group> (<year>2003</year>). &#x0201C;<article-title>Die Erfassung des Flow-Erlebens</article-title>,&#x0201D; in <source>Diagnostik von Motivation und Selbstkonzept</source>, eds <person-group person-group-type="editor"><name><surname>Stiensmeier-Pelster</surname> <given-names>J.</given-names></name> <name><surname>Rheinberg</surname> <given-names>F.</given-names></name></person-group> (<publisher-loc>G&#x000F6;ttingen</publisher-loc>: <publisher-name>Hogrefe</publisher-name>), <fpage>261</fpage>&#x02013;<lpage>279</lpage>.</citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Romero-Rivas</surname> <given-names>C.</given-names></name> <name><surname>Martin</surname> <given-names>C. D.</given-names></name> <name><surname>Costa</surname> <given-names>A.</given-names></name></person-group> (<year>2016</year>). <article-title>Foreign-accented speech modulates linguistic anticipatory processes</article-title>. <source>Neuropsychologia</source> <volume>85</volume>, <fpage>245</fpage>&#x02013;<lpage>255</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2016.03.022</pub-id><pub-id pub-id-type="pmid">27020137</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Russo</surname> <given-names>F. A.</given-names></name> <name><surname>Ammirante</surname> <given-names>P.</given-names></name> <name><surname>Fels</surname> <given-names>D. I.</given-names></name></person-group> (<year>2012</year>). <article-title>Vibrotactile discrimination of musical timbre</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>38</volume>, <fpage>822</fpage>&#x02013;<lpage>826</lpage>. <pub-id pub-id-type="doi">10.1037/a0029046</pub-id><pub-id pub-id-type="pmid">24029098</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sanchez-Lopez</surname> <given-names>J.</given-names></name> <name><surname>Silva-Pereyra</surname> <given-names>J.</given-names></name> <name><surname>Fernandez</surname> <given-names>T.</given-names></name></person-group> (<year>2016</year>). <article-title>Sustained attention in skilled and novice martial arts athletes: a study of event-related potentials and current sources</article-title>. <source>PeerJ</source> <volume>4</volume>:<fpage>e1614</fpage>. <pub-id pub-id-type="doi">10.7717/peerj.1614</pub-id><pub-id pub-id-type="pmid">26855865</pub-id></citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shibasaki</surname> <given-names>H.</given-names></name> <name><surname>Barrett</surname> <given-names>G.</given-names></name> <name><surname>Halliday</surname> <given-names>E.</given-names></name> <name><surname>Halliday</surname> <given-names>A. M.</given-names></name></person-group> (<year>1980</year>). <article-title>Components of the movement-related cortical potential and their scalp topography</article-title>. <source>Electroencephalogr. Clin. Neurophysiol.</source> <volume>49</volume>, <fpage>213</fpage>&#x02013;<lpage>226</lpage>. <pub-id pub-id-type="doi">10.1016/0013-4694(80)90216-3</pub-id><pub-id pub-id-type="pmid">6158398</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Silberman</surname> <given-names>E. K.</given-names></name> <name><surname>Weingartner</surname> <given-names>H.</given-names></name></person-group> (<year>1986</year>). <article-title>Hemispheric lateralization of functions related to emotion</article-title>. <source>Brain Cogn.</source> <volume>5</volume>, <fpage>322</fpage>&#x02013;<lpage>353</lpage>. <pub-id pub-id-type="doi">10.1016/0278-2626(86)90035-7</pub-id><pub-id pub-id-type="pmid">3530287</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Skipper</surname> <given-names>L. M.</given-names></name> <name><surname>Ross</surname> <given-names>L. A.</given-names></name> <name><surname>Olson</surname> <given-names>I. R.</given-names></name></person-group> (<year>2011</year>). <article-title>Sensory and semantic category subdivisions within the anterior temporal lobes</article-title>. <source>Neuropsychologia</source> <volume>49</volume>, <fpage>3419</fpage>&#x02013;<lpage>3429</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2011.07.033</pub-id><pub-id pub-id-type="pmid">21889520</pub-id></citation></ref>
<ref id="B45"><citation citation-type="web"><person-group person-group-type="author"><collab>UN</collab></person-group>. (<year>2006</year>). <article-title>Convention on the rights of persons with disabilities (CRPD)</article-title>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities.html">https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities.html</ext-link>. Accessed February 4, 2020.</citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vieillard</surname> <given-names>S.</given-names></name> <name><surname>Peretz</surname> <given-names>I.</given-names></name> <name><surname>Gosselin</surname> <given-names>N.</given-names></name> <name><surname>Khalfa</surname> <given-names>S.</given-names></name></person-group> (<year>2008</year>). <article-title>Happy, sad, scary and peaceful musical excerpts for research on emotions</article-title>. <source>Cogn. Emot.</source> <volume>22</volume>, <fpage>720</fpage>&#x02013;<lpage>752</lpage>. <pub-id pub-id-type="doi">10.1080/02699930701503567</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wissmath</surname> <given-names>B.</given-names></name> <name><surname>Weibel</surname> <given-names>D.</given-names></name> <name><surname>Groner</surname> <given-names>R.</given-names></name></person-group> (<year>2009</year>). <article-title>Dubbing or subtitling? Effects on spatial presence, transportation, flow and enjoyment</article-title>. <source>J. Media Psychol.</source> <volume>21</volume>, <fpage>114</fpage>&#x02013;<lpage>125</lpage>. <pub-id pub-id-type="doi">10.1027/1864-1105.21.3.114</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn id="fn0001"><p><sup>1</sup><ext-link ext-link-type="uri" xlink:href="https://www.filmaffinity.com/es/film269079.html">https://www.filmaffinity.com/es/film269079.html</ext-link></p></fn>
<fn id="fn0002"><p><sup>2</sup><ext-link ext-link-type="uri" xlink:href="https://www.filmaffinity.com/en/film670027.html">https://www.filmaffinity.com/en/film670027.html</ext-link></p></fn>
</fn-group>
</back>
</article>
