AUTHOR=Mastrantuono Eliana , Saldaña David , Rodríguez-Ortiz Isabel R. TITLE=An Eye Tracking Study on the Perception and Comprehension of Unimodal and Bimodal Linguistic Inputs by Deaf Adolescents JOURNAL=Frontiers in Psychology VOLUME=Volume 8 - 2017 YEAR=2017 URL=https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2017.01044 DOI=10.3389/fpsyg.2017.01044 ISSN=1664-1078 ABSTRACT=An eye tracking experiment explored the gaze behaviour of deaf individuals when perceiving language in spoken language only, in sign language only, and in sign-supported speech. Participants were deaf (n = 25) and hearing (n = 25) Spanish adolescents. Deaf students were either prelingually, profoundly deaf individuals who had used cochlear implants since age 5 or earlier, or prelingually, profoundly deaf native signers with deaf parents. The effectiveness of sign-supported speech has rarely been tested within the same group of children at the level of discourse comprehension. Here, video-recorded texts, including spatial descriptions, were alternately transmitted in spoken language, sign language, and sign-supported speech. These communicative systems were tested for their capacity to bring comprehension in deaf participants to the level achieved by hearing participants with spoken language. Within-group analyses of deaf participants tested whether the bimodal linguistic input of sign-supported speech favoured discourse comprehension compared with the unimodal languages. Deaf participants with cochlear implants achieved comprehension equal to that of hearing controls in all communicative systems, while deaf native signers without cochlear implants achieved comprehension equal to that of hearing participants when tested in their native sign language. Comprehension of sign-supported speech was not higher than comprehension of spoken language, even when spatial information was communicated. Eye movements of deaf and hearing participants were tracked, and data on dwell times spent looking at the face or body area of the sign model were analysed.
Within-group analyses focused on differences between native and non-native signers. Dwell times of hearing participants were distributed equally across the upper and lower areas of the face, while deaf participants looked mainly at the mouth area; this could enable them to obtain information from mouthings in sign language and from lipreading in sign-supported speech and spoken language. Few fixations were directed towards the signs, although these were more frequent when spatial language was transmitted. Both native and non-native signers looked mainly at the face when perceiving sign language, although non-native signers looked significantly more at the body than native signers did. This distribution of gaze fixations suggests that deaf individuals, particularly native signers, mainly perceived signs through peripheral vision.