Event Abstract

Correlates of facial expressions in the primary visual cortex

  • 1 University of Algarve, Vision Laboratory, Institute for Systems and Robotics, Portugal

Face detection and recognition should be complemented by recognition of facial expression, for example in social robots that must react to human emotions. Our framework is based on two multi-scale representations in cortical area V1: keypoints at the eyes, nose and mouth are grouped for face detection [1]; lines and edges provide information for face recognition [2]. We assume that keypoints play a key role in the where system, while lines and edges are exploited in the what system. This dichotomy, together with coarse-to-fine-scale processing, yields translation and rotation invariance, refining object categorisations until recognition, assuming that objects are represented by normalised templates in memory. Faces are processed in the following way: (1) Keypoints at coarse scales are used to translate and rotate the entire input face, using a generic face template with neutral expression. (2) At medium scales, cells with dendritic fields at the corners of the mouth and eyebrows of the generic template collect evidence for expressions, using the line and edge information of the (globally normalised) input face at those scales. Large structures, including the mouth and eyebrows, are further normalised using keypoints, and first categorisations (gender, race) are obtained using lines and edges. (3) The latter process continues until the finest scale, with normalisation of the expression to neutral for final face recognition. The advantage of this framework is that only one frontal view of a person's face with neutral expression must be stored in memory. This procedure resulted from an analysis of the multi-scale line/edge representation of normalised faces with seven expressions: neutral, anger, disgust, fear, happy, sad and surprise. Following [3], where Action Units (AUs) are related to facial muscles, we analysed the line/edge representation in all AUs.
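A minimal sketch of the coarse-to-fine control flow of steps (1)-(3) is given below. It assumes a plain image-array representation; the keypoint detector, line/edge operator, template keypoints and scale values are illustrative stubs, not the V1 model of [1,2], and rotation is omitted for brevity.

```python
# Sketch of the coarse-to-fine processing in steps (1)-(3); all stubs
# are hypothetical placeholders, not the authors' V1 implementation.
import numpy as np

TEMPLATE_KPS = np.array([[35, 30], [35, 70], [60, 50], [80, 50]], float)  # (row, col)

def detect_keypoints(face, scale):
    # Stub: would return V1 keypoints (eyes, nose, mouth) at this scale;
    # here we pretend the input face is shifted relative to the template.
    return TEMPLATE_KPS + np.array([5.0, -3.0])

def line_edge_map(face, scale):
    # Stub: would return the multi-scale line/edge representation;
    # a simple gradient magnitude stands in for it here.
    gy, gx = np.gradient(face.astype(float))
    return np.hypot(gy, gx)

def normalise(face, kps):
    # Step (1): global translation onto the neutral generic template.
    dr, dc = np.round((TEMPLATE_KPS - kps).mean(axis=0)).astype(int)
    return np.roll(np.roll(face, dr, axis=0), dc, axis=1)

def process_face(face, scales=(32, 16, 8, 4)):  # coarse -> fine
    for scale in scales:
        face = normalise(face, detect_keypoints(face, scale))
        edges = line_edge_map(face, scale)
        # Step (2) would pool `edges` at mouth/eyebrow corners for expression;
        # step (3) continues to the finest scale for categorisation/recognition.
    return face, edges

face, edges = process_face(np.random.default_rng(1).random((100, 100)))
```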
We found that the positions of lines and edges at one medium scale, and only at the AUs covering the mouth and eyebrows, relative to their positions in the neutral face at the same scale, suffice to extract the correct expression. Moreover, these AUs can be implemented by six groups of vertically aligned summation cells, each with a dendritic field size related to that scale (the sizes of simple and complex cells), covering a range of positions above and below the corners of the mouth and eyebrows in the neutral face. By detecting the summation cell with maximum response in each of the six groups, it is also possible to estimate the degree of the expression, from mild to extreme. This work is in progress, since the method must still be tested on large databases with many faces and their natural variations. Some expressions detected at one medium scale may need to be validated at one or more finer scales. Nevertheless, in this framework detection of expression occurs before face recognition, which may be an advantage in the development of social robots.
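The winner-take-all readout over the six cell groups can be sketched as follows, assuming a hypothetical medium-scale line/edge activity map; the neutral corner positions, offset range and square dendritic fields are illustrative choices, not the values used in our model.

```python
# Six columns of summation cells above/below the neutral corner positions;
# the maximally responding cell per column gives a signed displacement
# whose sign pattern indicates the expression and whose size its degree.
import numpy as np

# Hypothetical neutral-face corner positions (col, row) at one medium scale:
# left/right mouth corner, left/right inner eyebrow, left/right outer eyebrow.
NEUTRAL_CORNERS = [(40, 70), (60, 70), (38, 40), (62, 40), (30, 42), (70, 42)]

def winning_offsets(line_edge_map, corners, offsets=tuple(range(-6, 7)), field=3):
    """For each corner, a vertical group of summation cells pools line/edge
    activity in a (2*field+1)^2 dendritic field at each offset above/below
    the neutral position; returns the winning offset per group."""
    winners = []
    for (x, y) in corners:
        responses = []
        for dy in offsets:
            patch = line_edge_map[y + dy - field : y + dy + field + 1,
                                  x - field : x + field + 1]
            responses.append(patch.sum())
        winners.append(offsets[int(np.argmax(responses))])
    return winners  # e.g. raised mouth corners + raised brows -> surprise

# Toy usage with random activity standing in for the line/edge map:
activity = np.random.default_rng(0).random((100, 100))
print(winning_offsets(activity, NEUTRAL_CORNERS))
```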

Acknowledgements: FCT funding of ISR-IST with POS-Conhecimento and FEDER; FCT projects PTDC-PSI-67381-2006 and PTDC-EIA-73633-2006.

References

1. Rodrigues and du Buf (2006). BioSystems 86, 75-90.

2. Rodrigues and du Buf (2009). BioSystems 95, 206-226.

3. Ekman and Friesen (1978). Facial Action Coding System (FACS). Consulting Psychologists Press, Palo Alto.

Conference: Bernstein Conference on Computational Neuroscience, Frankfurt am Main, Germany, 30 Sep - 2 Oct, 2009.

Presentation Type: Poster Presentation

Topic: Information processing in neurons and networks

Citation: Sousa R, Rodrigues J and Du-Buf H (2009). Correlates of facial expressions in the primary visual cortex. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience. doi: 10.3389/conf.neuro.10.2009.14.076

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 26 Aug 2009; Published Online: 26 Aug 2009.

* Correspondence: Ricardo Sousa, University of Algarve, Vision Laboratory, Institute for Systems and Robotics, Faro, Portugal, ricardo_sousa@hotmail.com