AUTHOR=Onal Ertugrul Itir, Yang Le, Jeni László A., Cohn Jeffrey F. TITLE=D-PAttNet: Dynamic Patch-Attentive Deep Network for Action Unit Detection JOURNAL=Frontiers in Computer Science VOLUME=1 YEAR=2019 URL=https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2019.00011 DOI=10.3389/fcomp.2019.00011 ISSN=2624-9898 ABSTRACT=

Facial action units (AUs) relate to specific local facial regions. Recent efforts in automated AU detection have focused on learning facial patch representations to detect specific AUs. These efforts have encountered three hurdles. First, they implicitly assume that facial patches are robust to head rotation; yet non-frontal rotation is common. Second, mappings between AUs and patches are defined a priori, which ignores co-occurrences among AUs. And third, the dynamics of AUs are either ignored or modeled sequentially rather than simultaneously as in human perception. Inspired by recent advances in human perception, we propose a dynamic patch-attentive deep network, called D-PAttNet, for AU detection that (i) controls for 3D head and face rotation, (ii) learns mappings of patches to AUs, and (iii) models spatiotemporal dynamics. The D-PAttNet approach significantly improves upon the existing state of the art.
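To make the patch-attention and temporal-modeling ideas in the abstract concrete, the sketch below shows a minimal PyTorch module that encodes facial patches with a shared CNN, learns soft patch-to-AU attention weights instead of fixing them a priori, and models dynamics across frames with a recurrent layer. This is an illustrative assumption of one possible design, not the authors' D-PAttNet implementation; all layer sizes, the GRU choice, and the patch count are hypothetical.

```python
import torch
import torch.nn as nn

class PatchAttentiveAUDetector(nn.Module):
    """Illustrative patch-attentive AU detector (not the published D-PAttNet).

    Each facial patch is encoded by a shared CNN, a learned attention matrix
    maps patches to AUs, and a GRU models temporal dynamics across frames.
    """

    def __init__(self, num_patches=9, patch_dim=64, num_aus=12, hidden=128):
        super().__init__()
        # Shared per-patch encoder (hypothetical: small CNN on 32x32 crops).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, patch_dim), nn.ReLU(),
        )
        # Learned patch-to-AU attention: one weight per (AU, patch) pair,
        # normalized over patches, rather than a fixed a priori mapping.
        self.attn = nn.Parameter(torch.zeros(num_aus, num_patches))
        # Temporal model over the attended per-AU features of each frame.
        self.gru = nn.GRU(patch_dim * num_aus, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_aus)

    def forward(self, patches):
        # patches: (batch, time, num_patches, 3, H, W)
        b, t, p, c, h, w = patches.shape
        feats = self.encoder(patches.reshape(b * t * p, c, h, w))
        feats = feats.reshape(b, t, p, -1)                 # (b, t, p, d)
        weights = torch.softmax(self.attn, dim=-1)         # (num_aus, p)
        # Attend over patches separately for each AU: (b, t, num_aus, d)
        attended = torch.einsum('ap,btpd->btad', weights, feats)
        seq = attended.reshape(b, t, -1)                   # flatten AU x d
        out, _ = self.gru(seq)
        return self.head(out)                              # AU logits per frame

# Usage: 2 clips of 8 frames, each with 9 RGB patches of size 32x32.
x = torch.randn(2, 8, 9, 3, 32, 32)
logits = PatchAttentiveAUDetector()(x)
print(logits.shape)  # torch.Size([2, 8, 12])
```

In this sketch, 3D registration of the face prior to patch extraction (the abstract's point (i)) is assumed to happen upstream of the model; the module only illustrates points (ii) and (iii), learned patch-to-AU mappings and spatiotemporal modeling.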