Original Research Article
D-PAttNet: Dynamic patch-attentive deep network for action unit detection
- 1Carnegie Mellon University, United States
- 2Northwestern Polytechnical University, China
- 3University of Pittsburgh, United States
Facial action units (AUs) relate to specific local facial regions. Recent efforts in automated AU detection have focused on learning facial patch representations to detect specific AUs. These efforts have encountered three hurdles. First, they implicitly assume that facial patches are robust to head rotation, yet non-frontal rotation is common. Second, mappings between AUs and patches are defined a priori, which ignores co-occurrences among AUs. Third, the dynamics of AUs are either ignored or modeled sequentially rather than simultaneously, as in human perception. Inspired by recent advances in human perception, we propose a dynamic patch-attentive deep network, called D-PAttNet, for AU detection that i) controls for 3D head and face rotation, ii) learns mappings of patches to AUs, and iii) models spatiotemporal dynamics. The D-PAttNet approach significantly improves upon the existing state of the art.
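The abstract's "sigmoidal attention" over facial patches can be illustrated with a minimal sketch. This is not the authors' implementation; the function and parameter names are hypothetical, and the patch features stand in for outputs of a 3D-CNN. The key idea shown is that a sigmoid gates each patch independently, so multiple patches can contribute strongly to an AU at once, unlike softmax attention, whose weights compete and must sum to 1.

```python
# Illustrative sketch (assumed, not the paper's code) of sigmoidal
# patch attention: each facial patch receives an independent sigmoid
# gate in (0, 1), applied to its feature vector.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoidal_patch_attention(patch_feats, w, b):
    """patch_feats: (num_patches, feat_dim) array, e.g. 3D-CNN outputs.
    w: (feat_dim,) weights and b: scalar bias -- illustrative learned
    gate parameters. Returns gated features and per-patch weights."""
    scores = patch_feats @ w + b              # one scalar score per patch
    alphas = sigmoid(scores)                  # independent weights in (0, 1)
    attended = patch_feats * alphas[:, None]  # gate each patch's features
    return attended, alphas

rng = np.random.default_rng(0)
feats = rng.standard_normal((9, 16))          # e.g. 9 facial patches, 16-dim
w = rng.standard_normal(16)
attended, alphas = sigmoidal_patch_attention(feats, w, 0.0)
```

Note that `alphas` need not sum to 1, which lets the model attend to several AU-relevant patches simultaneously.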
Keywords: Action unit detection, 3D face registration, 3D-CNN, sigmoidal attention, Patch-based
Received: 02 Aug 2019;
Accepted: 08 Nov 2019.
Copyright: © 2019 Onal Ertugrul, Yang, Jeni and Cohn. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Mx. Itir Onal Ertugrul, Carnegie Mellon University, Pittsburgh, United States, firstname.lastname@example.org