Leveraging the Deep Learning Paradigm for Continuous Affect Estimation from Facial Expressions
This publication appears in: IEEE Transactions on Affective Computing
Authors: M. Oveneke, Y. Zhao, A. Díaz Berenguer, D. Jiang and H. Sahli
Number of Pages: 14
Publication Year: 2019
Abstract: Continuous affect estimation from facial expressions has attracted increased attention in the affective computing research community. This paper presents a principled framework for estimating continuous affect from video sequences. Based on recent developments, we address the problem of continuous affect estimation by leveraging the Bayesian filtering paradigm, i.e. considering affect as a latent dynamical system corresponding to a general feeling of pleasure with a degree of arousal, and recursively estimating its state using a sequence of visual observations. To this end, we advance the state-of-the-art as follows: (i) Canonical face representation (CFR): a novel algorithm for two-dimensional face frontalization, (ii) Convex unsupervised representation learning (CURL): a novel frequency-domain convex optimization algorithm for unsupervised training of deep convolutional neural networks (CNNs), and (iii) Deep extended Kalman filtering (DEKF): an extended Kalman filtering-based algorithm for affect estimation from a sequence of CNN observations. The performance of the resulting CFR-CURL-DEKF algorithmic framework is empirically evaluated on publicly available benchmark datasets for facial expression recognition (CK+) and continuous affect estimation (AVEC 2012 and 2014).
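As a point of reference for the filtering formulation mentioned in the abstract, the following is a generic sketch of the state-space model and extended Kalman filter (EKF) recursion that DEKF-style estimators build on. The transition model f, observation model h, Jacobians F_t and H_t, and noise covariances Q and R are standard EKF notation, not quantities taken from the paper; the paper's specific model choices are not reproduced here.

State-space model, with latent affect state $x_t$ and per-frame CNN observation $y_t$:
\[
x_t = f(x_{t-1}) + w_t, \quad w_t \sim \mathcal{N}(0, Q), \qquad
y_t = h(x_t) + v_t, \quad v_t \sim \mathcal{N}(0, R).
\]
EKF recursion, with Jacobians $F_t = \left.\partial f / \partial x\right|_{\hat{x}_{t-1|t-1}}$ and $H_t = \left.\partial h / \partial x\right|_{\hat{x}_{t|t-1}}$:
\[
\begin{aligned}
\hat{x}_{t|t-1} &= f(\hat{x}_{t-1|t-1}), \qquad
P_{t|t-1} = F_t P_{t-1|t-1} F_t^{\top} + Q,\\
K_t &= P_{t|t-1} H_t^{\top} \left(H_t P_{t|t-1} H_t^{\top} + R\right)^{-1},\\
\hat{x}_{t|t} &= \hat{x}_{t|t-1} + K_t\left(y_t - h(\hat{x}_{t|t-1})\right), \qquad
P_{t|t} = \left(I - K_t H_t\right) P_{t|t-1}.
\end{aligned}
\]
In this reading, each video frame yields a CNN-based observation $y_t$, and the recursion maintains a running Gaussian estimate $(\hat{x}_{t|t}, P_{t|t})$ of the continuous affect state.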