Continuous Facial Expression Representation for Multimodal Emotion Detection
Abstract
This paper presents a multimodal system for dimensional emotion detection that extracts and merges visual, acoustic, and context-relevant features. The paper studies the two main components of such systems: the extraction of relevant features and the multimodal fusion technique. Additionally, we propose a method for the automatic extraction of a new emotional facial expression feature to be used as an input to the fusion system. The feature is an invariant representation of facial expressions, which enables person-independent, high-level expression recognition. It relies on 8 key emotional expressions, which are synthesized from plausible distortions applied to the neutral face of a subject. The expressions in the video sequences are defined by their relative positions to these 8 expressions. High-level expression recognition is then performed in this space with a basic intensity-area detector. The impact of the fusion stage is investigated by comparing two different fusion techniques: a fuzzy inference system and a radial basis function (RBF) system. The experiments show that the choice of fusion technique has little impact on the results, indicating that feature extraction is the key component of a multimodal emotion detection system. All experiments were performed on the AVEC 2012 database.
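To illustrate the idea of describing a frame by its relative position to synthesized key expressions, the following is a minimal sketch, not the authors' implementation: it assumes each face is encoded as a feature vector (e.g., stacked landmark coordinates), and each coordinate of the 8-dimensional representation is obtained by projecting the frame's distortion from the neutral face onto the distortion of one synthesized key expression. The function name, normalization, and feature encoding are illustrative assumptions.

```python
import numpy as np

def expression_coordinates(frame_feats, neutral_feats, key_expr_feats):
    """Project a frame into an 8-D expression space (illustrative sketch).

    frame_feats    : feature vector of the current frame
    neutral_feats  : feature vector of the subject's neutral face
    key_expr_feats : list of 8 feature vectors, one per synthesized key expression
    Returns an array of 8 coordinates, one per key expression.
    """
    frame_delta = frame_feats - neutral_feats            # distortion of the current frame
    coords = []
    for key_feats in key_expr_feats:                      # 8 synthesized key expressions
        key_delta = key_feats - neutral_feats             # distortion of the key expression
        norm = np.dot(key_delta, key_delta)
        # Normalized projection: 1.0 means the frame fully matches this key expression
        coords.append(np.dot(frame_delta, key_delta) / norm if norm > 0 else 0.0)
    return np.asarray(coords)
```

Because the coordinates are expressed relative to each subject's own neutral face and synthesized expressions, a representation of this kind is independent of the subject's facial morphology, which is what enables person-independent recognition in this space.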