Hierarchical sparse coding framework for speech emotion recognition
This publication appears in: Speech Communication
Authors: D. Torres Boza, M. Oveneke, F. Wang, D. Jiang and H. Sahli
Volume: 99
Pages: 80-89
Publication Date: May 2018
Abstract: Finding an appropriate feature representation for audio data is central to speech emotion recognition. Most existing audio features rely on hand-crafted feature encoding techniques, such as the AVEC challenge feature set. An alternative approach is to learn features automatically. This has the advantage of generalizing well to new data, particularly if the features are learned in an unsupervised manner with fewer restrictions on the data itself. In this work, we adopt the sparse coding framework as a means of automatically learning feature representations from audio and propose a hierarchical sparse coding (HSC) scheme. Experimental results indicate that the features obtained in this unsupervised fashion capture useful properties of speech that distinguish between emotions.
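To make the general idea of hierarchical sparse coding concrete, below is a minimal two-layer sketch using scikit-learn's DictionaryLearning on synthetic frame-level descriptors. It is only an illustration of the generic technique under assumed choices (dictionary sizes, pooling window, sparsity penalty, and the use of max pooling are all hypothetical), not the authors' HSC implementation.

```python
# Minimal two-layer sparse coding sketch (illustration only, not the paper's exact HSC).
# Synthetic frame-level features stand in for real low-level audio descriptors (e.g. MFCCs).
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
frames = rng.standard_normal((500, 39))          # 500 frames x 39-dim descriptors (assumed)

# Layer 1: learn a dictionary over frames and encode each frame as a sparse code.
layer1 = DictionaryLearning(n_components=64, alpha=1.0, max_iter=20,
                            transform_algorithm="lasso_lars", random_state=0)
codes1 = layer1.fit_transform(frames)            # (500, 64) frame-level sparse codes

# Pool the codes over fixed-length windows to form segment-level descriptors.
window = 25
pooled = np.stack([np.abs(codes1[i:i + window]).max(axis=0)
                   for i in range(0, len(codes1) - window + 1, window)])

# Layer 2: a second dictionary over the pooled codes yields higher-level features
# that could then be fed to an emotion classifier.
layer2 = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                            transform_algorithm="lasso_lars", random_state=0)
codes2 = layer2.fit_transform(pooled)
print(codes2.shape)
```

The point of the hierarchy is that the second layer encodes patterns of co-activation among first-layer atoms over longer time spans, which is the kind of structure a single flat dictionary over raw frames cannot capture.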