Title: Audio Visual Speech Recognition Based on Multi-Stream DBN Models with Articulatory Features
Authors: D. Jiang, P. Wu, F. Wang, H. Sahli and W. Verhelst
Publication Date: Nov. 2010
Number of Pages: 4
Abstract: We present a multi-stream Dynamic Bayesian Network model with Articulatory Features (AF_AV_DBN) for audio visual speech recognition. The conditional probability distributions of the nodes are defined to account for the asynchronies between the articulatory features (AFs). Speech recognition experiments are carried out on an audio visual connected digit database. Results show that, compared with the state synchronous DBN model (SS_DBN) and the state asynchronous DBN model (SA_DBN), the AF_AV_DBN model achieves the highest recognition rates when the asynchrony constraint between the AFs is appropriately set, improving the average recognition rate to 89.38% from 87.02% for SS_DBN and 88.32% for SA_DBN. Moreover, the audio visual multi-stream AF_AV_DBN model greatly improves the robustness of the audio-only AF_A_DBN model; for example, under noise at ᆞdB, the recognition rate is improved from 20.75% to 76.24%.
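The abstract's key idea is that the AF streams are allowed to desynchronize, but only up to a tunable limit. As a minimal sketch of what such an asynchrony constraint might look like, the toy function below checks whether the current state indices of several streams stay within a maximum allowed offset of one another; all names and the exact constraint form are illustrative assumptions, not the paper's actual model definition.

```python
# Hedged sketch of an inter-stream asynchrony constraint of the kind the
# AF_AV_DBN model imposes between articulatory-feature (AF) streams.
# The function and its constraint form are illustrative assumptions only.

def within_asynchrony(state_indices, max_async):
    """Return True if all streams' current state indices differ from one
    another by at most `max_async` states."""
    return max(state_indices) - min(state_indices) <= max_async

# Three hypothetical streams (e.g. two AF streams plus an audio stream),
# each at some state index along its own sub-word state sequence.
print(within_asynchrony([3, 4, 3], max_async=1))  # True: nearly aligned
print(within_asynchrony([2, 5, 3], max_async=1))  # False: too asynchronous
```

In a DBN, a check of this kind would typically enter the conditional probability distributions as a hard zero on transitions that violate the limit, which is one way to realize the "appropriately set" asynchrony constraint the abstract refers to.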