Multi-modal neural networks with multi-scale RGB-T fusion for semantic segmentation

This publication appears in: Electronics Letters
Authors: L. Yangxintong, I. Schiopu and A. Munteanu
Volume: 56
Issue: 18
Pages: 920-922
Publication Date: Sep. 2020
Abstract: A novel deep-learning-based method for semantic segmentation of RGB and Thermal images is introduced. The proposed method employs a novel neural network design for multi-modal fusion based on multi-resolution patch processing. A novel decoder module is introduced to fuse the RGB and Thermal features extracted by separate encoder streams. Experimental results on synthetic and real-world data demonstrate the efficiency of the proposed method compared with state-of-the-art methods.
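To make the described design concrete, below is a minimal PyTorch sketch of a two-stream RGB-T segmentation network: separate encoder streams for the RGB and thermal inputs, with a decoder that fuses the two feature maps and produces per-pixel class scores. The layer sizes, the channel-concatenation fusion operator, and all module names are illustrative assumptions, not the architecture proposed in the paper.

import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions followed by 2x downsampling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class RGBTSegNet(nn.Module):
    """Hypothetical two-stream network: separate RGB and thermal encoders,
    fused in a shared decoder (sketch only, not the authors' design)."""

    def __init__(self, num_classes: int = 9):
        super().__init__()
        # Encoder stream for the 3-channel RGB input.
        self.rgb_encoder = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        # Encoder stream for the 1-channel thermal input.
        self.thermal_encoder = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        # Decoder fuses the concatenated multi-modal features and upsamples
        # back to the input resolution for per-pixel class scores.
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_encoder(rgb)             # (N, 64, H/4, W/4)
        f_th = self.thermal_encoder(thermal)      # (N, 64, H/4, W/4)
        fused = torch.cat([f_rgb, f_th], dim=1)   # channel-wise feature fusion
        return self.decoder(fused)                # (N, num_classes, H, W)


if __name__ == "__main__":
    net = RGBTSegNet(num_classes=9)
    rgb = torch.randn(1, 3, 256, 256)
    thermal = torch.randn(1, 1, 256, 256)
    print(net(rgb, thermal).shape)  # torch.Size([1, 9, 256, 256])

The sketch fuses the modalities once, by concatenation at the bottleneck; the paper's multi-resolution patch processing and dedicated fusion decoder would instead combine features at several scales.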