Multi-Modal Dictionary Learning for Image Separation With Application in Art Investigation
This publication appears in: IEEE Transactions on Image Processing
Authors: N. Deligiannis, J. Mota, B. Cornelis, M. Rodrigues and I. Daubechies
Volume: 26, Issue: 2, Pages: 751-764
Publication Date: Feb. 2017
Abstract: In support of art investigation, we propose a new source separation method that unmixes a single X-ray scan acquired from double-sided paintings. In this problem, the X-ray signals to be separated have similar morphological characteristics, which brings previous source separation methods to their limits. Our solution is to use photographs taken from the front and back side of the panel to drive the separation process. The crux of our approach relies on the coupling of the two imaging modalities (photographs and X-rays) using a novel coupled dictionary learning framework able to capture both common and disparate features across the modalities using parsimonious representations: the common component captures features shared by the multi-modal images, whereas the innovation component captures modality-specific information. As such, our model enables the formulation of appropriately regularized convex optimization procedures that lead to the accurate separation of the X-rays. Our dictionary learning framework can be tailored both to a single- and a multi-scale framework, with the latter leading to a significant performance improvement. Moreover, to improve further on the visual quality of the separated images, we propose to train coupled dictionaries that ignore certain parts of the painting corresponding to craquelure. Experimentation on synthetic and real data, taken from digital acquisition of the Ghent Altarpiece (1432), confirms the superiority of our method against the state-of-the-art morphological component analysis technique that uses either fixed or trained dictionaries to perform image separation.
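To illustrate the common/innovation decomposition idea behind the coupled dictionary model, the following is a minimal sketch, not the paper's actual algorithm: a signal is synthesized from a "common" dictionary (features shared with the photograph) plus an "innovation" dictionary (modality-specific features), and its sparse code over the stacked dictionary is recovered with iterative soft-thresholding (ISTA). All names (`D_common`, `D_innov`, the dimensions, and the regularization weight) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ista(D, y, lam=0.05, n_iter=500):
    """ISTA for the lasso: min_z 0.5*||y - D z||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = z - D.T @ (D @ z - y) / L          # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return z

rng = np.random.default_rng(0)
n, k = 16, 32
# Hypothetical dictionaries: one for features shared across modalities,
# one for modality-specific ("innovation") features.
D_common = rng.standard_normal((n, k))
D_innov = rng.standard_normal((n, k))

# Synthesize a toy signal from sparse common and innovation codes.
z_c = np.zeros(k); z_c[rng.choice(k, 3, replace=False)] = rng.standard_normal(3)
z_i = np.zeros(k); z_i[rng.choice(k, 3, replace=False)] = rng.standard_normal(3)
signal = D_common @ z_c + D_innov @ z_i

# Sparse coding over the stacked dictionary splits the signal into the
# two components; in the paper this coupling is what drives separation.
D = np.hstack([D_common, D_innov])
z = ista(D, signal)
common_part = D_common @ z[:k]
innov_part = D_innov @ z[k:]
err = np.linalg.norm(common_part + innov_part - signal) / np.linalg.norm(signal)
print(f"relative reconstruction error: {err:.3f}")
```

The stacked-dictionary formulation is the key design point: because both components are represented sparsely in one convex problem, the common part can be regularized to agree with the photograph while the innovation part absorbs what only the X-ray sees.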