Image Fusion using ICA bases

The idea of extracting local image features that resemble the receptive fields of simple cells in the mammalian primary visual cortex (V1) has been exploited thoroughly in image analysis. More specifically, it has been shown that maximizing the sparseness (or non-Gaussianity) of image components, assuming a linear generative model, tends to produce Gabor-like bases that approximate the receptive fields in V1. Independent Component Analysis (ICA) has been proposed to identify this type of local feature, which can be employed for a number of image processing tasks, such as image coding and image denoising. In this seminar, we describe the use of such bases in the field of image fusion. Image fusion is commonly described as the task of enhancing the perception of a scene by combining information captured by sensors of different modalities. The fusion community has also employed the pyramid decomposition and the Dual-Tree Wavelet Transform as analysis and synthesis tools for image fusion, following a variety of fusion rules. Here, we demonstrate the efficiency of ICA bases for image fusion. The bases are trained offline using images of similar context to the observed scene, and the input images are fused in the transform domain using pixel-based or region-based rules. An unsupervised, adaptive ICA-based fusion scheme is also introduced. The proposed schemes show improved performance compared to traditional approaches based on the wavelet transform.
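To make the pipeline concrete, the following is a minimal Python sketch of ICA-based fusion: bases are learned offline from patches of training images, the two registered inputs are projected onto those bases, the coefficients are combined with a pixel-based max-absolute-value rule, and the fused image is reconstructed. The patch size, the mean handling, the helper names (extract_patches, train_ica_bases, fuse), the use of scikit-learn's FastICA as the training routine, and the specific fusion rule are illustrative assumptions, not the exact choices made in the seminar.

```python
import numpy as np
from sklearn.decomposition import FastICA

PATCH = 8  # patch size in pixels (an illustrative choice, not specified in the abstract)

def extract_patches(img, step=PATCH):
    """Slide a non-overlapping PATCH x PATCH window over a 2-D image and flatten each patch."""
    h, w = img.shape
    return np.array([img[i:i + PATCH, j:j + PATCH].ravel()
                     for i in range(0, h - PATCH + 1, step)
                     for j in range(0, w - PATCH + 1, step)], dtype=float)

def reassemble(patches, shape, step=PATCH):
    """Inverse of extract_patches for non-overlapping patches."""
    out, k = np.zeros(shape), 0
    for i in range(0, shape[0] - PATCH + 1, step):
        for j in range(0, shape[1] - PATCH + 1, step):
            out[i:i + PATCH, j:j + PATCH] = patches[k].reshape(PATCH, PATCH)
            k += 1
    return out

def train_ica_bases(training_images, n_bases=40):
    """Offline training: learn ICA bases from patches of images with similar context."""
    data = np.vstack([extract_patches(im) for im in training_images])
    data -= data.mean(axis=1, keepdims=True)              # remove the mean of each patch
    ica = FastICA(n_components=n_bases, whiten="unit-variance", random_state=0)
    ica.fit(data)
    return ica

def fuse(img_a, img_b, ica):
    """Fuse two registered images in the ICA transform domain with a max-abs, pixel-based rule."""
    pa, pb = extract_patches(img_a), extract_patches(img_b)
    ma, mb = pa.mean(axis=1, keepdims=True), pb.mean(axis=1, keepdims=True)
    ca, cb = ica.transform(pa - ma), ica.transform(pb - mb)
    fused = np.where(np.abs(ca) >= np.abs(cb), ca, cb)    # keep the most salient coefficient
    patches = ica.inverse_transform(fused) + (ma + mb) / 2  # restore an average local mean
    return reassemble(patches, img_a.shape)

# Hypothetical usage: train bases on images of similar context, then fuse two sensor outputs.
# ica = train_ica_bases([example_scene_1, example_scene_2])
# fused_image = fuse(visible_image, infrared_image, ica)
```

A region-based rule would replace the coefficient-wise max-abs selection with a decision made per segmented region, and the unsupervised adaptive scheme mentioned above would update the bases or the rule from the observed images themselves rather than relying only on offline training.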
