Volume 8, Issue 3 (1-2021)
jgit 2021, 8(3): 39-59
Fusion of Thermal Infrared and Visible Images Based on Multi-scale Transform and Sparse Representation
Mohammad Fallah *, Mohsen Azadbakht
Remote Sensing & GIS Research Center, Shahid Beheshti University
Abstract:
Due to the differences between visible and thermal infrared images, combining these two types of images is essential for better understanding the characteristics of targets and the environment. Thermal infrared images are particularly important for distinguishing targets from the background based on radiation differences; they work well in all-weather and day/night conditions and are also used in land surface temperature (LST) calculation. However, their spatial resolution is relatively low, making it challenging to detect targets. Image fusion is an efficient method for enhancing the spatial resolution of the thermal bands by fusing them with high-spatial-resolution visible images. It is therefore desirable to fuse these two types of images, combining the advantages of both thermal radiation information and detailed spatial information. Multi-scale transforms (MST) and sparse representation (SR) are widely used in image fusion, and the two approaches can be combined to improve fusion performance. In this scheme, an MST is first applied to each of the pre-registered source images to obtain their low-pass and high-pass coefficients. The low-pass images are then combined with an SR-based fusion approach, while the high-pass images are fused using the absolute values of the coefficients. The fused image is finally obtained by performing an inverse MST on the merged coefficients. In this paper, nine image fusion methods based on multi-scale transform and sparse representation, namely the Laplacian pyramid (LP), ratio of low-pass pyramid (RP), wavelet transform (Wavelet), dual-tree complex wavelet transform (DTCWT), curvelet transform (CVT), nonsubsampled contourlet transform (NSCT), sparse representation (SR), hybrid sparse representation and Laplacian pyramid (LP-SR), and hybrid sparse representation and NSCT (NSCT-SR) methods, are tested on FLIR and Landsat-8 thermal infrared and visible images.
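As a rough illustration of the decompose-fuse-reconstruct pipeline described above (not the authors' implementation), the following sketch performs a one-level Laplacian-style decomposition in Python/NumPy. Two simplifications are assumed: the paper fuses the low-pass bands with sparse coding, which is replaced here by a simple average, and a 3x3 box blur stands in for a proper pyramid filter; the high-pass fusion by maximum absolute coefficient matches the rule stated in the abstract:

```python
import numpy as np

def blur(img):
    # 3x3 box blur with edge padding; a stand-in for the pyramid's
    # low-pass filter (a Gaussian kernel in a real Laplacian pyramid)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def fuse(vis, tir):
    # One-level decomposition of each pre-registered source image
    low_v, low_t = blur(vis), blur(tir)
    high_v, high_t = vis - low_v, tir - low_t
    # High-pass fusion: keep the coefficient with the larger absolute value
    high = np.where(np.abs(high_v) >= np.abs(high_t), high_v, high_t)
    # Low-pass fusion: simple average (the paper uses an SR-based rule here)
    low = 0.5 * (low_v + low_t)
    # "Inverse transform": recombine the merged bands
    return low + high
```

Because the decomposition is perfectly invertible (low + high reconstructs the input), fusing an image with itself returns the image unchanged, which is a useful sanity check for any implementation of this scheme.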
To evaluate the performance of the different image fusion methods, we use three quantitative evaluation metrics: entropy (EN), mutual information (MI), and the gradient-based fusion metric (QAB/F). Despite the lack of spectral overlap between the visible and thermal infrared bands of Landsat 8, the quantitative evaluation metrics showed that the hybrid LP-SR method provides the best results (EN=7.362, MI=2.605, QAB/F=0.531), and its fused images have the best visual quality. This method improves spatial details while preserving the thermal radiation information. It is followed by the RP, LP, and NSCT methods. Similar results were achieved on the FLIR images.
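The first two metrics can be computed directly from grey-level histograms. The sketch below assumes 8-bit images; note that fusion papers typically report MI as the sum of the fused image's mutual information with each source image, and the gradient-based QAB/F metric is omitted here for brevity:

```python
import numpy as np

def entropy(img, bins=256):
    # Shannon entropy (EN) of the grey-level histogram, in bits
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins (0 * log 0 = 0)
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    # MI between two images, estimated from their joint grey-level histogram
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal over image b
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))
```

Higher EN indicates a more information-rich fused image, and higher MI indicates that more of the source images' information was transferred into the fused result; an image's MI with itself equals its own entropy, which bounds the per-source score from above.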
Keywords: Visible image, Thermal infrared image, Image fusion, Multi-scale transform, Sparse representation
Full-Text [PDF 1633 kb]
Type of Study: Research | Subject: RS
Received: 2018/09/3 | Accepted: 2020/12/14 | Published: 2021/01/19

