Objective To address the shortcomings of traditional multi-scale-transform-based image fusion algorithms, this paper proposes an infrared and visible image fusion algorithm based on the W-transform and bidimensional empirical mode decomposition (BEMD). Method First, to extract the high-frequency information of an image more effectively and to suppress the mode-mixing phenomenon present in BEMD, a new multi-scale decomposition algorithm based on BEMD and the W-transform (abbreviated W-BEMD) is proposed. The source images are then decomposed by the W-BEMD pyramid into high-frequency components (WIMFs) and a residual component (WR). Next, the corresponding WIMF components of the source images are fused with a weighted rule based on local-region variance, while the WR components are fused with a weighted rule based on local-region energy, yielding the W-BEMD representation of the fused image. Finally, the fused image is obtained by the inverse W-BEMD transform. The key idea of the W-BEMD decomposition is to use the W-transform recursively to extract the high-frequency content that remains trapped in the low-frequency component at each level of the BEMD decomposition and to superimpose it onto the corresponding high-frequency component, achieving a more effective multi-scale decomposition of the image. Result Comparative experiments show that the fused images produced by the proposed method have better visual quality, containing both salient infrared targets and clear visible-light background details. Moreover, the proposed method also shows clear advantages in three objective evaluation metrics: average gradient (AG), spatial frequency (SF), and mutual information (MI). Conclusion This paper proposes a new infrared and visible image fusion algorithm. Experimental results show that it achieves good fusion performance and is more effective in preserving the detail information of the visible image and highlighting the target information of the infrared image.
Infrared and visible image fusion based on BEMD and W-transform (Chinagraph2018-P000113)
Gong Rui, Wang Xiaochun (Beijing Forestry University)
Objective Infrared and visible images of the same scene differ greatly and carry complementary information. Using this complementary information to generate a fused image with both salient targets and clear details is therefore of great significance, and fusion of visible and infrared images has found many applications in the military, security, and surveillance fields. Multi-scale techniques, including wavelet transforms and multi-scale geometric decompositions, are widely used in image fusion. Empirical mode decomposition (EMD) and the W-transform are two such tools. EMD is a fully data-driven time-frequency analysis method that adaptively decomposes a signal into intrinsic mode functions (IMFs) and has shown considerable power in the analysis of non-stationary data. The W-transform is an orthogonal transform with strong decomposition and reconstruction ability for both continuous and discontinuous information; it can characterize the local variation of images effectively. In view of the deficiencies of traditional multi-scale-transform-based image fusion algorithms, this paper proposes a new infrared and visible image fusion method based on the W-transform and bidimensional empirical mode decomposition (BEMD). Method The proposed method operates on registered infrared and visible images with the same spatial resolution. To eliminate the mode-mixing phenomenon in EMD, a new decomposition method based on BEMD and the W-transform, called W-BEMD, is first proposed. The main idea of W-BEMD is to apply the W-transform to the low-frequency component at each level of the BEMD decomposition and to superimpose the extracted high-frequency component onto the IMF of the same decomposition level. W-BEMD is thus an improved BEMD that extracts high-frequency information more effectively and suppresses the frequency-aliasing effect of BEMD.
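The recursive structure described above can be sketched as follows. This is a minimal illustration of the W-BEMD idea only: the BEMD sifting step is stood in for by a simple box-filter low-pass, and the high-frequency part of the W-transform is stood in for by a difference-of-smoothings detail extractor; both are hypothetical placeholders, not the paper's actual operators. What the sketch does show faithfully is the level-by-level bookkeeping: the detail pulled out of each low-frequency component is folded into the same level's WIMF, and the sum of all WIMFs plus the final residue reconstructs the input exactly.

```python
import numpy as np

def lowpass(img, k=5):
    # Placeholder for the BEMD envelope-mean step: a k x k box blur.
    # A real BEMD implementation would sift using extrema envelopes.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def w_highpass(img):
    # Placeholder for the high-frequency part of the W-transform:
    # the difference between the image and a coarser blur of it.
    return img - lowpass(img, k=9)

def w_bemd(img, levels=3):
    """Sketch of the W-BEMD pyramid: at each level, take the BEMD-style
    IMF (detail), then pull residual high frequencies out of the low-pass
    remainder and fold them into the WIMF of the same level."""
    img = img.astype(float)
    wimfs = []
    residue = img
    for _ in range(levels):
        low = lowpass(residue)      # BEMD low-frequency component
        imf = residue - low         # BEMD-style IMF (detail) of this level
        extra = w_highpass(low)     # high freq. still trapped in `low`
        wimfs.append(imf + extra)   # enriched WIMF of this level
        residue = low - extra       # cleaner low-frequency remainder
    return wimfs, residue           # sum(wimfs) + residue == img
```

Because each level only moves content between the detail and remainder terms, the decomposition telescopes and the inverse transform is a plain summation, matching the reconstruction step described in the abstract.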
W-BEMD is then applied to infrared and visible image fusion to obtain more satisfactory fused results. First, the registered infrared and visible images of the same scene are decomposed by W-BEMD into high-frequency components (WIMFs) and a residual component (WR). Then, the corresponding WIMFs of the source images at each decomposition level are fused using a weighted-average rule based on local-area variance to obtain the fused WIMF images, while a weighted-average strategy based on local-area energy is adopted for the fusion of the residual components WR. Finally, the fused image is generated by adding the fused WIMF images and the fused residual component together. Result Decomposition experiments are conducted to evaluate the effect of W-BEMD; they show that the high-frequency part produced by W-BEMD contains more complete edge information than that produced by BEMD. Fusion experiments on four groups of infrared and visible images are conducted to verify the superiority and validity of the proposed method. Three objective evaluation indices, average gradient (AG), spatial frequency (SF), and mutual information (MI), are employed to evaluate the fusion results quantitatively. The results show that the proposed method outperforms five compared methods in both objective assessment and subjective visual quality. Visually, the proposed method not only preserves the rich scene information of the visible-light image but also effectively highlights the hot-target information of the infrared image. Its fused results have high contrast, rich edge details, and salient targets, and are clearly better than the results of the five compared methods. Objectively, the proposed algorithm achieves the best average gradient and spatial frequency, and outperforms the compared algorithms in mutual information in almost all cases.
Conclusion A new fusion method for infrared and visible images based on BEMD and the W-transform is proposed. Based on the characteristics of the W-BEMD decomposition of the source images, different fusion rules are designed for the different frequency bands. Four groups of infrared and visible images are employed for the performance evaluation of the proposed method. The analysis shows that the proposed algorithm is more effective in preserving detail information from the visible images and highlighting target information from the infrared images.