Objective Dense matching between image pairs is the basis of advanced image processing tasks such as 3D reconstruction and SLAM, but wide baselines, repetitive textures, non-rigid deformation, and poor time-space efficiency are the main factors limiting the practicality of such methods. To better address these problems, this paper proposes an efficient dense matching method for image pairs with repetitive textures and non-rigid deformation. Method First, the DeepMatching algorithm is applied to the downsampled image pair to obtain an initial set of matching points, and random sample consensus (RANSAC) is used to remove outliers. Second, the matching result from the previous step is used to estimate the camera pose and scaling factor, which determine the neighborhood of each point pair during densification; HOG descriptors are then extracted from the neighborhoods of corresponding point pairs and convolved to obtain a score matrix. Finally, new matching point pairs are determined from the values of the normalized score matrix and the variance of the index distances, thereby achieving densification. Results Experiments were conducted on several public datasets using image pairs of the same size with a 4:3 aspect ratio. The results show that the proposed method is robust to rotation, scale change, and deformation, and can successfully match wide-baseline image pairs containing repetitive textures and non-rigid deformation. Compared with the DeepMatching algorithm, the proposed method improves precision, space efficiency, and time efficiency by nearly 10%, 25%, and 30%, respectively. Conclusion The proposed dense matching method achieves high precision and time-space efficiency, and its results can be applied to advanced image processing tasks such as 3D reconstruction and super-resolution reconstruction.
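The outlier-removal step described above can be sketched in code. This is a minimal pure-Python illustration, not the paper's implementation: it assumes a translation-only motion model for simplicity (the paper's RANSAC runs on DeepMatching output under a full geometric model), and the function name and parameters are hypothetical.

```python
import random

def ransac_translation(matches, iters=200, tol=2.0, seed=0):
    """Simplified RANSAC over point matches [((x1, y1), (x2, y2)), ...]:
    repeatedly hypothesise a pure translation from one random match and
    keep the hypothesis that explains the most matches within `tol`."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1          # hypothesised translation
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# 20 consistent matches shifted by (5, 3), plus two gross outliers
matches = [((x, y), (x + 5, y + 3)) for x in range(5) for y in range(4)]
matches += [((0, 0), (50, 40)), ((1, 1), (-7, 9))]
print(len(ransac_translation(matches)))  # the 20 consistent matches survive
```

In practice one would fit a homography or fundamental matrix instead of a translation, but the accept/reject logic on the match set is the same.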
Efficient dense matching method for repeated texture and non-rigid deformation
Jia Di, Zhao Mingyuan, Yang Ninghua, Zhu Ningdan, Meng Lu (School of Electronic and Information Engineering, Liaoning Technical University, Huludao, Liaoning)
Objective Dense matching between images is the basis of 3D reconstruction, SLAM, and other advanced image processing tasks. However, wide baselines, repetitive textures, non-rigid deformation, and poor time-space efficiency are the main factors limiting the practicality of such methods. To better address these problems, this paper proposes an efficient dense matching method for image pairs with repetitive textures and non-rigid deformation. Method Firstly, the source and target images are downsampled by bilinear interpolation; a set of matching points is obtained with DeepMatching, and outliers are eliminated by random sample consensus (RANSAC). Secondly, the matching set obtained in the previous step is used to estimate the camera pose and the scaling factor, which determine the neighborhood of each point during densification; the score matrix is then obtained by convolving the HOG descriptors extracted from the corresponding neighborhoods. The score matrix, composed of the similarity scores between all points in the two neighborhoods, is the central concept of the method because it connects its two major steps: selecting the appropriate convolution region and determining the new matching points. The size and the position of the convolution region, decided by the scaling factor a and the camera pose respectively, determine the appropriate neighborhood, and this neighborhood selection remains stable under rotation and scaling. Finally, the new matching points are determined from the values of the normalized score matrix and the variance of the index distances to achieve densification; the relative coordinates of the maxima in each group of Sim are then restored to absolute coordinates in the input image. Result The code was implemented in VS2013 with Intel MKL 2015 and OpenCV 3.
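The core of the densification step, scoring candidate correspondences by comparing gradient-orientation descriptors between two neighborhoods, can be sketched as follows. This is an illustrative simplification, not the paper's code: the descriptor below is a single unnormalized-cell orientation histogram rather than a full HOG (no cells, blocks, or block normalization), the similarity is a plain dot product, and all names are hypothetical.

```python
import math

def hog_like_descriptor(patch, bins=8):
    """Toy HOG-style descriptor: an L2-normalised histogram of unsigned
    gradient orientations, weighted by gradient magnitude, over one patch
    (a 2-D list of grey values)."""
    hist = [0.0] * bins
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi       # unsigned orientation
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]

def score_matrix(src_patches, dst_patches):
    """Entry (i, j) is the similarity between the i-th source patch and
    the j-th target patch; the best match for each source point is the
    column with the maximum score in its row."""
    src = [hog_like_descriptor(p) for p in src_patches]
    dst = [hog_like_descriptor(p) for p in dst_patches]
    return [[sum(a * b for a, b in zip(s, d)) for d in dst] for s in src]
```

For example, a patch with a purely horizontal gradient scores 1.0 against itself and 0.0 against a patch with a purely vertical gradient. In the paper the score matrix is additionally normalized and combined with the variance of the index distances before a new match is accepted; the dot product here only illustrates how the matrix is filled.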
The experiments used image pairs of the same size with a 4:3 aspect ratio from the Mikolajczyk, MPI-Sintel, and KITTI datasets, in an environment with a 3.8 GHz CPU and 8 GB RAM. To evaluate the proposed method comprehensively and objectively, we selected multiple sets of images of different sizes and compared its time usage, memory usage, and precision with those of DeepMatching. To better illustrate the problems the method solves, it was applied to the matching of image pairs under repetitive-texture and non-rigid-deformation conditions. Under repetitive textures, the method handles not only matching under rotation and scaling but also matching under wide baselines; it also performs well under non-rigid deformation. Combining the accuracy results on the above datasets, the precision of the proposed method is higher than that of DeepMatching used directly (an average increase of about 10%), and as the image size increases, memory usage and time usage are reduced by nearly 25% and 53%, respectively. Conclusion To verify the effectiveness of the proposed method, its time usage, memory usage, and precision were compared with DeepMatching on multiple public datasets: precision improved by about 10%, while memory usage and time usage were reduced by nearly 25% and 53%, respectively. The effects of wide baselines, repetitive textures, and non-rigid deformation on the robustness and efficiency of matching are thus better handled. At the same time, to make the algorithm more general, rotation and scaling are handled explicitly in the implementation. Given its versatility and practicality, we plan in future work to integrate this method into advanced image processing tasks such as 3D reconstruction and SLAM.