Zhang Canlong, Tang Yanping, Li Zhixin, Wang Zhiwen, Cai Bing (Guangxi Collaborative Innovation Center of Multi-source Information Integration and Intelligent Processing, Guilin 541004, China; Guangxi Experiment Center of Information Science, Guilin University of Electronic Technology, Guilin 541004, China; College of Computer Science and Communication Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China)
Objective To address the real-time performance and accuracy issues in fusion tracking, this paper proposes a joint infrared-visible target tracking algorithm based on second-order spatiograms. Method The algorithm represents the target with a second-order spatiogram and constructs a new objective function by weighted fusion of the infrared and visible target similarities. A joint target center-shift formula is then derived following the kernel-tracking inference mechanism, and the mean shift procedure is used to search for the target automatically. The algorithm also adjusts the fusion weights adaptively and updates the target model online. Result Four typical infrared-visible video pairs were selected to test the tracker in night scenes, background shadow, target crossing and clustering, and target occlusion. The proposed algorithm was compared quantitatively with the L1 tracker (L1T), the fuzzified region dynamic fusion tracker (FRD), and the joint histogram tracker (JHT) in terms of average center error, average overlap ratio, success rate, and average tracking time. On the four videos, the corresponding metrics of each algorithm are: proposed method (6.664, 0.702, 0.921, 0.009), L1T on the infrared target (25.53, 0.583, 0.742, 0.363), L1T on the visible target (31.21, 0.359, 0.459, 0.293), FRD (10.73, 0.567, 0.702, 0.565), and JHT (15.07, 0.622, 0.821, 0.001). The average precision of the proposed algorithm is about 23%, 14%, and 8% higher than the other trackers, and its average success rate is about 32%, 46%, and 10% higher. Conclusion The proposed algorithm outperforms traditional single-source tracking methods in handling scene clutter, illumination change, and spatial-information preservation, and is suitable for tracking targets in night scenes, background shadow, and cluttered backgrounds; for video at 30 frames/s, it can track up to four targets online simultaneously.
Joint tracking of infrared-visible target via spatiogram representation
Zhang Canlong, Tang Yanping, Li Zhixin, Wang Zhiwen, Cai Bing (Guangxi Collaborative Innovation Center of Multi-source Information Integration and Intelligent Processing, Guilin 541004, China; Guangxi Experiment Center of Information Science, Guilin University of Electronic Technology, Guilin 541004, China; College of Computer Science and Communication Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China)
Objective Considering the real-time performance and accuracy issues in multi-sensor fusion tracking, this study proposes a joint spatiogram tracker for infrared-visible targets. Method In the proposed method, a second-order spatiogram is used to represent the target. The similarity between the infrared candidate and its target model, together with the similarity between the visible candidate and its target model, is integrated into a novel objective function for evaluating the target state. A joint target center-shift formula is established by applying a derivation similar to that of the mean shift algorithm to the objective function, and the optimal target location is then obtained recursively with the mean shift procedure. In addition, an adaptive weight adjustment method and a particle-filter-based model update method are designed. Result We tested the proposed tracker on four publicly available infrared-visible sequence pairs. These data sets involve general tracking difficulties, such as the absence of light at night; shadow, clustering, and overlap among targets; and occlusion. We compared our method with joint histogram tracking (JHT, the degenerate version of our method) and state-of-the-art algorithms, namely, the L1 tracker (L1T) and the fuzzified region dynamic fusion tracker (FRD). For the quantitative comparison, we used four evaluation metrics: average center offset error, average overlap ratio, average success rate, and average calculation time. The corresponding results of each algorithm on the four data sets are as follows: proposed method (6.664, 0.702, 0.921, 0.009), L1T on the infrared target (25.53, 0.583, 0.742, 0.363), L1T on the visible target (31.21, 0.359, 0.459, 0.293), FRD (10.73, 0.567, 0.702, 0.565), and JHT (15.07, 0.622, 0.821, 0.001). In terms of average overlap ratio, our method is approximately 23%, 14%, and 8% higher than L1T, FRD, and JHT, respectively. In terms of average success rate, our method is approximately 32%, 46%, and 10% higher than the corresponding trackers. Conclusion The proposed fusion tracker is superior to single-source trackers in addressing cluttered backgrounds, illumination change, and spatial-information retention. It is suitable for tracking targets in situations such as night scenes with little light; shadow, clustering, and overlap among targets; and occlusion. For video at 30 frames/s, the method can track up to four targets online simultaneously in real time.
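To illustrate the joint center-shift idea described in the Method section, the following Python sketch is a heavily simplified, hypothetical rendition, not the paper's implementation: the second-order spatiogram is reduced to a plain intensity histogram, the closed-form mean-shift derivation is replaced by a small local search, and the per-modality shifts are fused with weights proportional to their Bhattacharyya similarity scores (mimicking the adaptive weight adjustment). All function names and parameters are illustrative assumptions.

```python
import numpy as np

def histogram(patch, bins=16):
    # Normalized intensity histogram of an image patch (a stand-in for the
    # second-order spatiogram used in the paper).
    h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def bhattacharyya(p, q):
    # Similarity between a candidate histogram and the target model.
    return float(np.sum(np.sqrt(p * q)))

def joint_mean_shift(ir, vis, model_ir, model_vis, center, half=8, iters=10):
    # One joint tracking run: a per-modality shift is found by a small local
    # search (replacing the closed-form center-shift formula), then the two
    # shifts are fused with weights proportional to their similarity scores.
    for _ in range(iters):
        shifts, weights = [], []
        for img, model in ((ir, model_ir), (vis, model_vis)):
            best, best_s = np.zeros(2), -1.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    cy, cx = int(center[0]) + dy, int(center[1]) + dx
                    patch = img[cy - half:cy + half, cx - half:cx + half]
                    s = bhattacharyya(histogram(patch), model)
                    if s > best_s:
                        best_s, best = s, np.array([dy, dx], float)
            shifts.append(best)
            weights.append(best_s)
        w = np.array(weights) / max(sum(weights), 1e-12)  # adaptive fusion weights
        step = w[0] * shifts[0] + w[1] * shifts[1]        # joint center shift
        if not step.any():
            break  # converged: neither modality wants to move the center
        center = center + step
    return center
```

In this toy setting, a bright target that appears in both modalities pulls the joint estimate toward itself; if one modality degrades (e.g., the visible channel at night), its similarity score, and hence its fusion weight, drops automatically, which is the intuition behind the adaptive weighting in the proposed tracker.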