#### 4.2.1. Qualitative Comparison

Following the qualitative evaluation in Section 4.1, Figure 7 gives an intuitive impression of the registration and fusion results on six typical frame pairs from the self-built multi-spectral database.

**Figure 7.** Registration and fusion results of our method on six typical unregistered visible and infrared image pairs in the self-built multi-spectral face dataset. (**a**) Visible image; (**b**) Thermal infrared image; (**c**) Canny edge maps; (**d**) Superimposed checkerboard pattern of visible and infrared images; (**e**) Fusion results.

As shown in Figure 7, several original visible and thermal infrared images of six individuals are given in Figure 7a,b, respectively; these face images involve different poses and illumination changes. The image pairs captured under bright light are shown in the first three rows, while those captured in the dark are shown in the last three rows with different poses. Figure 7c then shows the discrete point sets of the edge maps extracted by the Canny edge detector. To distinguish the two spectral images, the blue point set denotes the edge map of the visible image and the red point set denotes the edge map of the thermal infrared image. The contour point sets of the face images differ significantly between the thermal infrared and visible edge maps. These discrepancies introduce many outliers and much noise into the edge map alignment, especially in the textured regions of hair and clothes, so it is hard to match the two edge maps accurately.
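
A minimal sketch of this edge-point extraction step, using OpenCV's Canny detector to turn each spectral image into a discrete point set as in Figure 7c. The file names and the threshold values (50/150) are illustrative assumptions, not values taken from the paper:

```python
import cv2
import numpy as np

def edge_point_set(gray_img, low=50, high=150):
    """Return an (N, 2) array of (x, y) edge-pixel coordinates."""
    edges = cv2.Canny(gray_img, low, high)  # binary edge map
    ys, xs = np.nonzero(edges)              # row/column indices of edge pixels
    return np.stack([xs, ys], axis=1)

# Hypothetical input files for one visible/thermal pair.
vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
ir = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)

vis_pts = edge_point_set(vis)  # blue point set in Figure 7c
ir_pts = edge_point_set(ir)    # red point set in Figure 7c
```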

Furthermore, note that we focus only on the registration of the face region; redundant information that is unrelated to face detection, such as the edge points of clothes, can be ignored during matching. The visible image is then aligned to the thermal infrared image by our FR-GLS method, and the registration performance is demonstrated by the checkerboard images in Figure 7d, where the aligned visible image is superimposed on the corresponding thermal infrared image. Apart from the uninteresting regions of hair and clothes, Figure 7d shows that the seams between adjacent grid cells in the checkerboard images are natural, which qualitatively demonstrates the feasibility and effectiveness of the proposed registration method. Finally, for further verification, Figure 7e shows the fusion results of the visible and thermal infrared images obtained by our GF-GP method. Compared with the original thermal infrared images in the second column (Figure 7b), the fused images in the last column (Figure 7e) appear sharper and contain both the apparent details of the visible image and the thermal radiation information of the infrared image. This is beneficial for follow-up face detection and recognition under illumination changes, e.g., when the outdoor illumination changes from bright to dark.
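A hedged sketch of the checkerboard visualization in Figure 7d: alternating grid cells are drawn from the aligned visible image and the thermal infrared image, so misregistration would show up as broken seams at cell boundaries. The cell size (64 px) is an illustrative choice, not a value from the paper:

```python
import numpy as np

def checkerboard(img_a, img_b, cell=64):
    """Interleave two registered, same-sized images in a checker pattern."""
    assert img_a.shape == img_b.shape
    h, w = img_a.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # True on the "white" cells of the checkerboard.
    mask = ((yy // cell) + (xx // cell)) % 2 == 0
    if img_a.ndim == 3:
        mask = mask[..., None]  # broadcast over color channels
    return np.where(mask, img_a, img_b)
```

For example, `checkerboard(aligned_vis, ir)` reproduces the style of overlay shown in Figure 7d for a registered pair.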

#### 4.2.2. Quantitative Comparison

For a quantitative evaluation, we also manually select a set of landmarks in the multi-spectral images (about 40 landmark pairs for each spectral image pair), and then treat the recall over all real face images of an individual as the metric. Figure 8 shows the recall comparison of the four methods on six individuals, where each individual contains about 52 image pairs.
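
A minimal sketch of this landmark-based recall metric: a landmark pair counts as correctly registered when the warped visible landmark lands within a pixel tolerance of its infrared counterpart. The tolerance value below is an assumption, since the paper does not state it here:

```python
import numpy as np

def landmark_recall(warped_vis_pts, ir_pts, tol=3.0):
    """Fraction of landmark pairs within `tol` pixels after registration.

    warped_vis_pts, ir_pts: (N, 2) arrays of corresponding landmarks.
    """
    d = np.linalg.norm(warped_vis_pts - ir_pts, axis=1)  # per-pair error
    return float(np.mean(d <= tol))
```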

**Figure 8.** Quantitative comparisons of multi-spectral face image pairs with 6 individuals in Figure 7.

As Figure 8 shows, the FR-GLS (mauve fork) and RGF (green star) methods, which exploit both global and local structures, are significantly superior to the CPD and SMM methods, which use only global structure, and the curve of the Student's t distribution model (red square) lies mainly above that of the Gaussian distribution model (blue triangle). Furthermore, the curve of our FR-GLS method, which combines SMM and IDSC, is consistently above those of the other three methods. Overall, for the six pairs of typical multi-spectral images in Figure 7, the average registration errors of CPD, SMM, RGF, and our FR-GLS method are about 2.40, 2.24, 2.01, and 1.76 pixels, respectively.
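
The average registration error reported above can be read as the mean Euclidean distance between corresponding landmarks, averaged over all image pairs of an individual; a minimal sketch under that assumption (the exact aggregation is not spelled out in this section):

```python
import numpy as np

def mean_registration_error(pairs):
    """pairs: list of (warped_vis_pts, ir_pts) arrays, each of shape (N, 2)."""
    per_pair = [np.linalg.norm(a - b, axis=1).mean() for a, b in pairs]
    return float(np.mean(per_pair))  # e.g., about 1.76 px for FR-GLS
```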

#### **5. Conclusions**

In this paper, we have introduced a novel multi-sensor face image registration method. It uses the IDSC descriptor, which is more stable than SC, to describe the local features of a face image, and then uses a Student's t mixture probabilistic model to estimate the transformation between visible and infrared images by combining the global and local structures of the feature point sets. To verify the correctness of the registration intuitively, we also propose a multi-spectral image fusion strategy based on guided filtering and gradient preserving. Experimental results on a standard real face database and on a self-built multi-spectral face database demonstrate that the proposed method achieves considerably higher registration accuracy than other state-of-the-art registration methods, which should in turn improve the reliability of fusion-based face recognition systems. In the future, given that the ultimate goal of registration and fusion is to improve the recognition rate, we will conduct a more comprehensive and systematic evaluation of multi-sensor face image registration in combination with the latest deep-neural-network-based face recognition methods.

**Author Contributions:** Conceptualization, W.L. and W.Z.; Methodology, W.L.; Software, W.Z.; Validation, N.L. and W.Z.; Formal Analysis, W.L.; Investigation, W.L.; Resources, M.D.; Data Curation, W.L.; Writing-Original Draft Preparation, W.L.; Writing-Review & Editing, W.L. and W.Z.; Visualization, W.L.; Supervision, N.L. and M.D.; Project Administration, X.L.; Funding Acquisition, M.D. and X.L.

**Funding:** This research was funded by the National Natural Science Foundation of China, grant numbers 51475046 and 51475047, and the National High-tech R&D Program of China, grant number 2015AA042308.

**Conflicts of Interest:** The authors declare no conflict of interest.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
