#### **1. Introduction**

Night vision goggles (NVGs) serve as nighttime visual aids for helicopter pilots in low-light environments. In particular, NVG availability directly affects the safety of nighttime aerial reconnaissance missions, so high equipment availability must be maintained through regular maintenance inspections and verifications. At present, calibration of the aviator's NVG models AN/AVS-6 (V) 1 and AN/AVS-6 (V) 2 still relies heavily on manual labor. During calibration, the NVG is placed on a test bench, and the focus is adjusted manually while the operator observes the image through the eyepiece. After the focus adjustment is completed, compliance with calibration standards is likewise confirmed only by the human eye. Since automatic image detection technology is now mature and widely applied, this study aims to reduce staff training time and ensure proper equipment operation by means of automatic image detection [1].

A clear understanding of the performance and limitations of NVGs allows the autofocus operating process to be developed more accurately. According to the basic structure described in [2], NVG image quality depends on the electromagnetic spectrum signals detected and amplified by the image intensifier. The electro-optic system of the image intensifier is a critical component that significantly affects resolution and light amplification; however, it is easily damaged in strong-light or high-humidity environments. The general architecture of the image intensifier is shown in Figure 1 [2]. Because the image intensifier affects aviator safety, its inspection has become a standardized process. The current aviator's nighttime NVG test bench (TS-3895A/UV) [3] provides the low-light environment required for NVG calibration, but the bench itself cannot adjust the NVG focal length automatically. Beyond the drawback of requiring an operator to observe the eyepiece image and adjust the focus by hand, human factors may lead to inaccurate test results. Therefore, this project uses a direct current (DC) servo driver to turn the focus knob of the NVG, adjusting the focus while acquiring a quantitative rotation angle; the configuration and design are described in [1].

Autofocusing methods can be divided into active and passive approaches [4]. Active autofocusing installs external infrared or other devices to measure the distance between the camera lens and the target. Passive autofocusing instead calculates sharpness information from the images acquired by the camera: after the sharpness of multiple images has been computed, a sharpness curve is obtained, and the peak of that curve corresponds to the best focal distance. Since this study adjusts focus using NVG image information, the passive method was adopted. The key to applying this method is whether effective sharpness values can be computed from the image information, and luminance is the main factor affecting a passive autofocusing system. Previous studies have compared many sharpness computation methods [5] to establish their merits and drawbacks, and such methods have been applied to NVG autofocusing [1]. In passive autofocusing, regardless of the sharpness computation method, the subsequent defect testing of the image displayed by the intensifier is an independent process. Jian and Peng [1] proposed an autofocusing process for NVGs that uses a gradient-based variable-step search and the variation of normalized gray level as its main methods. Wang et al. [6] applied a robust principal component analysis method to multifocus image fusion. An increasing number of related topics have been investigated [7], with low-rank and sparse matrix methods attracting particular interest. Therefore, this study explores their further development and application in NVG autofocusing and image fusion to help determine NVG equipment availability.
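To make the passive approach concrete, the sketch below scores each frame of a focus sweep and takes the peak of the resulting sharpness curve as the best focal position. The Tenengrad (Sobel gradient energy) measure is used here only as a stand-in for whichever sharpness metric is chosen; the function names and frame list are illustrative assumptions, not the implementation used in this study.

```python
# Minimal sketch of passive autofocusing: compute a sharpness score for each
# frame of a focus sweep, then take the peak of the sharpness curve as the
# best focal position. Tenengrad is one common sharpness measure; it stands
# in here for whatever metric is actually adopted.
import numpy as np
from scipy import ndimage

def tenengrad_sharpness(gray: np.ndarray) -> float:
    """Sum of squared Sobel gradient magnitudes over the image."""
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    return float(np.sum(gx**2 + gy**2))

def best_focus_index(frames: list) -> int:
    """Return the sweep index (e.g., motor rotation angle) with peak sharpness."""
    curve = [tenengrad_sharpness(f) for f in frames]
    return int(np.argmax(curve))
```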

**Figure 1.** Structure of image intensifier [2].

The mechanism configuration comprises an NVG testing autofocusing system that includes a platform, motor, mechanism, and camera, as shown in Figure 2; a multifocus problem arises from the slight alignment error between the lens and the NVG, as shown in Figure 3. This study adopted image fusion to resolve the multifocus problem. How to fuse images correctly, so that the result represents the information better than any single input image, is itself an important topic in image fusion [7]. A large number of image fusion techniques have been proposed so far. Among them, wavelet transform-based image fusion is a popular subject of research [8,9] because it preserves spectral precision while improving spatial accuracy. When wavelet decomposition is used, too few decomposition levels yield a fused image with poor spatial accuracy, whereas too many levels reduce the spatial similarity between the fused image and the original [10]. Among fusion methods, structure-aware image fusion [11] and image fusion in the discrete cosine transform (DCT) domain [12,13] are classic approaches widely used in various fields [14,15]. Recent wavelet-based image fusion work includes the following. Vanmali et al. [16] proposed a quantitative measure of ringing artifacts based on structural dissimilarity. Ganasala and Prasad [17] focused on the poor contrast and high computational complexity of fusion outcomes. Seal and Panigrahy [18] studied the translation-invariant à trous wavelet transform and fractal dimension estimated with a differential box-counting method. Hassan et al. [19] implemented image fusion methods that combine the wavelet transform with the learning ability of artificial neural networks. In recent years, deep learning networks have also been used for image fusion [20–22]; in general, their fusion quality depends on the sample characteristics seen during training. Image fusion based on low-rank and sparse matrix characteristics has likewise been a popular topic in recent years. Maqsood and Javed [23] proposed a multimodal image fusion scheme based on two-scale image decomposition and sparse representation, which mainly exploits the edge information of the sparse matrix. Ma et al. [24] proposed a multifocus image fusion method built on a fusion rule for sparse coefficients, derived from optimization theory and solved by the orthogonal matching pursuit method. Wang and Bai [25] proposed a novel low-frequency fusion strategy assisted by sparse representation. Wang [26] proposed a fusion method based on sparse representation and the non-subsampled contourlet transform, and used several indicators to demonstrate the quality of the fusion result. Fu et al. [27] proposed a multifocus image fusion method based on distributed compressed sensing (DCS), which mainly considers the information of the high-frequency images and evaluates the fusion results with visual and quantitative metrics. Among all methods for decomposing data into a low-rank matrix and a sparse matrix, the most classic is robust principal component analysis (RPCA).
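As background for the wavelet-based family discussed above, the following sketch fuses two multifocus images by averaging the coarse approximation and keeping the larger-magnitude detail coefficients. The wavelet name, level count, and fusion rule are illustrative choices for a generic scheme, not any of the cited methods.

```python
# Illustrative sketch of wavelet-based multifocus fusion: decompose both
# source images, average the low-frequency approximation, take the
# max-magnitude detail coefficients, then reconstruct. Wavelet and level
# are arbitrary illustrative choices.
import numpy as np
import pywt

def wavelet_fuse(img_a: np.ndarray, img_b: np.ndarray,
                 wavelet: str = "db2", level: int = 3) -> np.ndarray:
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    # Average the coarse approximation; keep max-magnitude detail coefficients.
    fused = [(ca[0] + cb[0]) / 2.0]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```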
RPCA has been extensively extended and applied. RPCA solved via principal component pursuit (PCP) reduces the amount of computation and has numerous extensions [28], including stable principal component pursuit (SPCP) [28], quantization-based principal component pursuit (QPCP) [29], block-based principal component pursuit (BPCP) [30], and local principal component pursuit (LPCP) [31]. Other approaches for recovering the low-rank and sparse matrices include the subspace tracking family [32], the matrix completion family [33], and the nonnegative matrix factorization family [34]. To date, studies of these methods have reported different advantages for matrix decomposition [28,35,36]. This study attempts to fuse images at different focal distances by decomposing them into low-rank and sparse matrices, considering not only the decomposition and recombination of a single image [37,38] but also the simultaneous decomposition and fusion of two or more images [6,7], extended even to multiple images. Among studies to date, no definitive image fusion rating standard has been established, and different fields reach different conclusions; nevertheless, current rating standards provide evidence-based fusion results and field-applicability studies [39–41]. Thus, the indicators provided by Liu et al. [42] were adopted to rate the fusions; the program implementing the rating standard is available at https://github.com/zhengliu6699/imageFusionMetrics. This study also discusses the feasibility of applying the deep semi-nonnegative matrix factorization (semi-NMF) model [34] in autofocusing and image fusion.
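For reference, the sketch below decomposes a data matrix into low-rank and sparse parts in the style of PCP, using a basic inexact augmented Lagrange multiplier iteration. The weight and step-size heuristics follow common defaults in the PCP literature; this is an illustration, not the exact solver of any cited work.

```python
# Sketch of RPCA via principal component pursuit (PCP): split D into a
# low-rank matrix L (singular-value thresholding) and a sparse matrix S
# (entrywise soft thresholding), updating a Lagrange multiplier Y.
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca_pcp(D, max_iter=500, tol=1e-7):
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))           # standard PCP sparsity weight
    mu = 0.25 * m * n / np.sum(np.abs(D))    # common step-size heuristic
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(max_iter):
        # Low-rank update: singular-value thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft_threshold(sig, 1.0 / mu)) @ Vt
        # Sparse update: entrywise soft thresholding.
        S = soft_threshold(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y += mu * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(D):
            break
    return L, S
```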

**Figure 2.** System installation [1].

**Figure 3.** Minor alignment error between the lens and the night vision goggles (NVG).

#### **2. Materials and Methods**

To examine the strengths and weaknesses of the proposed NVG image autofocusing and fusion method, the processing flow shown in Figure 4 was used. The process consists of several blocks: low-rank and sparse matrix decomposition, image fusion, and autofocus. The image samples, the proposed method, the matrix decomposition process, and the fusion method belong to the low-rank and sparse matrix and image fusion blocks; these are explained in the Description of Tested Images and Image Fusion Using Low-Rank and Sparse Matrix sections. The autofocus block uses the sparse matrix information to compute sharpness and obtain the best-focused image, as sketched below.
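The following is a hypothetical wiring of these blocks, assuming the energy of the sparse component serves as the sharpness score. It reuses the `rpca_pcp` helper from the earlier RPCA sketch and is an illustration of the block structure, not the paper's exact algorithm.

```python
# Hypothetical wiring of the Figure 4 blocks: decompose each captured frame
# into low-rank and sparse parts, score sharpness by the energy of the
# sparse component (which carries edges and detail), and pick the frame
# with the highest score as the best-focused image.
import numpy as np

def sparse_sharpness(frame: np.ndarray) -> float:
    _, S = rpca_pcp(frame.astype(float))   # from the RPCA sketch above
    return float(np.sum(S**2))             # energy of the sparse component

def autofocus(frames: list) -> int:
    scores = [sparse_sharpness(f) for f in frames]
    return int(np.argmax(scores))          # index of the best-focused frame
```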

#### *2.1. Description of Tested Images*

To rate the various fusion methods, the aircraft, clock, disk, lab, leopard, and toy images commonly used in the literature were adopted to test fusion quality; the test images are shown in Figure 5a–l. The NVG images used to test the fusion results are shown in Figure 5m,n. With the NVG mounted on the aviation nighttime NVG test bench (TS-3895A/UV), the DC servo driver turned the focus knob and collected NVG test images at rotation angles from 1 to 110 degrees. To simplify the description of the subsequent algorithms, the collected images were converted from color to gray level, yielding 110 images in total for the autofocusing algorithm. To compare the quality of the traditional methods with the method proposed here, the same image sources as those of Jian and Peng [1] were used.
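A minimal sketch of this preprocessing step is shown below: the 110 frames captured at rotation angles of 1 to 110 degrees are loaded and converted to gray level. The file-name pattern is a hypothetical placeholder; any loader yielding RGB arrays would serve.

```python
# Sketch of the preprocessing step: load the 110 frames captured at motor
# rotation angles 1-110 degrees and convert them to gray level.
import numpy as np
from PIL import Image

def load_gray_sweep(pattern="nvg_angle_{:03d}.png", n_frames=110):
    frames = []
    for angle in range(1, n_frames + 1):
        rgb = np.asarray(Image.open(pattern.format(angle)).convert("RGB"), float)
        # ITU-R BT.601 luma weights for RGB-to-gray conversion.
        frames.append(rgb @ np.array([0.299, 0.587, 0.114]))
    return frames
```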

**Figure 4.** Autofocusing and fusion algorithm handling method.

(**a**) Aircraft 1 (**b**) Aircraft 2


(**c**) Clock 1 (**d**) Clock 2

(**e**) Disk 1 (**f**) Disk 2

(**g**) Lab 1 (**h**) Lab 2

(**i**) Leopard 1 (**j**) Leopard 2

(**k**) Toy 1 (**l**) Toy 2

(**m**) Motor rotation angle of 60 degrees (incorrect focal distance) (**n**) Motor rotation angle of 96 degrees (correct focal distance)

**Figure 5.** Images used for testing the fusion methods (**a**–**l**) and NVG test images (**m**,**n**).
