Article

Printing Defect Detection Based on Scale-Adaptive Template Matching and Image Alignment

1 Electronics and Information School, Yangtze University, Jingzhou 434023, China
2 Institute for Artificial Intelligence, Yangtze University, Jingzhou 434023, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(9), 4414; https://doi.org/10.3390/s23094414
Submission received: 19 February 2023 / Revised: 15 April 2023 / Accepted: 29 April 2023 / Published: 30 April 2023
(This article belongs to the Section Sensing and Imaging)

Abstract

Printing defects are extremely common in the manufacturing industry. Although some studies have been conducted to detect printing defects, the stability and practicality of printing defect detection have received relatively little attention. Currently, printing defect detection is susceptible to external environmental interference such as illumination and noise, which leads to low detection rates and poor practicality. This research develops a printing defect detection method based on scale-adaptive template matching and image alignment. Firstly, a convolutional neural network (CNN) is introduced to adaptively extract deep feature vectors from low-resolution versions of the template and target images. Then, a feature map cross-correlation (FMCC) matching metric is proposed to measure the similarity between the feature maps of the template and the target image, and the matching position is refined by a proposed location refinement method. Finally, the matched region and the template are both sent to the image alignment module to detect printing defects. The experimental results show that the accuracy of the proposed method reaches 93.62%, and that it can quickly and accurately locate defects. The results also show that our method achieves state-of-the-art defect detection performance with strong real-time and anti-interference capabilities.

1. Introduction

Defect detection has always been a focus of research in computer vision [1,2]. Nowadays, industrial printing products fill our daily lives in various forms, such as books, newspapers, advertisements, and packaging boxes. At the same time, the rapid development of industrial printing has brought a series of printing quality problems [3]. Common printing defects take two main forms. The first is color defects, including uneven printing ink, deviation of printing color, and distortion of printing color. The second is shape defects, including incomplete printing, offset printing fonts, and distorted printing patterns. Therefore, detecting printing defects quickly and efficiently is a major concern of the manufacturing industry. Early printing defect detection relied mainly on manual visual inspection, which requires considerable manpower and material resources and suffers from slow speed and low precision. To solve these problems, computer vision technology has been introduced for printing defect detection [4].
Defect detection methods are among the most popular research topics in computer vision [5,6,7]. Although some studies have been conducted to detect printing defects, the stability and practicality of printing defect detection have received relatively little attention. At present, printing defect detection is mainly implemented based on template matching [8]. Template matching refers to finding the regions of a target image that are similar to a given template. It has long been a focus of research in image processing and has been applied in many fields, such as object detection [9], object tracking [10], and defect detection [11]. Traditional template matching algorithms operate at the pixel level [12] and include the sum of squared differences (SSD) [13], zero-mean normalized cross-correlation (ZNCC) [14], and the sum of absolute differences (SAD) [15]. They are often too time-consuming to achieve real-time detection. In addition, they are not robust in complex scenes, which limits their application. In real scenarios, the target image will inevitably be contaminated by environmental interference, including illumination and noise. Therefore, traditional template matching cannot satisfy practical industrial requirements.
To address these limitations, researchers have proposed methods based on deep learning. With the rapid development of artificial intelligence, deep convolutional neural networks (CNNs) have become dominant in various fields, including image inpainting [16], object tracking [17], and image segmentation [18]. In recent years, a two-stream CNN structure called the Siamese network [19] has emerged, whose operation can also be viewed as a matching problem. Inspired by the Siamese network, neural networks were introduced to measure the similarity between the target image and the template, showing better anti-interference ability in complex scenes. A representative approach is the parameter-free and robust best-buddies similarity (BBS) method [20], which combines a neural network with nearest-neighbor (NN) matching by extracting location and shape information. Considering that samples may be deformed, the deformable diversity similarity (DDIS) method [21] was introduced to measure similarity by finding the features of potential matching locations. In addition, to overcome interference from the external environment, such as illumination and noise, Fang et al. [22] proposed a smart reinforcement learning (RL) method. This method learns to tune parameters automatically to enhance model performance, which provides a broader perspective on overcoming interference and a useful reference for template matching.
Motivated by the successes of deep learning, this paper proposes a printing defect detection method composed of two modules: a template matching module and an image alignment module. The main contributions of this paper are as follows:
(1)
A scale-adaptive deep convolutional feature extraction method is proposed for template matching. Moreover, the feature extraction is implemented on a low-resolution version of the template and the target image. Therefore, the method effectively decreases the matching time.
(2)
A feature map cross-correlation (FMCC) matching metric is proposed to measure the similarity between the feature map of the template and the target image. The introduction of a matching metric can greatly improve the accuracy of the similarity measurement.
(3)
Furthermore, an image alignment and difference detection module is introduced to adjust the defect position and greatly improve the effect of defect detection. Therefore, the proposed method can obtain state-of-the-art detection performance with strong real-time performance and anti-interference capabilities.
The rest of the paper is organized as follows. In Section 2, we present the related works on printing defect detection. Then, our proposed method is detailed in Section 3, while Section 4 presents the experimental results. Finally, the conclusion is given in Section 5.

2. Related Work

In recent years, the manual inspection of printing defects has become unable to meet the strict requirements for artistic effect and print quality. Computer vision methods have made some progress in defect detection. However, current printing defect detection technology still faces technical limitations, and there remains considerable room for improvement in both accuracy and speed.
In recent years, researchers have proposed several computer vision-based defect detection methods [23]. Golnabi et al. [24] proposed a printing defect detection method based on a three-dimensional spatial coordinate system, which used the pixel indexes in the three-dimensional coordinate system to achieve defect detection. The experimental results showed that the method can detect defects in complex printed matter, but its robustness was poor. For defect classification, Luo et al. [25] combined a BP neural network with an image histogram to achieve defect classification and detection. The system can extract the defects of small-format printed matter and accurately classify printed matter images, but it requires special hardware and has poor applicability. Meanwhile, by ignoring unimportant pixels, Salahdine et al. [26] proposed a defect detection method based on a dynamic threshold to shorten the detection time. The simulation results showed that the algorithm was fast and could output multiple parameters. Considering that noise also affects the detection results, Tian et al. [27] proposed an adaptive filtering denoising algorithm, which can effectively suppress noise without damaging image information.
Template matching is the core technology of the defect detection method. It searches for the region of the target image that best matches a given template. To accurately locate the matching region with the highest similarity to the template, it is usually necessary to define a similarity metric between the target image and the template. Traditional template matching metrics are computed at the pixel level and are therefore time-consuming. Moreover, their matching performance deteriorates when the image is contaminated with external interference, such as illumination and noise.
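For reference, the following is a minimal NumPy sketch of a pixel-level sliding-window matcher using the ZNCC metric; the function name and tolerance constants are illustrative, and the exhaustive double loop is exactly what makes such methods slow on large images.

```python
import numpy as np

def zncc_match(sample, template):
    """Exhaustive pixel-level ZNCC matching (illustrative sketch).

    Slides the template over every position of a grayscale sample and
    returns the top-left corner with the highest zero-mean normalized
    cross-correlation score.
    """
    H, W = sample.shape
    h, w = template.shape
    t = template.astype(np.float64) - template.mean()
    t_norm = np.linalg.norm(t) + 1e-12

    best_score, best_pos = -np.inf, (0, 0)
    for y in range(H - h + 1):          # exhaustive search: O(H*W*h*w)
        for x in range(W - w + 1):
            p = sample[y:y + h, x:x + w].astype(np.float64)
            p -= p.mean()
            score = float((p * t).sum() / (np.linalg.norm(p) * t_norm + 1e-12))
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score
```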
In recent years, several robust algorithms combined with neural networks have been proposed for template matching and defect detection. Based on the concept of bi-directional similarity (BDS), the parameter-free and robust best-buddies similarity (BBS) method was proposed [28]. Instead of using the raw distance values directly, it finds the bidirectional matching region by counting the number of best-buddies pairs (BBPs), which improves robustness. Moreover, to improve computational efficiency and make better use of neural networks, the deformable diversity similarity (DDIS) matching method was proposed based on the nearest-neighbor (NN) method. This method not only considers complex deformations but also improves localization accuracy. However, DDIS has difficulty dealing with scale changes. To accommodate matching patterns in non-single scenarios, Cheng et al. [29] proposed a quality-aware template matching (QATM) method, which supports one-to-one, one-to-many, and many-to-many matching. However, these methods perform poorly in complex industrial environments. In this paper, we propose an improved template matching algorithm that achieves better matching results than previous methods. Moreover, an image alignment module is introduced to adjust the defect position and achieve state-of-the-art detection performance.

3. Methods

To achieve efficient and reliable printing defect detection, we propose a printing defect detection method based on scale-adaptive template matching and image alignment. As shown in Figure 1, the schematic diagram consists of two modules: a scale-adaptive template matching module, and an image alignment module. The details of the method will be presented in this section.

3.1. Template Matching Module

To address the shortcomings of existing template matching algorithms, this paper proposes a template matching algorithm based on a scale-adaptive feature extraction method applied to low-resolution images. Moreover, a novel feature map cross-correlation (FMCC) matching metric is introduced to measure similarity during the matching process. As shown in Figure 2, the proposed template matching module consists of two parts: a scale-adaptive deep convolutional feature extraction method and an FMCC-based similarity measure method.

3.1.1. Scale-Adaptive Deep Convolutional Feature Extraction Method

In order to better describe our proposed template matching module, the sample (target image) S and the template T are given explicit spatial domain parameters. We suppose the sample $S \in I^{a \times b \times 3}$, where a and b represent the width and height of the sample, respectively, and the template $T \in I^{w \times h \times 3}$, where w and h represent the width and height of the template, respectively. Our method does not directly use the traditional sliding window to search the sample. Instead, we introduce a CNN to extract feature vectors of different depths. Furthermore, our feature extraction method specifies neither the size of the input image nor the layer from which depth features are taken; the proposed method adaptively identifies the optimal layer in the CNN and thus has superior robustness.
The scale-adaptive feature extraction method proposed in this paper will be introduced in detail below. We use VGG-Net [30] as the feature extraction network and do not specify the size of the sample S or the template T. For different images, VGG-Net is introduced to adaptively identify the optimal layer and extract the depth feature vectors. For CNNs, the feature map output by each layer has its corresponding receptive field, and the receptive field of the l-th layer is defined as:
$$rf_l = \begin{cases} rf_{l-1} + (f_l - 1)\prod_{i=1}^{l-1} s_i, & l > 1, \\ 3, & l = 1, \end{cases}$$
where $rf_l$ represents the receptive field of the l-th layer, $rf_{l-1}$ represents the receptive field of the previous layer, $f_l$ is the filter size of the l-th layer, and $s_i$ represents the stride of the i-th layer. Here, 3 is the initial receptive field of the first convolution layer. If the receptive field of the template is smaller than the receptive field of the optimal layer, the layer will fill some meaningless areas with zeros. Therefore, we limit the optimal layer to have a receptive field smaller than or equal to the template. Here, the constraint on the receptive field is detailed as:
$$l = \max(l - k,\ 1) \quad \text{s.t.} \quad rf_l \le \min(w, h),$$
where k is an integer greater than or equal to 0. Note that we specify the amount of zero padding required for the receptive field of the optimal layer. For example, if a layer has a 5 × 5 receptive field and the stride of the filter is 2, the number of zeros to be filled in the optimal layer is 5 + 2d, where d is an integer greater than or equal to 0. Compared with the sliding window method, we compute the feature vectors only once for each sample and template with the CNN, which greatly decreases the number of parameters and shortens the operation time.
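The layer-selection rule above can be made concrete with a short sketch. The following Python snippet, assuming a plain stack of convolution/pooling layers with known filter sizes and strides (function names are illustrative, not from the paper's code), computes the receptive field of each layer with the recursion above and picks the deepest layer whose receptive field does not exceed the template size.

```python
def receptive_fields(filter_sizes, strides):
    """Receptive field per layer: rf_1 = 3 and, for l > 1,
    rf_l = rf_{l-1} + (f_l - 1) * prod(s_1, ..., s_{l-1})."""
    rfs, rf, jump = [], 0, 1     # jump = cumulative stride of earlier layers
    for l, (f, s) in enumerate(zip(filter_sizes, strides), start=1):
        rf = 3 if l == 1 else rf + (f - 1) * jump
        rfs.append(rf)
        jump *= s                # update cumulative stride for the next layer
    return rfs

def select_optimal_layer(rfs, w, h):
    """Index (1-based) of the deepest layer whose receptive field
    is still smaller than or equal to the template size."""
    candidates = [l for l, rf in enumerate(rfs, start=1) if rf <= min(w, h)]
    return max(candidates) if candidates else 1

# Example: the first two VGG-style blocks (3x3 convs, 2x2 max-pooling),
# with a hypothetical 48 x 32 template
rfs = receptive_fields([3, 3, 2, 3, 3, 2], [1, 1, 2, 1, 1, 2])
layer = select_optimal_layer(rfs, w=48, h=32)
```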

3.1.2. FMCC-Based Similarity Measure Method

To effectively decrease the matching time, the feature extraction method is implemented on a low-resolution version of the template and the sample. Our method first sets the scale zooming factor Z = 2^r (r = 0, 1, 2, …) to scale down the template T and the sample S in equal proportions. Then, the scaled sample S_z and the scaled template T_z are both sent into the feature extraction network. Finally, the optimal layer is adaptively obtained, and the feature map X (of the template) and the feature map Y (of the sample) are extracted from the optimal layer. In addition, we also keep the original sample S and template T, so that the matching result can be mapped back to the high-resolution images and easily embedded into different scenarios. For the extracted feature maps, we measure the similarity between X and Y by FMCC, which is defined as:
$$FMCC_{i,j} = \frac{\langle X, \bar{Y} \rangle}{\|X\|\,\|\bar{Y}\|},$$
where $\langle \cdot , \cdot \rangle$ denotes the inner product of X and $\bar{Y}$. Inspired by traditional template matching algorithms, our proposed FMCC matching metric also adopts the sliding window strategy. However, our method operates on the feature map, which greatly reduces the computational cost compared with sliding the window directly over the image. The FMCC metric first extracts a feature patch $\bar{Y}$ from the sample feature map Y; the size of the feature patch $\bar{Y}$ is the same as that of the template feature map X. Then, a convolution operation is used to calculate the location (u, v) that has the maximum FMCC. Finally, the location obtained on the feature map is mapped back to the sample image through back-projection, and the obtained region is the result of template matching.
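A minimal PyTorch sketch of this step is given below, assuming the template and sample feature maps have already been extracted from the optimal layer; it treats the template feature map as a convolution kernel so that the inner product in the FMCC definition above is evaluated at every location in a single convolution. Tensor shapes and function names are illustrative.

```python
import torch
import torch.nn.functional as F

def fmcc_map(sample_feat, template_feat, eps=1e-8):
    """FMCC between the template feature map X and every same-sized
    patch of the sample feature map Y.

    sample_feat:   (C, Hs, Ws) tensor  -- feature map Y of the scaled sample
    template_feat: (C, Ht, Wt) tensor  -- feature map X of the scaled template
    Returns a (Hs - Ht + 1, Ws - Wt + 1) map of FMCC scores.
    """
    X = template_feat.unsqueeze(0)              # (1, C, Ht, Wt) conv kernel
    Y = sample_feat.unsqueeze(0)                # (1, C, Hs, Ws)

    # <X, Y_patch> at every location, computed as a single convolution
    inner = F.conv2d(Y, X)                      # (1, 1, H', W')

    # ||Y_patch|| at every location via a box filter over Y**2
    ones = torch.ones_like(X)
    patch_norm = torch.sqrt(F.conv2d(Y ** 2, ones).clamp_min(eps))

    return (inner / (patch_norm * X.norm() + eps))[0, 0]

# Matching position (u, v) on the feature map = argmax of the FMCC map:
# scores = fmcc_map(Y_feat, X_feat)
# u, v = divmod(scores.argmax().item(), scores.shape[1])   # row u, column v
```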
Moreover, we use a location refinement algorithm to further improve matching accuracy. Using the FMCC scores as weights, the maximum location obtained on the feature map is mapped back to a region of the sample image by a weighted sum. Firstly, we denote the position of the initial box obtained on the sample image as $((x_1^*, y_1^*), (x_2^*, y_2^*))$, where $(x_1^*, y_1^*)$ is the upper-left corner of the initial box and $(x_2^*, y_2^*)$ is the bottom-right corner. Then, the position of the box after refinement, $((x_1, y_1), (x_2, y_2))$, is obtained as follows:
$$x_1 = \frac{\sum_{m=-1}^{1}\sum_{n=-1}^{1} FMCC_{u+m,\,v+n}\left(x_1^{*} + n\prod_{i=1}^{l-1} s_i\right)}{\sum_{m=-1}^{1}\sum_{n=-1}^{1} FMCC_{u+m,\,v+n}}$$
$$y_1 = \frac{\sum_{m=-1}^{1}\sum_{n=-1}^{1} FMCC_{u+m,\,v+n}\left(y_1^{*} + m\prod_{i=1}^{l-1} s_i\right)}{\sum_{m=-1}^{1}\sum_{n=-1}^{1} FMCC_{u+m,\,v+n}}$$
$$x_2 = \frac{\sum_{m=-1}^{1}\sum_{n=-1}^{1} FMCC_{u+m,\,v+n}\left(x_2^{*} + n\prod_{i=1}^{l-1} s_i\right)}{\sum_{m=-1}^{1}\sum_{n=-1}^{1} FMCC_{u+m,\,v+n}}$$
$$y_2 = \frac{\sum_{m=-1}^{1}\sum_{n=-1}^{1} FMCC_{u+m,\,v+n}\left(y_2^{*} + m\prod_{i=1}^{l-1} s_i\right)}{\sum_{m=-1}^{1}\sum_{n=-1}^{1} FMCC_{u+m,\,v+n}}$$
This location is obtained on the low-resolution images. In order to restore the matching area to the original sample S, we reversely scale up the refined position $((x_1, y_1), (x_2, y_2))$ to the high-resolution image.
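The refinement and rescaling steps can be sketched as follows; this is a schematic NumPy implementation under the assumption of a 3 × 3 FMCC neighbourhood around the maximum (u, v), with boundary handling omitted and variable names chosen for illustration.

```python
import numpy as np

def refine_box(fmcc, u, v, box, total_stride, zoom):
    """Weighted refinement of an initial box using the 3x3 FMCC window
    around (u, v), followed by rescaling to the full-resolution sample.

    fmcc:         2-D array of FMCC scores on the feature map
    (u, v):       row/column of the maximum FMCC score
    box:          (x1*, y1*, x2*, y2*) initial box on the low-resolution sample
    total_stride: product of the strides up to the optimal layer
    zoom:         scale zooming factor Z = 2**r used when downscaling
    """
    x1s, y1s, x2s, y2s = box
    w = fmcc[u - 1:u + 2, v - 1:v + 2]               # 3x3 weight window
    wsum = w.sum()

    # column offsets n and row offsets m, expressed in image pixels
    n = np.array([-1, 0, 1])[None, :] * total_stride
    m = np.array([-1, 0, 1])[:, None] * total_stride

    x1 = (w * (x1s + n)).sum() / wsum
    y1 = (w * (y1s + m)).sum() / wsum
    x2 = (w * (x2s + n)).sum() / wsum
    y2 = (w * (y2s + m)).sum() / wsum

    # map the refined box back to the original high-resolution sample
    return tuple(int(round(c * zoom)) for c in (x1, y1, x2, y2))
```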

3.2. Image Alignment Module

After template matching, an image alignment module is introduced for defect detection. The implementation process is shown in Figure 3. Firstly, image alignment is used to calibrate the matching region accurately, and then difference detection is used to complete the printing defect detection.
Since our template matching algorithm is performed on low-resolution images, there will be some positional deviation in the matching region. Therefore, it is necessary to use image alignment to correct the offset of the matching region. Image alignment searches for the coordinate relationship between the pixels of the two images. If there is only an affine transformation between the two images, image alignment reduces to solving for the affine transformation matrix. In this paper, there is only a coordinate distortion between the matching region and the template, and thus image alignment can be regarded as a matrix estimation problem. We denote the template as $I_t(H)$ with pixel coordinates $H = (x_t, y_t)$, and the region to be aligned as $I_w(G)$ with pixel coordinates $G = (x_w, y_w)$. Then, image alignment only needs to find the transformation between H and G, that is, $G = \beta(H; p)$, where $p = (p_1, \ldots, p_n)^T$. Finally, the alignment problem is converted into an estimation of the parameter p such that:
$$I_t(H) = I_w(\beta(H; p)).$$
Assuming that there are K corresponding central coordinates between the alignment region and the template, the template vector $i_t$ and the warped vector $i_w$ are defined as follows:
$$i_t = \left[\, I_t(H_1) \;\; I_t(H_2) \;\; \cdots \;\; I_t(H_K) \,\right]^{T}$$
$$i_w(p) = \left[\, I_w(G_1(p)) \;\; I_w(G_2(p)) \;\; \cdots \;\; I_w(G_K(p)) \,\right]^{T}$$
Then, the enhanced correlation coefficient (ECC) alignment criterion is defined to estimate the motion transformation matrix and realize the alignment of the warped image:
$$ECC = \left\| \frac{\bar{i}_t}{\|\bar{i}_t\|} - \frac{\bar{i}_w(p)}{\|\bar{i}_w(p)\|} \right\|^{2},$$
where $\bar{i}_t$ and $\bar{i}_w$ denote the zero-mean versions of the template vector and the warped vector, respectively.
After alignment, the template and the aligned region are both sent to the difference detection module to extract difference information. We perform adaptive threshold binarization on the template and the target image and introduce the structural similarity index (SSIM) [31] to evaluate the similarity between the two binary images. It should be noted that we do not output the SSIM score, only the difference information. Then, we perform a defect contour search and finally obtain the defect position.
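The alignment and difference-detection steps above can be sketched with OpenCV and scikit-image, as shown below. This is a minimal, illustrative implementation under an affine motion model; the iteration criteria, adaptive-threshold block size, and area filter are assumed values, not the parameters used in the paper.

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity

def detect_defects(template_bgr, matched_bgr):
    """ECC alignment of the matched region to the template, followed by
    adaptive binarization, SSIM difference extraction, and contour search."""
    tpl = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2GRAY)
    img = cv2.cvtColor(matched_bgr, cv2.COLOR_BGR2GRAY)

    # 1) estimate the affine warp that aligns the matched region to the template
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(tpl, img, warp, cv2.MOTION_AFFINE,
                                   criteria, None, 5)
    aligned = cv2.warpAffine(img, warp, (tpl.shape[1], tpl.shape[0]),
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

    # 2) adaptive-threshold binarization of both images
    bin_tpl = cv2.adaptiveThreshold(tpl, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                    cv2.THRESH_BINARY, 31, 5)
    bin_img = cv2.adaptiveThreshold(aligned, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                    cv2.THRESH_BINARY, 31, 5)

    # 3) SSIM difference map (only the map is used, not the score)
    _, diff = structural_similarity(bin_tpl, bin_img, full=True)
    diff = ((1.0 - diff) * 255).astype(np.uint8)

    # 4) contour search on the difference map to localize defects
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]
```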

4. Experimental Results and Analysis

In this section, we detail the experimental implementation process and evaluate the proposed printing defect detection method, including template matching performance and defect detection performance.

4.1. Evaluation Metrics

In order to verify the effectiveness of the proposed printing defect detection method, we collected 51 types of printing images of different sizes. Each type contained four different images: standard images without defects, called zero-defect images (ZD); defect images (D); zero-defect images with interference (ZD_I); and defect images with interference (D_I). Examples of the printing images are shown in Figure 4. For the entire method, we used a confusion matrix to calculate several metrics for performance evaluation, including accuracy, precision, recall, F1-Score, and the area under the curve (AUC). The confusion matrix is shown in Table 1.
Here, TP (true positive) means that a positive sample is correctly predicted as positive, FP (false positive) means that a negative sample is incorrectly predicted as positive, TN (true negative) means that a negative sample is correctly predicted as negative, and FN (false negative) means that a positive sample is incorrectly predicted as negative. These counts are used to compute the accuracy, precision, recall, F1-Score, and AUC [32]. In addition, in order to more intuitively reflect the detection performance on each type of sample, this paper uses the true detection rate (TDR) and the false detection rate (FDR) as evaluation metrics, which are defined as follows:
$$\mathrm{TDR} = \frac{\text{true detection number}}{\text{defective sample number}} \times 100\%$$
$$\mathrm{FDR} = \frac{\text{false detection number}}{\text{total sample number}} \times 100\%$$
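For completeness, a small helper sketch for these evaluation quantities is shown below; the function names are illustrative and simply restate the definitions above and the standard confusion-matrix formulas.

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

def tdr_fdr(true_detections, false_detections, defective_samples, total_samples):
    """True and false detection rates, as percentages."""
    return (100.0 * true_detections / defective_samples,
            100.0 * false_detections / total_samples)
```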
To evaluate the performance of our proposed scale-adaptive template matching algorithm, we define the accuracy of template matching by the overlap-area measure, which is defined as:
$$\mathrm{IOU} = \frac{area(A_P \cap A_{GT})}{area(A_P \cup A_{GT})},$$
where $A_P$ and $A_{GT}$ represent the prediction box and the ground-truth box, respectively, and the IOU reflects the overlap between the prediction results and the ground-truth results. The proportion of matches whose overlap exceeds a threshold $TH \in [0, 1]$ is used to obtain a success curve, and the area under the curve (AUC) is then calculated to quantify the accuracy of each method.
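A minimal sketch of this overlap measure for two axis-aligned boxes, with coordinates given as (x1, y1, x2, y2), is shown below for reference.

```python
def iou(box_p, box_gt):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_p[0], box_gt[0]), max(box_p[1], box_gt[1])
    ix2, iy2 = min(box_p[2], box_gt[2]), min(box_p[3], box_gt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_gt = (box_gt[2] - box_gt[0]) * (box_gt[3] - box_gt[1])
    return inter / (area_p + area_gt - inter)
```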
In addition, the experimental environment is a 64-bit Windows 10 operating system, and all experiments are implemented on a CPU (AMD Ryzen 7 5800H with Radeon Graphics, 3.20 GHz). Therefore, our method has the advantage of low hardware requirements.

4.2. Experiment Results of Scale-Adaptive Template Matching

The proposed method consists of a scale-adaptive template matching module and an image alignment module. Moreover, the template matching performance will directly impact the defect detection results. Therefore, we first compared the performance of the proposed scale-adaptive template matching algorithm (FMCC) with several state-of-the-art methods, including SSD [13], ZNCC [14], SAD [15], and QATM [29].
In order to prove the effectiveness of our proposed method more reliably, we also conducted experiments on a public dataset. During the experiments, we followed the evaluation protocol of [20] and used the public dataset proposed by Wu et al. [33]. The dataset covers different challenging situations, including deformation, illumination change, partial occlusion, and template matching. Meanwhile, success curves over varying IOU thresholds and the AUC were used for quantitative comparison.
Firstly, we quantitatively analyzed the template matching methods on the collected printing dataset. As shown in Figure 5a, the overall accuracy of the FMCC method proposed in this paper was the highest (0.844), although the performance of SAD (0.827) was close to ours. In addition, the matching methods were evaluated on the public dataset; the performance of FMCC and of all baseline methods is shown in Figure 5b. Again, our FMCC method achieved the best performance.
Then, we tested the time performance of these template matching algorithms on the collected printing dataset, and the results are shown in Figure 6. The execution time of our FMCC (7.23 s) is significantly shorter than that of SAD (8.49 s), SSD (8.26 s), and ZNCC (31.95 s). Considering that the CNN-based feature extraction method is more complex than the sliding window method, we introduced a scale zooming factor to accelerate the implementation (SZ-FMCC). This method scales down the template and sample in equal proportions, which reduces the parameter computation. As a result, the accelerated method (SZ-FMCC) achieves a matching time of 0.57 s, which is about 12 times faster than the baseline method (FMCC).
As shown in Figure 5 and Figure 6, we introduced a scale zooming factor to greatly accelerate the implementation (SZ-FMCC), but its AUC decreased to 0.811, which had a bad influence on defect detection. Therefore, we introduced an image alignment method to greatly improve the AUC of the method from 0.811 to 0.984 (SZ-FMCC+IA). Simultaneously, the execution time only increased from 0.57 s to 0.62 s.
In addition, Figure 7 shows the matching results of the different challenging situations. It can be seen that the matching regions of the proposed method almost overlapped with the labeled ground-truth regions.
At the same time, in order to more intuitively illustrate the performance, we calculated the IOU of the predicted box and the ground-truth box. As shown in Table 2, it can be seen that the IOU of SZ-FMCC is lower than FMCC. However, the IOU of SZ-FMCC+IA increased significantly after introducing the alignment method, indicating that our proposed method improves the accuracy of template matching effectively.

4.3. Experiment Results of Printed Matter Defect Detection

In order to validate the detection performance, we tested the proposed method on the printing dataset. Table 3 shows the different evaluation values, including the accuracy, precision, recall, F1-Score, and AUC. As shown in Table 3, the accuracy of the proposed defect detection method reached 93.62%, indicating that the model has a high probability of correctly identifying defects. The recall was as high as 100%, which means that the model detected all defects in the actual samples and meets practical industrial detection requirements. Moreover, the F1-Score, as the harmonic mean of precision and recall, reached 94.01%. Similarly, the AUC also reflects the overall satisfactory performance.
Meanwhile, in order to discuss the impact of interference, the confusion matrices for the different types are shown in Figure 8. The confusion matrices show that the false detections generally come from samples with interference. This is mainly because interference may generate false defects, which influence the detection results.
Table 4 shows the effect of the model on each sample type. The proposed printing defect detection method has a TDR of 96.07% for zero-defect images (ZD), and a TDR of 94.12% for defect images (D). Although the false detection mainly results from the samples with interference (ZD_I and D_I), the method has high anti-interference ability in general.
In order to verify the effect of the scale zooming factor Z = 2^r (r = 0, 1, 2, …) on the detection speed as r increases, we randomly selected some printing images to test the performance. The changing trend between the scale zooming factor and the detection speed is shown in Figure 9.
It can be seen from Figure 9 that as the scale zooming factor r increases, the detection time tends to decrease. Furthermore, these curves are almost the same, indicating that our method has high robustness.

5. Conclusions

In this paper, we propose a novel printing defect detection method based on scale-adaptive template matching and image alignment. We also introduce a new similarity measurement metric, called feature map cross-correlation (FMCC), to improve the accuracy of the similarity measurement. Our method extracts the underlying features of the template and the target image through a scale-adaptive deep convolutional network and then performs matching on the basis of FMCC. The method greatly improves the running speed while ensuring detection accuracy. The experimental results demonstrate that the method can quickly and accurately find the location of defects. It is also shown that our method achieves state-of-the-art defect detection performance with strong real-time detection and anti-interference performance.
The limitation of this work is that the parameters of the input model were obtained through a large number of trials. Indeed, these parameters can be determined by the underlying features of the image. In the future, we will continue to optimize the method by designing appropriate adaptive input parameters for each module. Meanwhile, we can conduct research in the field of reinforcement learning. In addition, we will apply the proposed method to defect detection of more industrial products.

Author Contributions

Conceptualization, X.L. and L.Z.; methodology, X.L.; software, Y.L. and Y.G.; validation, X.L. and Y.L.; formal analysis, X.L. and Y.L.; investigation, Y.L. and Y.G.; resources, L.Z.; data curation, Y.G.; writing—original draft preparation, X.L.; writing—review and editing, L.Z.; visualization, X.L.; supervision, Y.G. and L.Z.; project administration, L.Z.; funding acquisition, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Natural Science Foundation of China (No.61901059).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ma, B.; Zhu, W.; Wang, Y.; Wu, H.; Yang, Y.; Fan, H.; Xu, H. The defect detection of personalized print based on template matching. In Proceedings of the IEEE International Conference on Unmanned Systems, Beijing, China, 27–29 October 2017; pp. 266–271. [Google Scholar] [CrossRef]
  2. Ming, W.; Shen, F.; Li, X.; Zhang, Z.; Du, J.; Chen, Z.; Cao, Y. A comprehensive review of defect detection in 3C glass components. Measurement 2015, 158, 107722. [Google Scholar] [CrossRef]
  3. Wang, Y.; Xu, S.; Zhu, Z.; Sun, Y.; Zhang, Z. Real-time Defect Detection Method for Printed Images Based on Grayscale and Gradient Differences. J. Eng. Sci. Technol. Rev. 2018, 11, 180–188. [Google Scholar] [CrossRef]
  4. Gao, F.; Li, Z.; Xiao, G.; Yuan, X.; Han, Z. An online inspection system of surface defects for copper strip based on computer vision. In Proceedings of the 5th International Congress on Image and Signal Processing, Chongqing, China, 16–18 October 2012; pp. 1200–1204. [Google Scholar] [CrossRef]
  5. Xing, Z.; Zhang, Z.; Yao, X.; Qin, Y.; Jia, L. Rail wheel tread defect detection using improved YOLOv3. Measurement 2022, 203, 111959. [Google Scholar] [CrossRef]
  6. Lu, Q.; Lin, J.; Luo, L.; Zhang, Y.; Zhu, W. A supervised approach for automated surface defect detection in ceramic tile quality control. Adv. Eng. Inform. 2022, 53, 101692. [Google Scholar] [CrossRef]
  7. Zhang, T.; Zhang, C.; Wang, Y.; Zou, X.; Hu, T. A vision-based fusion method for defect detection of milling cutter spiral cutting edge. Measurement 2021, 177, 109248. [Google Scholar] [CrossRef]
  8. Zhang, Z.; Yang, X.; Jia, X. Scale-Adaptive NN-Based Similarity for Robust Template Matching. IEEE Trans. Instrum. Meas. 2021, 70, 1–9. [Google Scholar] [CrossRef]
  9. Lei, X.; Ohuchi, T.; Kitamura, M.; Li, X.; Li, Q. An effective method for laboratory acoustic emission detection and location using template matching. J. Rock Mech. Geotech. Eng. 2022, 14, 1642–1651. [Google Scholar] [CrossRef]
  10. Yan, B.; Xiao, L.; Zhang, H.; Xu, D.; Ruan, L.; Wang, Z.; Zhang, Y. An adaptive template matching-based single object tracking algorithm with parallel acceleration. J. Vis. Commun. Image Represent. 2019, 64, 102603. [Google Scholar] [CrossRef]
  11. Kong, Q.; Wu, Z.; Song, Y. Online detection of external thread surface defects based on an improved template matching algorithm. Measurement 2022, 195, 111087. [Google Scholar] [CrossRef]
  12. Ouyang, W.; Tombari, F.; Mattoccia, S.; Di Stefano, L.; Cham, W.K. Performance Evaluation of Full Search Equivalent Pattern Matching Algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 127–143. [Google Scholar] [CrossRef]
  13. Hisham, M.B.; Yaakob, S.N.; Raof, R.A.A.; Nazren, A.B.A.; Wafi, N.M. Template Matching using Sum of Squared Difference and Normalized Cross Correlation. In Proceedings of the IEEE Student Conference on Research and Development, Kuala Lumpur, Malaysia, 13–14 December 2015; pp. 100–104. [Google Scholar] [CrossRef]
  14. Wang, X.; Wang, X.; Han, L. A Novel Parallel Architecture for Template Matching based on Zero-Mean Normalized Cross-Correlation. IEEE Access 2019, 7, 186626–186636. [Google Scholar] [CrossRef]
  15. Wong, S.; Vassiliadis, S.; Cotofana, S. A sum of absolute differences implementation in FPGA hardware. In Proceedings of the EUROMICRO, Dortmund, Germany, 4–6 September 2002; pp. 183–188. [Google Scholar] [CrossRef]
  16. Zhang, X.; Zhai, D.; Li, T.; Zhou, Y.; Li, Y. Image inpainting based on deep learning: A review. Inf. Fusion 2023, 90, 74–94. [Google Scholar] [CrossRef]
  17. Saada, M.; Kouppas, C.; Li, B.; Meng, Q. A multi-object tracker using dynamic Bayesian networks and a residual neural network based similarity estimator. Comput. Vis. Image Underst. 2022, 225, 103569. [Google Scholar] [CrossRef]
  18. Li, H.; Fan, J.; Hua, Q.; Li, X.; Wen, Z.; Yang, M. Biomedical sensor image segmentation algorithm based on improved fully convolutional network. Measurement 2022, 197, 111307. [Google Scholar] [CrossRef]
  19. He, A.; Luo, C.; Tian, X.; Zeng, W. A twofold siamese network for real-time object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Beijing, China, 18–23 June 2018; pp. 4834–4843. [Google Scholar] [CrossRef]
  20. Oron, S.; Dekel, T.; Xue, T.; Freeman, W.T.; Avidan, S. Best-Buddies Similarity—Robust Template Matching Using Mutual Nearest Neighbors. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1799–1813. [Google Scholar] [CrossRef]
  21. Talmi, I.; Mechrez, R.; Zelnik-Manor, L. Template matching with deformable diversity similarity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 175–183. [Google Scholar] [CrossRef]
  22. Fang, F.; Xu, Q.; Cheng, Y.; Sun, Y.; Lim, J.-H. Image Understanding With Reinforcement Learning: Auto-Tuning Image Attributes and Model Parameters for Object Detection and Segmentation. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6671–6685. [Google Scholar] [CrossRef]
  23. Luo, J.; Yang, Z.; Li, S.; Wu, Y. FPCB Surface Defect Detection: A Decoupled Two-Stage Object Detection Framework. IEEE Trans. Instrum. Meas. 2021, 70, 1–11. [Google Scholar] [CrossRef]
  24. Golnabi, H.; Asadpour, A. Design and application of industrial machine vision systems. Robot. Comput.-Integr. Manuf. 2007, 23, 630–637. [Google Scholar] [CrossRef]
  25. Luo, J.; Zhang, Z. Automatic colour printing inspection by image processing. J. Mater. Process. Technol. 2017, 139, 373–378. [Google Scholar] [CrossRef]
  26. Salahdine, F.; Ghazi, H.E.; Kaabouch, N.; Fihri, W.F. Matched filter detection with dynamic threshold for cognitive radio networks. In Proceedings of the International Conference on Wireless Networks and Mobile Communications (WINCOM), Marrakech, Morocco, 20–23 October 2015; pp. 1–6. [Google Scholar] [CrossRef]
  27. Tian, C.; Zheng, M.; Zuo, W.; Zhang, B.; Zhang, Y.; Zhang, D. Multi-stage image denoising with the wavelet transform. Pattern Recognit. 2023, 134, 109050. [Google Scholar] [CrossRef]
  28. Simakov, D.; Caspi, Y.; Shechtman, E.; Irani, M. summarizing visual data using bidirectional similarity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 24–26 June 2008; pp. 1–8. [Google Scholar] [CrossRef]
  29. Cheng, J.; Wu, Y.; AbdAlmageed, W.; Natarajan, P. QATM: Quality-aware template matching for deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 11553–11562. [Google Scholar] [CrossRef]
  30. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar] [CrossRef]
  31. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  32. Kaur, R.; Singh, S. A comprehensive review of object detection with deep learning. Digit. Signal Process. 2022, 132, 103812. [Google Scholar] [CrossRef]
  33. Wu, Y.; Lim, J.; Yang, M. Online object tracking: A benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013; pp. 2411–2418. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the printing defect detection based on scale-adaptive template matching and image alignment.
Figure 2. Schematic diagram of the improved template matching module, including a scale-adaptive deep convolutional feature extraction method and FMCC-based similarity measurement method.
Figure 3. Schematic diagram of image alignment module.
Figure 4. The examples of printing images: (a) zero-defect images; (b) zero-defect images with interference (illuminance); (c) zero-defect images with interference (noise); (d) defect images; (e) defect images with interference (illuminance); (f) defect images with interference (noise).
Figure 5. Quantitative analysis and comparison of the template matching performance. (a) Success curve based on AUC quantization accuracy on the collected printing dataset. (b) Success curve based on AUC quantization accuracy on the public dataset proposed in [33].
Figure 6. Time performance comparison on different models.
Figure 7. Performance evaluation of the scale-adaptive template matching algorithm proposed in this paper on partial samples; (ac) sampled from the public dataset and (df) sampled from the collected dataset. (The ground-truth is labeled with a green box, and the matching results of FMCC, SZ-FMCC, and SZ-FMCC+IA are, respectively, labeled with red, blue, and yellow boxes).
Figure 8. Confusion matrices on printed datasets; (a) the confusion matrix on the images without interference, and (b) the confusion matrix on the images with interference.
Figure 9. Computation time with respect to the scale zooming factor.
Table 1. Confusion matrix.
                      Actual Positive    Actual Negative
Predicted Positive    TP                 FP
Predicted Negative    FN                 TN
Table 2. The IOU of the ground-truth box and the prediction box on partial images of the printing dataset.
Image        FMCC     SZ-FMCC    SZ-FMCC+IA
Figure 7a    0.910    0.839      0.845
Figure 7b    0.791    0.663      0.753
Figure 7c    0.534    0.523      0.534
Figure 7d    0.887    0.754      0.926
Figure 7e    0.569    0.563      0.586
Figure 7f    0.807    0.791      0.907
Table 3. Defect detection performance evaluation.
Accuracy (%)    Precision (%)    Recall (%)    F1 (%)    AUC (%)
93.62           88.69            100.00        94.01     94.00
Table 4. Evaluation values of different sample types.
Sample Types    TDR (%)    FDR (%)
ZD              96.07      0.98
ZD_I            78.43      5.39
D               94.12      1.47
D_I             92.16      1.96
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
