
Fabric Defect Detection Algorithm Based on Image Saliency Region and Similarity Location

1
School of Computer Science and Technology, Tiangong University, Tianjin 300387, China
2
Engineering Teaching Practice Training Center, Tiangong University, Tianjin 300387, China
3
CNRS, CRAN UMR 7039, Université de Lorraine, 54000 Nancy, France
*
Authors to whom correspondence should be addressed.
Electronics 2023, 12(6), 1392; https://doi.org/10.3390/electronics12061392
Submission received: 10 February 2023 / Revised: 8 March 2023 / Accepted: 13 March 2023 / Published: 15 March 2023
(This article belongs to the Section Computer Science & Engineering)

Abstract

To address defect detection and accurate contour segmentation in periodic-texture fabric images, a fabric defect detection method based on saliency region and similarity location is proposed. Firstly, the image to be detected is processed by color space conversion, Gaussian filtering, and contrast enhancement, and a frequency-tuned (FT) salient region detection algorithm is used to estimate a saliency map of the enhanced image. The fabric image is divided into equal-sized, overlapping image blocks with a sliding window, and the statistical parameters of each block are calculated. Outliers in these parameters are screened out using the inter-quartile range (IQR). The defects are then located, anomalies are eliminated, and the defect contour is finally obtained. The experimental results show that the proposed method performs better in terms of Acc, precision, recall, and F1 score.

1. Introduction

Fabric defects seriously affect product quality and cause huge economic losses; with the globalization of the textile market, quality requirements for textiles are correspondingly increasing [1].
Product quality control plays an important role in the textile production process, and the scientific and efficient detection of fabric defects is a key component of product quality control [2]. In the process of fabric production, defects caused by yarn quality and mechanical failure seriously affect fabric quality [3]. The traditional manual method of fabric defect detection has serious shortcomings, such as low efficiency, low accuracy, and lack of reliability [4,5]. In recent years, with the rapid development of computer science and technology, machine vision has received increasing attention. Its advantages of non-contact operation, high efficiency, convenience, and objectivity have made it widely used in defect detection. Many machine-learning and neural network techniques have been used to automate the visual inspection of industrial production of fabrics [6,7], cracks [8], ceramics [9], microsurfaces [10], printing [11], welding [12], wood [13], and more.
The existing fabric defect detection methods based on digital image processing can be roughly divided into four categories: mathematical statistical detection methods, visual saliency detection methods, transform domain detection methods, and deep-learning detection methods. Mathematical statistical detection methods guide fabric defect detection through the statistical analysis of image gray values, texture, and other features. The detection effects of these methods are greatly affected by the choice of statistical parameters and statistical methods, and they suffer from large amounts of calculation and low computational efficiency [14]. The method proposed by Cao Junjie et al. used principal component analysis and added a noise term to the model, resulting in low computational efficiency [15]. Visual saliency detection methods exploit the fact that random defects break the otherwise periodic characteristics of fabric texture; they have high detection efficiency and a good detection effect for obvious defects but a poor detection effect for subtle defects [16,17]. E. Essa et al. presented a defect detection system based on cognitive support using the minimum ratio between neighborhood regions; however, this method was not tested on many kinds of fabric defects and could not give qualitative conclusions [18]. Transform domain detection methods convert an image from the spatial domain to the frequency domain for detection, which works well on periodic, simple-texture fabric images, but their performance on complex-texture fabric images is mediocre, making them unsuitable for multiscale and multiresolution cloth detection [19,20]. Deep-learning detection methods proposed in recent years achieve high detection accuracy, but determining the detection model and its parameters requires a large number of training samples; the hardware cost is high, and the implementation process is complex, which cannot meet the application requirements of embedded platforms [21,22].
In order to further improve the accuracy of fabric defect detection and realize the accurate segmentation of defect contours, this paper combines mathematical statistics with visual saliency detection and proposes a fabric defect detection method based on saliency region and similarity location. Firstly, the image to be detected is processed by color space conversion, Gaussian filtering, and contrast enhancement, and a frequency-tuned (FT) salient region detection algorithm is used to estimate a saliency map of the enhanced image. The fabric image is divided into image blocks of the same size with overlapping areas through a sliding window, and then the statistical parameters of each image block are calculated. The outliers in the statistical parameters are filtered out using inter-quartile range (IQR). At this time, the initial positioning of defects in the saliency map is realized. Finally, binarization is performed on the initially located image block, and morphological operations are used to eliminate outliers to obtain complete defect contours. The defect detection method for fabric images proposed in this paper is mainly divided into three steps, and a flow chart is shown in Figure 1.
1. Image preprocessing, which mainly includes image enhancement and FT saliency estimation;
2. Combining statistical parameters (mean and variance) with the IQR algorithm to detect the image sub-block where a defect is located;
3. Performing image postprocessing (binarization and morphological operations) on the image sub-blocks detected in step 2 and counting the number and area of defects.

2. Materials and Methods

2.1. Image Preprocessing

Considering the problems of noise and blur in the original fabric image, we filtered and enhanced the defect before locating to reduce the noise interference and improve the contrast between the defect and the background. The preprocessing in this method included four parts: color space conversion, Gaussian filtering, saliency map generation, and contrast stretching. Figure 2 is the original fabric image.
  • Firstly, we convert the input color image from the RGB model to the Lab color model according to Equations (1)–(5). The variable I_i denotes the input color fabric image; r, g, and b are its red, green, and blue components, with range [0, 255]. R, G, and B are the normalized values of r, g, and b, with range [0, 1]. I_l denotes the image converted to the Lab color model, and L, a, and b are the three channels of I_l.
    $$\begin{cases} R = \mathrm{gamma}(r/255) \\ G = \mathrm{gamma}(g/255) \\ B = \mathrm{gamma}(b/255) \end{cases} \tag{1}$$
    $$\mathrm{gamma}(x) = \begin{cases} \left(\dfrac{x + 0.055}{1.055}\right)^{2.4} & \text{if } x > 0.04045 \\ \dfrac{x}{12.92} & \text{otherwise} \end{cases} \tag{2}$$
    $$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = M \begin{bmatrix} R \\ G \\ B \end{bmatrix}, \quad M = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \tag{3}$$
    $$\begin{cases} L = 116\,f(Y/Y_n) - 16 \\ a = 500\left[f(X/X_n) - f(Y/Y_n)\right] \\ b = 200\left[f(Y/Y_n) - f(Z/Z_n)\right] \end{cases} \tag{4}$$
    $$f(t) = \begin{cases} t^{1/3} & \text{if } t > (6/29)^3 \\ \frac{1}{3}\left(\frac{29}{6}\right)^2 t + \frac{4}{29} & \text{otherwise} \end{cases} \tag{5}$$
  • Equation (2) applies nonlinear tone mapping to the image in order to improve its contrast. The default values of X_n, Y_n, and Z_n are 95.047, 100.0, and 108.883, respectively. Figure 3 shows the result of the color space conversion.
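The color space conversion of Equations (1)–(5) can be sketched in NumPy as follows. The paper's implementation used C++ and OpenCV; this Python version, with function names of our own choosing, is only an illustrative sketch of the standard sRGB-to-Lab pipeline (the XYZ values are scaled by 100 so the default white point above applies):

```python
import numpy as np

def gamma(c):
    """Inverse sRGB companding, Equation (2): linearize a channel in [0, 1]."""
    c = np.asarray(c, dtype=float)
    return np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)

def f(t):
    """Piecewise cube-root mapping of Equation (5)."""
    t = np.asarray(t, dtype=float)
    delta3 = (6.0 / 29.0) ** 3
    return np.where(t > delta3, np.cbrt(t), t / (3.0 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)

def rgb_to_lab(r, g, b, white=(95.047, 100.0, 108.883)):
    """Convert 8-bit sRGB values to CIELAB (D65 white), Equations (1)-(4)."""
    # Equation (1): normalize to [0, 1] and linearize.
    R, G, B = gamma(r / 255.0), gamma(g / 255.0), gamma(b / 255.0)
    # Equation (3): linear RGB -> XYZ, scaled by 100 to match the white point.
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    X, Y, Z = 100.0 * (M @ np.array([R, G, B]))
    Xn, Yn, Zn = white
    # Equation (4): XYZ -> Lab.
    L = 116.0 * f(Y / Yn) - 16.0
    a = 500.0 * (f(X / Xn) - f(Y / Yn))
    b_lab = 200.0 * (f(Y / Yn) - f(Z / Zn))
    return L, a, b_lab
```

As a sanity check, pure white (255, 255, 255) maps to L = 100 with a and b near zero, and pure black maps to L = 0.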
  • The proposed method filters I_l with a Gaussian operator G(x, y) of size 3 × 3 and σ = 1. I_g denotes the filtered image, ∗ in Equation (6) denotes convolution, and x and y in Equation (7) take values in {−1, 0, 1}. Figure 4 shows the filtering result.
  • In order to weaken the influence of periodic texture on defect detection, this paper uses the FT saliency detection algorithm [23] to estimate the saliency map I_s corresponding to I_l. I_s significantly raises the gray values of defects and further improves the contrast between defect and background. Its calculation is given in Equation (8), where Ī_l^n is the mean value of the nth component of I_l and I_g^n(i, j) is the nth component of I_g.
  • Considering that the pixel gray values of I_s may exceed the range [0, 255], we compress them to obtain the normalized map I_n. The calculation process is as follows: first, compute the maximum gray value maxValue and the minimum gray value minValue over all pixels of I_s, and then stretch the contrast according to Equation (9). The final saliency map is shown in Figure 5.
    $$I_g = I_l * G, \tag{6}$$
    $$G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}}, \tag{7}$$
    $$I_s(i, j) = \sum_{n=1}^{3} \left(\bar{I}_l^{\,n} - I_g^{\,n}(i, j)\right)^2, \tag{8}$$
    $$I_n(i, j) = \frac{255}{\mathrm{maxValue} - \mathrm{minValue}} \left(I_s(i, j) - \mathrm{minValue}\right), \tag{9}$$

2.2. Defect Location Method in Fabric Image

In order to narrow the defect detection range, the proposed method first coarsely locates the image sub-block containing the defect through steps (1) and (2) below and then realizes the final location of the defect through steps (3) and (4).
(1)
The variance and mean of all image sub-blocks are computed using the sliding-window method. In this paper, the sliding window is 24 × 24 pixels, and the moving step is 12 pixels. The window size was chosen empirically over many experiments; in practice, it can be tuned to the resolution of the collected fabric images.
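The sliding-window statistics of step (1) can be sketched as follows (a NumPy illustration with our own function name; the defaults match the 24-pixel window and 12-pixel step stated above):

```python
import numpy as np

def block_statistics(image, win=24, step=12):
    """Mean and variance of every overlapping win x win block (stride = step).

    Returns (positions, means, variances), where positions[i] is the
    top-left (row, col) corner of block i.
    """
    h, w = image.shape
    positions, means, variances = [], [], []
    for r in range(0, h - win + 1, step):
        for c in range(0, w - win + 1, step):
            block = image[r:r + win, c:c + win].astype(float)
            positions.append((r, c))
            means.append(block.mean())
            variances.append(block.var())
    return positions, np.array(means), np.array(variances)
```

With a 12-pixel step, adjacent blocks overlap by half a window, so a defect falling on a block boundary is still fully covered by a neighboring block.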
(2)
Screen out the outliers among the means and variances from step (1). Outliers are sample values that are abnormally large or small, with distributions deviating significantly from the rest of the observations. This paper uses the inter-quartile range (IQR) method, a robust statistic defined as the difference between the third and first quartiles, to identify them. Figure 6 is a schematic diagram of the IQR method, and Equations (10) and (11) detail its calculation.
$$\mathrm{IQR} = Q_3 - Q_1, \tag{10}$$
$$\text{outlier} \iff \text{value} < Q_1 - 2.5 \times \mathrm{IQR} \;\; \text{or} \;\; \text{value} > Q_3 + 2.5 \times \mathrm{IQR}, \tag{11}$$
After sorting all samples in ascending order, Q3 is the value at the 75th percentile, Q1 is the value at the 25th percentile, IQR is their difference, and outliers are the abnormal points. The thresholds of Equation (11) are motivated by the normal distribution: Q1 lies at −0.675σ from the mean and Q3 at +0.675σ, so IQR = 1.35σ. Substituting into Equation (11) gives Q1 − 2.5 × IQR = −4.05σ and Q3 + 2.5 × IQR = 4.05σ. For a normal distribution, virtually all data (well over 99.99%) fall within ±4.05σ of the mean, so any value outside these bounds is very likely abnormal. Through many experiments, we found that setting the thresholds of Equation (11) to [Q1 − 2.5 × IQR, Q3 + 2.5 × IQR] gives the best results for the application scenario in this paper.
We slide a fixed-size window over the whole image with a fixed step and compute the mean and variance of each window sub-block. We sort these means and variances and use the IQR method to determine the sub-blocks whose values are outliers; these are taken to be the sub-blocks containing defects, as shown in Figure 7. We use maskImg to represent the mask image of the corresponding sub-blocks, as shown in Figure 8. This step performs rough defect location.
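The IQR screening of Equations (10) and (11) can be sketched as follows (a NumPy illustration; the function name is our own):

```python
import numpy as np

def iqr_outliers(values, k=2.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR], per Equations (10)-(11)."""
    q1, q3 = np.percentile(values, [25, 75])   # first and third quartiles
    iqr = q3 - q1                              # Equation (10)
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)  # Equation (11)
```

Applying this mask to the per-block means and variances flags the sub-blocks whose statistics deviate from the periodic background texture.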
(3)
Adaptive-threshold binarization is performed on the sub-blocks where defects are located. The results are represented by binaryImg, as shown in Figure 9.
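The paper does not specify which adaptive-threshold variant is used; the sketch below assumes a local-mean threshold (similar in spirit to OpenCV's adaptiveThreshold with a mean kernel), with the window size and offset chosen arbitrarily for illustration:

```python
import numpy as np

def adaptive_threshold(block, win=11, c=2.0):
    """Mean-based adaptive binarization: a pixel becomes foreground (255)
    when it exceeds its local neighborhood mean by more than c.

    Defects appear bright in the saliency map, so we threshold upward.
    """
    pad = win // 2
    padded = np.pad(block.astype(float), pad, mode="edge")
    local_mean = np.zeros_like(block, dtype=float)
    for dy in range(win):
        for dx in range(win):
            local_mean += padded[dy:dy + block.shape[0], dx:dx + block.shape[1]]
    local_mean /= win * win
    return np.where(block.astype(float) > local_mean + c, 255, 0).astype(np.uint8)
```

Because the threshold adapts to each pixel's neighborhood, residual gradual brightness variation in the saliency sub-block does not corrupt the binarization the way a single global threshold would.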
(4)
By searching for contours in binaryImg and combining them with maskImg, we determine the final position of the defect, as shown in Figure 10. Small noise points are filtered out according to the contour area of the target.
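The area-based noise filtering of step (4) is presumably done with a contour-finding routine (e.g., in OpenCV); the same effect can be sketched with a simple 4-connected component labeling (function name and minimum area are our own choices):

```python
import numpy as np

def filter_small_contours(binary, min_area=5):
    """Label 4-connected foreground regions and erase those below min_area,
    mimicking the contour-area noise filtering of step (4)."""
    h, w = binary.shape
    visited = np.zeros((h, w), dtype=bool)
    out = np.zeros_like(binary)
    for sr in range(h):
        for sc in range(w):
            if binary[sr, sc] and not visited[sr, sc]:
                # Depth-first flood fill to collect one connected component.
                stack, comp = [(sr, sc)], []
                visited[sr, sc] = True
                while stack:
                    r, c = stack.pop()
                    comp.append((r, c))
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                        if 0 <= nr < h and 0 <= nc < w \
                                and binary[nr, nc] and not visited[nr, nc]:
                            visited[nr, nc] = True
                            stack.append((nr, nc))
                if len(comp) >= min_area:      # keep only large-enough defects
                    for r, c in comp:
                        out[r, c] = 255
    return out
```

Isolated single-pixel responses (sensor noise surviving binarization) are discarded, while genuine defect regions are kept intact.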
Table 1 gives the pseudocode of the defect location algorithm.

3. Results and Discussion

To verify the effectiveness of the proposed method, it was compared with the following three methods, both visually and in terms of performance metrics. (1) A method (AC) that determines saliency maps based on mean values in neighborhoods of different sizes [24]. This algorithm uses an image-pyramid-like scheme to evaluate differences from the standard image at different scales and then normalizes and accumulates the saliency of the multiscale blurred images to obtain the final saliency. (2) A method (HC) that constructs saliency maps by reducing the number of colors in the Lab color space [25]. This method quantizes each RGB component into 12 levels, so the image can have at most 12 × 12 × 12 colors, allowing a smaller histogram to be built for acceleration. (3) A method (LC) that constructs saliency maps by computing the global contrast of pixels over the entire image [26]. This method computes spatial saliency maps from the color statistics of the image, with linear computational complexity in the number of image pixels; the saliency map is built on the color contrast between image pixels. Table 2 compares these methods. In the experiment, 200 fabric images with different periodic textures were selected as the test set: 100 without defects and 100 with defects, including kinks, foreign fibers, yarn skipping, holes, oil stains, wool balls, and other defect types. The image size was 250 × 250 pixels. The fabric image data were taken from the public Alibaba Cloud Tianchi Dataset. The experiments ran on a computer with a 2.11 GHz Intel Core i5-10210U processor, programmed with Visual Studio 2015 and OpenCV.
In the experiment, the fabric images were divided into two categories according to whether they contained defects. The accuracy (Acc), precision, recall, and F1 score defined in Equations (12)–(15), as well as the average detection time T, were selected for comparison. The closer Acc, precision, recall, and F1 are to 1 and the smaller T is, the better the detection performance of the method.
$$\mathrm{Acc} = \frac{TP + TN}{TP + TN + FP + FN}, \tag{12}$$
$$\mathrm{Precision} = \frac{TP}{TP + FP}, \tag{13}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}, \tag{14}$$
$$F_1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \tag{15}$$
where TP is the number of defective samples correctly detected as defective, TN is the number of flawless samples correctly detected as flawless, FP is the number of flawless samples incorrectly detected as defective, and FN is the number of defective samples incorrectly detected as flawless. Each method was tested on the fabric images in the test set. The experimental results are shown in Figures 11–19 and Table 2.
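Equations (12)–(15) can be computed directly from the four confusion-matrix counts; a minimal sketch (function name our own):

```python
def classification_metrics(tp, tn, fp, fn):
    """Acc, precision, recall, and F1 score of Equations (12)-(15)."""
    acc = (tp + tn) / (tp + tn + fp + fn)        # Equation (12)
    precision = tp / (tp + fp)                   # Equation (13)
    recall = tp / (tp + fn)                      # Equation (14)
    f1 = 2 * precision * recall / (precision + recall)  # Equation (15)
    return acc, precision, recall, f1
```

For example, on a hypothetical 200-image test set with 90 true positives, 95 true negatives, 5 false positives, and 10 false negatives, Acc is 0.925 and recall is 0.9.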
It can be seen from Table 2 that the proposed method was superior to the three comparison methods in the Acc, precision, recall, and F1 score values characterizing the detection effect. The three comparison methods were prone to falsely detecting defect-free fabric images, and these high false detection rates reduced their accuracy. The proposed method, by contrast, had high detection sensitivity, so its accuracy improved significantly over the three comparison methods. This improvement came from combining mathematical statistics with saliency region estimation, which not only raised the detection rate for various defects but also mitigated the influence of texture on defect detection and reduced false detections. The HC and LC methods were faster but detected less well; the proposed method detected best but ranked only third in speed. Because it computes the statistical parameters of each overlapping image block with a sliding window during initial defect location, this comparison strategy effectively improved the detection effect but also significantly increased the amount of computation, resulting in a longer average detection time. Compared with the other methods, the proposed algorithm was more sensitive to relatively small defects, and the contours of the detected defects were more accurate, giving it better performance on small defects in fabric images.
Based on the above experimental results, it can be seen that the AC, HC, and LC methods could be used to detect a variety of defect types in the test set, and the reasons for their low accuracy mainly included the following two aspects.
(1)
From the direct visual comparison in Figure 11, it can be seen that the three comparison methods had good detection effects on the relatively obvious defect areas in the fabric images, but their detection effects on complex-texture fabric images or fabric images with inconspicuous defects were mediocre, so their sensitivity was slightly lower than that of the proposed method.
(2)
The test set selected in this paper contained 100 fabric images with defects and 100 fabric images without defects. It can be seen from the parameter performance comparison in Table 2 that the three comparison methods easily misjudged the fabric images without defects as fabric images with defects, so the detection accuracy was reduced.
The method proposed in this paper combined mathematical statistics with saliency region estimation: it calculated the statistical parameters of each overlapping image block through a sliding window and used the statistical IQR to coarsely locate defects in the saliency maps of fabric images with different periodic textures, thus significantly mitigating the adverse impact of fabric texture on defect detection. Its accuracy was therefore significantly higher than that of the three comparison methods. In summary, the proposed method could detect defects in more types of periodic-textured fabric images and could accurately extract complete defect contours; compared with the other three methods, its detection accuracy was higher and the extracted defect contours were more complete.

4. Conclusions

This paper combined mathematical statistics with visual saliency detection and proposed a fabric defect detection method based on saliency region detection and similarity location. First, a saliency map was calculated; then the statistical parameters (mean and variance) of equal-sized, overlapping image blocks in the saliency map were used to locate the block containing a defect. This effectively mitigated the adverse effect of background texture on detection and located defects in both simple-texture and complex-texture fabric images. In addition, adaptive-threshold binarization was applied to the coarsely located image blocks, enabling more precise extraction of defect contours. There are many kinds of fabric defects, about 92 in total. Most algorithms handle obvious defects well, but the proposed algorithm was also sensitive to relatively small defects, and the contours of the detected defects were more accurate. Comparing the detection maps generated by this method with those of the other detection methods showed that this method produced the best results, confirming the effectiveness of the proposed algorithm for images with periodic textures. We believe the method could be extended to other visual inspection fields, such as stone defects and metal defects. In the future, we plan to combine deep-learning models with traditional methods to further improve the accuracy and robustness of fabric defect detection.

Author Contributions

Conceptualization, W.L., Z.Z. and M.W.; methodology, W.L., M.W. and H.C.; software, W.L.; validation, Z.Z. and M.W.; formal analysis, Z.Z. and M.W.; investigation, H.C.; resources, W.L. and M.W.; data curation, Z.Z. and M.W.; writing—original draft preparation, W.L., Z.Z. and M.W.; writing—review and editing, W.L. and H.C.; project administration, W.L. and M.W.; funding acquisition, W.L. All authors have read and agreed to the published version of the manuscript.

Funding

The authors thank Li Yuelong of the School of Computer Science and Technology, Tiangong University, for his support of this work. This work was supported by the National Natural Science Foundation of China (No. 61771340), the Natural Science Foundation of Tianjin (No. 19JCYBJC16400), and the Program for Innovative Research Team of the University of Tianjin (No. TD13-5032).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest, financial or otherwise.

References

  1. Kumar, A. Computer-Vision-Based Fabric Defect Detection: A Survey. IEEE Trans. Ind. Electron. 2008, 55, 348–363. [Google Scholar] [CrossRef]
  2. Li, W.; Cheng, L. New progress of fabric defect detection based on computer vision and image processing. J. Text. Res. 2014, 35, 158–164. [Google Scholar]
  3. Cho, C.S.; Chung, B.M.; Park, M.J. Development of Real-Time Vision-Based Fabric Inspection System. IEEE Trans. Ind. Electron. 2005, 52, 1073–1079. [Google Scholar] [CrossRef]
  4. Jeyaraj, P.R.; Nadar, E.R.S. Effective textile quality processing and an accurate inspection system using the advanced deep learning technique. Text. Res. J. 2020, 90, 971–980. [Google Scholar] [CrossRef]
  5. Li, W.; Xue, W.; Cheng, L. Intelligent detection of defects of yarn-dyed fabrics by energy-based local binary patterns. Text. Res. J. 2012, 82, 1960–1972. [Google Scholar] [CrossRef]
  6. Wang, M.; Bai, R.; He, W.; Ji, F. Machine vision detection method of pattern fabric flaws. Opto-Electron. Eng. 2014, 41, 19–26. [Google Scholar]
  7. Hu, K.; Luo, S.; Hu, H. Improved algorithm of fabric defect detection using Canny operator. J. Text. Res. 2019, 40, 153–158. [Google Scholar]
  8. Sun, W.; Li, Q.; Shao, T.; Wu, H. Crack Detection Algorithm of Protective Wall for Piles Based on Machine Vision. Comput. Eng. Appl. 2019, 55, 260–265. [Google Scholar]
  9. Karimi, M.H.; Asemani, D. Surface defect detection in tiling industries using digital image processing methods: Analysis and evaluation. ISA Trans. 2014, 53, 834–844. [Google Scholar] [CrossRef]
  10. Hu, H.; Li, J.F.; Shen, J.M. Detection methods for surface micro defection on small magnetic tile based on machine vision. Mech. Electr. Eng. Mag. 2019, 36, 117–123. [Google Scholar]
  11. Zhang, E.H.; Chen, Y.J.; Gao, M.; Duan, J.; Jing, C. Automatic defect detection for web offset printing based on machine vision. Mach. Vis. 2020, 2020, 3598. [Google Scholar] [CrossRef] [Green Version]
  12. Li, C.; Li, C.; Wu, X.J.; Palade, V.; Fang, W. Effective method of weld defect detection and classification based on machine vision. Comput. Eng. Appl. 2018, 54, 264–270. [Google Scholar]
  13. Shojaedini, S.V.; Haghighi, R.K.; Kermani, A. A new method for defect detection in lumber images: Optimising the energy model by an irregular parametric genetic approach. Int. Wood Prod. J. 2017, 8, 26–31. [Google Scholar] [CrossRef]
  14. Cao, J.; Wang, N.; Zhang, J.; Wen, Z.; Li, B.; Liu, X. Detection of varied defects in diverse fabric images via modified RPCA with noise term and defect prior. Int. J. Cloth. Sci. Technol. 2016, 28, 516–529. [Google Scholar] [CrossRef]
  15. Zhu, S.; Hao, C. Fabric defect detection method based on texture periodicity analysis. Comput. Eng. Appl. 2012, 48, 163–166. [Google Scholar]
  16. Zhang, K.; Yan, Y.; Li, P.; Jing, J.; Liu, X.; Wang, Z. Fabric defect detection using salience metric for color dissimilarity and positional aggregation. IEEE Access 2018, 6, 49170–49181. [Google Scholar] [CrossRef]
  17. Qian, X.L.; Zhang, H.Q.; Zhang, H.L.; He, Z.D.; Yang, C.X. Solar cell surface defect detection based on visual saliency. Chin. J. Sci. Instrum. 2017, 38, 1570–1578. [Google Scholar]
  18. Essa, E.; Hossain, M.S.; Tolba, A.S.; Raafat, H.M.; Elmogy, S.; Muahmmad, G. Toward cognitive support for automated defect detection. Neural Comput. Appl. 2020, 32, 4325–4333. [Google Scholar] [CrossRef]
  19. Li, Y.; Luo, H.; Yu, M.; Jiang, G.; Cong, H. Fabric defect detection algorithm using RDPSO-based optimal Gabor filter. J. Text. Inst. 2019, 110, 487–495. [Google Scholar] [CrossRef]
  20. Bo, Z.; Tang, C. A Method for Defect Detection of Yarn-Dyed Fabric Based on Frequency Domain Filtering and Similarity Measurement. Autex Res. J. 2019, 19, 257–262. [Google Scholar]
  21. Wang, L.; Zhong, Y.; Li, Z.; He, Y. Online detection algorithm of fabric defects based on deep learning. J. Comput. Appl. 2019, 39, 2125–2128. [Google Scholar]
  22. Wu, Z.; Zhuo, Y.; Li, J.; Feng, Y.; Han, B.; Liao, S. Fast detection algorithm of single color fabric defects based on convolution neural network. J. Comput. Aided Des. Comput. Graph. 2018, 30, 2262–2270. [Google Scholar]
  23. Achanta, R.; Hemami, S.; Estrada, F.; Susstrunk, S. Frequency-tuned Salient Region Detection. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), Miami, FL, USA, 20–25 June 2009; IEEE: New York, NY, USA, 2009; pp. 1597–1604. [Google Scholar]
  24. Achanta, R.; Estrada, F.; Wils, P.; Süsstrunk, S. Salient Region Detection and Segmentation. In Proceedings of the Computer Vision Systems, ICVS 2008, Santorini, Greece, 12–15 August 2008; pp. 66–75. [Google Scholar]
  25. Cheng, M.M.; Mitra, N.J.; Huang, X.; Torr, P.H.; Hu, S.M. Global Contrast Based Salient Region Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 569–582. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Zhai, Y.; Shah, M.; Shah, P. Visual attention detection in video sequences using spatiotemporal cues. In Proceedings of the 14th Annual ACM International Conference on Multimedia, Santa Barbara, CA, USA, 23–27 October 2006; ACM Press: New York, NY, USA, 2006; pp. 815–824. [Google Scholar]
Figure 1. Flow chart of the proposed method.
Figure 2. Input color image.
Figure 3. Lab image.
Figure 4. Gaussian filter image.
Figure 5. Saliency map.
Figure 6. Schematic diagram of IQR.
Figure 7. Image sub-block of defect.
Figure 8. Mask of image sub-block of defect.
Figure 9. Binary image.
Figure 10. Detected defect contours.
Figure 11. Fabric images: (a) a fabric image without defects; (bf) some fabric images with different defects.
Figure 12. Saliency maps obtained using the AC method: (a) the saliency map of Figure 11a; (b) the saliency map of Figure 11b; (c) the saliency map of Figure 11c; (d) the saliency map of Figure 11d; (e) the saliency map of Figure 11e; (f) the saliency map of Figure 11f.
Figure 13. The detection results obtained using the AC method: (a) the detection result of Figure 11a; (b) the detection result of Figure 11b; (c) the detection result of Figure 11c; (d) the detection result of Figure 11d; (e) the detection result of Figure 11e; (f) the detection result of Figure 11f.
Figure 14. Saliency maps obtained using the HC method: (a) the saliency map of Figure 11a; (b) the saliency map of Figure 11b; (c) the saliency map of Figure 11c; (d) the saliency map of Figure 11d; (e) the saliency map of Figure 11e; (f) the saliency map of Figure 11f.
Figure 15. The detection results obtained using the HC method: (a) the detection result of Figure 11a; (b) the detection result of Figure 11b; (c) the detection result of Figure 11c; (d) the detection result of Figure 11d; (e) the detection result of Figure 11e; (f) the detection result of Figure 11f.
Figure 16. Saliency maps obtained using the LC method: (a) the saliency map of Figure 11a; (b) the saliency map of Figure 11b; (c) the saliency map of Figure 11c; (d) the saliency map of Figure 11d; (e) the saliency map of Figure 11e; (f) the saliency map of Figure 11f.
Figure 17. The detection results obtained using the LC method: (a) the detection result of Figure 11a; (b) the detection result of Figure 11b; (c) the detection result of Figure 11c; (d) the detection result of Figure 11d; (e) the detection result of Figure 11e; (f) the detection result of Figure 11f.
Figure 18. Saliency maps obtained using our method: (a) the saliency map of Figure 11a; (b) the saliency map of Figure 11b; (c) the saliency map of Figure 11c; (d) the saliency map of Figure 11d; (e) the saliency map of Figure 11e; (f) the saliency map of Figure 11f.
Figure 19. The detection results obtained using our method: (a) the detection result of Figure 11a; (b) the detection result of Figure 11b; (c) the detection result of Figure 11c; (d) the detection result of Figure 11d; (e) the detection result of Figure 11e; (f) the detection result of Figure 11f.
Table 1. Method 1: location of defects.
Let enhanceImg denote the enhancement result obtained in Section 2.1, of size M × N, where M is the height and N is the width, and let the sliding-window size be size = 24. The preliminary defect-location result is denoted locationImg, the binarization result binaryImg, and the mask maskImg.
For i = 0 : M step size × 0.5
  For j = 0 : N step size × 0.5
    subImg = enhanceImg(j, i, min(N − j, size), min(M − i, size))
    Compute the mean of subImg, denoted m, and its standard deviation, denoted sd
    Save i, j, m, and sd in a structure variable subInfo and append it to a list infoList
    infoList.push_back(subInfo)
    Save m in a list meanList and sd in a list sdList
    meanList.push_back(m)
    sdList.push_back(sd)
  End
End
Next, the outlier fences are obtained with the IQR method. Sort meanList and sdList in ascending order
length = meanList.size
meanQ1 = meanList[0.25 × length]
meanQ3 = meanList[0.75 × length]
meanIQR = meanQ3 − meanQ1
meanLower = meanQ1 − 2.5 × meanIQR
meanUpper = meanQ3 + 2.5 × meanIQR
sdQ1 = sdList[0.25 × length]
sdQ3 = sdList[0.75 × length]
sdIQR = sdQ3 − sdQ1
sdLower = sdQ1 − 2.5 × sdIQR
sdUpper = sdQ3 + 2.5 × sdIQR
The defects are preliminarily located from the mean and standard-deviation outliers
For t = 0 : length
  mTmp = infoList[t].mean
  mSd = infoList[t].sd
  If mTmp > meanUpper and mSd > sdUpper
    Save the position of the current sub-image in a Rect r
    r.x = infoList[t].j,  r.y = infoList[t].i
    r.width = min(size, N − r.x),  r.height = min(size, M − r.y)
    Set the mask of the sub-image containing the defect: maskImg(r) = 255
    Copy this sub-image into locationImg: enhanceImg(r).copyto(locationImg(r))
    Binarize this sub-image with an adaptive threshold
    threshValue = min(40, infoList[t].mean × 10)
    Binarize the sub-image with threshValue as the threshold and write the result to binaryImg(r)
  End
End
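The sliding-window and IQR steps of Table 1 can be sketched in Python with NumPy. This is a minimal illustration, not the authors' implementation: the function name locate_defects, the checkerboard test image, and the use of np.percentile for the quartiles are assumptions, and the per-block adaptive binarization into binaryImg is omitted for brevity. The IQR coefficient 2.5 and window size 24 follow the listing.

```python
import numpy as np

def locate_defects(enhance_img, size=24, k=2.5):
    """Slide a half-overlapping window over the image, collect per-block
    mean and standard deviation, and flag blocks whose statistics exceed
    the IQR upper fences (Q3 + k*IQR) as candidate defect regions."""
    M, N = enhance_img.shape          # M: height (rows), N: width (cols)
    step = size // 2                  # 50% overlap, as in Table 1
    blocks = []
    for i in range(0, M, step):       # i: row index
        for j in range(0, N, step):   # j: column index
            h = min(M - i, size)      # clip the window at the image border
            w = min(N - j, size)
            sub = enhance_img[i:i + h, j:j + w]
            blocks.append((i, j, h, w, sub.mean(), sub.std()))

    means = np.array([b[4] for b in blocks])
    sds = np.array([b[5] for b in blocks])

    def upper_fence(values):
        # Upper IQR fence; values below Q1 - k*IQR are not of interest here,
        # since defects raise both the local mean and the local variation.
        q1, q3 = np.percentile(values, [25, 75])
        return q3 + k * (q3 - q1)

    m_up, s_up = upper_fence(means), upper_fence(sds)

    # A block is a defect candidate only if BOTH statistics are outliers.
    mask = np.zeros(enhance_img.shape, dtype=np.uint8)
    for i, j, h, w, m, sd in blocks:
        if m > m_up and sd > s_up:
            mask[i:i + h, j:j + w] = 255  # maskImg(r) = 255 in Table 1
    return mask
```

On a uniform image with one high-contrast patch, only the windows overlapping the patch exceed both fences, so the returned mask covers the defect neighborhood and stays zero elsewhere.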
Table 2. Comparison of parameters of fabric image detection results using different methods.

| Detection Method | TP | TN | FP | FN | Acc   | Precision | Recall | F1    | T (s) |
|------------------|----|----|----|----|-------|-----------|--------|-------|-------|
| AC               | 77 | 64 | 23 | 36 | 0.705 | 0.77      | 0.681  | 0.723 | 0.078 |
| HC               | 65 | 68 | 35 | 32 | 0.665 | 0.65      | 0.670  | 0.659 | 0.061 |
| LC               | 69 | 71 | 31 | 29 | 0.7   | 0.69      | 0.704  | 0.696 | 0.053 |
| Our method       | 97 | 98 | 3  | 2  | 0.975 | 0.97      | 0.979  | 0.974 | 0.067 |
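The metrics in Table 2 follow the standard confusion-matrix definitions. As a sanity check, the sketch below (an illustrative helper, not part of the paper's code) reproduces the "Our method" row from its counts TP = 97, TN = 98, FP = 3, FN = 2: Acc = 0.975, Precision = 0.97, Recall ≈ 0.979, F1 ≈ 0.974.

```python
def detection_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 score from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # fraction of correct decisions
    precision = tp / (tp + fp)              # flagged regions that are real defects
    recall = tp / (tp + fn)                 # real defects that were flagged
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return acc, precision, recall, f1
```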
Li, W.; Zhang, Z.; Wang, M.; Chen, H. Fabric Defect Detection Algorithm Based on Image Saliency Region and Similarity Location. Electronics 2023, 12, 1392. https://doi.org/10.3390/electronics12061392