Improved U-Net++ with Patch Split for Micro-Defect Inspection in Silk Screen Printing
Abstract
1. Introduction
- A training and inference method based on patch splitting is proposed to detect defects that are minute relative to the product size.
- A combination of several loss functions is proposed to improve inference robustness given the scarcity of data typical of the manufacturing industry.
- A micro-defect inspection process is proposed for quality inspection in manufacturing environments with various product sizes.
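The patch-split idea can be sketched as below: a large product image is cut into fixed-size patches for training and inference, and the per-patch outputs are stitched back together. The patch size, zero-padding strategy, and function names are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def split_into_patches(image, patch_size=256):
    """Split an H x W (x C) image into non-overlapping patches.

    Edges are zero-padded so every patch has the same size.
    (patch_size=256 is an assumed value for illustration.)
    """
    h, w = image.shape[:2]
    pad_h = -h % patch_size  # padding needed to reach a multiple of patch_size
    pad_w = -w % patch_size
    pad = [(0, pad_h), (0, pad_w)] + [(0, 0)] * (image.ndim - 2)
    padded = np.pad(image, pad)
    patches = []
    for y in range(0, padded.shape[0], patch_size):
        for x in range(0, padded.shape[1], patch_size):
            patches.append(padded[y:y + patch_size, x:x + patch_size])
    return patches, (h, w)

def stitch_patches(patches, original_shape, patch_size=256):
    """Reassemble per-patch outputs (e.g., defect masks) into the original layout."""
    h, w = original_shape
    rows = -(-h // patch_size)  # ceiling division
    cols = -(-w // patch_size)
    canvas = np.zeros((rows * patch_size, cols * patch_size) + patches[0].shape[2:],
                      dtype=patches[0].dtype)
    for i, p in enumerate(patches):
        y, x = divmod(i, cols)
        canvas[y * patch_size:(y + 1) * patch_size,
               x * patch_size:(x + 1) * patch_size] = p
    return canvas[:h, :w]  # crop the padding away
```

In this sketch, each patch would be fed to the segmentation network individually, so a defect that occupies only a few pixels of the full image becomes a much larger fraction of its patch.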
2. Related Work
2.1. U-Net and U-Net++
2.2. Silk Screen Printing Defect
2.3. Patch Sampling
3. Improved U-Net++ with Patch-Split Method
3.1. System Architecture
3.2. Patch-Split Method
3.3. Micro Defect Inspection Architecture and Process
4. Performance Analysis
- Applying the patch-split method to the input, rather than feeding the original image directly, increases the Dice score. The Dice score is a standard measure of image-segmentation performance.
- Compared with the feature pyramid network (FPN) [33] and DeepLabV3, the U-Net++ architecture achieves a higher Dice score.
- The Dice score can be increased further by training with a combination of loss functions, summing their values (here, the Tversky and focal losses).
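The combined objective described above (Tversky loss plus focal loss) can be sketched as follows. This is a minimal NumPy version for illustration; the weights alpha, beta, and gamma are assumed typical defaults, not values reported by the authors, whose implementation uses PyTorch.

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-6):
    """Soft Tversky loss: 1 - TP / (TP + alpha*FN + beta*FP).

    alpha > beta weights false negatives more heavily, which helps
    when defect regions are tiny (assumed weights, shown for illustration).
    """
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1 - target))
    fn = np.sum((1 - pred) * target)
    return 1.0 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    """Binary focal loss: down-weights easy pixels so rare defect pixels dominate."""
    pred = np.clip(pred, eps, 1 - eps)
    pt = np.where(target == 1, pred, 1 - pred)  # probability of the true class
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def combined_loss(pred, target):
    # The summed objective: L = L_Tversky + L_Focal
    return tversky_loss(pred, target) + focal_loss(pred, target)
```

A perfect prediction drives both terms toward zero, while the focal term keeps gradients flowing from the rare defect pixels that binary cross-entropy tends to wash out.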
4.1. Data Set
4.2. Evaluation Metrics
4.3. Experimental Results
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Kapitanov, A. Special characteristics of the multi-product manufacturing. Procedia Eng. 2016, 150, 832–836.
- Riew, M.C.; Lee, M.K. A Case Study of the Construction of Smart Factory in a Small Quantity Batch Production System: Focused on IDIS Company. J. Korean Soc. Qual. Manag. 2018, 46, 11–26.
- Krebs, F.C.; Alstrup, J.; Spanggaard, H.; Larsen, K.; Kold, E. Production of large-area polymer solar cells by industrial silk screen printing, lifetime considerations and lamination with polyethyleneterephthalate. Sol. Energy Mater. Sol. Cells 2004, 83, 293–300.
- Czimmermann, T.; Ciuti, G.; Milazzo, M.; Chiurazzi, M.; Roccella, S.; Oddo, C.M.; Dario, P. Visual-based defect detection and classification approaches for industrial applications—A survey. Sensors 2020, 20, 1459.
- Guo, F.; Qian, Y.; Wu, Y.; Leng, Z.; Yu, H. Automatic railroad track components inspection using real-time instance segmentation. Comput. Aided Civ. Infrastruct. Eng. 2021, 36, 362–377.
- Bergmann, P.; Fauser, M.; Sattlegger, D.; Steger, C. MVTec AD—A comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9592–9600.
- Agnisarman, S.; Lopes, S.; Madathil, K.C.; Piratla, K.; Gramopadhye, A. A survey of automation-enabled human-in-the-loop systems for infrastructure visual inspection. Autom. Constr. 2019, 97, 52–76.
- Defard, T.; Setkov, A.; Loesch, A.; Audigier, R. PaDiM: A patch distribution modeling framework for anomaly detection and localization. In Proceedings of the International Conference on Pattern Recognition, Milan, Italy, 10–15 January 2021; pp. 475–489.
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Liang, Y.; He, R.; Li, Y.; Wang, Z. Simultaneous segmentation and classification of breast lesions from ultrasound images using mask R-CNN. In Proceedings of the 2019 IEEE International Ultrasonics Symposium (IUS), Glasgow, UK, 6–9 October 2019; pp. 1470–1472.
- Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587.
- Yang, H.; Min, K. A Saliency-Based Patch Sampling Approach for Deep Artistic Media Recognition. Electronics 2021, 10, 1053.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 84–90.
- Liu, Y.; Sun, P.; Wergeles, N.; Shang, Y. A survey and performance evaluation of deep learning methods for small object detection. Expert Syst. Appl. 2021, 172, 114602.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
- Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. UNet++: A nested U-Net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–11.
- Roy, A.M.; Bhaduri, J. Real-time growth stage detection model for high degree of occultation using DenseNet-fused YOLOv4. Comput. Electron. Agric. 2022, 193, 106694.
- Carlini, N.; Wagner, D. Towards evaluating the robustness of neural networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; pp. 39–57.
- Im, D.; Lee, S.; Lee, H.; Yoon, B.; So, F.; Jeong, J. A data-centric approach to design and analysis of a surface-inspection system based on deep learning in the plastic injection molding industry. Processes 2021, 9, 1895.
- Masci, J.; Meier, U.; Cireşan, D.; Schmidhuber, J. Stacked convolutional auto-encoders for hierarchical feature extraction. In Proceedings of the International Conference on Artificial Neural Networks, Espoo, Finland, 14–17 June 2011; pp. 52–59.
- Kingma, D.P.; Welling, M. Auto-encoding variational Bayes. arXiv 2013, arXiv:1312.6114.
- Misra, I.; Maaten, L.V.D. Self-supervised learning of pretext-invariant representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6707–6717.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Biegeleisen, J.I. The Complete Book of Silk Screen Printing Production; Courier Corporation: Chelmsford, MA, USA, 2012.
- Minoura, D.; Nagahashi, H.; Agui, T.; Nagao, T. An Automatic Detection of Defects on Silk Screen Printed Plate Surfaces. Jpn. Soc. Print. Sci. Technol. 1993, 30, 1315.
- Eugene Chian, Y.T.; Tian, J. Surface Defect Inspection in Images Using Statistical Patches Fusion and Deeply Learned Features. AI 2021, 2, 17–31.
- Reza, A.M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J. VLSI Signal Process. Syst. Signal Image Video Technol. 2004, 38, 35–44.
- Bertels, J.; Eelbode, T.; Berman, M.; Vandermeulen, D.; Maes, F.; Bisschops, R.; Blaschko, M.B. Optimizing the Dice score and Jaccard index for medical image segmentation: Theory and practice. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 92–100.
- Sudre, C.H.; Li, W.; Vercauteren, T.; Ourselin, S.; Jorge Cardoso, M. Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Berlin/Heidelberg, Germany, 2017; pp. 240–248.
- Zhou, D.; Fang, J.; Song, X.; Guan, C.; Yin, J.; Dai, Y.; Yang, R. IoU loss for 2D/3D object detection. In Proceedings of the 2019 International Conference on 3D Vision (3DV), Québec, QC, Canada, 16–19 September 2019; pp. 85–94.
- Li, Y.; Chen, L.; Huang, H.; Li, X.; Xu, W.; Zheng, L.; Huang, J. Nighttime lane markings recognition based on Canny detection and Hough transform. In Proceedings of the 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR), Angkor Wat, Cambodia, 6–9 June 2016; pp. 411–415.
- Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
- Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? Adv. Neural Inf. Process. Syst. 2014, 27, 3320–3328.
- Llugsi, R.; El Yacoubi, S.; Fontaine, A.; Lupera, P. Comparison between Adam, AdaMax and AdamW optimizers to implement a Weather Forecast based on Neural Networks for the Andean city of Quito. In Proceedings of the 2021 IEEE Fifth Ecuador Technical Chapters Meeting (ETCM), Cuenca, Ecuador, 12–15 October 2021; pp. 1–6.
- Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Salehi, S.S.M.; Erdogmus, D.; Gholipour, A. Tversky loss function for image segmentation using 3D fully convolutional deep networks. In Proceedings of the International Workshop on Machine Learning in Medical Imaging, Quebec City, QC, Canada, 10 September 2017; pp. 379–387.
- Abraham, N.; Khan, N.M. A novel focal Tversky loss function with improved attention U-Net for lesion segmentation. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 683–687.
- Raj, E.; Buffoni, D.; Westerlund, M.; Ahola, K. Edge MLOps framework for AIoT applications. In Proceedings of the 2021 IEEE International Conference on Cloud Engineering (IC2E), San Francisco, CA, USA, 4–8 October 2021.
| Hardware Environment | Software Environment |
|---|---|
| CPU: Intel Core i9-10900 Processor | Ubuntu 20.04 |
| GPU: Nvidia GeForce RTX 3080 | Python 3.7 |
| RAM: Samsung Electronics DDR4 32 GB | CUDA 11.2 |
| SSD: Samsung Electronics 970 EVO series 1 TB M.2 NVMe | PyTorch 1.8.1 |
| HDD: Western Digital BLUE HDD 4 TB | Albumentations 1.1 |
| Vision Camera: Lucid TRI122S-MC 12 MP | |
| Customized mechanical part for quality inspection | |
| | Normal | Abnormal | Total |
|---|---|---|---|
| Total | 115 | 234 | 349 |
| Training | 75 | 188 | 263 |
| Test | 40 | 46 | 86 |
| Method | Architecture | Loss Function | Dice Score |
|---|---|---|---|
| Original Input | U-Net | BCE | 0.001376 |
| Patch-Split Input | U-Net | BCE | 0.6729 |
| Architecture | Loss Function | Dice Score |
|---|---|---|
| U-Net | BCE | 0.6729 |
| FPN | BCE | 0.679 |
| DeepLabV3 | BCE | 0.6729 |
| U-Net++ | BCE | 0.6729 |
| U-Net++ | Focal | 0.6729 |
| U-Net++ | Tversky | 0.7185 |
| FPN | Tversky + Focal | 0.7071 |
| U-Net++ (Our Proposal) | Tversky + Focal | 0.728 |
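For reference, the Dice score reported in these tables is the standard overlap measure between a predicted mask and the ground-truth mask, 2|A∩B| / (|A| + |B|). A minimal NumPy version (the epsilon smoothing term is an assumed convention to avoid division by zero, not a detail from the paper):

```python
import numpy as np

def dice_score(pred_mask, gt_mask, eps=1e-6):
    """Dice score between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred_mask = pred_mask.astype(bool)
    gt_mask = gt_mask.astype(bool)
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    return (2.0 * intersection + eps) / (pred_mask.sum() + gt_mask.sum() + eps)
```

A score of 1.0 means the predicted defect region matches the annotation exactly; the near-zero score for the original-input baseline indicates the network found almost none of the defect pixels.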
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yoon, B.; Lee, H.; Jeong, J. Improved U-Net++ with Patch Split for Micro-Defect Inspection in Silk Screen Printing. Appl. Sci. 2022, 12, 4679. https://doi.org/10.3390/app12094679