Article

A Real-Time Defect Detection Strategy for Additive Manufacturing Processes Based on Deep Learning and Machine Vision Technologies

Wei Wang, Peiren Wang, Hanzhong Zhang, Xiaoyi Chen, Guoqi Wang, Yang Lu, Min Chen, Haiyun Liu and Ji Li
1 Key Laboratory of MEMS of the Ministry of Education, Southeast University, Nanjing 210096, China
2 School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215400, China
3 College of Computer and Information, Hohai University, Nanjing 211100, China
* Author to whom correspondence should be addressed.
Micromachines 2024, 15(1), 28; https://doi.org/10.3390/mi15010028
Submission received: 22 November 2023 / Revised: 17 December 2023 / Accepted: 19 December 2023 / Published: 22 December 2023
(This article belongs to the Section D3: 3D Printing and Additive Manufacturing)

Abstract

Nowadays, additive manufacturing (AM) has advanced to deliver high-value end-use products rather than individual components. This evolution necessitates integrating multiple manufacturing processes to enable multi-material processing, far more complex structures, and end-user functionality. One significant product category that benefits from such advanced AM technologies is 3D microelectronics. However, the complexity of the entire manufacturing procedure and the varied microstructures of 3D microelectronic products significantly intensify the risk of product failure due to fabrication defects. To address this challenge, this work presents a defect detection technology based on deep learning and machine vision for real-time monitoring of the AM fabrication process. We propose an enhanced YOLOv8 algorithm to train a defect detection model capable of identifying and evaluating defect images. To assess the feasibility of our approach, we took the extrusion 3D printing process as the application object and prepared a dataset comprising a total of 3550 images across four typical defect categories. Test results demonstrated that the improved YOLOv8 model achieved a mean average precision (mAP50) of 91.7% at a frame rate of 71.9 frames per second.

1. Introduction

To date, additive manufacturing technologies have advanced to the point of enabling 3D microelectronics, owing to their prominent advantages of design and fabrication flexibility, cost-effectiveness, and high customization [1,2,3,4]. However, the process chain must integrate multiple fabrication processes to handle diverse materials and build various microstructures [5,6,7,8]. During fabrication, defects may arise from issues such as improper calibration, aiming errors, temperature fluctuations, internal stress, and material flow [9]. These defects significantly degrade the quality of electronic products and can even result in total failure. Consequently, developing in situ, non-destructive defect detection strategies for additive manufacturing has become an urgent task for ensuring the successful fabrication of microelectronics. Such strategies can improve the quality and consistency of 3D-printed parts by detecting and correcting errors in time, and they reduce material waste and time costs by avoiding the need to reprint defective parts or perform post-processing inspection. They can further enable intelligent real-time feedback control for 3D printing by using machine learning to analyze the defects detected in situ.
The earliest detection method in additive manufacturing was manual visual inspection. However, it suffers from inefficiency, operator fatigue, and limited applicability, and it fails to meet the efficiency and quality requirements of modern industrial production lines. Therefore, more efficient and reliable inspection technologies are urgently needed to address these limitations and enhance quality control in additive manufacturing processes [10,11].
ISO 9712 defines non-destructive testing (NDT) as a discipline that employs physical principles for inspection [12]. Current NDT methods applicable to additive manufacturing processes include acoustic emission testing, eddy current testing, magnetic testing, and laser-ultrasonic testing, which can precisely identify the location and size of defects [13,14,15]. However, these techniques normally rely on specialized and costly equipment that is difficult to install within AM apparatus for in-situ inspection.
Comparatively, machine vision has been regarded as a suitable candidate for in-situ and real-time detection [16,17,18,19]. As a measurement and inspection technology, machine vision improves detection efficiency, automation, real-time performance, and accuracy, particularly in large-scale and long-term industrial production processes. It requires only hardware for deploying the model and a camera for capturing images. Common machine vision defect detection methods include thresholding, edge detection, and feature extraction and description. With the development of deep learning, machine vision defect detection methods now generally employ neural networks and operate in a real-time, intelligent manner [16]. Such methods have also been applied to defect detection in the AM area. For example, G. Bakas et al. proposed a computer vision-based method for automatic defect detection in the fused deposition modelling (FDM) process [20]. Xu et al. developed an improved one-stage model based on You Only Look Once (YOLO) v4 to detect the print quality of the FDM process [21]. However, both works focused only on material filling defects and did not cover major defects that often occur in extrusion-based AM processes, such as scratches, holes, and impurities, which largely limits the detection capability of those systems.
Herein, we propose an advanced one-stage defect detection strategy based on machine vision and deep learning for AM processes, which enables highly precise in-situ, real-time detection of four defect categories in a variety of scenarios. Firstly, a defect dataset of an extrusion 3D printing process is prepared via an industrial camera. Secondly, the YOLOv8 neural network is improved by replacing the localization loss function and introducing an attention mechanism into the backbone network, which significantly improves the detection capability of the model. Moreover, a defect quantification method is introduced to provide detailed defect information based on the number of defect pixels and the physical size corresponding to each image pixel. Finally, the proposed strategy is deployed and verified during the layer-by-layer extrusion 3D printing procedure.

2. Methods

2.1. 3D Printing

All the samples were designed using AutoCAD 2019 software (Autodesk, San Francisco, CA, USA) and saved in .stl format. The model was then imported into slicing software and converted into G-code to operate a self-made extrusion-based 3D printer (Figure 1a) [22]. The printer’s core consisted of a 3-axis CNC moving platform equipped with a precision pneumatic extruder. During printing, an air pump precisely controlled the extrusion of a calcium carbonate-based paste from the syringe nozzle, and this paste was deposited layer by layer onto the substrate as guided by the CNC router’s movements. Key parameters for this work included a nozzle diameter of 510 μm, a layer height of 300 μm, an extrusion pressure of 40 kPa, and a nozzle moving speed of 250 mm/min.

2.2. Dataset Collection

To generate the defective dataset, we integrated an image acquisition system with a custom extrusion 3D printer to capture defect images (Figure 1b). In the extrusion 3D printing process, the morphology of each printed layer should be smooth without any cracks (Figure 2a). There are four primary defects including scratches, holes, over-extrusion, and impurities, that usually occur during printing (Figure 2b). These raw defect images underwent additional post-processing steps, including cropping, denoising, and annotation, to obtain high-resolution images devoid of background and noise interference.

2.2.1. Defect Image Collection

Deep learning-based defect detection heavily depends on the quality of the dataset. Hence, we opted for a high-performance industrial camera, the Hikvision MV-CA-10GM/GC, which offers a 6-megapixel resolution and uses Gigabit Ethernet (GigE) for swift, high-quality image transmission. The industrial camera is depicted in Figure 1b. Images of the four defect types, namely scratches, holes, over-extrusions, and impurities, were captured, yielding 3624 images in total.

2.2.2. Image Check and Process

The captured images underwent a manual inspection process, during which 74 images were identified as ineffective and removed from the dataset. These images were excluded for reasons such as exposure abnormalities, the absence of discernible defects, or excessive defects that could not be adequately annotated. Ultimately, a dataset containing 3550 valid defect images across the four defect types was obtained. The four defect categories are illustrated in Figure 2b.
The collected images required additional processing before they could be used as a training dataset. To eliminate irrelevant backgrounds that might interfere with training, we used Adobe Photoshop 23.4.1 for image cropping, ensuring that the printed product occupied more than 70% of each image. After cropping, the minimum image size in the dataset was 850 × 850 pixels. To improve training efficiency and generalization ability, YOLOv8 normalizes the image size to 640 × 640 during model training, so all images in the dataset met the resolution required for YOLOv8 training. To reduce noise stemming from environmental factors during image acquisition, we applied Gaussian denoising with a 7 × 7 kernel and a standard deviation of 1.5. For annotation of the four defect categories, we employed LabelImg 1.8.6, which recorded defect locations and categories in corresponding .txt files. Subsequently, the defect dataset was randomly divided into training, validation, and test sets: the training set comprised 80% of the images (2840 images), while the validation and test sets each contained 10% (355 images each). This ratio has been adopted in many previous works to ensure that the model has enough data to learn from and to avoid underfitting [23,24,25,26].
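For reference, the denoising and random split described above can be reproduced with a short script. Below is a minimal sketch assuming the cropped images are stored as .jpg files in a single folder; the folder paths, file extension, and random seed are illustrative assumptions rather than details from the paper.

```python
# Minimal preprocessing sketch: 7x7 Gaussian denoising (sigma = 1.5)
# followed by an 80/10/10 train/validation/test split.
import random
from pathlib import Path

import cv2


def denoise(src_dir: str, dst_dir: str) -> None:
    """Apply the 7x7 Gaussian filter with sigma = 1.5 to every image."""
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for img_path in Path(src_dir).glob("*.jpg"):
        img = cv2.imread(str(img_path))
        blurred = cv2.GaussianBlur(img, (7, 7), 1.5)
        cv2.imwrite(str(Path(dst_dir) / img_path.name), blurred)


def split_dataset(image_dir: str, seed: int = 0):
    """Randomly split images into 80% train, 10% validation, 10% test."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n = len(images)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (images[:n_train],
            images[n_train:n_train + n_val],
            images[n_train + n_val:])
```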

2.3. YOLOv8 Algorithm

To date, industrial defect detection predominantly relies on one-stage target detection algorithms from the You Only Look Once (YOLO) series. YOLO has evolved rapidly since its inception in 2016, with successive variants aiming to better address the needs of object detection, including fast detection, high accuracy, and deployment on constrained edge devices. The latest version, YOLOv8, emphasizes real-time performance and high classification accuracy while optimizing computational requirements. YOLOv8 is available in five sizes: YOLOv8 nano (YOLOv8n), YOLOv8 small (YOLOv8s), YOLOv8 medium (YOLOv8m), YOLOv8 large (YOLOv8l), and YOLOv8 extra large (YOLOv8x). Larger models achieve higher accuracy at the cost of lower speed, so a balance between accuracy and speed must be struck according to the requirements of the specific application. In this research, we employed YOLOv8n, the smallest model of the latest YOLO iteration, to ensure high detection and diagnosis speed. By integrating an attention mechanism, enhancing the loss function, and devising a defect quantification method, we achieve intelligent defect detection in the AM process.

2.3.1. Network Architecture

Figure 3 illustrates the structure of YOLOv8, which consists of three main components: the backbone, the neck, and the head. The backbone serves as the fundamental framework of YOLOv8 and comprises a sequence of deep convolutional neural networks (CNNs) that process the input image and extract high-level features. To achieve adaptively sized output, the final layer of the backbone employs spatial pyramid pooling fast (SPPF). The neck module utilizes a variant of the feature pyramid network (FPN); it combines features from various layers of the backbone and merges them into a unified feature pyramid. This approach enables YOLOv8 to capture low-level and high-level features concurrently, facilitating the detection of objects at different scales. The head module is the final component of YOLOv8's architecture and is responsible for generating object detection predictions. YOLOv8 adopts an anchor-free design, eliminating the need to generate anchor boxes; this reduces computational complexity and lowers model storage and computing requirements.

2.3.2. Attention Mechanism

When training the model, conventional YOLOv8 algorithms treat all the features as equally important, which can lead to inaccurate defect identification. To address this issue, an attention mechanism was added to enable networks to focus on crucial regions, thereby improving the accuracy and efficiency of defect detection. The bottleneck attention module (BAM) and convolutional block attention module (CBAM) extract positional attention information through convolutions after reducing the number of channels, but they struggle to capture long-range dependencies. In contrast, the recent coordinate attention (CA) method encodes both horizontal and vertical positional information, allowing networks to extract a wide range of spatial information without excessive computation. The structure diagram of CA is depicted in Figure 4 [27].
CA improves upon the capabilities of SE and CBAM by considering both inter-channel and spatial information [28,29]. The process involves three steps: coordinate information embedding, spatial transformation, and weighted fusion. CA calculates a similarity matrix using position embedding vectors and performs weighted fusion with the feature map to obtain the final representation. The horizontal and vertical coordinate embeddings are defined as follows:
$$z_c^h(h) = \frac{1}{W} \sum_{0 \le i < W} x_c(h, i)$$
$$z_c^w(w) = \frac{1}{H} \sum_{0 \le j < H} x_c(j, w)$$
This method takes the input x and applies two pooling kernels with spatial extents of (H, 1) and (1, W) to encode the c-th channel along the horizontal and vertical directions, respectively. As a result, a pair of direction-aware feature maps is obtained.
In this paper, the CA module is inserted before the SPPF module of the YOLOv8 backbone network, aiming to enhance the algorithm’s feature learning capability and enable it to concentrate on relevant and valuable information. Figure 5 illustrates the structure of the backbone network after adding the CA module.
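For illustration, the following is a minimal PyTorch sketch of a coordinate attention block of the kind inserted before SPPF. The reduction ratio and the h-swish activation follow the original CA paper [27]; they are assumptions here, since those hyperparameters are not specified in this work.

```python
import torch
import torch.nn as nn


class CoordAtt(nn.Module):
    """Coordinate attention block (after Hou et al. [27]).
    Pools along H and W separately, shares a 1x1 conv, then produces
    direction-aware attention maps that rescale the input feature map."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                         # average over width
        x_w = self.pool_w(x).permute(0, 1, 3, 2)     # average over height
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                        # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))    # (B, C, 1, W)
        return x * a_h * a_w                         # direction-aware reweighting
```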

2.3.3. EIOU Loss Function

In YOLOv8, the classification task utilizes the binary cross entropy (BCE) loss function, while the regression task combines the distribution focal loss (DFL) and the complete intersection over union (CIOU) loss functions. Intersection over union (IOU) is a widely used evaluation metric in object detection that measures the overlap between the predicted bounding box and the ground truth bounding box of an object. Both CIOU and efficient intersection over union (EIOU) are bounding box regression loss functions. CIOU incorporates an additional term that penalizes variations in aspect ratio and adjusts the IOU value accordingly; however, it does not account for the orientation between the ground truth box and the predicted box, which leads to slower convergence [30]. EIOU, on the other hand, measures the similarity between the predicted box and the ground truth box by considering the distances between their center points, widths, and heights. This enables better handling of rotation, occlusion, and misalignment among target boxes, ultimately improving detection accuracy [31]. The EIOU loss function is defined as:
$$L_{EIOU} = L_{IOU} + L_{dis} + L_{asp} = 1 - IOU + \frac{\rho^2(b, b^{gt})}{(w^c)^2 + (h^c)^2} + \frac{\rho^2(w, w^{gt})}{(w^c)^2} + \frac{\rho^2(h, h^{gt})}{(h^c)^2}$$
where $L_{IOU}$ denotes the IOU loss, $L_{dis}$ the center distance loss, and $L_{asp}$ the width and height loss; $IOU$ is the intersection-over-union ratio, which measures the degree of overlap between the predicted box and the target box; $\rho(\cdot)$ denotes the Euclidean distance; $b$ and $b^{gt}$ are the center points of the predicted and target boxes; $w$, $w^{gt}$, and $w^c$ denote the widths of the predicted box, the target box, and the smallest enclosing box covering the two boxes, respectively, and $h$, $h^{gt}$, and $h^c$ denote the corresponding heights.
EIOU is a more sensitive metric than CIOU since it considers the differences in width and height directly rather than relying solely on the aspect ratio. This allows EIOU to measure the shape and size of bounding boxes more accurately, thereby improving the accuracy of object detection. Models that use the EIOU loss function demonstrate superior performance, particularly in detecting small and overlapping objects. In this paper, the EIOU loss function is adopted in place of the CIOU loss function for bounding box regression.
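A minimal PyTorch sketch of the EIOU loss above is given below for boxes in (x1, y1, x2, y2) format. The function name and box format are illustrative assumptions; in the actual model the loss is evaluated inside the YOLOv8 regression branch rather than as a standalone call.

```python
import torch


def eiou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """EIOU loss for boxes of shape (N, 4) in (x1, y1, x2, y2) format:
    1 - IOU + center-distance term + width term + height term."""
    # Intersection and union
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box dimensions (w^c, h^c)
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])

    # Normalized squared distance between box centers
    cx_p, cy_p = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx_t, cy_t = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    dist = ((cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2) / (cw ** 2 + ch ** 2 + eps)

    # Independent width and height penalty terms
    w_p, h_p = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w_t, h_t = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    asp = (w_p - w_t) ** 2 / (cw ** 2 + eps) + (h_p - h_t) ** 2 / (ch ** 2 + eps)

    return (1 - iou + dist + asp).mean()
```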

2.3.4. Defect Information Evaluation

The detection algorithm described above can obtain the specific type and location of defects, but it cannot provide more detailed defect information such as defect size and area. To address this limitation, this paper introduces a method to extract defect sizes by analyzing the pixel information of the detected bounding boxes in the image. The method establishes a proportional relationship between pixel size and physical size, using the 3D printer's build platform as the reference. The specific formula is:
$$P = \frac{W}{w}$$
where W is the measured physical width of the printing platform, w is its corresponding width in pixels in the captured image, and P is the resulting physical size represented by each image pixel.
The extraction of defect information involves several steps. First, when defects are captured and identified, their corresponding image pixels are also obtained. Then, based on the number of pixels and the physical size of each pixel, the dimensions, area, and proportion of the defect area in the current printing layer can be calculated. If the area of a detection box exceeds one-fourth of the printed product's modeled area, we define it as a special defect, indicating that a severe defect has been detected on that layer. Finally, the diagnostic information (the total number of defects, the number of special defects, and the defect area ratio) is labeled on the detected image. The flow chart of the defect information evaluation process is presented in Figure 6.
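The quantification rule can be summarized by the short sketch below. The platform width value, box format, and function names are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the pixel-to-physical defect quantification described above.
PLATFORM_WIDTH_MM = 100.0  # measured platform width W (assumed value for illustration)


def pixel_size(platform_width_px: int) -> float:
    """P = W / w: physical size of one image pixel in millimetres."""
    return PLATFORM_WIDTH_MM / platform_width_px


def evaluate_layer(boxes, platform_width_px, product_area_mm2):
    """boxes: list of (x1, y1, x2, y2) detections in pixels for one layer.
    Returns the total defect count, special-defect count, and defect area ratio."""
    p = pixel_size(platform_width_px)
    areas = [abs(x2 - x1) * abs(y2 - y1) * p * p for x1, y1, x2, y2 in boxes]
    special = sum(a > product_area_mm2 / 4 for a in areas)  # one-quarter-area rule
    ratio = sum(areas) / product_area_mm2
    return len(boxes), special, ratio
```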

3. Results

To validate the practicality of the proposed defect detection method, this paper chose the extrusion 3D printing process as the application object. The defect dataset prepared in Section 2.2 was used for model training. The performance of commonly used one-stage and two-stage target detection models was first compared to show the superiority of YOLOv8n, and the performance of the improved YOLOv8n was then investigated via ablation experiments. The PyTorch framework was used to implement and train the YOLOv8n defect detection model [32]. The experimental setup included Python 3.8, CUDA 11.7, cuDNN 8.5, and an NVIDIA GeForce RTX 3060 Ti GPU with 8 GB of memory, which provided the hardware infrastructure for training and running the model.

3.1. Evaluation Indicators

The performance of the detection model was evaluated using five key indicators: precision (P), recall (R), average precision (AP), mean average precision (mAP), and frames per second (FPS). These indicators provide comprehensive metrics to assess the accuracy and efficiency of the detection model [33]. The formulas of these indicators are as follows:
$$\text{Precision} = \frac{TP}{TP + FP}$$
$$\text{Recall} = \frac{TP}{TP + FN}$$
$$AP = \int_0^1 P(R)\,dR$$
During the evaluation of a test point, it can fall into one of the four categories: false positive (FP), the system incorrectly predicts the test point as positive when it is negative; false negative (FN), the system incorrectly predicts the test point as negative when it is positive; true positive (TP), the system correctly predicts the positive label for the test point; true negative (TN), the system correctly predicts the negative label for the test point.
Mean average precision (mAP) denotes the mean average precision of all classes, providing an overall performance measure for the model. The mAP formula is:
$$mAP = \frac{1}{n} \sum_{i=1}^{n} AP_i$$
mAP50 is an indicator that measures the mean average precision across different classes when the IOU threshold is 50%. mAP50–95 is an indicator that measures the mean average precision across different classes by considering IOU thresholds ranging from 50% to 95%.
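These metrics can be computed directly from detection results. The following is a small NumPy sketch of precision, recall, AP (as the all-point interpolated area under the precision-recall curve), and mAP; it is independent of the Ultralytics implementation actually used for evaluation.

```python
import numpy as np


def precision_recall(tp: int, fp: int, fn: int):
    """Precision = TP / (TP + FP), Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


def average_precision(recalls: np.ndarray, precisions: np.ndarray) -> float:
    """AP = area under the precision-recall curve (all-point interpolation).
    Assumes recalls are sorted in ascending order."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([1.0], precisions, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]   # monotonically decreasing envelope
    return float(np.sum(np.diff(r) * p[1:]))


def mean_average_precision(ap_per_class) -> float:
    """mAP = mean of per-class AP values (mAP50 when the IOU threshold is 0.5)."""
    return float(np.mean(ap_per_class))
```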

3.2. Performance Comparison Experiment

To evaluate the performance of YOLOv8n, we compared it with several state-of-the-art one-stage and two-stage object detection models, including Faster R-CNN, Cascade R-CNN, YOLOv5s, YOLOv6n, YOLOv7, and PP-YOLOEs, on our dataset. The training hyperparameters were kept as identical as possible across these models over 300 training epochs. The specific performance data for each model are presented in Table 1.
According to the results, the original YOLOv8n model achieved the highest precision, recall, and mAP50 values among all the models evaluated, indicating that YOLOv8n provided the best detection performance. In terms of speed, the two-stage models were generally slower than the one-stage models because they involve an additional step of generating candidate boxes, which increases computational requirements and complexity. YOLOv5s achieved the fastest detection speed of all the models evaluated because it uses a pruned CSP-Darknet53 network as its feature extractor, which reduces feature map redundancy and improves network parallelism, enhancing efficiency and performance. In comparison, other models such as YOLOv6n use a deeper and wider network than YOLOv5s, which increases computational cost and memory consumption. Considering detection time and detection accuracy together, YOLOv8n was the better choice among these models.

3.3. Ablation Experiment

In this experiment, the network hyperparameters were configured as follows: the Adam optimizer was used for parameter tuning; the category confidence threshold was set to 0.5; the initial learning rate was 0.001, with a warm-up strategy applied to adjust it; the batch size was 32, meaning 32 images were processed in each training iteration; training spanned 300 epochs, so the entire dataset was iterated 300 times; the input image size was normalized to 640 × 640; and 8 worker threads were used.
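With the Ultralytics API, the hyperparameter configuration above corresponds roughly to the call sketched below. Here "defects.yaml" is an assumed dataset configuration file, and the CA and EIOU modifications are assumed to already be patched into the model definition; this is not the authors' exact training script.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # official pretrained weights for transfer learning
model.train(
    data="defects.yaml",            # paths to train/val images and the 4 defect classes
    epochs=300,                     # the entire dataset is iterated 300 times
    imgsz=640,                      # inputs normalized to 640 x 640
    batch=32,                       # 32 images per training iteration
    workers=8,                      # 8 data-loading worker threads
    optimizer="Adam",               # Adam optimizer for parameter tuning
    lr0=0.001,                      # initial learning rate (adjusted by warm-up)
)
```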
In the experiment, the official pretrained YOLOv8n model was selected as the baseline for model initialization; transfer learning from these weights accelerates training and enhances performance on object detection in new scenarios. Four models, denoted MDL 1–4, were trained with the original YOLOv8n, YOLOv8n with CA, YOLOv8n with EIOU, and YOLOv8n with both CA and EIOU, respectively. Their precision, recall, average precision, and mean average precision were systematically compared and analyzed below.
The results of MDL 1, shown in Figure 7a, exhibited a moderate level of detection performance. First, for the four defect categories, the precision, recall, and AP50 values were all around 80%, indicating that the defect detection algorithm could competently identify the different defect categories. Secondly, the AP50–95 values were considerably lower, mainly because the algorithm struggled to achieve high accuracy at higher IOU thresholds. Moreover, the over-extrusion category showed higher precision but a lower recall rate. This can be attributed to the fact that most small over-extrusion defects differ little from the normal printing surface; only when the over-extrusion was severe did the protruding profile become distinguishable.
When CA was incorporated into YOLOv8n training, the results of MDL 2 demonstrated a significant improvement in precision for all categories (Figure 7b). The precision for the scratch and impurity categories increased by more than 10%. This improvement arose because CA makes the model focus on crucial areas during image processing and avoids interference from the image background; the attention mechanism thus enhances target detection accuracy.
By utilizing the EIOU loss function, a remarkable enhancement of the overall recall rate was observed (Figure 7c). The EIOU loss calculates the width and height differences independently rather than relying on their aspect ratio. This modification enables faster convergence of the model and superior performance in detecting small and overlapping objects.
Figure 7d illustrates the performance of MDL 4, which was trained with YOLOv8n incorporating both CA and EIOU. This final model struck a good balance between recall and precision. We summarize and compare the average performance of the four models over the four defect categories in Figure 7e, where the values of the key indices are averages over the four defect types. MDL 4 achieved the highest mAP50, 91.7%, among the four models. Since mAP50 comprehensively evaluates precision and recall across multiple classes, it can be regarded as a key indicator of object detection accuracy. Therefore, the YOLOv8n model with both CA and EIOU significantly enhanced defect detection performance.
Figure 8 shows the detection and diagnosis results for white disk-like samples with a single type of defect; the system successfully detected each defect in the samples. Figure 9a shows the detection and diagnosis results for disk-like samples with different types of defects. For both white and black disks, the detection was conducted correctly, and different defects were identified simultaneously. For samples with other geometries, our system also worked well, distinguishing the different defects on substrates with dumbbell, star, and square shapes (Figure 9b). All these results show that the proposed method can detect multiple defects in samples with different backgrounds and structures.

3.4. System Integration

As shown in Figure 10, the real-time defect detection module was integrated with the extrusion 3D printing apparatus using a Jetson Nano as the computing platform. The Jetson Nano's GPU efficiently accelerated the improved YOLOv8n model, and communication with the 3D printer's mainboard was established via USB. A Python program on the Jetson Nano captured in-situ images with the camera. During the building of an object, when a layer finishes printing, the nozzle returns to the origin and the camera is triggered to capture an image of that layer. Printing then resumes while the captured image is processed by the improved YOLOv8n model for detection and diagnosis. After printing is completed, a comprehensive diagnostic report of the entire sample is compiled from the per-layer analysis. In the future, a real-time feedback control mechanism can be developed to adjust printing parameters promptly based on the diagnostic results of each layer, which would significantly enhance the fabrication quality of additive manufacturing technologies.
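A simplified sketch of the per-layer monitoring logic on the Jetson Nano is shown below. The weight filename, camera index, and trigger mechanism are illustrative assumptions rather than the authors' actual interface; in the real system the capture is triggered each time the printer mainboard, connected over USB, reports that the nozzle has returned to the origin.

```python
import cv2
from ultralytics import YOLO

model = YOLO("best_ca_eiou.pt")   # improved YOLOv8n weights (assumed filename)
camera = cv2.VideoCapture(0)      # monitoring camera exposed as device 0 (assumption)


def inspect_layer(layer_index: int):
    """Capture the just-finished layer and run defect detection on it."""
    ok, frame = camera.read()
    if not ok:
        raise RuntimeError("camera read failed")
    result = model(frame, conf=0.5)[0]          # detection at confidence threshold 0.5
    boxes = result.boxes.xyxy.cpu().numpy()     # (N, 4) defect boxes in pixels
    return layer_index, boxes


# Single-layer demonstration; the real loop runs once per printed layer and
# aggregates per-layer results into the final diagnostic report.
reports = [inspect_layer(i) for i in range(1)]
```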

4. Conclusions

In this study, we proposed an in situ defect detection strategy that integrates machine vision and deep learning technologies into additive manufacturing processes. An improved YOLOv8 algorithm with coordinate attention (CA) and the EIOU loss function was developed to train the defect detection model on a self-made defect image dataset. This model can be employed to monitor fabrication defects in situ during the 3D printing procedure. The typical defects of the extrusion 3D printing process were included in the dataset to verify the performance of this strategy. Experimental results showed that the improved YOLOv8n model achieves a mean average precision (mAP50) of 91.7% at a high speed of 71.9 FPS. We deployed the proposed strategy on an extrusion 3D printing apparatus for real-time quality monitoring. However, the method still has two main limitations: (1) defect depth information cannot be recovered from the captured 2D images, so it is difficult to obtain three-dimensional information about defects in additively manufactured products; and (2) the improved YOLOv8 used in this work is a supervised algorithm that requires a large labeled dataset, and this annotation step usually consumes considerable labour and time. In the future, 3D printing settings can be adjusted in time based on feedback from the defect detection module to ensure a high success rate in additive manufacturing.

Author Contributions

Conceptualization, J.L.; data curation, W.W., X.C., G.W. and Y.L.; formal analysis, W.W., H.Z. and Y.L.; funding acquisition, H.L. and J.L.; investigation, W.W., H.Z., X.C., G.W. and Y.L.; methodology, W.W., M.C., H.L. and J.L.; project administration, J.L.; software, J.L.; supervision, M.C. and J.L.; validation, W.W. and G.W.; visualization, W.W., P.W., H.Z. and X.C.; writing—original draft, W.W., P.W. and J.L.; writing—review and editing, P.W. and J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China [Grant Nos. 61974025, 61504024, and 62004061], the Fundamental Research Funds for the Central Universities, China, and the Innovative and Entrepreneurial Talent Plan of Jiangsu Province, China.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Eckel, Z.C.; Zhou, C.; Martin, J.H.; Jacobsen, A.J.; Carter, W.B.; Schaedler, T.A. Additive Manufacturing of Polymer-Derived Ceramics. Science 2016, 351, 58–62. [Google Scholar] [CrossRef] [PubMed]
  2. Cui, H.; Yao, D.; Hensleigh, R.; Lu, H.; Calderon, A.; Xu, Z.; Davaria, S.; Wang, Z.; Mercier, P.; Tarazaga, P.; et al. Design and Printing of Proprioceptive Three-Dimensional Architected Robotic Metamaterials. Science 2022, 376, 1287–1293. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, P.; Li, J.; Wang, G.; He, L.; Yang, J.; Zhang, C.; Han, Z.; Yan, Y. Hybrid Additive Manufacturing Based on Vat Photopolymerization and Laser-Activated Selective Metallization for Three-Dimensional Conformal Electronics. Addit. Manuf. 2023, 63, 103388. [Google Scholar] [CrossRef]
  4. Wang, P.; Li, J.; Deng, L.; Liu, S.; Wang, G.; Huang, J.; Tang, X.; Han, L. Laser-Activated Selective Electroless Plating on 3D Structures via Additive Manufacturing for Customized Electronics. Adv. Mater. Technol. 2023, 8, 2300516. [Google Scholar] [CrossRef]
  5. 3D Printing of Multilayered and Multimaterial Electronics: A Review—Goh—2021—Advanced Electronic Materials—Wiley Online Library. Available online: https://onlinelibrary.wiley.com/doi/abs/10.1002/aelm.202100445 (accessed on 28 September 2023).
  6. Schwartz, J.J.; Boydston, A.J. Multimaterial Actinic Spatial Control 3D and 4D Printing. Nat. Commun. 2019, 10, 791. [Google Scholar] [CrossRef]
  7. Wang, P.; Li, J.; Wang, G.; He, L.; Yu, Y.; Xu, B. Multimaterial Additive Manufacturing of LTCC Matrix and Silver Conductors for 3D Ceramic Electronics. Adv. Mater. Technol. 2022, 7, 2101462. [Google Scholar] [CrossRef]
  8. Sampson, K.L.; Deore, B.; Go, A.; Nayak, M.A.; Orth, A.; Gallerneault, M.; Malenfant, P.R.L.; Paquet, C. Multimaterial Vat Polymerization Additive Manufacturing. ACS Appl. Polym. Mater. 2021, 3, 4304–4324. [Google Scholar] [CrossRef]
  9. Meng, L.; McWilliams, B.; Jarosinski, W.; Park, H.-Y.; Jung, Y.-G.; Lee, J.; Zhang, J. Machine Learning in Additive Manufacturing: A Review. JOM 2020, 72, 2363–2377. [Google Scholar] [CrossRef]
  10. Mital, A.; Govindaraju, M.; Subramani, B. A Comparison between Manual and Hybrid Methods in Parts Inspection. Integr. Manuf. Syst. 1998, 9, 344–349. [Google Scholar] [CrossRef]
  11. Malamas, E.N.; Petrakis, E.G.M.; Zervakis, M.; Petit, L.; Legat, J.-D. A Survey on Industrial Vision Systems, Applications and Tools. Image Vis. Comput. 2003, 21, 171–188. [Google Scholar] [CrossRef]
  12. ISO 9712; 2012 Non-Destructive Testing—Qualification and Certification of NDT Personnel. ISO: Geneva, Switzerland, 2012.
  13. Shaloo, M.; Schnall, M.; Klein, T.; Huber, N.; Reitinger, B. A Review of Non-Destructive Testing (NDT) Techniques for Defect Detection: Application to Fusion Welding and Future Wire Arc Additive Manufacturing Processes. Materials 2022, 15, 3697. [Google Scholar] [CrossRef] [PubMed]
  14. Ramírez, I.S.; Márquez, F.P.G.; Papaelias, M. Review on Additive Manufacturing and Non-Destructive Testing. J. Manuf. Syst. 2023, 66, 260–286. [Google Scholar] [CrossRef]
  15. Machado, M.A.; Rosado, L.S.; Santos, T.G. Shaping Eddy Currents for Non-Destructive Testing Using Additive Manufactured Magnetic Substrates. J. Nondestruct. Eval. 2022, 41, 50. [Google Scholar] [CrossRef]
  16. Ren, Z.; Fang, F.; Yan, N.; Wu, Y. State of the Art in Defect Detection Based on Machine Vision. Int. J. Precis. Eng. Manuf.-Green Technol. 2022, 9, 661–691. [Google Scholar] [CrossRef]
  17. Fu, Y.; Downey, A.R.J.; Yuan, L.; Zhang, T.; Pratt, A.; Balogun, Y. Machine Learning Algorithms for Defect Detection in Metal Laser-Based Additive Manufacturing: A Review. J. Manuf. Process. 2022, 75, 693–710. [Google Scholar] [CrossRef]
  18. Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo Algorithm Developments. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
  19. Zdravković, M.; Korunović, N. Novel Methodology for Real-Time Structural Analysis Assistance in Custom Product Design. FU Mech Eng. 2023, 21, 293. [Google Scholar] [CrossRef]
  20. Bakas, G.; Bei, K.; Skaltsas, I.; Gkartzou, E.; Tsiokou, V.; Papatheodorou, A.; Karatza, A.; Koumoulos, E.P. Object Detection: Custom Trained Models for Quality Monitoring of Fused Filament Fabrication Process. Processes 2022, 10, 2147. [Google Scholar] [CrossRef]
  21. Xu, L.; Zhang, X.; Ma, F.; Chang, G.; Zhang, C.; Li, J.; Wang, S.; Huang, Y. Detecting Defects in Fused Deposition Modeling Based on Improved YOLO V4. Mater. Res. Express 2023, 10, 095304. [Google Scholar] [CrossRef]
  22. Wang, P.; Li, J.; Wang, G.; Hai, Y.; He, L.; Yu, Y.; Wang, X.; Chen, M.; Xu, B. Selectively Metalizable Low-Temperature Cofired Ceramic for Three-Dimensional Electronics via Hybrid Additive Manufacturing. ACS Appl. Mater. Interfaces 2022, 14, 28060–28073. [Google Scholar] [CrossRef]
  23. Liu, J.; Zhu, X.; Zhou, X.; Qian, S.; Yu, J. Defect Detection for Metal Base of TO-Can Packaged Laser Diode Based on Improved YOLO Algorithm. Electronics 2022, 11, 1561. [Google Scholar] [CrossRef]
  24. Zhang, Y.; Ni, Q. A Novel Weld-Seam Defect Detection Algorithm Based on the S-YOLO Model. Axioms 2023, 12, 697. [Google Scholar] [CrossRef]
  25. Jiang, L.; Yuan, B.; Wang, Y.; Ma, Y.; Du, J.; Wang, F.; Guo, J. MA-YOLO: A Method for Detecting Surface Defects of Aluminum Profiles With Attention Guidance. IEEE Access 2023, 11, 71269–71286. [Google Scholar] [CrossRef]
  26. Du, B.; Wan, F.; Lei, G.; Xu, L.; Xu, C.; Xiong, Y. YOLO-MBBi: PCB Surface Defect Detection Method Based on Enhanced YOLOv5. Electronics 2023, 12, 2821. [Google Scholar] [CrossRef]
  27. Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13708–13717. [Google Scholar]
  28. Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023. [Google Scholar] [CrossRef] [PubMed]
  29. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Computer Vision—ECCV 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 11211, pp. 3–19. ISBN 978-3-030-01233-5. [Google Scholar]
  30. Zheng, Z.; Wang, P.; Ren, D.; Liu, W.; Ye, R.; Hu, Q.; Zuo, W. Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation. IEEE Trans. Cybern. 2022, 52, 8574–8586. [Google Scholar] [CrossRef]
  31. Zhang, Y.-F.; Ren, W.; Zhang, Z.; Jia, Z.; Wang, L.; Tan, T. Focal and Efficient IOU Loss for Accurate Bounding Box Regression. Neurocomputing 2022, 506, 146–157. [Google Scholar] [CrossRef]
  32. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. arXiv 2019, arXiv:1912.01703. [Google Scholar]
  33. A Review of Research on Object Detection Based on Deep Learning—IOP Science. Available online: https://iopscience.iop.org/article/10.1088/1742-6596/1684/1/012028/meta (accessed on 28 September 2023).
Figure 1. Extrusion 3D printing system with machine vision module. (a) The schematic diagram of the whole system. (b) The photos of the 3D printing system and image acquisition system.
Figure 2. Photos of the front surface of the extrusion 3D printed layers. (a) The photo of the well-printed sample. (b) Four types of defects: scratch, hole, over-extrusion, and impurity.
Figure 3. YOLOv8 network architecture.
Figure 4. Coordinate attention (CA) structure.
Figure 5. The improved YOLOv8 backbone network incorporating the CA module.
Figure 6. The flow chart of defect information evaluation.
Figure 7. Detection performance of four YOLOv8n models.
Figure 8. Defect detection images with labelled diagnosis information: total defects (TD), special defects number (SDN), defects area ratio (DAR).
Figure 9. Defect detection images. (a) Images of multiple defects with the same shape. (b) Images of multiple defects with different shapes.
Figure 10. Jetson Nano-based integrated defect detection system for extrusion 3D printing apparatus.
Table 1. Detailed performance of each detection model.
Scheme          Precision   Recall   mAP50   FPS
Faster R-CNN    76.6%       68.8%    74.5%   5
Cascade R-CNN   75.1%       75.2%    78.4%   4
YOLOv5s         77.8%       68.0%    74.2%   100
YOLOv6n         80.8%       73.1%    79.1%   45
YOLOv7          80.1%       68.5%    76.7%   83
PP-YOLOEs       80.2%       74.3%    78.8%   66
YOLOv8n         85.1%       75.8%    82.7%   90
