Article

Intelligent Inspection Method and System of Plastic Gear Surface Defects Based on Adaptive Sample Weighting Deep Learning Model

by Zhaoyao Shi *, Yiming Fang and Huixu Song
Beijing Engineering Research Center of Precision Measurement Technology and Instruments, College of Mechanical & Energy Engineering, Beijing University of Technology, Beijing 100124, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(14), 4660; https://doi.org/10.3390/s24144660
Submission received: 6 June 2024 / Revised: 3 July 2024 / Accepted: 17 July 2024 / Published: 18 July 2024
(This article belongs to the Section Intelligent Sensors)

Abstract:
After injection molding, plastic gears often exhibit surface defects, including defects on end faces and tooth surfaces. These defects span a wide range of types with complex characteristics, which makes inspection challenging. Current visual inspection systems for plastic gears suffer from limitations such as single-category defect inspection and low accuracy, and the industry urgently needs a comprehensive, accurate method and system for inspecting plastic gear defects. This paper presents an intelligent inspection algorithm network for plastic gear defects (PGD-net), which captures subtle defect features at arbitrary locations on the surface more effectively than other models. An adaptive sample weighting method is proposed and integrated into an improved Focal-IoU loss function to address the low inspection accuracy caused by imbalanced defect dataset distributions, thus enhancing the regression accuracy for difficult defect categories. CoordConv layers are incorporated into each inspection head to improve the model’s generalization capability. Furthermore, a dataset of plastic gear surface defects comprising 16 defect types is constructed, on which our algorithm is trained and tested. PGD-net achieves a comprehensive mean average precision (mAP) of 95.6% across the 16 defect types. Additionally, an online inspection system developed on top of PGD-net can be integrated with plastic gear production lines to achieve full online inspection and automatic sorting of plastic gear defects. The entire system has been successfully applied on plastic gear production lines, inspecting over 60,000 gears daily.

1. Introduction

During the manufacturing process, plastic gears can develop a wide variety of surface defects due to issues related to materials, equipment, and processes [1,2]. As illustrated in Figure 1, common defects include flash, dark spots, void, point gate perforation, point gate protrusion, white, overflow, and burn, occurring on both end faces and tooth surfaces. Based on the causes of defect formation, they can be categorized into the following four types:
(1)
Injection molding defects caused by the molding process, such as flow marks, white, and burn;
(2)
Injection defects related to the storage or use of materials, including color variations and foreign particle inclusions (such as dark spots);
(3)
Injection defects caused by maintenance issues or poor mold design, such as flash, bubbles, overflow, oil contamination, and short shot;
(4)
Defects resulting from improper human operations, such as scratches from manually removing flash and tooth surface damage from mishandling.
In response to the vast production volume and diverse surface defects of plastic gears, there is a pressing need within the plastic gear industry for an online automated inspection device, as traditional gear measuring instruments fail to meet this demand. Therefore, visual online inspection technology for gears has rapidly developed, offering advantages such as non-contact operation, high inspection efficiency, and comprehensive information acquisition, which facilitates integration with automated equipment to achieve online inspection of plastic gear defects [3,4,5].
Traditional defect inspection algorithms often struggle to accurately distinguish the complex surface defect features of plastic gears. With the rapid development of deep learning technology, defect inspection algorithms based on deep learning have shown significant improvements in inspecting multiple complex defects. However, challenges still exist, such as the real-time problem, small sample problem, small target problem, and unbalanced sample problem [6,7,8,9,10,11,12,13].
Sun et al. [14] proposed a cascaded inspection method for surface defects based on high-resolution inspection images. By introducing a Euclidean distance width deviation into the CIoU localization loss function, they effectively inspected wireframe defects at multiple scales, achieving an mAP of 85.01% and an inference time of 37 ms. The method attained inspection accuracy exceeding 95% with a false negative rate below 6%, but covered only three types of defects: dirt, white scratches, and black scratches. He et al. [15] introduced a novel steel plate defect inspection system. The system employed a baseline convolutional neural network to generate feature maps at each stage and a multi-layer feature fusion network (MFN) to merge the hierarchical features into one, improving the features’ ability to capture the positions of more defects. The study established the NEU-DET defect inspection dataset, demonstrating an accuracy of 99.67% for defect classification and an mAP of 82.3% for defect inspection. However, the system could only inspect six types of steel plate defects.
In recent years, some scholars have conducted research on gear defect inspection algorithms based on deep learning. Zhang et al. [16] proposed an improved YOLO-v3 network to inspect stain and miss defects in plastic gears, achieving a false inspection rate of 1.3%; however, the study considered only two types of end-face defects and did not address tooth surface defects. Kamal et al. [17] employed two convolutional neural network classification methods to identify scratches, protrusions, hole erosion, and asymmetric block defects in gears. The first method had a shorter processing time of 0.09 s with an accuracy of 92%, while the second achieved a higher accuracy of 96.5% with an average time of 0.67 s. Nonetheless, the study focused on defects in metal gears, so its effectiveness for plastic gear inspection is unknown. Xi et al. [18] developed an integrated Yolov5-Deeplabv3+ real-time segmentation network for online measurement of pitting defects in metal gears, addressing sample imbalance issues; however, the study considered only a single pitting defect type in metal gears and lacked experimental results for mixed defect inspection. Xiao et al. [19] designed an improved GA-PSO algorithm for identifying four types of defects in powder metallurgy gears (tooth breakage, wear, cracking, and scratching), achieving an accuracy rate of over 94%; however, tooth surface defect inspection was not addressed.
In summary, existing deep learning algorithms for gear defect inspection still face challenges such as limited defect inspection types, low accuracy, and insufficient research on tooth surface defect inspection.
Furthermore, apart from defect inspection algorithms, it is equally important to design the structure and control system of a gear defect inspection system. Vibration-plate feeding methods, adopted by many defect inspection systems [20,21], are unsuitable for plastic gears because they cause significant damage. Gear defect inspection systems can be classified into two types: those integrated on conveyors and those independently mounted on glass turntables. Reference [22] describes a gear visual online inspection device located above a conveyor; it occupies little space, but several factors limit its inspection accuracy, e.g., unstable conveyor motion, susceptibility to wear, and variable product placement angles. Additionally, the opacity of the conveyor prevents image capture of the gear bottom surface. Reference [23] introduces a glass turntable-based online inspection and sorting system for injection-molded gears, which inspects surface defects such as dark spots and flash without flipping the product to obtain bottom-side inspection images. However, the device inspects only a few defect types and cannot collect tooth surface defect data.
All in all, plastic gear defect online inspection systems face three main challenges. First, the diversity and complexity of plastic gear defects, encompassing both end-face and tooth surface defects, pose significant inspection difficulties, and specialized multi-class defect fusion inspection algorithms for plastic gears remain under-researched. Second, research on plastic gear defect online inspection devices is scarce, so the industry lacks practical solutions for coupling online inspection equipment with plastic gear production lines. Third, integrating defect inspection algorithms with automated inspection equipment requires comprehensive consideration of the mechanical structure, control system, and inspection algorithms, involving multiple complex technologies, which makes it challenging to ensure the normal operation and coordination of the various modules [24].
To address the above issues, this study developed a method and system for inspecting comprehensive surface defects on plastic gears, enabling intelligent online inspection of 16 types of surface defects. The system has been successfully deployed in production lines. The following sections discuss each component in turn.

2. Plastic Gear Online Inspection System

The gear online inspection system (GOI system) developed in this study has been successfully deployed on plastic gear production lines, tightly integrated with the assembly line to achieve real-time online inspection and sorting of plastic gears. Over 60,000 plastic gears are inspected daily, with a physical representation of the production line shown in Figure 2.
The composition structure of the GOI system is illustrated in Figure 3, consisting of three main components: the dynamic inspection structure, the automatic control system, and the online inspection software.

2.1. Dynamic Inspection Structure

As shown in Figure 4, the dynamic inspection mechanism consists of several components, including the feeding mechanism, glass turntable motion mechanism, radial positioning wheel, fiber optic sensor, lower image acquisition mechanism, upper image acquisition mechanism I, upper image acquisition mechanism II, side image acquisition mechanism, automatic sorting and unloading mechanism, and cleaning mechanism. The entire process of plastic gear inspection and sorting is completed under dynamic rotation.
Firstly, a feeding mechanism is designed to achieve non-manipulator feeding. The glass turntable rotates counterclockwise about its axis under servo motor control, and the processes of gear feeding, positioning, image acquisition, and sorting and unloading are all driven by the glass turntable motion mechanism. The other mechanisms are arranged around the turntable in the order in which the gears pass. The gears are clamped within a fixed radius range by the radial positioning wheel, ensuring they stay within the camera’s field of view. The fiber optic sensor automatically detects when a gear arrives and determines its circumferential position. The lower image acquisition mechanism, upper image acquisition mechanisms I and II, and side image acquisition mechanism capture images of the bottom end face, upper end face (small tooth), upper end face (large tooth), and tooth surface of the plastic gears, respectively, achieving multi-surface image acquisition of the gears. The cleaning mechanism cleans the surface of the glass turntable, removing any interfering stains.
In order to automatically capture images of the tooth surface around the circumference of the plastic gear, a special design was implemented for the side image acquisition mechanism, which differs significantly from the other three image acquisition mechanisms. As illustrated in Figure 5, when the plastic gear reaches the side image acquisition mechanism, it is immediately inspected. At this point, the cylinder and slide table act together, causing the active blocking rod to move in the direction away from the axis. This, in conjunction with the fixed blocking rod, halts the gear’s movement. Subsequently, the pneumatic gripper clamps the gear and places it on the rotating table. The center of the rotating table is aligned with the center hole of the gear by a limit pin, securing the gear in place. The built-in motor within the rotating table rotates the gear one full revolution, while the side-mounted camera rapidly captures multiple tooth surface images covering the circumference of the gear, based on a predetermined frame rate.

2.2. Automatic Control System

The automatic control system of the GOI system is depicted in Figure 6, where a PLC serves as the central processor. It comprises input control modules, sensors, and actuating components, aimed at achieving motion control, positioning, image acquisition, inspection, sorting and unloading, as well as anomaly alert functionalities.
Referring to Figure 7, after the plastic gear is demolded, it is placed onto the conveyor belt by a robotic arm. The gear is then conveyed to the glass turntable for loading via the feeding mechanism. The PLC controls the servo motor to rotate the glass turntable counterclockwise starting from the loading position of the gear. When the gear reaches each image acquisition mechanism, the PLC triggers the corresponding camera to capture an image of the gear. Particularly, when the gear reaches the side image acquisition mechanism, the system first controls the pneumatic gripper to place the gear onto the rotating table. Subsequently, multiple images covering the circumference of the gear are captured. The online inspection software processes the gear images to produce inspection results, which are then sent to the PLC. When the gear reaches the corresponding discharge chute, the PLC sends an on/off signal to the respective solenoid valve, causing the air nozzle at the entrance of the chute to release air momentarily as the gear passes through, thereby blowing the gear into the corresponding discharge chute. This completes the entire process of plastic gear inspection and sorting.

3. Intelligent Inspection Network for Comprehensive Defects of Plastic Gears: PGD-net

At the core of the GOI system lies the online inspection software. We collected and established a dataset of surface defects on the upper end face (ejector pin point face), lower end face (gate point face), and side face (entire tooth surface) of plastic gears. Based on the sample distribution characteristics of the dataset, we constructed a deep learning network called Plastic Gear Defect-net (PGD-net). The basic architecture of this network is based on YOLOv5’s Backbone, Neck, and Head, and incorporates a new adaptive sample weight optimization method. It outperforms YOLOv5 in terms of sample balance, inference speed, and generalization ability. The online inspection software, which integrates PGD-net, can accurately and rapidly identify 16 types of surface defects from the captured multi-surface images of plastic gears.

3.1. PGD-net Architecture

As shown in Figure 8, the structure of PGD-net consists of three classic parts: Backbone, Neck, and Head.
Firstly, the Backbone is connected to the input end and is responsible for extracting features from input images, with the specified input image size being 640 × 640. The Backbone and Neck together comprise 10 Convolution-BatchNorm-SiLU (CBS) modules. Each CBS module consists of a convolutional layer, a Batch Normalization (BN) layer, and a SiLU layer (an activation function). The equation for the SiLU function is:
$\mathrm{SiLU}(x) = x \cdot \mathrm{Sigmoid}(x) = \frac{x}{1 + e^{-x}}$
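For concreteness, a CBS module can be sketched in PyTorch as follows; the kernel size, stride, and channel arguments are illustrative assumptions rather than the authors’ exact configuration:

```python
import torch.nn as nn

class CBS(nn.Module):
    """Convolution-BatchNorm-SiLU module, the basic building block of the Backbone/Neck."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()  # SiLU(x) = x * sigmoid(x)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))
```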
The Backbone and Neck consist of a total of eight C3-FasterNetBlock modules. In this study, all regular convolutional layers in the original C3 modules were replaced with FasterNet Blocks [25]. The structure of the FasterNet Block, as shown in Figure 9, is utilized to reduce redundant computations and memory access, effectively extracting spatial features.
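A minimal PyTorch sketch of this idea follows, assuming the partial convolution (PConv) design of FasterNet [25], in which a 3 × 3 convolution touches only a fraction of the channels while the rest pass through unchanged; the partial ratio and expansion factor here are illustrative assumptions:

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution: convolve only a fraction of the channels,
    cutting redundant computation and memory access."""
    def __init__(self, channels, part_ratio=0.25):
        super().__init__()
        self.part = max(1, int(channels * part_ratio))
        self.conv = nn.Conv2d(self.part, self.part, 3, 1, 1, bias=False)

    def forward(self, x):
        xa, xb = x[:, :self.part], x[:, self.part:]
        return torch.cat((self.conv(xa), xb), dim=1)

class FasterNetBlock(nn.Module):
    """PConv followed by two pointwise convolutions and a residual connection."""
    def __init__(self, channels, expansion=2):
        super().__init__()
        hidden = channels * expansion
        self.body = nn.Sequential(
            PConv(channels),
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, 1, bias=False),
        )

    def forward(self, x):
        return x + self.body(x)
```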
In this study, CoordConv [26] was incorporated into each inspection head in the Head, as illustrated in Figure 10. Compared to traditional convolution, CoordConv simply appends two additional channels to the input feature map, one representing the x-coordinate and the other the y-coordinate, leaving everything else unchanged. Traditional convolution possesses three characteristics: few parameters, computational efficiency, and translational invariance. CoordConv inherits only the first two, allowing the network to keep or discard translational invariance as it learns. While this might seem to impair the model’s generalization ability, dedicating a portion of the network’s capacity to modeling non-translational invariance can actually enhance generalization.
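A hedged sketch of such a layer, appending normalized x- and y-coordinate channels before an ordinary convolution, is shown below; it illustrates the idea of [26] and is not the authors’ implementation:

```python
import torch
import torch.nn as nn

class CoordConv(nn.Module):
    """Convolution preceded by concatenation of x- and y-coordinate channels,
    so the network can learn position-aware features."""
    def __init__(self, in_ch, out_ch, kernel_size=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):
        b, _, h, w = x.shape
        # normalized coordinate grids in [-1, 1], broadcast to the batch
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat((x, xs, ys), dim=1))
```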

3.2. Adaptive Sample Weighting Method: Focal-IoU Loss

In object inspection, the Intersection over Union (IoU) value represents the ratio of the intersection and union between the predicted box A and the ground truth box B, as follows:
$IoU = \frac{|A \cap B|}{|A \cup B|}$
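As a reference point for the loss variants below, the IoU of paired axis-aligned boxes can be computed as in this sketch (boxes in (x1, y1, x2, y2) format; the small epsilon guards against division by zero):

```python
import torch

def box_iou(a, b, eps=1e-7):
    """IoU for paired boxes a, b of shape (N, 4) in (x1, y1, x2, y2) format."""
    ix1, iy1 = torch.max(a[:, 0], b[:, 0]), torch.max(a[:, 1], b[:, 1])
    ix2, iy2 = torch.min(a[:, 2], b[:, 2]), torch.min(a[:, 3], b[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)  # intersection area
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a + area_b - inter + eps)       # inter / union
```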
The IoU loss function [27], widely used in object inspection tasks, directly optimizes the IoU value. However, it may not precisely reflect the overlap between the predicted box and the ground truth box. Therefore, several improved versions have been proposed, including the GIoU [28], DIoU, and CIoU [29] loss functions, as depicted in Figure 11.
The definition of GIoU is as follows:
$L_{GIoU} = 1 - IoU + \frac{|C \setminus (A \cup B)|}{|C|}$
where | C | represents the area of the minimum convex bounding box containing both A and B. The disadvantage of GIoU is that it tends to produce a larger bounding box when the two boxes are far apart, and it has a slow convergence speed.
Addressing the drawbacks of GIoU, DIoU offers improvements. Its equation is as follows:
$L_{DIoU} = 1 - IoU + \frac{\rho^2(O_A, O_B)}{d^2}$
In the equation, ρ ( O A , O B ) represents the Euclidean distance between the center points of boxes A and B, while d is the diagonal distance of the minimum enclosing box around them.
CIoU, based on DIoU, takes into account both the center point distance and aspect ratio. It is defined as:
$L_{CIoU} = 1 - IoU + \frac{\rho^2(O_A, O_B)}{d^2} + \alpha \nu$
where α is a weighting function and ν is used to measure the consistency of aspect ratios, defined as follows:
$\alpha = \frac{\nu}{(1 - IoU) + \nu}$
$\nu = \frac{4}{\pi^2} \left( \arctan\frac{w_B}{h_B} - \arctan\frac{w_A}{h_A} \right)^2$
Here, $w_A$ and $h_A$ represent the width and height of predicted box A, and $w_B$ and $h_B$ the width and height of ground truth box B.
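The three variants can be written compactly as in the sketch below, which follows the equations above for paired predicted and ground truth boxes; it is an illustration, not the authors’ exact code:

```python
import math
import torch

def iou_family_loss(a, b, kind="ciou", eps=1e-7):
    """GIoU / DIoU / CIoU losses for paired boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = torch.max(a[:, 0], b[:, 0]), torch.max(a[:, 1], b[:, 1])
    ix2, iy2 = torch.min(a[:, 2], b[:, 2]), torch.min(a[:, 3], b[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    union = area_a + area_b - inter + eps
    iou = inter / union
    # width/height of the smallest enclosing box C of A and B
    cw = torch.max(a[:, 2], b[:, 2]) - torch.min(a[:, 0], b[:, 0])
    ch = torch.max(a[:, 3], b[:, 3]) - torch.min(a[:, 1], b[:, 1])
    if kind == "giou":
        c_area = cw * ch + eps
        return 1 - iou + (c_area - union) / c_area
    # squared center distance over squared diagonal of C
    rho2 = ((a[:, 0] + a[:, 2]) - (b[:, 0] + b[:, 2])) ** 2 / 4 \
         + ((a[:, 1] + a[:, 3]) - (b[:, 1] + b[:, 3])) ** 2 / 4
    d2 = cw ** 2 + ch ** 2 + eps
    if kind == "diou":
        return 1 - iou + rho2 / d2
    # CIoU: add the aspect-ratio consistency term alpha * v
    wa, ha = a[:, 2] - a[:, 0], (a[:, 3] - a[:, 1]).clamp(eps)
    wb, hb = b[:, 2] - b[:, 0], (b[:, 3] - b[:, 1]).clamp(eps)
    v = (4 / math.pi ** 2) * (torch.atan(wb / hb) - torch.atan(wa / ha)) ** 2
    alpha = v / ((1 - iou) + v + eps)
    return 1 - iou + rho2 / d2 + alpha * v
```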
However, the vast majority of plastic gear products in daily production are qualified, resulting in a scarcity of defective samples. Additionally, there exists a long-tail effect in the distribution of different types of defect samples, leading to severe class imbalance in the sample distribution, which significantly affects model convergence. To address this issue, this study incorporates the idea of focal loss [30] into the IoU loss function. This involves adapting the contribution of samples to the loss based on their prediction accuracy, aiming to enhance the loss of hard samples and facilitate the exploration of difficult samples. However, the specific calculation formula is different from the existing focal loss methods.
The specific approach is as follows: we introduce an adjustment factor γ for sample difficulty into GIoU, DIoU, and CIoU, respectively, to reduce the impact of easy-to-classify samples. The equations are as follows:
$L_{Focal\text{-}GIoU} = (1 - IoU)^{\gamma} \, L_{GIoU}$
$L_{Focal\text{-}DIoU} = (1 - IoU)^{\gamma} \, L_{DIoU}$
$L_{Focal\text{-}CIoU} = (1 - IoU)^{\gamma} \, L_{CIoU}$
Here, a higher IoU indicates an easier sample, while a lower IoU indicates a more challenging one. When $\gamma > 0$, samples with higher IoU incur a smaller loss, so the weight of easy-to-inspect samples decreases while the weight of hard-to-inspect samples increases. This acts as a weighting mechanism, assigning a larger loss to more challenging targets and thereby improving the regression accuracy of difficult samples.
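Given the two sketches above, the adaptive weighting reduces to a single scaling step; γ = 0.5 here follows the value used in the comparison of Table 6:

```python
def focal_iou_loss(a, b, kind="giou", gamma=0.5, eps=1e-7):
    """Focal-IoU loss: scale the base IoU-family loss by (1 - IoU)^gamma so that
    easy samples (high IoU) contribute less and hard samples contribute more."""
    iou = box_iou(a, b)                       # sketch from earlier in this section
    base = iou_family_loss(a, b, kind=kind)   # L_GIoU, L_DIoU, or L_CIoU
    return ((1 - iou).clamp(min=eps) ** gamma) * base
```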

4. Experimental Results and Discussion

4.1. Experimental Condition

This study builds a deep learning framework based on PyTorch 2.0 on a Windows 10 system. The GPU is an NVIDIA GeForce RTX 4090 with CUDA 11.8 and 24 GB of memory; the CPU is an Intel Core i9-13900K. The hardware parameters of the visual inspection system are shown in Table 1.
A dataset of plastic gear surface defects was established, comprising 922 end-face images and 518 tooth surface images, covering 16 defect types with a total of 2881 defect instances. The dataset was split into training, validation, and test sets in an 8:1:1 ratio. The number of instances of each defect in the training set is shown in Figure 12. The original image resolution is 2048 × 2448.
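An 8:1:1 split can be reproduced along the following lines; the fixed seed and path-based shuffling are assumptions, since the paper does not describe its exact splitting procedure:

```python
import random

def split_dataset(image_paths, seed=0):
    """Shuffle image paths and split them 8:1:1 into train/val/test subsets."""
    rng = random.Random(seed)  # fixed seed for a reproducible split (assumption)
    paths = list(image_paths)
    rng.shuffle(paths)
    n_train = int(0.8 * len(paths))
    n_val = int(0.1 * len(paths))
    return (paths[:n_train],                    # training set
            paths[n_train:n_train + n_val],     # validation set
            paths[n_train + n_val:])            # test set
```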

4.2. Evaluation Indicators

We evaluated the performance of the model using recall (R), precision (P), average precision (AP), mean average precision (mAP), F1-score, and inference speed.
The equation for calculating the recall rate R is as follows:
$R = \frac{TP}{TP + FN}$
In the equation, TP, TN, FN, and FP represent the number of samples with true positive, true negative, false negative, and false positive, respectively.
The equation for calculating precision P is as follows:
$P = \frac{TP}{TP + FP}$
The average precision (AP) is the area under the P–R curve (Precision–Recall curve), which comprehensively reflects the model’s performance across different recall rates. AP ranges from 0 to 1, with higher values indicating better performance in inspecting target instances. The calculation equation is as follows:
$AP = \int_0^1 p(r) \, dr$
In practice, AP is usually computed by a discrete approximation:
$AP = \sum_{n=1}^{N} P(n) \, \Delta R(n)$
In the equation, N is the total number of sample points, n is the index of each sample point, and
$\Delta R(n) = R(n) - R(n-1)$
Mean Average Precision (mAP) is the average AP value of all categories, calculated as follows:
$mAP = \frac{1}{k} \sum_{i=1}^{k} AP_i$
In the equation, k is the number of categories.
mAP:0.5 refers to the mAP calculated using an IoU threshold of 0.5 in object inspection tasks.
mAP:0.5:0.95 refers to the mAP averaged over IoU thresholds from 0.5 to 0.95 (in steps of 0.05).
F1-score is the harmonic mean of Precision and Recall, used to comprehensively assess the accuracy and recall capabilities of an algorithm. The calculation equation is as follows:
$F1 = \frac{2PR}{P + R}$
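These metrics translate directly into code; the sketch below implements the discrete AP sum, mAP, and F1-score, assuming the recall values are sorted in ascending order with matching precision values:

```python
def average_precision(recalls, precisions):
    """AP as the discrete sum of P(n) * (R(n) - R(n-1)) over the P-R curve."""
    ap, r_prev = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += p * (r - r_prev)
        r_prev = r
    return ap

def mean_average_precision(ap_per_class):
    """mAP: the mean of the per-category AP values."""
    return sum(ap_per_class) / len(ap_per_class)

def f1_score(p, r, eps=1e-7):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r + eps)
```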

4.3. PGD-net Inspection Performance Experiment

To evaluate the inspection performance of PGD-net, this study compared it with current mainstream object inspection models; the results are shown in Table 2. After training for 300 epochs, PGD-net performed best on metrics such as mAP:0.5, Precision, Recall, False Positive Rate (FPR), and False Negative Rate (FNR). It achieved an mAP:0.5 of 95.6%, which is 2.5% higher than the second-ranked YOLOv5 model. The FPR and FNR of PGD-net were also the lowest among all models, at 0.2% and 6.7%, respectively, which is crucial for ensuring product quality. Besides its excellent inspection performance, PGD-net also showed a decent training speed of 21.4 ms/piece and an inference speed of 44.2 ms/piece. All inference was conducted on the CPU; with GPU inference, the average speed of PGD-net reaches 4.3 ms/piece.
Table 2 reflects the overall inspection performance of each model for all defect categories.
Before further analysis, to provide readers with a more intuitive understanding of the 16 types of surface defects inspected by the models, we first present images of the 16 defects in Table 3 and provide explanations.
In order to reflect the individual inspection capability of each model for each defect, we compiled the inspection results of the various models for the 16 types of surface defects on plastic gears, using mAP:0.5 as the metric, and summarized them in Table 4. Analysis of Table 4 reveals that defects such as Broken, Protrusion, Missing, Overflow, Perforation, Reverse, Short Shot, and White are relatively easy to inspect, with mAP:0.5 values for models like Faster RCNN, FCOS, and others mostly exceeding 95% and never falling below 85%. Conversely, defects like Damage, Dark Spot, Dirt, Flash, and Flow are more challenging to inspect. Potential reasons for this discrepancy include the smaller number of samples for these defects, less distinct features that make manual annotation errors during dataset creation more likely, and lower robustness to disturbances. PGD-net shows significant improvements on these difficult defects, with mAP:0.5 values of 89.3% (Dark Spot), 95.3% (Dirt), 90.1% (Flash), and 89.4% (Flow); for Damage, however, the mAP:0.5 is only 71.8%. In contrast, inspection performance is excellent for defects like Broken, Protrusion, Burn, Missing, Overflow, Perforation, Reverse, Short Shot, Void, and White, with mAP:0.5 values of at least 99.1%.
In addition, this article used 5-fold cross-validation to evaluate the generalization ability of PGD-net. The mAP values for all categories in the five experiments were 95.6%, 95.1%, 95.3%, 96.0%, and 95.5%, respectively, all above 95.0%.
Table 5 further compares the inspection performance of different models for various defects, showcasing the inspection results for several case scenarios.
Figure 13 presents the P–R Curve and F1–Confidence Curve of PGD-net for the 16 defect types. The P–R Curve illustrates the relationship between precision and recall at different thresholds, while the F1-score represents the harmonic mean of precision and recall, ranging from 0 to 1, with 1 being the best and 0 being the worst.
The tooth surface images of plastic gears are captured by a side-view camera. Since the entire tooth surface cannot be captured in a single image, multiple side-view images must be taken. This raises the question of whether the proposed PGD-net model can inspect the same defect at different rotation angles, given that the scale, position, and shape of the same defect may vary with the angle. To verify this capability, side-view images were captured at different angles and inspected. The inspection results are shown in Figure 14 and Figure 15. Figure 14 depicts images captured at 30-degree intervals as the turntable rotates counterclockwise, yielding six images containing the Overflow defect. Figure 15 likewise shows images captured at 30-degree intervals, yielding three images containing the Dark Spot defect. It can be observed that PGD-net accurately inspects the same defect at different angles.

4.4. Focal-IoU Loss Inspection Performance Experiment

To verify the positive impact of the proposed sample weight optimization method Focal-IoU loss on inspection performance, it is compared with GIoU loss, DIoU loss, and CIoU loss in experiments. The experimental results are shown in Table 6. Among them, Focal-GIoU achieves the highest mAP:0.5, reaching 95.6%, which is a 0.5% improvement compared to GIoU loss. Focal-DIoU achieves an mAP:0.5 of 95.5%, a 0.6% improvement compared to DIoU loss. Focal-CIoU also achieves an mAP:0.5 of 95.5%, a 0.8% improvement compared to CIoU loss. The experiment demonstrates that Focal-IoU loss exhibits excellent sample balancing performance.
As mentioned earlier, introducing focal weights into the IoU loss function is intended to improve the inspection accuracy of difficult samples. Several defects with previously poor inspection performance, namely Damage, Dark Spot, Dirt, Flash, and Flow, were selected as difficult samples for comparison. The results are shown in Figure 16.
Comparing Focal-GIoU loss, Focal-DIoU loss, and Focal-CIoU loss with GIoU loss, DIoU loss, and CIoU loss, respectively, reveals a significant overall improvement in inspection accuracy for several difficult defect samples. Only Focal-GIoU loss shows a decrease in inspection accuracy for the Dirt defect by 1.7%, and Focal-DIoU loss shows a decrease of 1.4% for the Flash defect. However, improvements are observed for the Damage, Dark Spot, and Flow defects. Focal-DIoU loss exhibits the highest increase in inspection accuracy for the Damage defect, with an increase of 6.5%.

4.5. Ablation Experiment

To validate the effectiveness of each module proposed in PGD-net, we conducted ablation experiments. Table 7 presents the results of these ablation experiments for each module.
Compared to Model A, Model E shows an improvement of 2.5% in mAP:0.5. With the addition of CoordConv in Model C and the incorporation of the Focal-IoU Loss module in Model D, the inference speed decreases slightly compared to Model A. However, the inference speed improves after adding the C3-FasterNetBlock module in Model E.
This suggests that the modules contribute positively to the overall performance of PGD-net, with some trade-offs in inference speed depending on the specific module.

5. Conclusions

This paper proposes an intelligent inspection method for comprehensive surface defects on plastic gears, termed PGD-net. In the network architecture, a sample-weight adaptive optimization method called Focal-IoU loss is designed, which alleviates the long-tail effect of defect samples on plastic gears, balances sample distribution, and enhances the inspection capability of difficult defect categories. Among these, the Focal-GIoU loss achieves the best performance, with an mAP:0.5 of 95.6%. Moreover, replacing the ordinary convolution layers in the C3 module with FasterNet Blocks accelerates the model’s inference speed. Additionally, incorporating CoordConv in each inspection head enhances the model’s generalization ability.
Furthermore, a dataset of comprehensive surface defects on plastic gears is constructed, covering defects on the upper surface (ejector pin point surface), lower surface (point gate surface), and side surface (tooth surface), totaling 16 categories of surface defects. PGD-net achieves a comprehensive mAP:0.5 of 95.6% for the 16 types of plastic gear surface defects, surpassing the other models by at least 2.5%. The false positive rate (FPR) is 0.2%, and the false negative rate (FNR) is 6.7%. Notably, for defects such as Broken, Protrusion, Burn, Missing, Overflow, Perforation, Reverse, Short Shot, Void, and White, the mAP:0.5 is consistently at least 99.1%.
Building upon the PGD-net algorithm, this paper develops the Gear Online Inspection (GOI) system, which is deployed in plastic gear production facilities. The system features automatic feeding, automatic acquisition of gear end-face and full-tooth-face images, automatic inspection, automatic sorting and unloading, and anomaly alarm functions. It has been successfully applied in plastic gear production lines, inspecting over 60,000 gears daily.
While the constructed dataset encompasses 16 types of plastic gear surface defects, there might still be some defect types not included. Moreover, for certain complex defects, more instance images are required to expand the dataset and achieve better inspection performance. In future work, we will explore weakly supervised algorithms to inspect abnormal defects by learning from normal plastic gear sample images.

Author Contributions

Conceptualization, Z.S. and Y.F.; methodology, Z.S. and Y.F.; software, Y.F.; validation, Y.F.; formal analysis, Y.F.; investigation, Y.F.; resources, Z.S. and H.S.; data curation, Y.F.; writing—original draft preparation, Y.F.; writing—review and editing, Z.S. and Y.F.; visualization, Y.F.; supervision, Z.S. and H.S.; project administration, Z.S. and H.S.; funding acquisition, Z.S. and H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation Major Scientific Research Instrument Development Project of China under Grant 52227809.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shi, Z.Y.; Fang, Y.M.; Wang, X.Y. Research progress of gear machine vision inspection instrument and technology. Laser Optoelectron. Prog. 2022, 59, 13. [Google Scholar]
  2. Tang, D.; Zhao, P.; Shen, Y.; Zhou, H.; Xie, J.; Fu, J. Detecting shrinkage voids in plastic gears using magnetic levitation. Polym. Test. 2020, 91, 106820. [Google Scholar] [CrossRef]
  3. Saini, P.; Anand, R. Identification of defects in plastic gears using image processing and computer vision: A review. Int. J. Eng. Res. 2014, 3, 94–99. [Google Scholar] [CrossRef]
  4. Mane, U.; Mahajan, A.; Kargutkar, E.; Dhuri, K. Detection of defects in plastic gears using image processing. Int. J. Innov. Sci. Res. Technol. 2017, 2, 169–171. [Google Scholar]
  5. Chen, J.Z.; Wang, G.T.; Chen, J.Q.; Yin, X.L. Automatic inspection system of small plastic gears based on machine vision. Key Eng. Mater. 2012, 522, 628–633. [Google Scholar] [CrossRef]
  6. Gao, Y.; Li, X.; Wang, X.V.; Wang, L.; Gao, L. A Review on Recent Advances in Vision-based Defect Recognition towards Industrial Intelligence. J. Manuf. Syst. 2022, 62, 753–766. [Google Scholar] [CrossRef]
  7. Chen, Y.; Ding, Y.; Zhao, F.; Zhang, E.; Wu, Z.; Shao, L. Surface defect detection methods for industrial products: A review. Appl. Sci. 2021, 11, 7657. [Google Scholar] [CrossRef]
  8. Bhatt, P.M.; Malhan, R.K.; Rajendran, P.; Shah, B.C.; Thakar, S.; Yoon, Y.J.; Gupta, S.K. Image-based surface defect detection using deep learning: A review. J. Comput. Inf. Sci. Eng. 2021, 21, 040801. [Google Scholar] [CrossRef]
  9. Zhong, X.; Zhu, J.; Liu, W.; Hu, C.; Deng, Y.; Wu, Z. An overview of image generation of industrial surface defects. Sensors 2023, 23, 8160. [Google Scholar] [CrossRef]
  10. Cui, L.; Jiang, X.; Xu, M.; Li, W.; Lv, P.; Zhou, B. SDDNet: A fast and accurate network for surface defect detection. IEEE Trans. Instrum. Meas. 2021, 70, 1–13. [Google Scholar] [CrossRef]
  11. Lupea, I.; Lupea, M. Detecting helical gearbox defects from raw vibration signal using convolutional neural networks. Sensors 2023, 23, 8769. [Google Scholar] [CrossRef] [PubMed]
  12. Idzik, T.; Veres, M.; Tarry, C.; Moussa, M. A Real-Time Inspection System for Industrial Helical Gears. Sensors 2023, 23, 8541. [Google Scholar] [CrossRef] [PubMed]
  13. Mavi, A.; Kaur, M. Identify defects in gears using digital image processing. Int. J. Eng. Res. Dev. 2012, 1, 49–55. [Google Scholar]
  14. Sun, T.; Li, Z.; Xiao, X.; Guo, Z.; Ning, W.; Ding, T. Cascaded detection method for surface defects of lead frame based on high-resolution detection images. J. Manuf. Syst. 2024, 72, 180–195. [Google Scholar] [CrossRef]
  15. He, Y.; Song, K.; Meng, Q.; Yan, Y. An end-to-end steel surface defect detection approach via fusing multiple hierarchical features. IEEE Trans. Instrum. Meas. 2019, 69, 1493–1504. [Google Scholar] [CrossRef]
  16. Zhang, G.S.; Ge, G.Y.; Zhu, R.H.; Sun, Q. Gear defect detection based on the improved yolov3 network. Laser Optoelectron. Prog. 2020, 57, 153–161. [Google Scholar]
  17. Kamal, I.M.; Sutrisnowati, R.A.; Bae, H.; Lim, T. Gear Classification for Defect Detection in Vision Inspection System Using Deep Convolutional Neural Networks. ICIC Express Lett. 2018, 9, 1279–1286. [Google Scholar]
  18. Xi, D.; Qin, Y.; Wang, S. YDRSNet: An integrated Yolov5-Deeplabv3+ real-time segmentation network for gear pitting measurement. J. Intell. Manuf. 2023, 34, 1585–1599. [Google Scholar] [CrossRef]
  19. Xiao, M.; Wang, W.; Shen, X.; Zhu, Y.; Bartos, P.; Yiliyasi, Y. Research on defect detection method of powder metallurgy gear based on machine vision. Mach. Vis. Appl. 2021, 32, 51. [Google Scholar] [CrossRef]
  20. Sun, J.; Wang, P.; Luo, Y.-K.; Li, W. Surface defects detection based on adaptive multiscale image collection and convolutional neural networks. IEEE Trans. Instrum. Meas. 2019, 68, 4787–4797. [Google Scholar] [CrossRef]
  21. Feng, W.; Mao, Z.; Yang, Y.; Ma, H.; Zhao, K.; Qi, C.; Hao, C.; Liu, Z.; Xie, H.; Liu, S. Online defect detection method and system based on similarity of the temperature field in the melt pool. Addit. Manuf. 2022, 54, 102760. [Google Scholar] [CrossRef]
  22. Wu, W.; Wang, X.; Huang, G.; Xu, D. Automatic gear sorting system based on monocular vision. Digit. Commun. Netw. 2015, 1, 284–291. [Google Scholar] [CrossRef]
  23. Zhou, J.Y.; Shi, Z.Y.; Nan, H.X.; Li, R.; Tong, A.J. Rapid sorting and inspecting system for plastic gears in production site. Opt. Precis. Eng. 2020, 28, 2017–2026. [Google Scholar] [CrossRef]
  24. Haleem, N.; Bustreo, M.; Del Bue, A. A computer vision based online quality control system for textile yarns. Comput. Ind. 2021, 133, 103550. [Google Scholar] [CrossRef]
  25. Chen, J.; Kao, S.H.; He, H.; Zhuo, W.; Wen, S.; Lee, C.H.; Chan, S.H.G. Run, Don’t walk: Chasing higher FLOPS for faster neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 12021–12031. [Google Scholar]
  26. Liu, R.; Lehman, J.; Molino, P.; Petroski Such, F.; Frank, E.; Sergeev, A.; Yosinski, J. An intriguing failing of convolutional neural networks and the coordconv solution. Adv. Neural Inf. Process. Syst. 2018, 31, 9605–9616. [Google Scholar]
  27. Yu, J.; Jiang, Y.; Wang, Z.; Cao, Z.; Huang, T. Unitbox: An advanced object detection network. In Proceedings of the 24th ACM International Conference on Multimedia, Amsterdam, The Netherlands, 15–19 October 2016; pp. 516–520. [Google Scholar]
  28. Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2019, Long Beach, CA, USA, 15–20 June 2019; pp. 658–666. [Google Scholar] [CrossRef]
  29. Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; Ren, D. Distance-IoU loss: Faster and better learning for bounding box regression. Proc. AAAI Conf. Artif. Intell. 2020, 34, 12993–13000. [Google Scholar] [CrossRef]
  30. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
Figure 1. Partial surface defects of plastic gears. The green box marks the location of the defect.
Figure 2. GOI system on plastic gear production line.
Figure 3. Composition of GOI system.
Figure 4. Composition of the dynamic inspection structure. The red arrow indicates the direction of motion of the gears.
Figure 5. Full surface image acquisition mechanism.
Figure 6. Composition of automatic control system.
Figure 7. Automatic inspection flowchart.
Figure 8. PGD-net architecture.
Figure 9. FasterNet Block.
Figure 10. CoordConv Layer.
Figure 11. The geometric relationship between IoU boxes.
Figure 12. Statistics on the number of 16 defect samples in the training set.
Figure 13. Inspection results of PGD-net for 16 types of defects. (a) Precision–Recall Curve; (b) F1–Confidence Curve.
Figure 14. The inspection results of the same tooth surface overflow defect at different counterclockwise rotation angles. (a) 0 degrees; (b) 30 degrees; (c) 60 degrees; (d) 90 degrees; (e) 120 degrees; (f) 150 degrees.
Figure 15. The inspection results of the same tooth surface dark spot defect at different counterclockwise rotation angles. (a) 0 degrees; (b) 30 degrees; (c) 60 degrees.
Figure 16. The inspection ability of Focal-IoU loss for several difficult samples. (a) Focal-GIoU Loss vs. GIoU Loss; (b) Focal-DIoU Loss vs. DIoU Loss; (c) Focal-CIoU Loss vs. CIoU Loss.
Table 1. Hardware parameters of visual inspection system.

Hardware Name | Parameters
Industrial camera | Color, 3.45 μm pixel size, 2448 × 2048 resolution
Lens | Magnification 0.22×, optical distortion ≤ 0.001%
Light source | Diffuse light source
Table 2. Comparison of inspection results between PGD-net and other methods. Bold indicates the best value.

Model | Backbone | Epoch | mAP:0.5 (%) | Precision (%) | Recall (%) | FPR (%) | FNR (%) | CPU Infer Time (ms/piece) | Training Time (ms/piece)
Faster RCNN | Resnet-18 | 300 | 77.8 | 67.1 | 75.8 | 3.0 | 24.2 | 67.7 | 29.7
FCOS | Resnet-50 | 300 | 73.9 | 86.5 | 58.5 | 2.8 | 41.5 | 43.6 | 37.5
ATSS | Resnet-18 | 300 | 83.2 | 88.4 | 67.9 | 2.6 | 32.1 | 58.7 | 26.4
YOLOv3 | Darknet-53 | 300 | 87.4 | 84.5 | 83.6 | 1.8 | 16.3 | 40.4 | 19.5
YOLOv5 | Darknet-53 | 300 | 93.1 | 91.3 | 88.8 | 0.5 | 8.3 | 50.1 | 21.3
YOLOv8 | Darknet-53 | 300 | 88.8 | 89.1 | 84.9 | 1.2 | 14.0 | 52.4 | 20.5
SSD | Vgg-16 | 300 | 88.3 | 87.1 | 80.9 | 1.9 | 19.1 | 45.1 | 19.5
PGD-net | Darknet-53 | 300 | 95.6 | 95.0 | 92.8 | 0.2 | 6.7 | 44.2 | 21.4
Table 3. The 16 types of surface defects on plastic gears (defect images omitted here).

Defect Type | Remarks
Broken | Tooth fracture or severe damage
Protrusion | Generally occurring at the point gate
Burn | Color change and damage caused by high temperature
Damage | Tooth surface damage caused by collision
Dark Spot | Dark impurities on the surface
Dirt | Oil stains and other dirt
Flash | Sharp protrusions at the tooth tip, root, profile, or center hole
Flow | Flow traces of the formed material remaining on the surface
Hair | Hair adhering to the surface
Missing | Caused by the missing installation of a small tooth during secondary molding of a dual gear
Overflow | Plastic does not completely fill the mold cavity, resulting in excess adhesive material
Perforation | Perforation defects that generally occur at the point gate
Reverse | Caused by the reverse installation of a small tooth during secondary molding of a dual gear
Short Shot | The injection molding material cannot completely fill the entire mold cavity
Void | Vacuum bubbles generated by rapid freezing of material flow and obstruction of contraction
White | Imprints generated by the ejection rod
Table 4. Comparison of mAP:0.5 (%) results for 16 defect types across different models.

Defect Type | Faster RCNN | FCOS | ATSS | YOLOv3 | YOLOv5 | YOLOv8 | SSD | PGD-net
Broken | 85.3 | 85.8 | 95.9 | 100 | 99.0 | 99.3 | 99.4 | 99.5
Protrusion | 100 | 100 | 100 | 100 | 99.5 | 99.5 | 100 | 99.5
Burn | 64.9 | 50.0 | 78.4 | 93.6 | 98.5 | 97.4 | 87.7 | 99.5
Damage | 29.8 | 39.9 | 41.8 | 53.7 | 64.1 | 49.7 | 53.5 | 71.8
Dark Spot | 54.2 | 72.5 | 61.5 | 69.2 | 84.5 | 74.7 | 68.7 | 89.3
Dirt | 50.6 | 48.3 | 70.6 | 60.9 | 91.9 | 69.0 | 88.3 | 95.3
Flash | 57.3 | 26.8 | 48.4 | 54.3 | 79.2 | 62.8 | 58.7 | 90.1
Flow | 30.3 | 28.2 | 59.5 | 84.0 | 82.2 | 83.2 | 77.7 | 89.4
Hair | 77.2 | 78.7 | 88.0 | 97.7 | 93.9 | 93.2 | 93.6 | 97.3
Missing | 100 | 100 | 100 | 100 | 99.5 | 99.5 | 100 | 99.5
Overflow | 92.7 | 91.0 | 95.1 | 93.6 | 97.3 | 97.3 | 94.7 | 99.3
Perforation | 99.9 | 100 | 100 | 99.0 | 99.0 | 99.5 | 99.0 | 99.5
Reverse | 100 | 100 | 100 | 100 | 99.5 | 99.5 | 100 | 99.5
Short Shot | 97.6 | 92.9 | 100 | 99.5 | 99.5 | 99.5 | 100 | 99.5
Void | 87.4 | 75.9 | 92.4 | 92.6 | 98.4 | 97.4 | 91.8 | 99.1
White | 100 | 92.9 | 100 | 100 | 99.5 | 99.5 | 100 | 99.5
Table 5. Case comparisons of defect inspection results by various models for ten cases (Cases 1–10), each showing the original image, the ground truth, and the inspection results of Faster RCNN, YOLOv3, YOLOv5, SSD, and PGD-net. (Note: the colors of the inspection boxes correspond to the labels in Figure 12; the result images are omitted here.)
Table 6. The inspection performance comparison of several loss functions (γ = 0.5). Bold indicates the better value.

Loss Function | mAP:0.5 (%)
GIoU | 95.1
Focal-GIoU | 95.6
DIoU | 94.9
Focal-DIoU | 95.5
CIoU | 94.7
Focal-CIoU | 95.5
Table 7. Ablation experiment. The “√” indicates that the module is included; bold indicates the best value.

Methods | C3-FasterNetBlock | CoordConv | Focal-GIoU | mAP:0.5 (%) | CPU Infer Time (ms/piece)
Model A | — | — | — | 93.1 | 45.8
Model B | √ | — | — | 95.3 | 44.6
Model C | — | √ | — | 95.4 | 46.9
Model D | — | — | √ | 95.6 | 45.9
Model E | √ | √ | √ | 95.6 | 45.6