Article

YOLO-RRL: A Lightweight Algorithm for PCB Surface Defect Detection

School of Mechanical Engineering, Shenyang Jianzhu University, Shenyang 110168, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(17), 7460; https://doi.org/10.3390/app14177460
Submission received: 20 June 2024 / Revised: 16 August 2024 / Accepted: 22 August 2024 / Published: 23 August 2024
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Printed circuit boards present several challenges for defect detection, including small and unevenly distributed targets, high background noise, and a variety of complex defect types. These factors make it difficult for PCB defect detection networks to identify defects accurately. This paper proposes a low-parameter model, YOLO-RRL, based on an improved YOLOv8 architecture. The YOLO-RRL model incorporates four key improvement modules: Robust Feature Downsampling (RFD), Reparameterised Generalised FPN (RepGFPN), Dynamic Upsampler (DySample), and Lightweight Asymmetric Detection Head (LADH-Head). Evaluation across multiple performance metrics demonstrates that YOLO-RRL improves the mean average precision (mAP) by 2.2 percentage points to 95.2%, increases the frame rate (FPS) by 12%, and significantly reduces the number of parameters and the computational complexity, thereby achieving a balance between performance and efficiency. Two further datasets, NEU-DET and APSPC, were employed to evaluate the performance of YOLO-RRL; the results indicate that YOLO-RRL exhibits good adaptability and compares favourably with existing mainstream inspection models. In practical applications, the YOLO-RRL model can significantly improve production quality and reduce production costs while extending the scope of the inspection system to a wide range of industrial applications.

1. Introduction

The role of printed circuit boards (PCBs) in electronic devices is of paramount importance, as they carry the layout of circuit components and wires, which directly affects the performance and functionality of the device. Various surface defects, such as cracks, scratches, holes, and short circuits, may occur during PCB manufacturing. If these defects are not detected and repaired promptly, they may lead to product failures and risks in use [1]. Consequently, the implementation of real-time and accurate quality inspection of PCBs represents a pivotal strategy for enhancing production efficiency and reducing costs.
Conventional PCB defect detection methodologies rely on either manual visual inspection or rule-based machine vision techniques [2]. These methods are too inflexible to cope with complex and variable production conditions. In recent years, defect detection methods based on deep learning technology have emerged as a significant area of research [3]. The current mainstream detection algorithms can be divided into three categories, distinguished by their technical characteristics: single-stage algorithms, two-stage algorithms, and detection algorithms based on the transformer architecture. The most advanced single-stage models include the YOLO series [4], RetinaNet [5], and EfficientDet [6]. The most advanced two-stage detection models include Mask R-CNN [7], Libra R-CNN [8], and Cascade R-CNN [9], among others.
As industrial automation spreads and deep learning technology develops, unmanned processing is becoming more prevalent across industrial production processes, particularly those related to defect detection. Different target objects and application environments clearly require different algorithms; algorithms must therefore be selected according to the specific needs of each application and modified to better target different tasks. The existing algorithmic improvement strategies are diverse and focus on optimizing various model components. Junjie Li et al. [10] introduced a novel algorithm, DEW-YOLO, which combines deformable convolutional networks (DCNs) with the YOLOv8 model, introduces an explicit visual center (EVC) structure, and uses the Wise-IoU (WIoU) loss function to improve the accuracy of defect detection on steel surfaces. Wen Zhou et al. [11] introduced a tiny-defect-detection YOLO (TDD-YOLO) model, which employs a four-layer ME structure in the backbone network and miniature detection heads in the head network to improve accuracy and generalization performance; the model also utilizes a weighted intersection-over-union (W-IoU) mechanism to re-evaluate the bounding box regression loss and reduce false alarms. Rui Shao et al. [12] introduced a network called TD-Net for detecting tiny defects in industrial products, which addresses the limitations of current image-based defect detection methods in detecting tiny and differently shaped defects. Jinxiong Gao et al. [13] proposed PE-Transformer, a path-enhanced transformer detection scheme that explores the semantic details of small underwater targets. Yan Zhang et al. [14] introduced a novel approach, DsP-YOLO, which combines the anchorless YOLOv8 framework with a lightweight, detail-sensitive PAN (DsPAN) to achieve accurate and fast detection of industrial defects. Zhaohui Yuan et al. [15] proposed a lightweight model called LW-YOLO to address the challenge of accurately identifying printed circuit board (PCB) defects; it employs a bidirectional feature pyramid network, a partial convolution module, and a minimum point distance intersection-over-union loss function. Xinting Liao et al. [16] enhanced the YOLOv4 model with a modified backbone network and activation function, proposing YOLOv4-MN3, which reduces the parameter space and the multiply-accumulate operations for greater efficiency. Minghao Yuan et al. [17] proposed the YOLO-HMC network, which enhances defect detection using HorNet, MCBAM, and CARAFE and addresses the challenge of accurately identifying small defects in the compact layout of printed circuit boards (PCBs).
Nevertheless, the existing methodologies continue to encounter certain limitations in the detection of surface defects on printed circuit boards. First, the datasets used to train deep learning models for PCB defect detection are often scarce and uneven, which makes it challenging to develop effective models [18]. Second, the images of printed circuit boards (PCBs) are frequently characterized by complex backgrounds and multiple types of defects, increasing the detection difficulty. Furthermore, the deep learning model has a high demand for computational resources, which represents a limitation for real-time inspection applications in real production. In the context of industrial applications of PCB defect detection models, there is a clear requirement for model standards that combine speed and accuracy and are easy to deploy. In light of the aforementioned issues, an enhanced model, designated YOLO-RRL, has been developed to address the pivotal challenges in PCB surface defect detection. The research presented here focuses on the following aspects:
  • The lightweight model design seeks to enhance the original redundant structures and modules within the network, thereby increasing their efficiency in resource-constrained environments. This is achieved while maintaining or even improving the performance of the model.
  • A multi-scale feature extraction mechanism has been introduced to enhance the model’s capability to deal with complex backgrounds and multiple defect types.
  • Data augmentation extends the defective-sample dataset to improve the performance of the model under data scarcity and imbalance.

2. Algorithm Description

2.1. Baseline YOLOv8

The YOLO family of algorithms has undergone numerous iterations, with YOLOv8 representing the latest version of the YOLO family of target detection models [19]. Several popular target detection models were considered when selecting the base model, including YOLOv4 [20], YOLOv5, and SSD [21]. However, YOLOv8 was ultimately selected primarily because it achieves a favorable balance between speed and accuracy and performs effectively in the presence of complex backgrounds and multi-scale targets. Additionally, the architectural design of YOLOv8 proved to be more aligned with the specific requirements of our application in resource-constrained environments. YOLOv8 represents an iterative enhancement in the YOLO family, integrating state-of-the-art deep learning techniques to achieve superior target detection performance. The backbone feature extraction component of YOLOv8 is based on CSPDarknet, which uses cross-stage partial connections to reduce redundant gradient information, improve gradient flow, and help the model retain gradient information more effectively. The feature fusion part comprises FPN and PAN, which reduce loss during feature delivery and enhance the network’s capacity to perceive contextual information. YOLOv8 employs an enhanced head architecture with an anchorless design and a decoupled head for classification and localization tasks. BCE Loss [22] is employed for classification, while DFL Loss + CIoU Loss is utilized for regression.
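As a rough illustration of the CIoU regression term mentioned above, the following minimal sketch computes it for axis-aligned boxes. This is a generic reimplementation of the standard Complete-IoU formulation, not the code used in YOLOv8 or in this paper.

```python
import math

def ciou_loss(box1, box2):
    """Complete-IoU loss between two boxes given as (x1, y1, x2, y2).

    CIoU = 1 - IoU + centre-distance penalty + aspect-ratio penalty.
    """
    # Intersection area
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
    union = w1 * h1 + w2 * h2 - inter
    iou = inter / union

    # Squared centre distance, normalised by the squared diagonal of the
    # smallest box enclosing both boxes.
    cx1, cy1 = (box1[0] + box1[2]) / 2, (box1[1] + box1[3]) / 2
    cx2, cy2 = (box2[0] + box2[2]) / 2, (box2[1] + box2[3]) / 2
    ex1, ey1 = min(box1[0], box2[0]), min(box1[1], box2[1])
    ex2, ey2 = max(box1[2], box2[2]), max(box1[3], box2[3])
    rho2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2

    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan(w2 / h2) - math.atan(w1 / h1)) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)

    return 1 - iou + rho2 / c2 + alpha * v
```

For perfectly overlapping boxes the loss is zero; for disjoint boxes the centre-distance term keeps the gradient informative even when the IoU itself is zero, which is the motivation for using CIoU over plain IoU loss.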

2.2. YOLO-RRL Model

The YOLOv8 network model represents a more advanced target detection algorithm. It has been developed based on several improvements in previous generations and exhibits advantages in terms of speed, accuracy, and multitasking support. However, in specific applications, there are still slight shortcomings, such as slower inference, inaccurate positioning, and higher requirements for hardware resources. In particular, when confronted with PCB surface defects with small target sizes, issues such as missed detection and target loss frequently arise. Consequently, this paper proposes the YOLO-RRL model for real-time detection of small target defects in industrial applications for PCB surface defect detection. The design objectives of the YOLO-RRL model are twofold: first, to enhance the model’s efficiency in resource-constrained environments, and second, to augment the model’s capacity to process complex backgrounds. Although the model has been optimized and validated primarily for the detection of surface defects on printed circuit boards, the architectural design and improvement methods are equally applicable to other domains where efficient target detection is required. To make the YOLO-RRL model applicable to other problems, corresponding datasets must be collected and labeled for training and validation in new application areas. The model structure diagram of YOLO-RRL is shown in Figure 1. The specific improvements are as follows:
(1)
In the backbone network component, the downsampling module is replaced with the SRFD module, which is employed to process shallow feature maps, and the DRFD module, which is used to process deep feature maps. The two modules in question optimize the representation capability of shallow and deep features according to their different application phases. This ensures the integrity of the information and features robustness in different phases of the downsampling process.
(2)
In the Neck network component, RepGFPN replaces the entire feature fusion section, and the up-sampling module within it is substituted with the DySample module, yielding a novel feature fusion network (RepGDFPN) as the new Neck layer. The network’s multi-scale feature fusion and computational efficiency result in high performance and flexibility in object detection tasks. The RepGDFPN network efficiently handles targets at different scales through multiple up-sampling and feature fusion operations while maintaining high computational efficiency.
(3)
In the detection head component, LADH-Head is employed as a replacement. This reduces the model’s hardware resource requirements while delivering better detection performance.

3. Structure of Key Improvement Components

3.1. RFD Module

The conventional approach to downsampling in detection neural networks typically involves maximum pooling, average pooling, and strided convolution, which reduce the dimensionality of the feature map. Several improved downsampling methods also exist, including Local Importance-based Pooling (LIP), Antialiased Convolutional Neural Networks (Antialiased CNNs), and SoftPool. These techniques help the network capture essential information more effectively and mitigate the risk of overfitting. Nevertheless, when dealing with smaller targets, they frequently suffer from information loss and from overly localized, fixed feature selection, which can degrade the performance of the model in various respects.
This paper introduces the Robust Feature Downsampling (RFD) module [23], which improves the performance of the model in small target detection. The RFD module incorporates three different downsampling techniques: convolutional downsampling, slicing downsampling, and maximum pooling downsampling. The RFD module overcomes the limitations of the traditional convolutional downsampling by fusing multiple feature maps extracted using different downsampling techniques to create a more robust feature representation. The two variants of the RFD module are the Shallow RFD (SRFD) and the Deep RFD (DRFD), which are designed to handle different stages of the feature extraction process. The general architecture of the RFD module is illustrated in Figure 2. In this illustration, the SRFD module is employed to replace the initial downsampling layer of the original backbone network, while the DRFD module is used to replace the subsequent downsampling layers.
In both SRFD and DRFD structures, the steps of copying the initial feature representation into three copies (X, C, and M) and then downsampling them through three different downsampling techniques, namely convolutional downsampling, slice downsampling, and maximum pooling downsampling, are included. These are ultimately fused to generate consistent and robust feature representations. The SRFD module focuses on processing the shallow feature maps and contains the feature enhancement layer and slice downsampling. The feature enhancement layer employs convolutional operations to extract more detailed information from the input image, thereby improving the representation of the initial feature map. The initial feature map is sliced using a slice downsampling technique to preserve information from the original data. This process serves to maintain the integrity of shallow features and to reduce information loss.
The DRFD module is designed to process deeper feature maps using a range of downsampling methods, including convolutional downsampling, maximum pooling downsampling, and slicing downsampling. This approach is intended to generate more robust representations of deeper features, enhancing the overall performance of the model. The combination of these methods ensures the completeness and detail of the feature maps, thereby preserving their texture and detail information and improving model performance. Furthermore, the RFD method overcomes the limitations of the convolutional downsampling layer by merging multiple feature maps, thus prominently capturing complex details to produce a more robust feature representation. As illustrated in Figure 3, the detailed structure of SRFD and DRFD is depicted, where GConv, DWConvD, CutD, and MaxD designate grouped convolution, depth-separable convolutional downsampling, sliced downsampling, and maximally pooled downsampling, respectively.
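The branch-and-fuse idea behind RFD can be sketched in a few lines of numpy. This is an illustrative simplification, not the paper's implementation: the learned stride-2 convolution branch is stood in for by a fixed 2x2 averaging kernel, and the learned 1x1 projection that would normally fuse the slice branch's channels is replaced by a simple channel-group mean.

```python
import numpy as np

def cut_downsample(x):
    """Slice (space-to-depth) downsampling: rearrange each 2x2 spatial
    block into 4 channels, so no pixel information is discarded.
    x: (C, H, W) with even H, W  ->  (4C, H/2, W/2)."""
    return np.concatenate([x[:, 0::2, 0::2], x[:, 1::2, 0::2],
                           x[:, 0::2, 1::2], x[:, 1::2, 1::2]], axis=0)

def max_downsample(x):
    """2x2 max-pooling downsampling: keeps the strongest activation."""
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).max(axis=(2, 4))

def conv_downsample(x):
    """Stand-in for stride-2 convolutional downsampling (a fixed 2x2
    averaging kernel here, to keep the sketch dependency-free)."""
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def rfd_downsample(x):
    """Fuse the three branches into one robust feature representation.
    A learned 1x1 projection would normally map the slice branch's 4C
    channels back to C; here we average its channel groups instead."""
    cut = cut_downsample(x)
    C = x.shape[0]
    cut_reduced = cut.reshape(4, C, *cut.shape[1:]).mean(axis=0)
    return (cut_reduced + max_downsample(x) + conv_downsample(x)) / 3.0
```

Note that the slice branch alone preserves every input value (it only rearranges them), which is why combining it with pooling branches loses less fine-grained information than pooling alone.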

3.2. RepGDFPN Network

The feature fusion component of the detection algorithm is of significant importance in the detection process, as it improves the detection performance by combining features at different scales and levels. Nevertheless, feature fusion continues to depend on specific network structures and methodologies, while challenges persist in addressing complex scenes and category imbalance issues. The original Neck model exhibits a notable enhancement in multi-scale feature fusion, yet it also exhibits certain drawbacks and shortcomings, including high computational complexity, a limited feature fusion effect, and local feature constraints. The Reparameterised Generalised FPN (RepGFPN) [24] has been designed to enhance the Neck layer by eliminating superfluous operations and leveraging sophisticated network structures. In this paper, we utilize the structure of RepGFPN to optimize the Neck layer. The fundamental concept of the RepGFPN is to achieve enhanced accuracy by employing a configuration where diverse scale feature maps with distinct channel counts are employed in the Neck feature fusion. The primary objectives of the RepGFPN are to enhance the interaction between high-level semantic features and low-level spatial features by efficiently refining and fusing multi-scale feature maps, thereby improving the performance of real-time target detection. RepGFPN provides a sophisticated feature fusion and interaction method that significantly improves target detection performance while optimizing computational efficiency. As illustrated in Figure 4, the structure of RepGFPN is as follows.
Classical up-sampling methods include nearest neighbor interpolation and linear interpolation, which are susceptible to issues such as the loss of details and the introduction of blurring effects. These limitations are readily apparent when high-quality image processing is required. To enhance the efficiency of the network, this paper replaces the upsampling module with the DySample module [25], which ultimately yields the feature fusion part of RepGDFPN. DySample is an ultra-lightweight and effective dynamic upsampler designed to flexibly upsample the input feature maps by dynamically generating sampling offsets. The module adapts the calculation of the offsets to each input, making it highly efficient and adaptable when processing different feature maps. Its principal function is to generate offsets based on the input feature map and then sample the input feature map at those offset positions. Compared with the original model, which employs inverse convolution to achieve upsampling, DySample is a fast, effective, and versatile dynamic upsampler with superior performance and resource utilization. In summary, the module can adaptively process input features through flexible parameter configurations and a dynamic sampling mechanism, rendering it suitable for a variety of high-resolution reconstruction tasks. The fundamental configuration of the DySample module is illustrated in Figure 5.
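The offset-then-sample mechanism can be illustrated with a small numpy sketch. This is a loose analogue, not the DySample implementation: the learned offset-generating layer is stood in for by a fixed random linear map, and bilinear sampling is simplified to nearest-neighbour lookup.

```python
import numpy as np

def dysample_upsample(x, offset_scale=0.25, scale=2, seed=0):
    """Point-sampling upsampler in the spirit of DySample: each output
    pixel samples the input at a dynamically offset location instead of
    through a fixed interpolation kernel.
    x: (C, H, W) -> (C, scale*H, scale*W)."""
    rng = np.random.default_rng(seed)
    C, H, W = x.shape
    Ho, Wo = scale * H, scale * W

    # Base sampling grid: each output pixel maps back to input coords.
    ys, xs = np.meshgrid(np.arange(Ho), np.arange(Wo), indexing="ij")
    gy, gx = ys / scale, xs / scale

    # Content-dependent offsets (a learned layer in the real module;
    # a random linear map over the local feature vector here).
    w = rng.normal(0, 0.1, size=(2, C))
    feat = x[:, (ys // scale).clip(0, H - 1), (xs // scale).clip(0, W - 1)]
    dy, dx = np.tensordot(w, feat, axes=(1, 0)) * offset_scale

    # Sample with nearest-neighbour lookup at the offset positions.
    sy = np.clip(np.round(gy + dy), 0, H - 1).astype(int)
    sx = np.clip(np.round(gx + dx), 0, W - 1).astype(int)
    return x[:, sy, sx]
```

Because the output is gathered directly from input positions, the upsampler adds almost no compute beyond the tiny offset-generating layer, which is the source of DySample's efficiency claim.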

3.3. LADH-Head Module

The detection head represents a pivotal component within the target detection network, exerting a profound influence on the network’s overall performance, accuracy, and efficiency. The design of the detection head can be optimized to enhance the effectiveness of target detection significantly. The detection head of the original model employs an anchorless detection and separation head, which markedly enhances detection accuracy. However, this approach also entails greater complexity and necessitates the allocation of more computational resources in comparison to previous versions. To address the shortcomings of the original model detection head, a new Lightweight Asymmetric Detection Head (LADH-Head) [26] is introduced for head replacement. The network structure of the LADH-Head is illustrated in Figure 6.
The principal advantage of LADH-Head is that it markedly reduces the number of parameters by employing 3 × 3 Depth Separable Convolution (DWConv) instead of the conventional 3 × 3 convolution. This enables the extension of the sensory field, the augmentation of the task parameters of the IoU branch, and the diminution of the misdetections and false positives of the features. DWConv reduces the computational complexity of the computational process by decomposing standard convolutional operations into depth and point-by-point convolution. The method effectively reduces the number of parameters and improves the computational efficiency, making it particularly suitable for resource-limited environments. Concurrently, the point-by-point convolution operation integrates the information of disparate channels while retaining a certain degree of feature extraction capability. Following the replacement of the head component, the detection model demonstrates high detection accuracy while simultaneously reducing the number of model parameters and the required computational resources. This reduction in complexity and resources improves the computational efficiency of the model during training.
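The parameter saving from replacing a standard 3 × 3 convolution with a depthwise separable one is easy to quantify. The sketch below counts weights for both layouts (biases ignored); the channel sizes in the example are illustrative, not taken from the LADH-Head.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored):
    every output channel mixes all input channels over a k x k window."""
    return c_in * c_out * k * k

def dwconv_params(c_in, c_out, k):
    """Depthwise separable convolution: one k x k filter per input
    channel (depthwise), then a 1 x 1 pointwise conv mixing channels."""
    return c_in * k * k + c_in * c_out

# Example: a 3x3 layer mapping 256 -> 256 channels.
standard = conv_params(256, 256, 3)     # 589,824 parameters
separable = dwconv_params(256, 256, 3)  #  67,840 parameters
```

For this example the separable layout needs roughly 8.7× fewer parameters, which is where the head's reduction in model size and computational cost comes from.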

4. Experimental Procedure Design

4.1. Experimental Platform and Parameter Design

In conducting experiments on the model, SGD was selected as the optimizer, with an initial learning rate of 0.01, momentum set to 0.937, and a weight decay factor of 0.0005. The models were trained on the PCB dataset for 200 epochs with a batch size of 16, and mosaic augmentation was disabled for the final 10 epochs. To facilitate the learning process of the model, the initial three epochs are designated as a warm-up phase, during which a momentum of 0.8 is employed. The experimental configuration is presented in Table 1. The aforementioned configuration is employed in all experiments presented in this paper unless otherwise specified.
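The hyperparameters above can be collected into a small configuration sketch. The linear momentum ramp during warm-up is one plausible reading of the schedule (the exact interpolation used in the experiments is not stated), and the dictionary keys are illustrative names, not an actual framework API.

```python
def momentum_schedule(epoch, warmup_epochs=3, warmup_momentum=0.8,
                      momentum=0.937):
    """Ramp SGD momentum linearly from 0.8 to 0.937 over the first
    three warm-up epochs (one possible reading of the schedule above)."""
    if epoch >= warmup_epochs:
        return momentum
    return warmup_momentum + (momentum - warmup_momentum) * epoch / warmup_epochs

# Training configuration as described in Section 4.1 (illustrative keys).
TRAIN_CFG = {
    "optimizer": "SGD",
    "lr0": 0.01,           # initial learning rate
    "momentum": 0.937,
    "weight_decay": 0.0005,
    "epochs": 200,
    "batch_size": 16,
    "warmup_epochs": 3,
    "close_mosaic": 10,    # disable mosaic for the final 10 epochs
}
```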

4.2. Defect Dataset Selection and Enhancement

In this study, the PCB defect dataset (PKU-Market-PCB) [27], released by the Intelligent Robotics Open Lab of Peking University, is employed as the basis for the experimental dataset. The dataset comprises 693 high-resolution images of printed circuit board (PCB) defects, with an average resolution of 2777 by 2138 pixels. The dataset includes a comprehensive range of defect types, with each image labeled in detail. These encompass open circuits, shorts, missing holes, mouse bites, spurs, and spurious copper. The location and type of defects in each image are also labeled in detail, providing a substantial corpus for training and testing purposes. Figure 7 depicts the images of the six defect types. The images are in JPEG format and of high resolution, thus enabling the clear display of the intricate details of the PCB board. The dataset encompasses a diverse range of PCB types, including single-layer, multi-layer, rigid, flexible, and other unique PCB designs.
The augmentation of the dataset to generate a more diverse range of training data enables the model to gain a deeper understanding of and greater resilience to PCB defects in a variety of contexts. This, in turn, has the potential to enhance the performance of the PCB defect detection model significantly and to facilitate its deployment in a wider range of practical application scenarios. The images were enhanced through the use of techniques such as mirroring, brightness adjustment, cropping, noise addition, and rotation. This resulted in an enriched dataset comprising 3465 images, which were divided into three subsets: a training set (70%), a validation set (20%) and a test set (10%). Figure 8 illustrates the number of defects per category in the dataset, both before and after data enhancement. The orange bars represent the number of defects in the original dataset, while the blue bars indicate the number of defects in the enhanced dataset.
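The augmentation and split described above can be sketched as follows. This is a minimal numpy illustration of the listed transforms (mirroring, brightness, noise, rotation restricted to 90° steps here) and the 70/20/10 split, not the pipeline actually used to build the 3465-image dataset.

```python
import numpy as np

def augment(img, rng):
    """Apply a random combination of the augmentations listed above to
    an image array of shape (H, W, 3) with values in [0, 255]."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:                        # horizontal mirror
        out = out[:, ::-1]
    out = out * rng.uniform(0.7, 1.3)             # brightness adjustment
    out = out + rng.normal(0, 5, out.shape)       # additive Gaussian noise
    out = np.rot90(out, k=rng.integers(0, 4))     # rotation (90-degree steps)
    return np.clip(out, 0, 255).astype(np.uint8)

def split_dataset(n, rng):
    """Shuffle n sample indices into a 70/20/10 train/val/test split."""
    idx = rng.permutation(n)
    n_tr, n_va = int(0.7 * n), int(0.2 * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
```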

4.3. Evaluation Indicators

The use of evaluation metrics is a crucial aspect of model training, particularly in the context of PCB defect detection. The choice of evaluation metrics is dependent on the specific application and the desired outcome. Different metrics focus on different aspects of the model, and therefore, the selection of evaluation metrics should be based on the specific needs of the application. To provide a comprehensive assessment of model performance, it is generally considered appropriate to utilize several different evaluation metrics. The metrics of precision and recall are employed in the assessment of model performance. A high degree of precision indicates a reduced likelihood of false alarms, while a high degree of recall indicates a reduced likelihood of missed alarms. The number of floating point operations (GFLOPS) is indicative of the computational requirements and complexity of the model. The number of model parameters and the frames per second (FPS) are two key factors that influence the complexity and memory usage of the model, as well as its real-time processing capability. It is necessary to consider the relative merits of these factors in the context of the specific application scenario. For instance, in the case of resource-constrained devices, a small number of parameters and a high FPS are required. Average Precision (AP) and Mean Average Precision (mAP): mAP is particularly well-suited to multi-category tasks and can provide a comprehensive evaluation of the model’s performance across different categories. The combination of multiple metrics allows for a more comprehensive evaluation of the model’s performance, thereby facilitating the identification of the most suitable solution.
P = TP / (TP + FP)
R = TP / (TP + FN)
AP = ∫₀¹ P dR
mAP = (1/k) Σᵢ₌₁ᵏ APᵢ
where TP is the number of samples correctly predicted as positive cases, FP is the number of samples incorrectly predicted as positive cases, and FN is the number of positive samples incorrectly predicted as negative cases. k is the total number of defect categories, and i is the index of the current category.
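The metrics above translate directly into code. The sketch below is a generic implementation: precision and recall from the counts, AP as the area under the precision-recall curve (with the standard all-point interpolation that makes precision monotonically decreasing), and mAP as the per-class mean.

```python
import numpy as np

def precision_recall(tp, fp, fn):
    """P = TP/(TP+FP) and R = TP/(TP+FN) from raw counts."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(recalls, precisions):
    """AP = integral of P dR: area under the precision-recall curve,
    using all-point interpolation. Inputs are sorted by ascending recall."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([1.0], precisions, [0.0]))
    # Make precision monotonically decreasing, then integrate stepwise.
    p = np.maximum.accumulate(p[::-1])[::-1]
    return float(np.sum((r[1:] - r[:-1]) * p[1:]))

def mean_average_precision(ap_per_class):
    """mAP: the mean of the per-class AP values."""
    return sum(ap_per_class) / len(ap_per_class)
```

For example, a detector that reaches precision 1.0 at recall 0.5 and precision 0.5 at recall 1.0 scores an AP of 0.75 under this interpolation.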

5. Experimental Results Validation Analysis

5.1. Ablation Experiments

The principal objective of the ablation experiments presented in this paper is to evaluate the influence of specific components of the model on its overall performance by modifying them. This makes it possible to ascertain which components contribute most to the model’s performance, to validate the assumptions made in the model design, to ensure that each part is reasonable and necessary, and, by analyzing different configuration options, to identify the optimal combination of modules that balances performance and efficiency. The original YOLOv8 model serves as the baseline, upon which different components are improved, and the training results after each modification are recorded. The detailed experimental results of the different modified models are presented in Table 2. The modification schemes are as follows.
(A)
The Backbone network downsampling module is replaced by the Robust Feature Downsampling (RFD) module, which enhances the model’s capacity to capture local and global information by optimizing the sensory field design, thereby enabling the model to extract more detailed information.
(B)
The neck is partially replaced by a RepGFPN feature fusion network, which enables the network to capture both high-level semantic information and low-level spatial details by fusing feature maps from different scales, thus improving the accuracy of target detection.
(C)
The up-sampling module of the neck part of the original model is replaced by the DySample module. This ensures the high accuracy and applicability of the up-sampling, improves the inference speed of the model, and enables the model to process high-resolution images quickly.
(D)
The replacement of the head component of the original model with a lightweight asymmetric detection head (LADH-Head) is proposed as a means of reducing the number of parameters and computational cost of the model.
(E)
The up-sampling module of the RepGFPN network will be enhanced based on scheme b, utilizing the DySample module, which enables the network neck to concentrate on a greater quantity of feature information.
(F)
The original model is to be enhanced simultaneously with the implementation of schemes A, D, and E to create the YOLO-RRL model.
Table 2. Results of ablation experiments with different improvement schemes.
| Plan     | RFD | RepGFPN | DySample | LADH | Mh   | Mb   | Oc   | Sh   | Sp   | Sc   | P/%  | mAP/% | Params/10⁶ | GFLOPs | FPS |
|----------|-----|---------|----------|------|------|------|------|------|------|------|------|-------|------------|--------|-----|
| Baseline |     |         |          |      | 97.2 | 93.9 | 96.3 | 93.2 | 91.9 | 91.9 | 94.1 | 93.0  | 3.01       | 8.1    | 184 |
| A        | ✓   |         |          |      | 98.8 | 94.7 | 94.0 | 97.2 | 95.8 | 96.5 | 96.1 | 94.8  | 3.03       | 9.6    | 159 |
| B        |     | ✓       |          |      | 98.9 | 93.8 | 94.5 | 97.4 | 96.2 | 95.6 | 96.1 | 94.2  | 3.26       | 8.3    | 213 |
| C        |     |         | ✓        |      | 97.4 | 94.3 | 96.0 | 97.4 | 95.0 | 94.5 | 95.7 | 94.2  | 3.01       | 8.1    | 222 |
| D        |     |         |          | ✓    | 97.8 | 93.9 | 94.4 | 96.6 | 95.6 | 94.5 | 95.5 | 93.8  | 2.38       | 5.7    | 232 |
| E        |     | ✓       | ✓        |      | 98.4 | 95.4 | 95.7 | 97.3 | 96.9 | 95.6 | 96.6 | 94.5  | 3.26       | 8.3    | 227 |
| F        | ✓   | ✓       | ✓        | ✓    | 99.2 | 95.6 | 93.7 | 97.3 | 95.8 | 95.4 | 96.2 | 95.2  | 2.70       | 7.4    | 206 |

Mh–Sc are per-class AP/% values: Mh = missing hole, Mb = mouse bite, Oc = open circuit, Sh = short, Sp = spur, Sc = spurious copper. A ✓ marks the modules enabled in each scheme.
From the table, it can be seen that the ablation experiments evaluate different model configuration schemes (Plan A–F). When compared with the baseline model, it can be seen that the proposed modifications achieve different degrees of improvement. The baseline model serves as a reference point, exhibiting satisfactory accuracy and average precision, although it is characterized by a higher number of parameters and computational complexity, as well as a lower frame rate.
The introduction of the RFD downsampling module in scheme A led to a notable enhancement in the accuracy, mAP, and AP for five of the defect types. Furthermore, the spurious copper defect exhibited the highest AP across all experiments. However, this approach also resulted in a marginal increase in the number of parameters, a significant surge in GFLOPs, and an expansion in computational complexity, ultimately reducing FPS. The scheme improves precision and mAP, but at the cost of inference speed. This demonstrates that the downsampling process preserves features better and reduces loss during feature map compression.
The introduction of the RepGFPN network in Scenario B led to a notable enhancement in AP values across the four categories, with a 2.0% increase in accuracy, a 1.2% improvement in mAP, and a substantial acceleration in inference speed. However, this was accompanied by an expansion in the number of parameters and GFLOPs. While the introduction of this network resulted in more powerful feature representation and a significant improvement in model accuracy, it also increased computational complexity.
The introduction of the DySample upsampling module in scheme C led to an increase in AP for all five defect types. This was evidenced by a 1.6% and 1.2% increase in accuracy and mAP, respectively, and a significant increase in FPS, while the number of parameters and GFLOPs remained stable. The introduction of this module has the effect of reducing false positives and missed detections without increasing computational resources and optimizes identification accuracy for the majority of categories. It is demonstrated that replacing upsampling with the baseline model results in significantly superior detail recovery in the feature map while also reducing the incidence of false positives and false negatives.
The introduction of the LADH detection head in scheme D resulted in an increase in the AP of four defect types, a small increase in precision and mAP, and the best parameter count, GFLOPs, and FPS of all experiments. The module significantly enhances inference speed while reducing computational complexity, rendering it well suited to real-time scenarios constrained by computational resources.
The combination of RepGFPN and DySample in scheme E improved the AP values of five defect categories, with the AP of the spur defect reaching its highest level across all experiments. FPS and mAP were also enhanced, at the cost of additional parameters and GFLOPs. The scheme thus strikes a balance between accuracy and speed, making it well suited to scenarios that demand both high accuracy and real-time performance.
The detection model proposed in this paper, YOLO-RRL, integrates all of the modules in scheme F. The experimental results demonstrate that the model achieves an mAP of 95.2%, indicating that it effectively reduces false alarms while detecting defects on the PCB surface. The number of parameters and GFLOPs are also significantly reduced, and the FPS improves within an acceptable range. The AP values of three defect types were the best in the experiments, especially missing holes (99.2%) and mouse bites (95.6%). The scheme therefore achieves the highest mAP together with low computational complexity and high inference speed, representing the optimal overall configuration and demonstrating great potential for practical scenarios that require high accuracy, low computational cost, and real-time performance.
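The mAP figures discussed above are the mean of the per-class average precision values, where each AP integrates one defect class's precision–recall curve. As a rough, self-contained illustration (not the authors' evaluation code), AP with all-point interpolation and the resulting mAP can be sketched as:

```python
def average_precision(recalls, precisions):
    """AP via all-point interpolation: make precision monotonically
    non-increasing from the right, then integrate over recall."""
    mrec = [0.0] + list(recalls) + [1.0]
    mpre = [0.0] + list(precisions) + [0.0]
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    return sum((mrec[i] - mrec[i - 1]) * mpre[i] for i in range(1, len(mrec)))

def mean_average_precision(per_class_ap):
    """mAP is simply the unweighted mean of the per-class AP values."""
    return sum(per_class_ap.values()) / len(per_class_ap)

# Illustrative per-class values only (not the paper's full measurements)
ap = {"missing_hole": 0.992, "mouse_bite": 0.956, "spur": 0.94}
print(round(mean_average_precision(ap), 3))
```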

5.2. Analysis of Model Training before and after Improvement

Figure 9 illustrates how the key metrics evolve during training for both the YOLOv8 and YOLO-RRL models. The four sub-figures depict precision, recall, mAP_0.5, and mAP_0.5:0.95 as functions of the training epoch. Precision rises rapidly within the first 20 epochs, and the precision of YOLO-RRL remains consistently above that of YOLOv8 throughout training, indicating that the improved model has a lower false alarm rate. Recall improves rapidly within the first 50 epochs and is likewise consistently higher for YOLO-RRL, indicating that the improved model detects more of the positive samples and has a lower miss rate. The mAP_0.5 curves improve rapidly within the first 50 epochs for both models, with YOLO-RRL consistently ahead, suggesting superior detection performance under the loose matching condition. The mAP_0.5:0.95 curves improve rapidly within the first 100 epochs, and YOLO-RRL again remains consistently above YOLOv8, indicating better overall detection performance across matching conditions. In conclusion, the advantage of YOLO-RRL on every metric demonstrates that the newly introduced modules effectively enhance the overall performance of the model, giving it greater accuracy and reliability in detecting PCB defects.
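The difference between mAP_0.5 (the loose matching condition) and mAP_0.5:0.95 comes down to the IoU threshold at which a predicted box counts as a true positive. A minimal sketch of that overlap test (illustrative only; the corner-coordinate box format and the matching logic beyond the threshold are assumptions):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_box, threshold=0.5):
    # mAP_0.5 uses a single 0.5 threshold; mAP_0.5:0.95 averages AP
    # over thresholds 0.5, 0.55, ..., 0.95, a stricter condition.
    return iou(pred_box, gt_box) >= threshold
```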
Figure 10 illustrates the variation in loss during training and validation for both the YOLOv8 and YOLO-RRL models. The six subfigures show the different loss components (box loss, DFL loss, and cls loss) on the training and validation sets, respectively. YOLO-RRL attains a lower box loss and DFL loss than YOLOv8 on both sets, indicating that it localizes defects and predicts box distributions more accurately; this further validates the effectiveness of the improved model. The classification losses of the two models are comparable, suggesting that the modifications have a relatively minor impact on the classification task.
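Of the three loss components in Figure 10, the classification term is the simplest to state: YOLOv8's cls loss is binary cross-entropy over the class scores. A minimal per-element sketch (illustrative; the framework applies it across all anchors and classes with its own weighting):

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for one predicted probability p and label y."""
    p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

# A confident correct prediction incurs little loss;
# a confident wrong one is penalized heavily.
print(round(bce(0.9, 1.0), 4), round(bce(0.9, 0.0), 4))
```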
A comprehensive analysis of the two sets of charts reveals that the enhanced model outperforms the original model on all key indicators, with a clear loss-optimization effect. In summary, the structural enhancements to YOLO-RRL yield a notable improvement in both detection accuracy and localization accuracy while maintaining classification performance, making the model better suited to applications that demand high-precision, high-reliability detection.

5.3. Model Performance Validation Results

The most effective detection models trained before and after the improvement are employed to validate defect detection on the test set, facilitating a more intuitive comparison of the models' performance on PCB surface defects. Figure 11 illustrates the detection outcomes across the various defect categories, including missing holes, short circuits, mouse bites, spurious copper, open circuits, and spurs. The red or yellow markers in the images mark the detected defects together with the confidence level associated with each. Column A depicts the original PCB surface, Column B the outcomes of the YOLOv8 model, and Column C the outcomes of the YOLO-RRL model.
The detection results presented in the figure demonstrate that the enhanced YOLO-RRL model detects all types of defects effectively, accurately locating each defect and marking its specific position, which indicates high detection accuracy. The confidence levels of the YOLO-RRL detections are generally higher, showing that the improved model has greater detection confidence across the defect categories and thereby reduces false alarms and omissions. YOLO-RRL performs well not only on individual categories but also across the multi-category detection task, exhibiting strong multi-tasking and generalization capabilities.

5.4. Comparative Experiments with Multiple Datasets

The use of diverse datasets for training and validation plays a pivotal role in model performance and robustness: it can significantly enhance the generalization ability and overall performance of target detection models, ensuring their efficacy in a multitude of real-world applications. The original YOLOv8 model and YOLO-RRL were tested on two additional datasets: the steel surface defect dataset (NEU-DET) and the aluminum surface defect dataset (APSPC). The steel dataset comprises 1800 high-quality defect images covering six defect types: crazing, inclusions, patches, pitted surfaces, rolled-in scale, and scratches. The images have high resolution and clear detail, which facilitates the learning and identification of defect features. The aluminum dataset comprises 1885 defect images with ten distinct defect types: dents, non-conductivity, efflorescence, orange peel, bottom leakage, bruising, pitting, convex powder, coating cracking, and dirty spots. Each dataset is divided into training, validation, and test subsets in a 7:2:1 ratio. The remaining experimental settings are identical to those described in Section 3.1. The experimental outcomes are presented in Table 3 and Table 4 below.
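The 7:2:1 partitioning mentioned above can be reproduced with a simple deterministic shuffle-and-cut; the following is a sketch under assumed file naming (the paper does not describe its actual split tooling):

```python
import random

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=0):
    """Shuffle deterministically, then cut into train/val/test subsets."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * ratios[0])
    n_val = int(len(items) * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# e.g. the 1885 aluminum images split into 1319 / 377 / 189
train, val, test = split_dataset([f"img_{i}.jpg" for i in range(1885)])
```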
Table 3 reveals that the YOLO-RRL model improves the mean average precision (mAP) by 0.8 percentage points, reduces the number of model parameters by 10.3%, and decreases the GFLOPs by 8.6%, while the frames per second (FPS) increase by 38.2% compared with the YOLOv8 model. The results demonstrate that YOLO-RRL retains its superiority in both detection speed and accuracy over YOLOv8 for steel surface defect detection.
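The relative changes quoted from Table 3 follow from the table entries by simple percentage arithmetic, which can be checked directly:

```python
def pct_change(old, new):
    """Signed relative change from old to new, in percent."""
    return (new - old) / old * 100.0

# Table 3 (NEU-DET): YOLOv8 -> YOLO-RRL
print(round(pct_change(3.01, 2.70), 1))  # parameters: -10.3
print(round(pct_change(8.1, 7.4), 1))    # GFLOPs:     -8.6
print(round(pct_change(157, 217), 1))    # FPS:        +38.2
```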
Table 4 reveals that the YOLO-RRL model achieves a 1.1-percentage-point improvement in mAP, a 7.6% reduction in the number of model parameters, and an 8.6% decrease in GFLOPs compared with the YOLOv8 model, while FPS improves by 20.8%. The results demonstrate that YOLO-RRL also outperforms YOLOv8 in all respects on the APSPC dataset.
The experimental results on the NEU-DET and APSPC datasets demonstrate that YOLO-RRL outperforms YOLOv8 in accuracy, model complexity, computational cost, and processing speed, indicating that YOLO-RRL has higher practical value for detection and is more robust than YOLOv8.

5.5. Multi-Model Comparison Experiments

Comparative experiments with multiple models are a common and important step in model evaluation. Their objective is to assess the performance of different models comprehensively, identify the most suitable model for a particular task, and verify the reliability and robustness of the model under various conditions. A systematic comparison of different models enables more informed choices, ensuring that the selected model operates efficiently and reliably in real-world applications.
The experiment employs a range of classical mainstream detection models, including the two-stage detection model Fast R-CNN [28] as well as the single-stage detection models of the YOLO series and SSD. To ensure the validity and fairness of the comparison, the dataset used is identical to that in Section 3.2, and the experimental equipment and environment settings are the same as those in Section 3.1. Table 5 presents the experimental results of the different models on the PCB dataset.
Table 5 lists the performance of the different detection models on the PCB surface defect dataset across several metrics. The mAP of the RT-DETR model [29] is the lowest of all experiments, 9.9 percentage points below that of YOLO-RRL. The Fast R-CNN model has the highest parameter count and computational cost of all experiments, with 137.10 × 10^6 parameters and a computational cost 362.8 GFLOPs higher than that of YOLO-RRL; its FPS is also the lowest, 86% below that of YOLO-RRL.
Table 6 presents a comparison of YOLO-RRL with previous work in the field. YOLO-MBBi [30], developed by Bowei Du et al., is a PCB defect detection model based on the YOLOv5s network; its improvements include the integration of MBConv modules, the CBAM attention mechanism, BiFPN, and depthwise convolutions, along with replacing the CIoU loss function with the SIoU loss function. Xiaohong Qian et al. proposed LFF-YOLO [31], which demonstrates highly competitive detection accuracy for PCB defects. As Table 6 shows, YOLO-RRL outperforms YOLO-MBBi with a 0.1-percentage-point improvement in mAP, a reduction of 5.4 GFLOPs, and an increase of 157.1 FPS. Compared with LFF-YOLO, YOLO-RRL improves mAP by 2.09 percentage points. These results indicate that YOLO-RRL exhibits superior overall performance.
In summary, the YOLO-RRL model performs well in PCB defect detection: it attains the highest accuracy while maintaining low computational complexity and a high frame rate, striking a good balance between accuracy and efficiency that makes it well suited to real industrial environments. The other models have their own advantages and disadvantages and can be selected according to specific application scenarios and needs.

6. Conclusions

In printed circuit board surface defect detection, the main challenges are the limited computational power of end devices and target features that are difficult to detect. To address these problems, we made extensive improvements to the existing YOLOv8 detection model and proposed a lightweight PCB surface defect detection model, YOLO-RRL. Extensive experiments show that YOLO-RRL combines high accuracy, low complexity, moderate computational requirements, and high processing speed on the PCB surface defect dataset, and the results on the NEU-DET and APSPC datasets demonstrate good robustness. In comparison with other models, YOLO-RRL is highly competitive, and its practical significance lies in its ability to significantly improve production quality and reduce production cost, as well as in its wide applicability and real-time capability. The application of the YOLO-RRL model in industrial production is therefore very promising and of important practical value. Despite the introduction of advanced methods, including RFD, RepGFPN, DySample, and LADH, certain defects remain undetected. For example, the spurious copper category showed consistently low AP scores across multiple schemes, indicating that the model has difficulty with this particular type of defect. This issue may originate from a lack of diversity in the training data for this category, which impairs the model's capacity to generalize. Future research should therefore prioritize augmenting dataset diversity, particularly for underrepresented categories such as spurious copper; sophisticated data augmentation techniques could enrich the training samples and bolster the model's generalization ability.

Author Contributions

Writing—review and editing T.Z., J.Z. and P.P.; Supervision, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded and supported by the National Natural Science Foundation of China [grant numbers 52075348, 52175107, 52275119]; National Key Research and Development Program of China [Grant numbers 2023YFB3408502]; Natural Science Foundation of Liaoning Province [grant numbers 2022-MS-280]; Shenyang Outstanding Young and Middle-aged Science and Technology Talents Project, [Grant numbers RC230739]; Basic Scientific Research Project of Liaoning Provincial Department of Education [Grant numbers JYTMS20231568].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, X.; Wu, Y.; He, X.; Ming, W. A Comprehensive Review of Deep Learning-Based PCB Defect Detection. IEEE Access 2023, 11, 139017–139038. [Google Scholar] [CrossRef]
  2. Ren, Z.; Fang, F.; Yan, N.; Wu, Y. State of the Art in Defect Detection Based on Machine Vision. Int. J. Precis. Eng. Manuf.-Green Technol. 2022, 9, 661–691. [Google Scholar] [CrossRef]
  3. Chen, Y.; Ding, Y.; Zhao, F.; Zhang, E.; Wu, Z.; Shao, L. Surface Defect Detection Methods for Industrial Products: A Review. Appl. Sci. 2021, 11, 7657. [Google Scholar] [CrossRef]
  4. Hussain, M. YOLO-v1 to YOLO-v8, the Rise of YOLO and Its Complementary Nature toward Digital Manufacturing and Industrial Defect Detection. Machines 2023, 11, 677. [Google Scholar] [CrossRef]
  5. Cheng, X.; Yu, J. RetinaNet with Difference Channel Attention and Adaptively Spatial Feature Fusion for Steel Surface Defect Detection. IEEE Trans. Instrum. Meas. 2021, 70, 1–11. [Google Scholar] [CrossRef]
  6. Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 10778–10787. [Google Scholar]
  7. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 386–397. [Google Scholar]
  8. Pang, J.; Chen, K.; Shi, J.; Feng, H.; Ouyang, W.; Lin, D. Libra R-CNN: Towards Balanced Learning for Object Detection. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 821–830. [Google Scholar]
  9. Cai, Z.; Vasconcelos, N. Cascade R-CNN: Delving into High Quality Object Detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6154–6162. [Google Scholar]
  10. Li, J.; Chen, M. DEW-YOLO: An Efficient Algorithm for Steel Surface Defect Detection. Appl. Sci. 2024, 14, 5171. [Google Scholar] [CrossRef]
  11. Zhou, W.; Li, C.; Ye, Z.; He, Q.; Ming, Z.; Chen, J.; Wan, F.; Xiao, Z. An Efficient Tiny Defect Detection Method for PCB with Improved YOLO through a Compression Training Strategy. IEEE Trans. Instrum. Meas. 2024, 73, 1–14. [Google Scholar] [CrossRef]
  12. Shao, R.; Zhou, M.; Li, M.; Han, D.; Li, G. TD-Net: Tiny Defect Detection Network for Industrial Products. Complex Intell. Syst. 2024, 10, 3943–3954. [Google Scholar] [CrossRef]
  13. Gao, J.; Zhang, Y.; Geng, X.; Tang, H.; Bhatti, U.A. PE-Transformer: Path Enhanced Transformer for Improving Underwater Object Detection. Expert Syst. Appl. 2024, 246, 123253. [Google Scholar] [CrossRef]
  14. Zhang, Y.; Zhang, H.; Huang, Q.; Han, Y.; Zhao, M. DsP-YOLO: An Anchor-Free Network with DsPAN for Small Object Detection of Multiscale Defects. Expert Syst. Appl. 2024, 241, 122669. [Google Scholar] [CrossRef]
  15. Yuan, Z.; Tang, X.; Ning, H.; Yang, Z. LW-YOLO: Lightweight Deep Learning Model for Fast and Precise Defect Detection in Printed Circuit Boards. Symmetry 2024, 16, 418. [Google Scholar] [CrossRef]
  16. Liao, X.; Lv, S.; Li, D.; Luo, Y.; Zhu, Z.; Jiang, C. YOLOv4-MN3 for PCB Surface Defect Detection. Appl. Sci. 2021, 11, 11701. [Google Scholar] [CrossRef]
  17. Yuan, M.; Zhou, Y.; Ren, X.; Zhi, H.; Zhang, J.; Chen, H. YOLO-HMC: An Improved Method for PCB Surface Defect Detection. IEEE Trans. Instrum. Meas. 2024, 73, 1–11. [Google Scholar] [CrossRef]
  18. Bai, D.; Li, G.; Jiang, D.; Yun, J.; Tao, B.; Jiang, G.; Sun, Y.; Ju, Z. Surface Defect Detection Methods for Industrial Products with Imbalanced Samples: A Review of Progress in the 2020s. Eng. Appl. Artif. Intell. 2024, 130, 107697. [Google Scholar] [CrossRef]
  19. Terven, J.; Córdova-Esparza, D.-M.; Romero-González, J.-A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
  20. Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  21. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Volume 9905, pp. 21–37. [Google Scholar]
  22. Li, Q.; Jia, X.; Zhou, J.; Shen, L.; Duan, J. Rediscovering BCE Loss for Uniform Classification. arXiv 2024, arXiv:2403.07289. [Google Scholar]
  23. Lu, W.; Chen, S.-B.; Tang, J.; Ding, C.H.Q.; Luo, B. A Robust Feature Downsampling Module for Remote-Sensing Visual Tasks. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–12. [Google Scholar] [CrossRef]
  24. Xu, X.; Jiang, Y.; Chen, W.; Huang, Y.; Zhang, Y.; Sun, X. DAMO-YOLO: A Report on Real-Time Object Detection Design. arXiv 2022, arXiv:2211.15444. [Google Scholar]
  25. Liu, W.; Lu, H.; Fu, H.; Cao, Z. Learning to Upsample by Learning to Sample. arXiv 2023, arXiv:2308.15085. [Google Scholar]
  26. Zhang, J.; Chen, Z.; Yan, G.; Wang, Y.; Hu, B. Faster and Lightweight: An Improved YOLOv5 Object Detector for Remote Sensing Images. Remote Sens. 2023, 15, 4974. [Google Scholar] [CrossRef]
  27. Huang, W.; Wei, P. A PCB Dataset for Defects Detection and Classification. arXiv 2019, arXiv:1901.08204. [Google Scholar]
  28. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  29. Lv, W.; Xu, S.; Zhao, Y.; Wang, G.; Wei, J.; Cui, C.; Du, Y.; Dang, Q.; Liu, Y. DETRs Beat YOLOs on Real-time Object Detection. arXiv 2023, arXiv:2304.08069. [Google Scholar]
  30. Du, B.; Wan, F.; Lei, G.; Xu, L.; Xu, C.; Xiong, Y. YOLO-MBBi: PCB Surface Defect Detection Method Based on Enhanced YOLOv5. Electronics 2023, 12, 2821. [Google Scholar] [CrossRef]
  31. Qian, X.; Wang, X.; Yang, S.; Lei, J. LFF-YOLO: A YOLO Algorithm with Lightweight Feature Fusion Network for Multi-Scale Defect Detection. IEEE Access 2022, 10, 130339–130349. [Google Scholar] [CrossRef]
Figure 1. A diagrammatic representation of the YOLO-RRL network structure.
Figure 2. Generic RFD structure after replacement with SRFD and DRFD modules.
Figure 3. Details of the SRFD and DRFD modules.
Figure 4. RepGFPN network architecture.
Figure 5. Sampling-based dynamic upsampling and module designs in DySample. (a) Sampling-based dynamic upsampling. (b) Sampling point generator in DySample.
Figure 6. Network Architecture for Lightweight Asymmetric Detection Head (LADH-Head).
Figure 7. PCB Defect Type Image. The blue boxes show specific defects: (a) missing hole; (b) mouse bite; (c) open circuit; (d) short; (e) spur; (f) spurious copper.
Figure 8. Number of labels for different defect types before and after data enhancement.
Figure 9. Changes in key metrics during training in YOLOv8 and YOLO-RRL.
Figure 10. Changes in loss during YOLOv8 and YOLO-RRL training.
Figure 11. Validation results for YOLO-RRL and YOLOv8. (a) Original image; (b) YOLOv8; (c) YOLO-RRL.
Table 1. Experimental environment configuration.
Category | Configuration
CPU | AMD Ryzen 9 7945HX
GPU | NVIDIA GeForce RTX 4060 8G
RAM | 16 G
Operating System | Windows 11
Framework | PyTorch 2.0.0
Programming environment | Python 3.9
CUDA | 11.8
Table 3. Experimental results on the NEU-DET dataset.
Model | mAP/% | Params/10^6 | GFLOPs | FPS
YOLOv8 | 73.1 | 3.01 | 8.1 | 157
YOLO-RRL | 73.9 | 2.70 | 7.4 | 217
Table 4. Experimental results on the APSPC dataset.
Model | mAP/% | Params/10^6 | GFLOPs | FPS
YOLOv8 | 61.2 | 3.01 | 8.1 | 197
YOLO-RRL | 62.3 | 2.78 | 7.4 | 238
Table 5. Comparative Experimental Results of Different Models on PCB Defect Dataset.
Model | mAP/% | Params/10^6 | GFLOPs | FPS
Fast R-CNN | 92.8 | 137.10 | 370.2 | 28
SSD | 87.3 | 26.29 | 62.7 | 62
YOLOv5 | 92.6 | 2.51 | 7.1 | 192
YOLOv6 | 90.9 | 4.23 | 11.8 | 188
YOLOv7 | 91.1 | 6.02 | 13.1 | 105
YOLOv8 | 93.0 | 3.01 | 8.1 | 183
RT-DETR | 85.5 | 28.5 | 100.6 | 89
YOLO-RRL | 95.4 | 2.78 | 7.4 | 206
Table 6. Comparison results of YOLO-RRL, YOLO-MBBi, and LFF-YOLO.
Model | mAP/% | GFLOPs | FPS
YOLO-MBBi | 95.3 | 12.8 | 48.9
LFF-YOLO | 93.31 | - | -
YOLO-RRL | 95.4 | 7.4 | 206
Zhang, T.; Zhang, J.; Pan, P.; Zhang, X. YOLO-RRL: A Lightweight Algorithm for PCB Surface Defect Detection. Appl. Sci. 2024, 14, 7460. https://doi.org/10.3390/app14177460