Article

Fabric Defect Detection Based on Improved Lightweight YOLOv8n

Shuangbao Ma, Yuna Liu and Yapeng Zhang
1 Hubei Key Laboratory of Digital Textile Equipment, Wuhan Textile University, Wuhan 430073, China
2 School of Mechanical Engineering and Automation, Wuhan Textile University, Wuhan 430073, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(17), 8000; https://doi.org/10.3390/app14178000
Submission received: 11 July 2024 / Revised: 1 September 2024 / Accepted: 5 September 2024 / Published: 7 September 2024

Abstract

In response to the challenges posed by complex background textures and limited hardware resources in fabric defect detection, this study proposes GSL-YOLOv8n, a lightweight fabric defect detection algorithm based on an improved YOLOv8n model. Firstly, to reduce the parameter count and complexity of the YOLOv8n network, the GhostNet concept is used to construct the C2fGhost module, replacing the conventional convolution layers in the YOLOv8n structure with Ghost convolutions. Secondly, the SimAM parameter-free attention mechanism is embedded at the end of the backbone network to eliminate redundant background, enhance semantic information for small targets, and improve the network’s feature extraction capability. Lastly, a lightweight shared convolution detection head is designed, employing a scale layer to adjust features, ensuring the lightweight nature of the model while minimizing precision loss. Compared to the original YOLOv8n model, the improved GSL-YOLOv8n algorithm increases the mAP@0.5 by 0.60% to 98.29% and reduces model size, computational load, and parameter count by 66.7%, 58.0%, and 67.4%, respectively, meeting the application requirements for fabric defect detection in textile industry production.

1. Introduction

During textile production, defects such as thread breaks, pilling, holes, thread floats, and stains may occur due to machinery issues or operator errors, leading to poor product quality and adversely affecting production efficiency [1]. The complex texture of fabric surfaces makes traditional fabric defect detection, which often relies on manual visual inspection, inefficient, subjective, and prone to visual fatigue. According to [2], the accuracy of manual inspection is only around 70%, which is insufficient to meet the demands of large-scale production. It is therefore crucial to develop an automated, efficient, accurate, and cost-effective fabric defect detection system using computer vision and deep learning technologies. Traditional object detection algorithms rely on a combination of handcrafted features and classifiers; they require high-quality images, involve complex processing, and are highly sensitive to noise and interference, which limits their effectiveness in detecting fabric defects.
In recent years, deep learning-based object detection algorithms have developed rapidly owing to their strong learning capabilities and robustness to scale variations. Based on the number of detection stages, they can be divided into two-stage detection algorithms, represented by Faster R-CNN [3], and one-stage detection algorithms, represented by YOLO (You Only Look Once) [4] and SSD (Single-Shot MultiBox Detector) [5]. In research on two-stage fabric defect detection algorithms, Sun et al. [6] improved the prediction boxes by using an enhanced K-means clustering method and replaced the backbone network of Faster R-CNN with an optimized ResNet50. This algorithm improved the accuracy of fabric defect detection but did not address the issue of computational complexity.
Compared to two-stage detection algorithms, single-stage algorithms do not require the generation of candidate regions, offering higher real-time performance. They also have unique advantages in multi-scale detection and multi-task learning: recognizing objects at different scales improves detection accuracy, which has made single-stage methods a current research hotspot in fabric defect detection. Xie et al. [7] added a Fully Convolutional Squeeze-and-Excitation (FCSE) module to the traditional SSD and validated it on the TILDA and Xuelang datasets, improving detection accuracy. To cope with complex textile texture backgrounds, Guo et al. [8] proposed a Convolutional Squeeze-and-Excitation (CSE) channel attention module and integrated it into the YOLOv5 backbone, enhancing defect detection and anti-interference capabilities. Fan et al. [9] embedded a channel and spatial dual-attention mechanism (CBAM) into YOLOv5, effectively mitigating the feature allocation issues of a single-attention mechanism and improving model accuracy, although the computational load was not significantly reduced. Jing et al. [10] applied the k-means algorithm for dimensional clustering of target boxes and added YOLO detection layers on feature maps of different sizes; the improved network model achieved an error detection rate below 5%. These methods generally suffer from high computational complexity and an imbalance between accuracy and detection speed, making them difficult to deploy on resource-constrained edge devices.
With the advent of lightweight networks, various scholars have combined them with YOLO models to propose new lightweight object detection algorithms. Kang et al. [11] addressed the issues of complex deep learning model construction and high network complexity by using the lightweight YOLOv5s model as a base; they integrated the Convolutional Block Attention Module (CBAM) and a feature enhancement module into the backbone and neck, respectively, and modified the loss function to CIoU_Loss. Liu et al. [12] integrated new convolutional operators into the Extended Efficient Layer Aggregation Network to optimize feature extraction, effectively capturing spatial features while reducing computation; their experiments demonstrated that the resulting fabric defect detection model reduced model parameters and computational load by 18.03% and 20.53%, respectively. Although these researchers have made significant progress in lightweight YOLO detection methods, the complex background texture and numerous small targets in fabric defects remain challenging: the feature extraction capabilities of these networks are still limited, and there is room for improvement in the models’ lightweight design.
To address the aforementioned issues, this study proposes an efficient and accurate lightweight YOLOv8n-based fabric defect detection algorithm, GSL-YOLOv8n, suitable for textile production lines. The algorithm includes the following improvements:
  • Ghost Network Integration: The Ghost network is used to enhance the standard convolution (Conv) modules and C2f in the YOLOv8n network, significantly reducing the model’s parameter count.
  • Semantic Information Extraction: To address the loss of semantic information between different features and the loss of small target features in the C2f module of YOLOv8, the parameter-free attention mechanism SimAM is embedded at the end of the backbone network.
  • Lightweight Detection Head: To further achieve model lightweighting, a lightweight detection head (LSCDH) is designed by combining the GroupNorm and shared convolution concepts.
The experimental results on fabric defect datasets demonstrate the effectiveness of the proposed algorithm. It successfully balances detection accuracy and speed while reducing model complexity, making it suitable for deployment on devices with limited computational resources.
The remaining sections of this paper are organized as follows: The second section elaborates on the YOLOv8 algorithm and the proposed improvements. The third section introduces the fabric dataset, experimental setup, and detailed experimental results. The final section presents the conclusions and future work outlook of this study.

2. Models and Methods

2.1. YOLOv8 Algorithm

Ultralytics released YOLOv8 in early 2023. Compared with other single-stage detection networks in the YOLO series, such as YOLOv5 and YOLOv7, it offers higher detection accuracy with fewer parameters. The YOLOv8 model draws on the design advantages of models like YOLOv5 and YOLOv6, offering a new state-of-the-art (SOTA) model. It is available at five scales—n, s, m, l, and x—to meet the requirements of various deployment platforms and application scenarios, achieving improvements in both detection accuracy and speed. To ensure real-time performance while keeping the parameter count low, this study adopts the YOLOv8n version.
The YOLOv8 network applies mosaic data augmentation at the input stage to enhance the network’s feature extraction capability; mosaic augmentation is switched off for the final training epochs (the close_mosaic setting) to preserve model accuracy. The kernel size of the initial convolution layer was changed from 6 × 6 to 3 × 3, improving computational efficiency without sacrificing accuracy. The backbone network consists of Conv, C2f, BN, and SPPF modules. The C2f module, designed on the basis of the ELAN structure in YOLOv7, provides a richer gradient flow, enhancing the model’s feature representation capability while keeping the model lightweight [13]. The feature fusion layer (Neck) employs an FPN-PAN structure based on PANet (Path Aggregation Network) [14], using upsampling and convolution layers to achieve multi-level feature fusion and strengthening the network’s ability to fuse multi-scale features. The detection head (Head) uses a decoupled structure to extract target category and location features, with an anchor-free mechanism applied to reduce regression time.

2.2. Improved YOLOv8

Although YOLOv8n performs well in terms of detection speed and accuracy, it remains unsuitable for the rapid detection needs of embedded systems. This study introduces the GhostNet network, utilizing GhostConv and C2fGhost to replace the ordinary convolution modules and C2f modules, respectively, in order to reduce the model’s parameter count. In complex scenarios, the YOLOv8 algorithm’s feature extraction for small objects can be misled by large objects, resulting in a lack of relevant feature information during deep feature extraction. To address this issue, the SimAM parameter-free attention mechanism is embedded to enhance the detection capability for small fabric defects while suppressing the interference of complex fabric textures. The detection head adopts a newly designed lightweight module, LSCDH, which uses shared convolution and scale layers to resize defect features of varying scales, minimizing precision loss while maintaining lightweight characteristics. The improved GSL-YOLOv8 model structure is shown in Figure 1.

2.2.1. Lightweight Ghost Backbone Feature Extraction Network

In 2020, Huawei’s Noah’s Ark Lab designed the lightweight network GhostNet [15], composed primarily of Ghost bottlenecks built from Ghost modules. GhostConv is a module within the network that can replace conventional convolutions. The core design of GhostNet is illustrated in Figure 2. The lightweight Ghost network uses a small number of traditional convolutions to extract features and then applies linear transformations to obtain additional feature maps, maintaining similar recognition performance while reducing the parameter count and computational complexity. GhostConv first applies a small number of standard convolutions to the input feature map, then processes the resulting features with multiple simple linear transformations (cheap operations), and finally concatenates the two sets to produce the final feature map.
YOLOv8 employs traditional convolutional layers (Conv) to extract feature information, which, although comprehensive, includes a substantial amount of redundancy, increasing the model’s parameter count. Lightweight networks such as MobileNet and ShuffleNet use smaller convolution filters to reduce model parameters but still consume considerable memory. In contrast, the Ghost module generates feature information by applying linear transformations to the output of fewer standard convolutions. To lighten the model, this study therefore replaces all convolutional layers (Conv) in YOLOv8 with GhostConv. A newly optimized C2f structure, named C2fGhost, is also designed by replacing the conventional convolutions in the bottleneck block with Ghost convolutions. This structure exploits feature fusion and gradient flow to reduce redundant gradient information, enhancing the network’s learning capability while significantly compressing model size and reducing computational complexity.
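As a concrete illustration, the following is a minimal PyTorch sketch of the Ghost convolution idea described above. The kernel sizes, activation, and the even split between primary and cheap channels are illustrative assumptions rather than the exact GSL-YOLOv8n configuration.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Minimal Ghost convolution sketch: a primary convolution produces
    half of the output channels, and a cheap depthwise convolution (the
    'linear transformation') generates the other half."""
    def __init__(self, c_in: int, c_out: int, k: int = 1, s: int = 1):
        super().__init__()
        c_hid = c_out // 2  # assumes an even number of output channels
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_hid, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_hid),
            nn.SiLU(),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(c_hid, c_hid, 5, 1, 2, groups=c_hid, bias=False),
            nn.BatchNorm2d(c_hid),
            nn.SiLU(),
        )

    def forward(self, x):
        y = self.primary(x)
        # Concatenate primary features with their cheap transformations.
        return torch.cat((y, self.cheap(y)), dim=1)
```

A C2fGhost block then follows by swapping the standard convolutions inside the C2f bottleneck for such GhostConv instances.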

2.2.2. Parameter-Free Attention Mechanism SimAM

Fabric defects vary in shape and size, and the background often has a color similar to the target, which affects the extraction of semantic features. To address this issue, this study introduces the SimAM attention mechanism [16] to enhance the network’s ability to extract target features while reducing the attention paid to irrelevant features. Unlike existing attention modules such as BAM and CBAM [17], which can lose important information when distributing weights based on channels and spatial dimensions, SimAM can infer three-dimensional attention weights for intra-layer feature maps without increasing network parameters. It does this by calculating the separability of features to identify key features rich in information, thus reducing model complexity and computational cost. Its structure is illustrated in Figure 3.
In the SimAM attention mechanism, neurons whose firing patterns differ from those of neighboring neurons exert a spatial inhibition effect and should therefore be assigned higher priority during task processing. SimAM evaluates the importance of each neuron by defining an energy function that measures the linear separability between neurons. The energy function is formulated in Equations (1)–(4).
$$e_t = \frac{4(\sigma^2 + \lambda)}{(t - \mu)^2 + 2\sigma^2 + 2\lambda} \tag{1}$$
$$\mu = \frac{1}{M}\sum_{i=1}^{M} x_i \tag{2}$$
$$\sigma^2 = \frac{1}{M}\sum_{i=1}^{M}(x_i - \mu)^2 \tag{3}$$
$$\bar{X} = \mathrm{sigmoid}\left(\frac{1}{e_t}\right) \times X \tag{4}$$
where $e_t$ represents the energy function for each channel; $\sigma^2$ is the variance of each channel of $X$; $\lambda$ is a hyperparameter; $t$ is the target neuron of the input feature; $\mu$ is the mean of each channel of $X$; $M$ is the number of energy functions; $\bar{X}$ denotes the enhanced features; and $X$ represents the input features.
In this study, some fabric defect features occupy a relatively small portion of the image and are surrounded by a significant amount of irrelevant feature information. The similarity between adjacent pixels in the image is strong, while the similarity between distant pixels is weak. By introducing SimAM at the end of the YOLOv8 backbone, the features can be better refined, the network’s ability to extract global features is enhanced, and redundant background information is effectively removed. This reduces the interference of irrelevant and complex fabric textures on defect detection.
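For reference, a minimal PyTorch sketch of SimAM as a parameter-free module is given below, following the widely used simplified form of Equations (1)–(4); the default value of λ is an assumption.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free SimAM sketch: computes per-neuron energy following
    Equations (1)-(4) and reweights the input features accordingly."""
    def __init__(self, lam: float = 1e-4):
        super().__init__()
        self.lam = lam  # hyperparameter lambda in Equation (1)

    def forward(self, x):
        _, _, h, w = x.shape
        m = h * w - 1  # number of neighboring neurons per channel
        # (t - mu)^2, with mu the per-channel spatial mean
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        # channel variance sigma^2
        v = d.sum(dim=(2, 3), keepdim=True) / m
        # inverse energy 1/e_t (up to the standard simplification)
        e_inv = d / (4 * (v + self.lam)) + 0.5
        # Equation (4): sigmoid-gated feature enhancement
        return x * torch.sigmoid(e_inv)
```

Because the module has no learnable weights, it can be appended to the backbone without changing the parameter count, which is consistent with the ablation results reported later.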

2.2.3. Shared Convolution Detection Head (LSCDH)

The original YOLOv8n network uses three detection heads to detect targets of different sizes. However, as the model deepens, its performance in handling small targets diminishes. This study introduces a newly designed Lightweight Shared Convolutional Detection Head (LSCDH) to enhance the network’s precision and speed in multi-scale target detection.
The overall structure of the LSCDH detection head is illustrated in Figure 4. First, the feature output from the YOLOv8 network is compressed by replacing the 3 × 3 convolutional kernels with 1 × 1 kernels, effectively improving training speed. Second, to integrate feature information from different branches, Group Normalization (GroupNorm) [18] is applied during feature fusion, enhancing the model’s generalization ability. Next, two shared convolutions (Conv_GN) are applied consecutively to the feature maps of different scales output by YOLOv8, achieving parameter sharing; GroupNorm normalizes the feature information, followed by an activation function that strengthens the network’s non-linear expressiveness. The resulting feature maps are divided into three branches according to their scales, and each branch splits its output into regression and classification outputs according to the position and category of the predicted objects. After the regression output passes through the scale layer, the feature map dimensions are restored to those of the original input. The normalized weights are then added to the features of each channel, allowing better fusion of low-level detail with high-level information and ensuring effectiveness and adaptability across scales.
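The following PyTorch sketch illustrates this shared-convolution structure. The class name, channel widths, and the regression/classification output layout are illustrative assumptions rather than the authors’ exact implementation.

```python
import torch
import torch.nn as nn

class Scale(nn.Module):
    """Learnable per-branch scale layer that re-adapts shared features."""
    def __init__(self, value: float = 1.0):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(value))

    def forward(self, x):
        return x * self.scale

def conv_gn(c_in: int, c_out: int, k: int = 3, groups: int = 16):
    """Conv_GN block: convolution + GroupNorm + activation."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, 1, k // 2, bias=False),
        nn.GroupNorm(groups, c_out),
        nn.SiLU(),
    )

class SharedDetectionHead(nn.Module):
    """Sketch of a shared detection head: 1x1 projections compress each
    scale to a common width, two Conv_GN blocks are shared across all
    scales, and per-branch Scale layers re-adapt the regression output."""
    def __init__(self, chs=(64, 128, 256), width=64, n_cls=6, reg_max=16):
        super().__init__()
        self.proj = nn.ModuleList(conv_gn(c, width, k=1) for c in chs)
        self.shared = nn.Sequential(conv_gn(width, width), conv_gn(width, width))
        self.reg = nn.Conv2d(width, 4 * reg_max, 1)  # box regression branch
        self.cls = nn.Conv2d(width, n_cls, 1)        # classification branch
        self.scales = nn.ModuleList(Scale() for _ in chs)

    def forward(self, feats):
        outs = []
        for f, proj, scale in zip(feats, self.proj, self.scales):
            y = self.shared(proj(f))
            # The scale layer adjusts the shared regression output per branch.
            outs.append((scale(self.reg(y)), self.cls(y)))
        return outs
```

Sharing the two Conv_GN blocks across the three scales removes the per-branch convolution stacks of the original head, which accounts for the parameter savings reported in the ablation study.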

3. Results and Discussion

3.1. Experimental Platform

The experiments were conducted on a 64-bit Windows 10 platform equipped with an NVIDIA GeForce RTX 3060 Ti (NVIDIA, Santa Clara, CA, USA). The implementation used Python 3.11.5 within the PyTorch 2.1.0 deep learning framework, with CUDA 12.1 and the matching cuDNN release employed for GPU acceleration to speed up training of the YOLOv8 model. The training parameters are detailed in Table 1.
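Under these settings, training could be launched with the Ultralytics API roughly as follows; the model and dataset YAML filenames are placeholders, not files shipped with this paper.

```python
from ultralytics import YOLO

# Training sketch using the hyperparameters of Table 1. The YAML names
# are hypothetical; the custom config is assumed to define the
# Ghost/SimAM/LSCDH variant described in Section 2.
model = YOLO("gsl-yolov8n.yaml")
model.train(
    data="fabric_defects.yaml",
    optimizer="SGD",
    epochs=250,
    imgsz=640,
    batch=16,
    workers=8,
    lr0=0.01,
    momentum=0.9,
    weight_decay=0.0005,
    close_mosaic=10,  # disable mosaic augmentation for the final 10 epochs
)
```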

3.2. Dataset Description

The experimental dataset was drawn from real production scenarios captured at a textile manufacturing enterprise and from the Tianchi [19] public dataset. A total of 2115 images containing textile defects, captured in actual production scenarios, were used for training and testing. These images cover six common textile defects: Float, Skips, Pilling, Hole, Pulling Out Yarn, and Stain, illustrated in Figure 5. Because noise and minor defects in textile images are highly similar, and to improve the model’s generalization ability and robustness and prevent overfitting, this study applied data augmentation methods such as rotation, scaling, cropping, and color changes to expand the dataset to three times its original size, for a total of 6345 images. The dataset was randomly divided into training, validation, and test sets in a ratio of 8:1:1. The augmented set contained 1376 Float, 1011 Skips, 606 Pilling, 336 Hole, 2016 Pulling Out Yarn, and 1734 Stain samples. Additionally, a portion of the Tianchi public textile image dataset was reserved to validate the generalization capability of the proposed model.
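A sketch of such an offline augmentation pipeline is shown below; the transform magnitudes are assumptions, and for detection data the bounding-box coordinates must be remapped alongside the images (omitted here).

```python
import torchvision.transforms as T

# Covers the four operations named above: rotation, scaling, cropping,
# and color changes. Magnitudes are illustrative, not the paper's values.
augment = T.Compose([
    T.RandomRotation(degrees=15),
    T.RandomResizedCrop(size=640, scale=(0.8, 1.0)),  # scaling + cropping
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
])
```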
To investigate the impact of augmented data on defect detection performance, a comparative experiment was designed to evaluate detection effectiveness before and after data augmentation, using mAP@0.5, Recall (R), and Precision (P) as metrics. The results, shown in Table 2, demonstrate that the augmented dataset achieves higher mAP@0.5, Recall, and Precision than the original dataset, indicating improved overall model performance. Therefore, the subsequent comparative and ablation experiments were conducted using the augmented dataset.
The confusion matrices in Figure 6 compare the model’s performance before and after augmenting the training dataset. The left matrix, based on the original dataset, shows a higher number of false positives, particularly in the “Stains” and “Background” categories, with dispersed off-diagonal values, indicating overestimation errors caused by insufficient and imbalanced training samples [20]. After augmenting the dataset (right matrix), the off-diagonal values significantly decrease, and the Precision improves, with more concentrated diagonal values for categories like “Stains” and “Background”, indicating more accurate predictions. This improvement demonstrates that data augmentation effectively reduces overestimation errors, enhancing the model’s robustness and generalization capability.

3.3. Evaluation Metrics

To verify the effectiveness of the experiments, the following evaluation metrics were used: mean Average Precision (mAP@0.5), Precision (P), Recall (R), model parameters, model size, and detection speed (Frames Per Second, FPS). mAP@0.5, P, and R were used to evaluate the detection performance of the model; higher values of mAP@0.5, P, R, and FPS indicate better detection performance. Model parameters, model size, and GFLOPs (Giga Floating-Point Operations) were used to evaluate the lightweight performance of the model; smaller values indicate a more lightweight model.
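For reference, these metrics follow their standard definitions, where TP, FP, and FN denote true positives, false positives, and false negatives at an IoU threshold of 0.5, and $N_c$ is the number of defect classes:

$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}$$
$$AP = \int_0^1 P(R)\,\mathrm{d}R, \qquad mAP@0.5 = \frac{1}{N_c}\sum_{i=1}^{N_c} AP_i$$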

3.4. Ablation Experiments

To validate the effectiveness of each improvement module in the proposed GSL-YOLOv8n algorithm, we conducted eight sets of ablation experiments on the fabric defect dataset. The evaluation metrics included model size, GFLOPs, FPS, model parameters, and mAP@0.5. Based on YOLOv8, we replaced standard convolutions with Ghost convolutions, upgraded the detection head to the newly designed shared convolution detection head (LSCDH), and introduced the SimAM attention mechanism. The detection results are shown in Table 3, where “√” indicates the use of the corresponding module.
Based on the experimental results presented in Table 3, the following observations can be made:
  • Complexity reduction with Ghost convolutions: Replacing the standard convolutions in the YOLOv8 network with Ghost convolutions decreased the model size by 39.7% and reduced the computation and parameter count by 38.3% and 43.0%, respectively, while increasing the mAP@0.5 by 0.46% compared to the unmodified YOLOv8 model.
  • Improvement with the SimAM attention mechanism: Introducing the parameter-free SimAM attention mechanism to address the multi-scale nature of textile defects yielded a 0.46% increase in mAP@0.5 and a 25.8% increase in FPS without increasing the model size, parameters, or computational load. This implies that SimAM enhances the model’s ability to select relevant features, allowing it to locate and process important features more quickly and spend less time on irrelevant ones, thereby improving inference efficiency.
  • Network lightweighting with LSCDH: Replacing the original YOLOv8 detection head with the shared convolution detection head (LSCDH) reduced the model size by 20.6% and the computation and parameter count by 19.8% and 21.4%, respectively, while improving the mAP@0.5 by 0.33%.
  • Combination of improvements: Replacing the standard convolutions with Ghost convolutions and introducing SimAM decreased the model size and parameter count by 41.3% and 43.9%, respectively, raising the mAP@0.5 to 97.99%. Combining Ghost convolutions with LSCDH reduced the model size and computation by 60.3% and 56.8%, with the mAP@0.5 reaching 97.75%. Integrating all three improvements (Ghost convolutions, SimAM, and LSCDH) into the YOLOv8 model to form the GSL-YOLOv8 model reduced the model size by 66.7%, the computation by 58.0%, and the parameter count by 67.4%, with the FPS remaining nearly constant and the mAP@0.5 improving by 0.60%.
The ablation experiments indicate that each improvement point and their combinations positively impacted the model’s performance, demonstrating that the proposed algorithm effectively enhances the original model’s capability to detect fabric defects.
Figure 7 compares the training performance of the baseline YOLOv8 and the improved GSL-YOLOv8 models across four key metrics. In Figure 7a, GSL-YOLOv8 converges faster and consistently maintains higher mAP@0.5 values, indicating improved detection accuracy. Figure 7b highlights its higher Precision, especially in early epochs. Figure 7c shows improved Recall, with better instance detection and fewer misses. Figure 7d presents lower and more stable loss values, reflecting enhanced optimization and generalization. Overall, GSL-YOLOv8 achieves superior accuracy, stability, and robustness in object detection.

3.5. Comparison Experiments

3.5.1. Attention Mechanism SimAM Comparison Experiments

To address the issue of background noise affecting the model’s accuracy in detecting small fabric defects, this study incorporates SimAM at the end of the YOLOv8n backbone to enhance the network’s capability of extracting global features. The attention module can be inserted at various positions within the YOLOv8 network, such as the backbone and the neck. To verify the impact of SimAM’s position on algorithm performance, SimAM is embedded at the end of the backbone and within the neck network for comparative experiments against the original YOLOv8n model. “YOLOv8n+backbone” indicates SimAM embedded at the end of the backbone, while “YOLOv8n+neck” denotes SimAM embedded within the neck network. The experimental results are shown in Table 4.
As shown in Table 4, when SimAM is embedded at the end of the backbone network, the AP values for three defect types—Skips, Hole, and Pulling Out Yarn—are the highest, and the mAP@0.5 outperforms both the original YOLOv8 and the version with SimAM added to the neck network. Specifically, the AP values for Hole and Pulling Out Yarn improve by 2.8% and 1.6%, respectively, over the YOLOv8 model. In summary, embedding the SimAM attention mechanism at the end of the backbone network significantly enhances the network’s feature extraction capability, improving both the accuracy and robustness of target detection.
To verify the advantages of the parameter-free attention module SimAM, this study added a Deformable Attention Transformer (DAttention), Bi-Level Routing Attention (BiFormer), Mixed Local Channel Attention (MLCA), and Convolutional Block Attention Module (CBAM) to the same position in the YOLOv8 network. These four commonly used attention mechanisms were compared with SimAM in the experiments. The experimental results are shown in Table 5.
The experimental results in Table 5 show that introducing attention mechanisms at the end of the backbone network improves both Recall (R) and mAP@0.5, thereby enhancing the model’s ability to recognize defect features. Compared with the four mainstream attention mechanisms, SimAM achieved the largest improvement in FPS, indicating that it helps the model extract and utilize features more efficiently, reducing redundant computation during inference and thereby increasing inference speed. Although CBAM performed best in terms of mAP@0.5, its Recall and FPS were only average. In contrast, SimAM demonstrated superior overall performance across all metrics.

3.5.2. Result Visualization

To provide a more intuitive observation of the improved algorithm’s effectiveness in fabric defect detection, this paper selects several fabric images for comparison of the detection results. Figure 8 shows the visualization results of detecting textile defects using different models under the same environment. From Figure 8a,b, it can be seen that for the detection of Pilling defects, the improved GSL-YOLOv8 algorithm achieved a detection accuracy of 0.85, which is better than the original YOLOv8n model. In the detection of small target stains, GSL-YOLOv8n exhibited the best detection accuracy. When detecting elongated defects such as Pulling Out Yarn, Float, and Skips, although GSL-YOLOv8 misclassified some textile backgrounds as stain defects, resulting in false detections, it still achieved the highest detection accuracy overall. In summary, the GSL-YOLOv8n algorithm demonstrated superior recognition accuracy in detecting textile defects of varying sizes and shapes compared to other algorithms.

3.5.3. Comparison of Algorithms on Different Datasets

To validate the generalization capability of the GSL-YOLOv8n algorithm, a subset of 3,563 images from the Tianchi dataset was selected for comparative experiments against different algorithms. The fabric defect types include holes, stains, coarse weft, loose warp, fuzz, knots, and warping defects. The dataset was randomly divided into training, validation, and test sets in a ratio of 8:1:1. The experimental results are shown in Table 6.
As shown in Table 6, the YOLOv6 algorithm achieves the highest FPS at 303.1, but it suffers from high model complexity and a slower convergence speed. Compared to the other three algorithms, GSL-YOLOv8n performs best in terms of GFLOPs, Recall (R), and mAP@0.5. Additionally, the GFLOPs of the GSL-YOLOv8n algorithm are reduced by 58.0% compared to the original YOLOv8n model. This indicates that the GSL-YOLOv8n algorithm demonstrates superior performance on this dataset, offering better robustness and generalization capabilities.

3.5.4. Comparison Experiment of Different Algorithms

To validate the effectiveness of the proposed GSL-YOLOv8n algorithm compared to other classical algorithms, we use the YOLOv8n model as the baseline network model. The evaluation metrics include model size, GFLOPs, the number of model parameters, and mAP@0.5. We selected the classical two-stage model Faster R-CNN and the single-stage models SSD and YOLO for comparison with the proposed GSL-YOLOv8n algorithm on the enhanced dataset. The experimental results are shown in Table 7.
From the comparison results of different algorithms shown in Table 7, it is evident that the proposed GSL-YOLOv8n algorithm offers advantages in both detection accuracy and speed over Faster R-CNN. Among single-stage algorithms, SSD struggles with detecting small targets in fabric defects: its mAP@0.5 is the lowest, and both its model size and parameter count are relatively high. Among the different versions of the YOLO algorithm, YOLOv5s performs best in terms of mAP@0.5, but its large model size and parameter count make it less suitable for deployment on resource-constrained embedded devices. YOLOv6 demonstrates the best FPS performance. The proposed GSL-YOLOv8n algorithm excels in GFLOPs and parameter count, with a model size as low as 2.1 MB and an mAP@0.5 of 98.29%, making it fully capable of meeting the demands for fast and accurate detection in industrial production.
Table 8 presents the AP and mAP@0.5 detection results for six types of defects across eight different algorithms. It is evident that the SSD algorithm performs worse than the other models across the various fabric defect metrics. The improved GSL-YOLOv8n model shows an increase in AP values for most defects, especially for small defect targets such as Hole. Additionally, the GSL-YOLOv8n model exhibits a significant improvement in AP values for elongated defects such as Skips and Pulling Out Yarn.
In a comparison of Precision–Recall curves, Figure 9a shows a steeper curve compared to Figure 9b, indicating that the GSL-YOLOv8 model converges more quickly in detection tasks, achieving higher accuracy in the early stages of training. The Recall–Confidence curve in Figure 9c is smoother than that in Figure 9d and demonstrates better generalization in the low-confidence range. This suggests that GSL-YOLOv8 can maintain high detection accuracy, even as confidence decreases. In summary, the optimized model outperforms the original model in terms of overall performance.

3.6. Computational Complexity Analysis

To provide a more comprehensive analysis of the lightweight characteristics of the proposed GSL-YOLOv8n, we conduct a theoretical computational complexity assessment. For the original YOLOv8n model, the core convolution operations have a time complexity of $O(N^2 K^2 C_{in} C_{out})$, where $N$ is the input feature map size, $K$ is the kernel size, and $C_{in}$ and $C_{out}$ denote the input and output channel counts, respectively. The improved GSL-YOLOv8n incorporates Ghost convolutions, reducing redundancy and lowering the complexity to $O(N^2 K^2 C_{in} C_{out} / r)$, where $r$ is the compression ratio of the Ghost module.
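To make the ratio concrete, the following sketch evaluates both expressions for a single layer under assumed dimensions; as the GhostNet analysis predicts, the saving approaches the compression ratio $r$.

```python
# Worked example with assumed layer dimensions: a 40x40 feature map,
# 3x3 kernels, 128 input/output channels, Ghost ratio r = 2, and 5x5
# depthwise cheap operations.
N, K, C_in, C_out, r, d = 40, 3, 128, 128, 2, 5

standard = N**2 * K**2 * C_in * C_out  # O(N^2 K^2 C_in C_out)
# Ghost: the primary conv yields C_out/r channels; depthwise transforms
# generate the remaining (r - 1) * C_out / r channels.
ghost = N**2 * K**2 * C_in * (C_out // r) + N**2 * d**2 * (C_out // r) * (r - 1)

print(f"standard: {standard:,} multiply-accumulates")
print(f"ghost:    {ghost:,} multiply-accumulates (~{standard / ghost:.2f}x fewer)")
```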
Additionally, the lightweight shared convolution detection head (LSCDH) used in GSL-YOLOv8n further enhances computational efficiency. The LSCDH module performs multi-scale feature fusion with a time complexity of $O(N^2 C_{in} C_{out})$, significantly reducing repeated calculations compared to traditional detection heads. The integration of SimAM provides enhanced feature extraction with negligible computational overhead.
As indicated in the results, GSL-YOLOv8n maintains a near-equivalent FPS to YOLOv8n (238.9 vs. 239.6) while drastically reducing model size, parameter count, and computational load by 66.7%, 67.4%, and 58.0%, respectively. These improvements highlight the effectiveness of the proposed method in optimizing computational complexity while preserving high detection accuracy.

4. Conclusions

Based on the multi-scale characteristics of fabric defects, the dataset was preprocessed to enhance the model’s generalization ability. Building on the YOLOv8 model, we applied the concept of Ghost convolutions to improve the YOLOv8 backbone network. Specifically, we replaced the standard convolutions (Conv) in YOLOv8 with GhostConv and modified the C2f module to C2fGhost. This approach maintained model accuracy while reducing its complexity. Additionally, we integrated the parameter-free attention mechanism SimAM at the end of the backbone network, which enhanced the model’s ability to integrate multi-scale information.
The YOLOv8 detection head was upgraded to the newly designed LSCDH, which utilized the concept of shared convolutions to merge and adjust channel numbers of features. This combination of low-level detail and high-level semantic information boosts the model’s feature extraction capabilities while ensuring its lightweight nature, making it suitable for deployment on mobile devices. Ablation experiments validated the effectiveness of each module in our proposed GSL-YOLOv8n algorithm. Compared to the YOLOv8n algorithm, the improved GSL-YOLOv8n model reduced model size, computational complexity, and parameter count by 66.7%, 58.0%, and 67.4%, respectively. This resulted in a lightweight model with enhanced small target defect feature extraction and anti-interference capabilities while maintaining detection speed, achieving an mAP@0.5 of 98.29%.
In the future, we plan to integrate this model into resource-constrained embedded devices, exploring the application of lightweight models for fabric defect detection in environments with limited computing resources.

Author Contributions

Conceptualization, S.M. and Y.L.; methodology, S.M. and Y.L.; software, Y.L.; validation, Y.L.; formal analysis, Y.L.; investigation, Y.L.; resources, S.M.; data curation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, S.M.; visualization, Y.L.; supervision, S.M.; project administration, Y.Z.; funding acquisition, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Program of the National Natural Science Foundation of China (62103309).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lu, H.; Chen, Y. Surface defect detection method of carbon fiber prepreg based on machine vision. J. Text. Res. 2020, 41, 51–57. [Google Scholar]
  2. Selvi, S.S.T.; Nasira, G. An effective automatic fabric defect detection system using digital image processing. J. Environ. Nanotechnol. 2017, 6, 79–85. [Google Scholar]
  3. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. 28th Int. Conf. Neural Inf. Process 2015, 28, 291–299. [Google Scholar] [CrossRef] [PubMed]
  4. Kim, J.H.; Kim, N.; Won, C.S. High-speed drone detection based on YOLOv8. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar]
  5. Qi, J.; Nguyen, M.; Yan, W. Waste classification from digital images using ConvNeXt. In Proceedings of the Pacific-Rim Symposium on Image and Video Technology, Virtual Event, 12–14 November 2022; pp. 1–13. [Google Scholar]
  6. Sun, X.; Gao, X.; Cao, G. Fabric defect detection algorithm based on improved Faster R-CNN. Wool Text. Sci. Technol. 2022, 50, 77–84. [Google Scholar] [CrossRef]
  7. Xie, H.S.; Zhang, Y.F.; Wu, Z.S. An Improved Fabric Defect Detection Method Based on SSD. AATCC J. Res. 2021, 8 (Suppl. 1), 182–191. [Google Scholar] [CrossRef]
  8. Guo, Y.B.; Kang, X.J.; Li, J.F.; Yang, Y.X. Automatic Fabric Defect Detection Method Using AC-YOLOv5. Electronics 2023, 12, 2950. [Google Scholar] [CrossRef]
  9. Fan, H.; Zhu, D.; Li, Y. An improved yolov5 marine biological object detection algorithm. In Proceedings of the 2021 2nd International Conference on Artificial Intelligence and Computer Engineering (ICAICE), Hangzhou, China, 5–7 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 29–34. [Google Scholar]
  10. Jing, J.F.; Zhuo, D.; Zhang, H.H.; Liang, Y.; Zheng, M. Fabric defect detection using the improved YOLOv3 model. J. Eng. Fibers Fabr. 2020, 15, 1558925020908268. [Google Scholar] [CrossRef]
  11. Kang, X.J. Research on fabric defect detection method based on lightweight network. J. Eng. Fibers Fabr. 2024, 19, 15589250241232153. [Google Scholar] [CrossRef]
  12. Liu, B.B.; Wang, H.Y.; Cao, Z.F.; Wang, Y.; Tao, L.; Yang, J.; Zhang, K. PRC-Light YOLO: An Efficient Lightweight Model for Fabric Defect Detection. Appl. Sci. 2024, 14, 938. [Google Scholar] [CrossRef]
  13. Wang, X.; Xu, Y.; Zhou, J.; Chen, J. Saffron picking recognition in complex environments based on improved YOLOv7. Trans. Chin. Soc. Agric. Eng. 2023, 39, 169–176. [Google Scholar]
  14. Li, X.; Wang, W.; Wu, L.; Chen, S.; Hu, X.; Li, J.; Tang, J.; Yang, J. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. Adv. Neural Inf. Process. Syst. 2020, 33, 21002–21012. [Google Scholar]
  15. Wang, T.; Chen, Q.; Lang, X.; Xie, L.; Li, P.; Su, H. Detection of Oscillations in Process Control Loops from Visual Image Space Using Deep Convolutional Networks. IEEE/CAA J. Autom. Sin. 2024, 11, 982–999. [Google Scholar] [CrossRef]
  16. Xu, Q.; Wei, Y.; Gao, J.; Yao, H.; Liu, Q. ICAPD framework and simAM-YOLOv8n for student cognitive engagement detection in classroom. IEEE Access 2023, 11, 136063–136076. [Google Scholar] [CrossRef]
  17. Fu, H.; Song, G.; Wang, Y. Improved YOLOv4 marine target detection combined with CBAM. Symmetry 2021, 13, 623. [Google Scholar] [CrossRef]
  18. Yi, D.; Ahmedov, B.H.; Jiang, S.; Li, Y.; Flinn, S.J.; Fernandes, P.G. Coordinate-Aware Mask R-CNN with Group Normalization: A underwater marine animal instance segmentation framework. Neurocomputing 2024, 583, 127488. [Google Scholar] [CrossRef]
  19. Yu, X.; Lyu, W.; Zhou, D.; Wang, C.; Xu, W. ES-Net: Efficient Scale-Aware Network for Tiny Defect Detection. IEEE Trans. Instrum. Meas. 2022, 71, 1–14. [Google Scholar] [CrossRef]
  20. Salazar, A.; Vergara, L.; Vidal, E. A proxy learning curve for the Bayes classifier. Pattern Recognit. 2023, 136, 109240. [Google Scholar] [CrossRef]
Figure 1. GSL-YOLOv8 model.
Figure 2. Core design of GhostNet.
Figure 3. SimAM structure.
Figure 4. Overall structure of the LSCDH detection head.
Figure 5. Fabric defect types.
Figure 6. Confusion matrices. The left matrix corresponds to the dataset without augmentation, and the right matrix to the augmented dataset.
Figure 7. Performance comparison. (a) Training curves of GSL-YOLOv8 and YOLOv8 in mAP@0.5; (b) training curves in Precision; (c) training curves in Recall; (d) training curves in loss.
Figure 8. Detection results of the comparison experiment. (a) Pilling; (b) Stains; (c) Pulling Out; (d) Float; (e) Skips.
Figure 9. (a) Precision–Recall curves of GSL-YOLOv8; (b) Precision–Recall curves of YOLOv8; (c) Recall–Confidence curves of GSL-YOLOv8; (d) Recall–Confidence curves of YOLOv8.
Table 1. Training parameter settings.

| Parameter | Configuration | Parameter | Configuration |
|---|---|---|---|
| optimizer | SGD | weight_decay | 0.0005 |
| epochs | 250 | close_mosaic | 10 |
| workers | 8 | batch | 16 |
| lr | 0.01 | momentum | 0.9 |
| imgsz | 640 | | |
Table 2. Comparative experiments of different models before and after data augmentation.

| Model | Data Augmentation | R (%) | P (%) | mAP@0.5 (%) |
|---|---|---|---|---|
| YOLOv8n | No | 89.90 | 92.90 | 94.40 |
| YOLOv8n | Yes | 94.80 | 96.60 | 97.70 |
| GSL-YOLOv8 | No | 87.48 | 93.14 | 92.87 |
| GSL-YOLOv8 | Yes | 96.34 | 97.03 | 98.29 |
Table 3. Ablation experiment results.

| No. | Ghost | SimAM | LSCDH | Model Size/MB | GFLOPs/G | Parameters | FPS/(Frame·s−1) | mAP@0.5/% |
|---|---|---|---|---|---|---|---|---|
| 1 | | | | 6.3 | 8.1 | 3,006,818 | 239.6 | 97.70 |
| 2 | √ | | | 3.8 | 5.0 | 1,715,246 | 236.2 | 98.15 |
| 3 | | √ | | 6.3 | 8.1 | 3,006,818 | 301.5 | 98.16 |
| 4 | | | √ | 5.0 | 6.5 | 2,362,713 | 299.4 | 98.02 |
| 5 | √ | √ | | 3.7 | 5.0 | 1,687,566 | 244.9 | 97.99 |
| 6 | √ | | √ | 2.5 | 3.5 | 1,071,141 | 226.9 | 97.75 |
| 7 | | √ | √ | 4.0 | 6.1 | 1,878,617 | 299.2 | 97.99 |
| 8 | √ | √ | √ | 2.1 | 3.4 | 981,381 | 238.9 | 98.29 |
Table 4. Comparative experiments of SimAM embedded at different positions (AP/% per defect type).

| Model | Float | Skips | Pilling | Hole | Pulling Out Yarn | Stain | mAP@0.5/% |
|---|---|---|---|---|---|---|---|
| YOLOv8n+backbone | 98.73 | 99.33 | 99.36 | 99.50 | 98.60 | 93.42 | 98.16 |
| YOLOv8n+neck | 98.95 | 98.52 | 99.50 | 96.60 | 97.66 | 93.21 | 97.41 |
| YOLOv8n | 99.30 | 98.80 | 99.30 | 96.70 | 97.00 | 94.90 | 97.70 |
Table 5. Comparative experiments of different attention mechanisms.

| Model | GFLOPs/G | R/% | FPS/(Frame·s−1) | mAP@0.5/% |
|---|---|---|---|---|
| YOLOv8 (base) | 8.1 | 94.80 | 239.6 | 97.70 |
| YOLOv8+DAttention | 8.3 | 95.27 | 227.0 | 98.04 |
| YOLOv8+BiFormer | 8.4 | 95.99 | 260.7 | 97.97 |
| YOLOv8+MLCA | 8.1 | 95.71 | 288.1 | 97.89 |
| YOLOv8+CBAM | 8.1 | 95.12 | 280.3 | 98.45 |
| YOLOv8+SimAM | 8.1 | 95.31 | 301.5 | 98.16 |
Table 6. Comparison of algorithms on different datasets.

| Model | GFLOPs/G | R/% | FPS/(Frame·s−1) | mAP@0.5/% |
|---|---|---|---|---|
| YOLOv5n | 7.1 | 84.26 | 288.6 | 86.75 |
| YOLOv6 | 11.8 | 82.75 | 303.1 | 87.12 |
| YOLOv8n | 8.1 | 82.34 | 300.4 | 88.47 |
| GSL-YOLOv8n | 3.4 | 84.74 | 217.5 | 88.94 |
Table 7. Comparative experiments of different algorithms.

| Model | Model Size/MB | GFLOPs/G | Parameters | FPS/(Frame·s−1) | mAP@0.5/% |
|---|---|---|---|---|---|
| SSD | 100.3 | 62.7 | 26,285,486 | 184.12 | 88.21 |
| Faster R-CNN | 159.3 | 134.4 | 41,755,286 | 25.47 | 97.13 |
| YOLOv5n | 5.3 | 7.1 | 2,504,114 | 286.5 | 96.51 |
| YOLOv5s | 18.5 | 23.8 | 9,113,858 | 206.7 | 98.34 |
| YOLOv6 | 8.7 | 8.7 | 4,234,338 | 336.6 | 96.99 |
| YOLOv7-Tiny | 74.9 | 105.2 | 37,221,635 | 68.8 | 93.60 |
| YOLOv8n | 6.3 | 8.1 | 3,006,818 | 239.6 | 97.70 |
| GSL-YOLOv8n | 2.1 | 3.4 | 981,381 | 238.9 | 98.29 |
Table 8. Comparative experiments of AP values of different algorithms (AP/% per defect type).

| Model | Float | Skips | Pilling | Hole | Pulling Out Yarn | Stains | mAP@0.5/% |
|---|---|---|---|---|---|---|---|
| SSD | 83.82 | 92.84 | 97.73 | 95.00 | 82.69 | 77.18 | 88.21 |
| Faster R-CNN | 98.39 | 97.62 | 99.44 | 96.55 | 98.62 | 92.17 | 97.13 |
| YOLOv5n | 99.00 | 97.47 | 98.70 | 94.86 | 97.55 | 91.48 | 96.51 |
| YOLOv5s | 99.32 | 98.77 | 99.44 | 99.50 | 98.53 | 94.51 | 98.34 |
| YOLOv6 | 99.06 | 97.67 | 99.44 | 98.81 | 97.16 | 89.82 | 96.99 |
| YOLOv7-Tiny | 92.50 | 96.10 | 97.80 | 93.30 | 92.90 | 89.00 | 93.60 |
| YOLOv8n | 99.30 | 98.80 | 99.30 | 96.70 | 97.00 | 94.90 | 97.70 |
| GSL-YOLOv8n | 99.13 | 99.28 | 99.50 | 99.31 | 98.12 | 94.41 | 98.29 |


