Article

Research on Fabric Defect Detection Algorithm Based on Improved YOLOv8n Algorithm

1 Hubei Digital Textile Equipment Key Laboratory, Wuhan Textile University, Wuhan 430073, China
2 The Advanced Textile Technology Innovation Center, Jianhu Laboratory, Shaoxing 312000, China
3 School of Mechanical & Electrical Engineering, Zhongyuan University of Technology, Zhengzhou 450007, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(11), 2009; https://doi.org/10.3390/electronics13112009
Submission received: 9 April 2024 / Revised: 9 May 2024 / Accepted: 14 May 2024 / Published: 21 May 2024

Abstract

In the process of fabric production, various types of defects affect fabric quality. However, because fabric defects are highly varied, fabric textures are complex, and small target defects are easily concealed, current fabric defect detection algorithms suffer from slow detection speeds, low detection accuracy, and low recognition rates for small target defects. Developing an efficient and accurate fabric defect detection system has therefore become an urgent problem in the textile industry. To address these issues, this paper proposes the improved YOLOv8n-LAW algorithm, based on the YOLOv8n algorithm. First, LSKNet attention mechanisms are added to both ends of the C2f module in the backbone network to provide a broader context area, enhancing the algorithm’s feature extraction capability. Next, the PAN-FPN structure of the neck is replaced by the AFPN structure, so that defect features at different levels are brought progressively closer in semantic information during fusion. Finally, the CIoU loss is replaced with the WIoU v3 loss, allowing the model to dynamically adjust gradient gains based on the features of fabric defects and to focus effectively on distinguishing defective from non-defective regions. The experimental results show that the improved YOLOv8n-LAW algorithm achieved an accuracy of 97.4% and a detection speed of 46 frames per second, while effectively increasing the recognition rate of small target defects.

1. Introduction

As is widely recognized, the textile industry is a typical labor-intensive industry. With the rapid advancement of science and technology, automated machinery has replaced traditional manual labor as the primary means of production in the textile industry [1]. During the fabric production process, defects may arise on the fabric surface due to poor-quality raw materials, yarn friction during weaving, production equipment failures, operator negligence, environmental factors, and other causes. Defects in textile products compromise not only their aesthetics but also their selling prices and, in severe cases, the viability of an enterprise. As people’s quality of life continuously improves, the demand for textile quality correspondingly increases. A survey indicates that the selling price of defective textile products may decrease by more than 45% [2]. Accurate identification of fabric defects has therefore become a key step in the textile industry that cannot be ignored [3,4].
Because defects vary widely in shape and type, manual inspection is susceptible to subjective judgment and is highly unstable, and it also poses significant risks to inspectors’ eyesight. In recent years, advancements in artificial intelligence have produced fruitful outcomes in target detection within industrial quality inspection scenarios. The textile industry urgently needs this technology to enable intelligent defect detection, reduce reliance on extensive manual labor, minimize missed and false detections, and enhance product quality [5,6].

Related Works

Wei et al. proposed an improved Faster Region-based Convolutional Neural Network (Faster R-CNN) method to detect fabric defects, covering defect types including broken warp, lint, impurities, broken yarn, and oil stains. Although this method achieved good detection results, the Faster R-CNN model suffers from a slow detection speed and high training costs [7]. Chen et al. optimized Gabor filters and integrated them into the Faster R-CNN algorithm, but the detection efficiency was low and could not meet real-time requirements [8]. Li et al. improved the Cascade R-CNN algorithm to obtain more comprehensive multi-scale feature information on fabric blemishes, but the method could not meet the demands of real-time detection [9]. Huang et al. used the SSD algorithm with MobileNet as the backbone to detect fabric defects, but the detection performance was poor on small target defects [10]. Han et al. applied an improved feature fusion SSD algorithm for fabric defect detection, but the detection accuracy was low and the bounding box positions were inaccurate [11]. Zhao et al. introduced a new SPP structure and improvement strategies such as SoftPool into the YOLOv3 object detection algorithm, which improved detection accuracy but did not significantly increase detection speed [12]. Gai et al. improved the YOLOv4 object detection algorithm by enhancing feature extraction and deepening the network structure, increasing detection accuracy, but the detection speed needs further improvement [13]. Li et al. improved the recognition rate of small fabric defect targets in YOLOv5 by introducing the CBAM attention mechanism, though at a possible cost to detection efficiency [14]. Wang et al. incorporated adaptive spatial feature fusion (ASFF) into YOLOv5, which can mitigate the adverse effects of multi-scale fusion, but the average precision remains low [15]. Zhou et al. conducted a preliminary exploration of object detection based on the YOLOv5s algorithm, but the detection speed needs further improvement [16]. With the continuous development of the YOLO family, the YOLOv8 model offers clear improvements over previous versions: it reduces network parameters while improving object detection accuracy to a certain extent, giving it promising application prospects in fabric flaw identification. Zhang et al. conducted preliminary research based on the YOLOv8 algorithm and improved the feature extraction ability for fabric defect points by adding a coordinate attention mechanism, but the detection accuracy for small target defects still needs further improvement [17].
In summary, the current fabric defect detection methods based on deep learning mainly suffer from insufficient recognition accuracies of small target fabric defects, low precisions in defect localization, and relatively low detection speeds. In response to the above issues, this paper proposes the YOLOv8n-LAW algorithm based on the improved YOLOv8n algorithm for fabric defect detection. The aim is to effectively focus on distinguishing between defective and non-defective areas of the fabric, better capturing the characteristics of various defects in the fabric, reducing false positives and false negatives, and improving the detection rate of small targets.
The main contributions of this paper are as follows:
  • Compared to mainstream target detection algorithms, the improved algorithm presented in this paper achieves a higher detection accuracy and a smaller model size.
  • The integration of an attention module enhances the precise localization of fabric defects and improves the algorithm’s consistency across defects of various sizes and shapes.
  • The improved feature fusion structure effectively integrates multi-scale features, optimizes the processing of small-scale features, and enhances the model’s ability to detect defects of various sizes.
  • The adoption of a new type of loss function dynamically adjusts the gradient gain based on the characteristics of fabric defects, thereby better capturing the diversity of defects and improving the detection rate of small targets.
The first part of the article outlines the fundamentals of the textile industry and emphasizes the importance of high-quality fabrics, followed by a detailed analysis of the limitations of current detection methods. In the second part, three significant improvements are proposed to address the existing challenges in defect detection. In the third part, experiments and analyses are conducted after establishing a dataset and configuring the environment. In the fourth part, the paper is summarized, and future research directions are proposed.

2. Improvement of Fabric Defect Detection Algorithm for Small Targets Based on YOLOv8n

2.1. YOLOv8n Algorithm Structure

The YOLOv8 model was released in January 2023 by Ultralytics, the company that developed YOLOv5. Depending on the depth and width of the network, the model comes in five versions: YOLOv8n (nano), YOLOv8s (small), YOLOv8m (medium), YOLOv8l (large), and YOLOv8x (extra large), with YOLOv8n being the fastest and smallest [18]. Compared with current two-stage target detection algorithms, YOLOv8n may be slightly inferior in accuracy, but it is considerably faster and has a simple, convenient network structure. This paper therefore uses YOLOv8n as the base network model for fabric defect detection; its structure is shown in Figure 1.

2.2. Disadvantages and Improvement Strategies of the YOLOv8n Algorithm

2.2.1. Disadvantages of the YOLOv8n Algorithm

Although YOLOv8n shows excellent performance in image recognition and target detection tasks, with significant improvements in accuracy and speed compared to other YOLO versions, it still suffers from the following shortcomings in real-world fabric defect detection:
  • In fabric blemish detection, for different scales and types of fabric blemishes, YOLOv8n is unable to dynamically adjust the scope of its attention to the contextual information, thus omitting some of the key features, and failing to identify the blemishes more accurately in a variety of complex situations;
  • The YOLOv8n algorithm uses a path aggregation network with a feature pyramid network (PAN-FPN) structure for multi-scale feature extraction, which may be affected by single-scale feature extraction limitations, the loss of low-level feature information, incomplete information transfer, and incomplete feature representations, thus affecting the algorithm’s ability to accurately locate and capture the details of blemishes;
  • The CIoU loss function of the YOLOv8n algorithm does not take into account the difficulty balance of the samples and involves the normalization of the diagonal length and the penalty term for the aspect ratio, which increases the computational effort of the training process.

2.2.2. The YOLOv8n Algorithm Improvement Strategy

Sharshar et al. incorporated the LSKNet attention mechanism into the backbone network to improve the detection accuracy of small and medium targets in aerial image analysis [19]. This study adopts that concept, using the LSKNet attention mechanism to design the C2f-LSKNet module, which incorporates LSKNet into both ends of the C2f module, thereby broadening the context regions to enhance the model’s feature extraction capabilities. Liu et al. integrated the AFPN module into the YOLOv5 algorithm, significantly enhancing its detection accuracy, especially for passion fruits containing small defects, in their quality classification work [20]. In this study, the original PAN-FPN module in YOLOv8n is replaced by AFPN, adopting a progressive fusion approach that fuses low-level features first, then mid-level, and finally top-level features. This narrows the semantic gaps between different feature levels, yielding more accurate and comprehensive feature integration. Han et al. replaced the loss function in the YOLOv5s algorithm with the WIoU loss function, enhancing both the convergence speed and the regression accuracy [21]. In this study, the WIoU loss function is introduced to sharpen the distinction between fabric defects and non-defective regions, considering the quality of the anchor boxes and dynamically adjusting the gradient gain based on the characteristics of the fabric defects.

2.3. The Improved Algorithm, YOLOv8n-LAW

This paper integrates the YOLOv8n algorithm with the LSKNet attention mechanism, introduces the AFPN module, and incorporates the WIoU loss function, naming it the YOLOv8n-LAW algorithm. Figure 2 illustrates the algorithm’s structure. The pink box labeled C2F_Attention represents the precise location where the LSKNet attention mechanism is fused, while the red boxes labeled ASFF_3 and ASFF_2 indicate the positions of the AFPN module. The blue portion within the Detect box represents the position of the replaced loss function, which consists of the Bbox Loss (WIoU + DFL). The three enhancements each target a different stage of the fabric defect detection algorithm. The LSKNet attention mechanism boosts the algorithm’s feature extraction capabilities. The AFPN module improves the model’s feature fusion capabilities. The WIoU loss function accelerates detection and enhances accuracy. Together, these enhancements significantly improve the overall performance of the fabric defect detection algorithm.

2.3.1. The Integration of the LSKNet Attention Module

In fabric defect detection, most defects are small and difficult to identify from appearance alone, so they are often overlooked. To identify defects of varying sizes more accurately, this paper introduces the LSKNet attention mechanism into the YOLOv8n algorithm. Proposed by Li et al. in 2023, LSKNet consists of large kernel selection (LK Selection) sub-blocks and feed-forward network (FFN) sub-blocks [22]. Its functions are as follows:
  • A series of large kernel convolutions with increasing kernel sizes and dilation rates to expand the receptive field, facilitating the extraction of features from various contextual regions;
  • A dynamic selection of convolutional kernels suitable for different defects based on the extracted features. The overall structure of the LSK module is illustrated in Figure 3.
In this paper, the LSKNet attention mechanism is added to the input and output sides of the C2f module of the backbone network of the YOLOv8n algorithm, respectively, and the new module incorporating the LSKNet attention mechanism is called the C2f_Attention module, the structure of which is shown in Figure 4.
Compared to the plain C2f module, the input features first pass through the initial Conv1 layer for channel adjustment and then enter an LSKNet module; on the output side, the features pass through another LSKNet module before the Conv2 layer completes the final output. The C2f_Attention module adapts to feature extraction at various scales by incorporating the LSKNet attention mechanism and using multi-sized convolutional kernels to dynamically adjust the receptive field. This allows the algorithm to capture defects of various sizes more finely, particularly small-sized blemishes on complex-textured fabrics, thereby enhancing detection accuracy and sensitivity.
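To make the structure concrete, the following is a minimal PyTorch sketch of the idea: a simplified large-kernel selection block (condensed from Li et al.’s LSKNet design [22]) wrapped around an inner module on both sides, mirroring the C2f_Attention layout in Figure 4. The class names, channel sizes, and the use of nn.Identity as a stand-in for the real C2f block are illustrative assumptions, not the paper’s exact implementation.

```python
import torch
import torch.nn as nn

class LSKBlock(nn.Module):
    # Simplified large-kernel selection: two depthwise convs with growing
    # receptive fields, fused by a spatial mask learned from avg/max statistics.
    def __init__(self, dim):
        super().__init__()
        self.conv_small = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        self.conv_large = nn.Conv2d(dim, dim, 7, padding=9, groups=dim, dilation=3)
        self.select = nn.Conv2d(2, 2, 7, padding=3)   # per-pixel branch weights
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        a = self.conv_small(x)                        # local context branch
        b = self.conv_large(a)                        # enlarged context branch
        s = torch.cat([(a + b).mean(1, True), (a + b).amax(1, True)], dim=1)
        w = torch.sigmoid(self.select(s))             # weight the two branches
        fused = a * w[:, :1] + b * w[:, 1:]           # dynamic kernel selection
        return x * self.proj(fused)                   # modulate the input features

class C2fAttention(nn.Module):
    # Hypothetical wrapper mirroring the paper's C2f_Attention: LSK-style
    # attention applied before and after an inner C2f-like module.
    def __init__(self, inner: nn.Module, dim: int):
        super().__init__()
        self.pre, self.inner, self.post = LSKBlock(dim), inner, LSKBlock(dim)

    def forward(self, x):
        return self.post(self.inner(self.pre(x)))

x = torch.randn(1, 64, 80, 80)
block = C2fAttention(nn.Identity(), 64)   # nn.Identity stands in for the real C2f
print(block(x).shape)                     # torch.Size([1, 64, 80, 80])
```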

2.3.2. The Introduction of the AFPN Module

In target detection, feature extraction is the process of learning important features in an image so that the model can accurately distinguish between different classes of targets and pinpoint their locations. The PAN-FPN structure employed by the YOLOv8n algorithm enhances the information flow through both bottom-up and top-down pathways, processing and fusing all types of features uniformly. This uniform processing approach can lead to information loss, particularly with small target blemishes, where details in small-scale features are diminished through repetitive upscaling and downscaling, thereby compromising the detection of minor imperfections.
For this reason, this paper uses AFPN to let the YOLOv8n algorithm obtain more accurate and useful feature information in fabric blemish detection [23]. AFPN first extracts features at multiple levels from the backbone network and combines the last-stage features of each level into a set of features at different scales. These features are then added one by one to the feature pyramid network for fusion, generating a set of multi-scale feature maps as output. The architecture is shown in Figure 5 [23].
During the feature extraction process, AFPN utilizes 1 × 1 convolutions and bilinear interpolation to upsample features, employs 2 × 2 convolutions for 2× downsampling, 4 × 4 convolutions for 4× downsampling, and ultimately completes feature extraction. In the feature fusion process, the ASFF module is used to weight different level features. The features of the different layers are effectively fused, and the interfering information is reduced.
In this paper, the AFPN architecture is integrated into the YOLOv8n fabric defect detection algorithm. Featuring an adaptive fusion module, this approach dynamically adjusts the fusion strategy based on the content and importance of different feature layers. This strategy efficiently integrates multi-scale features, particularly excelling in optimizing the processing of small-scale features. It enhances the model’s detection capabilities, enabling the more accurate location and identification of fabric defects across various sizes. Additionally, the module reduces false positives and missed detections, making it highly suitable for practical fabric defect detection scenarios.
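As a concrete illustration of the weighted fusion step, here is a minimal PyTorch sketch of an ASFF-style module under simplifying assumptions: features are first resized to a common resolution, each level contributes per-pixel fusion weights through a 1 × 1 convolution, and a softmax normalizes the weights before the sum. The actual AFPN implementation [23] differs in detail (e.g., its specific 2 × 2 and 4 × 4 downsampling convolutions).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFF(nn.Module):
    # Minimal adaptive spatial feature fusion: each level proposes per-pixel
    # weights via a 1x1 conv; a softmax normalizes them before the weighted sum.
    def __init__(self, dim, n_levels=3):
        super().__init__()
        self.weight_convs = nn.ModuleList(nn.Conv2d(dim, 1, 1) for _ in range(n_levels))
        self.fuse = nn.Conv2d(dim, dim, 3, padding=1)

    def forward(self, feats):
        # feats: maps already resized to a common (H, W), each (N, dim, H, W)
        logits = torch.cat([conv(f) for conv, f in zip(self.weight_convs, feats)], dim=1)
        w = torch.softmax(logits, dim=1)                         # (N, n_levels, H, W)
        fused = sum(w[:, i:i + 1] * f for i, f in enumerate(feats))
        return self.fuse(fused)

# Progressive AFPN-style step: align a coarser level to the finer one, then fuse.
low, mid = torch.randn(1, 128, 80, 80), torch.randn(1, 128, 40, 40)
mid_up = F.interpolate(mid, size=low.shape[2:], mode="bilinear", align_corners=False)
print(ASFF(128, n_levels=2)([low, mid_up]).shape)   # torch.Size([1, 128, 80, 80])
```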

2.3.3. Fusion of the WIoU Loss Function

The loss function is an important metric used to evaluate the accuracy of model predictions. The loss function of the YOLOv8n algorithm consists of the DFL loss and the CIoU loss, where CIoU is an IoU variant that addresses the stalled optimization that occurs when the target overlap area is zero and penalizes aspect ratios to improve the robustness of bounding boxes [24]. However, given the variability in the size and shape of fabric blemishes, the CIoU loss function does not specifically emphasize size adjustments. Small blemishes receive less weight in the calculation of bounding box overlaps, making the loss inadequate for objects with minimal overlap and ineffective for detecting small target blemishes. Moreover, the CIoU loss function’s reliance on extensive geometric calculations makes it computationally inefficient, particularly on large-scale datasets and within complex network architectures. To address these problems, this paper introduces the WIoU v3 loss function, which takes into account the aspect ratio, centroid distance, and overlap area and introduces a dynamic non-monotonic focusing mechanism, providing an intelligent gradient gain assignment strategy for fabric blemish detection [25].
WIoU has three versions: WIoU v1, WIoU v2, and WIoU v3. WIoU v1 constructs a two-layer attention mechanism based on distance variables, reducing the penalty of geometric measurements when dealing with overlapping boxes and thus improving the model’s generalization across images of different qualities. WIoU v2 adds a monotonic focusing coefficient on top of WIoU v1 to enhance gradient gains and improve classification performance. WIoU v3 introduces the concept of outlierness (denoted $\beta$) to measure the quality of anchor boxes, defined as:

$$\beta = \frac{\mathcal{L}_{IoU}^{*}}{\overline{\mathcal{L}_{IoU}}} \in [0, +\infty),$$

where $\mathcal{L}_{IoU}^{*}$ is the monotonic focusing coefficient and $\overline{\mathcal{L}_{IoU}}$ is the mean value of $\mathcal{L}_{IoU}$. The quality classification criterion for anchor boxes can thus be adjusted dynamically: a smaller $\beta$ indicates a higher-quality anchor box, and a larger $\beta$ a lower-quality one. During gradient assignment, high-quality anchor boxes are assigned small gradient gains, while low-quality anchor boxes are assigned even smaller gradient gains to prevent harmful gradients. The loss is computed as follows:
$$\mathcal{L}_{WIoU\,v3} = r \cdot \mathcal{L}_{WIoU\,v1}, \qquad r = \frac{\beta}{\delta \, \alpha^{\beta - \delta}},$$

where $\alpha$ and $\delta$ are hyperparameters used to fit different models and $r$ is the non-monotonic focusing coefficient, so that WIoU v3 assigns the appropriate gradient gain to each anchor box at each moment of training.
In this paper, the WIoU v3 loss function replaces the original CIoU loss function in YOLOv8n. In fabric blemish detection, this function dynamically adjusts the gradient gain by taking into account the quality of the anchor frame and the specific characteristics of the fabric defects. Moreover, it introduces a weighting factor to account for the varying sizes and shapes of defects, particularly in cases of overlap. This approach yields a superior gradient response, enhancing the boundary localization accuracy for fabric defects of diverse sizes and shapes. Furthermore, it effectively distinguishes between defective and non-defective regions, not only capturing the characteristics of various fabric defects more effectively but also reducing false detections and omissions, thereby improving the detection rate of small targets.
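To ground the formulas above, below is a minimal sketch of the WIoU v3 computation for axis-aligned boxes, assuming (x1, y1, x2, y2) coordinates and a caller-maintained running mean of the IoU loss. The alpha and delta values are illustrative placeholders, and the detach() calls keep the focusing terms out of gradient propagation as in the WIoU formulation; this is not the exact integration into YOLOv8n’s loss head.

```python
import torch

def wiou_v3_loss(pred, target, iou_mean, alpha=1.9, delta=3.0):
    # pred, target: (N, 4) boxes as (x1, y1, x2, y2);
    # iou_mean: running mean of the IoU loss, maintained by the training loop.
    x1 = torch.maximum(pred[:, 0], target[:, 0])
    y1 = torch.maximum(pred[:, 1], target[:, 1])
    x2 = torch.minimum(pred[:, 2], target[:, 2])
    y2 = torch.minimum(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou_loss = 1 - inter / (area_p + area_t - inter + 1e-7)

    # WIoU v1: distance-based attention over the smallest enclosing box (detached).
    cx_p, cy_p = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx_t, cy_t = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    wg = torch.maximum(pred[:, 2], target[:, 2]) - torch.minimum(pred[:, 0], target[:, 0])
    hg = torch.maximum(pred[:, 3], target[:, 3]) - torch.minimum(pred[:, 1], target[:, 1])
    r_wiou = torch.exp(((cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2)
                       / (wg ** 2 + hg ** 2 + 1e-7).detach())
    v1 = r_wiou * iou_loss

    # WIoU v3: outlierness beta gates the non-monotonic gain r = beta / (delta * alpha**(beta - delta)).
    beta = iou_loss.detach() / (iou_mean + 1e-7)
    r = beta / (delta * alpha ** (beta - delta))
    return (r * v1).mean()

pred = torch.tensor([[10., 10., 50., 50.]], requires_grad=True)
target = torch.tensor([[12., 12., 48., 52.]])
print(wiou_v3_loss(pred, target, iou_mean=0.3))
```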

3. Experimental Results and the Analysis of the Improved YOLOv8n-LAW Algorithm

3.1. Experimental Environment Configuration and Procedure

To provide a reliable operating environment, this study builds the deep learning model with the PyTorch framework on Windows 10. The experimental environment is detailed in Table 1.
The experimental steps are as follows:
  • Preprocess the dataset;
  • Determine the model’s performance evaluation metrics;
  • Adjust the model parameters and train the improved YOLOv8n algorithm;
  • Use the trained model to detect defects in fabric images.

3.2. Model Parameters and Adjustments

During model training, pre-trained YOLOv8n.pt weights are loaded to accelerate the convergence of the model on the current task with the help of already-learned features and knowledge. The parameters of the entire model after completing the network initialization are shown in Table 2 below:
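For reference, a training run with these settings can be sketched with the Ultralytics API as follows. The dataset configuration file name fabric.yaml is a placeholder for a config pointing at the annotated fabric images; the remaining arguments mirror Table 2 and the 512 × 512 image size of the dataset.

```python
from ultralytics import YOLO

# Load the pretrained YOLOv8n weights, then fine-tune with the Table 2 settings.
model = YOLO("yolov8n.pt")
model.train(
    data="fabric.yaml",        # placeholder dataset config
    epochs=300, batch=40,
    optimizer="SGD", lr0=0.01, momentum=0.937, weight_decay=0.0005,
    imgsz=512,                 # matches the 512 x 512 dataset images
)
```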
In the YOLOv8n-LAW algorithm, the dataset was trained, resulting in the box_loss change curve (Figure 6) used to measure the loss function for bounding box predictions, the cls_loss change curve (Figure 7) used to measure the classification accuracy of the model for target categories, and the dfl_loss change curve (Figure 8) used to measure the regression loss function for predicting target positions.
From Figure 6, Figure 7 and Figure 8, it can be observed that the box_loss, cls_loss, and dfl_loss of YOLOv8n-LAW exhibit a consistent downward trend as the number of training epochs increases. The loss curves begin to flatten and stabilize before the epoch approaches 300. Therefore, the iteration number in this paper is set to 300. The comparison of the training and validation loss curves shows that both are decreasing and stabilizing with a minimal difference between them. This pattern suggests effective model generalization to the validation set, indicating minimal or no overfitting.

3.3. Experimental Evaluation Indicators

Experimental models need to be evaluated on the validation and test sets of the dataset using different metrics. In this paper, the performance of the improved YOLOv8n algorithm is evaluated using an IoU threshold of 0.5 and a target category confidence threshold of 0.5. The evaluation covers five dimensions: precision (P), recall (R), F1-measure (F1), mean average precision (mAP), and frames per second (FPS).
The P is used to determine how accurately the model predicts the data. It is calculated as follows:
$$P = \frac{N_{TP}}{N_{TP} + N_{FP}} \times 100\%,$$
The R is used to evaluate the ability of the algorithm to recall a target. It is calculated as follows:
$$R = \frac{N_{TP}}{N_{TP} + N_{FN}} \times 100\%,$$
The F1 is used to evaluate recall and precision. The formula for calculating F1 is as follows:
$$F1 = \frac{2 \times P \times R}{P + R} \times 100\%,$$
The mAP measures the accuracy of the system in detecting a target and is a commonly used metric in target detection evaluation, which is calculated as follows:
$$mAP = \frac{\sum_{i=1}^{n} AP_i}{n} \times 100\%,$$
where $N_{TP}$ is the number of correctly detected positive cases, $N_{FP}$ is the number of false positive detections, $N_{FN}$ is the number of positive cases the algorithm missed, and $AP$ is the detection accuracy of each category, calculated as:

$$AP = \int_{0}^{1} P(R)\,dR,$$
The performance of the model in the task of detecting fabric defects can be measured by selecting suitable evaluation metrics to determine its validity and reliability in practical applications.
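As a sanity check on these definitions, the following sketch computes P, R, and F1 directly from the counts and approximates AP as the area under the P(R) curve via the trapezoid rule; the example counts are made up for illustration.

```python
import numpy as np

def precision_recall_f1(n_tp, n_fp, n_fn):
    # Direct translation of the P, R, and F1 formulas above.
    p = n_tp / (n_tp + n_fp)
    r = n_tp / (n_tp + n_fn)
    return p, r, 2 * p * r / (p + r)

def average_precision(recalls, precisions):
    # AP as the area under the P(R) curve, approximated by the trapezoid rule.
    r, p = np.asarray(recalls), np.asarray(precisions)
    order = np.argsort(r)
    r, p = r[order], p[order]
    return float(np.sum((r[1:] - r[:-1]) * (p[1:] + p[:-1]) / 2))

print(precision_recall_f1(90, 5, 10))   # (0.947..., 0.9, 0.923...)
```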

3.4. Constructing the Dataset

The dataset used in this paper was photographed and compiled on-site at a textile factory in Zhejiang, so as to reflect real defects in textile production. Each picture is 512 × 512 pixels, and the dataset contains 2116 pictures. The fabrics photographed are checkered blanks formed by threads of different colors, grain lines, or weaving methods. The dataset covers a variety of fabric defect types, such as yarn banding, yarn breakage, cotton balls, holes, yarn removal, and stains, with about 300 pictures of each type. Samples of the defects in the dataset are shown in Figure 9.
The initial dataset for this paper was manually labeled using LabelImg, a graphical image annotation tool [26]. This tool was used to delineate areas in images containing defects and to assign the appropriate label for each defect annotation box, such as “Ribbon yarn” and “Cotton balls”. The LabelImg annotation diagram is shown in Figure 10. This meticulous alignment ensures that the input images and their corresponding defect annotations match perfectly. It also guarantees that the model input aligns strictly with the labeled data during the training of the YOLOv8n-LAW model, thereby ensuring data consistency and accuracy throughout the training process.
A corresponding information file in an XML format is generated through tagging; it contains the category and precise location of each defect. A sample XML file is displayed in Figure 11.
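Since LabelImg saves annotations in the Pascal VOC XML format by default, converting an annotated defect box into YOLO’s normalized (cx, cy, w, h) form can be sketched as below. The 512 × 512 image size matches the dataset described above; the function name and the embedded sample XML are hypothetical helpers, not part of the paper’s toolchain.

```python
import xml.etree.ElementTree as ET

SAMPLE = """<annotation><object><name>Cotton balls</name>
<bndbox><xmin>100</xmin><ymin>120</ymin><xmax>160</xmax><ymax>170</ymax></bndbox>
</object></annotation>"""

def voc_to_yolo(xml_text, img_w=512, img_h=512):
    # Convert each Pascal VOC <object> box into YOLO's normalized cx, cy, w, h.
    root = ET.fromstring(xml_text)
    for obj in root.iter("object"):
        b = obj.find("bndbox")
        x1, y1 = float(b.findtext("xmin")), float(b.findtext("ymin"))
        x2, y2 = float(b.findtext("xmax")), float(b.findtext("ymax"))
        yield obj.findtext("name"), ((x1 + x2) / 2 / img_w, (y1 + y2) / 2 / img_h,
                                     (x2 - x1) / img_w, (y2 - y1) / img_h)

print(list(voc_to_yolo(SAMPLE)))
# [('Cotton balls', (0.2539..., 0.2832..., 0.1171..., 0.0976...))]
```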
Then, the labeled images are augmented through operations such as mirroring, random rotation, random cropping, adding noise, and color adjustment to expand the dataset. Additionally, four original images are randomly stitched together into a new training sample using mosaic augmentation. Subsequently, the synthetic image is annotated to complete dataset preprocessing. Examples of images after data augmentation and enhancement are shown in Figure 12.
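A minimal sketch of these augmentations with OpenCV and NumPy is shown below; in practice the annotation boxes must be transformed together with each image, which this image-only sketch omits, and the noise level and probabilities are arbitrary illustrative choices.

```python
import cv2
import numpy as np

def augment(img, rng):
    # Mirror, rotate, and add Gaussian noise: a subset of the augmentations above.
    if rng.random() < 0.5:
        img = cv2.flip(img, 1)                        # horizontal mirror
    img = np.rot90(img, rng.integers(0, 4)).copy()    # random 90-degree rotation
    noise = rng.normal(0, 8, img.shape)               # illustrative noise level
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

def mosaic(imgs):
    # Stitch four 512x512 samples into one 1024x1024 mosaic training image.
    return np.vstack([np.hstack(imgs[:2]), np.hstack(imgs[2:])])

rng = np.random.default_rng(0)
tiles = [rng.integers(0, 255, (512, 512, 3), dtype=np.uint8) for _ in range(4)]
print(mosaic([augment(t, rng) for t in tiles]).shape)   # (1024, 1024, 3)
```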
After preprocessing, the number of image samples increased from the initial 2116 to 7581, with each image saved in a “.jpg” format. Among them, there are 1275 samples with yarn defects, 924 samples with broken yarn defects, 1104 samples with cotton ball defects, 1008 samples with holes, 2004 samples with yarn slippage defects, and 1266 samples with stains. To avoid bias in the results, the entire dataset is divided into training, validation, and test sets at proportions of 70%, 20%, and 10%, respectively, to ensure the balance of the dataset. The training and validation sets are used for training the neural network, while the test set is used to evaluate the model’s performance on unseen data.
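The 70/20/10 split can be reproduced deterministically with a sketch like the following; the file naming is hypothetical.

```python
import random

def split_dataset(paths, seed=0):
    # Shuffle once with a fixed seed, then carve out 70/20/10 train/val/test splits.
    random.Random(seed).shuffle(paths)
    n_train, n_val = int(0.7 * len(paths)), int(0.2 * len(paths))
    return paths[:n_train], paths[n_train:n_train + n_val], paths[n_train + n_val:]

train, val, test = split_dataset([f"img_{i}.jpg" for i in range(7581)])
print(len(train), len(val), len(test))   # 5306 1516 759
```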
Through data visualization tools, the defect information diagram of the dataset was obtained. Figure 13a shows the distribution of the center coordinates of the target bounding boxes, indicating a uniform distribution of the center positions of the target boxes. Figure 13b displays the proportion of the target boxes relative to the images, with the bottom left corner being the darkest, indicating that the current dataset mainly consists of small defects.

3.5. Comparative Experiments

3.5.1. A Comparative Analysis of the Results of the Improved Algorithms for Each Module

In order to verify the effectiveness of the improved YOLOv8n-LAW algorithm in detecting fabric defects and to evaluate the influence of each improvement factor on network performance, three sets of comparative experiments were carried out. The algorithm’s performance detection results are shown in Table 3.
The experiments aimed to compare the performance disparity between the YOLOv8n algorithm and the improved YOLOv8n-LAW algorithm in fabric defect detection tasks. The experimental environment and parameter settings remained consistent, with improvements gradually added to the original algorithm. Each added improvement part corresponds to an improved algorithm name, with three improvement modules in total:
  • The model with the added LSKNet attention mechanism is named YOLOv8n-L;
  • The model with the integrated AFPN module is named YOLOv8n-LA;
  • The model with the added WIoU loss function is named YOLOv8n-LAW.
According to the analysis from the table:
  • YOLOv8n-L adds the LSKNet attention mechanism to YOLOv8n, giving the model a wider contextual area and enhancing its feature extraction capability. The F1 score increased by 1.7%, the mAP by 2.2%, the detection speed (FPS) decreased by 2 frames per second, the computational cost (FLOPs) decreased by 0.3 G, and the model size increased by 0.34 MB;
  • YOLOv8n-LA introduces the Asymptotic Feature Pyramid Network (AFPN) module on top of YOLOv8n-L, enabling semantic information from different hierarchical features to converge progressively, allowing the model to acquire accurate and comprehensive features. This enhances detection precision and the recognition rate of small target defects. The F1 score increased by 1.4%, the mAP by 1.5%, the FPS by 3 frames per second, the FLOPs decreased by 1.2 G, and the model size increased by 0.41 MB;
  • YOLOv8n-LAW, based on YOLOv8n-LA, replaces the CIoU loss with the WIoU loss, enabling consideration of anchor box quality and the dynamic adjustment of gradient gains based on fabric defect features. This effectively focuses on distinguishing fabric defects from non-defective areas, enhancing both detection speed and accuracy. Its F1 score increased by 0.4% and its mAP by 0.5%, while its FPS, FLOPs, and model size remained largely unchanged.
In summary, compared to the YOLOv8n algorithm, the YOLOv8n-LAW algorithm trades a modest increase in model size for higher accuracy, achieving higher F1 and mAP values. Its higher FPS gives it a faster detection speed, while the lower computational cost reduces the complexity of model computation, improving the model’s overall efficiency.

3.5.2. A Comparison of the Precision and Recall of the Improved Algorithms for Each Module

In this paper, the YOLOv8n algorithm has been improved into the YOLOv8n-LAW algorithm, designed specifically for fabric defect detection. To demonstrate the detection performance more intuitively and to show the difference in detection capability between the original YOLOv8n algorithm and the improved YOLOv8n-LAW algorithm, precision (P) and recall (R) curves were plotted throughout the training process for the original YOLOv8n algorithm and its various improved versions. Figure 14 displays the P and R comparison plots of the improved algorithms.
From Figure 14, it can be observed that the metrics of all models fluctuate as the training epochs increase. As the training progresses, these fluctuations become smaller, and the models begin to converge and stabilize. As shown in Figure 14a, there is a significant difference in the accuracy of different models before 100 epochs. YOLOv8n-LAW exhibits the fastest and highest increase in accuracy in the early stages, with the smallest epoch value where convergence begins, indicating a faster convergence speed. As seen in Figure 14b, although all models tend to stabilize in terms of their recall after a sufficient number of training epochs, YOLOv8n-LAW also shows a higher recall rate in the early stages, with the smallest epoch value where convergence begins, indicating a faster convergence speed. Thus, it is shown that the YOLOv8n-LAW algorithm improves the convergence ability of the network, and the P and R are improved compared to the original YOLOv8n algorithm.

3.5.3. Comparisons of the Detection Performance of Different Algorithms

In order to verify the superiority and efficacy of the YOLOv8n-LAW fabric defect detection algorithm described in this research, comparison experiments were performed with the Faster RCNN, Cascade Faster RCNN [27], SSD, YOLOv5s, YOLOv5s-4SCK [16], YOLOX, YOLOv7-tiny, and YOLOv8n algorithms. Table 4 presents the experimental results for comparison.
The data comparison in the table indicates that the YOLOv8n-LAW fabric defect detection algorithm proposed in this paper outperforms the two-stage detection algorithm Faster RCNN, with an increase of 32.8% in the F1 score, 30.5% in the mAP, and 34 in the FPS, and reduces the model size by 102.09 MB. Compared to the improved two-stage Cascade Faster RCNN algorithm proposed by Jia et al. [27], the F1 score increases by 3.6%, the mAP by 2.7%, and the FPS by 33. Because target detection and classification require two stages, Cascade Faster RCNN’s training is more complex than that of one-stage algorithms; it also detects more slowly and demands more computational resources and storage. Consequently, both the original and improved Faster RCNNs show lower detection accuracies and speeds than the YOLOv8n-LAW algorithm, making them less suitable for real-time detection scenarios.
In this paper, the proposed YOLOv8n-LAW fabric defect detection algorithm, when compared to the first-stage detection algorithm SSD, shows a 57.2% increase in F1 score, a 52.7% increase in mAP, a 19-unit increase in FPS, and a reduction in model size by 71.39 MB. Although the SSD algorithm improves the detection speed compared to second-stage algorithms, it has a larger model size and a significantly lower detection accuracy and speed than the improved YOLOv8n-LAW algorithm. Moreover, it struggles with small targets and complex backgrounds. This limitation becomes particularly problematic when detecting fabric defects characterized by small targets and complex backgrounds.
Compared to the other YOLO series algorithms listed in the table, the improvements of the YOLOv8n-LAW algorithm in each performance metric are clear. Compared to YOLOv5s, YOLOv8n-LAW increases the F1 score by 8.2%, the mAP by 6.9%, and the FPS by 12; compared to Zhou et al.’s improved YOLOv5s-4SCK, it increases the F1 score by 5.6%, the mAP by 2.8%, and the FPS by 10 [16]; compared to YOLOX, it increases the F1 score by 7.1%, the mAP by 4.3%, and the FPS by 10, and reduces the model size by 27.69 MB; compared to YOLOv7-tiny, it increases the F1 score by 5.6%, the mAP by 4.7%, and the FPS by 2, and reduces the model size by 4.99 MB; compared to YOLOv8n, the improved YOLOv8n-LAW increases the F1 score by 3.5%, the mAP by 4.2%, and the FPS by 1, while maintaining a similar model size. One-stage detection algorithms, which do not require a complex training phase or an additional region proposal step, offer faster detection speeds, but their detection accuracy is relatively lower, often resulting in missed detections, particularly of small target defects. Compared to the other one-stage detection algorithms, the YOLOv8n-LAW algorithm shows significant improvements in detection accuracy and speed, achieves a higher recognition rate of small target defects, and has a smaller model size, making it more suitable for real-time fabric defect detection.
In summary, the two-stage detection algorithms, Faster RCNN and its enhanced version Cascade Faster RCNN, offer high detection accuracy but suffer from large model sizes, complex training processes, and slow detection speeds, making them unsuitable for real-time fabric defect detection. The one-stage SSD algorithm, while faster, suffers from low detection accuracy and is prone to both false and missed detections. The other YOLO series algorithms improve both detection speed and accuracy but still perform significantly worse than the YOLOv8n-LAW algorithm introduced in this paper. In the fabric defect detection task, the YOLOv8n-LAW algorithm not only achieves a higher FPS but also maintains a smaller model size, consuming fewer computational resources and requiring less storage space. Together with its significantly enhanced detection accuracy, this makes it highly suitable for fabric defect detection tasks.

3.6. A Comparative Analysis of Test Results

To demonstrate the actual detection effects of the algorithms before and after improvement, this paper selected some of their detection results for comparison. In order to compare with the actual fabric defects observed by the human eye, manually selected fabric defect images are additionally included in the comparison images. The comparison images are shown in Figure 15, where Figure 15(a1,b1,c1,d1) are manually selected fabric defect images, Figure 15(a2,b2,c2,d2) are YOLOv8n-LAW fabric defect accuracy detection images, and Figure 15(a3,b3,c3,d3) are YOLOv8n fabric defect accuracy detection images.
Using the first manually annotated fabric defect image depicting a yarn break as the reference image (Figure 15(a1)), a comparison was made between the YOLOv8n-LAW and YOLOv8n algorithms for the first round of detection, resulting in the following observations:
  • The YOLOv8n-LAW algorithm successfully detected the fabric defect type as a yarn break with an accuracy of 92% (Figure 15(a2));
  • The YOLOv8n algorithm also successfully detected the fabric defect type as a yarn break with an accuracy of 88% (Figure 15(a3)).
After comparison, it is evident that the YOLOv8n-LAW algorithm outperforms the YOLOv8n algorithm in this task, demonstrating higher detection accuracy and more precise defect localization.
Using the second manually annotated fabric defect image depicting stains as the reference image (Figure 15(b1)), which contains a total of five stains, a comparison was made between the YOLOv8n-LAW and YOLOv8n algorithms for the second round of detection, resulting in the following observations:
  • The YOLOv8n-LAW algorithm successfully detected the fabric defect type as stains, with a total quantity of five stains. The accuracies from top to bottom were 62%, 41%, 69%, 54%, and 58% (Figure 15(b2));
  • The YOLOv8n algorithm successfully detected the fabric defect type as stains, with a total quantity of four stains. The accuracies from top to bottom were 55%, 65%, 49%, and 54% (Figure 15(b3)).
The results indicate that the YOLOv8n-LAW algorithm can accurately identify all defects, while the YOLOv8n algorithm exhibits instances of missed detections. Furthermore, compared to the YOLOv8n algorithm, the YOLOv8n-LAW algorithm demonstrates higher detection accuracy.
Using the third manually annotated fabric defect image depicting stains as the reference image (Figure 15(c1)), which contains a total of three stain defects, a comparison was made between the YOLOv8n-LAW and YOLOv8n algorithms for the third round of detection, resulting in the following observations:
  • The YOLOv8n-LAW algorithm successfully detected the fabric defect type as stains, with a total quantity of three stains. The accuracies were 95%, 84%, and 80% (Figure 15(c2));
  • The YOLOv8n algorithm successfully detected the fabric defect type as stains, with a total quantity of three stains. The accuracies were 82%, 87%, and 84% (Figure 15(c3)).
The results indicate that the YOLOv8n-LAW algorithm performs better in this task, demonstrating higher detection accuracy and precision, particularly in detecting small target defect points.
Using the fourth manually annotated fabric defect image depicting lint balls as the reference image (Figure 15(d1)), a comparison was made between the YOLOv8n-LAW and YOLOv8n algorithms for the fourth round of detection, resulting in the following observations:
  • The YOLOv8n-LAW algorithm successfully detected the fabric defect type as lint balls with an accuracy of 89% (Figure 15(d2));
  • The YOLOv8n algorithm successfully detected the fabric defect type as lint balls with an accuracy of 84% (Figure 15(d3)).
The results indicate that the YOLOv8n-LAW algorithm has higher detection accuracy, particularly in detecting small target defect points, leading to more precise fabric defect detection.
The comparison highlights the superior performance of the YOLOv8n-LAW algorithm over the YOLOv8n algorithm in terms of detection accuracy, defect localization, and the ability to detect small target defects. This makes the YOLOv8n-LAW algorithm more well-suited for real-world fabric defect detection applications.

4. Conclusions

Compared to other mainstream target detection algorithms, the YOLOv8 algorithm exhibits superior target localization ability and higher detection efficiency, making it particularly well-suited for automated textile defect detection and capable of significantly improving textile quality inspection while reducing costs. Addressing the challenges of diverse fabric defect types, complex fabric textures, and concealed small target defects, this paper improves the overall performance of fabric defect detection by enhancing the YOLOv8n algorithm. First, an LSKNet attention mechanism is added to both ends of the C2f module in the backbone network to enhance sensitivity to small and minute defects. Second, replacing the PAN-FPN structure in the neck with the AFPN module reduces false alarms and missed detections. Last, replacing the CIoU loss with the WIoU v3 loss enhances the detection accuracy of fabric defects. The research shows that the proposed YOLOv8n-LAW algorithm achieves significant improvements in F1 score, mAP, and FPS. The improved algorithm effectively captures small target defects during fabric defect detection, significantly enhancing detection accuracy and speed and thus better meeting practical detection requirements. The YOLOv8n-LAW algorithm also demonstrates good scalability and generalization ability, and it holds promising potential for real-world engineering applications. In the future, exploring finer network structures and larger datasets, integrating multimodal detection techniques, and adapting further to diverse detection environments will help optimize the system to meet the specific needs of automated production lines for fabric defect detection.

Author Contributions

Conceptualization, Y.S.; methodology, Y.S.; software, Y.S. and H.G.; validation, Y.S.; formal analysis, Y.S.; investigation, Y.S.; resources, L.T.; data curation, S.M.; writing—original draft preparation, Y.S.; writing—review and editing, S.M.; visualization, Y.S. and H.G.; supervision, Y.S. and L.T.; project administration, S.M.; funding acquisition, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “The National Key Research and Development Program, grant number SQ2023YFB4600241”, “Hubei Province Sc.& Tech. Research Plan, grant number 2023EHA027”, and “National Innovation Base plan, grant number 111HTE2022002”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in this article. The data that were used are confidential.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rasheed, A. Classification of technical textiles. Fibers Tech. Text. 2020, 49–64.
  2. Almeida, T.; Moutinho, F.; Matos-Carvalho, J.P. Fabric defect detection with deep learning and false negative reduction. IEEE Access 2021, 9, 81936–81945.
  3. Mahmood, T.; Ashraf, R.; Faisal, C.N. An efficient scheme for the detection of defective parts in fabric images using image processing. J. Text. Inst. 2023, 114, 1041–1049.
  4. Geze, R.A.; Akbaş, A. Detection and Classification of Fabric Defects Using Deep Learning Algorithms. Politek. Derg. 2023, 27, 371–378.
  5. Jeyaraj, P.R.; Samuel Nadar, E.R. Computer vision for automatic detection and classification of fabric defect employing deep learning algorithm. Int. J. Cloth. Sci. Technol. 2019, 31, 510–521.
  6. Ngan, H.Y.; Pang, G.K.; Yung, N.H. Automated fabric defect detection—A review. Image Vis. Comput. 2011, 29, 442–458.
  7. Wei, B.; Hao, K.; Tang, X.-S.; Ren, L. Fabric defect detection based on faster RCNN. In Artificial Intelligence on Fashion and Textiles; AITA 2018; Advances in Intelligent Systems and Computing; Wong, W., Ed.; Springer: Cham, Switzerland, 2019; Volume 849, pp. 45–51.
  8. Chen, M.; Yu, L.; Zhi, C.; Sun, R.; Zhu, S.; Gao, Z.; Ke, Z.; Zhu, M.; Zhang, Y. Improved faster R-CNN for fabric defect detection based on Gabor filter with Genetic Algorithm optimization. Comput. Ind. 2022, 134, 103551.
  9. Li, H.; Zhang, H.; Liu, L.; Zhong, H.; Wang, Y.; Wu, Q.J. Integrating deformable convolution and pyramid network in cascade R-CNN for fabric defect detection. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 3029–3036.
  10. Huang, H. Fabric defect detection based on MF-SSD network. Cotton Text. Technol. 2020, 48, 11–16.
  11. Han, J.; Cao, J.; Wang, H.; Ji, X. A Review of Fabric Defect Detection Methods Based on Computer Vision. J. Liaoning Univ. Pet. Chem. Technol. 2022, 42, 70.
  12. Zhao, L.; Li, S. Object detection algorithm based on improved YOLOv3. Electronics 2020, 9, 537.
  13. Gai, R.; Chen, N.; Yuan, H. A detection algorithm for cherry fruits based on the improved YOLO-v4 model. Neural Comput. Appl. 2023, 35, 13895–13906.
  14. Li, R.; Wu, Y. Improved YOLO v5 wheat ear detection algorithm based on attention mechanism. Electronics 2022, 11, 1673.
  15. Wang, Y.; Hao, Z.; Zuo, F.; Pan, S. A fabric defect detection system based improved yolov5 detector. J. Phys. Conf. Ser. 2021, 2010, 012191.
  16. Zhou, S.; Zhao, J.; Shi, Y.S.; Wang, Y.F.; Mei, S.Q. Research on improving YOLOv5s algorithm for fabric defect detection. Int. J. Cloth. Sci. Technol. 2023, 35, 88–106.
  17. Zhang, M.; Yu, W.; Qiu, H.; Yin, J.; He, J. A Fabric Defect Detection Algorithm Based on YOLOv8. In Proceedings of the International Conference on Image Processing, Computer Vision and Machine Learning (ICICML), Chengdu, China, 3–5 November 2023; pp. 1040–1043.
  18. Terven, J.; Cordova-Esparza, D.M.; Romero-González, J.A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716.
  19. Sharshar, A.; Matsun, A. Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for Advanced Object Detection. arXiv 2023, arXiv:2311.12956.
  20. Liu, C.; Lin, W.; Feng, Y.; Guo, Z.; Xie, Z. ATC-YOLOv5: Fruit Appearance Quality Classification Algorithm Based on the Improved YOLOv5 Model for Passion Fruits. Mathematics 2023, 11, 3615.
  21. Han, L.; Niu, H. Improved electric bike helmet wearing detection algorithm for YOLOv5s. In Proceedings of the Third International Conference on Artificial Intelligence, Virtual Reality, and Visualization (AIVRV 2023), Chongqing, China, 7–9 July 2023; pp. 440–445.
  22. Li, Y.; Hou, Q.; Zheng, Z.; Cheng, M.-M.; Yang, J.; Li, X. Large selective kernel network for remote sensing object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 16794–16805.
  23. Yang, G.; Lei, J.; Zhu, Z.; Cheng, S.; Feng, Z.; Liang, R. AFPN: Asymptotic feature pyramid network for object detection. In Proceedings of the 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Honolulu, HI, USA, 1–4 October 2023; pp. 2184–2189.
  24. Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 658–666.
  25. Cho, Y.-J. Weighted intersection over union (wIoU): A new evaluation metric for image segmentation. arXiv 2021, arXiv:2107.09858.
  26. Pajaziti, A.; Basholli, F.; Zhaveli, Y. Identification and classification of fruits through robotic system by using artificial intelligence. Eng. Appl. 2023, 2, 154–163.
  27. Jia, Z.; Shi, Z.; Quan, Z.; Shunqi, M. Fabric defect detection based on transfer learning and improved Faster R-CNN. J. Eng. Fibers Fabr. 2022, 17.
Figure 1. A diagram of the YOLOv8n network’s architecture.
Figure 2. A diagram of the YOLOv8n-LAW network’s architecture.
Figure 3. The overall structure of the LSKNet module.
Figure 4. The structure of the C2f_Attention module.
Figure 5. The AFPN structure’s architecture.
Figure 6. The box_loss change curve.
Figure 7. The cls_loss change curve.
Figure 8. The dfl_loss change curve.
Figure 9. Sample pictures of defects in the dataset: (a) with yarn; (b) broken yarn; (c) cotton balls; (d) holes; (e) stripping; (f) stains.
Figure 10. The LabelImg interface for labelling fabric defects.
Figure 11. A sample XML file.
Figure 12. A sample dataset’s expansion and enhancement: (a) the original; (b) a random cropping; (c) tessellation enhancement.
Figure 13. A fabric defects infographic: (a) a distribution of target box positions; (b) a distribution of target box sizes.
Figure 14. The comparison between the YOLOv8n-LAW algorithm’s and the YOLOv8n algorithm’s Ps and Rs: (a) P comparison plot; (b) R comparison plot.
Figure 15. A comparison of the experimental results: (a1–d1) are manually selected fabric defect images; (a2–d2) are YOLOv8n-LAW fabric defect accuracy detection images; (a3–d3) are YOLOv8n fabric defect accuracy detection images.
Table 1. The experimental environment.

Hardware                                      Software
Name    Model                                 Name      Version
CPU     Intel Core i5-13600KF                 Python    3.8
GPU     NVIDIA GeForce RTX 2080Ti (11 GB)     PyTorch   1.10
RAM     11 GB                                 CUDA      10.2
Table 2. The model’s parameters.
Table 2. The model’s parameters.
OptimizerSGD
momentum0.937
weight_decay0.0005
learning_rate0.01
epoch300
batch_size40
Table 3. The improvement comparison results of each module.

Algorithm                          F1      mAP     FPS   FLOPs (G)   Model Size (MB)
YOLOv8n                            90.1%   93.2%   45    9.2         5.97
YOLOv8n-L                          91.8%   95.4%   43    8.9         6.31
YOLOv8n-LA                         93.2%   96.9%   46    7.7         6.72
YOLOv8n-LAW (Improved Algorithm)   93.6%   97.4%   46    7.7         6.71
Table 4. The comparison of the experimental results.

Algorithm                          Input Size   F1      mAP     FPS   Model Size (MB)
Faster RCNN                        512 × 512    60.8%   66.9%   12    108.80
Cascade Faster RCNN [27]           512 × 512    90.0%   94.7%   13    -
SSD                                512 × 512    36.4%   44.7%   27    78.10
YOLOv5s                            512 × 512    85.4%   90.5%   34    13.70
YOLOv5s-4SCK [16]                  512 × 512    88.0%   94.6%   36    -
YOLOX                              512 × 512    86.5%   93.1%   36    34.40
YOLOv7-tiny                        512 × 512    88.0%   92.7%   44    11.70
YOLOv8n                            512 × 512    90.1%   93.2%   45    5.97
YOLOv8n-LAW (Improved Algorithm)   512 × 512    93.6%   97.4%   46    6.71
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
