Article

Research on a Photovoltaic Panel Dust Detection Algorithm Based on 3D Data Generation

College of Energy and Mechanical Engineering, Shanghai University of Electric Power, Shanghai 201306, China
*
Author to whom correspondence should be addressed.
Energies 2024, 17(20), 5222; https://doi.org/10.3390/en17205222
Submission received: 16 September 2024 / Revised: 12 October 2024 / Accepted: 18 October 2024 / Published: 20 October 2024

Abstract
With the rapid advancements in AI technology, UAV-based inspection has become a mainstream method for intelligent maintenance of PV power stations. To address limitations in accuracy and data acquisition, this paper presents a defect detection algorithm for PV panels based on an enhanced YOLOv8 model. The PV panel dust dataset is manually extended using 3D modeling technology, which significantly improves the model’s ability to generalize and detect fine dust particles in complex environments. SENetV2 is introduced to improve the model’s perception of dust features in cluttered backgrounds. AKConv replaces traditional convolution in the neck network, allowing for more flexible and accurate feature extraction through arbitrary kernel parameters and sampling shapes. Additionally, a DySample dynamic upsampler accelerates processing by 8.73%, improving the frame rate from 87.58 FPS to 95.23 FPS while maintaining efficiency. Experimental results show that the 3D image expansion method contributes to a 4.6% increase in detection accuracy, an 8.4% improvement in recall, a 5.7% increase in mAP@50, and a 15.1% improvement in mAP@50-95 compared to the original YOLOv8. The expanded dataset and enhanced model demonstrate the effectiveness and practicality of the proposed approach.

1. Introduction

With continuous advancements in global industrialization and rapid socio-economic development, mankind's uninterrupted exploitation of natural resources has caused great damage to the environment. Many environmental problems, such as extreme weather, have affected the normal life of human beings. To realize sustainable economic development, governments are actively promoting clean energy to replace traditional fossil energy. According to statistics, photovoltaic (PV) power generation technology has become a potential major power generation technology worldwide [1]. However, because a PV power plant contains a large number of PV panels exposed to the natural environment, dust deposition is unavoidable, and the problem is particularly serious in high-dust areas such as the Middle East [2,3]. Sayyah et al. [4] discussed the impact of dust deposition on power generation, emphasized the importance of keeping PV surfaces clean to optimize performance, and pointed out a direct correlation between the two. Yang et al. [5] further investigated the deposition of dust particles on solar panels through numerical simulations, solidifying the understanding of the impact of dust on power generation efficiency. Their work adds a computational perspective that provides insights into potential loss percentages and effective dust management practices in the real world. In response to the dust coverage problem, it is crucial to develop detection solutions for PV panels. A review by Dantas et al. [6] explored various image processing techniques for detecting dust on solar panels, providing an innovative approach to monitor and assess the cleanliness of solar panels, which is critical for post-operation and maintenance. Yue et al. [7] complemented this review by examining dust behavior under different environmental conditions, informing the detection process and the development of more effective cleaning techniques and prevention strategies. Ocean He et al. [8], on the other hand, focused on a PV panel dust detection system using an improved MobileNet algorithm, and their study demonstrated the effectiveness of combining a convolutional neural network with a machine-learning classifier to accurately identify dust buildup on solar panels. This approach provides a promising direction for automated, real-time dust detection, advancing the development of computerized surrogate detection.
Current state-of-the-art methods combining Artificial Intelligence (AI) and Deep Learning for fault and dust detection are also being explored. The study by Naima Elyanboiy et al. [9] highlights the fact that AI-based techniques can not only detect the presence of dust, but also accurately quantify its impact on the panel’s performance, which significantly enhances the decision-making process in O&M and promises to lead to more efficient and cost-effective maintenance programs. The image processing and artificial neural network (ANN) approach proposed by Dania Saquib et al. [10] further illustrates an innovative path for dust detection and power prediction in PV systems. By utilizing image processing to detect dust and using artificial neural networks to predict the indirect effect on power output, researchers are paving the way for more sophisticated monitoring and maintenance systems that can dynamically adapt to environmental conditions. Finally, El Karch Hajar et al. [11] applied deep learning models (e.g., YOLOv5) to dust recognition in PV modules, demonstrating the potential of deep learning and computer vision techniques in developing high-performance real-time systems. These systems can accurately recognize dust and dirt on PV panels.
Overall, detecting PV panel dust is a challenging task. The variety of dust morphologies, together with regional differences arising from soil structure and moisture content, has led to poor performance in previous studies, with low detection accuracy and weak image recognition. These studies share two common problems: first, using a square convolution kernel to process dust images does not sufficiently consider the contour features of dust, resulting in poor recognition; second, the model generalizability problem caused by regional differences in dust morphology is not effectively addressed. To address these difficulties, we propose the YOLOv8-DSDA method and, at the same time, use Blender 4.1 to model PV panels and customize the attributes of dust particles so as to match the dust characteristics of different regions. These methods effectively solve the above problems. In summary, the contributions of this paper are as follows:
We embed Alterable Kernel Convolution (AKConv) into the model to adaptively adjust the shape of the convolution kernel to capture the dust contour more efficiently, thus improving the accuracy of dust recognition.
We integrated the SENetV2 module into the dust detection model, which enhances the ability to model inter-channel dependencies through its multi-branch fully connected layer, and captures complex variations in dust characteristics more efficiently, thereby improving the sensitivity and accuracy of dust detection.
We introduced the DySample dynamic upsampler, which employs a point sampling strategy to realize efficient upsampling, significantly reducing computational complexity and resource consumption. It not only improves the processing speed, but also maintains high-precision prediction results.
We used Blender software to construct a three-dimensional model of PV panel dust, enabling free control of dust granularity and random dust coverage, and improving dust detection capability. The general structure of the paper is shown in Figure 1.

2. Improved YOLOv8 Network Structure

YOLOv8, the latest target detection model released by Ultralytics in early 2023 [12], demonstrates significant performance improvements in tasks such as object detection, semantic segmentation, and image categorization compared to previous YOLO models. The network architecture of YOLOv8 consists of four main components: an input layer, a backbone network, a neck network, and a prediction network. The network structure of YOLOv8 is shown in Figure 2.
In this paper, the basic framework of YOLOv8 is modified to make it more suitable for dust detection on PV panels. The modified framework is shown in Figure 2 and is called YOLOv8-DSDA (Dust: SENetV2, DySample, AKConv). The main improvements are as follows: (1) introducing the SENetV2 module into the backbone and neck networks, represented by the red module in the figure; (2) replacing the traditional upsampling (upsample) with DySample, represented by the orange module in the figure; (3) replacing the convolutional layer in the neck network with AKConv, shown in khaki in the figure.
The overall structure of the model is shown in Figure 3:

2.1. SENetV2 Module

The SENet module enhances the channel representation through two processes: squeezing and excitation. The squeezing process compresses the input features through global average pooling to obtain a global information representation. The excitation process uses a fully connected layer to learn how to calibrate the importance of each channel based on this global information [13]. This approach ensures that the most critical features are reinforced and efficiently passed on to subsequent layers of the network, but the filtering mechanism can also lead to partial feature loss. To improve on this, the SENetV2 module [14] was introduced, which combines the features of ResNeXt [15] and SENet by using multi-branch fully connected (FC) layers for the squeeze and excitation operations, followed by feature scaling. A comparison of the SENet and SENetV2 modules is shown in Figure 4 and Figure 5. The SENetV2 module adds an aggregation layer between the squeeze and excitation stages; the multi-branch dense layers are connected in the circled part of the figure.
Due to the environment and other factors, the collected dust data contain a large amount of unimportant noise, which interferes with dust recognition. To effectively suppress this noise and highlight the spatial location of the important regions, this paper introduces the SENetV2 attention module; compared with the original network, adding it incurs almost no additional computation.
The SENet module reduces the dimension of the input through the squeeze operation, but this reduced dimension is not restored to the original size in the subsequent process. SENetV2 adds an aggregation operation after the squeeze: the squeezed output is fed into the multi-branch FC layers for aggregation, and then the excitation is performed. In this way, the branched input can be passed through to the end and recover its original shape. Let the input be $x$, the aggregation $F$, the squeeze operation $S$, and the excitation operation $E$. The SENetV2 workflow is shown in Equation (1) below:
$\mathrm{SENetV2} = x + F\left( E(S(x)) \cdot x \right)$ (1)
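The squeeze–aggregate–excite flow described above can be sketched in NumPy. This is a minimal illustration following the prose ordering (squeeze, multi-branch aggregation, excitation, channel-wise rescaling with a residual connection); the branch count, reduction ratio, and weight shapes are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def senetv2_block(x, branch_weights, excite_weights):
    """Sketch of the SENetV2 squeeze/aggregate/excite flow.
    x: feature map of shape (C, H, W). branch_weights and
    excite_weights are hypothetical learned parameters."""
    C, H, W = x.shape
    s = x.mean(axis=(1, 2))                        # squeeze: global average pool -> (C,)
    # aggregation: multi-branch FC layers (ReLU), outputs concatenated
    branches = [np.maximum(W_b @ s, 0.0) for W_b in branch_weights]
    agg = np.concatenate(branches)
    # excitation: sigmoid gate producing one scale per channel
    e = 1.0 / (1.0 + np.exp(-(excite_weights @ agg)))
    scaled = x * e[:, None, None]                  # channel-wise rescaling
    return x + scaled                              # residual connection, as in Eq. (1)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
branch_w = [rng.standard_normal((2, 8)) * 0.1 for _ in range(4)]  # 4 branches, 8 -> 2
excite_w = rng.standard_normal((8, 8)) * 0.1                      # 4*2 = 8 -> 8 channels
y = senetv2_block(x, branch_w, excite_w)
```

Because the excitation gate lies in (0, 1), each channel of the output stays between the input and twice the input, so the block reinforces informative channels without suppressing the identity path.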

2.2. DySample Module

In the original YOLOv8 network, the upsampling module is implemented with nearest-neighbor interpolation, which interpolates the upsampled values according to a fixed rule. Due to the diversity of PV panel installation environments, the morphology and distribution of dust may vary with geographic location, climatic conditions, and season. To ensure that the model maintains stable and efficient performance across application scenarios, we introduce a lightweight dynamic upsampler, DySample [16]. DySample not only simplifies the computation, but also reduces the model's parameter count and runtime memory requirement through its unique point sampling mechanism; it also dynamically adjusts the upsampling process so that the model better adapts to changes in dust patterns. The dynamic upsampling structure of DySample is shown in Figure 6. For a feature map $\chi$ of size $C \times H \times W$ with a given upsampling factor $s$, a linear layer with input and output channel numbers $C$ and $2s^2$, respectively, generates an offset map $\psi$ of size $2s^2 \times H \times W$, which is then reshaped by pixel shuffling to $2 \times sH \times sW$. The sampling set $\mathcal{S}$ is the sum of the offset $\psi$ and the original sampling grid $\varphi$ (reshaping operations are omitted). Finally, an upsampled feature map $\chi'$ of size $C \times sH \times sW$ is generated; the overall process is shown in Equations (2) and (3):
$\psi = \mathrm{linear}(\chi)$ (2)
$\mathcal{S} = \varphi + \psi$ (3)
where $\psi$ is the generated offset, $\chi$ is the input feature map with the given upsampling factor and size, $\varphi$ is the original sampling grid, and $\mathcal{S}$ is the final sampling set, i.e., the sum of the offset and the original sampling grid.
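The offset generation, pixel-shuffle reshaping, and grid sampling steps above can be sketched as follows. This is an illustrative NumPy version under simplifying assumptions: the linear layer weights are random, and nearest-neighbor lookup stands in for the grid sampling used by the real DySample.

```python
import numpy as np

def dysample_upsample(x, W_lin, s=2):
    """Point-sampling upsampler sketch following Eqs. (2)-(3).
    x: (C, H, W) feature map; W_lin: (2*s*s, C) hypothetical linear
    weights that generate the offsets."""
    C, H, W = x.shape
    # Eq. (2): linear layer over channels -> offsets of shape (2*s*s, H, W)
    psi = np.einsum('oc,chw->ohw', W_lin, x)
    # pixel-shuffle reshape to (2, s*H, s*W)
    psi = psi.reshape(2, s, s, H, W).transpose(0, 3, 1, 4, 2).reshape(2, s * H, s * W)
    # original sampling grid phi: each upsampled pixel maps back to source coords
    ys, xs = np.meshgrid(np.arange(s * H) / s, np.arange(s * W) / s, indexing='ij')
    grid = np.stack([ys, xs]) + psi               # Eq. (3): S = phi + psi
    gy = np.clip(np.round(grid[0]).astype(int), 0, H - 1)
    gx = np.clip(np.round(grid[1]).astype(int), 0, W - 1)
    return x[:, gy, gx]                           # upsampled map of size (C, s*H, s*W)

rng = np.random.default_rng(1)
feat = rng.standard_normal((4, 8, 8))
W_lin = rng.standard_normal((2 * 2 * 2, 4)) * 0.01
up = dysample_upsample(feat, W_lin, s=2)
```

Because the offsets are content-dependent (a function of the feature map itself), each output pixel can pull from a slightly different source location, which is what lets the upsampler adapt to local dust patterns.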

2.3. AKConv Module

In the recognition of dust images, a fixed-size convolutional kernel is ineffective because of the variability in the size and shape of dust particles. Directly introducing convolutional kernels of different sizes, as in InceptionV1 [17], brings a heavy computational burden. To capture features at different scales in the convolutional layers without introducing extra computational and storage overhead, this paper draws on the adaptive convolution idea of the AKConv network, whose kernels can flexibly adjust their size and shape according to actual needs.
In the neck part of the model, we replaced the conventional convolution using AKConv [18]. This convolution module allows for the convolution kernel to have a variable number of parameters and adaptive sampling shapes, enabling the convolution operation to flexibly adjust the size and shape according to practical needs. This adaptive capability significantly improves the model’s adaptability and detection efficiency for different dust patterns and distributions on PV panels.
The operation flow for sand and dust detection on PV panels is shown in Figure 7. We first calculate the corresponding offset $Y$ by applying a two-dimensional convolution (Conv2d) to the input feature map $X$, as shown in Equation (4):
$Y = \mathrm{Conv2d}(X)$ (4)
The calculated offset $Y$ is then added to the original coordinates $P$ to obtain the modified coordinates:
$P_n = Y + P$ (5)
Finally, the final feature value $T$ is obtained by interpolation and resampling at the modified coordinates:
$T = \mathrm{Re}(T_{\mathrm{in}}, P_n)$ (6)
where $\mathrm{Re}$ denotes the interpolation and resampling operation. By introducing AKConv, the PV panel dust detection model not only demonstrates greater flexibility and efficiency in dealing with complex and variable environments, but also significantly improves detection accuracy.
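The offset-and-resample step in Equations (4)–(6) can be sketched as below. As a simplification, the offsets are passed in directly rather than produced by a Conv2d, and nearest-neighbor lookup stands in for the interpolation $\mathrm{Re}$; the 5-point kernel shape is an illustrative example of AKConv's arbitrary point counts.

```python
import numpy as np

def akconv_sample(x, base_points, offsets):
    """Sketch of the AKConv sampling step: initial sampling coordinates
    P are shifted by offsets Y (Eq. 5), then features are resampled at
    the modified coordinates P_n (Eq. 6, here nearest-neighbor)."""
    H, W = x.shape
    pn = base_points + offsets                    # P_n = Y + P, shape (N, 2)
    gy = np.clip(np.round(pn[:, 0]).astype(int), 0, H - 1)
    gx = np.clip(np.round(pn[:, 1]).astype(int), 0, W - 1)
    return x[gy, gx]                              # resampled feature values T

x = np.arange(25.0).reshape(5, 5)
# an irregular 5-point kernel (AKConv allows arbitrary point counts and shapes)
P = np.array([[2, 2], [1, 2], [2, 1], [3, 2], [2, 3]], dtype=float)
Y = np.array([[0.2, -0.1], [0.0, 0.4], [-0.3, 0.0], [0.1, 0.1], [0.0, -0.2]])
T = akconv_sample(x, P, Y)
```

A real AKConv layer would learn $Y$ per position via a Conv2d and then apply the kernel weights to the resampled values; the sketch isolates only the coordinate arithmetic.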

3. Experimental Datasets and Data Enhancement Method

The experimental samples in this paper come from the dust particle accumulation images on PV panels in an open-source dataset. A total of 730 dust accumulation images were obtained through data collection, including 331 images from desert areas, 261 from plain urban areas, and 138 from hilly areas. The particles in the collected dust images are randomly distributed and mostly irregular, but comparison shows that dust particles on PV panel surfaces differ between regions. For example, sand particles in desert regions are relatively fine overall, with highly uniform and concentrated particle sizes and consistent material composition, since they originate from fine-grained components easily transported by wind [19]. Dust in plain urban areas, by contrast, forms from particulate matter emitted directly by construction, industrial production, and other human activities, or suspended through soil weathering and then deposited; consequently, urban dust has a higher coarse-particle content and a larger overall particle size. Since dust on PV panel surfaces manifests differently across regions and the dust dataset for any single region is small, this paper proposes a manual modeling approach to generate PV panel dust images, in order to prevent overfitting during training and improve the generalization ability of the model.
Through 3D modeling, we can accurately control the size, shape, distribution, and material composition of the dust particles, as well as their precise location on the PV panel. This allows for the generation of data that are not limited to pre-existing sampling areas, but also simulate other environmental conditions that are not directly observed, such as dust accumulation under the influence of climatic extremes. For example, models can be created to simulate dust accumulation in desert areas after a dust storm or dust scenarios in urban areas during periods of peak construction activity.

3.1. Modeling Methods for Artificial Photovoltaic Panel Dust

Three-dimensional modeling allows us to rapidly generate large amounts of data without physical acquisition constraints. Compared to traditional data acquisition methods, 3D modeling can generate thousands of dust distribution images in a shorter period, greatly increasing the diversity and size of the dataset. This is especially important for training deep learning models, as larger datasets can significantly reduce overfitting phenomena and improve the robustness and accuracy of the model in practical applications.
In addition to the differences in dust characteristics in different regions, the current PV panel dust image acquisition mainly comes from aerial photography by UAVs. However, the shooting distance and angle are limited by the tilt angle of the PV panels, the sunlight irradiation angle, and the performance of the UAV, which leads to the insufficient number and diversity of aerial images. In addition, aerial images are often affected by weather conditions, wind speed, and UAV battery life, making it difficult to obtain a large number of high-quality images in a short period. In this case, deep learning models relying on aerial images for PV panel dust detection have difficulty in meeting the actual demand in terms of the number and diversity of training data, which also affects the generalization ability and detection accuracy of the models.
To solve the above two types of problems, we consider introducing a manual modeling approach to generate PV panel dust images. Through 3D modeling and rendering techniques, we generated diverse PV panel dust images. This method can simulate different scenarios and conditions and generate a large amount of high-quality training data.
The manually modeled PV panel dust images are created as follows: first, standardized 3D modeling is performed for each component of the PV panel according to existing PV panel representations. Then, the material parameters of each component are set, and the 3D model is rendered with color and material. The dust-covering effect is designed in Blender and applied to the surface of the PV panel. Next, a virtual camera is configured to increase the realism of the 3D model through settings such as light source and depth of field, and to increase the diversity of the PV panel model by varying these settings [20]. The virtual camera is used to capture diverse PV panel dust images at different angles and coverage ratios through operations such as scaling and rotation. Finally, the image annotation software LabelImg is used to batch-label the generated PV panel images. Figure 8 shows the 3D model of the PV panel; after modeling the overall shape and distribution of the dust on the model surface, a randomized coverage script is added, which can be set to draw random values within a specific range to keep the dust characteristics in a controllable range. The script settings are shown in Figure 9, and Figure 10 shows the 3D image of the PV panel covered by sand and dust.
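The randomized coverage idea above can be sketched in plain Python. The paper does not publish its Blender script, so the parameter names and ranges below are assumptions chosen only to illustrate drawing each dust attribute from a controllable interval; in practice these values would drive Blender's particle and material settings through its Python API.

```python
import random

# Illustrative region profiles; the actual ranges used in the paper's
# Blender script are not specified, so these numbers are assumptions.
REGION_PROFILES = {
    "desert": {"particle_size_mm": (0.05, 0.25), "coverage": (0.10, 0.60)},
    "urban":  {"particle_size_mm": (0.10, 0.80), "coverage": (0.05, 0.40)},
    "hilly":  {"particle_size_mm": (0.08, 0.50), "coverage": (0.05, 0.50)},
}

def sample_dust_params(region, seed=None):
    """Draw one randomized dust configuration, keeping every attribute
    inside the region's controllable interval."""
    rng = random.Random(seed)
    prof = REGION_PROFILES[region]
    return {
        "region": region,
        "particle_size_mm": rng.uniform(*prof["particle_size_mm"]),
        "coverage_ratio": rng.uniform(*prof["coverage"]),
        "camera_angle_deg": rng.uniform(15, 75),   # virtual camera tilt
        "light_intensity": rng.uniform(0.5, 1.5),  # relative sun strength
    }

params = sample_dust_params("desert", seed=42)
```

Seeding the generator makes each rendered scene reproducible, which helps when regenerating a specific low-quality sample with corrected camera or lighting settings.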

3.2. Image Quality Assessment

To evaluate the quality of the generated images, this paper proposes a multi-scale feature extraction and similarity calculation method. We use ResNet50 as the feature extraction model, and its network structure is shown in Figure 11 [21]. ResNet50 was initially designed for classification tasks, and in this paper, we removed the top classification layer and adapted it as a feature extractor.
For image evaluation, let the 3D-generated image be $X_g$ and the real image $X_r$. Each input image is first normalized to ensure that all images are on the same scale; the preprocessed image can be represented as:
$X' = O(X) = \dfrac{X - \min(X)}{\max(X) - \min(X)}$ (7)
where max ( X ) and min ( X ) represent the maximum and minimum values of the image data.
Features are then extracted from multiple layers using the ResNet50 network. For layer $i$, we extract the feature vectors $S_i$ and $L_i$, respectively:
$S_i = f_i(X_g; \theta_i), \quad L_i = f_i(X_r; \theta_i)$ (8)
where $f_i$ denotes the feature extraction at layer $i$ and $\theta_i$ the parameters of layer $i$.
After performing the above feature extraction for each image at different resolution scales, the per-layer feature vectors are concatenated to form the final output vectors $H$ and $I$:
$H = \bigoplus_{i=1}^{k} S_i, \quad I = \bigoplus_{i=1}^{k} L_i$ (9)
where $\bigoplus$ denotes concatenation and $k$ is the total number of selected feature layers.
For the extracted image feature vectors, the cosine similarity metric is used to quantify the similarity between the feature vectors from the real image and the generated image, calculated as:
$\mathrm{sim}(H, I) = \dfrac{\langle H, I \rangle}{\| H \|_2 \, \| I \|_2}$ (10)
where $\langle H, I \rangle$ denotes the dot product of $H$ and $I$, and $\|H\|_2$ and $\|I\|_2$ are the L2 norms of the respective vectors.
In this paper, we define samples with similarity above 0.8 as high-quality and samples with similarity below 0.8 as low-quality. Figure 12 illustrates low-quality samples with similarity between 0 and 0.2 and high-quality samples with similarity between 0.8 and 1.
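The screening pipeline of Equations (7), (9), and (10) can be sketched as follows. This is a minimal NumPy illustration: the ResNet50 feature extraction is assumed to have happened upstream, so the per-layer feature vectors are passed in directly, and the tiny example vectors are made up for demonstration.

```python
import numpy as np

def minmax_normalize(img):
    """Eq. (7): scale image values to [0, 1]."""
    return (img - img.min()) / (img.max() - img.min())

def cosine_similarity(h, i):
    """Eq. (10): cosine similarity between concatenated feature vectors."""
    return float(h @ i / (np.linalg.norm(h) * np.linalg.norm(i)))

def screen_sample(feats_generated, feats_real, threshold=0.8):
    """Concatenate per-layer features (Eq. 9) and keep generated samples
    whose similarity to the real image reaches the 0.8 quality threshold."""
    h = np.concatenate(feats_generated)
    i = np.concatenate(feats_real)
    sim = cosine_similarity(h, i)
    return sim, sim >= threshold

# hypothetical per-layer feature vectors for a generated and a real image
layers_g = [np.array([1.0, 2.0]), np.array([0.5, 0.5, 1.0])]
layers_r = [np.array([1.1, 1.9]), np.array([0.4, 0.6, 1.0])]
sim, keep = screen_sample(layers_g, layers_r)
```

Concatenating features from several layers means the similarity score reflects both low-level texture agreement and higher-level structural agreement, rather than any single scale alone.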
The screening revealed that the main cause of low-quality samples was the camera angle settings, which made the sample surface difficult to recognize or produced shadows on the PV panel surface, degrading image quality. An improper shooting angle typically results in excessive reflections on the panel surface that obscure detailed features, making it difficult for the dust detection algorithm to accurately recognize the dust distribution. In addition, changes in lighting conditions, such as direct sunlight or shadow coverage, can significantly affect the contrast and brightness distribution of the image, increasing the difficulty of detection.
To address these issues, multiple angles and light sources are used in the subsequent sample generation to improve the quality and usability of the samples. These measures will help generate clearer and more stable training data and improve the generalization ability and detection accuracy of the model.

4. Experimental Results

4.1. Experimental Data and Parameter Settings

This paper is based on the following hardware and software platforms:
(1) Blender graphical modeling and rendering platform [22], using the Cycles renderer.
(2) Intel Core i9-12900K processor (Intel, Santa Clara, CA, USA) and ASUS RTX 3090 graphics card (ASUS, Taipei, China).
(3) The dust data generation program was written with the Blender Python API [23]; a total of 4000 dust coverage images were generated.
All other experimental data come from open sources; the original sand and dust images come from the Kaggle PV panel sand and dust dataset, of which 730 images were screened for model training and testing.

4.2. Dust Detection Experiment

The experiments are mainly carried out on YOLO-series algorithms, with no pre-trained weights loaded. The stochastic gradient descent (SGD) algorithm is chosen to optimize the loss function; the learning rate is set to 0.01, the batch size to 8, and the number of iterations to 150. The experiments are divided into five groups, described as follows:
Experiment 1: The original dust images are used as the dataset and divided into 510 images for the training set, 147 for the test set, and 73 for the validation set. Part of the data is shown in Figure 13. The dataset is fed into the YOLOv8 network for training, the network parameters are saved in real time, and the network's dust detection performance is tested on the test set after training is completed.
Experiment 2: The same dataset is input into the YOLOv8-DSDA network for training, and PV panel sand and dust detection performance is evaluated on the test set.
Experiment 3: The number of training samples is expanded to 3500 by rotating and cropping, with all other settings unchanged, and the expanded dataset is input into the YOLOv8 network for training.
Experiment 4: The training set is likewise expanded to 3500 samples using the method of this paper. The expanded training set of 3500 mixed samples is input into the YOLOv8 network for training, and PV panel sand and dust detection performance is evaluated on the test set.
Experiment 5: With the same 3500 mixed samples expanded by the method of this paper, the YOLOv8-DSDA network is trained, and PV panel sand and dust detection performance is evaluated on the test set.
To obtain multiple sets of experimental results, the above experiments are repeated; at the beginning of each repetition, the original 730 PV panel sand and dust images are re-expanded with data enhancement and related operations, and the test set in every experiment consists of the same 147 real PV panel images containing sand and dust. The experimental results are shown in Table 1.
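The dataset partitioning used in Experiment 1 can be sketched as follows. The shuffle seed and filename pattern are illustrative; only the 510/147/73 split of 730 images comes from the paper.

```python
import random

def split_dataset(paths, n_train=510, n_test=147, seed=0):
    """Shuffle the image paths and split them into train/test/validation
    sets of 510, 147, and 73 images (roughly 70/20/10 of 730)."""
    rng = random.Random(seed)
    paths = paths[:]          # copy so the caller's list is untouched
    rng.shuffle(paths)
    return (paths[:n_train],
            paths[n_train:n_train + n_test],
            paths[n_train + n_test:])

images = [f"dust_{i:03d}.jpg" for i in range(730)]   # hypothetical filenames
train, test, val = split_dataset(images)
```

Re-shuffling with a different seed at the start of each repetition gives the independent splits needed for the repeated experiments, while the test set can be held fixed by splitting it off once beforehand.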
The results of some experimental data after 3D image expansion are shown in Figure 14. According to the experimental results in Table 1, it can be found that the accuracy, mAP@50%, and mAP@50-95 of YOLOv8-DSDA are higher than those of YOLOv8 trained with 510 original images, which indicates that the overall performance of the model has been improved by the introduction of SENetV2, DySample, and AKConv modules. The YOLOv8 model trained with the 3D image expansion dataset outperforms the dataset expanded with rotation, stitching, and cropping in all metrics. Notably, the 3D image expansion approach significantly improves the detection accuracy, especially in complex scenes with varying dust patterns.
This suggests that the 3D image extension can better model complex PV panel dust patterns and provide more representative training data. YOLOv8-DSDA significantly outperforms YOLOv8 when using the 3D image expansion dataset, especially in terms of mAP@50% and mAP@50-95. This further validates the superiority of the improved YOLOv8-DSDA model in processing complex data and evaluating fine features in cluttered backgrounds, and shows that it has higher accuracy in detecting dust in complex scenes.
From the above experimental results, it can be seen that the 3D image expansion technique significantly improves the accuracy, recall, and mAP metrics of the model. Compared with the initial YOLOv8 model, the YOLOv8-DSDA model with 3D image expansion improves accuracy by 4.6%, recall by 8.4%, and mAP@50 by 5.7%. To validate the model's detection ability in real scenarios, we ran comparison experiments on a test set of real dust samples, as shown in Figure 15 (from left to right: the test image, YOLOv8 trained on the 510 original images, YOLOv8-DSDA trained on the 510 original images, YOLOv8 trained with traditional image expansion, YOLOv8 trained with 3D data expansion, and YOLOv8-DSDA trained with 3D data expansion). The comparison results show that the 3D image expansion method is significantly better than the traditional rotation, splicing, and cropping expansion methods. This indicates that 3D image expansion can more effectively simulate the complex dust morphology of real scenes and provide richer training data, thereby improving the generalization ability and detection accuracy of the model.
The improved YOLOv8-DSDA model generally outperforms the original YOLOv8 model in various indexes; especially when using the 3D image extended dataset, the detection results are better. This further demonstrates the effectiveness of introducing modules such as SENetV2, DySample, and AKConv, showing that these improvements can enhance the model’s adaptability and detection efficiency when dealing with complex dust patterns and distributions.

4.3. Ablation Study

To systematically assess the impact of each key component of the PV panel dust inspection model on its performance, the same experimental environment and parameters are used throughout, and a series of ablation experiments is conducted on the dataset expanded with 3D-generated data; the ablation results are shown in Figure 16. The contribution of each improvement to overall model performance is quantified by removing the key components one by one. The related experimental results are shown in Table 2.
In terms of comprehensive metrics, the improved YOLOv8-DSDA model significantly outperforms the other models on the dust detection task. In particular, our improved model shows notable performance gains in recall, mAP@50, and mAP@50-95. Compared with the original YOLOv8 model, the metrics of the improved model increase by 0.1% to 3.9%, which highlights the advantage of our improvements in enhancing detection accuracy.
Further analysis shows that the SENetV2 + Dysample configuration performs best in the PV panel dust recognition task. This combination utilizes the channel attention mechanism of SENetV2 and the dynamic up-sampling function of Dysample, which effectively improves the model’s ability to capture both fine and complex dust features. In addition, SENetV2 + AKConv demonstrated better performance, where the combination of the SENetV2 channel attention mechanism and the flexible core design of AKConv significantly improved the model’s detection accuracy of dust.
Through comprehensive comparison, SENetV2, DySample, and AKConv all contribute significantly to the model's prediction accuracy. Building on these advantages, the YOLOv8-DSDA model integrates the dynamic upsampling of DySample and the channel attention of SENetV2, while utilizing the adaptive kernel capability of AKConv to more accurately identify fine dust on PV panels. This integration significantly improves detection accuracy, making the model better suited to real-world PV panel maintenance and inspection tasks.

4.4. Other Algorithm Validation

To verify the effectiveness of the dust dataset generated by the 3D modeling technique in real target detection tasks, this study employs Faster R-CNN, a widely used two-stage target detection model. The experiment evaluates how other classes of target detection models perform when processing PV panel dust images generated by 3D modeling, thereby confirming the utility of these synthetic data in enhancing the training of target detection algorithms.
The two-stage target detection model Faster R-CNN is trained on two datasets: the original 510 images of PV panel dust data and the 3500 mixed dust images obtained after 3D expansion. The experimental results show that Faster R-CNN trained on the expanded 3500-image dataset significantly outperforms its counterpart trained on the original 510-image dataset in terms of precision, recall, mAP@50, and F1 score, which improve by 6.9%, 8%, 7.7%, and 7.4%, respectively. The Faster R-CNN model trained on the data generated using 3D modeling thus shows good accuracy and recall on the dust detection task. Detailed performance data are displayed in Table 3, and the results demonstrate the effectiveness of the synthetic dataset in improving model generalization ability and detection accuracy.
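The F1 scores in Table 3 follow directly from precision and recall as their harmonic mean, so the table can be sanity-checked in a few lines (values taken from Table 3):

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Faster R-CNN rows of Table 3:
print(round(f1_score(0.832, 0.903), 3))  # 0.866 (original 510-image dataset)
print(round(f1_score(0.901, 0.983), 3))  # 0.94  (3500-image expanded dataset)
```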

5. Conclusions

In this paper, an algorithm for PV panel dust detection based on an improved YOLOv8 model is proposed, and the performance of the model is enhanced through several technical means. The proposed method also addresses the difficulty of obtaining dust samples in the physical environment, and the experiments demonstrate the effectiveness and superiority of these improvements. The specific conclusions are as follows:
(1)
The 3D image expansion technique significantly improves the performance of the PV panel dust detection model, especially in terms of generalization ability and detection accuracy. The diverse PV panel dust images generated using the 3D modeling and rendering techniques provide richer and more diverse training data, enabling the model to better cope with complex scenes in real applications.
(2)
The improved YOLOv8-DSDA model exhibits higher accuracy and robustness when dealing with complex data, proving the effectiveness of introducing the SENetV2, DySample, and AKConv modules. These modules enable the improved model to outperform the traditional model on a number of indicators, particularly in detection tasks involving complex scenes and varied dust patterns, verifying its potential and advantages in practical applications.
(3)
The results of multiple experiments show that the improved YOLOv8-DSDA model performs well under various data expansion methods; with the support of the 3D image expansion technology in particular, the detection accuracy and generalization ability of the model are significantly improved. This provides reliable technical support for PV panel dust detection and can effectively cope with the challenges of practical applications.
(4)
The artificial modeling method for dataset expansion proposed in this paper is applicable not only to the YOLO series of algorithms but also to other target detection algorithms, giving the method broad application prospects. In other fields requiring high-precision target detection, such as traffic monitoring and industrial inspection, similar techniques can be used for model improvement and data expansion to enhance detection performance.

Author Contributions

Conceptualization, C.X. and X.L.; methodology, C.X.; software, C.X.; validation, Y.Y., C.X., and Q.L.; formal analysis, L.Z.; investigation, C.X.; resources, C.X.; data curation, L.Z.; writing—original draft preparation, C.X.; writing—review and editing, X.L.; visualization, Y.Y.; supervision, Q.L.; project administration, C.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

F	Aggregation operation.
S	Squeeze operation.
E	Excitation operation.
ψ	Network mapping after offset.
χ	Feature map for a given upsampling scale factor and size.
φ	Original sampling network mapping.
offset	Offset obtained using Conv2d.
Conv2d	Two-dimensional convolution operation.
P_n	Modified coordinates, obtained by adding the calculated offset to the original coordinates.
Y	Calculated offset.
P	Original coordinates.
T	Final feature values obtained using interpolation and resampling.
X_g	Normalized image.
max(X)	Maximum value of the image data.
min(X)	Minimum value of the image data.
S_i	Feature vector of layer i of the original image.
L_i	Feature vector of layer i of the generated image.
f_i	Features of the layer-i cascade.
θ_i	Parameters of the layer-i cascade.
H	Final output vector formed from the cascaded feature vectors of the original image.
I	Final output vector formed from the cascaded feature vectors of the generated image.
⟨H, I⟩	Dot product of H and I.
‖H‖₂	L2 norm of the original-image output vector.
‖I‖₂	L2 norm of the generated-image output vector.
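Several of the symbols above (X_g, ⟨H, I⟩, ‖H‖₂, ‖I‖₂) describe min-max normalization and the cosine similarity between the cascaded feature vectors of the original and generated images. A minimal sketch of both computations (plain Python lists stand in for the actual feature vectors):

```python
import math

def min_max_normalize(x):
    """X_g = (X - min(X)) / (max(X) - min(X)): map values into [0, 1]."""
    lo, hi = min(x), max(x)
    return [(v - lo) / (hi - lo) for v in x]

def cosine_similarity(h, i):
    """<H, I> / (||H||_2 * ||I||_2) between two cascaded feature vectors."""
    dot = sum(a * b for a, b in zip(h, i))
    norm_h = math.sqrt(sum(a * a for a in h))
    norm_i = math.sqrt(sum(b * b for b in i))
    return dot / (norm_h * norm_i)

print(min_max_normalize([0.0, 5.0, 10.0]))        # [0.0, 0.5, 1.0]
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal vectors)
```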

References

  1. Jang, S.L.; Chen, L.J.; Chen, J.H.; Chiu, Y.C. Innovation and production in the global solar photovoltaic industry. Scientometrics 2013, 94, 1021–1036. [Google Scholar] [CrossRef]
  2. Elshazly, E.; El-Rehim, A.A.A.; Kader, A.A.; El-Mahallawi, I. Effect of dust and high temperature on photovoltaics performance in the new capital area. WSEAS Trans. Environ. Dev. 2021, 17. [Google Scholar] [CrossRef]
  3. Mostamandi, S.; Ukhov, A.; Engelbrecht, J.; Shevchenko, I.; Osipov, S.; Stenchikov, G. Fine and coarse dust effects on radiative forcing, mass deposition, and solar devices over the Middle East. J. Geophys. Res. Atmos. 2023, 128, e2023JD039479. [Google Scholar] [CrossRef]
  4. Sayyah, A.; Horenstein, M.N.; Mazumder, M.K. Energy yield loss caused by dust deposition on photovoltaic panels. Sol. Energy 2014, 107, 576–604. [Google Scholar] [CrossRef]
  5. Yang, H.; Wang, H. Numerical simulation of the dust particles deposition on solar photovoltaic panels and its effect on power generation efficiency. Renew. Energy 2022, 201, 1111–1126. [Google Scholar] [CrossRef]
  6. Dantas, G.M.; Mendes, O.L.C.; Maia, S.M.; Alexandria, A.R.D. Dust detection in solar panel using image processing techniques: A review. Res. Soc. Dev. 2020, 9, e321985107. [Google Scholar] [CrossRef]
  7. Yue, S.; Li, M. Study on the formation and evolution mechanism of dust deposition on solar photovoltaic panels. Chem. Pap. 2022, 76, 763–774. [Google Scholar] [CrossRef]
  8. He, H.; Zhou, C.; Yu, P.; Lu, X. Research on a Photovoltaic Panel Dust Detection System Based on Improved Mobilenet Algorithm. In Proceedings of the 2024 IEEE 7th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 15–17 March 2024; IEEE: Piscataway, NJ, USA, 2024. [Google Scholar] [CrossRef]
  9. Elyanboiy, N.; Eloutassi, O.; Khala, M.; Elabbassi, I.; Elhajrat, N.; Elhassouani, Y.; Messaoudi, C. Advanced intelligent fault detection for solar panels: Incorporation of dust coverage ratio calculation. In Proceedings of the 2024 4th International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), Fez, Morocco, 16–17 May 2024; IEEE: Piscataway, NJ, USA, 2024. [Google Scholar] [CrossRef]
  10. Saquib, D.; Nasser, M.N.; Ramaswamy, S. Image Processing Based Dust Detection and prediction of Power using ANN in PV systems. In Proceedings of the 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 August 2020. [Google Scholar] [CrossRef]
  11. Rachid, E.; Mohamed, B.; Youssef, A.; Abdelkader, M. Application of Deep Learning Based Detector YOLOv5 for Soiling Recognition in Photovoltaic Modules. In Proceedings of the 2023 3rd International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET), Mohammedia, Morocco, 18–19 May 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar] [CrossRef]
  12. Varghese, R.; Sambath, M. YOLOv8: A Novel Object Detection Algorithm with Enhanced Performance and Robustness. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, India, 18–19 April 2024; IEEE: Piscataway, NJ, USA, 2024. [Google Scholar] [CrossRef]
  13. Ozen, I.; Subramani, K.; Vadrevu, P.; Perdisci, R. SENet: Visual Detection of Online Social Engineering Attack Campaigns. arXiv 2024, arXiv:2401.05569. [Google Scholar] [CrossRef]
  14. Narayanan, M. SENetV2: Aggregated dense layer for channelwise and global representations. arXiv 2023, arXiv:2311.10807. [Google Scholar] [CrossRef]
  15. Avranas, A.; Kountouris, M. Towards disentangling information paths with coded ResNeXt. Adv. Neural Inf. Process. Syst. 2022, 35, 8926–8939. [Google Scholar] [CrossRef]
  16. Liu, W.; Lu, H.; Fu, H.; Cao, Z. Learning to upsample by learning to sample. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 6027–6037. [Google Scholar] [CrossRef]
  17. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef]
  18. Zhang, X.; Song, Y.; Song, T.; Yang, D.; Ye, Y.; Zhou, J.; Zhang, L. AKConv: Convolutional kernel with arbitrary sampled shapes and arbitrary number of parameters. arXiv 2023, arXiv:2311.11587. [Google Scholar] [CrossRef]
  19. Jin, L.; He, Q.; Li, Z.; Deng, M.; Abbas, A. Variation characteristics of dust in the taklimakan desert. Nat. Hazards 2024, 120, 2129–2153. [Google Scholar] [CrossRef]
  20. Rao, G.R.K.; Sgar, P.V.; Bikku, T.; Prasad, C.; Cherukuri, N. Comparing 3D rendering engines in Blender. In Proceedings of the 2021 2nd International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 7–9 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 489–495. [Google Scholar] [CrossRef]
  21. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016. [Google Scholar] [CrossRef]
  22. Hess, R. Blender Foundations; Taylor & Francis Ltd.: Milton Park, UK, 2010. [Google Scholar]
  23. Conlan, C. The Blender Python API: Precision 3D Modeling and Add-On Development; Apress: New York, NY, USA, 2017. [Google Scholar]
Figure 1. Overall flow chart of the experiment.
Figure 2. Structure of the YOLOv8 model.
Figure 3. Improved YOLOv8 network structure diagram.
Figure 4. SENet module.
Figure 5. SENetV2 module.
Figure 6. DySample dynamic upsampling structure.
Figure 7. AKConv structure.
Figure 8. Three-dimensional modeling of PV panels in Blender.
Figure 9. Dust randomization override script settings.
Figure 10. Surface dust of PV panels at different particle sizes.
Figure 11. ResNet50 network.
Figure 12. Low-quality samples (first row) and high-quality samples (second row).
Figure 13. Photographs of real samples.
Figure 14. Partial experimental data results after 3D image expansion.
Figure 15. Display of inference results.
Figure 16. Effect of different modules on mAP@50 and R.
Table 1. Comparison of experimental results.

| Model | Training Data | Precision | Recall | mAP@50 | mAP@50-95 | F1 |
| YOLOv8 | 510 pictures of dust | 0.917 | 0.829 | 0.891 | 0.528 | 0.87 |
| YOLOv8-DSDA | 510 pictures of dust | 0.941 | 0.84 | 0.912 | 0.522 | 0.89 |
| YOLOv8 | Rotated, spliced, cropped and expanded 3500 dust images | 0.919 | 0.863 | 0.922 | 0.672 | 0.89 |
| YOLOv8 | 3500 images blended after 3D image expansion | 0.956 | 0.895 | 0.932 | 0.673 | 0.924 |
| YOLOv8-DSDA | 3500 images blended after 3D image expansion | 0.963 | 0.913 | 0.948 | 0.679 | 0.937 |
Table 2. Comparison of experimental results of different improved methods.

| Model | Training Data | Precision | Recall | mAP@50 | mAP@50-95 | F1 | FPS |
| YOLOv8 | 3500 images blended after 3D image expansion | 0.956 | 0.895 | 0.932 | 0.673 | 0.924 | 87.58 |
| YOLOv8 + SENetV2 + DySample | 3500 images blended after 3D image expansion | 0.957 | 0.892 | 0.935 | 0.692 | 0.923 | 96.46 |
| YOLOv8 + SENetV2 + AKConv | 3500 images blended after 3D image expansion | 0.968 | 0.907 | 0.938 | 0.685 | 0.936 | 81.04 |
| YOLOv8 + DySample + AKConv | 3500 images blended after 3D image expansion | 0.962 | 0.905 | 0.945 | 0.702 | 0.932 | 92.63 |
| YOLOv8-DSDA | 3500 images blended after 3D image expansion | 0.963 | 0.913 | 0.948 | 0.679 | 0.937 | 95.23 |
Table 3. Faster R-CNN comparison experiments.

| Model | Training Data | Precision | Recall | mAP@50 | F1 |
| Faster R-CNN | 510 pictures of dust | 0.832 | 0.903 | 0.903 | 0.866 |
| Faster R-CNN | 3500 images blended after 3D image expansion | 0.901 | 0.983 | 0.980 | 0.940 |

Share and Cite

MDPI and ACS Style

Xie, C.; Li, Q.; Yang, Y.; Zhang, L.; Liu, X. Research on a Photovoltaic Panel Dust Detection Algorithm Based on 3D Data Generation. Energies 2024, 17, 5222. https://doi.org/10.3390/en17205222
