Article

Deep Learning-Based Dust Detection on Solar Panels: A Low-Cost Sustainable Solution for Increased Solar Power Generation

1 Renewable Energy and Environmental Technology Center, University of Tabuk, Tabuk 47913, Saudi Arabia
2 Electrical Engineering Department, Faculty of Engineering, University of Tabuk, Tabuk 47913, Saudi Arabia
3 Department of Computer Science, National University of Computer and Emerging Sciences (NUCES-FAST), Jamrud Road 160 Industrial Estate Road, Phase 1 Hayatabad, Peshawar 25100, Khyber Pakhtunkhwa, Pakistan
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(19), 8664; https://doi.org/10.3390/su16198664
Submission received: 14 August 2024 / Revised: 29 September 2024 / Accepted: 4 October 2024 / Published: 7 October 2024
(This article belongs to the Special Issue Secure, Sustainable Smart Cities and the IoT)

Abstract

The world is shifting towards renewable energy sources due to the harmful effects of fossil fuel-based power generation in the form of global warming and climate change. Among renewable energy sources, solar-based power generation remains at the top of the list as a clean and carbon-cutting alternative to fossil fuels. Naturally, the sites chosen for installing solar parks to generate electricity are the ones that receive maximum solar irradiance throughout the year. Consequently, such sites expose the solar panels to challenges such as increased temperature, humidity, and high dust levels that negatively affect their power generation capability. In this work, we are concerned with the detection of dust from images of solar panels so that the cleaning process can be carried out in time to avoid power losses due to dust accumulation on the panel surface. To this end, we utilize state-of-the-art deep learning-based image classification models and evaluate them on a publicly available dataset to identify the one that gives the maximum classification accuracy for dusty solar panel detection. We use 20 pre-trained deep learning models to encode the images, and these encodings are then used to train and validate four variants of a support vector machine (SVM). Among the 20 models, we obtain the maximum classification accuracy of 86.79% when the images are encoded with the pre-trained DenseNet169 model and the encodings are classified with a linear SVM.

1. Introduction

Traditionally, the task of image classification was performed using classical methods that relied heavily on manual feature extraction [1] using techniques such as color histograms [2], edge detection filters [3], local feature descriptors [4], and texture measures [5]. Once features were extracted, these methods often employed straightforward machine learning algorithms such as support vector machines (SVMs) [6] or k-nearest neighbors (k-NN) for classification.
However, with the rise of deep learning-based methods, particularly Convolutional Neural Networks (CNNs) [1], the paradigm shifted significantly. CNNs and other deep learning models automate the feature extraction process [7], learning optimal features directly from raw image data during the training phase. This capability not only improved the accuracy and efficiency of image classification tasks but also scaled better with increasing data sizes and complexity [8,9].
Despite these advancements, image classification remains a challenging field. One of the major hurdles is the variability in images due to changes in illumination, viewpoint, and occlusion, which can negatively affect the performance of classification models [10]. Yet another major challenge is the requirement of vast amounts of labeled data to train deep learning models [11].
Image classification has also proven to be exceptionally beneficial in a vast majority of application areas such as medical imaging [12,13], autonomous vehicles [14,15], agriculture [16], and retail [17]. Each of these applications demonstrates the versatile potential of image classification not only to automate and enhance existing processes but also to provide insights and capabilities that were previously unattainable.
The disadvantages of fossil fuel-based power generation have caused a global shift towards renewable energy sources [18]. Figure 1 shows this global trend in detail, where solar power generation remains the major contributor towards the achievement of a sustainable energy supply. The use of fossil fuels causes the emission of greenhouse gases such as carbon dioxide and methane, which are linked to the phenomena of climate change and global warming [19]. Additionally, fossil fuels are finite resources, leading to concerns over long-term sustainability and energy security as these resources become scarcer and their extraction becomes more challenging and expensive.
In contrast, renewable energy sources offer numerous advantages, particularly in terms of environmental and economic benefits [20]. These sources, which include solar, wind, hydro, and geothermal energy, are abundant and essentially inexhaustible [21]. Using renewable energy reduces dependence on imported fuels, which can enhance national security and reduce exposure to volatile fossil fuel markets. Moreover, renewable sources produce little to no greenhouse gas emissions during operation, helping to mitigate the impact of energy production on global climate change [21].
In the domain of renewable energy, solar panels, in particular, offer unique benefits [22]. They convert sunlight directly into electricity using photovoltaic (PV) cells, operating silently and with minimal environmental impact compared to traditional power plants. Solar power generation does not produce air pollution or greenhouse gases, and after the initial installation, the operating costs are relatively low since solar panels require little maintenance and provide free energy as long as sunlight is available.
The placement of large solar power plants often requires careful consideration of several factors [23]. These installations typically occupy extensive land areas and are preferably located in regions with high solar irradiance to maximize energy production. Common sites include deserts or large flatlands that receive extensive sunlight throughout the year. It helps that these areas receive direct sunlight, which is an essential requirement for efficient solar power generation.
However, depending on a multitude of other factors, the solar panels at these locations may not generate power particularly well [24]. One of the challenges that has a significant effect on solar panel power generation is dirt accumulation [25] on the panel’s surface, as shown in Figure 2. In fact, in many large solar plants, which are commonly installed in arid or semi-arid regions, the panel surfaces are dusty and sandy most of the time. This is partly due to frequent winds carrying soil onto the panels and partly due to the lack of rainfall that would wash the dust away. Even a single layer of dust blocks sunlight from reaching the photovoltaic cells, making the panels less efficient in converting sunlight into electricity [26]. Regular cleaning may be needed to maintain the optimal output of photovoltaic panels because they can suffer significant losses due to dust accumulation. Additionally, dust accumulation decreases overall energy production in the affected spots and can lead to overheating, which shortens panel lifespan.
Deep learning-based binary image classification can be used effectively to differentiate dusty solar panels from clean ones using image data. The benefits of such an automated image-based system are non-trivial, primarily due to its economic advantages as well as its time-saving and efficiency-raising opportunities [27]. An automated cleaning system based on the recognition of dusty solar panels from images reduces the labor involved in manual cleaning, where employees would first have to survey the vast territories over which large solar farms are spread [28]. Second, the time saving is also significant, as the system is capable of processing vast volumes of image data and providing immediate feedback, ensuring the prompt cleaning of the panels. Moreover, by keeping the panels dust-free, the efficiency of the solar farm as an energy generation facility remains high; therefore, the photovoltaic system will produce electricity effectively and with as few interruptions as possible.
We propose an image classification framework that utilizes pre-trained deep learning models for image representation. These pre-trained models are trained on a large and varied dataset of images, such as ImageNet [7]. The images are encoded with these pre-trained models into feature vectors that are then used to train a relatively simple classifier, such as a support vector machine (SVM), to perform the task of image-based dusty solar panel recognition. In such a setup, the deep learning model takes on the role of a feature extractor, while the SVM acts upon these features to classify the observed image as dusty or clean. This two-stage method takes advantage of the pre-trained model’s power to capture complex patterns and combines it with the efficient binary classification of traditional machine learning models.
A pre-trained model could instead be trained further on images of dusty and clean solar panels to perform the binary classification task of recognizing dusty solar panels. However, such training would not be as computationally efficient as the proposed approach because the fully connected layers of the pre-trained models take more time to train than a linear SVM. The proposed two-step approach relieves the computational load during the training stage and, due to its high accuracy and rapid execution times, can help provide a real-time solution for the recognition of dusty solar panels.

2. Literature Review

Dust, dirt, and other particles are responsible for causing soiling losses, which in turn decrease the efficiency of PV modules [29]. Dust encompasses particles in the environment with a diameter of less than 10 mm, originating from several sources including sand, dirt, rocks, construction waste, volcanic emissions, eroded limestone, and bird excrement [30]. The accumulation of dust particles on the panels may worsen the soiling impact and consistently decrease the total power production. The deposition of dust particles is primarily affected by the sun’s angle of inclination and the material of the PV module’s cover. Other factors that impact dust build-up include ambient temperature, tilt angle, soil conditions, and the presence of plants in the vicinity. The accumulation of dust on the PV surface occurs through three mechanisms: occult deposition (such as mist, clouds, high humidity, moisture in fog, and dew), dry deposition (caused by wind), and wet deposition (resulting from rainfall).
The chemical composition of the dust and the minerals it carries depend on the specific environmental conditions of a given area. The location of PV installations also affects the accumulation of dust: greater deposition rates occur near industrial and volcanic regions, as well as in areas prone to sandstorms [31]. Moreover, for the panels to work properly, they must be installed in the open air, where dust from the ambient environment settles in layers on them until it partially cuts off the sunlight. The layer that forms inhibits light from reaching the cells, which in turn reduces the efficiency of the power production process [32].
In outdoor environments, dust and sand commonly accumulate on PV panels. Different PV systems can have quite different levels of accumulation because parameters such as the amount of sunlight a panel receives and its orientation vary with time and place, and the accumulation also depends on the surface area of the panel, its tilt angle, and the wind speed [33]. Detecting and rectifying this condition early is vital, as it can otherwise lead to a 15% decrease in the monthly energy production of a photovoltaic system [34]. Much research has been done to investigate the effect of dust on energy generation in PV systems [35,36,37]. Such studies are characterized by running a series of tests in which different concentrations of dust are applied to the surface of a photovoltaic panel. The sole intention of such tests is to assess the extent to which the power output of the entire photovoltaic system decreases as dust collects. According to the study in [38], it is evident that dust collection has a direct impact on the efficiency and overall productivity of the tested panel. Furthermore, the authors propose using an artificial neural network (ANN) to predict the impact of dust particle size on the power generated by the solar panels.
The works in [39,40] also examine the influence of dust particulate characteristics, including size and composition, on the power generation of PV systems. The conclusion of these studies is that the efficiency loss of a PV panel may reach up to 57% due to dust accumulation. Given all these studies, it is clear that the cleaning process of solar panels should be optimized in order to reduce power generation losses.
Further research aims to develop a mathematical foundation for predicting power output changes due to changing dust levels. As shown in [41], the authors conducted an investigation that included six different varieties of dust, applying differing amounts of each onto the surface of PV panels. The researchers then examined how such dust accumulation affects the PV system output. It was concluded that the smallest particles have the greatest negative effect on the power generation ability of the PV system, and it was also demonstrated that an increase in the amount of dust results in power losses at the output. From the experimental data, they also derived six models, one per dust type, to predict these losses at given irradiance and concentration levels. However, in order to use these models, a priori knowledge of the dust composition on the panel is required. Furthermore, the models do not consider mixtures of multiple contaminants, which are common in environmental samples.
In [42], an experiment was performed in which various quantities of dust were applied to the surface of a photovoltaic panel, and a model was developed to assess the efficiency of the panel. First, it was discovered that the collection of dust leads to an increase in the surface temperature of the PV panel. Second, a decrease in the short-circuit current and open-circuit voltage was observed. In addition, it is reported that the most severe adverse impacts occurred during the first stages of dust accumulation. All these works help in fully understanding the role of dust accumulation in the power generation process of solar panels. At the same time, they also help to estimate the performance of the panel as the amount of dust deposited on its surface changes. Furthermore, according to these studies, simply examining electrical variables may not be sufficient for accurately indicating the level of dust, because the estimation accuracy may depend on environmental conditions such as solar irradiance and temperature. The studies state that, given the nonlinear nature of such estimation, all of these factors should be taken into account. The works are important because they prove that ANNs can be used to estimate the aforementioned levels of accumulation. However, they do not offer an approach for automatically indicating when it is urgent to deal with dust buildup.
The common methods in this area involve applying statistical analysis and linear regression [43,44], image processing techniques [45,46], and artificial intelligence such as ANNs [47] or fuzzy logic [48]. The mentioned studies analyze various problems related to PV panels, including the performance degradation due to dust accumulation. Each study takes a different line of approach by using data extracted from voltage, current, humidity, and temperature measurements. Such data and the methods applied to it give accurate information about problems such as power generation forecasting; however, the problem of dust accumulation cannot be approached with these methods. To this end, machine learning has achieved remarkable results. For instance, in [49], the authors utilize a deep neural network in combination with image processing techniques, including segmentation and clustering, for the identification of the regions of the solar panel surface where dust has accumulated. In addition, the concentration of the dust can also be estimated with their proposed model.
A practical solution is introduced in [24]. This solution suggests using regression models and decision trees to accurately measure the amount of dust present on a PV panel. In their work, the authors provide a model that estimates a dust accumulation value ranging from 0 g/m2 to 0.9 g/m2. While these recent studies may effectively categorize and measure varying degrees of dust accumulation on a PV panel, they do not provide guidance on the optimal timing for performing maintenance activities.
To conclude, most of the recent methods employ CNN-based solutions to detect dust on solar panels from their images. They either use CNN architectures that are pre-trained on large-scale image datasets such as ImageNet or use similar novel architectures that are largely based on the same modules/layers proposed by those pre-trained ones. The main contribution of this paper is the extensive evaluation of these widely used pre-trained CNN architectures and making the achieved results available to the research community. Hence, for future studies, our effort makes the process of selecting a CNN architecture for image-based solar dust detection convenient and time-saving, and it may also prove helpful in the proposal of more robust and accurate novel architectures. Besides, this study also makes it convenient to select the best architecture from the application point of view. This can prove instrumental for solar panel cleaning systems that use image-based solutions implemented via deep learning algorithms.

3. Dataset

Most of the images of the dataset are from a public online repository [50]. The dataset consists of images of clean and dusty solar panels acquired at various viewpoints and times of day and, in the case of dusty panels, with various types of dirt. The total number of images is 1068, of which 405 show clean solar panels and 663 show dirty solar panels.
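For illustration, the sketch below shows one way such a folder-organized dataset could be loaded and split before feature extraction; the directory layout (clean/ and dusty/ sub-folders), the .jpg extension, and the 80/20 stratified split are assumptions and are not taken from the dataset description.

```python
# Minimal sketch: collecting image paths and labels from a folder-per-class layout.
# The directory names ("clean", "dusty"), the file extension, and the split ratio
# are assumptions for illustration; they are not specified in the paper.
from pathlib import Path
from sklearn.model_selection import train_test_split

def load_image_paths(root="solar_panel_dataset"):
    paths, labels = [], []
    for label, class_dir in enumerate(["clean", "dusty"]):  # 0 = clean, 1 = dusty
        for p in sorted(Path(root, class_dir).glob("*.jpg")):
            paths.append(str(p))
            labels.append(label)
    return paths, labels

paths, labels = load_image_paths()
# A stratified split keeps the clean/dusty ratio (405/663 in our dataset) in both subsets.
train_paths, test_paths, y_train, y_test = train_test_split(
    paths, labels, test_size=0.2, stratify=labels, random_state=0)
```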

4. Methodology

The following steps, which are shown in Figure 3, are followed to perform our image-based classification of solar panels.
  • Step 1: Selecting a Pre-Trained Model
Pre-trained models such as VGG and ResNet are widely used deep learning models, developed primarily for image recognition tasks. Both families of networks have been trained on large datasets such as ImageNet and can encode generic features from images that are highly effective for various visual recognition tasks.
  • Step 2: Image Representation
When an image is passed through a pre-trained network such as VGG or ResNet, each layer of the network processes the image and transforms it into a set of feature maps. These feature maps become more abstract as one moves deeper into the network. By discarding the final classification layers, we utilize these deep convolutional layers to extract a rich feature set that describes the image. However, the resulting feature map is of high dimension, so it is reduced to a single feature vector via a Global Average Pooling (GAP) layer. GAP not only reduces the number of parameters but also helps reduce overfitting caused by the high dimensionality of the feature map. For instance, in the VGG19 model, a global average pooling layer can be applied over the final feature map. If the final feature map has dimensions of 7 × 7 × 512, GAP computes the average of each 7 × 7 grid across the 512 channels, resulting in a 512-D vector. This vector is then used to represent the entire image for image classification (a minimal code sketch of the full encoding and classification pipeline is given after Step 6).
  • Step 3: Preparing the Dataset
For any image classification task, we first need to compile a dataset of images and their corresponding labels. Each image in the dataset is passed through the pre-trained CNN, using only its convolutional base to extract features. The output is a set of feature vectors—one for each image.
  • Step 4: Training a Classifier
Once we have these feature representations, we can use them as input to train a classifier. A linear SVM is a popular choice due to its effectiveness in handling high-dimensional data and its ability to separate classes with linear decision boundaries in such high-dimensional feature spaces.
  • Step 5: Model Training and Validation
Using the extracted features, the SVM is trained on a subset of the dataset (training set). The effectiveness of this classifier is then validated using another subset (validation set) to check for overfitting and to tune any hyperparameters. Finally, the model’s performance is evaluated on a separate test set to gauge its real-world applicability.
  • Step 6: Testing and Classification
In the testing phase, images from the test set are also passed through the same pre-trained network to extract features. These features are then fed into the trained SVM to classify the images. The SVM’s predictions are compared against the actual labels to determine the classification accuracy. Using pre-trained models as feature extractors followed by a linear SVM for classification allows us to leverage the power of deep learning even with smaller datasets, or in scenarios where training a full CNN might be computationally expensive. This approach is versatile and can be adapted to various types of image classification tasks; a minimal code sketch of the pipeline is given below.
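The following is a minimal end-to-end sketch of Steps 1 to 6, assuming a TensorFlow/Keras and scikit-learn environment. The 224 × 224 input size and the SVM regularization constant are illustrative assumptions rather than the exact settings used in our experiments, and `train_paths`, `test_paths`, `y_train`, and `y_test` are assumed to come from a prior dataset split.

```python
# Minimal end-to-end sketch of the described pipeline (Steps 1-6); hyperparameters
# and the input size are illustrative assumptions, not the paper's exact settings.
import numpy as np
from tensorflow.keras.applications import DenseNet169
from tensorflow.keras.applications.densenet import preprocess_input
from tensorflow.keras.utils import load_img, img_to_array
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Step 1: pre-trained backbone; include_top=False drops the ImageNet classifier,
# pooling="avg" appends the Global Average Pooling layer described in Step 2.
encoder = DenseNet169(weights="imagenet", include_top=False, pooling="avg")

def encode_images(image_paths, target_size=(224, 224)):
    """Steps 2-3: turn every image into a single fixed-length feature vector."""
    batch = np.stack([
        img_to_array(load_img(p, target_size=target_size)) for p in image_paths
    ])
    return encoder.predict(preprocess_input(batch), verbose=0)

X_train = encode_images(train_paths)   # train_paths/y_train from the dataset split
X_test = encode_images(test_paths)

# Steps 4-5: train a linear SVM on the encodings (C is an assumed value).
svm = SVC(kernel="linear", C=1.0)
svm.fit(X_train, y_train)

# Step 6: classify the held-out images and measure accuracy.
print("Accuracy:", accuracy_score(y_test, svm.predict(X_test)))
```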
Below is a brief overview of the listed Convolutional Neural Network (CNN) models, which are widely used in various computer vision tasks; their architectural details are summarized in Table 1:
  • VGG16 and VGG19: Developed by researchers at Oxford, both VGG16 and VGG19 are deep convolutional networks known for their simplicity and depth. VGG16 consists of 16 layers, and VGG19 has 19 layers. They are characterized by their use of 3 × 3 convolutional layers stacked on top of each other in increasing depth, and they have been very influential in the area of deep learning for image recognition.
  • ResNet50, ResNet101, ResNet152: ResNet, which stands for Residual Network, introduces a novel architecture with “skip connections” or “shortcut connections” that allow it to train deeper networks. The numbers (50, 101, 152) refer to the number of layers in the networks. These skip connections help to avoid the problem of vanishing gradients by allowing direct paths for gradients to propagate.
  • DenseNet121, DenseNet169, DenseNet201: DenseNet (Densely Connected Convolutional Networks) operates on the principle that adding direct connections from any layer to all subsequent layers makes the network easier to train. This is because each layer receives additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers. The numbers indicate the depth of the network.
  • MobileNet and MobileNetV2: MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build lightweight deep neural networks. These models are designed for mobile and resource-constrained environments without compromising on performance. MobileNetV2 introduces linear bottlenecks and inverted residuals as improvements over the original MobileNet.
  • Xception: Xception, which stands for “Extreme Inception”, modifies the original Inception architecture by replacing the Inception modules with depthwise separable convolutions. This model pushes the boundaries of depth and complexity beyond what is achievable with Inception models.
  • InceptionV3 and InceptionResNetV2: InceptionV3 improves upon earlier versions of Inception by adjusting the architecture for better performance and computational efficiency. InceptionResNetV2 combines the Inception architecture with residual connections, which makes it possible to train very deep networks.
  • NASNetLarge and NASNetMobile: NASNet architectures are designed using a search method that automatically learns the best convolutional layer (or “cell”) designs. NASNetLarge and NASNetMobile are the results of this architecture search, where the former is designed for maximizing accuracy and the latter for mobile environments.
  • EfficientNetB0, EfficientNetB1, EfficientNetB2, EfficientNetB3, EfficientNetB4: The EfficientNet models scale up CNNs in a more structured way. Starting from a baseline model (EfficientNetB0), each subsequent model (B1 to B4 and beyond) increases the depth, width, and resolution of the layers based on a set of scaling coefficients. These models achieve excellent accuracy and efficiency in both resources and performance.
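As a brief illustration of how these backbones are swapped in as encoders (a sketch assuming TensorFlow/Keras; only four of the 20 models are shown), each can be instantiated without its classification head and with a GAP layer appended:

```python
# Minimal sketch: instantiating a few of the 20 evaluated backbones as GAP encoders
# and printing the length of the feature vector each one produces.
from tensorflow.keras import applications as apps

backbones = {
    "DenseNet169": apps.DenseNet169,
    "DenseNet201": apps.DenseNet201,
    "MobileNet": apps.MobileNet,
    "EfficientNetB0": apps.EfficientNetB0,
}
for name, build in backbones.items():
    encoder = build(weights="imagenet", include_top=False, pooling="avg")
    print(f"{name}: {encoder.output_shape[-1]}-D feature vector")
```

The remaining models listed in Table 1 can be added to the dictionary in the same way.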
Table 1. Summary of Convolutional Neural Network Models.

| Model | Number of Layers | Output Vector Size | Trainable Parameters |
|---|---|---|---|
| VGG16 | 16 layers (Conv, Max Pool, FC) | 4096 | ~138 M |
| VGG19 | 19 layers (Conv, Max Pool, FC) | 4096 | ~143 M |
| ResNet50 | 50 layers | 2048 | ~25 M |
| ResNet101 | 101 layers | 2048 | ~44 M |
| ResNet152 | 152 layers | 2048 | ~60 M |
| DenseNet121 | 121 layers | 1024 | ~8 M |
| DenseNet169 | 169 layers | 1664 | ~14 M |
| DenseNet201 | 201 layers | 1920 | ~20 M |
| MobileNet | 28 layers (Depthwise Conv) | 1024 | ~4.2 M |
| MobileNetV2 | 53 layers | 1280 | ~3.5 M |
| Xception | 71 layers (Depthwise Conv) | 2048 | ~23 M |
| InceptionV3 | 48 layers | 2048 | ~23.8 M |
| InceptionResNetV2 | Over 100 layers | 1536 | ~55.8 M |
| NASNetLarge | Adjustable | 4032 | ~88.9 M |
| NASNetMobile | Adjustable | 1056 | ~5.3 M |
| EfficientNetB0 | ~237 layers | 1280 | ~5.3 M |
| EfficientNetB1 | Scaled up from B0 | 1280 | ~7.8 M |
| EfficientNetB2 | Scaled up from B1 | 1280 | ~9.2 M |
| EfficientNetB3 | Scaled up from B2 | 1280 | ~12 M |
| EfficientNetB4 | Scaled up from B3 | 1280 | ~19 M |

5. Results and Discussion

The results of the evaluated models with the four SVM kernels are summarized in Table 2. A detailed discussion of the achieved results follows.

5.1. Understanding Key Metrics

  • Classification Accuracy: Measures the percentage of correctly predicted instances out of the total instances. It reflects the overall effectiveness of the model in classifying images correctly.
  • Precision: Indicates the proportion of positive identifications that were actually correct. A high precision means a lower false positive rate.
  • Recall (Sensitivity): Measures the proportion of actual positives that were correctly identified, indicating the model’s ability to find all relevant cases within the dataset.
  • F1 Score: Combines precision and recall into a single metric by taking their harmonic mean. It is particularly useful when the classes are imbalanced, providing a balance between precision and recall.
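For reference, these four metrics can be computed from the SVM predictions as in the following sketch (scikit-learn assumed; `y_test` and `y_pred` are hypothetical arrays of true and predicted labels, with 1 denoting a dusty panel).

```python
# Minimal sketch: computing the four reported metrics with scikit-learn.
# y_test and y_pred are assumed arrays of true and predicted labels (1 = dusty).
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

accuracy = accuracy_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)  # fraction of predicted dusty panels that are truly dusty
recall = recall_score(y_test, y_pred)        # fraction of truly dusty panels that are detected
f1 = f1_score(y_test, y_pred)                # harmonic mean of precision and recall
print(f"Acc {accuracy:.4f}  P {precision:.4f}  R {recall:.4f}  F1 {f1:.4f}")
```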

5.2. Classification Accuracy Insights

The classification accuracies achieved by all the pre-trained models with the various SVM kernels are shown in Table 2. The linear SVM performs the best among all the kernels, and hence its results are shown in Figure 4 for all the pre-trained models. In the case of the linear SVM, DenseNet169 outperformed the rest of the models by attaining an accuracy of 86.79%, followed by MobileNet with 85.38%, demonstrating their strong performance on the task of solar panel classification. EfficientNetB2 shows the least accurate performance, with a classification accuracy of 64.15%. Similarly, in the case of the RBF, polynomial, and sigmoid kernels, DenseNet169 and MobileNet remain the top performers.
To summarize, regardless of the type of kernel used, DenseNet169 consistently outperforms all the other pre-trained models. This indicates that the architecture of DenseNet169 is very well suited to image encoding for the task of image-based classification of solar panels. Secondly, when these encodings are used for image classification with an SVM, a linear SVM performs better than the kernel-based SVMs, which indicates that the dimensionality of the pre-trained encodings is already sufficient for the task at hand. Hence, the encoded vectors need not be projected into higher-dimensional kernel spaces; doing so may cause the “curse of dimensionality” and hence negatively affect the classification accuracy [51]. This is evident from the results achieved by the RBF, sigmoid, and polynomial kernels, all of which perform worse than the linear SVM. Regarding the pre-trained models, although less complex models may struggle to attain high accuracy, it is important to note that more complex models generally perform better across multiple kernels. This highlights the significance of model complexity in kernel-based approaches. Nevertheless, additional investigation and testing are required to completely understand the correlation between model architecture and SVM kernel choice.
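A minimal sketch of how the four kernels can be compared on the same encodings is shown below (scikit-learn assumed; default hyperparameters are used here for illustration, and `X_train`, `y_train`, `X_test`, `y_test` come from the earlier pipeline sketch).

```python
# Minimal sketch: comparing the four SVM kernels on the same DenseNet169 encodings.
# X_train, y_train, X_test, y_test are assumed to come from the earlier pipeline.
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

for kernel in ["linear", "rbf", "poly", "sigmoid"]:
    clf = SVC(kernel=kernel)          # default hyperparameters; tuning is omitted here
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{kernel:>8s} kernel accuracy: {acc:.4f}")
```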

5.3. Precision Insights

Precision is also led by DenseNet169 with the LINEAR kernel at 88.24%. This suggests that when DenseNet169 predicts an image as belonging to a class, it is correct most of the time. The lower precision scores with the POLY and SIGMOID kernels across most models could indicate a higher number of false positives.

5.4. Recall Insights

Recall is exceptionally high for many models when using the RBF kernel, with DenseNet169 reaching up to 96.97%. This indicates that the RBF kernel is particularly good at ensuring that most true positive cases are captured, but at the cost of potentially more false positives, as seen in the precision results.

5.5. F1 Score Insights

The F1 scores are balanced, with DenseNet169 again leading with the LINEAR kernel at 89.55%. High F1 scores with the LINEAR and RBF kernels suggest that these models are effective at balancing the trade-off between precision and recall, making them suitable for cases where both false positives and false negatives are costly.

5.6. Summary of Results and Top Performing Models

Figure 5 summarizes the results of the Top-3 performing models. Overall, DenseNet169, MobileNet, and DenseNet201 consistently show top performance across the metrics, especially with the LINEAR kernel. These models and kernels show a robust capability in handling the complexities of binary image classification with high accuracy, precision, recall, and F1 scores, indicating their suitability for practical applications.
The detailed confusion matrices of all the evaluated models are shown in Figure 6 in order of decreasing performance. For instance, the very first confusion matrix is that of DenseNet169, the top performer on the dataset, which is then followed by the confusion matrix of DenseNet201, the second-best performer, and so on. It can be seen that the top four performers detect the same number of true positives, i.e., dusty solar panels, which is the most desired result and the main aim of this paper. With this result, we can conclude that the DenseNet variants and MobileNet are the most suitable architectures for image encoding of solar panels for the image-based identification of dusty solar panels.

5.7. Generalization Capability

In order to evaluate the generalization capability of the best performing model, we collected a new test dataset that contains 100 images of dusty solar panels from different sites. These images are not used in the performance evaluation of the pre-trained models and are hence more suitable to validate the generalization capability of the best performing model and classifier combination, which in our case is DenseNet169 for image encoding and a linear SVM for classification. We follow the same pipeline for these images. The images of the main dataset and of the exclusive test dataset are both encoded with the pre-trained DenseNet169 model. The encodings of the main dataset are used with a linear SVM to perform a 5-fold stratified cross-validation, from which the best performing SVM model is selected. This model is then used to classify the 100 images of the exclusive test dataset. For this dataset, we obtain a classification accuracy of 88.5%, a precision of 1.00, a recall of 0.88, and an F1-score of 0.93. Some examples of correctly and wrongly classified images are shown in Figure 7.
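A minimal sketch of this validation protocol is given below, assuming scikit-learn, NumPy arrays `X` and `y` holding the DenseNet169 encodings and labels of the main dataset, and `X_new` holding the encodings of the 100 exclusive dusty-panel images; selecting the fold model with the highest validation accuracy is an assumption made for illustration.

```python
# Minimal sketch: 5-fold stratified cross-validation on the main dataset, followed by
# evaluation on the exclusive 100-image test set (all images are dusty, i.e., label 1).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# X, y: DenseNet169 encodings and labels of the main dataset (NumPy arrays, assumed).
# X_new: encodings of the 100 exclusive dusty-panel images (assumed).
best_svm, best_acc = None, 0.0
for train_idx, val_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    svm = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
    acc = accuracy_score(y[val_idx], svm.predict(X[val_idx]))
    if acc > best_acc:                 # keep the fold model with the best validation accuracy
        best_svm, best_acc = svm, acc

y_new = np.ones(len(X_new), dtype=int)   # every exclusive test image is dusty
pred = best_svm.predict(X_new)
print(accuracy_score(y_new, pred), precision_score(y_new, pred),
      recall_score(y_new, pred), f1_score(y_new, pred))
```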

5.8. Contributing Image Regions towards Dusty Solar Panels Prediction

We also carried out a qualitative analysis of the image regions/patches that contribute towards the final decision of the SVM classifier in predicting the label of an image. Since we are particularly interested in the detection of dust on the solar panel surface, we perform this analysis on the dusty solar panel images shown in Figure 8. In addition, since our task is image-based binary classification, i.e., dust or no dust, we are not dealing with determining the quantity or thickness of the dust layer. Such binary classification is naturally a first step towards a detailed and fine-grained classification that deals with the type, thickness, etc., of the dust accumulated on the panel’s surface.
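The region boundaries shown in Figure 8 can be produced with a model-agnostic explainer such as LIME [52]. The sketch below outlines one way to do this, assuming the lime and scikit-image packages, the DenseNet169 encoder from the earlier pipeline, an SVM trained with probability estimates enabled, and a hypothetical image file name.

```python
# Minimal sketch: LIME explanation of the SVM decision for one dusty-panel image.
# Assumes `encoder` (DenseNet169 with GAP output) and an SVM trained with
# probability=True from the earlier pipeline; lime and scikit-image are required.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries
from tensorflow.keras.applications.densenet import preprocess_input
from tensorflow.keras.utils import load_img, img_to_array

def classifier_fn(images):
    """LIME passes batches of perturbed images; encode them and return class probabilities."""
    feats = encoder.predict(preprocess_input(np.array(images, dtype=np.float32)), verbose=0)
    return svm.predict_proba(feats)

image = img_to_array(load_img("dusty_panel_example.jpg", target_size=(224, 224)))  # hypothetical file
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image.astype("double"), classifier_fn,
                                         top_labels=2, hide_color=0, num_samples=1000)
# Overlay the boundaries of the most influential superpixels on the image.
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                            positive_only=True, num_features=5, hide_rest=False)
overlay = mark_boundaries(temp / 255.0, mask)
```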

6. Conclusions and Future Work

Dust accumulation on the surface of solar panels reduces their power generation capability, which makes it important to perform inspections so that they can be cleaned. However, such manual inspection becomes time-consuming and labor-intensive when it comes to huge solar parks. Hence, we proposed to perform such inspection using images of solar panels from cameras that may already be installed for security or surveillance. We extensively evaluated 20 pre-trained deep learning models to classify images of solar panels as dusty or clean, hence detecting the dusty solar panels. We achieved a maximum classification accuracy of 86.79% with DenseNet169 when used with a linear support vector machine.
In the future, we plan to expand our dataset, use more state-of-the-art models to achieve higher classification accuracy, and then deploy such a model in a real-time scenario.

Author Contributions

Conceptualization, A.M.A., H.A. (Hani Albalawi), A.W. and H.A. (Hafeez Anwar); Data curation, A.W. and H.M.E.-H.; Formal analysis, H.M.E.-H.; Funding acquisition, A.M.A. and H.A. (Hani Albalawi); Investigation, A.M.A. and H.A. (Hani Albalawi); Methodology, A.M.A., H.A. (Hani Albalawi), A.W. and H.A. (Hafeez Anwar); Project administration, A.M.A., H.A. (Hani Albalawi) and H.M.E.-H.; Resources, A.W. and H.M.E.-H.; Software, H.A. (Hafeez Anwar); Supervision, H.A. (Hani Albalawi) and A.W.; Validation, H.A. (Hafeez Anwar); Visualization, H.M.E.-H.; Writing—original draft, A.W. and H.A. (Hafeez Anwar); Writing—review & editing, A.M.A., H.A. (Hani Albalawi) and H.A. (Hafeez Anwar). All authors have read and agreed to the published version of the manuscript.

Funding

This article is derived from a research grant funded by the Research, Development, and Innovation Authority (RDIA)—Kingdom of Saudi Arabia—with grant number (13385-Tabuk-2023-UT-R-3-1-SE).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available upon reasonable request from the corresponding author.

Acknowledgments

The authors extend their appreciation to the Research, Development, and Innovation Authority (RDIA), Saudi Arabia for funding this work through Grant number (13385-Tabuk-2023-UT-R-3-1-SE).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Toennies, K.D. An Introduction to Image Classification: From Designed Models to End-to-End Learning; Springer: Singapore, 2024. [Google Scholar]
  2. Van De Weijer, J.; Schmid, C. Coloring local feature extraction. In Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, May 7–13, 2006, Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2006; pp. 334–348. [Google Scholar]
  3. Castleman, K.R. Digital Image Processing; Prentice Hall Press: Englewood Cliffs, NJ, USA, 1996. [Google Scholar]
  4. Anwar, H.; Zambanini, S.; Kampel, M. A rotation-invariant bag of visual words model for symbols based ancient coin classification. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 5257–5261. [Google Scholar]
  5. Guo, Z.; Zhang, L.; Zhang, D. A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 2010, 19, 1657–1663. [Google Scholar] [PubMed]
  6. Anwar, H.; Zambanini, S.; Kampel, M. A bag of visual words approach for symbols-based coarse-grained ancient coin classification. arXiv, 2013; arXiv:1304.6192. [Google Scholar]
  7. Anwar, A.; Anwar, H.; Anwar, S. Towards Low-Cost Classification for Novel Fine-Grained Datasets. Electronics 2022, 11, 2701. [Google Scholar] [CrossRef]
  8. Anwar, H.; Anwar, S.; Zambanini, S.; Porikli, F. Deep ancient Roman Republican coin classification via feature fusion and attention. Pattern Recognit. 2021, 114, 107871. [Google Scholar] [CrossRef]
  9. Imran, M.; Anwar, H.; Tufail, M.; Khan, A.; Khan, M.; Ramli, D.A. Image-Based Automatic Energy Meter Reading Using Deep Learning. Comput. Mater. Contin. 2023, 74, 203–216. [Google Scholar] [CrossRef]
  10. Anwar, H. Invariant Image Representations for Object Category-Based Image Classification. Ph.D. Thesis, Technische Universität Wien, Vienna, Austria, 2015. [Google Scholar]
  11. Zambanini, S. Insensitive Image Comparison in the Absence of Training Data. Ph.D. Thesis, Technische Universität Wien, Vienna, Austria, 2014. [Google Scholar]
  12. Lu, S.; Zhang, W.; Zhao, H.; Liu, H.; Wang, N.; Li, H. Anomaly Detection for Medical Images using Heterogeneous Auto-Encoder. IEEE Trans. Image Process. 2024, 33, 2770–2782. [Google Scholar] [CrossRef]
  13. Behrendt, F.; Bhattacharya, D.; Krüger, J.; Opfer, R.; Schlaefer, A. Patched diffusion models for unsupervised anomaly detection in brain MRI. In Proceedings of the Medical Imaging with Deep Learning, Paris, France, 3–5 July 2024; pp. 1019–1032. [Google Scholar]
  14. Soylu, E.; Soylu, T. A performance comparison of YOLOv8 models for traffic sign detection in the Robotaxi-full scale autonomous vehicle competition. Multimed. Tools Appl. 2024, 83, 25005–25035. [Google Scholar] [CrossRef]
  15. Usama, M.; Anwar, H.; Anwar, A.; Anwar, S. Vehicle and License Plate Recognition with Novel Dataset for Toll Collection. arXiv 2022, arXiv:2202.05631. [Google Scholar]
  16. Chen, J.; Chen, W.; Nanehkaran, Y.; Suzauddola, M. MAM-IncNet: An end-to-end deep learning detector for Camellia pest recognition. Multimed. Tools Appl. 2024, 83, 31379–31394. [Google Scholar] [CrossRef]
  17. Meftah, M.; Ounacer, S.; Azzouazi, M. Enhancing Customer Engagement in Loyalty Programs through AI-Powered Market Basket Prediction Using Machine Learning Algorithms. In Engineering Applications of Artificial Intelligence; Springer: Cham, Switzerland, 2024; pp. 319–338. [Google Scholar]
  18. Yi, J.; Zhang, G.; Yu, H.; Yan, H. Advantages, challenges and molecular design of different material types used in organic solar cells. Nat. Rev. Mater. 2024, 9, 46–62. [Google Scholar] [CrossRef]
  19. Jia, A.; Liu, H.; Yun, Y.; Jiang, R.; Pouramini, S. Energy efficiency measures in existing buildings by a multiple-objective optimization with a solar panel system using Marine Predators Optimization Algorithm. Sol. Energy 2024, 267, 112208. [Google Scholar] [CrossRef]
  20. Hassan, Q.; Viktor, P.; Al-Musawi, T.J.; Ali, B.M.; Algburi, S.; Alzoubi, H.M.; Al-Jiboory, A.K.; Sameen, A.Z.; Salman, H.M.; Jaszczur, M. The renewable energy role in the global energy Transformations. Renew. Energy Focus 2024, 48, 100545. [Google Scholar] [CrossRef]
  21. Deevela, N.R.; Kandpal, T.C.; Singh, B. A review of renewable energy based power supply options for telecom towers. Environ. Dev. Sustain. 2024, 26, 2897–2964. [Google Scholar] [CrossRef] [PubMed]
  22. Karayel, G.K.; Dincer, I. Green hydrogen production potential of Canada with solar energy. Renew. Energy 2024, 221, 119766. [Google Scholar] [CrossRef]
  23. Pourasl, H.H.; Barenji, R.V.; Khojastehnezhad, V.M. Solar energy status in the world: A comprehensive review. Energy Rep. 2023, 10, 3474–3493. [Google Scholar] [CrossRef]
  24. Bao, J.; Li, X.; Yu, T.; Jiang, L.; Zhang, J.; Song, F.; Xu, W. Are Regions Conducive to Photovoltaic Power Generation Demonstrating Significant Potential for Harnessing Solar Energy via Photovoltaic Systems? Sustainability 2024, 16, 3281. [Google Scholar] [CrossRef]
  25. Ding, R.; Cao, Z.; Teng, J.; Cao, Y.; Qian, X.; Yue, W.; Yuan, X.; Deng, K.; Wu, Z.; Li, S.; et al. Self-Powered Autonomous Electrostatic Dust Removal for Solar Panels by an Electret Generator. Adv. Sci. 2024, 11, 2401689. [Google Scholar] [CrossRef]
  26. Elamim, A.; Sarikh, S.; Hartiti, B.; Benazzouz, A.; Elhamaoui, S.; Ghennioui, A. Experimental studies of dust accumulation and its effects on the performance of solar PV systems in Mediterranean climate. Energy Rep. 2024, 11, 2346–2359. [Google Scholar] [CrossRef]
  27. Kabir, A.; Sunny, M.; Siddique, N. Assessment of grid-connected residential PV-battery systems in Sweden-A Techno-economic Perspective. In Proceedings of the 2021 IEEE International Conference in Power Engineering Application (ICPEA), Shah Alam, Malaysia, 8–9 March 2021; pp. 73–78. [Google Scholar]
  28. Vedulla, G.; Geetha, A. Real-time investigation of dust collection effects on solar PV panel efficiency. EAI Endorsed Trans. Energy Web 2024, 11, 1–6. [Google Scholar] [CrossRef]
  29. Chanchangi, Y.; Ghosh, A.; Sundaram, S.; Mallick, T. Dust and PV Performance in Nigeria: A review. Renew. Sustain. Energy Rev. 2020, 121, 109704. [Google Scholar] [CrossRef]
  30. Tanesab, J.; Parlevliet, D.; Whale, J.; Urmee, T. Dust effect and its economic analysis on PV modules deployed in a temperate climate zone. Energy Procedia 2016, 100, 65–68. [Google Scholar] [CrossRef]
  31. Maghami, M.; Hizam, H.; Gomes, C.; Radzi, M.; Rezadad, M.; Hajighorbani, S. Power loss due to soiling on solar panel: A review. Renew. Sustain. Energy Rev. 2016, 59, 1307–1316. [Google Scholar] [CrossRef]
  32. Kazem, H.; Chaichan, M. The Effect of Dust Accumulation and Cleaning Methods on PV Panels’ Outcomes Based on an Experimental Study of Six Locations in Northern Oman. Sol. Energy 2019, 187, 30–38. [Google Scholar] [CrossRef]
  33. Memiche, M.; Bouzian, C.; Benzahia, A.; Moussi, A. Effects of Dust, Soiling, Aging, and Weather Conditions on Photovoltaic System Performances in a Saharan Environment—Case Study in Algeria. Glob. Energy Interconnect. 2020, 3, 60–67. [Google Scholar] [CrossRef]
  34. Jaszczur, M.; Koshti, A.; Nawrot, W.; Sędor, P. An Investigation of the Dust Accumulation on Photovoltaic Panels. Environ. Sci. Pollut. Res. 2020, 27, 2001–2014. [Google Scholar] [CrossRef]
  35. Farahmand, M.; Nazari, M.; Shamlou, S.; Shafie-khah, M. The Simultaneous Impacts of Seasonal Weather and Solar Conditions on PV Panels Electrical Characteristics. Energies 2021, 14, 845. [Google Scholar] [CrossRef]
  36. Salimi, H.; Mirabdolah Lavasani, A.; Ahmadi-Danesh-Ashtiani, H.; Fazaeli, R. Effect of Dust Concentration, Wind Speed, and Relative Humidity on the Performance of Photovoltaic Panels in Tehran. Energy Sources Part A Recover. Util. Environ. Eff. 2023, 45, 7867–7877. [Google Scholar] [CrossRef]
  37. Coşgun, A.; Demir, H. The Experimental Study of Dust Effect on Solar Panel Efficiency. Politek. Derg. 2022, 25, 1429–1434. [Google Scholar] [CrossRef]
  38. Liu, X.; Yue, S.; Lu, L.; Li, J. Investigation of the Dust Scaling Behaviour on Solar Photovoltaic Panels. J. Clean. Prod. 2021, 295, 126391. [Google Scholar] [CrossRef]
  39. Shenouda, R.; Abd-Elhady, M.; Kandil, H. A Review of Dust Accumulation on PV Panels in the MENA and the Far East Regions. J. Eng. Appl. Sci. 2022, 69, 8. [Google Scholar] [CrossRef]
  40. Fan, S.; Wang, Y.; Cao, S.; Sun, T.; Liu, P. A Novel Method for Analyzing the Effect of Dust Accumulation on Energy Efficiency Loss in Photovoltaic (PV) System. Energy 2021, 234, 121112. [Google Scholar] [CrossRef]
  41. Chen, Y.; Liu, Y.; Tian, Z.; Dong, Y.; Zhou, Y.; Wang, X.; Wang, D. Experimental Study on the Effect of Dust Deposition on Photovoltaic Panels. Energy Procedia 2019, 158, 483–489. [Google Scholar] [CrossRef]
  42. Hammad, B.; Al-Abed, M.; Al-Ghandoor, A.; Al-Sardeah, A.; Al-Bashir, A. Modeling and Analysis of Dust and Temperature Effects on Photovoltaic Systems’ Performance and Optimal Cleaning Frequency: Jordan Case Study. Renew. Sustain. Energy Rev. 2018, 82, 2218–2234. [Google Scholar] [CrossRef]
  43. Murillo-Soto, L.; Meza, C. Photovoltaic array fault detection algorithm based on least significant difference test. In Applied Computer Sciences in Engineering, Proceeding of the 7th Workshop on Engineering Applications, WEA 2020, Bogota, Colombia, 7–9 October 2020, Proceedings; Figueroa-García, J., Garay-Rairán, F., Hernández-Pérez, G., Díaz-Gutierrez, Y., Eds.; Springer: Cham, Switzerland, 2020; pp. 501–515. [Google Scholar]
  44. Saquib, D.; Nasser, M.; Ramaswamy, S. Image processing based dust detection and prediction of power using ANN in PV systems. In Proceedings of the 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 August 2020; pp. 1286–1292. [Google Scholar]
  45. Wang, Q.; Paynabar, K.; Pacella, M. Online Automatic Anomaly Detection for Photovoltaic Systems Using Thermography Imaging and Low Rank Matrix Decomposition. J. Qual. Technol. 2022, 54, 503–516. [Google Scholar] [CrossRef]
  46. Li, S.; Yang, J.; Wu, F.; Li, R.; Rashed, G. Combined Prediction of Photovoltaic Power Based on Sparrow Search Algorithm Optimized Convolution Long and Short-Term Memory Hybrid Neural Network. Electronics 2022, 11, 1654. [Google Scholar] [CrossRef]
  47. Vieira, R.; Dhimish, M.; de Araújo, F.; Guerra, M. PV Module Fault Detection Using Combined Artificial Neural Network and Sugeno Fuzzy Logic. Electronics 2020, 9, 2150. [Google Scholar] [CrossRef]
  48. Fan, S.; Wang, Y.; Cao, S.; Zhao, B.; Sun, T.; Liu, P. A Deep Residual Neural Network Identification Method for Uneven Dust Accumulation on Photovoltaic (PV) Panels. Energy 2022, 239, 122302. [Google Scholar] [CrossRef]
  49. Shaaban, M.; Alarif, A.; Mokhtar, M.; Tariq, U.; Osman, A.; Al-Ali, A. A New Data-Based Dust Estimation Unit for PV Panels. Energies 2020, 13, 3601. [Google Scholar] [CrossRef]
  50. Sai, H. Solar Panel Dust Detection. 2023. Available online: https://www.kaggle.com/datasets/hemanthsai7/solar-panel-dust-detection (accessed on 13 June 2024).
  51. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006; Volume 2, pp. 1122–1128. [Google Scholar]
  52. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should i trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar]
Figure 1. Global rise in electric power installation using renewable energy sources: (a) fossil fuel vs. renewable sources for electricity installation in MW from the year 2000 to 2023; (b) rise in the use of renewable sources for electricity installation, with a particularly sharp rise noted from 2018 to 2023; (c) share of types of renewable energy sources, with solar leading the table, followed by hydro, while wind comes in third place. The statistics are taken from https://www.irena.org (accessed on 3 October 2024).
Figure 2. Various types of dirt found on solar panels.
Figure 3. Proposed methodology: the images of both classes, i.e., clean and dusty solar panels, are encoded using a pre-trained model. These encodings are then used to train and test an SVM with its various kernels. Finally, the performance is measured using various metrics such as classification accuracy, precision, recall, and F1-score.
Figure 4. The achieved classification accuracies, precisions, recalls, and F1-scores of all 20 pre-trained models with a linear SVM. The names of the models are shown on the x-axis, while their achieved scores in percentages (%) are shown on the y-axis.
Figure 5. The Top-3 performing models with respect to their achieved scores with a linear SVM. The names of the scores are shown on the x-axis, while their values in percentages are shown on the y-axis. The three bars, distinguished by color and pattern, represent each of the three models.
Figure 6. Confusion matrices of all 20 evaluated pre-trained models with a linear Support Vector Machine.
Figure 7. Correctly and wrongly classified examples from the test dataset. The top row shows images that are correctly classified, while the bottom row shows wrongly classified dusty solar panel images.
Figure 8. Qualitative examples of the patches contributing towards the prediction of the SVM, generated with LIME [52]. The first row shows the original dusty solar panel images, while the second row shows the boundaries of the contributing regions generated by LIME.
Table 2. Classification metrics of all the evaluated models using various kernels of a Support Vector Machine (SVM). Each cell lists the scores for the Linear / RBF / Polynomial / Sigmoid kernels, in that order.

| Model Name | Classification Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| DenseNet169 | 86.79 / 81.22 / 80.71 / 84.43 | 88.24 / 80.26 / 80.80 / 84.14 | 90.91 / 96.97 / 93.97 / 94.70 | 89.55 / 86.01 / 84.55 / 88.09 |
| MobileNet | 85.38 / 81.13 / 80.10 / 84.04 | 86.86 / 79.87 / 80.62 / 84.03 | 90.84 / 96.21 / 91.38 / 93.94 | 88.48 / 85.92 / 83.82 / 87.68 |
| DenseNet201 | 85.38 / 80.66 / 79.59 / 83.49 | 86.23 / 79.61 / 80.16 / 82.99 | 90.15 / 96.21 / 90.60 / 93.18 | 88.48 / 85.71 / 83.47 / 87.46 |
| MobileNetV2 | 83.49 / 80.19 / 79.29 / 82.94 | 85.19 / 79.35 / 79.61 / 82.52 | 90.15 / 95.45 / 89.74 / 93.13 | 87.18 / 85.71 / 83.40 / 87.14 |
| DenseNet121 | 82.55 / 80.19 / 78.57 / 82.55 | 84.40 / 78.21 / 79.23 / 82.01 | 88.64 / 95.42 / 89.66 / 92.42 | 86.35 / 85.21 / 83.14 / 86.74 |
| Xception | 82.55 / 79.62 / 78.17 / 81.60 | 84.17 / 78.03 / 78.46 / 81.88 | 87.12 / 94.70 / 88.89 / 92.42 | 86.14 / 85.02 / 82.93 / 85.82 |
| InceptionV3 | 81.69 / 78.30 / 78.06 / 79.72 | 83.94 / 77.85 / 78.46 / 81.76 | 87.12 / 94.70 / 88.79 / 92.42 | 85.50 / 84.14 / 82.59 / 84.13 |
| VGG16 | 79.25 / 76.89 / 76.53 / 78.40 | 83.33 / 77.78 / 77.34 / 81.34 | 86.36 / 93.18 / 88.79 / 92.37 | 83.33 / 82.56 / 81.15 / 84.03 |
| InceptionResNetV2 | 78.30 / 73.58 / 72.73 / 77.36 | 81.16 / 77.22 / 76.26 / 77.56 | 84.85 / 93.18 / 88.03 / 91.67 | 82.96 / 81.70 / 79.70 / 83.56 |
| NASNetMobile | 77.83 / 72.64 / 71.57 / 77.36 | 80.92 / 76.12 / 75.45 / 77.27 | 83.33 / 93.18 / 87.93 / 91.67 | 82.91 / 80.13 / 78.03 / 83.22 |
| VGG19 | 76.42 / 71.23 / 70.41 / 77.36 | 79.72 / 72.05 / 71.14 / 76.25 | 83.33 / 93.13 / 87.18 / 90.91 | 80.92 / 79.18 / 76.36 / 81.95 |
| NASNetLarge | 75.12 / 71.23 / 69.39 / 72.99 | 78.01 / 71.84 / 69.59 / 73.12 | 80.92 / 92.42 / 87.07 / 90.15 | 80.59 / 78.21 / 76.23 / 80.41 |
| ResNet50 | 69.34 / 70.75 / 68.02 / 67.61 | 75.57 / 70.29 / 69.47 / 67.80 | 75.76 / 92.42 / 87.07 / 89.39 | 75.29 / 78.12 / 75.46 / 77.67 |
| ResNet101 | 68.40 / 67.92 / 67.17 / 66.04 | 75.19 / 68.05 / 68.24 / 67.36 | 75.00 / 92.42 / 87.07 / 89.31 | 74.52 / 78.03 / 75.43 / 77.40 |
| ResNet152 | 68.40 / 66.98 / 66.50 / 65.73 | 74.81 / 67.78 / 66.46 / 66.13 | 75.00 / 91.67 / 86.32 / 86.36 | 74.35 / 77.68 / 74.81 / 77.36 |
| EfficientNetB4 | 67.45 / 66.51 / 65.99 / 63.68 | 74.05 / 66.49 / 65.61 / 65.45 | 74.24 / 87.88 / 85.34 / 82.58 | 74.33 / 77.44 / 74.55 / 76.31 |
| EfficientNetB0 | 67.45 / 65.57 / 65.31 / 61.50 | 72.99 / 65.13 / 65.58 / 64.93 | 73.48 / 87.88 / 84.62 / 73.48 | 73.76 / 77.34 / 74.29 / 70.29 |
| EfficientNetB1 | 65.57 / 65.09 / 64.47 / 56.87 | 71.88 / 64.80 / 65.56 / 64.25 | 73.48 / 87.12 / 77.78 / 66.41 | 73.06 / 76.69 / 73.88 / 65.66 |
| EfficientNetB3 | 64.62 / 64.62 / 63.78 / 55.19 | 71.22 / 64.32 / 63.41 / 64.12 | 73.48 / 78.03 / 71.55 / 64.12 | 72.12 / 76.69 / 73.45 / 63.88 |
| EfficientNetB2 | 64.15 / 63.98 / 63.27 / 54.98 | 70.80 / 64.10 / 63.01 / 63.64 | 69.70 / 77.27 / 70.09 / 63.64 | 70.77 / 76.41 / 73.39 / 63.88 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
