Article

Estimating Snow Coverage Percentage on Solar Panels Using Drone Imagery and Machine Learning for Enhanced Energy Efficiency

1 Department of Applied Computing, College of Computing, Michigan Technological University, Houghton, MI 49931, USA
2 Department of Manufacturing and Mechanical Engineering Technology, College of Engineering, Michigan Technological University, Houghton, MI 49931, USA
3 Department of Mathematical Sciences, College of Sciences and Arts, Michigan Technological University, Houghton, MI 49931, USA
4 Department of Mechanical and Aerospace Engineering, College of Engineering, Michigan Technological University, Houghton, MI 49931, USA
* Author to whom correspondence should be addressed.
Energies 2025, 18(7), 1729; https://doi.org/10.3390/en18071729
Submission received: 21 February 2025 / Revised: 19 March 2025 / Accepted: 25 March 2025 / Published: 31 March 2025
(This article belongs to the Special Issue Application of Machine Learning Tools for Energy System)

Abstract: Snow accumulation on solar panels presents a significant challenge to energy generation in snowy regions, reducing the efficiency of solar photovoltaic (PV) systems and impacting economic viability. While prior studies have explored snow detection using fixed-camera setups, these methods suffer from scalability limitations, stationary viewpoints, and the need for reference images. This study introduces an automated deep-learning framework that leverages drone-captured imagery to detect and quantify snow coverage on solar panels, aiming to enhance power forecasting and optimize snow removal strategies in winter conditions. We developed and evaluated two approaches using YOLO-based models: Approach 1, a high-precision method utilizing a two-class detection model, and Approach 2, a real-time single-class detection model optimized for fast inference. While Approach 1 demonstrated superior accuracy, achieving an overall precision of 89% and recall of 82%, it is computationally expensive, making it more suitable for strategic decision making. Approach 2, with a precision of 93% and a recall of 75%, provides a lightweight and efficient alternative for real-time monitoring but is sensitive to lighting variations. The proposed framework calculates snow coverage percentages (SCP) to support snow removal planning, minimize downtime, and optimize power generation. Compared to fixed-camera-based snow detection models, our approach leverages drone imagery to improve detection precision while offering greater scalability for adoption on large solar farms. Qualitative and quantitative analysis of both approaches is presented in this paper, highlighting their strengths and weaknesses in different environmental conditions.

1. Introduction

Solar energy has become a cornerstone of renewable energy initiatives due to its sustainability and scalability. However, in regions that experience significant snowfall, the performance of solar photovoltaic (PV) systems can be severely impacted during the winter months. Snow accumulation on solar panels obstructs sunlight, which is critical for energy generation, leading to drastic reductions in energy efficiency. A study by Jackson and Gunda [1] analyzed extreme weather impacts on utility-scale PV systems, finding that snow events caused the highest performance loss (54.5%), followed by hurricanes (12.6%) and storms (1.1%). They identified low irradiance, snowfall accumulation, and location as key performance factors.
The impact of snow on energy efficiency extends beyond immediate power losses. Prolonged snow cover can cause uneven energy output across arrays, which may lead to hot spots, increased system inefficiencies, and even long-term physical damage to solar panels due to ice formation or thermal cycling [2]. In theory, snow cover would increase panel efficiency slightly since PV efficiency decreases with increasing temperature [3], a fact that drives research in PV-thermal (PV-T) systems [4,5], wherein improved cooling from the backside can significantly boost PV performance. This temperature dependence would improve the performance of the backside generation of bifacial panels when the front side is snow-covered. However, this benefit is marginal relative to the loss of front-side generation. Furthermore, in grid-tied systems, widespread snow events may reduce capacity on the grid during certain winter periods [6]. Therefore, timely snow shedding is crucial to restoring optimal energy generation and minimizing energy losses. Research has highlighted that delayed snow removal can result in up to a 34% energy loss [7], particularly in high-snowfall regions where solar energy is already limited during winter due to shorter daylight hours and lower sun angles [8,9].
Various methods can be used to speed up snow removal from solar panels instead of waiting for natural melting. These include leveraging the rotation capabilities of tracking systems or manually clearing the snow. However, traditional snow detection methods, such as fixed cameras and manual inspections [10,11,12], require significant manual effort, making them inefficient and impractical for large-scale solar farms. To overcome these limitations, automated and scalable snow detection systems are crucial for ensuring timely snow removal and maximizing energy efficiency in photovoltaic (PV) systems.
A recent study by Araji et al. [13] introduced a deep-learning-based framework for real-time snow cover detection and energy loss estimation in photovoltaic (PV) systems. The study developed a CNN-based model, achieving an 81% accuracy in detecting snow accumulation, with a mean error of less than 5% in estimating energy losses. By analyzing six PV arrays under varying snow coverage conditions, the study demonstrated that timely snow removal could save up to 0.13 kWh/m² per month, underscoring the potential of deep-learning-driven solutions in enhancing solar energy efficiency in snowy regions. Although their work included snow cover area calculation, it relied on fixed-camera data, which limits scalability for large solar farms where monitoring extensive areas efficiently is essential. Fixed-camera systems provide a restricted field of view, requiring multiple installations to cover large-scale PV arrays. Their stationary viewpoints also make them susceptible to occlusions caused by snowdrifts or shading, further compromising detection accuracy. In contrast, drone-based monitoring offers a flexible and scalable alternative, enabling comprehensive coverage of large solar farms by capturing images from multiple angles and altitudes. This adaptability improves detection accuracy and reduces infrastructure costs, making it a more practical solution for large-scale solar farm snow monitoring.
In this context, drones have emerged as indispensable tools for solar panel inspection, particularly during summer months, where they are used to detect cracks, dirt, or misalignments [14,15,16]. These applications leverage high-resolution aerial imagery to identify localized defects that impact solar panel performance. However, snow detection presents unique challenges, as snow coverage is a widespread and dynamic environmental factor rather than a fixed defect. The transition from static defect detection to environmental monitoring requires advancements in deep learning segmentation techniques to handle variations in snow texture, lighting conditions, and perspective distortions. While prior research has effectively used drones for detecting hotspots and contamination, our study expands the role of aerial PV monitoring by introducing a real-time, scalable deep learning framework for automated snow coverage percentage (SCP) estimation. This enables proactive decision making for energy optimization, bridging the gap between drone-based defect detection and environmental impact assessment in solar farm operations. Expanding the use of drones to winter-specific challenges, such as snow detection, presents a cost-effective and scalable solution for year-round solar farm maintenance. To the best of our knowledge, this is the first study to utilize drones for estimating SCP on solar panels. While Al-Dulaimi et al. [17] developed an ML-based method to detect snow coverage using segmentation techniques, their approach was designed for fixed-camera systems, which limits its scalability for large solar farms, where extensive area monitoring is essential. Furthermore, their approach categorized snow coverage into three discrete classes: all_snow, no_snow, and partial_snow. In contrast, our objective is to provide a precise snow coverage percentage for each panel, offering a more detailed and actionable assessment for solar farm operators to optimize snow removal strategies and energy output.
While drone imagery offers significant advantages for snow detection and coverage area estimation, particularly in terms of scalability for large solar farms, it also introduces several challenges for machine learning (ML) and image processing algorithms. Unlike fixed-camera systems, which capture images from a consistent angle and lighting condition, drones operate at varying altitudes, orientations, and lighting environments, leading to changes in perspective, shadowing effects, and variations in contrast. These factors can impact the accuracy of snow segmentation and object detection models. Additionally, real-time inference and decision-making become more complex when processing images captured under different weather conditions.
Building on recent advancements in drone-based monitoring and machine learning, this paper presents a novel deep learning framework that utilizes drone imagery and YOLO-based models for automated snow detection and coverage estimation on solar panels. By leveraging high-resolution aerial imagery, the proposed system enables efficient and scalable monitoring of large solar farms, addressing the limitations of fixed-camera-based solutions. The framework integrates oriented bounding boxes and segmentation-based approaches to enhance detection accuracy and provide a precise estimation of snow coverage percentage (SCP).
To balance detection accuracy and real-time operational efficiency, we developed two complementary approaches:
  • Approach 1: A high-precision model designed for accurate SCP estimation, making it ideal for strategic power forecasting and maintenance planning.
  • Approach 2: A real-time model optimized for fast inference, allowing immediate decision making for snow removal operations.
A comprehensive qualitative and quantitative evaluation was conducted to analyze the performance, strengths, and limitations of both approaches. The findings highlight the effectiveness of drone-based imaging for scalable snow monitoring, making it a practical and cost-efficient solution for large-scale solar farms during winter months. The primary contributions of this work are as follows:
  • A deep-learning framework for snow detection and coverage estimation, integrating oriented bounding boxes and segmentation-based techniques to optimize snow management strategies for solar farms.
  • A comparative evaluation of two YOLO-based approaches, highlighting the trade-offs between high-precision detection (Approach 1) and real-time inference (Approach 2) to balance accuracy and computational efficiency.
  • A detailed performance analysis using real-world drone imagery, demonstrating the impact of lighting variations and image pose changes on model accuracy and providing insights into improving robustness in aerial snow detection.
  • An automated method for snow coverage percentage (SCP) estimation, enabling drone-based data to enhance power forecasting, optimize snow removal scheduling, and minimize downtime in large-scale solar farms.
The remainder of this paper is structured as follows: Section 2 describes the methodology, including data collection, image preprocessing, and the machine learning pipeline. Section 3 presents the experimental results and system validation. Section 4 discusses the implications of this work and outlines future research directions. Finally, Section 5 provides conclusions.

2. Materials and Methods

The proposed system uses drone imagery and machine learning to automate the detection and quantification of snow coverage on solar panels. For data acquisition, high-resolution aerial imagery of solar panels was collected using the Parrot Anafi Gov drone [18], equipped with a 21 MP RGB camera (84° field of view) capable of 4K video resolution and 32× digital zoom. The images were captured under diverse conditions, including varying lighting, ambient temperatures, and snow coverage profiles, using the solar panels available on the Michigan Technological University (MTU) campus. A total of 248 images were collected. The dataset was manually split into 70% training, 10% validation, and 20% testing, with the scenes in all three splits being unique. Moreover, the testing set was compiled exclusively from images collected in a different year with different timings and configurations, such as lighting conditions and snow coverage and texture, to ensure that our methods generalize.
The dataset was annotated using Roboflow for two classes: the solar panel (SP) class, representing the entire body of panels, and the uncovered solar panel (USP) class, representing snow-free areas of the panel. Polygon annotation was used to accurately represent the two classes. The full annotated dataset can be found at: https://github.com/RSSL-MTU/RSSL-MTU-Solar-Panel-Snow-Coverage, accessed on 16 March 2025. Figure 1 showcases examples of RGB images captured at different locations and under different conditions, along with their corresponding polygon annotations. Blue polygons indicate the SP class, while cyan polygons represent the USP class.
Because drone imagery cannot always be perfectly aligned with the solar panels, standard object detection would result in inaccurate snow coverage calculations. Therefore, we used oriented bounding box (OBB)-based detection and segmentation-based detection. The former's predictions are in the shape of regular bounding boxes but with an added angle to each box that better aligns with the detected solar panel, whereas the latter's predictions are in the form of contours around the detected objects.
YOLO was selected over alternative deep learning models due to its real-time efficiency and adaptability to aerial imagery. Unlike region-based models such as Mask R-CNN, which require multiple processing stages, YOLO performs detection in a single forward pass, making it faster while maintaining high accuracy. Additionally, compared to segmentation-based architectures like U-Net, YOLO’s object detection framework with OBBs provides effective snow segmentation with lower computational overhead, making it more practical for large-scale solar farm monitoring.
To accurately estimate snow coverage on solar panels and optimize operational strategies, we employed two different YOLO11-based models [19]: YOLO x-large (YOLOxl) and YOLO Nano (YOLOn). The first approach utilizes YOLOxl, a high-precision model trained to detect both SP and USP. This method is well-suited for detailed snow-shedding analysis and power forecasting but comes with a higher computational cost. The second approach leverages YOLOn, a lightweight and efficient model designed for real-time snow coverage estimation. By detecting only SP instances and using pixel intensity to isolate snow-covered areas from the rest of the detected panel, this method enables rapid and cost-effective snow monitoring. Together, these approaches provide valuable insights for improving snow removal efficiency, reducing downtime, and enhancing power generation in solar farms during winter conditions.

2.1. Approach 1: High-Precision Snow Coverage Estimation Using YOLOxl for Power Forecasting

This approach utilizes the YOLOxl model, which is trained to classify solar panels into two categories: solar panels (SP) and uncovered solar panels (USP). We trained two instances of the YOLOv11 model, one for the OBB detection task and another for the segmentation task. The highest performing model is selected for snow calculation. All model instances from all approaches start training from pre-trained weights on the COCO dataset [20] with the hyperparameter settings shown in Table 1. To eliminate duplicate detections for a single solar panel, we ran inference with a non-maximum suppression (NMS) threshold of 0.3.
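For illustration, a training and inference run of this kind could be set up with the Ultralytics API roughly as follows; the dataset YAML, image path, and epoch budget shown here are placeholders rather than the exact values used in this study (the actual hyperparameters are listed in Table 1).

```python
from ultralytics import YOLO

# Fine-tune a YOLO11 x-large OBB model from COCO-pretrained weights
# (hypothetical dataset config and epoch budget; see Table 1 for the settings used here).
model = YOLO("yolo11x-obb.pt")
model.train(
    data="solar_panels_obb.yaml",  # two classes: SP and USP
    epochs=100,
    batch=8,
    lr0=0.01,
    weight_decay=0.0005,
    warmup_epochs=3,
    warmup_momentum=0.8,
)

# Run inference with an NMS IoU threshold of 0.3 to suppress duplicate
# detections of the same panel (hypothetical image path).
results = model.predict("drone_image.jpg", iou=0.3)
for r in results:
    print(r.obb.xyxyxyxy)  # corner coordinates of each oriented box
```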
The snow coverage percentage (SCP) for each detected solar panel is calculated from the ratio of the total area of all USP instances detected inside the solar panel to the area of the detected SP instance. This is performed using the equation below:
\mathrm{SCP}(SP) = \left( 1 - \frac{\left( \mathrm{Area}_{USP_1} \cup \mathrm{Area}_{USP_2} \cup \cdots \cup \mathrm{Area}_{USP_n} \right) \cap \mathrm{Area}_{SP}}{\mathrm{Area}_{SP}} \right) \times 100\%,   (1)
where ∪ represents union, ∩ represents intersection, and USP_n is the n-th detected uncovered solar panel instance in the entire image.
A detailed explanation of calculating the SCP for every detected SP in the image can be found in Algorithm 1, where all detected instances of both SP and USP classes in an input image are first converted into binary masks. Subsequently, the binary masks for all USP instances are unified using a logical OR function (∨), as shown in lines (4–6) in the algorithm. The resultant USP unified mask is then intersected with every detected SP instance using a logical AND function (∧) to produce the intersection masks. The intersection masks are then divided by their corresponding SP instances. Finally, the results are converted into percentages representing the SCP of every detected solar panel in the input image, as illustrated in lines (8–11) in the algorithm.
Algorithm 1 Calculation of Panel's Snow Coverage Percentage (SCP)
  • Input: Input image, detection box labels, and coordinates
  • Output: Snow Coverage Percentages (SCP)
    1: Store all detected SP instances as binary masks into the 'SPs' array
    2: Store all detected USP instances as binary masks into the 'USPs' array
    3: USPMasks = [ ]
    4: for USP in USPs do
    5:     USPMasks = USPMasks ∨ USP
    6: end for
    7: SCP = [ ]
    8: for SP in SPs do
    9:     IntersectionMask = USPMasks ∧ SP
    10:    SCP.append((1 − IntersectionMask/SP) × 100%)
    11: end for
    12: return SCP
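A minimal Python/NumPy sketch of Algorithm 1 is given below; it assumes the detected SP and USP instances have already been rasterized into boolean masks on the same image grid, and it omits the rasterization step and any handling of empty masks.

```python
import numpy as np

def snow_coverage_percentages(sp_masks, usp_masks):
    """Sketch of Algorithm 1: per-panel snow coverage percentage (SCP).

    sp_masks, usp_masks: lists of H x W boolean arrays, one per detected
    SP / USP instance, all defined on the same image grid.
    """
    # Lines 3-6: unify all USP masks with a logical OR.
    usp_union = np.zeros_like(sp_masks[0], dtype=bool)
    for usp in usp_masks:
        usp_union |= usp

    # Lines 7-11: intersect the unified USP mask with each SP mask (logical AND),
    # divide by the panel area, and convert the uncovered fraction into a snow percentage.
    scp = []
    for sp in sp_masks:
        intersection = usp_union & sp
        uncovered_fraction = intersection.sum() / sp.sum()
        scp.append((1.0 - uncovered_fraction) * 100.0)
    return scp
```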
While this method provides highly accurate results, it is computationally intensive, making it more suitable for detailed analysis rather than real-time deployment. The insights derived from this model help assess snow-shedding behavior and enhance power forecasting during winter. In addition, it helps develop a baseline forecast for “snowrise”, the phenomenon where snow sheds from panels following a sunny period after a storm. These insights inform stow strategies and operational decisions in large-scale utility solar farms, ultimately improving energy efficiency.

2.2. Approach 2: Real-Time Snow Coverage Estimation Using YOLO Nano for Efficient Snow Removal

To enable real-time snow monitoring, we adopted a more efficient approach using YOLOn, a lightweight model optimized for fast inference. Unlike the first approach, this method uses only one class (SP) for training while incorporating oriented bounding boxes and segmentation techniques. The hyperparameters used to train this model are summarized in Table 1. In the first stage, a pre-trained YOLO v11 Nano model is utilized for instance segmentation of solar panels. Instance segmentation [21] ensures that each solar panel in the image is uniquely identified.
Once the solar panels are detected, a binary mask is generated using instance segmentation, effectively removing the background and isolating the panels for precise snow coverage analysis. Snow coverage is estimated by analyzing pixel intensities, where black represents the solar panel surface and white indicates snow. Ideally, snow-covered pixels would have an intensity of 255. However, to account for variations caused by shadows, noise, and weather conditions, pixels with an intensity greater than 240 are classified as snow, enhancing detection robustness. Figure 2 illustrates the step-by-step process of approach 2 for snow detection: (i) A binary mask is generated using the YOLO-based model to segment the solar panels while eliminating the background. (ii) The region of interest (ROI) is extracted, isolating the detected panel to ensure that only the relevant area is analyzed. (iii) An intensity-based mask is applied, where pixels with an intensity above 240 are classified as snow-covered regions.
To account for all solar panels detected in the image using YOLOv11 Nano’s instance segmentation or OBB, we extend the SCP calculation to iterate over all segmented (detected) panels. The equation used to estimate the snow coverage area is expressed as
\mathrm{SCP}_i = \frac{N_{\mathrm{snow},i}}{N_{\mathrm{total},i}} \times 100, \quad i \in \{1, 2, \ldots, N\},   (2)
where
  • N is the total number of detected solar panels in the image,
  • SCP_i is the snow coverage percentage for the i-th solar panel,
  • N_{snow,i} is the number of pixels classified as snow (I > 240) in the i-th panel, and
  • N_{total,i} is the total number of pixels in the i-th segmented solar panel.
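As a rough illustration of this intensity-based computation, the following sketch applies the 240-intensity threshold to each panel; it assumes a grayscale drone image and one boolean segmentation mask per detected panel, and omits how those masks are extracted from the model output.

```python
import numpy as np

SNOW_INTENSITY_THRESHOLD = 240  # pixels brighter than this are treated as snow

def scp_from_intensity(gray_image, panel_masks):
    """Estimate SCP_i for each segmented panel from pixel intensities."""
    scp = []
    for mask in panel_masks:             # one boolean H x W mask per detected panel
        panel_pixels = gray_image[mask]  # keep only the pixels inside this panel
        n_total = panel_pixels.size      # N_total,i
        n_snow = int((panel_pixels > SNOW_INTENSITY_THRESHOLD).sum())  # N_snow,i
        scp.append(100.0 * n_snow / n_total if n_total else 0.0)
    return scp
```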
While this approach is significantly faster and more suitable for real-time decision-making, its accuracy in coverage area estimation is lower than that of the first approach, as will be discussed in the Results section. Additionally, it is sensitive to changes in lighting conditions since a fixed threshold is used to classify snow, which may lead to inconsistencies in detection. Despite these limitations, its speed and efficiency make it highly effective for real-time operational strategies. This approach is essential for ensuring consistent energy production in solar farms during winter by minimizing downtime and enhancing snow removal efficiency. It achieves this by prioritizing the cleaning of solar panels based on SCP.

3. Results

This section will discuss the evaluation metrics used to assess detection and segmentation performance. In addition, the results from both SCP calculation approaches will be presented later in this section.

3.1. Evaluation Metrics for Performance Assessment

Evaluation metrics are essential to understand how well a machine learning model performs, especially in classification and object detection tasks. This section introduces some of the most commonly used metrics to evaluate detection performance, including precision, recall, F1 score, and mean average precision (mAP) [22].
  • Precision: measures how many of the predictions made by the model are correct. It focuses on the accuracy of the model's positive predictions. High precision means that when the model makes a detection, it is usually correct. Mathematically, precision is defined as
    \mathrm{Precision} = \frac{TP}{TP + FP},   (3)
    where TP (true positives) represents correctly predicted positive samples and FP (false positives) represents incorrectly predicted positive samples.
  • Recall: measures how many actual objects were correctly detected. It focuses on the completeness of the model's detections. High recall means the model correctly detects most objects and rarely misses any actual objects. Recall is given by
    \mathrm{Recall} = \frac{TP}{TP + FN},   (4)
    where FN (false negatives) represents actual positive instances that were misclassified as negative.
  • F1-score: considers both precision and recall to evaluate the model's performance. The F1-score is robust against class imbalance and is given by
    \mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}.   (5)
  • Mean average precision (mAP): a commonly used metric in object detection tasks to evaluate a model's ability to detect and localize objects correctly. It is calculated by averaging the average precision (AP) over all object classes, where the AP for each class is the area under its precision–recall curve, as shown in Equations (6) and (7):
    \mathrm{mAP} = \frac{1}{N} \sum_{i=1}^{N} AP_i,   (6)
    AP = \int_{0}^{1} P(R)\, dR,   (7)
    where AP_i is the average precision for the i-th class, N is the number of object classes, and P(R) represents precision as a function of recall.
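For reference, the count-based metrics above reduce to a few lines of arithmetic; the sketch below uses illustrative counts and an approximate trapezoidal AP, not values from this study.

```python
import numpy as np

def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1-score from detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def average_precision(precisions, recalls):
    """AP approximated as the area under a precision-recall curve (mAP is its mean over classes)."""
    order = np.argsort(recalls)
    return float(np.trapz(np.asarray(precisions)[order], np.asarray(recalls)[order]))

# Illustrative counts only (not taken from Table 2):
print(detection_metrics(tp=75, fp=8, fn=25))
```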
Using these evaluation metrics, we assess the two approaches outlined in Section 2 for calculating the SCP in solar panels.

3.2. Evaluation of Detection Performance

To select the appropriate model for each SCP approach, we first evaluate both single-class and dual-class detection models based on OBB and segmentation techniques. The results of all four models are presented in Table 2. In Approach 1, which employs two-class models, OBB(dual) achieves balanced precision for both SP and USP at 0.89, demonstrating its ability to correctly classify snow-covered and uncovered areas while minimizing false detections. In contrast, Seg(dual) exhibits a discrepancy, with a lower SP precision of 0.67 but a higher USP precision of 0.91, indicating that it excels in identifying uncovered regions but is prone to misdetection when detecting the full solar panel. In terms of SP recall, OBB(dual) at 0.75 outperforms the drastically lower Seg(dual) at 0.48, meaning it more effectively captures the edges of the solar panels. The Seg(dual) model also performs lower in USP recall, at 0.77 compared to 0.88 for OBB(dual), suggesting increased false negatives and fewer snow-uncovered areas detected. This trend is reflected in the F1-score, where OBB(dual) maintains a higher SP value of 0.82 compared to only 0.56 for Seg(dual). The mAP50 values further confirm these differences, as Seg(dual) achieves a strong USP mAP50 (0.85) but struggles for SP (0.49), while OBB(dual) performs well for both SP (0.86) and USP (0.93), indicating a more robust overall performance in both classes.
In Approach 2, a similar trend can be observed where the OBB performs better than segmentation on all metrics apart from the recall, where both OBB (single) and Seg (single) achieve a recall of 0.75. The lower precision of Seg (single) at 0.83 compared to OBB(single) at 0.93 suggests that segmentation-based models introduce more false positives due to sensitivity to light variation. The high F1-scores in OBB models, particularly OBB (single) (0.83) and OBB (dual) (0.82 for SP), suggest they strike a strong balance between recall and precision, minimizing both false positives and false negatives. We conclude that the lower SP performance on all models could be attributed to the fact that some solar panels are fully covered by snow, which makes them appear as a part of the background. Based on these results, we selected the OBB-based models for SCP estimation using the two approaches explained in Section 2.
To qualitatively assess the performance of the models in Table 2, we visualized the predictions made by all models on four representative images, highlighting each model's key strengths and weaknesses. As shown in Figure 3, the first column shows the ground truth annotations (GT Anno.), and the subsequent columns display the outputs of each model across four different samples. The results demonstrate notable variations in detection performance across different models. The OBB (dual) model effectively detected both SP and USP regions with well-aligned oriented bounding boxes, closely matching the ground truth in structure and spatial coverage, particularly in samples #1, #3, and #4. However, minor misalignments and overlapping detections appear in some cases, such as sample #2. The Seg (dual) model demonstrates some capability by distinguishing between covered and uncovered areas, as observed in samples #1, #2, and #3, but occasionally overestimates snow-covered regions, as seen in sample #4. This may explain the low detection performance it achieved in Table 2. A similar trend can be seen for the Seg (single) model, which introduces false positives on the white background, as shown in sample #2. Although the OBB (single) model produces some overlapping predictions, as in sample #3, we opted to use it for SCP estimation because inaccurate true positives are less impactful than additional false positives. Therefore, we used the OBB (dual) and OBB (single) models in Approaches 1 and 2, respectively.

3.3. Statistical Significance Analysis

To assess whether the observed performance differences between the two approaches are statistically significant, we conducted a paired t-test on the SP class across four evaluation metrics: precision, recall, F1-score, and mAP50.
Let X_i denote the metric score for Approach 1 and Y_i denote the corresponding score for Approach 2. The difference for each metric is defined as
D_i = X_i - Y_i.   (8)
Let \bar{D} be the mean of these differences and s_D be their standard deviation. The t-statistic is calculated using
t = \frac{\bar{D}}{s_D / \sqrt{n}},   (9)
where n is the number of paired observations (here, n = 4).
The hypotheses tested are H_0: \mu_D = 0 (no significant difference between the two approaches) versus H_1: \mu_D \neq 0 (a significant difference exists). We performed the test at a significance level of \alpha = 0.05.
Results:
  • OBB comparison: The calculated t-statistic is t = 0.93 with a corresponding p-value of 0.42 . Since p > 0.05 , the difference between the two approaches using OBB is not statistically significant. This indicates that the performance improvements observed in Approach 2 may be attributed to random variation.
  • Segmentation comparison: The t-statistic is t = 7.93 , and the p-value is 0.0042 . Since p < 0.05 , the difference between the two segmentation approaches is statistically significant. This demonstrates that Approach 2 significantly outperforms Approach 1 in SP class detection based on segmentation.
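These statistics can be reproduced with SciPy from the SP-class scores in Table 2 (precision, recall, F1-score, and mAP50); the sign of t simply reflects the direction of the difference D_i = X_i - Y_i.

```python
from scipy import stats

# SP-class scores from Table 2, ordered as precision, recall, F1-score, mAP50.
obb_approach1 = [0.89, 0.75, 0.82, 0.86]   # OBB(dual),   Approach 1
obb_approach2 = [0.93, 0.75, 0.83, 0.85]   # OBB(single), Approach 2
seg_approach1 = [0.67, 0.48, 0.56, 0.49]   # Seg(dual),   Approach 1
seg_approach2 = [0.83, 0.75, 0.79, 0.79]   # Seg(single), Approach 2

# Paired (related-samples) t-tests over the four metrics.
t_obb, p_obb = stats.ttest_rel(obb_approach1, obb_approach2)
t_seg, p_seg = stats.ttest_rel(seg_approach1, seg_approach2)
print(f"OBB: t = {t_obb:.2f}, p = {p_obb:.2f}")    # about -0.93, p ~ 0.42
print(f"Seg: t = {t_seg:.2f}, p = {p_seg:.4f}")    # about -7.93, p ~ 0.004
```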

3.4. Evaluation of Snow Coverage Percentage (SCP) Estimation

To evaluate the accuracy of snow coverage percentage (SCP) estimation, we analyzed the performance of both Approach 1 (YOLOxl with two classes: SP and USP) and Approach 2 (YOLOn with a single SP class). Given the detection performance of both approaches, we opted to utilize OBB (dual) in Approach 1 and OBB (single) in Approach 2. Using the ground truth images, we calculated the reference SCP values based on Equation (1), and the results of the area estimation using both approaches are illustrated in Figure 4.
A detailed analysis of four representative samples reveals that Approach 1 consistently provided more accurate SCP estimations than Approach 2, particularly under challenging conditions. For instance, in Sample #1, the SCP error in Approach 1 was only 9%, which is negligible in terms of operational decision making. However, Approach 2 incorrectly estimated the SCP as 0% due to variations in lighting conditions and the snow’s thickness on the panel, which affected the pixel-intensity-based classification. A similar issue was observed in Sample #4, where Approach 2 again failed to detect snow coverage, leading to an SCP of 0%, while Approach 1 accurately estimated the snow-covered area. In contrast, in samples #2 and #3, both approaches performed similarly, with Approach 1 still slightly outperforming Approach 2, indicating that under optimal lighting and snow distribution, both methods produce reliable SCP estimates.
The observed differences in performance stem from the fundamental differences in the detection methodologies of the two approaches. Approach 1 explicitly detects both SP and USP classes, minimizing the impact of lighting variations and different snow thickness levels. This allows for more robust SCP estimation, even under challenging conditions. Conversely, Approach 2 relies on pixel intensity thresholding, which assumes consistent lighting conditions. As a result, Approach 2 is highly sensitive to variations in illumination and snow thickness, leading to misclassification errors in cases where snow-covered panels appear darker or lighting conditions obscure the contrast between snow and panel surfaces. Despite these limitations, Approach 2 remains advantageous for real-time applications due to its lower computational cost and faster inference speed. However, for cases where accuracy in SCP estimation is crucial for decision-making, Approach 1 is preferable as it provides more reliable and consistent results.

4. Discussion

The findings of this study highlight the importance of accurate and real-time snow coverage estimation for enhancing solar power generation efficiency in cold and snow-prone regions. Large-scale solar farms, particularly those in northern climates, experience significant energy losses due to snow accumulation on panels, making automated monitoring systems crucial for optimizing maintenance and power forecasting. In this paper, we introduce two distinct approaches, each designed for a specific application. Approach 1 utilizes the high-precision YOLOxl model to detect and classify both solar panels and uncovered solar panel regions. This dual-class detection requires more computational resources compared to Approach 2, which uses a single-class detection model. Additionally, Approach 1 incorporates OBBs to improve detection accuracy, introducing extra geometric transformations that increase the computational load. Post-processing steps further add to this overhead, as SCP is estimated using an IoU calculation between detected uncovered solar panel regions and the total solar panel area. These factors contribute to the higher computational cost of Approach 1, making it more suitable for strategic energy forecasting rather than real-time deployment. In contrast, Approach 2 employs the lightweight YOLO Nano model with a simplified single-class detection approach, using pixel-intensity-based thresholding for SCP estimation. This results in significantly lower computational demands, making Approach 2 an efficient alternative for real-time monitoring and snow removal planning. By integrating these models into operational workflows, solar farm operators can reduce downtime, minimize manual interventions, and improve overall energy output during winter months.
Our evaluation of detection performance and SCP estimation reveals the key strengths and limitations of each approach. Approach 1, utilizing a two-class YOLOxl model, outperforms Approach 2 in terms of detection accuracy, particularly in differentiating uncovered solar panels (USP) from snow-covered regions. The explicit classification of both SP and USP reduces false positives, leading to more accurate SCP estimation across different lighting and snow conditions. However, this approach is computationally intensive, making it less practical for real-time applications. Conversely, Approach 2 offers a lightweight and fast alternative, making it ideal for real-time decision-making in snow removal operations. The downside, however, is its sensitivity to lighting variations and reduced accuracy in SCP estimation, particularly under challenging conditions where snow thickness and ambient light affect pixel intensity analysis. The fixed thresholding mechanism used in Approach 2 further contributes to misclassifications, leading to zero SCP estimations in some cases. For example, the testing set contains some images with different snow textures and poorly lit scenes, for which the pre-set threshold yields low accuracy, as observed in Figure 4, Samples #1 and #4.
The effectiveness of the proposed approaches depends on the spatial arrangement of solar panels, particularly in terms of orientation and shape variations. To address this, we employ OBBs, which enhance detection accuracy for panels positioned at different angles. Additionally, instance segmentation techniques allow for precise shape detection, improving snow coverage estimation even in non-uniform panel layouts. Our dataset includes drone imagery captured from diverse perspectives to improve model generalization. However, extreme variations in panel shape, such as curved designs, may require further model adaptation and additional training data. Future work will explore incorporating geometric transformations to enhance robustness across different solar farm configurations.
A direct comparison between our drone-based approach and fixed-camera-based snow detection highlights key advantages of using aerial imagery. Fixed-camera solutions, while effective in localized monitoring, provide a limited field of view, requiring multiple cameras for large-scale farms, limiting their scalability. Our drone-based approach achieves higher adaptability across various solar farm configurations, providing improved scalability and enhanced accuracy in detecting snow-covered and uncovered regions. Additionally, our high-precision Approach 1 (YOLOxl-based) outperforms the fixed-camera method in detection accuracy, achieving a precision of 89% and an F1-score of 88% compared to the 81% accuracy reported by Araji et al., 2024 [13].
Moving forward, future research will focus on enhancing both approaches to improve overall system robustness. Segmentation-based models, while useful, underperformed compared to OBB-based models, suggesting that additional improvements in segmentation techniques are necessary to increase detection reliability. Efforts will also be directed toward making Approach 2 more robust to lighting variations. Another key direction for improvement is the incorporation of thermal imaging, which can provide additional contrast and features that are less affected by ambient light changes. By integrating thermal and RGB images, we expect to enhance the discrimination of snow-covered panels from uncovered ones, leading to more reliable SCP estimations.

5. Conclusions

In this study, we developed and evaluated two deep-learning-based approaches for automated snow coverage estimation on solar panels, addressing the challenges faced by large-scale solar farms in cold and snow-prone regions. The proposed methods—one focusing on high-precision estimation (Approach 1) and the other on real-time decision-making (Approach 2)—offer valuable solutions for improving power forecasting, optimizing snow removal strategies, and reducing operational downtime in winter conditions.
Our evaluation demonstrated that Approach 1 (YOLOxl with two-class detection and IoU-based area calculation) provides superior accuracy in detecting snow-covered and uncovered solar panels, making it well-suited for strategic decision making and long-term power forecasting. However, this approach is computationally expensive, limiting its applicability for real-time monitoring. On the other hand, Approach 2 (YOLO Nano with single-class detection and intensity map-based area calculation) enables real-time snow coverage estimation, offering a lightweight and fast alternative for immediate operational response. Despite its efficiency, Approach 2 is more sensitive to lighting variations and has lower detection accuracy, particularly in challenging conditions where snow thickness and ambient illumination affect pixel-intensity-based classification.
This study advances prior snow detection research by introducing a drone-based framework that overcomes the limitations of fixed-camera systems, which are restricted by stationary viewpoints and scalability issues. Additionally, while previous segmentation-based models classify snow presence into discrete categories, our method provides a continuous SCP estimation for more precise quantification. By leveraging OBBs and a dual-model strategy, our framework balances accuracy and efficiency, making it a scalable and adaptable solution for solar farm monitoring.
To further enhance the performance of both approaches, future research will focus on improving segmentation-based models, which currently exhibit lower detection accuracy compared to OBB-based models. Additionally, we aim to make Approach 2 more robust to lighting variations. Furthermore, we plan to incorporate thermal imaging, which can enhance feature extraction and improve the ability to distinguish between snow-covered and uncovered solar panels, thereby improving SCP estimation accuracy under varying environmental conditions.

Author Contributions

Conceptualization, A.S.; methodology, A.S. and A.D.; software, A.A., Z.M. and A.M.; validation, A.A. and A.M.; formal analysis, A.S., A.A., Z.M. and A.M.; investigation, A.D.; resources, A.S. and A.D.; data curation, A.A. and Z.M.; writing—original draft, A.S., A.A., Z.M. and A.M.; visualization, A.M. and A.A.; supervision, A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The full annotated dataset can be found at: https://github.com/RSSL-MTU/RSSL-MTU-Solar-Panel-Snow-Coverage, accessed on 16 March 2025.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jackson, N.D.; Gunda, T. Evaluation of extreme weather impacts on utility-scale photovoltaic plant performance in the United States. Appl. Energy 2021, 302, 117508. [Google Scholar]
  2. Abou Yassine, A.H.; Khoshbakhtnejad, E.; Sojoudi, H. Economics of Snow Accumulation on Photovoltaic Modules. Energies 2024, 17, 2962. [Google Scholar] [CrossRef]
  3. Skoplaki, E.; Palyvos, J. On the temperature dependence of photovoltaic module electrical performance: A review of efficiency/power correlations. Sol. Energy 2009, 83, 614–624. [Google Scholar] [CrossRef]
  4. Perwez, A.; Ahmaed, A.; Li, D.; Zheng, X.; Qin, G. Performance enhancement of photovoltaic/thermal system using dimpled channel subjected to forced convection. Renew. Energy 2024, 237, 121851. [Google Scholar] [CrossRef]
  5. Xiao, Y.; Bao, Y.; Yu, L.; Zheng, X.; Qin, G.; Chen, M.; He, M. Ultra-stable carbon quantum dot nanofluids as excellent spectral beam splitters in PV/T applications. Energy 2023, 273, 127159. [Google Scholar] [CrossRef]
  6. Wickett, S.; Dyreson, A. Trends in Solar PV Growth in Snowy Climates and Impact on Resource Adequacy. In Proceedings of the 2023 IEEE 50th Photovoltaic Specialists Conference (PVSC), San Juan, PR, USA, 11–16 June 2023; pp. 1–10. [Google Scholar] [CrossRef]
  7. Pawluk, R.E.; Chen, Y.; She, Y. Photovoltaic electricity generation loss due to snow–A literature review on influence factors, estimation, and mitigation. Renew. Sustain. Energy Rev. 2019, 107, 171–182. [Google Scholar]
  8. Heidari, N.; Gwamuri, J.; Townsend, T.; Pearce, J.M. Impact of snow and ground interference on photovoltaic electric system performance. IEEE J. Photovoltaics 2015, 5, 1680–1685. [Google Scholar]
  9. Chutani, A.; Dyreson, A.; Burnham, L.; Lee, K. Snow Sensing for Photovoltaic Single Axis Tracker Systems. In Proceedings of the 2023 IEEE 50th Photovoltaic Specialists Conference (PVSC), San Juan, PR, USA, 11–16 June 2023; pp. 1–4. [Google Scholar]
  10. Zhang, X.; Araji, M.T. Snow loss modeling for solar modules using image processing and deep learning. Sustain. Energy Grids Netw. 2023, 34, 101036. [Google Scholar]
  11. Braid, J.L.; Riley, D.; Pearce, J.M.; Burnham, L. Image analysis method for quantifying snow losses on PV systems. In Proceedings of the 2020 47th IEEE Photovoltaic Specialists Conference (PVSC), Calgary, ON, Canada, 15 June–21 August 2020; pp. 1510–1516. [Google Scholar]
  12. Ozturk, O.; Hangun, B.; Eyecioglu, O. Detecting snow layer on solar panels using deep learning. In Proceedings of the 2021 10th International Conference on Renewable Energy Research and Application (ICRERA), Istanbul, Turkey, 26–29 September 2021; pp. 434–438. [Google Scholar]
  13. Araji, M.T.; Waqas, A.; Ali, R. Utilizing deep learning towards real-time snow cover detection and energy loss estimation for solar modules. Appl. Energy 2024, 375, 124201. [Google Scholar]
  14. Wang, Z.; Zheng, P.; Bahadir Kocer, B.; Kovac, M. Drone-Based Solar Cell Inspection With Autonomous Deep Learning. In Infrastructure Robotics: Methodologies, Robotic Systems and Applications; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2024; pp. 337–365. [Google Scholar]
  15. Meribout, M.; Tiwari, V.K.; Herrera, J.P.P.; Baobaid, A.N.M.A. Solar panel inspection techniques and prospects. Measurement 2023, 209, 112466. [Google Scholar] [CrossRef]
  16. Park, J.; Lee, D. Precise Inspection Method of Solar Photovoltaic Panel Using Optical and Thermal Infrared Sensor Image Taken by Drones. In Proceedings of the IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2019; Volume 611, p. 012089. [Google Scholar]
  17. Al-Dulaimi, A.A.; Guneser, M.T.; Hameed, A.A.; Márquez, F.P.G.; Fitriyani, N.L.; Syafrudin, M. Performance analysis of classification and detection for PV panel motion blur images based on deblurring and deep learning techniques. Sustainability 2023, 15, 1150. [Google Scholar] [CrossRef]
  18. Florida Drone Supply. Parrot Anafi USA Gov Edition; Florida Drone Supply: Fort Myers, FL, USA, 2025. [Google Scholar]
  19. Khanam, R.; Hussain, M. Yolov11: An overview of the key architectural enhancements. arXiv 2024, arXiv:2410.17725. [Google Scholar]
  20. Lin, T.Y.; Maire, M.; Belongie, S.; Bourdev, L.; Girshick, R.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014. [Google Scholar]
  21. Hafiz, A.M.; Bhat, G.M. A survey on instance segmentation: State of the art. Int. J. Multimed. Inf. Retr. 2020, 9, 171–189. [Google Scholar]
  22. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437. [Google Scholar]
  23. Ultralytics. 2025. Available online: https://github.com/ultralytics/ultralytics (accessed on 16 March 2025).
Figure 1. Sample images from the collected dataset and their corresponding visualized ground truth annotations.
Figure 2. Approach 2's snow coverage detection process for solar panels. Each row represents a detected solar panel in the image. From left to right: (1) original image, (2) binary mask generated by YOLO to isolate the solar panel, (3) extracted region of interest (ROI), and (4) intensity-based mask highlighting snow-covered areas.
Figure 3. Visualized sample images for detection, segmentation, and ground truth annotations.
Figure 4. Visualized snow coverage percentage (SCP) estimation results of both approaches compared with the ground truth for representative samples.
Table 1. Hyperparameters used to train all models. x and n are x-large and nano, respectively, referring to the size of the model. Cls refers to the number of classes.

Hyperparameter | Description | OBB(x, 2Cls) | Seg(x, 2Cls) | OBB(n, 1Cls)
batch | Batch size | 8 | 8 | 8
epochs | Total number of training epochs | 53 | 247 | 93
optimizer | SGD, Adam, AdamW, etc. | Auto | Auto | Auto
lr0 | Initial learning rate | 0.01 | 0.01 | 0.01
lrf | Final learning rate | 0.0001 | 0.0001 | 0.0001
weight_decay | L2 regularization term | 0.0005 | 0.0005 | 0.0005
warmup_epochs | For learning rate warmup | 3 | 3 | 3
warmup_momentum | Initial momentum for warmup phase | 0.8 | 0.8 | 0.8
Table 2. Performance evaluation of the oriented bounding box (OBB)-based and segmentation (Seg)-based YOLOv11 [23] detection models, where Cls refers to the number of classes used in each model.

Model | Precision (SP / USP) | Recall (SP / USP) | F1-Score (SP / USP) | mAP50 (SP / USP)
Approach 1: OBB(dual) | 0.89 / 0.89 | 0.75 / 0.88 | 0.82 / 0.88 | 0.86 / 0.93
Approach 1: Seg(dual) | 0.67 / 0.91 | 0.48 / 0.77 | 0.56 / 0.83 | 0.49 / 0.85
Approach 2: OBB(single) | 0.93 / – | 0.75 / – | 0.83 / – | 0.85 / –
Approach 2: Seg(single) | 0.83 / – | 0.75 / – | 0.79 / – | 0.79 / –

