Article

Semantic Segmentation Algorithm-Based Calculation of Cloud Shadow Trajectory and Cloud Speed

Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Energies 2022, 15(23), 8925; https://doi.org/10.3390/en15238925
Submission received: 24 October 2022 / Revised: 23 November 2022 / Accepted: 23 November 2022 / Published: 25 November 2022
(This article belongs to the Topic Artificial Intelligence and Sustainable Energy Systems)

Abstract

Cloud cover is an important factor affecting solar radiation and causes fluctuations in solar energy production. Therefore, real-time recognition and prediction of cloud cover, together with adjustment of the angle of photovoltaic panels to improve power generation, are important research areas in the field of photovoltaic power generation. In this study, several methods, namely the depth-camera distance-measurement principle, a semantic segmentation algorithm, and a long short-term memory (LSTM) network, were combined for cloud observation. The semantic segmentation algorithm was applied to identify and extract cloud contour lines, determine feature points, and calculate cloud heights and the geographic locations of cloud shadows. The LSTM network was used to predict the trajectory and speed of cloud movement, achieving accurate, real-time detection and tracking of the clouds and the Sun. Based on the results of these methods, the shadow area cast by a cloud on the ground was calculated. The LSTM recurrent network was also used to predict the track and moving speed of clouds from the cloud centroid data extracted from cloud images at different times. The findings of this study can provide insights for establishing a low-cost intelligent system for monitoring and predicting cloud cover and power generation.

1. Introduction

Solar energy is a widely distributed and sustainable source of energy worldwide. Photovoltaic power generation technology can directly convert light energy into electrical energy through the photovoltaic effect, and it has the advantages of no pollution, safe use, and convenient maintenance. With continuous technical improvement and cost reduction, photovoltaic power generation has increased rapidly. In 2005, the global cumulative installed photovoltaic capacity exceeded 5 GW. According to “Snapshot of Global PV Markets 2020” [1] issued by the International Energy Agency, by the end of 2019, the global installed capacity exceeded 600 GW, and the average annual growth rate was 41%. In the past three years (2019–2022), the annual installed capacity has exceeded 100 GW. Figure 1 shows the global installed photovoltaic capacity over the past 10 years (2011–2019).
Large-scale photovoltaic projects require real-time monitoring of power quality and operating information while maintaining optimal scheduling. It is therefore essential to ensure accurate forecasting of generation capacity, especially short-term and real-time forecasting [2]. Consequently, dynamically adjusting the solar panels according to the weather type, cloud occlusion, and the radiation angle of sunlight to maximize the power generated by photovoltaic modules has always been an important research topic in the field of photovoltaic power generation [3,4,5]. Changes in photovoltaic power generation are almost proportional to changes in radiation intensity, which is directly affected by cloud occlusion. Different weather types and cloud cover lead to considerable changes in the power generated by photovoltaic systems and to power grid fluctuations [6,7,8,9].
However, in the field of photovoltaic power generation, it has always been challenging to accurately predict the weather type and cloud movement [10,11].
Cloud cover is usually analyzed based on the shape, size, distribution, evolution, and height of the clouds. Cloud shapes change easily; therefore, clouds should be monitored continuously and in real time. Traditional manual observation is based on subjective judgment and observation experience, so it cannot accurately predict cloud shading. With research advances, observation technology has developed significantly. Contemporary observation technologies for different space environments include satellite- or space-based equipment for atmospheric observations and ground-based equipment for near-Earth observations. In large-scale ground-based photovoltaic power stations, power prediction is mainly based on short-term and real-time monitoring of the weather in the plant area. The monitoring method and the ground equipment used for cloud observation should therefore be selected considering the area of the power station as well as the speed and economic cost of the equipment. Table 1 lists the various cloud height observation methods.
Among the methods mentioned in Table 1, cloud meter and radar measurement are widely used; however, these methods have the disadvantages of high cost and inconsistent measurement results; moreover, it is challenging to obtain the edge profiles of clouds and predict the cloud shading range. Therefore, in this study, we developed and investigated a new low-cost prediction method that combined sky images and machine learning methods to obtain an accurate cloud height, extract the edge contours of clouds, measure the shade range (i.e., cloud cover), and predict and analyze the moving direction and speed of the clouds.
In this study, the weather type was detected and identified in real-time by using artificial intelligence algorithms and deep learning networks. In sky images, the existence of cloud shielding, range of shielding, and moving speed and track of clouds are determined to obtain insights for guiding the angle of photovoltaic panels and increasing power generation, providing a basis for the real-time prediction of photovoltaic power generation.

2. Predicting Cloud Shadow Moving Trajectory and Speed

2.1. Method for Cloud Monitoring

For cloud monitoring, multiple cloud cameras are distributed across a photovoltaic power station field to obtain sky images that contain high-resolution spatiotemporal information about solar radiation. This information is processed by software in the control room, and the predicted solar energy value is then obtained. Several researchers have conducted related studies. In 2013, Tao et al. used a pair of CCD (charge-coupled device) digital cameras with a baseline length of 60 m to form a binocular cloud-base-height measurement system [12,13,14]. The Harris corner detector was used to extract corner features from the images, the relative disparity was obtained from the matched feature points, and the principle of photogrammetry was used to calculate the cloud-base height. In 2013, Zhang et al. used industrial cameras and image processing technologies for cloud monitoring [15,16]. Cloud height was calculated based on the dual-camera distance-measurement principle, and the same feature points were matched using CSIFT (color scale-invariant feature transform) and SIFT (scale-invariant feature transform) methods to detect the cloud speed. In 2015, Peng et al. used support vector machine classifiers to identify cloud clusters from multiple TSI images and evaluated the essential height and movement of each cloud cluster [17,18,19,20,21]. In 2018, the German DLR Solar Energy Research Institute developed the WobaS system [22,23,24,25,26]. This system comprises 2–4 cloud cameras that capture sky images; the images are evaluated, and the cloud speed and future distribution are calculated, enabling successful prediction of the solar radiation values over the next 15 min.
Typical cloud detection and measurement methods based on dual imaging systems use similar hardware: two or more cameras (especially fisheye cameras with large viewing angles and TSI devices) and the similar-triangle principle to calculate the distance between the cloud and the cameras (the depth-camera principle). These methods can thus achieve high-resolution images at a low equipment cost. The software, however, typically relies on conventional machine learning algorithms or early deep learning algorithms [27,28,29,30,31], whose recognition accuracy and feature matching for clouds and the Sun are insufficient. The results showed significant calculation errors in cloud parameters such as cloud height, cloud area, cloud shadow, and cloud speed.
Deep learning algorithms are mainly divided into three categories:
A convolutional neural network (CNN) is commonly used for image data analysis and processing, such as image classification, target detection, and semantic segmentation (e.g., Mask R-CNN and YOLACT). A recurrent neural network (RNN), such as a long short-term memory (LSTM) network, is often used for text analysis or natural language processing. A generative adversarial network (GAN) is typically used for data generation or unsupervised learning applications, such as generating data similar to the original data; 3D-GAN is used to generate high-quality 3D objects [32,33,34,35]. In 2018, He et al. proposed the Mask R-CNN method, which extends Faster R-CNN with a mask prediction branch [36,37,38].
In this study, the edge contours of the clouds and the feature points on those edges were first obtained by applying the PSPNet semantic segmentation algorithm [39] to the images captured by the CMOS imaging system. The LSTM algorithm was then used to obtain cloud parameters such as the cloud height and the moving track and speed of the cloud shadow, which were combined with geographic information to predict the cloud shadow occlusion on the ground.

2.2. Cloud Edge Contour Extraction and Feature Point Recognition

This study requires distinguishing the cloud and non-cloud parts of a picture, that is, classifying each pixel to form the boundary of a cloud. The sky and clouds have different colors; therefore, color features can be used to classify each pixel and form a boundary. The texture features of the sky and clouds also differ; thus, texture features can be used for classification as well. Texture features can be extracted using the gray-gradient co-occurrence matrix (GGCM).
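As an illustration of this idea, the following minimal Python sketch builds a gray-gradient co-occurrence matrix with OpenCV and NumPy. The number of quantization bins, the Sobel-based gradient, the normalization, and the example file name are assumptions for illustration rather than the exact settings used in this study.

```python
import cv2
import numpy as np

def gray_gradient_cooccurrence(gray, gray_levels=16, grad_levels=16):
    """Build a gray-gradient co-occurrence matrix (GGCM).

    Each pixel contributes one count at (quantized gray value,
    quantized gradient magnitude); the matrix is then normalized
    so that its entries sum to 1.
    """
    gray = gray.astype(np.float32)
    # Gradient magnitude from Sobel derivatives.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    grad = cv2.magnitude(gx, gy)

    # Quantize gray values (8-bit assumed) and gradient magnitudes to fixed bins.
    g_q = np.clip((gray / 256.0 * gray_levels).astype(int), 0, gray_levels - 1)
    d_q = np.clip((grad / (grad.max() + 1e-6) * grad_levels).astype(int), 0, grad_levels - 1)

    ggcm = np.zeros((gray_levels, grad_levels), dtype=np.float64)
    np.add.at(ggcm, (g_q.ravel(), d_q.ravel()), 1.0)
    return ggcm / ggcm.sum()

# Example: texture statistics for a sky-image patch (hypothetical file name).
patch = cv2.imread("sky_patch.png", cv2.IMREAD_GRAYSCALE)
if patch is not None:
    ggcm = gray_gradient_cooccurrence(patch)
    energy = np.sum(ggcm ** 2)  # one simple GGCM statistic
    entropy = -np.sum(ggcm[ggcm > 0] * np.log(ggcm[ggcm > 0]))
    print(energy, entropy)
```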
Several types of neural networks can realize semantic segmentation; herein, the PSPNet semantic segmentation algorithm (Figure 2) [11] was used for classification. PSPNet adopts a spatial pyramid pooling structure, shown as a Netscope visualization in Figure 3.
PSPNet modifies the basic ResNet architecture and uses dilated (atrous) convolution. Features are pooled and then processed at the same resolution throughout the encoder network (one-fourth of the original input image) until they reach the spatial pyramid pooling module. An auxiliary loss in an intermediate layer of ResNet helps optimize the overall learning, and the global context is aggregated by the spatial pyramid pooling layer on top of the modified ResNet encoder.
In this study, 3800 cloud pictures were annotated using the Labelme software and given as input to the network model for training. The training process is as follows:
Step 1: The weights and biases of the neurons in each layer are initialized.
Step 2: Forward propagation: the image is converted into an RGB matrix input, a linear combination is computed from the weights and biases of the neurons in each layer, and the activation function is then applied to the linear combination.
Step 3: The loss function is used to calculate the error between the forward-propagation output and the annotated images, and the weights and biases of the neurons in each layer are optimized using the back-propagation algorithm according to this error.
Step 4: Steps 2 and 3 are repeated iteratively until the error falls below a specified value, and the weights and biases of each layer are saved to obtain a well-trained model.
The cloud images are given as input to the model, and the training results are shown in Figure 4.
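The following minimal PyTorch sketch illustrates the training loop described in Steps 1–4. The random stand-in data, the small stand-in network (a full PSPNet with a ResNet backbone and pyramid pooling module would take its place), and the hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: in the paper, 3800 Labelme-annotated sky images are used.
# Random tensors keep this sketch self-contained and runnable.
images = torch.rand(16, 3, 256, 256)          # RGB sky images
masks = torch.randint(0, 2, (16, 256, 256))   # 0 = sky, 1 = cloud
loader = DataLoader(TensorDataset(images, masks), batch_size=4, shuffle=True)

# A small stand-in encoder; the paper's PSPNet would replace this module.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),                      # per-pixel logits for 2 classes
)
criterion = nn.CrossEntropyLoss()             # Step 3: pixel-wise loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed settings

for epoch in range(5):                        # Steps 2 and 3 repeated (Step 4)
    for x, y in loader:
        logits = model(x)                     # Step 2: forward propagation
        loss = criterion(logits, y.long())    # error against annotations
        optimizer.zero_grad()
        loss.backward()                       # Step 3: back propagation
        optimizer.step()                      # update weights and biases
torch.save(model.state_dict(), "cloud_seg.pth")  # Step 4: save trained model
```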

2.3. Cloud Movement Trajectory and Velocity Recognition Based on LSTM Network

Cloud movement can be treated as a time-series prediction problem. Contemporary deep learning methods for such problems mainly use RNNs. In this study, an LSTM network model was used, which comprises an input layer, a hidden layer, and an output layer. The internal structure of the hidden layer is shown in Figure 5.
In Figure 5, t − 1, t, and t + 1 are consecutive time steps, x is the input sample, $s_t$ is the memory of the network at time t, and $s_t = f(W s_{t-1} + U x_t)$, where W is the weight applied to the previous state, U is the weight applied to the input sample at the current time step, and V is the weight of the output.
For initialization, the start time is taken as t = 1, the initial state is $s_0 = 0$, and W, U, and V are initialized randomly; Equation (1) is then used for prediction.
$h_1 = U x_1 + W s_0, \quad s_1 = f(h_1), \quad o_1 = g(V s_1)$ (1)
where f and g are activation functions.
As time progresses, the state $s_1$ serves as the memory from the first time step, and these parameters participate in the next prediction, as shown in Equation (2).
$h_2 = U x_2 + W s_1, \quad s_2 = f(h_2), \quad o_2 = g(V s_2)$ (2)
Finally, the output at time t is obtained using Equation (3).
$h_t = U x_t + W s_{t-1}, \quad s_t = f(h_t), \quad o_t = g(V s_t)$ (3)
LSTM updates the weight parameters W, U, and V using the loss function. For each time step, the network produces an error value $e_t$; the total error E and the gradients with respect to the weights are calculated using Equation (4).
$E = \sum_t e_t, \quad \frac{\partial E}{\partial U} = \sum_t \frac{\partial e_t}{\partial U}, \quad \frac{\partial E}{\partial V} = \sum_t \frac{\partial e_t}{\partial V}, \quad \frac{\partial E}{\partial W} = \sum_t \frac{\partial e_t}{\partial W}$ (4)
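The recurrence in Equations (1)–(3) can be written compactly in code. The following NumPy sketch is illustrative only: the dimensions, the random weights, and the choice of tanh and identity for f and g are assumptions (a full LSTM adds gating on top of this basic recurrence).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 2-D input (longitude, latitude), 8 hidden units.
n_in, n_hidden, n_out = 2, 8, 2
U = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input weights
W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # recurrent weights
V = rng.normal(scale=0.1, size=(n_out, n_hidden))     # output weights

f = np.tanh           # hidden activation (assumed)
g = lambda z: z       # output activation (identity, assumed)

def rnn_forward(xs):
    """Run h_t = U x_t + W s_{t-1}, s_t = f(h_t), o_t = g(V s_t) over a sequence."""
    s = np.zeros(n_hidden)            # s_0 = 0
    outputs = []
    for x in xs:
        h = U @ x + W @ s             # linear combination, Equations (1)-(3)
        s = f(h)                      # memory state
        outputs.append(g(V @ s))      # output at time t
    return np.array(outputs)

# Example: a short sequence of cloud-centroid positions (hypothetical values).
sequence = rng.normal(size=(10, n_in))
print(rnn_forward(sequence).shape)    # (10, 2)
```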

2.4. Calculating the Cloud Height and Shadow

2.4.1. Method of Calculating the Cloud Height and Shadow

Two cameras with the same internal parameters were placed in parallel so that their optical axes were parallel to each other and both cameras faced vertically upward. The corresponding coordinate axes of the two cameras were collinear, and the two imaging planes were coplanar. The optical centers of the two cameras were separated by a fixed distance d. Figure 6 shows a schematic of binocular stereo vision.
In this camera arrangement, the coordinate system of camera $C_1$ is $O_1$-$X_1Y_1Z_1$, that of camera $C_2$ is $O_2$-$X_2Y_2Z_2$, the focal length of both cameras is $f$, and the distance between the cameras is $d$. The coordinates of a space point p photographed simultaneously by the two cameras are $(x_1, y_1, z_1)$ in the $C_1$ coordinate system and $(x_2, y_2, z_2)$ in the $C_2$ coordinate system. The image coordinates of p are $(u_1, v_1)$ in camera $C_1$ and $(u_2, v_2)$ in camera $C_2$. By similar triangles, $u_1/x_1 = v_1/y_1 = f/z_1$ for camera $C_1$, and likewise for camera $C_2$. From this, the depth-camera relations in Equations (5) and (6) follow.
$\frac{f}{z_1} = \frac{u_1}{x_1} = \frac{v_1}{y_1}, \qquad \frac{f}{z_2} = \frac{u_2}{x_2} = \frac{v_2}{y_2}$ (5)
$X = x_1 = x_2 + d, \quad Y = y_1 = y_2, \quad Z = z_1 = z_2$ (6)
These two equations are combined as follows.
$x_1 - x_2 = d, \qquad x_1 = \frac{z_1}{f} u_1 = \frac{z}{f} u_1, \qquad x_2 = \frac{z_2}{f} u_2 = \frac{z}{f} u_2$ (7)
The binocular 3D vision method is used to reconstruct the 3D space points using Equations (8)–(12).
$d = \frac{z}{f}\,(u_1 - u_2)$ (8)
$X = x_1 = \frac{z}{f} u_1 = \frac{u_1}{u_1 - u_2}\, d$ (9)
$Y = y_1 = \frac{z}{f} v_1 = \frac{v_1}{u_1 - u_2}\, d$ (10)
$Z = \frac{f}{u_1 - u_2}\, d$ (11)
$S_{act} = \left(\frac{Z}{f}\right)^{2} S_{img}$ (12)
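As an illustration, the following Python sketch implements the reconstruction in Equations (8)–(12). It assumes that the image coordinates are expressed relative to the principal point and in the same units as the focal length; the numeric values in the example are hypothetical.

```python
def stereo_reconstruct(u1, v1, u2, f, baseline):
    """Reconstruct 3D coordinates of a point matched in two parallel cameras.

    u1, v1  : image coordinates in camera C1 (relative to the principal point)
    u2      : horizontal image coordinate of the same point in camera C2
    f       : focal length (same units as u, v, e.g. pixels)
    baseline: distance d between the two camera centers (metres)
    """
    disparity = u1 - u2              # Equation (8): d = (z/f)(u1 - u2)
    Z = f * baseline / disparity     # Equation (11): height of the point
    X = u1 * baseline / disparity    # Equation (9)
    Y = v1 * baseline / disparity    # Equation (10)
    return X, Y, Z

def ground_area_from_image_area(Z, f, area_img):
    """Equation (12): scale an image-plane area to the real area at height Z."""
    return (Z / f) ** 2 * area_img

# Example with purely illustrative values (focal length and pixel offsets are
# hypothetical); the 7.35 m baseline matches the experiment in Section 3.1.
X, Y, Z = stereo_reconstruct(u1=120.0, v1=80.0, u2=-100.0, f=1500.0, baseline=7.35)
print(round(Z, 2), "m")
```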

2.4.2. Calculation Method of Solar Irradiation Angle

To calculate the illumination angle of sunlight, the solar altitude and azimuth angles are first calculated. The relationship between the altitude angle, the local latitude, the declination angle, and the hour angle follows from the Sun–Earth geometry and is given by Equation (13):
$\sin \alpha = \sin \varphi \sin \delta + \cos \varphi \cos \delta \cos \omega$ (13)
where $\alpha$ is the solar altitude angle, $\varphi$ is the local latitude, $\delta$ is the declination angle, and $\omega$ is the hour angle.
The solar declination angle ($\delta$) is the angle between the Sun–Earth center line and the equatorial plane. As the Earth moves around the Sun, the declination angle changes accordingly. The declination angle is representative of the season, fluctuates between −23°26′ and +23°26′, and repeats this cycle every year. The approximate declination angle is calculated using Equation (14):
$\delta = 23.45^{\circ} \times \sin\left(360^{\circ} \times \frac{284 + n}{365}\right)$ (14)
where n represents the date serial number (based on 1 year), and it is in the range of 1–365. For a leap year, the value of n will be 1–366, and the denominator 365 will be changed to 366.
The azimuth is represented by $\gamma$ and can be understood as the angle between the local meridian and the shadow cast on the ground by a vertical rod, that is, the angle between the shadow produced by the incident sunlight and the local meridian. $\gamma$ is defined as 0° due north of the target and increases clockwise over the range 0–360°. Accordingly, the measurements were carried out clockwise, with the starting direction of the solar azimuth set due north of the reference object and the ending direction taken as the incident direction of the sunlight.
The relationship between the azimuth, altitude angle, declination angle, latitude, and hour angle is expressed using the following equations.
$\sin \gamma = \frac{\cos \delta \sin \omega}{\cos \alpha}, \qquad \cos \gamma = \frac{\sin \alpha \sin \varphi - \sin \delta}{\cos \alpha \cos \varphi}$ (15)
The solar hour angle $\omega$ in Equations (13) and (15) can be obtained using the following equations:
$\omega = 15^{\circ} \times (ST - 12)$ (16)
$ST = LT + Z$ (17)
where $ST$ is the true solar time, $LT$ is the local time, and $Z$ is the time-zone correction; the 24 h format is used to express time.
The projection area of a cloud on the ground is predicted from the calculated cloud height, the edge contour of the cloud, and the illumination angle of the sunlight.
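The following Python sketch combines Equations (13)–(17) into a single solar-position routine. The use of atan2 to resolve the azimuth quadrant and the zero time-zone correction in the example are assumptions for illustration; no longitude or equation-of-time correction is applied beyond Equation (17).

```python
import math

def solar_position(latitude_deg, day_of_year, local_time_h, time_zone_offset_h):
    """Solar altitude and azimuth from Equations (13)-(17).

    The azimuth is returned clockwise from due north (0-360 deg); the quadrant
    is resolved with atan2 from the sin/cos expressions of Equation (15).
    """
    # Equation (14): declination angle (degrees).
    delta = 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    # Equations (16)-(17): hour angle from the (approximate) true solar time.
    solar_time = local_time_h + time_zone_offset_h
    omega = 15.0 * (solar_time - 12.0)

    phi, delta_r, omega_r = map(math.radians, (latitude_deg, delta, omega))
    # Equation (13): solar altitude angle.
    sin_alt = (math.sin(phi) * math.sin(delta_r)
               + math.cos(phi) * math.cos(delta_r) * math.cos(omega_r))
    altitude = math.asin(sin_alt)

    # Equation (15): azimuth components, then measured clockwise from due north.
    sin_az = math.cos(delta_r) * math.sin(omega_r) / math.cos(altitude)
    cos_az = ((math.sin(altitude) * math.sin(phi) - math.sin(delta_r))
              / (math.cos(altitude) * math.cos(phi)))
    azimuth = (math.degrees(math.atan2(sin_az, cos_az)) + 180.0) % 360.0
    return math.degrees(altitude), azimuth

# Example: the flagpole site latitude (37.53 N) on 23 July (day 204), 16:40,
# with a time-zone correction assumed to be zero for illustration.
alt, az = solar_position(37.530085, 204, 16 + 40 / 60.0, 0.0)
print(round(alt, 2), round(az, 2))
```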

3. Results and Discussion

3.1. Verification Experiment and Results of Object Shadow Casting

Real-time measurement of clouds and cloud shadows is challenging; therefore, we used fixed objects such as a flagpole to replace clouds for the experiments. A local coordinate system was established with the flagpole as the origin. First, the relative position of the flagpole and the camera was estimated, and then the relative position of the flagpole and the shadow was determined. The estimated results were compared with the actual measurement to verify the effectiveness of the cloud shadow position calculation in the local coordinate system. There was only a rotation and translation transformation relationship between the local coordinate system and the world coordinate system; therefore, the effectiveness in the local coordinate system is equal to that in the world coordinate system. The experimental method is presented in Figure 7.
The experimental steps are as follows:
Step 1. Two adjustable leveling platforms are set up under the flagpole; a level ruler is placed on each platform, and the platforms are adjusted until level.
Step 2. The cameras are placed on a horizontal platform in parallel, and they capture images in a vertically upward position.
Step 3. The distance between the two leveling platforms is measured and recorded.
Step 4. The length and azimuth of the shadow is measured and recorded.
Step 5. The azimuth and distance of the flagpole relative to the two cameras are measured and recorded.
Step 6. The length and azimuth of the shadow and the azimuth and distance of the flagpole relative to the two cameras are calculated.
Step 7. The local coordinate system is built with the flagpole as the origin, and the calculation results and measurement results are expressed in the local coordinate system for comparison.
The relative position of the flagpole and the cameras can be determined from the distance between the flagpole and each camera and from the angles between the flagpole–camera lines and the line connecting the two cameras. First, these angles, denoted α and β, are calculated (Figure 8). α and β are computed from the images captured by cameras A and B. Because camera correction was carried out, the line connecting the two observation points can be taken as the horizontal line passing through the center point of the photos taken by cameras A and B. Let O be the center point of the photo and P the imaging point of the flagpole vertex, with pixel coordinates (x0, y0) and (x, y), respectively. Then,
$\sin \alpha = \frac{y - y_0}{\sqrt{(x - x_0)^2 + (y - y_0)^2}}, \qquad \alpha = \arcsin(\sin \alpha)$
The α obtained using the above equation is in agreement with that obtained in Figure 7. β was obtained in a similar manner.
Using the obtained values of α and β, the distance between the flagpole and the cameras was calculated. As shown in Figure 7, with DE perpendicular to AB, DE = AB/(cot α + cot β). Then, AD = DE/tan α, where AD is the horizontal distance between observation point A and flagpole vertex C. Similarly, the distance BD between the flagpole and observation point B can be calculated. Thus, the relative positions of the flagpole and the cameras were determined.
Because the Sun is sufficiently far from the Earth, the sunlight reaching the Earth can be considered parallel. Therefore, for the same object, the length of its shadow is determined by the solar altitude angle: a larger solar altitude angle implies a shorter shadow, and a smaller solar altitude angle implies a longer shadow. As shown in Figure 8, when the cloud height H and the solar altitude angle α are known, the shadow length is d = H/tan α.
The direction in which the shadow extends is opposite to the direction of the Sun; thus, the direction of the shadow can be calculated from the solar azimuth, as shown in Figure 9, where α is the azimuth of the Sun (0° due north) and β is the shadow azimuth (0° due north), with β = α − 180°.
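The following Python sketch strings together the relations used in this section: the pixel-based angle calculation, the triangulation DE = AB/(cot α + cot β), the shadow length d = H/tan α, and the shadow azimuth β = α − 180°. The numeric example reuses the values reported later in this section (7.35 m baseline, calculated angles of about 28.9°, flagpole height 16.3039 m, solar altitude 27.2826°, solar azimuth 275.2011°); the function names are illustrative.

```python
import math

def angle_from_pixels(x, y, x0, y0):
    """Angle between the line O-P and the horizontal image axis, in degrees."""
    return math.degrees(math.asin((y - y0) / math.hypot(x - x0, y - y0)))

def triangulate(baseline, alpha_deg, beta_deg):
    """Figure 7 geometry: DE = AB / (cot a + cot b), AD = DE / tan a."""
    a, b = math.radians(alpha_deg), math.radians(beta_deg)
    de = baseline / (1.0 / math.tan(a) + 1.0 / math.tan(b))
    ad = de / math.tan(a)   # horizontal distance from observation point A
    bd = de / math.tan(b)   # horizontal distance from observation point B
    return de, ad, bd

def shadow(height, sun_altitude_deg, sun_azimuth_deg):
    """Shadow length d = H / tan(alpha) and azimuth beta = sun azimuth - 180 deg."""
    length = height / math.tan(math.radians(sun_altitude_deg))
    azimuth = (sun_azimuth_deg - 180.0) % 360.0
    return length, azimuth

# Values from the experiment in this section.
print(triangulate(7.35, 28.93, 28.78))     # approx. (2.02, 3.66, 3.69) m
print(shadow(16.3039, 27.2826, 275.2011))  # approx. (31.61 m, 95.20 deg)
```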
The experimental contents and steps are as follows.
Two platforms were placed under the flagpole and adjusted to horizontal using a level ruler. The cameras were placed on the horizontal platforms with the lenses facing vertically upward to capture the top of the flagpole. The distance between the two horizontal platforms as well as the length and extension direction of the flagpole shadow on the ground were measured and recorded. Any distortion of the captured pictures was corrected, and the height H of the flagpole was calculated. The solar altitude angle α and azimuth β at that time were calculated according to the longitude and latitude of the shooting location and the shooting time, and the shadow length and extension direction were then calculated. A local coordinate system was built with the flagpole as the origin, and the calculated and measured results were expressed in this coordinate system for comparison.
The pictures taken by the left and right cameras are presented in Figure 10.
The pixel coordinates of the center point of the picture were (2144,1424); in the pictures taken by the left and right cameras, the pixels at the top of the middle flagpole were (2913,1849) and (1465,1797), respectively. Camera distance (baseline) was 7.35 m, the measured shadow length was 31.2 m, and the measured shadow orientation was 94° (0° due north). The angle between the connecting line of flagpole and the left camera and the connecting line of the two cameras was 28°, and the distance from the flagpole to the left camera was 3.9 m. The angle between the connecting line of the flagpole and the right camera and the connecting line of the two cameras was 30°, and the distance from the flagpole to the right camera was 3.8 m.
According to the principle of calculating the relative position between the flagpole and the cameras, the angle between the connecting line of the flagpole and the left camera and the connecting line of the two cameras was 28.93°, with an error rate of 3.32%. The distance from the left camera to the flagpole was 3.66 m, with an error rate of 6.15%. The angle of the connecting line between the flagpole and the right camera and the connecting line between the two cameras was 28.78°, with an error rate of 4.07%. The distance from the right camera to the flagpole was 3.69 m, with an error rate of 2.89%.
The calculated flagpole height was 16.3039 m. The longitude and latitude of the flagpole were 122.082920° E and 37.530085° N. The shadow was measured at 16:40:00 on 23 July 2021. The calculated solar altitude angle and azimuth angle were 27.2826° and 275.2011°, respectively.
According to the shadow length d = H/tanα, the calculated shadow length was 31.6117 m, and the error rate was ~1.32%.
Because the shadow azimuth equals the solar azimuth minus 180°, the calculated shadow azimuth was 95.2012°. Three groups of experiments were carried out, and the results are shown in Table 2.
After many experiments, the experimental data of the solar altitude and solar azimuth were compared with the reference data, and the average errors were 0.0568° and 0.0629°. Therefore, we believe that the experimental method for calculating the solar altitude and solar azimuth is reliable.

3.2. Verification Experiment and Results of Cloud Shadow Moving Track and Speed

For continuously moving clouds, the LSTM network is used to predict the moving direction and speed of the clouds from a set of continuous cloud images. Because the change in solar orientation also influences the cloud shadow position, the cloud shadow cannot be predicted directly. The proposed method therefore first predicts the cloud position and then calculates the cloud shadow position by combining the predicted position with the cloud height and solar orientation information. Because the cloud height changes, data reflecting changes in cloud height must also be included in the training. Using multiple groups of continuous cloud images, the cloud centroids and cloud heights were obtained as the training set for the LSTM network. The LSTM network introduces additional operations through its gating mechanism, which mitigates the vanishing-gradient problem.
After training the neural network, some of the data are used for prediction. Given the cloud centroids and cloud heights calculated from consecutive pictures as the input, the network predicts the cloud position and cloud height at a certain time in the future; the shadow position is then calculated by combining this prediction with the solar orientation information. The predicted position of the cloud is the position of its centroid. Because changes in cloud shape are irregular, the shape in the last input picture is used as the approximate shape of the prediction result.
Next, the prediction results were verified. For example, when the predicted position of the cloud was for 5 min later, then the cloud was photographed 5 min later to calculate the actual position and compare it with the predicted position.
From the captured photos, it was observed that the cloud contour changes with time; thus, contour information alone cannot adequately characterize a cloud (the cloud contours in pictures taken at different times differ). Therefore, this study used the centroid of the cloud contour to identify the location of the cloud.
To calculate the center of mass, n contour points with masses $m_1, m_2, \ldots, m_n$ are set on the x–O–y coordinate plane, with coordinates $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, respectively; these n particles form a particle system. Then,
$\bar{x} = \frac{M_y}{M} = \frac{\sum_{i=1}^{n} m_i x_i}{\sum_{i=1}^{n} m_i}, \qquad \bar{y} = \frac{M_x}{M} = \frac{\sum_{i=1}^{n} m_i y_i}{\sum_{i=1}^{n} m_i}$
where M is the total mass of the contour points, and $M_x$ and $M_y$ are the static moments of the particle system about the x- and y-axes, respectively; the point $(\bar{x}, \bar{y})$ is the required center of mass.
The centroid of the cloud was obtained by passing the coordinate data of the cloud contour to the image-moment functions of the OpenCV library in Python.
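A minimal sketch of this centroid computation with the OpenCV image-moment API is shown below. The contour coordinates are synthetic; the centroid is obtained from the contour moments (m10/m00, m01/m00), which is the standard OpenCV route to the center of mass described above (moments computed from the enclosed contour area rather than from discrete point masses).

```python
import cv2
import numpy as np

# Synthetic cloud contour (in practice it comes from the segmentation result).
contour = np.array([[100, 120], [180, 110], [210, 170], [150, 220], [95, 180]],
                   dtype=np.int32).reshape(-1, 1, 2)

m = cv2.moments(contour)      # spatial moments of the contour
cx = m["m10"] / m["m00"]      # x centroid, analogous to M_y / M
cy = m["m01"] / m["m00"]      # y centroid, analogous to M_x / M
print(cx, cy)
```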
The centroid data of the cloud were obtained by first recognizing the cloud edge contour in the cloud image through the UNet network and then calculating the centroid position from the edge contour, as shown in Figure 11.
From the images captured at 10 s intervals, we manually selected 11 images containing whole clouds. Applying the centroid acquisition method to these consecutive photos yields one data record containing 11 triples of information: cloud centroid longitude, cloud centroid latitude, and time, as shown in Figure 12.
In total, 1140 records of such data were collected; the first 1040 were used as the training dataset and the remaining 100 as the test dataset. Each record was divided into two parts: the centroid longitude and latitude from time 1 to time 10 were used as the training input, and the centroid longitude and latitude from time 2 to time 11 were used as the verification targets. Similarly, the centroid longitude and latitude from time 1 to time 10 of each record in the test dataset were used as the input, and the model output was the predicted centroid longitude and latitude at the next time step for each time.
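The following PyTorch sketch illustrates this training setup. The random stand-in records, the network size, the learning rate, and the number of epochs are assumptions for illustration and not the exact configuration used in this study.

```python
import torch
import torch.nn as nn

# Stand-in data: 1140 records of 11 (longitude, latitude) centroid positions.
# The real records come from the segmented cloud images described above.
records = torch.rand(1140, 11, 2)
train, test = records[:1040], records[1040:]
x_train, y_train = train[:, :10, :], train[:, 1:11, :]  # times 1-10 -> times 2-11

class CentroidLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # predict (longitude, latitude)

    def forward(self, x):
        out, _ = self.lstm(x)                 # out: (batch, seq, hidden)
        return self.head(out)                 # next-step position per time step

model = CentroidLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):                      # assumed number of epochs
    pred = model(x_train)
    loss = loss_fn(pred, y_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Evaluation: root mean square error of the predicted next position (cf. Figure 14).
with torch.no_grad():
    pred = model(test[:, :10, :])
    rmse = torch.sqrt(((pred[:, -1, :] - test[:, 10, :]) ** 2).mean())
    print(float(rmse))
```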
A locus diagram of the centroid points was drawn to compare the real and predicted centroid longitudes and latitudes of the clouds. Figure 13 shows this comparison for the centroids of different clouds tracked and predicted at two different times; the red dots are the real centroid longitude and latitude, the blue dots are the predicted centroid longitude and latitude, t0 is time 1, t1 is time 2, and so on.
One hundred centroid points were randomly selected and normalized, and the root mean square error of the predictions was calculated, as shown in Figure 14; the blue points in the figure are the root mean square errors of the predicted values. The root mean square error is the square root of the ratio of the sum of squared prediction errors to the number of predictions; a smaller root mean square error implies a more accurate prediction.
After verification with multiple groups of data, the CNN-LSTM network model was able to predict the moving trajectory of the clouds accurately.

4. Conclusions

This study presents a new low-cost and easy-to-implement method for predicting the influence of clouds on solar radiation.
The method can accurately predict the trajectory of a cloud and can be used at solar power stations to predict the location of cloud shadows tens of minutes in advance, thus enabling the solar panels to be adjusted to a suitable angle beforehand. Compared with other implementations, it can reduce equipment costs and increase the energy generated by the solar panels. This research contributes to related work on solar radiation and energy generation.
The proposed method also has some limitations. The field of view of the sky cameras is limited; thus, the calculation and prediction of clouds in the sky are affected to some extent and are possible only within a limited range. Considering this limitation, in an actual implementation, the cloud prediction range can be increased by deploying sky cameras at multiple points around the photovoltaic power station.

Author Contributions

Conceptualization, S.W.; Methodology, M.S. and Y.S.; Validation, S.W. and M.S.; Formal analysis, S.W.; Investigation, S.W. and M.S.; Supervision, Y.S.; Writing—original draft preparation, S.W.; Writing—review and editing, Y.S. and M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Detollenaere, A.; Wetter, J.V.; Masson, G. Snapshot of Global PV Markets 2020. IEA Photovolt. Power Syst. Programme 2020, 4, 10–11. Available online: https://iea-pvps.org/wp-content/uploads/2020/04/IEA_PVPS_Snapshot_2020.pdf (accessed on 1 September 2022).
2. Behera, M.K.; Majumder, I.; Nayak, N. Solar photovoltaic power forecasting using optimized modified extreme learning machine technique. Eng. Sci. Technol. Int. J. 2018, 21, 428–438.
3. Antonanzas, J.; Osorio, N.; Escobar, R.; Urraca, R.; Martinez-de-Pison, F.J.; Antonanzas-Torres, F. Review of photovoltaic power forecasting. Sol. Energy 2016, 136, 78–111.
4. Perez, R.; Kivalov, S.; Schlemmer, J.; Hemker, K., Jr.; Renné, D.; Hoff, T.E. Validation of short and medium term operational solar radiation forecasts in the US. Sol. Energy 2010, 84, 2161–2172.
5. Sfetsos, A.; Coonick, A.H. Univariate and multivariate forecasting of hourly solar radiation with artificial intelligence techniques. Sol. Energy 2000, 68, 169–178.
6. Barbieri, F.; Rajakaruna, S.; Ghosh, A. Very short-term photovoltaic power forecasting with cloud modeling: A review. Renew. Sustain. Energy Rev. 2017, 75, 242–263.
7. Hu, K.; Cao, S.; Wang, L.; Li, W.; Lv, M. A new ultra-short-term photovoltaic power prediction model based on ground-based cloud images. J. Clean. Prod. 2018, 200, 731–745.
8. Bacher, P.; Madsen, H.; Nielsen, H.A. Online short-term solar power forecasting. Sol. Energy 2009, 83, 1772–1783.
9. Ghimire, S.; Deo, R.C.; Raj, N.; Mi, J. Deep solar radiation forecasting with convolutional neural network and long short-term memory network algorithms. Appl. Energy 2019, 253, 113541.
10. Si, Z.; Yang, M.; Yu, Y.; Ding, T. Photovoltaic power forecast based on satellite images considering effects of solar position. Appl. Energy 2021, 302, 117514.
11. Lin, F.; Zhang, Y.; Wang, J. Recent advances in intra-hour solar forecasting: A review of ground-based sky image methods. Int. J. Forecast. 2021.
12. Wang, G.C.; Urquhart, B.; Kleissl, J. Cloud base height estimates from sky imagery and a network of pyranometers. Sol. Energy 2019, 184, 594–609.
13. Hutchison, K.; Wong, E.; Ou, S.C. Cloud base heights retrieved during night-time conditions with MODIS data. Int. J. Remote Sens. 2006, 27, 2847–2862.
14. Theocharides, S.; Makrides, G.; Livera, A.; Theristis, M.; Kaimakis, P.; Georghiou, G. Day-ahead photovoltaic power production forecasting methodology based on machine learning and statistical post-processing. Appl. Energy 2020, 268, 115023.
15. Li, Z.; Li, J.; Menzel, W.P.; Schmit, T.J.; Ackerman, S.A. Comparison between current and future environmental satellite imagers on cloud classification using MODIS. Remote Sens. Environ. 2007, 108, 311–326.
16. Zhang, X.; Liu, K.; Wang, X.; Yu, C.; Zhang, T. Moving Shadow Removal Algorithm Based on HSV Color Space. TELKOMNIKA Indones. J. Electr. Eng. 2014, 12, 2769–2775.
17. Peng, Z.; Yu, D.; Huang, D.; Heiser, J.; Yoo, S.; Kalb, P. 3D cloud detection and tracking system for solar forecast using multiple sky imagers. Sol. Energy 2015, 118, 496–519.
18. Urquhart, B.; Kurtz, B.; Dahlin, E.; Ghonima, M.; Shields, J.E.; Kleissl, J. Development of a sky imaging system for short-term solar power forecasting. Atmos. Meas. Tech. 2015, 8, 875–890. Available online: https://amt.copernicus.org/articles/8/875/2015/amt-8-875-2015.pdf (accessed on 15 September 2022).
19. Escrig, H.; Batlles, F.; Alonso, J.; Baena, F.; Bosch, J.; Salbidegoitia, I.; Burgaleta, J. Cloud detection, classification and motion estimation using geostationary satellite imagery for cloud cover forecast. Energy 2013, 55, 853–859.
20. Héas, P.; Mémin, É. Three-dimensional motion estimation of atmospheric layers from image sequences. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2385–2396. Available online: https://www.researchgate.net/publication/3205956_Three-Dimensional_Motion_Estimation_of_Atmospheric_Layers_From_Image_Sequences (accessed on 15 September 2022).
21. Chu, Y.; Pedro, H.T.; Coimbra, C.F. Hybrid intra-hour DNI forecasts with sky image processing enhanced by stochastic learning. Sol. Energy 2013, 98, 592–603.
22. Caldas, M.; Alonso-Suárez, R. Very short-term solar irradiance forecast using all-sky imaging and real-time irradiance measurements. Renew. Energy 2019, 143, 1643–1658.
23. Wang, F.; Xuan, Z.; Zhen, Z.; Li, Y.; Li, K.; Zhao, L.; Shafie-khah, M.; Catalão, J.P. A minutely solar irradiance forecasting method based on real-time sky image-irradiance mapping model. Energy Convers. Manag. 2020, 220, 113075.
24. El Alani, O.; Abraim, M.; Ghennioui, H.; Ghennioui, A.; Ikenbi, I.; Dahr, F.-E. Short term solar irradiance forecasting using sky images based on a hybrid CNN-MLP model. Energy Rep. 2021, 7, 888–900.
25. Du, J.; Min, Q.; Zhang, P.; Guo, J.; Yang, J.; Yin, B. Short-Term Solar Irradiance Forecasts Using Sky Images and Radiative Transfer Model. Energies 2018, 11, 1107.
26. Kong, W.; Jia, Y.; Dong, Z.Y.; Meng, K.; Chai, S. Hybrid approaches based on deep whole-sky-image learning to photovoltaic generation forecasting. Appl. Energy 2020, 280, 115875.
27. Voyant, C.; Notton, G.; Kalogirou, S.; Nivet, M.L.; Paoli, C.; Motte, F.; Fouilloy, A. Machine learning methods for solar radiation forecasting: A review. Renew. Energy 2017, 105, 569–582.
28. Gala, Y.; Fernández, Á.; Díaz, J.; Dorronsoro, J.R. Hybrid machine learning forecasting of solar radiation values. Neurocomputing 2016, 176, 48–59.
29. Fernández, Á.; Gala, Y.; Dorronsoro, J.R. Machine learning prediction of large area photovoltaic energy production. In Data Analytics for Renewable Energy Integration; Springer: Cham, Switzerland, 2014; pp. 38–53. Available online: http://link.springer.com/10.1007/978-3-319-13290-7_3 (accessed on 15 September 2022).
30. Mellit, A.; Massi Pavan, A.; Lughi, V. Deep learning neural networks for short-term photovoltaic power forecasting. Renew. Energy 2021, 172, 276–288.
31. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
32. Suh, S.; Kim, J.; Lukowicz, P.; Lee, Y.O. Two-stage generative adversarial networks for binarization of color document images. Pattern Recognit. 2022, 130, 108810.
33. Terrén-Serrano, G.; Martínez-Ramón, M. Multi-layer wind velocity field visualization in infrared images of clouds for solar irradiance forecasting. Appl. Energy 2021, 288, 116656.
34. Paoli, C.; Voyant, C.; Muselli, M.; Nivet, M. Forecasting of preprocessed daily solar radiation time series using neural networks. Sol. Energy 2010, 84, 2146–2160.
35. Cao, J.; Lin, X. Study of hourly and daily solar irradiation forecast using diagonal recurrent wavelet neural networks. Energy Convers. Manag. 2008, 49, 1396–1406.
36. Zhao, X.; Wei, H.; Wang, H.; Zhu, T.; Zhang, K. 3D-CNN-based feature extraction of ground-based cloud images for direct normal irradiance prediction. Sol. Energy 2019, 181, 510–518.
37. Wu, K.; Xu, Z.; Lyu, X.; Ren, P. Cloud detection with boundary nets. ISPRS J. Photogramm. Remote Sens. 2022, 186, 218–231.
38. Wang, Q.; Zhou, C.; Zhuge, X.; Liu, C.; Weng, F.; Wang, M. Retrieval of cloud properties from thermal infrared radiometry using convolutional neural network. Remote Sens. Environ. 2022, 278, 113079.
39. Fang, H.; Lafarge, F. Pyramid scene parsing network in 3D: Improving semantic segmentation of point clouds with multi-scale contextual information. ISPRS J. Photogramm. Remote Sens. 2019, 154, 246–258.
Figure 1. Global photovoltaic installed capacity during 2011–2019.
Figure 2. PSPNet network.
Figure 3. Netscope visualization of the spatial pyramid pooling structure.
Figure 4. Example results of PSPNet.
Figure 5. Internal structure of the hidden layer.
Figure 6. Schematic of binocular stereo vision.
Figure 7. Relative position model of the flagpole and camera.
Figure 8. Demonstration of shadow length.
Figure 9. Demonstration of the shadow orientation.
Figure 10. Pictures taken by the (left) and (right) cameras.
Figure 11. Method of calculating the centroid position.
Figure 12. Cloud position data.
Figure 13. Comparison of the prediction of different cloud centroids at two times and corresponding real longitude and latitude.
Figure 14. Mean square error of 100 randomly selected centroids; the blue points in the figure are the root mean square errors of the predicted values.
Table 1. Comparison of different cloud height observation methods.
 | Manual | Radiosonde | Cloud Meter | LiDAR | Weather Radar
Observation range | Whole sky | Single point | Single point | Single point | Single point
Monitoring range | Maximum visibility | 8–10 km | 10–12 km | 1–12 km | 15 km
Frequency | 0.5–6 h | 6–12 h | Continuous | Continuous | Continuous
Accuracy | 20–30% | 100–200 m | 2% | 2% | 60 m
Sub attribute | Cloud amount and type | Cloud top height | - | Cloud microphysical properties | Cloud top height and cloud microphysical properties
Difficulty | Low | Medium | Medium | High | High
Automation | No | No | Yes | Yes | Yes
Table 2. Experimental results.
Group | P1 | P2 | BL (m) | LS (m) | SA | CLS (m) | CSA | ER
1 | (2633,1825) | (1465,1797) | 5.73 | 31.2 | 94° | 30.49 | 95.2012° | 2.89%
2 | (2733,1833) | (1465,1797) | 6.5 | 31.2 | 94° | 30.59 | 95.2012° | 1.94%
3 | (2913,1849) | (1465,1797) | 7.35 | 31.2 | 94° | 31.61 | 95.2012° | 1.32%
P1 and P2—pixel coordinates of the top of the flag pole in the images taken by the left and right cameras, respectively; BL—length of the baseline, that is, distance between the two cameras (m); LS—measured shadow length of the flagpole (m); SA—azimuth angle of the flagpole shadow; CLS—flagpole shadow length predicted using the proposed method; CSA—flagpole shadow azimuth predicted using the proposed method; ER—error ratio of the predicted shadow length.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

