Article

Hybrid Deep Learning Model for Mean Hourly Irradiance Probabilistic Forecasting

Vateanui Sansine, Pascal Ortega, Daniel Hissel and Franco Ferrucci
1 GEPASUD, Université de Polynésie Française, Campus d’Outumaoro, 98718 Puna’auia, Tahiti, French Polynesia
2 FEMTO-ST/FCLAB, Université de Franche-Comté, CNRS, Rue Thierry Mieg, F-90010 Belfort, France
* Author to whom correspondence should be addressed.
Atmosphere 2023, 14(7), 1192; https://doi.org/10.3390/atmos14071192
Submission received: 14 June 2023 / Revised: 16 July 2023 / Accepted: 17 July 2023 / Published: 24 July 2023
(This article belongs to the Special Issue Problems of Meteorological Measurements and Studies)

Abstract

For grid stability, operation, and planning, solar irradiance forecasting is crucial. In this paper, we provide a method for predicting the Global Horizontal Irradiance (GHI) mean values one hour in advance. Sky images are utilized for training the various forecasting models along with measured meteorological data in order to account for the short-term variability of solar irradiance, which is mostly caused by the presence of clouds in the sky. Additionally, deep learning models like the multilayer perceptron (MLP), convolutional neural networks (CNN), long short-term memory (LSTM), or their hybridized forms are widely used for deterministic solar irradiance forecasting. The implementation of probabilistic solar irradiance forecasting, which is gaining prominence in grid management since it offers information on the likelihood of different outcomes, is another task we carry out using quantile regression. The novelty of this paper lies in the combination of a hybrid deep learning model (CNN-LSTM) with quantile regression for the computation of prediction intervals at different confidence levels. The training of the different machine learning algorithms is performed over a year’s worth of sky images and meteorological data from the years 2019 to 2020. The data were measured at the University of French Polynesia (17.5770° S, 149.6092° W), on the island of Tahiti, which has a tropical climate. Overall, the hybrid model (CNN-LSTM) is the best performing and most accurate in terms of deterministic and probabilistic metrics. In addition, it was found that the CNN, LSTM, and ANN show good results against persistence.

1. Introduction

According to estimates, the amount of electricity produced by solar photovoltaics (PVs) increased by 22% globally in 2021 and surpassed 1000 TWh. As a result, the global electricity market share of solar PV is currently close to 3.6% [1]. Government support programs, technological breakthroughs, and sharp cost reductions are all responsible for this surge [2]. The intermittence, randomness, and volatility of PV power output are caused by uncontrollable factors such as weather, seasonality, and climate [3]. These significant constraints still hinder the large-scale integration of PVs into the power grid and interfere with the reliability and stability of existing grid-connected power systems [4]. Therefore, PV power predictions are essential to ensure the stability, reliability, and cost effectiveness of the system [5]. Usually, PV power prediction can be achieved through the prediction of global horizontal irradiance (GHI) or PV power output, with techniques applied to forecast PV power production being an extension of those used for predicting GHI. Solar irradiance forecasting models can be broadly categorized according to forecast horizon and model type [6]. Forecasting horizons can be categorized as short term [7], medium term [8], and long term [9]. Short-term forecasting (a few minutes to an hour) is used for the real-time optimization of various grid components such as electrolyzers, which are essential for producing low-emission hydrogen. Medium-term forecasting (1 h to 6 h) is used for intraday operation optimization. Long-term forecasting (6 h to days ahead) is used for days-ahead optimization. Furthermore, there are two main groups of solar irradiance forecasting model types, i.e., physical methods and data-driven models [6]. Physical models use numerical weather predictions (NWPs), sky imagery, and satellite imaging to obtain information on the physical state and dynamic motion of solar radiation through the atmosphere [10,11]. Data-driven models, on the other hand, are based on historical data and have the capacity to extract information from the data in order to forecast time series [12]. Finally, data-driven models are further sub-categorized into statistical or machine learning models. Statistical models include the autoregressive integrated moving average (ARIMA) [13], the autoregressive moving average (ARMA) [14], and the coupled autoregressive and dynamic system (CARD) [15]. Support-vector machines (SVMs) [8] and feed-forward neural networks (FFNNs) [16] are examples of machine learning models, as are convolutional neural networks (CNNs) [7] and recurrent neural networks (RNNs) [17]. A wealth of studies demonstrates that machine learning algorithms outperform statistical models for time series forecasting.
All-sky images can be used, coupled with meteorological data, to perform short-term irradiance forecasting. These techniques are better suited for short-term forecasting since they provide real-time information on the cloud cover above the considered site.
Peng et al. [18] employed sky images from numerous total sky imagers (TSIs) to detect clouds more effectively and to compensate for incorrect or over-exposed TSI images. This technique, used for very-short-term forecasting (1 to 15 min in advance), exploits the redundant data from the cameras’ overlapping views to determine the base heights and wind fields of the different cloud layers. Under the majority of cloud situations, the proposed methodology offers accurate 15 min forecasts.
Kuhn et al. [19] used multiple TSIs for temporal and spatial aggregation in order to improve very-short-term solar irradiance forecasting. These aggregation effects are relevant only for large solar plants covering several square kilometers. It is shown that considering aggregation effects significantly reduces forecasting errors; in particular, spatial aggregation lowers the relative root mean square error from 30.9% to 23.5% on a day with variable conditions, for 1 min averages and a 15 min time horizon.
Additionally, image processing and short-term irradiance forecasting can be performed with a CNN. This neural network exploits the 2D structure of images to apply filters and extract significant features. Ref. [20] employed the CNN model SUNSET for short-term PV forecasting. For a 30 kW PV system, the model forecasts the PV power output 15 min in advance with a 1 min resolution. The training was conducted on sky images captured over the course of a year together with the measured PV output power. Along with the SUNSET model, deeper and more complicated CNN models were also used, as well as auto-regression techniques. Mean square error (MSE) and forecast skill (%) were the two metrics used to assess the models’ accuracy. With an MSE of 4.51 kW² and forecast skills of 26.22% and 16.11% for bright and cloudy days, respectively, the SUNSET model performs better than the other models.
For very-short-term forecasting of solar irradiation, ref. [21] employed a hybrid model combining a convolutional neural network and multilayer perceptron (CNN-MLP). Images of the sky and weather data gathered from a ground meteorological station in Morocco are processed using the suggested method. The CNN-MLP produces the best results with an RMSE ranging from 13.05 W/m² to 49 W/m², while the persistence model’s RMSE is between 45.076 W/m² and 114.19 W/m².
Other neural networks can also be used for medium-term forecasting, such as the RNN, which specializes in time-series forecasting. However, the vanishing gradient issue of the classical RNN [22] makes it very difficult to learn long data sequences. This issue is resolved by the LSTM, a form of RNN that enforces a continuous error flow (i.e., one that is neither exploding nor vanishing) across the internal states of the neural network’s cells [23], making the LSTM neural network appropriate for long time-series forecasting. The RNN implemented by [17] can forecast over multiple time horizons at once, including 1 h, 2 h, 3 h, and 4 h ahead. In parallel, four independent RNN models were developed for predictions 1 h, 2 h, 3 h, and 4 h in advance. The models were trained with two years (2010 and 2011) of meteorological data and tested on four different years (2009, 2015, 2016, 2017). With data collected at Bonville, USA, in 2009, the multi-horizon model produced a mean RMSE (across the four time horizons) of 14.6 W/m², while at the same location and time, the four independent models’ mean RMSE was 18.6 W/m². The RMSE achieved with the RNN models is lower than the RMSE of other machine learning methods, such as random forests, support-vector machines, gradient boosting, and FFNNs, described in [24].
The authors of [25] used a hybrid CNN-LSTM model for estimating solar irradiation. The model was trained using GHI values collected over a 12-year period (2006–2018). The observations were made in Alice Springs, Australia, which is notable for having a desert climate and 300 clear days per year. The models’ time horizon can be adjusted from one day to one month in advance. The CNN-LSTM model was compared with other standalone models such as the CNN, LSTM, RNN, and deep neural network (DNN). The mean absolute percentage error (MAPE) was found to be equal to 4.84%, 6.48%, 5.84%, and 6.56% for the CNN-LSTM, LSTM, CNN, and DNN, respectively, over all one-day-ahead simulations. The results show the hybrid CNN-LSTM model to be the best for GHI forecasting.
Kumari and Toshniwal [26] proposed a hybrid deep learning CNN-LSTM model for hourly GHI forecasting. The training was implemented at 23 locations in California, USA. The historical data used for the training of the different forecasting models consisted of meteorological variables such as GHI, temperature, precipitation, and cloud cover. The proposed method showed a forecast skill of about 37–45% over standalone models such as smart persistence, SVM, ANN, LSTM, and CNN. This suggests that the proposed hybrid model is appropriate for short-term GHI forecasting, due to its high accuracy under diverse climatic, seasonal, and sky conditions.
In summary, a CNN-LSTM hybrid model is an interesting and promising deep learning model for deploying point forecasts of solar irradiance. The benefit of this model is that it can simultaneously analyze sky images and meteorological data. In fact, combining the feature extraction of the CNN with the ability of the LSTM to detect long-term dependencies ought to increase the accuracy of the implemented solar irradiance forecasts. In addition, one of the disadvantages of GHI point forecasts (or deterministic forecasts) is that they do not contain sufficient information about the errors that forecasting models may generate, nor about the volatility and randomness of the solar irradiance. Point forecasts are therefore insufficient for optimizing the operation of power systems [27].
The objective of this paper is to implement hourly probabilistic forecasts by combining quantile mapping with a hybrid model (CNN-LSTM). The analysis of hourly residuals allows the computation of prediction intervals with varying levels of confidence. The hybrid model is compared to standalone models, namely the ANN, CNN, and LSTM.
The novelty of this paper resides in the residual modeling implemented with the hybrid CNN-LSTM model to derive hourly GHI probabilistic forecasts. By analyzing the hourly residuals, a procedure also known as quantile mapping [28], probabilistic forecasts are obtained through the computation of prediction intervals with various levels of confidence. In this application, sky images, which are typically utilized for very-short-term forecasting (1 to 15 min), are used for 1 h-ahead GHI forecasting. This paper’s probabilistic forecasts account for the fact that, on an hourly timescale, approaching clouds may not yet be visible to the TSI. Combining probabilistic forecasts with sky images is therefore pertinent for hourly predictions, which are typically made using satellite images [29,30].
In order to produce these forecasts, various forecasting tools are trained using historical meteorological data measured on site and sky images. The GHI forecasting models are intended to control a combined cold and power generation system (an isolated micro-grid for electricity and cold production, such as air conditioning), with multiple energy production and storage sub-systems, all of which are solar-powered. The electrolyzer of the RECIF project (French acronym for micro-grid for electricity and cold cogeneration) must be supplied with sufficient power for at least one hour in order to function correctly and avoid potential misfires, which would reduce the component’s lifetime. Consequently, the necessity and implementation of the mean hourly forecasting models are presented in this paper. This initiative was developed within the framework of a French National Agency for Research (ANR)-funded project and is being carried out at the University of French Polynesia (UPF).
The paper is organized as follows: the data collection and data processing methods are presented in Section 2.1, followed by the GHI forecasting models in Section 2.2. The quantile mapping method for prediction interval computation is detailed in Section 2.3. Section 3 details the different metrics and reference models used for the comparison between the different deep learning algorithms. The results for point and probabilistic forecasts are detailed in Section 4. Section 5 provides the main achievements of the study in a summary form.

2. Materials and Methods

2.1. Data Collection

The various solar irradiance forecasting models were trained using one year’s worth of meteorological data and sky images collected at the UPF in Tahiti. The images of the sky were captured using a total sky imager consisting of a digital camera (Axis 212 PTZ) with a fish-eye lens. The images were collected every 10 s with a resolution of 640 by 480 pixels (Figure 1). The nighttime hours were removed from the historical meteorological data and sky images; therefore, the forecasts are only available from 5 am to 8 pm. In addition, the processed images were down-sampled to 64 × 64 pixels to decrease the training time of the models. The edges of the images were also removed, as they gave no relevant information about the cloud cover. Finally, the 8-bit sky images, containing values ranging from 0 to 255, were normalized by dividing their pixel values by 255 just before training the machine learning models. Hence, the normalized images have pixel values ranging from 0 to 1.
Table 1 outlines the meteorological data used to train the various forecasting tools: GHI (W/m²), temperature T (°C), relative humidity H (%), wind velocity WV (m/s), and wind direction WD (°), with a 1 min time interval. The GHI is measured with a pyranometer supplied by Apogee Instruments. In order to compute the theoretical values of GHI under clear sky conditions, the clear sky model (CLS) is also used as an input by the forecasting models. The clear sky model of [31] was implemented in Python by means of the pvlib package. The Linke turbidity (TL), which is related to the opacity of the atmosphere, is an essential parameter for a precise clear sky model. Table 2 lists the values of TL that have been employed.
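As an illustration, the snippet below sketches how such a clear-sky GHI series can be generated with pvlib for the study site. The timestamps, the timezone string, and the use of the linke_turbidity keyword of get_clearsky are assumptions based on the public pvlib API, not the authors’ exact implementation; the monthly TL values are read from Table 2.

```python
# Minimal sketch (not the authors' code): Ineichen clear-sky GHI for the UPF
# site with pvlib, using the monthly Linke turbidity (TL) values of Table 2.
import pandas as pd
from pvlib.location import Location

site = Location(latitude=-17.5770, longitude=-149.6092,
                tz="Pacific/Tahiti", name="UPF Tahiti")

# 1 min timestamps for one example day between 5 am and 8 pm (illustrative)
times = pd.date_range("2019-11-03 05:00", "2019-11-03 20:00",
                      freq="1min", tz=site.tz)

# Monthly Linke turbidity (January..December), as read from Table 2
tl_by_month = {1: 3.3, 2: 3.6, 3: 3.3, 4: 3.2, 5: 3.2, 6: 3.2,
               7: 2.8, 8: 2.8, 9: 3.2, 10: 2.0, 11: 3.5, 12: 3.0}
tl = pd.Series(times.month, index=times).map(tl_by_month)

# get_clearsky accepts a linke_turbidity keyword for the Ineichen model
clearsky = site.get_clearsky(times, model="ineichen", linke_turbidity=tl)
ghi_cls = clearsky["ghi"]  # theoretical clear-sky GHI in W/m², used as the CLS input
```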
Using the interquartile range method (IQR), outliers in the data were identified and eliminated, along with other anomalous data such as Not a Number (or NaN values) or negative values caused by technical glitches in the sensors. Between October 2019 and September 2020, each measured meteorological variable was normalized between 0 and 1 using the following equation:
X_{\mathrm{norm}} = \frac{X - X_{\min}}{X_{\max} - X_{\min}}
where X_{\min} and X_{\max} represent the minimum and maximum values of the considered variable, respectively, and X and X_{\mathrm{norm}} are the non-normalized and normalized values.
After normalization, the data were divided into training data (70%), validation data (20%), and testing data (10%). This first test data will henceforth be referred to as test data n°1. In addition, nine days with varying cloud cover were removed from the dataset and used as second test data (designated test data n°2) to determine the efficacy of the models with varying cloud cover. These nine days correspond to three clear-sky days, three partly cloudy days, and three cloudy days.
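For concreteness, a minimal Python sketch of this preprocessing chain (IQR outlier removal, min-max normalization, and the chronological 70/20/10 split) is given below. The file name, column names, and time index are placeholders, not the actual dataset layout.

```python
# Sketch of the preprocessing chain described above (assumed file and column names).
import pandas as pd

def remove_outliers_iqr(df: pd.DataFrame, cols, k: float = 1.5) -> pd.DataFrame:
    """Drop rows whose value falls outside [Q1 - k*IQR, Q3 + k*IQR] for any column."""
    mask = pd.Series(True, index=df.index)
    for c in cols:
        q1, q3 = df[c].quantile([0.25, 0.75])
        iqr = q3 - q1
        mask &= df[c].between(q1 - k * iqr, q3 + k * iqr)
    return df[mask]

def min_max_normalize(df: pd.DataFrame, cols) -> pd.DataFrame:
    """Scale each column to [0, 1] as in the normalization equation above."""
    out = df.copy()
    for c in cols:
        out[c] = (df[c] - df[c].min()) / (df[c].max() - df[c].min())
    return out

cols = ["GHI", "T", "H", "WV", "WD", "CLS"]                     # Table 1 variables (names assumed)
df = pd.read_csv("meteo_2019_2020.csv", parse_dates=["time"])   # hypothetical file
df = df.set_index("time").dropna()
df = df[df["GHI"] >= 0]                                         # drop negative sensor glitches
df = remove_outliers_iqr(df, cols)
df = min_max_normalize(df, cols)

# Chronological 70/20/10 split into training, validation, and test data n°1
n = len(df)
i, j = int(0.7 * n), int(0.9 * n)
train, val, test1 = df.iloc[:i], df.iloc[i:j], df.iloc[j:]
```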
The inputs for the implemented models include GHI, humidity, temperature, and wind speed and direction. These variables have moderate correlations with GHI (Figure 2), as quantified using Pearson’s correlation coefficient:
r = \frac{\sum_{i} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i} (x_i - \bar{x})^2}\,\sqrt{\sum_{i} (y_i - \bar{y})^2}}
where x_i and y_i are the i-th values of x and y, and \bar{x} and \bar{y} are the mean values of x and y.
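The correlation matrix of Figure 2 can be reproduced directly from the data frame of the preprocessing sketch above, for example:

```python
# Pearson correlation between GHI and the other measured variables (cf. Figure 2);
# df and cols come from the preprocessing sketch above.
corr = df[cols].corr(method="pearson")
print(corr["GHI"].sort_values(ascending=False))
```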

2.2. GHI Predictions Algorithms

In order to train the various forecasting tools, historical data with a 1 min time-step are utilized. Figure 3 depicts the autocorrelation of the GHI up to a time-shift of 300 min, computed in order to determine how many lagged terms should be provided to the models to yield a prediction. This graph demonstrates that a time-shift of 300 min still yields a Pearson coefficient of r = 0.1, which is considered sufficient for training the neural networks. Therefore, the 300 previous minutes are taken as the standard input window for the various forecasting models. To produce one prediction of the hourly mean value of GHI, the forecasting models require, as input vectors, 300 meteorological measurements at a 1 min time-step together with the corresponding sky images.
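A possible way to reproduce the lag analysis of Figure 3 with pandas is sketched below, using the 1 min GHI series from the preprocessing sketch; the column name is an assumption.

```python
# Sketch: GHI autocorrelation (Pearson r between the series and a lagged copy)
# for time-shifts up to 300 min, as in Figure 3.
import pandas as pd

ghi = df["GHI"]                                               # 1 min GHI series (assumed name)
autocorr = pd.Series({lag: ghi.autocorr(lag=lag) for lag in range(1, 301)})
print(autocorr.loc[300])                                      # about 0.1 according to Figure 3
```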
The initial model implemented was a conventional artificial neural network (ANN) trained using only meteorological data. This neural network consisted of two hidden layers containing 100 neurons each. In addition, every hidden layer was followed by a dropout layer. Dropout layers randomly remove certain neurons with a specified dropout rate (in our case, 10%). During a given forward and backward pass of the training process, the discarded units are not considered, which prevents units from co-adapting excessively. The inputs to this ANN were the 300 previous meteorological measurements with a 1 min time-step. The predicted value was the hourly average irradiance. The second model implemented was a CNN (Figure 4) that simultaneously takes meteorological data and sky images as input for cloud monitoring. This CNN was directly influenced by the SUNSET model implementation of [20]. Our proposed CNN possesses three convolutional-pooling structures (conv-pool), each consisting of two convolutional layers and a max-pooling layer. The first structure has 64 filters in each of its two convolutional layers, the second has 128, and the third has 256. These layers are used to extract features from the images. The output of the convolution-pooling structures was concatenated with the meteorological data and then passed to an ANN with two fully connected layers. Each layer contains 100 neurons and is followed by a dropout layer with a 10% dropout rate. The CNN makes a mean hourly forecast by correlating the 300 previous meteorological measurements with their corresponding sky images.
In addition, an LSTM has been implemented for mean hourly forecasting. Like the ANN, this LSTM only accepts meteorological data as input. In our case, the input is equivalent to 300 min of prior measurements. This LSTM consists of 4 layers, each containing 200 neurons, followed by a dropout layer with a 10% dropout rate.
Finally, a CNN-LSTM neural network (or hybrid model) was implemented (see Figure 5). Indeed, combining the CNN’s feature extraction from sky images with the LSTM’s ability to learn long data sequences enables the construction of a model capable of simultaneously taking sky images and meteorological data as input to make accurate mean hourly forecasts. First, the CNN processes the sky images. Its output is then connected to an LSTM model with two layers of 200 neurons each, followed by a dropout layer with a 10% dropout rate. In addition to this first LSTM, a second LSTM with the same parameters processes the meteorological data. In order to make predictions, the outputs of the two LSTMs are concatenated and fed into a dense layer with a single neuron. The characteristics of the models are summarized in Table 3. The hybrid model was influenced by [33], but with two convolutional layers in each conv-pool structure and only two LSTMs to reduce training time. Another significant difference between our work and [33] is that we implemented probabilistic forecasts with the considered hybrid model via quantile mapping, as explained in the following subsection.
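A minimal Keras sketch of this hybrid architecture is given below. The layer sizes follow the description above and Table 3; the input shapes, optimizer, and exact dropout placement are assumptions rather than the authors’ published configuration.

```python
# Sketch of the CNN-LSTM of Figure 5 (assumed shapes and optimizer).
from tensorflow.keras import layers, models

N_STEPS = 300                  # 1 min history length
IMG_SHAPE = (64, 64, 3)        # down-sampled sky images
N_METEO = 6                    # GHI, T, H, WV, WD, CLS

# Per-frame CNN: three conv-pool structures with 64, 128, and 256 filters
cnn = models.Sequential(name="frame_cnn")
cnn.add(layers.Input(shape=IMG_SHAPE))
for filters in (64, 128, 256):
    cnn.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    cnn.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    cnn.add(layers.MaxPooling2D(2))
cnn.add(layers.Flatten())

# Image branch: CNN features per frame, then a two-layer LSTM with dropout
img_in = layers.Input(shape=(N_STEPS, *IMG_SHAPE), name="sky_images")
x = layers.TimeDistributed(cnn)(img_in)
x = layers.Dropout(0.1)(layers.LSTM(200, return_sequences=True)(x))
x = layers.Dropout(0.1)(layers.LSTM(200)(x))

# Meteorological branch: a second two-layer LSTM with the same parameters
met_in = layers.Input(shape=(N_STEPS, N_METEO), name="meteo")
y = layers.Dropout(0.1)(layers.LSTM(200, return_sequences=True)(met_in))
y = layers.Dropout(0.1)(layers.LSTM(200)(y))

# Fusion: concatenate both branches into a single-neuron dense output
out = layers.Dense(1, name="mean_hourly_ghi")(layers.Concatenate()([x, y]))
model = models.Model([img_in, met_in], out)
model.compile(optimizer="adam", loss="mse")
```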

2.3. Residual Modeling

To enhance the accuracy of hourly microgrid management, probabilistic forecasts based on modeling errors (or residuals) were implemented. A probabilistic forecast assigns probabilities to all possible futures predicted by the model. This is in stark contrast to a deterministic or point forecast, which provides no information regarding the probabilities of the predicted outcome. This can be accomplished through calculating prediction intervals (PIs), which represent a range of values with a specified probability of containing a new measurement. This technique is based on the work of [28].
The model residuals can be calculated using the following equation:
r_i = y_i - \hat{y}_i
where y_i is the measurement, \hat{y}_i is the predicted value, and r_i is the corresponding residual.
We obtained residuals for each hour of the day in order to compute the hourly PIs of the deep learning models. The residual distribution parameters were then estimated for each hour of the day and are presented in Table 4. Figure 6 shows the residual histograms at 10 am and 3 pm. Assuming that the residuals follow a symmetric normal distribution or a symmetric Laplacian distribution, the PIs can be computed at a given confidence level from these hypothetical distributions.
The location parameters μ and scale parameters σ of the hourly normal and Laplacian distributions, defined in Equations (4) and (5), were estimated and are summarized in Table 4 for each distribution:
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}
f(x) = \frac{1}{2\sigma}\, e^{-\frac{|x-\mu|}{\sigma}}
The parameter estimation results indicate that the scale parameters of the Laplacian distribution are all smaller than those of the Gaussian distribution, which is expected given that the Laplacian distribution is typically narrower than the Gaussian distribution. However, comparing scale parameters alone is insufficient to determine which distribution best characterizes the residuals and ultimately provides the most accurate probabilistic predictions. In order to compare the Gaussian and Laplacian predictions, probabilistic metrics are employed and presented in Section 3.
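Fitting the two candidate distributions to the residuals of each hour can be sketched as follows with scipy; the residuals DataFrame and its column names are assumptions made for illustration.

```python
# Sketch: estimate the Gaussian (Equation (4)) and Laplacian (Equation (5))
# parameters of the residuals for each hour of the day, as in Table 4.
import pandas as pd
from scipy import stats

rows = []
for hour, grp in residuals.groupby("hour"):        # residuals: DataFrame with "hour", "r"
    mu_g, sigma_g = stats.norm.fit(grp["r"])        # Gaussian location and scale
    mu_l, b_l = stats.laplace.fit(grp["r"])         # Laplacian location and scale
    rows.append({"hour": hour, "mu_gauss": mu_g, "sigma_gauss": sigma_g,
                 "mu_laplace": mu_l, "b_laplace": b_l})
hourly_params = pd.DataFrame(rows).set_index("hour")
```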

2.4. Quantile Regression

The quantile regression method was used to compute the quantiles of the residuals. The computed quantiles were taken as prediction interval bounds at different confidence levels. To compute the different quantiles for each of the considered distributions, we first need their cumulative distribution function (cdf) F_R(x), where R denotes the residual random variable:
\forall x, \quad F_R(x) = P(R \le x)
The inverse of the cdf is called the percent point function or quantile function Q(q) and is defined below:
\forall q \in [0, 1], \quad Q(q) = F_R^{-1}(q) = \inf\{x : F_R(x) \ge q\}
Q(0.25), Q(0.5), and Q(0.75) are, respectively, the first quartile, the median, and the third quartile, and were calculated according to the specific distribution (Gaussian or Laplacian). To compute the PIs at the 38%, 68%, 95%, and 99% confidence levels, we use the following equations:
\mathrm{PI}(38\%) = \left[\, Q\!\left(0.5 - \tfrac{0.38}{2}\right),\; Q\!\left(0.5 + \tfrac{0.38}{2}\right) \right]
\mathrm{PI}(68\%) = \left[\, Q\!\left(0.5 - \tfrac{0.68}{2}\right),\; Q\!\left(0.5 + \tfrac{0.68}{2}\right) \right]
\mathrm{PI}(95\%) = \left[\, Q\!\left(0.5 - \tfrac{0.95}{2}\right),\; Q\!\left(0.5 + \tfrac{0.95}{2}\right) \right]
\mathrm{PI}(99\%) = \left[\, Q\!\left(0.5 - \tfrac{0.99}{2}\right),\; Q\!\left(0.5 + \tfrac{0.99}{2}\right) \right]
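Using the fitted hourly parameters (the hourly_params table from the fitting sketch in Section 2.3), these intervals reduce to a few lines with scipy’s percent point function; the helper below is illustrative only and its argument names are assumptions.

```python
# Sketch: PI bounds around a point forecast y_hat, obtained by adding the
# residual quantiles Q(0.5 - level/2) and Q(0.5 + level/2) to the forecast.
from scipy import stats

def prediction_interval(y_hat, hour, level, hourly_params, dist="gauss"):
    lo_q, hi_q = 0.5 - level / 2.0, 0.5 + level / 2.0
    if dist == "gauss":
        d = stats.norm(hourly_params.loc[hour, "mu_gauss"],
                       hourly_params.loc[hour, "sigma_gauss"])
    else:
        d = stats.laplace(hourly_params.loc[hour, "mu_laplace"],
                          hourly_params.loc[hour, "b_laplace"])
    return y_hat + d.ppf(lo_q), y_hat + d.ppf(hi_q)

# e.g., a 95% PI around a 600 W/m² forecast at 10 am (illustrative numbers)
low, up = prediction_interval(600.0, 10, 0.95, hourly_params)
```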

3. Metrics and Reference Model

In this section, the deterministic and probabilistic metrics for accuracy assessment of the different forecasting models are presented. The persistence model is also presented. This model was used as a reference for comparison purposes.

3.1. Metrics for Deterministic Forecasting

In order to train and compare the precision of the different models applied to short-term solar forecasting, we adopted four different deterministic metrics called mean square error (MSE), mean absolute error (MAE), root mean square error (RMSE), and determination coefficient (R²):
\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left( y_{\mathrm{measured},i} - y_{\mathrm{predicted},i} \right)^{2}
\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( y_{\mathrm{measured},i} - y_{\mathrm{predicted},i} \right)^{2}}
\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left| y_{\mathrm{measured},i} - y_{\mathrm{predicted},i} \right|
R^{2} = 1 - \frac{\sum_{i=1}^{N} \left( y_{\mathrm{measured},i} - y_{\mathrm{predicted},i} \right)^{2}}{\sum_{i=1}^{N} \left( y_{\mathrm{measured},i} - \bar{y}_{\mathrm{measured}} \right)^{2}}
where N is the number of observations, y_{\mathrm{measured},i} is the i-th measurement, \bar{y}_{\mathrm{measured}} is the mean value of the measurements, and y_{\mathrm{predicted},i} is the i-th predicted value.
RMSE is a quadratic scoring rule that assesses the average error magnitude. This metric has the disadvantage of not distinguishing between numerous minor errors and fewer but larger errors. In addition, since the errors are squared prior to being averaged, this metric gives a disproportionately high weight to large errors. The MAE is a linear score, meaning that all individual differences contribute equally to the mean. The MAE and RMSE can be used in conjunction to diagnose the variance in the forecast errors. The RMSE will always be greater than or equal to the MAE. When the RMSE and MAE are equal, all errors have the same magnitude, whereas the greater the difference between the two metrics, the greater the variance in the individual errors in the sample.
For comparison purposes, the normalized mean absolute percentage (nMAP) metric is also used. The nMAP is defined below:
\mathrm{nMAP} = \frac{\frac{1}{N} \sum_{i=1}^{N} \left| y_{\mathrm{measured},i} - y_{\mathrm{predicted},i} \right|}{\frac{1}{N} \sum_{i=1}^{N} y_{\mathrm{measured},i}} \times 100
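For reference, these deterministic metrics can be computed with a short NumPy helper; the function below is a sketch with illustrative names.

```python
# Sketch: deterministic metrics (MSE, RMSE, MAE, R², nMAP) as defined above.
import numpy as np

def deterministic_metrics(y_measured, y_predicted):
    y_measured = np.asarray(y_measured, dtype=float)
    y_predicted = np.asarray(y_predicted, dtype=float)
    err = y_measured - y_predicted
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_measured - y_measured.mean()) ** 2)
    nmap = 100.0 * mae / np.mean(y_measured)          # normalized MAE in %
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "R2": r2, "nMAP": nmap}
```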

3.2. Metrics for Probabilistic Forecasting

In order to quantify the quality of the probabilistic forecasts, we use three different metrics. The first one is the prediction interval coverage percentage (PICP), the second one is the prediction interval normalized average width (PINAW), and the third is the coverage width-based criterion (CWC).
The PICP indicates the proportion of measured values that lie within the bounds of the prediction interval [28]:
\mathrm{PICP} = \frac{1}{N} \sum_{i=1}^{N} \delta_i
\delta_i = \begin{cases} 1 & \text{if } y_i \in [L_i, U_i] \\ 0 & \text{if } y_i \notin [L_i, U_i] \end{cases}
where L_i is the lower bound and U_i is the upper bound of the prediction interval. From (16) and (17), we can deduce that the higher the value of the PICP, the better the prediction intervals are.
The PINAW quantitatively measures the width of the different PIs [28]:
\mathrm{PINAW} = \frac{1}{N R} \sum_{i=1}^{N} \left( U_i - L_i \right)
where R is a normalizing factor equal to the range of the measured values. As the PINAW quantitatively represents the width of the PIs, a lower value of PINAW indicates narrower, and therefore more informative, prediction intervals. However, a model with low PINAW values can still have poor coverage. This is why we also used the coverage width-based criterion.
The CWC combines the PICP and PINAW to balance interval width and coverage probability [28]:
\mathrm{CWC} = \mathrm{PINAW} \left[ 1 + \gamma(\mathrm{PICP})\, e^{-\rho\,(\mathrm{PICP} - \epsilon)} \right]
\gamma(\mathrm{PICP}) = \begin{cases} 0 & \text{if } \mathrm{PICP} \ge \epsilon \\ 1 & \text{if } \mathrm{PICP} < \epsilon \end{cases}
where \epsilon is the preassigned PICP that is to be satisfied, and \rho is a penalizing term (in our case equal to 0.01). When the preassigned PICP is not satisfied (e.g., a PI with a 95% confidence level containing only 93% of the measurements), the CWC increases exponentially. The CWC is a negatively oriented metric, meaning that lower values signify a better prediction.
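A compact sketch of the three probabilistic metrics is given below; the sign convention inside the exponential is an assumption chosen so that, as described above, the CWC grows when the preassigned coverage is not reached.

```python
# Sketch: PICP, PINAW, and CWC for one set of prediction intervals.
import numpy as np

def probabilistic_metrics(y_true, lower, upper, confidence, rho=0.01):
    y_true = np.asarray(y_true, dtype=float)
    lower, upper = np.asarray(lower, dtype=float), np.asarray(upper, dtype=float)
    covered = (y_true >= lower) & (y_true <= upper)
    picp = covered.mean()
    r = y_true.max() - y_true.min()                 # normalizing factor R
    pinaw = np.mean(upper - lower) / r
    gamma = 1.0 if picp < confidence else 0.0       # penalize under-coverage only
    cwc = pinaw * (1.0 + gamma * np.exp(-rho * (picp - confidence)))
    return {"PICP": picp, "PINAW": pinaw, "CWC": cwc}
```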

3.3. Persistence Forecasting Model

The persistence model was also used as a reference to compare the errors of the implemented models. This model assumes that the solar radiation at time t is equal to the solar radiation measured at time t − T, T being the time horizon of the model. The model is formulated below:
f(t) = f(t - T)
In our case, the forecast parameter is the mean solar radiation averaged over a time window from t to t + T and is equal to the mean solar radiation measured from t − T to t. The persistence model used here is shown in the next equation.
\hat{f}(t,\, t + T) = f(t - T,\, t)
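In code, this baseline amounts to a rolling hourly mean of the measured series; a sketch, assuming a 1 min GHI series with a datetime index, is given below.

```python
# Sketch: mean hourly persistence baseline on a 1 min GHI series (datetime index).
import pandas as pd

def persistence_forecast(ghi: pd.Series, horizon: str = "60min") -> pd.Series:
    """Forecast of the mean GHI over [t, t+T): the measured mean over [t-T, t]."""
    return ghi.rolling(horizon).mean()

# The target it should be compared against is the future hourly mean, e.g.
# target(t) = ghi.rolling("60min").mean().shift(freq="-60min")   (illustrative)
```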

4. Results

4.1. Prediction Results

After training the forecasting models, they were evaluated on test data n°1 and n°2. The results for all implemented models are presented in Table 5 with the deterministic metrics for test data n°1, and in Table 6 for test data n°2. Table 7 presents the results for the probabilistic forecasts.
Table 5 demonstrates that the CNN-LSTM is the most precise model for point forecasts, with MAE = 66.09 W/m², RMSE = 100.58 W/m², and R² = 0.85. The standalone LSTM performs well, with MAE = 71.89 W/m², RMSE = 113.24 W/m², and R² = 0.81. This demonstrates that the addition of convolutional layers improves the accuracy of the predictions. The CNN also shows good results, with R² = 0.83, against R² = 0.80 for the ANN and R² = 0.69 for the persistence model. The RMSE values are higher than the MAE values for all types of days and for both test datasets. This difference signifies a high variance in the errors, which is quantified using the previously computed prediction intervals. Despite the forecast errors, the hybrid model is the most precise for each type of day and for both test datasets, with forecasts following the general trend of the measurements. This can be seen in Figure 7, where the hybrid model is shown to be the most precise. Indeed, the CNN-LSTM points all cluster closely around the bisector (black line), which indicates a high precision of the forecasts.
Table 6 shows that the best model for test data n°2 is the CNN-LSTM, with RMSE = 91.73 W/m², MAE = 60.46 W/m², and R² = 0.87. The hybrid model also shows the best results for each type of day, with an R² equal to 0.98, 0.84, and 0.69 for a clear, overcast, and cloudy day, respectively. The CNN is second best with R² = 0.84, followed by the LSTM and ANN. This demonstrates that the processing of sky images through convolutional layers is effective at increasing the accuracy of short-term solar irradiance forecasting.
For the probabilistic forecasts, the PICP, PINAW, and CWC values were computed for all forecasting models. The best CWC values for 38%, 68%, 95%, and 99% PIs are highlighted in black in Table 7. The best model for probabilistic forecasting is the CNN-LSTM with CWC values equal to 8.54, 17.04, 33.37, and 47.98 for PI(38%), PI(68%), PI(95%), and PI(99%), respectively. The best performance is obtained with a Gaussian distribution for all prediction intervals. The probabilistic forecasts are shown in Figure 8.
In summary, the hybrid model is the best for all testing data (test data n°1 and n°2), for each type of day, and for both point and probabilistic forecasting. The forecasting models were implemented with Python 3.7, using the Keras 2.3.0 and TensorFlow 2.2.0 packages.
A comparison of the results from the implemented hybrid model for deterministic hourly predictions in terms of nMAP is given in Table 8. We can see that when the sun is low (early and late hours), the nMAP values are high, which can be explained by the low luminosity and high saturation of the corresponding sky images. During sunny hours, the errors range from 11% to 36%, which corresponds to high luminosity and intermediate saturation in the sky images, situations that are handled better by the hybrid model. In the best case (Colorado dataset), ref. [33] obtained nMAP values ranging from 15% to 20%. The results obtained in this paper by our hybrid model are comparable to, although slightly higher than, those obtained by Siddiqui’s model. This can be explained by the high variability that characterizes Tahiti’s weather, which is strongly correlated with the orography of the island.

4.2. Perspectives and Future Research

For solar irradiance forecasting, ref. [34] implemented a novel method incorporating LSTM and metaheuristic models. Simultaneously, the metaheuristic chicken swarm optimization (CSO) and grey wolf optimization (GWO) models were used to optimize the LSTM’s parameters. The proposed hybrid model outperformed other reference models and demonstrated consistent performance for forecasting at all scales with enhanced metric results. Using a metaheuristic model (or combining two metaheuristic models) to optimize the CNN-LSTM and other isolated models’ parameters could be a strategy for improving the results presented in this article.
To implement probabilistic forecasts, ref. [27] utilized kernel density estimation (KDE). For the 90% prediction interval, this method produced a PINAW value of 19.55. This appears to be a suitable method for narrowing the prediction intervals computed in this article, where the CNN-LSTM yielded a PINAW value of 33.37 for the 95% prediction interval.
The probabilistic forecasts could also be improved by training a neural network to directly predict the various quantiles of the obtained residuals. An encoder-decoder architecture was utilized by [35] for quantile regression. This method enhances the accuracy of the forecasts while remaining computationally efficient.

5. Conclusions

In this article, we implemented a hybrid deep learning model for mean hourly forecasting, using convolutional layers for feature extraction from images and LSTM layers for meteorological data processing and time-series forecasting. The performance of the hybrid model was compared to that of three other neural networks, specifically the ANN, CNN, and LSTM. The persistence model was used as a reference for each of the implemented models. The novelty of this study resides in the residual modeling implemented with the CNN-LSTM model, which enables us to generate probabilistic forecasts from the fitted residual distributions. Overall, for both test datasets, for each type of day, and for all the deterministic and probabilistic metrics, the hybrid model was found to be the best performing and most accurate. Indeed, the RMSE, MAE, and R² for the CNN-LSTM were equal to 100.58 W/m², 66.09 W/m², and 0.85, respectively, which is better than the other benchmark models. For the probabilistic forecasts, the CNN-LSTM has the best CWC values, with 8.54, 17.04, 33.37, and 47.98, respectively, for PI(38%), PI(68%), PI(95%), and PI(99%).

Author Contributions

Conceptualization, V.S.; methodology, V.S., D.H., P.O. and F.F.; software, V.S. and P.O.; validation, P.O., D.H. and F.F.; formal analysis, V.S., P.O. and D.H.; investigation, V.S.; resources, P.O.; data curation, V.S. and P.O.; writing—original draft preparation, V.S.; writing—review and editing, V.S., P.O., D.H. and F.F.; supervision, P.O., D.H. and F.F.; project administration, P.O., D.H. and F.F.; funding acquisition, P.O., D.H. and F.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the EIPHI Graduate School (contract ANR-17-EURE-0002) and the Region Bourgogne Franche-Comté. We thank the National Agency of Research (ANR-18-CE05-0043) for buying the equipment needed for this investigation. We also thank the FEMTO-ST laboratory and the University of French Polynesia for funding this research.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

ANN	Artificial Neural Network
ARIMA	Autoregressive Integrated Moving Average
ARMA	Autoregressive Moving Average
CARD	Coupled Autoregressive and Dynamic System
CNN	Convolutional Neural Network
DNN	Deep Neural Network
DHI	Diffuse Horizontal Irradiance
DNI	Direct Normal Irradiance
FFNN	Feed-Forward Neural Network
GHI	Global Horizontal Irradiance
LM-BP	Levenberg–Marquardt Backpropagation
LSTM	Long Short-Term Memory
MAE	Mean Absolute Error
Max-pooling layer	Layer that selects the maximum feature in a patch of the extracted feature map
MSE	Mean Square Error
NWP	Numerical Weather Prediction
RMSE	Root Mean Square Error
RNN	Recurrent Neural Network

References

  1. IEA. Solar PV; IEA: Paris, France, 2022; Available online: https://www.iea.org/reports/solar-pv (accessed on 15 January 2023).
  2. Child, M.; Bogdanov, D.; Breyer, C. The role of storage technologies for the transition to a 100% renewable energy system in Europe. In Proceedings of the Energy Procedia, 12th International Renewable Energy Storage Conference, IRES 2018, Düsseldorf, Germany, 13–15 March 2018; Volume 155, pp. 44–60. [Google Scholar] [CrossRef]
  3. Yu, Y.; Cao, J.; Zhu, J. An LSTM Short-Term Solar Irradiance Forecasting Under Complicated Weather Conditions. IEEE Access 2019, 7, 145651–145666. [Google Scholar] [CrossRef]
  4. Ramsami, P.; Oree, V. A hybrid method for forecasting the energy output of photovoltaic systems. Energy Convers. Manag. 2015, 95, 406–413. [Google Scholar] [CrossRef]
  5. Ehara, T. Overcoming PV Grid Issues in the Urban Areas. Switzerland. 2009. Available online: https://www.osti.gov/etdeweb/biblio/22119637 (accessed on 2 January 2022).
  6. Husein, M.; Chung, I.-Y. Day-Ahead Solar Irradiance Forecasting for Microgrids Using a Long Short-Term Memory Recurrent Neural Network: A Deep Learning Approach. Energies 2019, 12, 1856. [Google Scholar] [CrossRef] [Green Version]
  7. Sun, Y.; Szucs, G.; Brandt, A.R. Solar PV output prediction from video streams using convolutional neural networks. Energy Environ. Sci. 2018, 7, 1811–1818. [Google Scholar]
  8. Jang, H.S.; Bae, K.Y.; Park, H.-S.; Sung, D.K. Solar Power Prediction Based on Satellite Images and Support Vector Machine. IEEE Trans. Sustain. Energy 2016, 7, 1255–1263. [Google Scholar] [CrossRef]
  9. Tiwari, S.; Sabzehgar, R.; Rasouli, M. Short Term Solar Irradiance Forecast Using Numerical Weather Prediction (NWP) with Gradient Boost Regression. In Proceedings of the 2018 9th IEEE International Symposium on Power Electronics for Distributed Generation Systems (PEDG), Charlotte, NC, USA, 25–28 June 2018; pp. 1–8. [Google Scholar] [CrossRef]
  10. Kambezidis, H.D.; Psiloglou, B.E.; Karagiannis, D.; Dumka, U.C.; Kaskaoutis, D.G. Meteorological Radiation Model (MRM v6.1): Improvements in diffuse radiation estimates and a new approach for implementation of cloud products. Renew. Sustain. Energy Rev. 2017, 74, 616–637. [Google Scholar] [CrossRef]
  11. Kambezidis, H.D.; Kampezidou, S.I.; Kampezidou, D. Mathematical Determination of the Upper and Lower Limits of the Diffuse Fraction at any Site. Appl. Sci. 2021, 11, 8654. [Google Scholar] [CrossRef]
  12. Akhter, M.N.; Mekhilef, S.; Mukhlis, H.; Shah, N.M. Review on forecasting of photovoltaic power generation based on machine learning and metaheuristic techniques. IET Renew. Power Gener. 2019, 13, 1009–1023. [Google Scholar] [CrossRef] [Green Version]
  13. Reikard, G. Predicting solar radiation at high resolutions: A comparison of time series forecasts. Sol. Energy 2009, 83, 342–349. [Google Scholar] [CrossRef]
  14. Mora-López, L.; Sidrach-de-Cardona, M. Multiplicative ARMA models to generate hourly series of global irradiation. Sol. Energy 1998, 63, 283–291. [Google Scholar] [CrossRef]
  15. Huang, J.; Korolkiewicz, M.; Agrawal, M.; Boland, J. Forecasting solar radiation on an hourly time scale using a Coupled AutoRegressive and Dynamical System (CARDS) model. Sol. Energy 2013, 87, 136–149. [Google Scholar] [CrossRef]
  16. Crisosto, C.; Hofmann, M.; Mubarak, R.; Seckmeyer, G. One-Hour Prediction of the Global Solar Irradiance from All-Sky Images Using Artificial Neural Networks. Energies 2018, 11, 2906. [Google Scholar] [CrossRef] [Green Version]
  17. Mishra, S.; Palanisamy, P. Multi-time-horizon Solar Forecasting Using Recurrent Neural Network. In Proceedings of the 2018 IEEE Energy Conversion Congress and Exposition (ECCE), Portland, OR, USA, 23–27 September 2018; pp. 18–24. [Google Scholar] [CrossRef] [Green Version]
  18. Peng, Z.; Yu, D.; Huang, D.; Heiser, J.; Yoo, S.; Kalb, P. 3D cloud detection and tracking system for solar forecast using multiple sky imagers. Sol. Energy 2015, 118, 496–519. [Google Scholar] [CrossRef]
  19. Kuhn, P.; Nouri, B.; Wilbert, S.; Prahl, C.; Kozonek, N.; Schmidt, T.; Yasser, Z.; Ramirez, L.; Zarzalejo, L.; Meyer, A.; et al. Validation of an all-sky imager-based nowcasting system for industrial PV plants. Prog. Photovolt. Res. Appl. 2018, 26, 608–621. [Google Scholar] [CrossRef]
  20. Sun, Y.; Venugopal, V.; Brandt, A.R. Convolutional Neural Network for Short-term Solar Panel Output Prediction. In Proceedings of the 2018 IEEE 7th World Conference on Photovoltaic Energy Conversion (WCPEC) (A Joint Conference of 45th IEEE PVSC, 28th PVSEC & 34th EU PVSEC), Waikoloa Village, HI, USA, 10–15 June 2018; pp. 2357–2361. [Google Scholar] [CrossRef]
  21. El Alani, O.; Abraim, M.; Ghennioui, H.; Ghennioui, A.; Ikenbi, I.; Dahr, F.-E. Short term solar irradiance forecasting using sky images based on a hybrid CNN–MLP model. Energy Rep. 2021, 7, 888–900. [Google Scholar] [CrossRef]
  22. Kolen, J.; Kremer, S.C. Gradient Flow in Recurrent Nets: The Difficulty of Learning Long-Term Dependencies; Wiley-IEEE Press: Hoboken, NJ, USA, 2001. [Google Scholar] [CrossRef] [Green Version]
  23. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  24. Dobbs, A.; Elgindy, T.; Hodge, B.-M.; Florita, A.; Novacheck, J. Short-Term Solar Forecasting Performance of Popular Machine Learning Algorithms; National Renewable Energy Laboratory: Jefferson County, CO, USA, 2017. [Google Scholar]
  25. Ghimire, S.; Deo, R.C.; Raj, N.; Mi, J. Deep solar radiation forecasting with convolutional neural network and long short-term memory network algorithms. Appl. Energy 2019, 253, 113541. [Google Scholar] [CrossRef]
  26. Kumari, P.; Toshniwal, D. Extreme gradient boosting and deep neural network based ensemble learning approach to forecast hourly solar irradiance. J. Clean. Product. 2021, 279, 123285. [Google Scholar] [CrossRef]
  27. Li, X.; Ma, L.; Chen, P.; Xu, H.; Xing, Q.; Yan, J.; Lu, S.; Fan, H.; Yang, L.; Cheng, Y. Probabilistic solar irradiance forecasting based on XGBoost. Energy Rep. 2022, 8, 1087–1095. [Google Scholar] [CrossRef]
  28. He, H.; Lu, N.; Jie, Y.; Chen, B.; Jiao, R. Probabilistic solar irradiance forecasting via a deep learning-based hybrid approach. IEEJ Trans. Electr. Electron. Eng. 2020, 15, 1604–1612. [Google Scholar] [CrossRef]
  29. Carrière, T.; Amaro e Silva, R.; Zhuang, F.; Saint-Drenan, Y.-M.; Blanc, P. A New Approach for Satellite-Based Probabilistic Solar Forecasting with Cloud Motion Vectors. Energies 2021, 14, 4951. [Google Scholar] [CrossRef]
  30. Nielsen, A.H.; Iosifidis, A.; Karstoft, H. IrradianceNet: Spatiotemporal deep learning model for satellite-derived solar irradiance short-term forecasting. Sol. Energy 2021, 228, 659–669. [Google Scholar] [CrossRef]
  31. Ineichen, P.; Perez, R. A new airmass independent formulation for the Linke turbidity coefficient. Sol. Energy 2002, 73, 151–157. [Google Scholar] [CrossRef] [Green Version]
  32. Vii, T. Etude de l’Eclairement et du Potentiel Photovoltaïque à l’Université de la Polynésie Française: Etude Météorologique et Modélisation de Système Photovoltaïque. University of French Polynesia: Puna’auia, French Polynesia, 2020. [Google Scholar]
  33. Siddiqui, T.A.; Bharadwaj, S.; Kalyanaraman, S. A Deep Learning Approach to Solar-Irradiance Forecasting in Sky-Videos. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019; pp. 2166–2174. [Google Scholar] [CrossRef] [Green Version]
  34. Jayalakshmi, N.Y.; Shankar, R.; Subramaniam, U.; Baranilingesan, I.; Karthick, A.; Stalin, B.; Rahim, R.; Ghosh, A. Novel Multi-Time Scale Deep Learning Algorithm for Solar Irradiance Forecasting. Energies 2021, 14, 2404. [Google Scholar] [CrossRef]
  35. Dumas, J.; Cointe, C.; Fettweis, X.; Cornélusse, B. Deep learning-based multi-output quantile forecasting of PV generation. In Proceedings of the 2021 IEEE Madrid PowerTech, Madrid, Spain, 28 June–2 July 2021; pp. 1–6. [Google Scholar] [CrossRef]
Figure 1. Total sky imager with a digital camera Axis 212 PTZ (left). Sky image taken at the University on 3 November 2019 (right).
Figure 2. Pearson correlation test for the measured meteorological data.
Figure 3. GHI autocorrelation using the Pearson correlation, for a time-shift ranging from 0 to 300 min. Red crosses represent the Pearson coefficients.
Figure 4. Implemented CNN, with inputs being image sequences and meteorological data. The output is the predicted GHI for the next hour.
Figure 5. Implemented CNN-LSTM. The sky images are first processed by a CNN, then by an LSTM. The meteorological data are only processed by the LSTM. The two LSTMs are concatenated into a dense layer for GHI predictions.
Figure 6. Distribution of residuals at 10 am (left) and 3 pm (right).
Figure 7. GHI measurements versus predictions for data test n°1.
Figure 8. Probabilistic predictions for the hybrid model for PI(38%), PI(68%), PI(95%), and PI(99%).
Table 1. Descriptive statistics including the mean, standard deviation (std), minimum/maximum values, and the quantiles for each meteorological variable.

        GHI (W/m²)  Temperature (°C)  Rel Humidity (%)  Wind Speed (m/s)  Wind Direction (°)  CLS (W/m²)
Count   243,011     243,011           243,011           243,011           243,011             243,011
Mean    326.0       27.2              73.6              1.9               146.2               487.5
Std     325.24      2.2               9.2               1.5               107.5               395.0
Min     0.0         20.2              47.5              0.0               0.0                 0.0
25%     19.0        25.6              67.0              0.9               45.7                34.7
50%     235         27.5              72.9              1.7               116.7               497.4
75%     546         28.9              80.1              2.5               239.0               839.3
Max     1531        33.4              99.2              12.4              360.0               1167
Table 2. Linke turbidity values for each month [32].

Month  January  February  March  April  May  June  July  August  September  October  November  December
TL     3.3      3.6       3.3    3.2    3.2  3.2   2.8   2.8     3.2        2        3.5       3
Table 3. Models’ characteristics.

Model     Training Time (min)  Learning Rate          Number of Neurons in Hidden Layers  Number of Neurons in Convolutional Layers  Number of Trainable Parameters
ANN       12                   1 × 10−3 to 1 × 10−10  2 × 100                             0                                          10,901
CNN       340                  1 × 10−3 to 1 × 10−10  2 × 100                             896                                        4,189,633
LSTM      45                   1 × 10−3 to 1 × 10−10  4 × 200                             0                                          1,128,201
CNN-LSTM  1600                 1 × 10−3 to 1 × 10−10  4 × 200                             896                                        7,013,233
Table 4. Mean values and variances for each hour of the day for the hybrid model.

Hour  Number of Samples  Gaussian μ (W/m²)  Gaussian σ (W/m²)  Laplacian μ (W/m²)  Laplacian σ (W/m²)
6     33                 −1.57              9.93               0.63                8.3
7     34                 47.66              49.85              3.66                40.11
8     29                 82.06              107.82             6.33                78.27
9     33                 16.14              129.20             1.73                86.73
10    39                 29.11              126.69             14.09               88.07
11    25                 29.48              164.88             20.71               114.58
12    30                 77.65              111.80             71.86               94.37
13    27                 6.22               121.34             −10.45              84.16
14    26                 −5.40              136.66             −5.26               99.63
15    32                 9.70               106.87             8.08                86.55
16    31                 −20.87             88.53              −13.11              57.99
17    28                 −16.22             39.71              −19.47              29.22
18    27                 −40.0              30.64              −43.94              18.22
Table 5. Deterministic metrics for all implemented models on test data n°1. Bold numbers represent the best performance.

Model        MAE (W/m²)  RMSE (W/m²)  R²
ANN          85.89       118.04       0.80
CNN          74.53       109.47       0.83
LSTM         71.89       113.24       0.81
CNN-LSTM     66.09       100.58       0.85
Persistence  132.10      165.39       0.69
Table 6. Deterministic results of the implemented models for test data n°2. Bold numbers represent the best performance.

             RMSE (W/m²)                        MAE (W/m²)                         R²
Model        Sunny   Overcast  Cloudy  All      Sunny   Overcast  Cloudy  All      Sunny  Overcast  Cloudy  All
ANN          73.53   115.80    135.65  121.55   59.97   80.08     92.28   82.59    0.95   0.42      0.55    0.77
CNN          69.61   83.04     118.50  100.89   54.58   67.85     96.43   70.36    0.95   0.70      0.66    0.84
LSTM         69.81   77.11     139.71  105.73   50.61   57.07     116.01  71.96    0.95   0.74      0.53    0.83
CNN-LSTM     45.09   61.09     113.44  91.73    35.82   52.05     91.69   60.46    0.98   0.84      0.69    0.87
Persistence  131.02  115.31    159.49  165.39   109.73  81.75     134.84  132.10   0.83   0.43      0.39    0.69
Table 7. Results of the probabilistic forecasts for all implemented models for data test n°1. Highlighted in black represents the best performance.

                                   Gaussian Distribution                Laplacian Distribution
Model         Prediction Interval  PICP (%)  PINAW (%)  CWC (%)         PICP (%)  PINAW (%)  CWC (%)
Hybrid model  38%                  39.69     8.54       8.54            43.81     8.8        8.81
              68%                  73.97     17.04      17.04           72.42     17.56      17.5
              95%                  95.10     33.37      33.37           91.75     33.80      67.60
              99%                  99.23     47.97      47.98           97.68     48.07      96.15
ANN           38%                  40.46     10.17      10.17           31.19     7.31       14.64
              68%                  70.62     20.30      20.30           65.98     17.52      35.06
              95%                  95.10     38.92      38.93           96.39     43.59      43.46
              99%                  99.74     55.08      55.08           99.74     62.34      62.34
CNN           38%                  43.56     8.72       8.72            36.60     5.90       11.80
              68%                  77.06     17.9       17.9            65.98     14.06      28.12
              95%                  94.33     33.41      66.82           95.10     35.88      35.88
              99%                  99.23     48.08      48.08           99.48     52.14      52.14
LSTM          38%                  44.85     9.73       9.73            37.63     6.34       12.69
              68%                  74.74     19.21      19.21           67.27     15.02      30.05
              95%                  93.56     36.38      72.76           94.59     37.86      75.73
              99%                  96.91     51.71      103.43          99.48     54.91      54.91
Table 8. Hourly nMAP values for the implemented hybrid model for data test n°1.

Hour      4 h   5 h   6 h  7 h  8 h  9 h  10 h  11 h  12 h  13 h  14 h  15 h  16 h  17 h  18 h  19 h
nMAP (%)  145   790   93   136  34   21   17    19    17    17    25    24    20    33    213   1973
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
