Article

Distributed Regional Photovoltaic Power Prediction Based on Stack Integration Algorithm

1 School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
2 Mobile Health Management System Engineering Research Center of the Ministry of Education, Hangzhou 311121, China
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(16), 2561; https://doi.org/10.3390/math12162561
Submission received: 22 July 2024 / Revised: 13 August 2024 / Accepted: 18 August 2024 / Published: 19 August 2024

Abstract
With the continuous increase in the proportion of distributed photovoltaic power stations, the demand for grid connection of photovoltaic power is becoming increasingly urgent, and the requirements for the accuracy of regional distributed photovoltaic power forecasting are also increasing. A distributed regional photovoltaic power prediction model based on a stacked ensemble algorithm is proposed here. This model first uses a graph attention network (GAT) to learn the structural features and relationships between sub-area photovoltaic power stations, dynamically calculating the attention weights of the photovoltaic power stations to capture the global relationships and importance between stations, and selects representative stations for each sub-area. Subsequently, the CNN-LSTM-multi-head attention parallel multi-channel (CNN-LSTM-MHA (PC)) model is used as the basic model to predict the power of the representative stations in each sub-area by integrating the advantages of both the CNN and LSTM models. The predicted results are then used as new features for the input data of the meta-model, which finally predicts the photovoltaic power of the large area. In comparative experiments across different seasons and time scales, this distributed regional approach reduced the MAE metric by a total of 22.85 kW in spring, 17 kW in summer, 30.26 kW in autumn, and 50.62 kW in winter compared with other models.
MSC:
68T01

1. Introduction

It is projected that the global population will reach 9.6 billion by 2050, which will further drive the demand for energy worldwide and lead to a significant increase in electricity production around the world. According to statistics from the National Energy Administration of China, the cumulative installed capacity of grid-connected wind and solar power in China surpassed 760 million kilowatts by the end of 2022, continuously breaking the milestones of 800 million kilowatts, 900 million kilowatts, and 1 billion kilowatts and reaching 1.05 billion kilowatts by the end of 2023. This accounted for 36% of the total installed capacity, representing a 6.4 percent increase compared with the previous year. The installed capacity of grid-connected solar power increased from 390 million kilowatts at the end of 2022 to 610 million kilowatts at the end of 2023. In recent years, distributed photovoltaic power generation has entered a phase of rapid development within the industry. Data show that in 2021, the newly installed capacity of distributed photovoltaic power exceeded centralized photovoltaic power for the first time, with an addition of 29.28 million kilowatts, accounting for approximately 55% of the total newly added photovoltaic power generation capacity. In 2022, distributed photovoltaic power development became the main method for wind and solar power development, with a newly installed capacity of 51.11 million kilowatts, accounting for over 58% of the newly added photovoltaic power generation capacity that year. By the end of September 2023, the cumulative installed capacity of distributed photovoltaic power for households in China exceeded 100 million kilowatts, reaching 105 million kilowatts. With the continuous increase in the proportion of distributed photovoltaic installations, the presence of random fluctuations and non-stationarity caused by factors such as complex weather is becoming more prominent, posing a fundamental threat to the security of the power grid [1]. It also hinders the real-time data collection, perception, and processing capabilities necessary to achieve the goal of observing, measuring, adjusting, and controlling massive resources, as well as enhancing the coordinated interaction capabilities among power sources, energy storage, loads, and the grid. According to statistics from the National Energy Administration of China, solar power generation reached 325.9 billion kilowatt-hours in 2023, with year-on-year growth of 25.1% and a solar power utilization rate of 98.0%. However, there was estimated waste of approximately 6.52 billion kilowatt-hours of solar energy [2]. Therefore, improving the accuracy of regional photovoltaic power forecasting is of the utmost importance.
Currently, there are primarily three methods for photovoltaic power forecasting: physical methods, statistical methods, and artificial intelligence methods. Physical methods involve complex modeling processes which require the integration of meteorological and engineering expertise [3], such as astronomical models, meteorological models, and radiation models. While physical methods theoretically provide a deep understanding of photovoltaic power and high accuracy, their complexity, data requirements, and limitations may pose challenges in practical applications. Statistical methods include fuzzy theory [4], Markov chains [5], and regression analysis [6,7]. Since photovoltaic power is influenced by multiple input variables such as weather conditions, light intensity, and temperature, there are complex interactions and nonlinear relationships among these factors, which statistical methods often struggle to accurately capture and model. With the rapid development of artificial intelligence, deep learning methods have been widely applied in the field of photovoltaic power prediction. Ahn et al. [8] used an LSTM network based on an RNN for short-term photovoltaic power forecasting, addressing issues such as gradient vanishing and exploding, which exist in traditional RNNs. However, as the network becomes deeper, the performance of LSTM may decline. Agga et al. utilized convolutional neural networks to capture spatial features and local details in the data. However, due to the limited perception capability of the convolutional kernel for global information, even with larger convolutional kernels, their capture range was limited, resulting in suboptimal peak prediction. To address the performance decline of LSTM with increasing network depth and the weak global information perception capability of convolutional networks, the CNN-LSTM model was proposed [9]. This hybrid model made it possible to address the limitations of single models and spurred the development of numerous hybrid methods. For instance, Qi et al. [10] employed a CNN-LSTM model to forecast short-term loads in integrated energy systems, demonstrating that this model outperforms the CNN and LSTM models in terms of prediction accuracy. Niu et al. [11] utilized attention mechanisms to optimize a CNN-BiGRU model for short-term multi-energy load forecasting, resulting in an average MAPE improvement of 66.09% compared with the single LSTM model. Gao et al. [12] used a CNN-BiLSTM model to predict the remaining lifespans of lithium-ion batteries in electric vehicles, showcasing through comparisons with other classical models that hybrid models possess higher generalization capabilities and prediction accuracy. The above methods have been proven to improve the accuracy of individual power predictions in power plants. However, the optimization problem of deep learning models is often non-convex, meaning that there are multiple local optima and saddle points. As the number of hybrid models increases, the number of local optima and saddle points also increases, making it easier for a model to become trapped in them and thus fail to achieve the best prediction accuracy. To address these issues, researchers have proposed using optimization algorithms to find the optimal solutions in models, such as LCASO-BP [13], GA-LSTM [14], ED-LSTM [15], and PSO-VMDFE-WHO-CNN [16].
These methods have demonstrated the feasibility of using swarm intelligence optimization algorithms to find the optimal hyperparameters in models, thereby improving the accuracy of individual power plant predictions. However, most current photovoltaic power forecasting focuses on individual power plants, and there is limited research on forecasting regional photovoltaic power plants.
There are three main methods for regional photovoltaic power forecasting: cumulative methods, extrapolation methods, and statistical scale methods [17]. (1) In cumulative methods, the power of every individual photovoltaic power station in the large area is predicted and then summed directly to obtain the regional prediction. This method is simple to implement, but its drawback is equally obvious: the number of photovoltaic power stations in a large area is extremely large, so a large forecasting error at any one station accumulates during the summation. In addition, the stations are distributed across different areas and therefore have different characteristics, so features must be built for each station individually, which requires a great deal of work. (2) In extrapolation methods, the region’s distributed photovoltaic power plants are divided into several sub-regions. Representative power values are selected to predict the output of each sub-region, and the output values of the sub-regions are then summed to obtain the output value of the entire region. (3) Statistical scale methods involve dividing a large region into multiple sub-regions. Within each sub-region, representative power plants are selected, and their power outputs are individually predicted. Finally, using mathematical statistics, the weight coefficients of each representative power plant are calculated based on the proportion of the baseline power plant’s generation in each sub-region’s photovoltaic power plant group. The regional photovoltaic power forecast is then obtained through weighted aggregation.
The aforementioned methods for sub-region division include nearest neighbor propagation clustering [18], grid-partitioned sampling [19], k-means clustering [2], and hierarchical clustering [20]. After sub-region division, representative photovoltaic power plants are selected from each sub-region. Li et al. [2] calculated correlation coefficients (such as the Pearson correlation coefficient and Spearman’s rank correlation coefficient) between the power generation of each photovoltaic power plant and the total power generation of the sub-region. The photovoltaic power plant with the highest correlation coefficient was defined as the baseline power plant. Although the aforementioned methods are based on the statistical scale method, they often only consider the correlation between the representative power plant and the total regional power, neglecting the spatial correlation between the representative power plant and the other power plants within the sub-region. This limitation restricts the utilization of data from other power plants and limits the improvement in prediction accuracy.
For example, within a sub-region, each power plant may have different capacities and installation scales, or the geographic locations and environmental conditions of the power plants may result in uneven resource distribution. In such cases, power plants with larger capacities or those situated in resource-rich areas would have a significant advantage in correlation calculations. However, representative power plants selected using this approach may not effectively represent the entire sub-region. This is because this method overlooks the spatial correlation between the representative power plant and other power plants within the sub-region. Each power plant may have different locations and relationships with other plants in space, and these relationships play an important role in the power characteristics of the entire sub-region. Methods which solely consider the overall regional power correlation may fail to capture this spatial correlation, leading to representative power plants which do not fully reflect the characteristics of the entire sub-region.
To address this issue, Simeunović et al. [21] proposed the application of graph convolutional networks (GCNs) for multi-site photovoltaic power prediction and demonstrated its effectiveness. Zhang et al. [22] constructed a GCN-LSTM prediction model to improve the accuracy of ultra-short-term photovoltaic power forecasting. However, the aforementioned methods have certain limitations in information aggregation when using GCNs. A GCN employs a fixed aggregation approach by linearly combining the features of the input nodes and their neighboring nodes. Furthermore, a GCN only focuses on the immediate neighbors of a node and ignores distant nodes. This restriction can limit the node representation to the information within its local neighborhood, failing to fully utilize the global information of the entire graph. To overcome this limitation, Velickovic et al. [23] introduced graph attention networks (GATs) and incorporated an attention mechanism to model the relationships between nodes. In contrast to GCNs, GATs dynamically aggregate information from neighboring nodes by learning the weights between each node and its neighbors. This adaptive aggregation allows each node to determine the extent of interaction with its neighbors, capturing the complex relationships between nodes. Additionally, GATs support multi-head attention mechanisms, where multiple attention heads are used in each layer for feature aggregation. Each attention head can learn different node relationships, providing richer graph structure information. By utilizing multiple attention heads, a GAT can simultaneously consider multiple node relationships, enhancing the model’s expressive power and generalization ability.
In summary, the existing methods for regional photovoltaic (PV) power prediction have the following limitations:
  • Compared with the summation method and extrapolation method, statistical scaling methods are more suitable for regional PV power prediction. However, due to various factors such as different environmental conditions affecting different PV power plants, selecting a reference PV power plant may not fully represent the characteristics and variations of all PV power plants within the sub-region.
  • The operation and maintenance of PV power plants can lead to changes in their power generation capabilities. The proportion of power generated by the reference PV power plant may vary over time, which can affect the estimation of the reference PV power plant’s proportion.
  • Extrapolating the entire region’s PV power based on proportions alone does not take into account the spatiotemporal characteristics and synergistic effects among sub-regions, resulting in less accurate predictions.
To address these existing issues, we propose a distributed regional photovoltaic (PV) power prediction method based on the stacked ensemble algorithm [24]. The main contributions are as follows:
  • In the process of selecting representative power plants for each sub-region, we utilize a graph attention network (GAT). The GAT models all the power plants within each sub-region by leveraging its characteristics, representing the connections and interactions among the power plants using a graph structure. By learning the weights and attention distributions between power plants, we integrate and fuse the features and relationships among different sub-regions, ultimately selecting the most representative power plants.
  • We employ a CNN-LSTM-multi-head attention parallel multi-channel model as the base model for the stacked ensemble algorithm. This approach fully utilizes the strengths of the CNN and LSTM models, enhancing the model’s feature representation capabilities and sequence data modeling abilities. Additionally, we incorporate a multi-head attention mechanism for comprehensive feature weighting and fusion, considering the different feature representations and sequence modeling capabilities. This results in more comprehensive and accurate model outputs.
  • By using the outputs of the base model as inputs to the meta-model, we incorporate the features and spatiotemporal characteristics of each sub-region into the consideration of the meta-model. The base model can perform individual feature extraction and modeling for each sub-region, capturing the spatiotemporal characteristics within the sub-region. The meta-model then integrates and consolidates the outputs of different sub-regions, better reflecting the characteristics of the entire region and predicting PV power generation.
The organizational structure of this paper is as follows. Section 1 describes the existing methods and problems in the field of regional photovoltaic power generation forecasting and puts forward solutions to these problems. Section 2 introduces the framework of regional forecasting in this paper. Section 3 explains the selection of representative power stations. Section 4 introduces the stack integration algorithm and the basic model and meta-model selected in the algorithm. In Section 5, the feasibility of the proposed method is proven by comparing the model with other excellent models. Section 6 gives a final summary of this paper.

2. Regional Photovoltaic Power Generation Forecasting Framework

This article proposes a multi-channel distributed regional photovoltaic (PV) power prediction method based on the stacked ensemble algorithm. This method integrates the different characteristics of all power plants within each sub-region and the spatiotemporal features among different sub-regions, enabling the prediction of regional PV power generation under different seasons and time scales. The overall flowchart of the method is shown in Figure 1.
The general flow chart is as follows:
(1)
Preprocess the raw data of each power station, including removing outliers, filling missing values, and analyzing feature correlations. After data processing, divide the data into four seasons: spring, summer, autumn, and winter. Normalize the data after division and split them into training, testing, and validation sets at an 8:1:1 ratio (a minimal sketch of this step is given after this list).
(2)
Divide the larger area into three sub-regions: A, B, and C. After the processing in step 1, input the data of all power stations in region A into a GAT. Select the most representative power station for region A based on the results from the GAT. Apply the same method to select representative power stations for regions B and C. If a sub-region contains only one power station, then that station becomes the representative station for that region.
(3)
Input the data of representative power stations from regions A, B, and C for different seasons and time scales into the base model (CNN-LSTM-MHA (PC)). Utilize this model to predict the power generation of regions A, B, and C and obtain the results.
(4)
Use the output from step 3 as new features, and add them to the data of the larger area. Input the new data into a meta-model (GRU) to predict the power generation of the larger area.
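As a concrete illustration of step (1), the sketch below performs a chronological 8:1:1 split and min-max normalization fitted on the training portion only. The column names and the synthetic data are assumptions for illustration and are not the authors' code.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def split_and_normalize(df, feature_cols, target_col):
    """Chronological 8:1:1 split into training/testing/validation sets,
    followed by min-max normalization fitted on the training portion only."""
    n = len(df)
    n_train, n_test = int(0.8 * n), int(0.1 * n)
    train = df.iloc[:n_train]
    test = df.iloc[n_train:n_train + n_test]
    val = df.iloc[n_train + n_test:]

    x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
    x_scaler.fit(train[feature_cols])          # fit on training data only
    y_scaler.fit(train[[target_col]])

    def transform(part):
        return x_scaler.transform(part[feature_cols]), y_scaler.transform(part[[target_col]])

    return transform(train), transform(test), transform(val)

# Toy usage on synthetic 5 min data (hypothetical column names)
idx = pd.date_range("2020-01-01", periods=2880, freq="5min")
df = pd.DataFrame({"Temperature": np.random.rand(2880),
                   "Global_Horizontal_Radiation": np.random.rand(2880),
                   "Power": np.random.rand(2880)}, index=idx)
train_xy, test_xy, val_xy = split_and_normalize(
    df, ["Temperature", "Global_Horizontal_Radiation"], "Power")
```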

3. Selecting Sub-Regions for Power Stations

When selecting representative power stations for the sub-regions, the power stations are first grouped by region into three sub-regions (A, B, and C) according to similar weather conditions, as shown in Figure 2.
Because power stations in the same area are often affected by similar climatic and environmental conditions, this partitioning helps reduce the complexity of the data and the impact of noise. After the zones are divided, representative power stations are selected from the many power stations in each sub-zone. Within a sub-region, there is temporal and spatial correlation between power stations; that is, the status of a power station may be affected by the surrounding power stations. Most importantly, when selecting representative power stations, it is necessary to consider not only the relationship between each power station and its neighboring power stations but also the relationships among all power stations in the whole region.

Graph Attention Networks

This article utilizes a graph attention network (GAT) to select representative power plants within each sub-region. The GAT is a graph neural network model designed to capture both the spatial and temporal information among related power plants within the sub-region by introducing attention mechanisms between nodes. Unlike a graph convolutional network (GCN), which performs a simple average aggregation of neighboring nodes, the GAT calculates the attention weights for each node with its neighbors and aggregates them accordingly. This allows the GAT to adaptively model the contributions of different nodes to their neighbors, enabling more flexible capturing of the local structure within the graph. Since each power plant may have varying importance and contributions, this article incorporates a multi-head attention mechanism in the GAT. This mechanism learns different attention weights from multiple perspectives in parallel, based on the relationships and features of the nodes. Figure 3 illustrates this process.
From the diagram, it is evident that each power plant in the sub-region is depicted as a node (P). The various colored arrows represent distinct attention sets. By continuously learning and incorporating the spatiotemporal features of all other power plants within the region, the attention sets are merged and averaged to yield the final output for each power plant. This method enables the model to consider the impact of each power plant from a global perspective, better capturing the interactions among power plants within the region and improving prediction accuracy.
Through iterative learning and prediction, the power plant with the most accurate predictions is chosen as the representative for the sub-region. This ensures that the selected representative power plant effectively represents the photovoltaic power generation within the entire sub-region, providing a reliable foundation for subsequent analysis and decision making. In cases where the sub-region contains only one power plant, it automatically becomes the representative for the sub-region, as shown in Figure 4.
In Figure 4, the GAT finally selects P7 as the representative power station of this sub-region. The calculation takes the features of a group of nodes as input, $h = \{h_1, h_2, \ldots, h_s\}$, $h_i \in \mathbb{R}^F$, where $F$ is the dimension of the features of each node and $i = 1, 2, \ldots, s$, with $s$ being the number of nodes. The attention between nodes $i$ and $j$ is computed as follows:
$$e_{ij} = a\left(W h_i, W h_j\right) \tag{1}$$
$$\alpha_{ij} = \mathrm{softmax}_j\left(e_{ij}\right) = \frac{\exp\left(e_{ij}\right)}{\sum_{k \in N_i} \exp\left(e_{ik}\right)} \tag{2}$$
$$\alpha_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}\left(a^{T}\left[W h_i \,\|\, W h_j\right]\right)\right)}{\sum_{k \in N_i} \exp\left(\mathrm{LeakyReLU}\left(a^{T}\left[W h_i \,\|\, W h_k\right]\right)\right)} \tag{3}$$
$$h_i' = \sigma\left(\frac{1}{K} \sum_{k=1}^{K} \sum_{j \in N_i} \alpha_{ij}^{k} W^{k} h_j\right) \tag{4}$$
In Equation (1), $W$ is a shared weight matrix and $a$ is a single-layer feedforward neural network. The transformed features $W h_i$ and $W h_j$ of nodes $i$ and $j$ are concatenated, and multiplying the concatenated vector by $a$ maps the high-dimensional features to a real number, namely the attention coefficient $e_{ij}$. In Equations (2) and (3), $N_i$ is the neighborhood of node $i$, and the coefficients $e_{ij}$ are normalized with a softmax after applying the LeakyReLU activation function. Finally, combined with the multi-head attention mechanism with $K$ heads, as shown in Equation (4), the new node features $h' = \{h_1', h_2', \ldots, h_s'\}$, $h_i' \in \mathbb{R}^{F'}$ are generated.
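The attention computation in Equations (1)–(4) can be sketched in a few lines of NumPy for a single GAT layer; the weights below are random stand-ins for learned parameters, and a fully connected sub-region graph is assumed.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gat_layer(h, adj, W_heads, a_heads):
    """One GAT layer following Eqs. (1)-(4).
    h: (s, F) node features; adj: (s, s) adjacency with self-loops;
    W_heads[k]: (F, Fp) weight matrix of head k; a_heads[k]: (2*Fp,) attention vector."""
    s, K = h.shape[0], len(W_heads)
    Fp = W_heads[0].shape[1]
    agg = np.zeros((s, Fp))
    for W, a in zip(W_heads, a_heads):
        Wh = h @ W                                            # (s, Fp)
        # e_ij = LeakyReLU(a^T [W h_i || W h_j])              (Eqs. 1 and 3)
        e = leaky_relu((Wh @ a[:Fp])[:, None] + (Wh @ a[Fp:])[None, :])
        e = np.where(adj > 0, e, -np.inf)                     # restrict to the neighborhood N_i
        alpha = np.exp(e - e.max(axis=1, keepdims=True))
        alpha /= alpha.sum(axis=1, keepdims=True)             # softmax over j (Eq. 2)
        agg += alpha @ Wh                                     # sum_j alpha_ij * W h_j
    return sigmoid(agg / K)                                   # average heads, apply sigma (Eq. 4)

# Toy example: 5 stations (nodes), 6 input features, 2 attention heads
rng = np.random.default_rng(0)
h = rng.normal(size=(5, 6))
adj = np.ones((5, 5))                                         # fully connected sub-region
W_heads = [rng.normal(size=(6, 8)) for _ in range(2)]
a_heads = [rng.normal(size=(16,)) for _ in range(2)]
h_new = gat_layer(h, adj, W_heads, a_heads)                   # (5, 8) updated node features
```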

4. Stack Integration Algorithm

The accuracy of PV power prediction in large-scale regions is of the utmost importance for energy planning and operational management. To ensure a reliable energy supply and effective operational decision making, higher-precision models are required for predicting PV power in large-scale regions. Building upon the statistical scale method, this paper divides a large-scale region into multiple sub-regions. To better capture the shared features and similarity relationships among the sub-regions, a combination model based on stacked ensemble algorithms is employed for predicting PV power in the large-scale region.
The stacked ensemble algorithm improves overall prediction performance by combining multiple base models. Its main concept involves using the predictions of the base models as input features and utilizing another meta-model for the final prediction. This algorithm effectively integrates the strengths of multiple models. By incorporating predictions from multiple base models, it can better adapt to new and unseen data. This generalization ability allows the model to demonstrate higher prediction capabilities and increased reliability when encountering various PV power generation scenarios in different sub-regions. Furthermore, the algorithm can be easily expanded to incorporate additional base models and adjusted and improved as needed, providing enhanced scalability and flexibility.
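The generic stacking principle can be illustrated with scikit-learn's StackingRegressor on synthetic data. The stand-in learners below are not the CNN-LSTM-MHA base models or GRU meta-model used in this paper; the sketch only shows how base-model predictions become meta-model features.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic regression data standing in for sub-region power features
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

# Base models produce out-of-fold predictions that become features for the meta-model
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
                ("ridge", Ridge(alpha=1.0))],
    final_estimator=Ridge(alpha=1.0),   # meta-model combining the base predictions
)
stack.fit(X_tr, y_tr)
print("R^2 on held-out data:", stack.score(X_te, y_te))
```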

4.1. Basic Model

In the process of photovoltaic power generation, many influential characteristics are often involved, such as weather data (sunshine time, temperature, humidity, etc.), geographical location information, and photovoltaic module performance parameters. These data can have highly nonlinear relationships, and there can be temporal and spatial correlations and interaction effects, which also make it difficult for simple machine learning models to capture complex patterns and associations in the data. In this paper, the CNN-LSTM-multi-head attention parallel multi-channel (CNN-LSTM-MHA (PC)) model is selected in the basic model training stage, which can maximize the advantages of combining the CNN and LSTM approaches, as shown in Figure 5.
Firstly, the parallel training of the CNN and LSTM models allows for comprehensive utilization of both the temporal and spatial features in the data. The CNN is capable of extracting spatial features from the data, while the LSTM model, with its memory cells and gating mechanisms, selectively remembers and forgets past information. This enables the model to better capture long-term patterns and trends in the power data of sub-regions. Additionally, there may exist spatial correlations in the power data of the region, meaning that adjacent sub-regions may have similar or related power values. Through its recursive structure, LSTM can simultaneously handle both time series and spatial correlations, thereby effectively leveraging the interdependence between sub-regions for prediction. By training these two models in parallel, we can consider the spatiotemporal information within the power plant data simultaneously, leading to more comprehensive modeling and prediction of power generation.
Secondly, through the multi-head attention mechanism, we can merge the outputs of the CNN and LSTM models. The multi-head attention mechanism allows for the weighting and fusion of different feature subspaces, enabling the model to focus more on important features and spatiotemporal relationships. This mechanism enhances the model’s perception of critical features in the power plant data, thus improving the accuracy and robustness of predictions.
Lastly, the final prediction is obtained by mapping the output of the multi-head attention mechanism through fully connected layers. The fully connected layers enable the combination and transformation of features, converting the high-level features extracted by the multi-head attention mechanism into the final prediction output. This structure possesses strong expressive power, being capable of adapting to complex patterns in power plant data and producing accurate prediction results. The structure and related parameters of the model are shown in Table A1 in Appendix A.
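A minimal tf.keras sketch of this parallel-channel idea is given below: a CNN branch and an LSTM branch process the same input window in parallel, their outputs are concatenated, fused by multi-head attention, and mapped to the prediction through fully connected layers. The layer sizes are loosely inspired by Table A1 but are illustrative assumptions rather than the exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_cnn_lstm_mha_pc(window_len=12, n_features=6, n_heads=5):
    """Sketch of a CNN-LSTM-multi-head-attention parallel multi-channel model."""
    inp = layers.Input(shape=(window_len, n_features))

    # Channel 1: CNN branch for local/spatial patterns
    cnn = layers.Conv1D(64, kernel_size=1, padding="same", activation="relu")(inp)
    cnn = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(cnn)

    # Channel 2: LSTM branch for long-term temporal dependencies
    lstm = layers.LSTM(64, return_sequences=True)(inp)
    lstm = layers.LSTM(64, return_sequences=True)(lstm)

    # Fuse the parallel channels with multi-head attention
    merged = layers.Concatenate(axis=-1)([cnn, lstm])
    attn = layers.MultiHeadAttention(num_heads=n_heads, key_dim=32)(merged, merged)
    attn = layers.Dropout(0.1)(attn)

    # Fully connected mapping to the predicted power for the next step
    out = layers.GlobalAveragePooling1D()(attn)
    out = layers.Dense(64, activation="relu")(out)
    out = layers.Dense(1)(out)

    model = Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse", metrics=["mae"])
    return model

model = build_cnn_lstm_mha_pc()
model.summary()
```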

4.1.1. Convolutional Neural Networks

Convolutional neural networks (CNNs) have achieved significant success in image processing, and their feature extraction capability plays a crucial role in time series data as well. Time series data often contain complex patterns and structures, and traditional manual feature engineering is challenging for extracting such features. Through the hierarchical stacking of convolutional operations, a CNN can efficiently capture the spatial features among the data.
Moreover, a specific moment in time series data is usually correlated with its neighboring moments. Convolutional operations can capture local dependencies when processing time series data. By defining appropriate kernel sizes, the CNN can effectively capture local patterns and their evolution in the time series data. Additionally, by using different kernel sizes or multiple layers of convolution, the CNN can achieve a multi-scale representation of the time series data. Smaller kernels can capture detailed features, while larger kernels can capture more macroscopic trend features. Multiple layers of convolution can gradually extract abstract features from the time series data, enabling modeling of different time scales. The CNN’s implementation formulae are as follows:
$$Y_i = \sum_{j=0}^{k-1} X_{i+j} \, W_j$$
$$Y_i = \max_{j=0,\ldots,p-1} X_{i \cdot s + j}$$
where Y is the output sequence of the convolution operation, i is the index of the output sequence, k is the size of the convolution kernel, p is the size of the pooling window, and s is the step length of the pooling window.
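A small NumPy illustration of the two formulae above (single channel, valid convolution) makes the indexing concrete:

```python
import numpy as np

def conv1d(x, w):
    """y_i = sum_{j=0}^{k-1} x_{i+j} * w_j (valid 1-D convolution, single channel)."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def max_pool1d(x, p, s):
    """y_i = max_{j=0..p-1} x_{i*s + j} (pooling window size p, stride s)."""
    return np.array([x[i * s:i * s + p].max() for i in range((len(x) - p) // s + 1)])

x = np.array([0.1, 0.5, 0.9, 0.4, 0.3, 0.8, 0.2])
w = np.array([0.25, 0.5, 0.25])          # a simple smoothing kernel
y = conv1d(x, w)                         # length-5 feature map
print(max_pool1d(y, p=2, s=2))           # down-sampled feature map
```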

4.1.2. Long Short-Term Memory

Long short-term memory (LSTM) is a variant of recurrent neural networks (RNNs) which has achieved significant success in handling time series data. Time series data often exhibit long-term dependencies and temporal relationships, and traditional RNNs face challenges in dealing with long-term dependencies due to the vanishing or exploding gradient problem. LSTM effectively addresses these issues by introducing gate mechanisms.
The main idea of LSTM is to store and update information through a memory unit called a “cell”. The cell consists of a forget gate, an input gate, and an output gate, each with learnable weights which determine whether to pass or update information. The forget gate decides whether previously stored memory should be forgotten, the input gate determines how new information is integrated into the memory, and the output gate determines how much of the output memory is passed to the next time step.
The key aspect of LSTM lies in its ability to update and retain long-term memory effectively. Through control of the forget gate and input gate, LSTM can selectively forget or store information, enabling more accurate handling of long-term dependencies. This allows LSTM to capture important patterns and structures in time series data without being limited by vanishing or exploding gradients.
Similar to CNNs, LSTM can also extract more abstract features through the stacking of multiple layers. Each LSTM layer can capture different levels of time scales, ranging from lower-level detailed features to higher-level abstract features. This multi-layer structure facilitates a deeper understanding and modeling of time series data by LSTM. The formulae for each state at time t are as follows:
$$i_t = \sigma\left(W_i x_t + U_i h_{t-1} + b_i\right)$$
$$f_t = \sigma\left(W_f x_t + U_f h_{t-1} + b_f\right)$$
$$o_t = \sigma\left(W_o x_t + U_o h_{t-1} + b_o\right)$$
$$\tilde{c}_t = \tanh\left(W_c x_t + U_c h_{t-1}\right)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$$
$$h_t = o_t \odot \tanh\left(c_t\right)$$
where $W_i$, $W_f$, $W_o$, and $W_c$ are the weight matrices of the input gate, forget gate, output gate, and memory unit, respectively, $U_i$, $U_f$, $U_o$, and $U_c$ are the corresponding weight matrices for the hidden state, and $b_i$, $b_f$, and $b_o$ are the bias terms. The memory cell $c_t$ is updated through the forget gate and the input gate: the forget gate determines how much information from the previous memory cell $c_{t-1}$ is retained, and the input gate determines how much of the new candidate information $\tilde{c}_t$ is added to the memory cell state.
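The six formulae above translate directly into a single NumPy time step; the randomly initialized weights below are stand-ins for learned parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, P):
    """One LSTM step following the formulae above; P holds weight matrices and biases."""
    i_t = sigmoid(P["W_i"] @ x_t + P["U_i"] @ h_prev + P["b_i"])   # input gate
    f_t = sigmoid(P["W_f"] @ x_t + P["U_f"] @ h_prev + P["b_f"])   # forget gate
    o_t = sigmoid(P["W_o"] @ x_t + P["U_o"] @ h_prev + P["b_o"])   # output gate
    c_tilde = np.tanh(P["W_c"] @ x_t + P["U_c"] @ h_prev)          # candidate memory
    c_t = f_t * c_prev + i_t * c_tilde                             # memory cell update
    h_t = o_t * np.tanh(c_t)                                       # hidden state
    return h_t, c_t

rng = np.random.default_rng(1)
n_in, n_hid = 6, 8
P = {k: rng.normal(scale=0.1, size=(n_hid, n_in)) for k in ("W_i", "W_f", "W_o", "W_c")}
P.update({k: rng.normal(scale=0.1, size=(n_hid, n_hid)) for k in ("U_i", "U_f", "U_o", "U_c")})
P.update({k: np.zeros(n_hid) for k in ("b_i", "b_f", "b_o")})

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.normal(size=(5, n_in)):        # a toy 5-step input sequence
    h, c = lstm_step(x_t, h, c, P)
```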

4.1.3. Multi-Head Attention Mechanism

In the multi-head attention mechanism, each attention head has an independent weight allocation mechanism which determines how much attention each head pays to the input. Each attention head generates a weight coefficient vector which is used to weight the sum of the input and obtain a representation of that head. In this way, multiple attention heads can learn different attention patterns in different feature subspaces, thus providing diversified information expression. The formulae for calculating the multi-head attention mechanism are as follows:
$$\mathrm{MultiHead}\left(Q, K, V\right) = \mathrm{Concat}\left(\mathrm{head}_1, \ldots, \mathrm{head}_h\right) W^{O}$$
$$\mathrm{head}_i = \mathrm{Attention}\left(Q W_i^{Q},\, K W_i^{K},\, V W_i^{V}\right)$$
$$\mathrm{Attention}\left(Q, K, V\right) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_k}}\right) V$$
In these formulae, $Q$, $K$, and $V$ represent the query, key, and value vectors, respectively, $h$ represents the number of heads, $\mathrm{head}_i$ represents the output of the $i$-th head, and $W^{O}$ represents the output transformation matrix. $W_i^{Q}$, $W_i^{K}$, and $W_i^{V}$ represent the query, key, and value projection matrices of the $i$-th head, respectively, $d_k$ is the dimension of the key vector, and softmax performs the similarity normalization.
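The scaled dot-product and multi-head formulae above can be written compactly in NumPy; the shapes and head count below are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V : scaled dot-product attention."""
    d_k = K.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def multi_head(X, heads, W_o):
    """Concat(head_1, ..., head_h) W_o, with head_i = attention(X W_i^Q, X W_i^K, X W_i^V)."""
    outs = [attention(X @ Wq, X @ Wk, X @ Wv) for Wq, Wk, Wv in heads]
    return np.concatenate(outs, axis=-1) @ W_o

rng = np.random.default_rng(2)
seq_len, d_model, h, d_k = 10, 16, 4, 8
X = rng.normal(size=(seq_len, d_model))                       # fused CNN/LSTM features
heads = [tuple(rng.normal(size=(d_model, d_k)) for _ in range(3)) for _ in range(h)]
W_o = rng.normal(size=(h * d_k, d_model))                     # output transformation matrix
Y = multi_head(X, heads, W_o)                                 # (seq_len, d_model)
```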

4.2. Meta-Model

In the forecasting of photovoltaic power generation over a large area, more attention is paid to the forecast of the overall power generation. Because the spatial correlation between power stations in a large region may not be obvious, and because short-term changes and fluctuations sometimes have a greater impact on the overall forecast results, a GRU is used as the meta-model in this paper.

Gated Recurrent Unit

The gated recurrent unit (GRU), compared with LSTM, has fewer gate units, allowing it to more effectively capture rapid changes and short-term patterns in large-area power data. Additionally, large areas may exhibit significant differences due to geographical location, weather, and other factors. The GRU model can better adapt to the variability between different regions and quickly adjust to changes in different areas. This enables it to accurately capture the characteristics and changing patterns of different regions when predicting power in a large area. As a meta-model, the GRU can synthesize predictions from sub-regions and further forecast the power for an entire large area. It can weigh and adjust the predictions from sub-regions, thereby improving the accuracy for the entire large area.
In this paper, a GRU is used as the meta-model of the stack integration algorithm, and the GRU implementation formulae are as follows:
$$z_t = \sigma\left(W_z \left[h_{t-1}, x_t\right] + b_z\right)$$
$$r_t = \sigma\left(W_r \left[h_{t-1}, x_t\right] + b_r\right)$$
$$\tilde{h}_t = \tanh\left(W_h \left[r_t \odot h_{t-1}, x_t\right] + b_g\right)$$
$$h_t = \left(1 - z_t\right) \odot h_{t-1} + z_t \odot \tilde{h}_t$$
where $x_t$ is the input at the current time step, $h_t$ is the hidden state at the current time step, $z_t$ is the update gate, which controls the weight between the previous hidden state and the candidate hidden state, $r_t$ is the reset gate, which controls the influence of the previous hidden state on the candidate state, $\tilde{h}_t$ is the candidate hidden state, and $W_z$, $W_r$, and $W_h$ are weight parameters. See Appendix A for details.
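A sketch of such a GRU meta-model in tf.keras is shown below; the input is assumed to be a window of sub-region base-model forecasts concatenated with large-area features, and the layer sizes are illustrative:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_meta_gru(window_len=12, n_features=9):
    """Meta-model sketch: a GRU over [sub-region base-model forecasts + large-area features]."""
    inp = layers.Input(shape=(window_len, n_features))
    x = layers.GRU(64, return_sequences=False)(inp)   # single gated recurrent layer
    x = layers.Dense(32, activation="relu")(x)
    out = layers.Dense(1)(x)                          # large-area power forecast
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

meta = build_meta_gru()
meta.summary()
```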

4.3. Model Evaluation

In this paper, the root mean square error (RMSE), mean absolute error (MAE), and R-squared value ($R^2$) were used as evaluation indices to assess the prediction results of the proposed model and the comparison models across different seasons and time scales. The specific formulae for these metrics are as follows:
$$X_{\mathrm{RMSE}} = \sqrt{\frac{\sum_{t=1}^{N} \left(y_t - \hat{y}_t\right)^2}{N}}$$
$$X_{\mathrm{MAE}} = \frac{\sum_{t=1}^{N} \left|y_t - \hat{y}_t\right|}{N}$$
$$R^2 = 1 - \frac{\sum_{t=1}^{N} \left(y_t - \hat{y}_t\right)^2}{\sum_{t=1}^{N} \left(\bar{y} - y_t\right)^2}$$
In the formulae, $y_t$ and $\hat{y}_t$ are the true value and predicted value at time $t$, respectively, $\bar{y}$ is the mean of the true values, and $N$ is the total number of test samples.
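These three metrics translate directly into NumPy:

```python
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([10.0, 12.5, 15.0, 14.0])   # toy true power values (kW)
y_pred = np.array([9.5, 13.0, 14.5, 14.5])    # toy predicted values (kW)
print(rmse(y_true, y_pred), mae(y_true, y_pred), r2(y_true, y_pred))
```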

5. Experimental Research

5.1. Data Description

The experimental hardware set-up for this study included a 2.5 GHz Intel(R) Core(TM) i7-11700H CPU with 32.00 GB of memory, and implementation was performed using the TensorFlow framework and the Python language. The experimental data for this study were provided by the Desert Knowledge Australia Solar Centre (DKASC) and originated from the Yulara Solar System in the Ayers Rock region of Australia. Installed in 2014, the Yulara Solar System is an operating 1.8 MW solar photovoltaic plant which was developed with the support of the Australian Renewable Energy Agency (ARENA). Comprising five sub-systems distributed across the local township of Yulara, it sits beside Central Australia’s renowned landmark Uluru (Ayers Rock) and generates electricity for the local grid.
There were a total of 8 meteorological forecast-related fields: Wind_Speed, Temperature, Global_Horizontal_Radiation, Wind_Direction, Max_Wind_Speed, Air_Pressure, Pyranometer_1, and Pyranometer_2. The details are shown in Table 1. The dataset can be publicly downloaded from the website corresponding to [25].
Each dataset contained environmental factors and photovoltaic power output data collected at 5 min intervals, resulting in 288 data points per day. Due to human error, communication failures, or other issues during the data collection process, there may have been outliers and missing data. In this study, the 3-sigma criterion was applied to identify and remove outliers; this criterion defines data points which deviate from the mean by more than three standard deviations as outliers. For missing data points, the Hermite interpolation method was used to estimate reasonable values based on the values and derivative information of existing data points.
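A sketch of this cleaning step on a synthetic series is shown below, using the 3-sigma rule to flag outliers and SciPy's PCHIP interpolator (a piecewise cubic Hermite scheme) to fill the gaps; this is one possible realization, not necessarily the authors' exact procedure.

```python
import numpy as np
import pandas as pd
from scipy.interpolate import PchipInterpolator

# Synthetic 5 min power series with an injected spike and a missing value
idx = pd.date_range("2020-01-01", periods=288, freq="5min")
power = pd.Series(np.sin(np.linspace(0, np.pi, 288)) * 100, index=idx)
power.iloc[100] = 900.0          # outlier
power.iloc[150] = np.nan         # missing sample

# 3-sigma rule: values more than three standard deviations from the mean become NaN
mu, sigma = power.mean(), power.std()
cleaned = power.where((power - mu).abs() <= 3 * sigma)

# Hermite-type interpolation over the resulting gaps
valid = cleaned.dropna()
interp = PchipInterpolator(valid.index.astype("int64"), valid.values)
filled = pd.Series(interp(cleaned.index.astype("int64")), index=cleaned.index)
```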
As different seasons can have varying impacts on photovoltaic power generation, considering the influence of factors such as sunlight and temperature, the relationship between photovoltaic power generation and meteorological factors may exhibit different patterns at different time scales. Therefore, the dataset was divided into seasons: spring, summer, autumn, and winter. Furthermore, for each season, the data were divided into different time intervals: 1 h, 3 h, and 5 h. This division allowed for better capturing of seasonal patterns and trends between photovoltaic power generation and meteorological factors. Additionally, by modeling multiple time scales, the dynamic characteristics of the data could be more comprehensively captured.

5.2. Correlation Analysis

In photovoltaic power prediction tasks, there are numerous features which can potentially affect the power output. Selecting appropriate input features is crucial for establishing an accurate prediction model. Due to the presence of a large number of potential influencing factors, feature selection helps to identify features strongly correlated with the power output, reducing the interference of redundant information and noise. Choosing highly correlated features as inputs enables better capture of the factors influencing power and improves the accuracy and interpretability of the prediction model.
This study employed the maximal information coefficient (MIC) and the Pearson correlation coefficient for feature selection and partitioning. The MIC, as a non-parametric measure of correlation, can identify associations between variables of any type, including nonlinear relationships. This capability allows it to discover complex correlations hidden within photovoltaic power data, assisting in finding features strongly correlated with the power output without being limited by assumptions about feature distributions. The formula for calculating the MIC is as follows:
$$\mathrm{MIC}\left(x, y\right) = \max_{e \times f < D} \frac{I\left(x, y\right)}{\log_2 \min\left(e, f\right)}$$
where $I(x, y)$ is the mutual information between $x$ and $y$, $e$ and $f$ are the numbers of grid cells into which the two variables are partitioned, and $D$ is usually set to the total number of samples raised to the power of 0.6.
The Pearson correlation coefficient is a common linear correlation measurement method which is suitable for measuring the strength of a linear relationship. By using the Pearson correlation coefficient, features which had a high linear correlation with the power output could be identified. The relevant calculation formula is as follows:
$$\rho_{X,Y} = \frac{\mathrm{cov}\left(X, Y\right)}{\sigma_X \sigma_Y}$$
where $\mathrm{cov}(X, Y)$ is the covariance of $X$ and $Y$, and $\sigma_X$ and $\sigma_Y$ are the standard deviations of $X$ and $Y$, respectively.
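For feature screening, the Pearson coefficient is available in SciPy, and one commonly used MIC implementation is the minepy package; the choice of minepy and the synthetic data below are assumptions for illustration, as the paper does not state which implementation was used.

```python
import numpy as np
from scipy.stats import pearsonr
from minepy import MINE   # pip install minepy; one available MIC implementation

rng = np.random.default_rng(3)
radiation = rng.uniform(0, 1000, size=500)
power = 0.08 * radiation + 5 * np.sin(radiation / 50) + rng.normal(scale=3, size=500)

rho, p_value = pearsonr(radiation, power)     # linear correlation
mine = MINE(alpha=0.6, c=15)                  # alpha=0.6 matches D = n^0.6
mine.compute_score(radiation, power)
print(f"Pearson rho = {rho:.3f}, MIC = {mine.mic():.3f}")
```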
Correlation analysis was carried out between the historical data of actual photovoltaic power generation, relevant meteorological data, and the characteristics of the photovoltaic panels themselves. The correlation analysis results are shown in Figure 6.
The results show that Wind_Speed (wind speed), Temperature (temperature), Global_Horizontal_Radiation (radiation), Max_Wind_Speed (maximum wind speed), Pyranometer_1 (probe 1’s temperature), and Pyranometer_2 (probe 2’s temperature) had a strong influence on photovoltaic power generation. Therefore, these influencing factors were selected as input features for the model.

5.3. Experimental Results

5.3.1. Comparison Experiments for the Selection of Sub-Region Representative Power Stations

In Section 3, the selection method for a sub-region’s representative power station was introduced: the power station with the best GAT-based prediction result is the representative power station for the region. Sub-region A, for example, had five power stations: DG, SS, SD-1A, SD-2A, and SD-3A. By constructing a GAT, the power of the five power stations was predicted and evaluated using the RMSE and MAE indices. Table 2, Table 3, Table 4 and Table 5 list the results.
Based on the data in Table 2, Table 3, Table 4 and Table 5, significant observations can be made regarding the prediction accuracy. The RMSE and MAE values for station SD-1A were noticeably smaller compared with the other power stations in the sub-region. This indicates that station SD-1A exhibited exceptional accuracy in its predictions. Taking into account the aforementioned metrics, we can conclude that station SD-1A achieved the best prediction results by iteratively learning and integrating the spatiotemporal features from all other power stations in the region. Therefore, station SD-1A can be considered representative of the entire sub-region. Furthermore, tests conducted across different seasons demonstrate that station SD-1A, as a representative power station, exhibited superior generalization capabilities, performing well in predictions across diverse seasons. This further confirms the superiority of station SD-1A in capturing seasonal variations and its predictive abilities. Based on these results, we selected SD-1A as the representative power station for region A. The same validation process was carried out for regions B and C, identifying representative power stations for their respective sub-regions.

5.3.2. Sub-Region Basic Model Comparison Test

In this study, a model based on the stacked ensemble algorithm was employed to predict the distributed photovoltaics (PVs) in regional areas. The basic model was used to predict the power generation in the sub-regions, and its output served as the input for the meta-model, which was used to predict power generation in the larger region. The accuracy of the basic model’s predictions directly impacts the quality of the input data for the meta-model. If the basic model’s predictions exhibit significant errors, then these errors will propagate to the meta-model, thereby affecting the prediction results for the larger region’s power generation. Additionally, if the selected basic model can adequately learn and capture the spatiotemporal features of the sub-regions, then the meta-model will benefit from these valuable feature representations. Conversely, if the selected basic model has weak feature learning capabilities, then it may not provide sufficient information for predicting the larger region, resulting in decreased prediction accuracy. Liu et al. [26] demonstrated the feasibility and effectiveness of parallel network structures in the PV domain by utilizing multimodal decomposition and combining parallel bidirectional long short-term memory (BiLSTM) and a convolutional neural network (CNN). Compared with traditional single-channel networks, parallel network structures exhibit stronger generalization and robustness.
In this paper, model 2 (CNN-LSTM-MHA (PC)) was used as the basic model and compared with model 1 (CNN-GRU-MHA (PC)) and model 3 (CNN-BiLSTM-MHA (PC)) in different seasons and different time steps. Taking region A as an example, the model evaluation results are shown in Figure 7.
In this experiment, dividing the data according to different seasons helped to better capture the seasonal patterns and trends between photovoltaic power generation and meteorological factors. Different seasons’ meteorological factors, such as sunlight and temperature, have varying impacts on photovoltaic power generation. Additionally, predictions were made for 1 h, 3 h, and 5 h ahead. By forecasting the photovoltaic power generation at different time points, the model’s robustness and practical benefits can be better evaluated. From Figure 7a, it can be seen that model 2’s MAE metric decreased by an average of 1.77 kW and 1.1 kW compared with model 1 and model 3, respectively, in the 1 h, 3 h, and 5 h predictions. Similarly, in Figure 7b, model 2 showed an average reduction of 2.63 kW and 0.22 kW compared with the other two models. The results indicate that by leveraging the strengths of the CNN model in capturing the local features of the data and LSTM in capturing the long-term dependencies, the CNN-LSTM-MHA (PC) model constructed in this study and trained in parallel can maximize model performance. When LSTM was replaced by a GRU and BiLSTM under the same model structure, although good prediction results were achieved, there were still some issues. For instance, the GRU struggled to capture complex data features due to fewer gating units, and BiLSTM required a larger number of parameters, making it prone to overfitting. The model evaluation metrics for sub-region A are shown in Table 6, Table 7 and Table 8, while detailed data for sub-regions B and C can be found in Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7 of Appendix A.

5.3.3. Large-Area Meta-Model Comparison Test

Large regions typically encompass a wide range of geographical and climatic conditions, making it challenging to select suitable data features for large-area prediction. Additionally, there may be strong spatial correlations among data from different sub-regions. The stacked ensemble algorithm leverages the outputs of base models as new features for the meta-model, addressing the inadequacies of large-area data features and enhancing spatial correlations among different sub-regions. In this study, a GRU was employed as the meta-model to forecast power generation in a large region. To further validate the feasibility of the proposed model, comparative experiments were conducted with classical models such as CNN-LSTM-MHA (SC), BiLSTM-CNN (SC), and the traditional extrapolation method. The prediction data for the sub-regions were all based on the results predicted by the CNN-LSTM-MHA (PC) model. The extrapolation method involved summing the predictions from individual sub-regions. The comparison results are shown in Figure 8, where the data were randomly selected from one day of the final results.
From the above comparison of the prediction results, it can be observed that both the proposed method in this study and other classical models and traditional methods are capable of capturing the trends in photovoltaic power generation. This indicates that under normal circumstances, they can provide a certain level of prediction accuracy. However, during extreme weather conditions, there was a significant abnormal fluctuation in photovoltaic power generation. In comparison with other methods, the method proposed in this study can more accurately capture the detailed features of the data and demonstrate better fitting at the turning points of the curves. This suggests that this method can more precisely predict the changes in photovoltaic power generation at different time points and identify the inflection points of the power curve. The specific evaluation metrics are shown in Table 9, Table 10 and Table 11.
Based on the data table, it is evident that regardless of the season or time scale, using a GRU as the meta-model yielded significantly better results compared with other classical models. Across the four seasons of spring, summer, autumn, and winter, compared with BiLSTM-CNN (SC), CNN-LSTM-MHA (SC), and the extrapolation method, our proposed model showed average reductions in the MAE at different time scales, namely 5.49 kW, 8.28 kW, and 0.95 kW; 4.97 kW, 5.99 kW, and 0.87 kW; 7.9 kW, 9.73 kW, and 2.56 kW; and 14.07 kW, 17.68 kW, and 6.19 kW, respectively. Therefore, in scenarios where there are fewer features in a large area, the simplicity of the GRU model’s gating mechanism allows it to swiftly and effectively capture crucial information from the data, leading to superior outcomes.
Although the extrapolation method slightly outperformed the other two models in certain metrics, it still fell short compared with our proposed model. The strength of the extrapolation method lies in its simplicity and ease of implementation, but its performance heavily relies on the accuracy of the base model and reasonable sub-region divisions. Insufficient accuracy in the base model or overly detailed sub-region divisions can lead to error accumulation and decreased prediction accuracy.
Obtaining features from large-area data is challenging, resulting in a lower feature count. In such scenarios, models like BiLSTM-CNN (SC) and CNN-LSTM-MHA (SC), due to their complex structures, are prone to getting stuck in local optima during training, leading to significantly increased training times.
This indirectly underscores the importance of choosing the right models when dealing with diverse data. For sub-regions, where data complexity and influential features are high, a single model might not adequately capture essential information, necessitating the utilization of different models’ strengths by adjusting network structures for training and prediction. In contrast, for large areas with fewer features, complex model structures may not necessarily improve predictive accuracy; instead, they may increase the training time and risk of overfitting.

6. Conclusions

To address the issues of inaccurate selection of representative power stations for sub-regions, insufficient consideration of spatial correlations within and between sub-regions, and challenges in multi-step prediction, a distributed regional PV power prediction model based on a stacked ensemble algorithm was proposed, leading to the following conclusions:
  • By selecting representative power stations, the scale of an entire large region can be reduced, thereby lowering computational complexity. Compared with global prediction over a large region, using representative power stations for prediction can greatly simplify the model’s computational process and data transmission, improving efficiency. By employing the graph attention network (GAT) method to construct a graph network within sub-regions and learning attention weights between nodes, important node information can be adaptively selected. In this case, the GAT helped us choose influential power stations within sub-regions which could more accurately represent the overall power generation of the entire region.
  • A distributed regional PV power prediction model based on a stacked ensemble algorithm was proposed, which combined different basic models to form a multi-level model structure. By integrating and combining the model’s predictions at different levels, information across sub-regions can be effectively utilized. This integration capability helps the model better capture the shared features and similarity relationships between sub-regions, thereby improving the performance of the prediction model.
  • Through instance analysis and comparative experiments on different seasons and time scales, the proposed model demonstrated high accuracy, a strong generalization ability, and robustness, providing new insights for regional PV power prediction.
Although the method proposed in this study demonstrated significant advantages in regional forecasting, there are still some shortcomings which need further improvement and outlooks:
  • The data used in this study were sourced from a single origin and thus lacked diversity, which could have impacted the model’s generalization ability and prediction effectiveness. In future research, it is necessary to incorporate data from different regions and countries to enhance the diversity of the dataset.
  • The process from regional division and the selection of representative power stations to the final forecast of large regions consumed a substantial amount of time. In future research, it is essential to streamline the model structure to reduce time costs while ensuring high prediction accuracy.

Author Contributions

Conceptualization, K.H.; methodology, C.L. and S.S.; software, Z.F.; validation, Z.F.; formal analysis, C.L.; investigation, Z.F. and S.S.; resources, Y.F.; data curation, C.L.; writing—original draft, C.L.; writing—review & editing, K.H. and B.W.; supervision, Y.F. and B.W.; project administration, K.H.; funding acquisition K.H. and B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the joint funds of the Zhejiang Provincial Natural Science Foundation of China (Grant No. LHY21E090004 and LHZSZ24F020001), the Education Science Planning Project of Zhejiang Province in China (Grant No. 2024SCG026), Project funding from the Zhejiang Higher Education Association in China (Grant No. KT2024170), Teaching Construction and Reform Project of Hangzhou Normal University in China (Research on Smart Teaching Reform Based on Digital Twins), and the Scientific Research Foundation of Qianjiang College of Hangzhou Normal University (Grant No. 2022QJJL02), Zhejiang Provincial Natural Science Foundation of China (Grant No. LQ23F010005), Big Ideological and Political Course of Hangzhou Normal University in China (Information Management), Teaching Construction and Reform Project of Hangzhou Normal University in China (Teaching Reform of “Information Management” Course Based on 101 Plan).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. The structure and related parameters of the CNN-LSTM-MHA model.

Parameters                                                        CNN-LSTM-MHA
CNN-1 (convolution kernel size/number of convolution kernels)     1/64
CNN-2 (convolution kernel size/number of convolution kernels)     3/34
CNN-3 (convolution kernel size/number of convolution kernels)     1/64
LSTM-1 (cells)                                                    100
LSTM-2 (cells)                                                    64
LSTM-3 (cells)                                                    400
LSTM-4 (cells)                                                    64
LSTM-5 (cells)                                                    80
Multi-head attention (heads)                                      5
Dropout parameter                                                 0.1
learning_rate                                                     0.0001
Optimizer                                                         Adam
batch_size                                                        50
Table A2. The detailed evaluation indicators for Region B in 1 h.

Sub-Region   Season   Method    MAE      RMSE
B (1 h)      Spring   Model 1   2.3186   3.3771
                      Model 2   1.9508   2.8504
                      Model 3   2.0523   4.0718
             Summer   Model 1   1.6538   2.3936
                      Model 2   0.9737   1.3365
                      Model 3   1.4679   2.0254
             Autumn   Model 1   1.7995   2.6118
                      Model 2   3.5888   4.3422
                      Model 3   2.0176   2.8212
             Winter   Model 1   4.1139   5.3951
                      Model 2   2.1666   2.8827
                      Model 3   4.9189   6.7063
Table A3. The detailed evaluation indicators for Region B in 3 h.

Sub-Region   Season   Method    MAE      RMSE
B (3 h)      Spring   Model 1   2.0986   3.1738
                      Model 2   2.2071   3.2437
                      Model 3   2.3256   3.3833
             Summer   Model 1   2.0986   3.1738
                      Model 2   1.3694   2.1247
                      Model 3   2.3644   2.8664
             Autumn   Model 1   1.8926   2.7912
                      Model 2   1.5159   2.3909
                      Model 3   1.7255   2.4533
             Winter   Model 1   2.519    3.6041
                      Model 2   1.5975   2.3817
                      Model 3   1.8356   2.5362
Table A4. The detailed evaluation indicators for Region B in 5 h.

Sub-Region   Season   Method    MAE      RMSE
B (5 h)      Spring   Model 1   2.3818   3.342
                      Model 2   1.5725   2.6347
                      Model 3   1.4767   2.3275
             Summer   Model 1   2.6922   3.6592
                      Model 2   1.9271   2.598
                      Model 3   2.0109   2.7458
             Autumn   Model 1   1.9572   2.6807
                      Model 2   3.1153   3.7569
                      Model 3   2.7378   3.2642
             Winter   Model 1   2.1137   2.9147
                      Model 2   1.5733   2.4565
                      Model 3   2.0515   3.0435
Table A5. The detailed evaluation indicators for Region C in 1 h.

Sub-Region   Season   Method    MAE      RMSE
C (1 h)      Spring   Model 1   0.7361   1.332
                      Model 2   0.5607   0.9965
                      Model 3   0.5621   1.0134
             Summer   Model 1   0.258    0.4986
                      Model 2   0.3566   0.5514
                      Model 3   0.2814   0.4724
             Autumn   Model 1   0.7184   0.8341
                      Model 2   0.4547   0.6798
                      Model 3   0.4841   0.7354
             Winter   Model 1   0.7485   1.0218
                      Model 2   0.7334   0.9438
                      Model 3   0.8616   1.0237
Table A6. The detailed evaluation indicators for Region C in 3 h.
Sub-Region | Season | Method | MAE | RMSE
C (3 h) | Spring | Model 1 | 0.4476 | 0.9364
C (3 h) | Spring | Model 2 | 0.4406 | 0.9356
C (3 h) | Spring | Model 3 | 0.5282 | 0.9572
C (3 h) | Summer | Model 1 | 0.5413 | 0.6998
C (3 h) | Summer | Model 2 | 0.4777 | 0.6814
C (3 h) | Summer | Model 3 | 0.5076 | 0.7374
C (3 h) | Autumn | Model 1 | 0.4126 | 0.6139
C (3 h) | Autumn | Model 2 | 0.4607 | 0.7195
C (3 h) | Autumn | Model 3 | 0.426 | 0.6773
C (3 h) | Winter | Model 1 | 0.6886 | 0.9259
C (3 h) | Winter | Model 2 | 0.58 | 0.8281
C (3 h) | Winter | Model 3 | 0.6364 | 0.8662
Table A7. The detailed evaluation indicators for Region C in 5 h.
Sub-Region | Season | Method | MAE | RMSE
C (5 h) | Spring | Model 1 | 0.673 | 1.0403
C (5 h) | Spring | Model 2 | 0.4967 | 0.9782
C (5 h) | Spring | Model 3 | 0.9759 | 1.2924
C (5 h) | Summer | Model 1 | 0.7226 | 0.9733
C (5 h) | Summer | Model 2 | 0.5588 | 0.753
C (5 h) | Summer | Model 3 | 0.5875 | 0.8317
C (5 h) | Autumn | Model 1 | 0.5571 | 0.765
C (5 h) | Autumn | Model 2 | 0.5196 | 0.7218
C (5 h) | Autumn | Model 3 | 0.5223 | 0.7815
C (5 h) | Winter | Model 1 | 0.6408 | 0.8487
C (5 h) | Winter | Model 2 | 0.5562 | 0.8148
C (5 h) | Winter | Model 3 | 0.5625 | 0.8768
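The MAE and RMSE columns in Tables A2–A7 (and in Tables 6–11 below) follow the standard definitions of the mean absolute error and root-mean-square error. A minimal sketch of how they can be computed from measured and predicted power series is shown below; the variable names are illustrative and the arrays are assumed to be plain NumPy arrays in the same power units.

```python
# Minimal sketch (plain NumPy, illustrative names) of the standard MAE and RMSE
# definitions used for the values reported in the evaluation tables.
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# y_true: measured PV power, y_pred: model output (same units, e.g., kW)
# print(mae(y_true, y_pred), rmse(y_true, y_pred))
```
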
The step-by-step details of the LSTM cell are as follows:
1.
Determine which information is discarded from the cell state, where f_t = σ(W_f x_t + U_f h_{t−1} + b_f).
This decision is controlled by the “forget gate” through the sigmoid function. Based on the previous time step’s output h_{t−1} and the current input x_t, a value between 0 and 1 is generated to decide whether the previous cell state c_{t−1} is retained, forgotten, or partially retained. For example, much of the content learned from earlier inputs may be irrelevant to the current task and can therefore be selectively filtered out.
2.
Generate the new information to be added.
This process involves two parts: i_t = σ(W_i x_t + U_i h_{t−1} + b_i) and c̃_t = tanh(W_c x_t + U_c h_{t−1}).
The first part is the “input gate” layer, which uses the sigmoid function to determine which information needs updating.
The second part is a tanh layer that generates the new candidate values c̃_t, which may be added to the cell state. The values produced by these two parts are combined to update the cell state.
3.
Update the information in the old cell state, where c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t (⊙ denotes element-wise multiplication).
The unnecessary information in c_{t−1} is first forgotten, and the new candidate information i_t ⊙ c̃_t is then added to obtain the updated cell state. This step removes obsolete information and adds new content. For example, if the cell state previously stored information about Zhang San and information about Li Si is now received, Zhang San’s information is discarded and Li Si’s information is retained.
4.
Compute the output, where o_t = σ(W_o x_t + U_o h_{t−1} + b_o) and h_t = o_t ⊙ tanh(c_t).
The gate value o_t is first obtained through the sigmoid layer; c_t is then scaled to the range of −1 to 1 by the tanh function and multiplied element-wise with o_t to yield the model’s output h_t.
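To make these four steps concrete, the following is a minimal NumPy sketch of a single LSTM cell step, offered as an illustration rather than the authors' implementation; the weight matrices W, U and biases b are assumed to be given as dictionaries keyed by gate name.

```python
# Minimal NumPy sketch (illustration only) of one LSTM cell step following
# Steps 1-4 above. Assumed shapes: W[g]: (hidden, input), U[g]: (hidden, hidden),
# b[g]: (hidden,), for gate names g in {"f", "i", "c", "o"}.
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # Step 1: forget gate decides how much of c_{t-1} to keep
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])
    # Step 2: input gate and candidate cell state (no bias term, as in the text)
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])
    c_tilde = np.tanh(W["c"] @ x_t + U["c"] @ h_prev)
    # Step 3: update the cell state
    c_t = f_t * c_prev + i_t * c_tilde
    # Step 4: output gate and hidden state
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```
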

References

  1. Ahmed, R.; Sreeram, V.; Togneri, R.; Datta, A.; Arif, M.D. Computationally expedient Photovoltaic power Forecasting: A LSTM ensemble method augmented with adaptive weighting and data segmentation technique. Energy Convers. Manag. 2022, 258, 115563. [Google Scholar] [CrossRef]
  2. Li, G.; Guo, S.; Li, X.; Cheng, C. Short-term Forecasting Approach Based on bidirectional long short-term memory and convolutional neural network for Regional Photovoltaic Power Plants. Sustain. Energy Grids Netw. 2023, 34, 101019. [Google Scholar] [CrossRef]
  3. Zhu, Q.; Li, J.; Qiao, J.; Shi, M.; Wang, C. Application and Prospect of artificial intelligence technology in renewable energy forecasting. Proc. CSEE 2023, 43, 3027–3048. [Google Scholar]
  4. Yona, A.; Senjyu, T.; Funabashi, T.; Kim, C.H. Determination method of insolation prediction with fuzzy and applying neural network for long-term ahead PV power output correction. IEEE Trans. Sustain. Energy 2013, 4, 527–533. [Google Scholar] [CrossRef]
  5. Sanjari, M.J.; Gooi, H.B. Probabilistic forecast of PV power generation based on higher order Markov chain. IEEE Trans. Power Syst. 2016, 32, 2942–2952. [Google Scholar] [CrossRef]
  6. Xie, T.; Zhang, G.; Liu, H.; Liu, F.; Du, P. A hybrid forecasting method for solar output power based on variational mode decomposition, deep belief networks and auto-regressive moving average. Appl. Sci. 2018, 8, 1901. [Google Scholar] [CrossRef]
  7. Junior, J.G.d.S.F.; Oozeki, T.; Ohtake, H.; Shimose, K.-I.; Takashima, T.; Ogimoto, K. Regional forecasts and smoothing effect of photovoltaic power generation in Japan: An approach with principal component analysis. Renew. Energy 2014, 68, 403–413. [Google Scholar] [CrossRef]
  8. Ahn, H.K.; Park, N. Deep RNN-based photovoltaic power short-term forecast using power IoT sensors. Energies 2021, 14, 436. [Google Scholar] [CrossRef]
  9. Agga, A.; Abbou, A.; Labbadi, M.; El Houm, Y.; Ali, I.H.O. CNN-LSTM: An efficient hybrid deep learning architecture for predicting short-term photovoltaic power production. Electr. Power Syst. Res. 2022, 208, 107908. [Google Scholar] [CrossRef]
  10. Qi, X.; Zheng, X.; Chen, Q. A short term load forecasting of integrated energy system based on CNN-LSTM. E3S Web Conf. 2020, 185, 01032. [Google Scholar] [CrossRef]
  11. Niu, D.; Yu, M.; Sun, L.; Gao, T.; Wang, K. Short-term multi-energy load forecasting for integrated energy systems based on CNN-BiGRU optimized by attention mechanism. Appl. Energy 2022, 313, 118801. [Google Scholar] [CrossRef]
  12. Gao, D.; Liu, X.; Zhu, Z.; Yang, Q. A hybrid cnn-bilstm approach for remaining useful life prediction of evs lithium-ion battery. Meas. Control 2023, 56, 371–383. [Google Scholar] [CrossRef]
  13. Zhang, Y.; Li, P.; Li, H.; Zu, W.; Zhang, H. Short-Term Power Prediction of Wind Power Generation System Based on Logistic Chaos Atom Search Optimization BP Neural Network. Int. Trans. Electr. Energy Syst. 2023, 2023, 6328119. [Google Scholar] [CrossRef]
  14. Qiu, F.; Wang, N.; Wang, Y. Short term photovoltaic power generation prediction model based on improved GA-LSTM neural network. In Proceedings of the 4th International Conference on Information Science, Electrical, and Automation Engineering (ISEAE 2022), Hangzhou, China, 25–27 March 2022; Volume 12257, pp. 13–18. [Google Scholar]
  15. Wang, T.; Beard, R.; Hawkins, J.; Chandra, R. Recursive deep learning framework for forecasting the decadal world economic outlook. arXiv 2023, arXiv:2301.10874. [Google Scholar]
  16. Zhang, Y.; Pan, Z.; Wang, H.; Wang, J.; Zhao, Z.; Wang, F. Achieving wind power and photovoltaic power prediction: An intelligent prediction system based on a deep learning approach. Energy 2023, 283, 129005. [Google Scholar] [CrossRef]
  17. Liu, C.; Li, M.; Yu, Y.; Wu, Z.; Gong, H.; Cheng, F. A review of multitemporal and multispatial scales photovoltaic forecasting methods. IEEE Access 2022, 10, 35073–35093. [Google Scholar] [CrossRef]
  18. Wang, X.; Yu, M.; Huo, Z.; Yang, J. Short-term power forecasting of distributed photovoltaic station clusters based on affinity propagation clustering and long short-term time-series network. Autom. Electr. Power Syst. 2023, 47, 133–141. [Google Scholar]
  19. Gong, D.; Chen, N.; Ji, Q.; Tang, Y.; Zhou, Y. Multi-scale regional photovoltaic power generation forecasting method based on sequence coding reconstruction. Energy Rep. 2023, 9, 135–143. [Google Scholar] [CrossRef]
  20. Wang, Y.; Shen, J.; Chen, C.; Zhou, B.; Zhang, C. Description of wind and solar power generation considering power plant clusters and time-varying power characteristics. Power Syst. Technol. 2023, 47, 1558–1572. [Google Scholar]
  21. Simeunović, J.; Schubnel, B.; Alet, P.J.; Carrillo, R.E. Spatio-temporal graph neural networks for multi-site PV power forecasting. IEEE Trans. Sustain. Energy 2021, 13, 1210–1220. [Google Scholar] [CrossRef]
  22. Zhang, X.; Gao, R.; Zhu, C.; Liu, C.; Mei, S. Ultra-short-term prediction of regional photovoltaic power based on dynamic graph convolutional neural network. Electr. Power Syst. Res. 2024, 226, 109965. [Google Scholar] [CrossRef]
  23. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903. [Google Scholar]
  24. Cao, Y.; Liu, G.; Luo, D.; Bavirisetti, D.P.; Xiao, G. Multi-timescale photovoltaic power forecasting using an improved Stacking ensemble algorithm based LSTM-Informer model. Energy 2023, 283, 128669. [Google Scholar] [CrossRef]
  25. Dka Solar Centre. Available online: https://dkasolarcentre.com.au/locations/yulara (accessed on 22 July 2024).
  26. Liu, Q.; Li, Y.; Jiang, H.; Chen, Y.; Zhang, J. Short-term photovoltaic power forecasting based on multiple mode decomposition and parallel bidirectional long short term combined with convolutional neural networks. Energy 2024, 286, 129580. [Google Scholar] [CrossRef]
Figure 1. General flow chart.
Figure 2. Sub-region division diagram.
Figure 3. GAT introduces multiple attention mechanisms.
Figure 4. Selection diagram of power stations.
Figure 5. Basic model structure diagram.
Figure 6. MIC correlation heat maps (a) and Pearson correlation heat maps (b).
Figure 7. Comparison of model evaluation results at different time scales in spring and summer for Region A (a). Comparison of model evaluation results at different time scales in autumn and winter for Region A (b).
Figure 8. Comparison of prediction results at 1 h (a), prediction results at 3 h (b), and prediction results at 5 h (c).
Table 1. Data information summary table.
Variable | MIN | MAX | AVG | SD
Wind_Speed | 0.04 | 351.68 | 1.97 | 1.17
Temperature | −2.4 | 43.21 | 22.13 | 8.57
Radiation | −12.87 | 1391.46 | 250.00 | 345.8
Wind_Direction | 11.60 | 346.90 | 168.65 | 57.73
Max_Wind_Speed | 0.20 | 22.60 | 3.77 | 2.21
Air_Pressure | 942.19 | 971.78 | 957.10 | 5.55
Pyranometer_1 | 29.99 | 82.15 | 43.77 | 10.82
Pyranometer_2 | 30.02 | 78.71 | 43.88 | 10.91
Table 2. Spring’s representation of the selection of power stations.
Table 2. Spring’s representation of the selection of power stations.
NameRMSE MAE
DG71.928040.7104
SS7.67953.7006
SD-1A0.76860.3244
SD-2A1.30840.5419
SD-3A1.53750.6934
Table 3. Summer’s representation of the selection of power stations.
Table 3. Summer’s representation of the selection of power stations.
NameRMSE MAE
DG44.445824.8324
SS9.02064.6554
SD-1A0.90480.4669
SD-2A1.53670.8302
SD-3A2.56221.4479
Table 4. Autumn’s representation of the selection of power stations.
Table 4. Autumn’s representation of the selection of power stations.
NameRMSE MAE
DG78.532544.1866
SS8.31584.3670
SD-1A1.10400.6614
SD-2A1.64220.9526
SD-3A2.07781.2771
Table 5. Winter’s representation of power station selection.
Table 5. Winter’s representation of power station selection.
NameRMSE MAE
DG59.317233.7924
SS17.10397.8272
SD-1A1.93490.9600
SD-2A3.24271.5860
SD-3A8.30294.1430
Table 6. The detailed evaluation indicators for Region A in 1 h.
Sub-Region | Season | Method | MAE | RMSE
A (1 h) | Spring | Model 1 | 6.3492 | 9.4532
A (1 h) | Spring | Model 2 | 5.055 | 7.8378
A (1 h) | Spring | Model 3 | 5.4976 | 7.9688
A (1 h) | Summer | Model 1 | 3.2965 |
A (1 h) | Summer | Model 2 | 3.1492 | 3.7488
A (1 h) | Summer | Model 3 | 3.6938 | 5.0222
A (1 h) | Autumn | Model 1 | 9.6351 | 14.2316
A (1 h) | Autumn | Model 2 | 3.3574 | 5.1431
A (1 h) | Autumn | Model 3 | 4.8213 | 6.5564
A (1 h) | Winter | Model 1 | 9.1667 | 12.1053
A (1 h) | Winter | Model 2 | 6.6326 | 8.7282
A (1 h) | Winter | Model 3 | 6.3783 | 9.568
Table 7. The detailed evaluation indicators for Region A in 3 h.
Sub-Region | Season | Method | MAE | RMSE
A (3 h) | Spring | Model 1 | 5.4884 | 9.7354
A (3 h) | Spring | Model 2 | 3.5813 | 6.4893
A (3 h) | Spring | Model 3 | 5.5008 | 9.2484
A (3 h) | Summer | Model 1 | 3.5041 | 5.2421
A (3 h) | Summer | Model 2 | 3.0622 | 4.7234
A (3 h) | Summer | Model 3 | 4.3811 | 5.7006
A (3 h) | Autumn | Model 1 | 4.4913 | 6.8614
A (3 h) | Autumn | Model 2 | 5.9799 | 7.7987
A (3 h) | Autumn | Model 3 | 4.2718 | 6.5595
A (3 h) | Winter | Model 1 | 7.6908 | 10.9833
A (3 h) | Winter | Model 2 | 6.6309 | 9.448
A (3 h) | Winter | Model 3 | 7.2278 | 10.4374
Table 8. The detailed evaluation indicators for Region A in 5 h.
Sub-Region | Season | Method | MAE | RMSE
A (5 h) | Spring | Model 1 | 7.5259 | 8.7838
A (5 h) | Spring | Model 2 | 5.4139 | 7.3349
A (5 h) | Spring | Model 3 | 6.3482 | 8.2565
A (5 h) | Summer | Model 1 | 2.9274 | 4.2
A (5 h) | Summer | Model 2 | 3.9323 | 5.4412
A (5 h) | Summer | Model 3 | 3.8541 | 5.6159
A (5 h) | Autumn | Model 1 | 5.606 | 7.9728
A (5 h) | Autumn | Model 2 | 5.1419 | 7.4288
A (5 h) | Autumn | Model 3 | 6.0502 | 8.8428
A (5 h) | Winter | Model 1 | 7.3321 | 10.1048
A (5 h) | Winter | Model 2 | 5.11 | 8.0224
A (5 h) | Winter | Model 3 | 13.4422 | 17.3235
Table 9. The detailed evaluation indicators for the large area in 1 h.
Region | Season | Method | MAE | RMSE | Time/Epoch
Large area (1 h) | Spring | Proposed method | 3.9095 | 6.4404 | 47.45
Large area (1 h) | Spring | BiLSTM-CNN (SC) | 12.522 | 23.072 | 435.65
Large area (1 h) | Spring | CNN-LSTM-MHA (SC) | 13.471 | 26.018 | 85.35
Large area (1 h) | Spring | Extrapolation method | 4.5943 | 8.4129 | /
Large area (1 h) | Summer | Proposed method | 2.3484 | 3.571 | 48.75
Large area (1 h) | Summer | BiLSTM-CNN (SC) | 8.1278 | 13.065 | 176.5
Large area (1 h) | Summer | CNN-LSTM-MHA (SC) | 10.667 | 16.421 | 76.55
Large area (1 h) | Summer | Extrapolation method | 3.1873 | 4.401 | /
Large area (1 h) | Autumn | Proposed method | 3.9964 | 6.6362 | 49.35
Large area (1 h) | Autumn | BiLSTM-CNN (SC) | 16.261 | 33.109 | 179.5
Large area (1 h) | Autumn | CNN-LSTM-MHA (SC) | 19.396 | 40.261 | 56.75
Large area (1 h) | Autumn | Extrapolation method | 5.143 | 7.2907 | /
Large area (1 h) | Winter | Proposed method | 5.6831 | 8.4373 | 49.45
Large area (1 h) | Winter | BiLSTM-CNN (SC) | 21.803 | 41.533 | 175.25
Large area (1 h) | Winter | CNN-LSTM-MHA (SC) | 28.859 | 54.121 | 75.55
Large area (1 h) | Winter | Extrapolation method | 6.7251 | 10.859 | /
Table 10. The detailed evaluation indicators for the large area in 3 h.
Region | Season | Method | MAE | RMSE | Time/Epoch
Large area (3 h) | Spring | Proposed method | 3.4164 | 5.7814 | 53.85
Large area (3 h) | Spring | BiLSTM-CNN (SC) | 10.019 | 21.481 | 197.65
Large area (3 h) | Spring | CNN-LSTM-MHA (SC) | 13.522 | 25.076 | 283.55
Large area (3 h) | Spring | Extrapolation method | 4.8875 | 12.911 | /
Large area (3 h) | Summer | Proposed method | 2.638 | 4.0635 | 48.5
Large area (3 h) | Summer | BiLSTM-CNN (SC) | 7.9229 | 12.099 | 645.5
Large area (3 h) | Summer | CNN-LSTM-MHA (SC) | 9.3224 | 14.201 | 153
Large area (3 h) | Summer | Extrapolation method | 3.2867 | 4.9514 | /
Large area (3 h) | Autumn | Proposed method | 5.2617 | 7.3888 | 48.95
Large area (3 h) | Autumn | BiLSTM-CNN (SC) | 12.565 | 21.721 | 565.55
Large area (3 h) | Autumn | CNN-LSTM-MHA (SC) | 18.241 | 37.43 | 83.75
Large area (3 h) | Autumn | Extrapolation method | 8.2376 | 10.322 | /
Large area (3 h) | Winter | Proposed method | 5.0982 | 8.4689 | 48.5
Large area (3 h) | Winter | BiLSTM-CNN (SC) | 9.6462 | 15.383 | 436.5
Large area (3 h) | Winter | CNN-LSTM-MHA (SC) | 29.593 | 50.796 | 79.55
Large area (3 h) | Winter | Extrapolation method | 7.7887 | 11.962 | /
Table 11. The detailed evaluation indicators for the large area in 5 h.
Region | Season | Method | MAE | RMSE | Time/Epoch
Large area (5 h) | Spring | Proposed method | 4.8784 | 6.9078 | 49.85
Large area (5 h) | Spring | BiLSTM-CNN (SC) | 6.1244 | 11.706 | 135.55
Large area (5 h) | Spring | CNN-LSTM-MHA (SC) | 10.050 | 13.800 | 77
Large area (5 h) | Spring | Extrapolation method | 5.5592 | 7.9815 | /
Large area (5 h) | Summer | Proposed method | 2.7627 | 4.0677 | 49.85
Large area (5 h) | Summer | BiLSTM-CNN (SC) | 6.614 | 10.112 | 417.85
Large area (5 h) | Summer | CNN-LSTM-MHA (SC) | 5.074 | 7.6396 | 73.85
Large area (5 h) | Summer | Extrapolation method | 3.8917 | 5.5537 | /
Large area (5 h) | Autumn | Proposed method | 5.8329 | 9.3144 | 101.85
Large area (5 h) | Autumn | BiLSTM-CNN (SC) | 9.9632 | 16.754 | 521.35
Large area (5 h) | Autumn | CNN-LSTM-MHA (SC) | 6.5907 | 9.9885 | 141.45
Large area (5 h) | Autumn | Extrapolation method | 9.4654 | 11.150 | /
Large area (5 h) | Winter | Proposed method | 8.242 | 11.714 | 120.5
Large area (5 h) | Winter | BiLSTM-CNN (SC) | 29.785 | 38.806 | 648.85
Large area (5 h) | Winter | CNN-LSTM-MHA (SC) | 13.605 | 18.122 | 128.95
Large area (5 h) | Winter | Extrapolation method | 23.069 | 62.800 | /
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
