Article

Multisite Long-Term Photovoltaic Forecasting Model Based on VACI

School of Information and Communication Engineering, Hainan University, Haikou 570228, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(14), 2806; https://doi.org/10.3390/electronics13142806
Submission received: 16 June 2024 / Revised: 11 July 2024 / Accepted: 14 July 2024 / Published: 17 July 2024

Abstract

In the field of photovoltaic (PV) power prediction, long-term forecasting is more challenging than short-term forecasting but provides more comprehensive and forward-looking guidance. Significant progress has been made in short-term PV power forecasting, whereas long-term forecasting has received inadequate attention. Multivariate global forecasting across multiple sites and the limited historical time series data available further increase the difficulty of prediction. To address these challenges, we propose a variable–adaptive channel-independent architecture (VACI) and design a deep tree-structured multi-scale gated component, named the DTM block, for this architecture. On this basis, we construct a specific forecasting model called DTMGNet. Unlike purely channel-independent or channel-dependent modeling, the VACI integrates the advantages of both, emphasizing the diversity of the training data and the model's adaptability to different variables across channels. The effectiveness of the DTM block is empirically validated on the real-world solar energy benchmark dataset. On this dataset, DTMGNet achieves state-of-the-art (SOTA) multivariate long-term forecasting performance, with a particularly significant breakthrough in the 720-step ultra-long forecasting window, where it reduces the MSE metric below 0.2 for the first time (from 0.215 to 0.199), a reduction of 7.44%.

1. Introduction

In recent years, the severe issue of global non-renewable energy scarcity [1,2], coupled with long-term plans for carbon neutrality and peak carbon dioxide emissions [3], has made the demand for clean energy generation more urgent than ever [4]. The significant advantages of photovoltaic power generation have garnered special attention and promotion from countries worldwide [5]. Accurate prediction of PV power generation not only provides a basis for grid-connected operation and energy management of PV plants but also serves as a reference for timely detection of equipment anomalies and troubleshooting [6].
However, the key factor influencing PV power, solar irradiance, is affected by a combination of climate, geographical location, and time, exhibiting significant non-stationarity [7]. This is manifested in the continuously changing statistical properties and joint distributions, making accurate prediction extremely challenging. Traditional methods such as autoregressive integrated moving average (ARIMA) [8], exponential smoothing (ES) [9], and Kalman filtering [10] are based on the assumption of time series stationarity or statistical properties to predict future values, and thus are no longer applicable in the current non-stationary environment [11].
Additionally, due to the unique characteristics of electricity and the stringent requirements of power grid management, the need for long-term forecasting of PV power generation is more urgent than for short-term forecasting. Long-term forecasting is crucial for better planning of energy storage and grid-connected power generation [12,13]. This demands higher predictive capability from the models, which must accurately and effectively capture long-term dependencies between inputs and outputs over extended periods [14]. Simultaneously, the challenge is further compounded by the need for multivariate global forecasting across multiple sites and the limited availability of historical time series data [15]. In this context, multivariate time series forecasting models must capture the dependencies among variables or channels to better predict multivariate outputs [16].
To address the aforementioned issues, this paper first proposes an approach from the perspective of improving channel architecture to solve the problem of data scarcity when training highly expressive, large-scale forecasting models and the issue of inter-variable dependencies in multivariate forecasting. Secondly, a deep tree-structured multi-scale gated guidance model framework is constructed. The deep residual connection stacking significantly ensures the model’s representational capacity, allowing for better modeling of both global and local detail dependencies in long-term time series, as well as the mapping information to the predictive outcomes.
In summary, the contributions of this paper are as follows:
  • Proposing an improved model architecture—variable–adaptive channel-independent architecture, which combines the advantages of increased training data volume and high robustness brought by channel independence and the benefits of enhanced representation performance from channel dependence;
  • Presenting a specific scheme for the variable–adaptive channel-independent architecture by introducing a deep tree-structured multi-scale gated component into the backbone of the forecasting model, and subsequently demonstrating its effectiveness through ablation experiments;
  • Developing a forecasting model called DTMGNet based on the deep tree-structured multi-scale gated component, which outperforms existing advanced methods on the solar energy benchmark dataset, achieving SOTA performance.

2. Related Work

2.1. PV Power Forecasting Task

Currently, research on PV power forecasting primarily concentrates on short-term forecasting, with forecast cycles generally ranging from seconds to one hour ahead [17]. These short-term forecasting approaches can be broadly categorized into three main types: physical methods, statistical methods, and deep learning methods [18]. Physical methods typically utilize measurement data and detailed geographical information of the power station to calculate PV power [19,20,21]. However, relying solely on physical modeling to compute PV power exhibits moderate interference resistance and lacks robustness [22]. Traditional statistical methods, such as ETS [23] and ARIMA [8], often demonstrate inadequate performance in capturing complex temporal dynamics [14]. Deep learning-based methods usually employ convolutional neural networks (CNN) [24], long short-term memory networks (LSTM) [25], or hybrid models combining both, along with some signal processing techniques for short-term forecasting [26].
For PV power forecasting tasks, diverse data are typically required for modeling, and correlation analysis needs to be conducted [18]. However, in the current field of multivariate long-term prediction, researchers often use the publicly available “solar energy” dataset [27]. This dataset contains only power generation records from 137 PV facilities, which requires researchers to model 137 variables or channels and produce 137 sets of predictions. The prediction task therefore relies on mining patterns from historical data to make overall forecasts and does not require more detailed, low-level information about the individual PV grid-connected systems. Consequently, this multivariate prediction setting demands the ability to comprehensively capture both the local and global characteristics of each channel from a data-mining perspective, thereby enhancing the overall accuracy and reliability of predictions [15]. This is of great significance for optimizing energy scheduling, improving grid stability, and promoting large-scale integration of renewable energy [18].

2.2. Long-Term Series Forecasting (LTSF)

In recent years, the emergence of transformers has significantly advanced the field of LTSF. Initially, the transformer architecture gained widespread attention in the LTSF domain due to its powerful capability to model long-term dependencies. Models such as Informer [14], Autoformer [28], and FEDformer [29] have modified the native transformer structure to better accommodate time series forecasting tasks.
Recently, models like PatchTST [30] and PETformer [31] have demonstrated that the original transformer architecture can achieve better results through appropriate patching strategies, a technique commonly applied in the computer vision field. Additionally, iTransformer [32] has further advanced the field by treating independent time series as tokens and capturing multivariate correlations through self-attention mechanisms, achieving top-tier performance. Moreover, CNNs, recurrent neural networks (RNNs), and graph neural networks (GNNs) [33] have also shown promising results in LTSF tasks. Notable works include SCINet [34], TimesNet [35], SegRNN [36], and CrossGNN [37].
Of particular interest is the resurgence of methods based on the multi-layer perceptron (MLP), which provide a straightforward and highly effective approach for learning nonlinear temporal dependencies in time series [38]. The recent LTSF-Linear work [39] suggests that transformer-based methods may not be as effective for forecasting as previously thought: their impressive results might rely primarily on single-step forecasting rather than iterative forecasting. Surprisingly, LTSF-Linear, which uses only a single linear layer, outperforms many existing complex architectures. Additionally, MLP-based architectures such as FreTS [40], TiDE [41], TSMixer [42], FITS [43], and TimeMixer [44] have also achieved state-of-the-art results.
In this paper, the aforementioned methods were comprehensively considered, with a particular focus on their experimental performance in long-term forecasting. We have chosen a more direct and efficient MLP architecture as the backbone for our forecasting model. Innovatively, a deep tree-structured multi-scale gated component is introduced to control the flow, effectively integrating it with the backbone to achieve the variable–adaptive channel-independent architecture.

2.3. Multivariate Time Series Forecasting (MTSF)

There are two primary approaches for addressing the task of MTSF: channel-dependent methods and channel-independent methods. The channel-dependent architecture considers all channels simultaneously as input, with the predicted future values depending on the historical values of all channels. In contrast, the channel-independent architecture treats the multivariate sequence as multiple univariate sequences, training a unified model on these sequences where each channel is predicted independently, ignoring the relationships between different channels [45].
Channel-dependent methods exhibit stronger learning capabilities but often lack robustness in accurately predicting time series with distribution shifts. Conversely, channel-independent methods trade off model performance for increased robustness in predictions [15]. Thus, resolving the trade-off between model performance and prediction robustness is crucial in the design of multivariate forecasting models. In this work, we propose a variable–adaptive channel-independent architecture that combines the high robustness advantage of channel-independent methods with the representational performance enhancement brought by channel-dependent methods. This approach emphasizes training robustness and the model’s adaptive capability to different variables across channels.

3. Preliminaries

Multisite long-term forecasting can essentially be defined as multivariate long-term time series forecasting (MLTSF). In this context, multivariate time series forecasting refers to predicting future values $Y = \{y_{L+1}, \ldots, y_{L+H}\} \in \mathbb{R}^{H \times C}$ given historical observations $X = \{x_1, \ldots, x_L\} \in \mathbb{R}^{L \times C}$, where $L$ denotes the length of the historical observation window, $C$ is the number of features or channels (typically far more than 1), and $H$ is the length of the forecasting horizon. Long-term time series forecasting involves using previously observed time series data to predict the sequence values over a longer future period. The primary objective of LTSF is to extend the forecasting horizon $H$, as it provides richer and more forward-looking guidance in practical applications, often referring to predictions for more than 100 future time points, with the forecast cycle generally ranging from a day to one month ahead [14]. However, an extended forecasting horizon $H$ also increases the challenge and complexity of the prediction model.
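To make the notation concrete, the minimal sketch below fixes the tensor shapes involved; the particular values of L, H, and C and the trivial baseline function are purely illustrative and not part of the proposed method.

```python
import torch

# Illustrative shapes only; L, H, and C follow the notation above.
L, H, C = 336, 720, 137          # look-back length, forecasting horizon, number of channels (PV sites)

X = torch.randn(L, C)            # historical observations x_1 ... x_L
Y = torch.randn(H, C)            # future values y_{L+1} ... y_{L+H} to be predicted

# A forecasting model is any mapping f: R^{L x C} -> R^{H x C}.
def naive_forecast(x: torch.Tensor, horizon: int) -> torch.Tensor:
    """Repeat the last observed value; a trivial baseline used only for shape checking."""
    return x[-1:].expand(horizon, -1)

assert naive_forecast(X, H).shape == Y.shape
```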

4. Proposed Method

4.1. Variable–Adaptive Channel-Independent Architecture (VACI)

The core improvement concept of the proposed variable–adaptive channel-independent architecture is to integrate the advantages of both channel independence and channel dependence. This approach aims to simultaneously harness the high robustness brought by channel independence and the enhanced representational performance provided by channel dependence [15].
As illustrated in Figure 1, the VACI emphasizes adaptive perception of the input time sequences in the channel dimension and generates control information to guide the model. This builds on the fact that channel independence focuses solely on the time dimension, yielding richer training data and a more robust model. Adaptive perception is particularly important when there are significant statistical differences between variables.
In terms of handling channel dependence, current channel-dependent methods directly fuse all features for modeling and rely on complex internal mechanisms of the model to select, suppress, or amplify features. In contrast, the channel dependence in the VACI manifests as a more direct and explicit perception of the common and unique patterns among different variables. The perceived feature control information then guides the model, and with the increased training data volume brought by channel independence, the trained model exhibits higher robustness in predicting different variables.
To implement the VACI concept, this work proposes a specific solution: the deep tree-structured multi-scale gated component (DTM block) guides the channel-independent backbone model to address channel dependence issues. This will be elaborated in detail when describing the DTMGNet model in the following sections.

4.2. DTMGNet

As illustrated in Figure 2, DTMGNet comprises two primary data flows. One is the input stream that performs subtraction decomposition operations through residual connections, and the other is the deep multi-scale control stream that adaptively perceives different variables and input features via the DTM block. The foundational framework is based on residual connections, which facilitate the stacking of multiple layers [46]. Throughout this process, the input data pass through several neural blocks capable of extracting and transforming signals. The following subsections describe each component of the architecture in detail.

4.2.1. Instance Normalization

Time series data often exhibit distributional changes between training and testing datasets. Recent studies have shown that employing a simple instance normalization strategy between the model’s input and output can help mitigate this issue [47]. In our work, a normalization strategy is also utilized. Specifically, we perform normalization and denormalization directly when the sequences are processed through the model. This process can be expressed as
$$x_{t-L+1:t} \leftarrow \frac{x_{t-L+1:t} - \mathbb{E}_t\left[x_{t-L+1:t}\right]}{\sigma\left(x_{t-L+1:t}\right)}$$
$$\hat{x}_{t+1:t+H} \leftarrow \hat{x}_{t+1:t+H} \cdot \sigma\left(x_{t-L+1:t}\right) + \mathbb{E}_t\left[x_{t-L+1:t}\right]$$
where $x_{t-L+1:t}$ is the input sequence from time $t-L+1$ to $t$, $\hat{x}_{t+1:t+H}$ is the predicted sequence from time $t+1$ to $t+H$, $\mathbb{E}_t[\cdot]$ denotes the expected value (mean) of the sequence, and $\sigma(\cdot)$ denotes its standard deviation.
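A minimal PyTorch sketch of this normalize/denormalize pair, assuming the usual form in which the model output is rescaled by the input window's standard deviation and shifted back by its mean; the function and variable names are ours and do not come from the released code.

```python
import torch

def instance_normalize(x: torch.Tensor, eps: float = 1e-5):
    """Normalize each channel of the look-back window x (shape [L, C]) per instance."""
    mean = x.mean(dim=0, keepdim=True)        # E_t[x_{t-L+1:t}] per channel
    std = x.std(dim=0, keepdim=True) + eps    # sigma(x_{t-L+1:t}) per channel, eps avoids division by zero
    return (x - mean) / std, mean, std

def instance_denormalize(y_hat: torch.Tensor, mean: torch.Tensor, std: torch.Tensor):
    """Map the model output (shape [H, C]) back to the original scale."""
    return y_hat * std + mean

x = torch.randn(96, 137)                      # look-back window, L = 96, C = 137
x_norm, mu, sigma = instance_normalize(x)
y_hat = torch.randn(720, 137)                 # model output in the normalized space
y = instance_denormalize(y_hat, mu, sigma)    # prediction on the original scale
```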

4.2.2. Backbone

As illustrated on the right side of Figure 2, the initial input $X_{i-1} \in \mathbb{R}^{L \times C}$ is historical data of length $L$. This input is gated with the output of the DTM block before passing through the feed-forward layer, and the feed-forward output is then combined residually with the previous input. The residual connections in the model essentially facilitate progressive learning: with each deeper layer, the model subtracts the previously learned results. This layer-by-layer subtraction enables the model to gradually comprehend the overall envelope and complementary components of the time series, progressing from macro-level concepts to micro-level specifics [11].

4.2.3. DTM Block

The DTM block, a crucial component in the DTMGNet architecture characterized by variable–adaptive channel independence, incorporates two specialized processing configurations: input information handling and a dual-module modeling approach.
Firstly, as illustrated in Figure 3, the input information stream undergoes hierarchical multi-scale processing before being fed into the module. The patching process segments the sequence along the time dimension, with varying scales applied in a tree-like manner to control different depths of DTMGNet. It is noteworthy that in time series forecasting, positional characteristic information diminishes with increasing network depth, and enhancing positional feature information in deeper networks can significantly improve model performance [48]. Therefore, DTM blocks are applied at various depths to control the main input information stream and reinforce positional feature information. Additionally, the multi-scale processing applied at different depths extends the control information from fine local details to broad global information, facilitating the model's progressive learning.
Secondly, to enhance the representational capacity of the DTM block and its variable–adaptive control ability, we employed a unique lateral processing approach alongside the vertical four-level modeling. Specifically, two sets of DTM blocks model the data with shared parameters. As depicted on the left side of Figure 2, after tree-structured multi-scale processing, the DTM block performs layer normalization on the time series, followed by MLP processing and dropout. Finally, gated processing of the linearly transformed raw information and the MLP-processed block control information generates the final control output, which serves as the input stream for the main control information. The overall process of the DTM block is illustrated in Algorithm 1.
Algorithm 1: DTM block
Require: input lookback time series X ∈ R^{L×C}; input length L; the number of channels C; embedding dimension E; the number of blocks N.
01: Xi = StandardScaler(X^T) {Xi ∈ R^{C×L}}
02: ▷ Tree-structured multi-scale processing, for i in {1, ..., N}:
03: XRi = Reshape(Xi) {XRi ∈ R^{C×j×L/j}, j = 2^i}
    X1,i = XRi[:, 1::2, :] {X1,i ∈ R^{C×j×L/2j}, j = 2^i}
    X2,i = XRi[:, ::2, :] {X2,i ∈ R^{C×j×L/2j}, j = 2^i}
04: ▷ Two sets of blocks with shared parameters are adopted:
05: for XBi in {X1,i, X2,i}:
06:   ▷ LayerNorm is adopted to reduce attribute discrepancies.
07:   XNi = LayerNorm(XBi) {XNi ∈ R^{C×j×L/2j}}
08:   ▷ The MLP performs nonlinear transformations along the temporal dimension.
09:   XFi = MLP(XNi) {XFi ∈ R^{C×j×E}}
10:   ▷ Add dropout to the output stream.
11:   XDi = Dropout(XFi) {XDi ∈ R^{C×j×E}}
12:   ▷ Apply the internal gate mechanism.
13:   XGi = G(Linear(XDi), MLP(XDi)) {XGi ∈ R^{C×j×L/2j}}
14: end for
15: Output: O = Reshape(Concat(XG1,i, XG2,i)) {O ∈ R^{C×L}}
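For readers who prefer code, the following sketch renders the tree-structured multi-scale split of steps 02–03 in PyTorch, under our reading of the pseudocode (interleaved patch grouping along the patch axis, with the patch count doubling per depth). The function name, the exact reshape convention, and the resulting shapes are assumptions, not details taken from the released implementation.

```python
import torch

def tree_multiscale_split(x: torch.Tensor, depth: int):
    """Tree-structured multi-scale split, sketched from Algorithm 1 (our reading).

    x: [C, L] channel-independent series; depth i selects the patch count j = 2**i.
    Returns two interleaved groups of patches that the two shared-parameter
    DTM blocks then process separately.
    """
    C, L = x.shape
    j = 2 ** depth                        # number of patches at this depth
    assert L % j == 0, "look-back length must be divisible by 2**depth"
    patches = x.reshape(C, j, L // j)     # [C, j, L/j]
    group_odd = patches[:, 1::2, :]       # every other patch, offset 1
    group_even = patches[:, ::2, :]       # every other patch, offset 0
    return group_odd, group_even

x = torch.randn(137, 192)                 # C = 137 channels, L = 192 look-back
g1, g2 = tree_multiscale_split(x, depth=3)
print(g1.shape, g2.shape)                 # torch.Size([137, 4, 24]) for each group
```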

4.2.4. Gating Mechanism

The gating mechanism is another distinctive feature of DTMGNet. A specialized gating process is applied to the input of each layer, including the deeper layers that are often overlooked. This ensures that the injected control information does not diminish with increasing network depth; instead, it is reinforced in the deeper layers, guiding the model to focus on data diversity and on adaptability to different variables within a shared, channel-independent architecture. The gating mechanism can be described as:
$$G(C, X) = \mathrm{Normal}(C) \odot X,$$
where $\mathrm{Normal}(\cdot)$ represents layer normalization [49] and $\odot$ denotes element-wise multiplication.
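A minimal PyTorch sketch of this gate under our reading: the control stream C is layer-normalized and then combined element-wise with the gated stream X. The class name, the multiplicative combination, and normalization over the time dimension are our assumptions rather than details from the released implementation.

```python
import torch
import torch.nn as nn

class Gate(nn.Module):
    """G(C, X): layer-normalize the control information C, then modulate X with it.

    Assumes an element-wise (multiplicative) combination; an additive variant
    would be a one-line change in forward().
    """
    def __init__(self, length: int):
        super().__init__()
        self.norm = nn.LayerNorm(length)   # normalization over the time dimension

    def forward(self, control: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        return self.norm(control) * x

gate = Gate(length=192)
x = torch.randn(137, 192)    # channel-independent input stream  [C, L]
c = torch.randn(137, 192)    # control stream from the DTM block [C, L]
out = gate(c, x)             # gated stream, still [C, L]
```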

4.2.5. Feed-Forward Layer

As illustrated in Figure 4, the feed-forward layer consists of three linear layers arranged in an encoder–decoder diamond structure. The layers perform complex transformations on the signals that have been enhanced through gating processes. This is particularly crucial for the model to share channels while simultaneously adapting to different temporal dependency patterns and various variables. Additionally, for the final prediction output, a linear layer is connected to the feed-forward layer as the prediction head to achieve the ultimate temporal transformation. It is worth noting that the choice of a linear layer is justified by its proven representational capacity in the LTSF-Linear series and its superior adaptability to time series compared to other base models [39].
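As an illustration of this structure, the sketch below stacks three linear layers that first widen and then narrow the temporal dimension, followed by the linear prediction head. The expansion factor, the GELU activations, and all names are our assumptions; the paper specifies only the three-layer diamond-shaped encoder–decoder and the linear head.

```python
import torch
import torch.nn as nn

class DiamondFeedForward(nn.Module):
    """Encoder-decoder 'diamond' of three linear layers acting on the time dimension.

    The expansion factor and GELU activations are illustrative choices; the paper
    specifies the three-linear-layer structure but not these hyperparameters.
    """
    def __init__(self, length: int, expansion: int = 2):
        super().__init__()
        hidden = expansion * length
        self.net = nn.Sequential(
            nn.Linear(length, hidden),   # encoder: widen
            nn.GELU(),
            nn.Linear(hidden, hidden),   # middle of the diamond
            nn.GELU(),
            nn.Linear(hidden, length),   # decoder: narrow back to L
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: [C, L]
        return self.net(x)

head = nn.Linear(192, 720)                # linear prediction head: L -> H
x = torch.randn(137, 192)
y = head(DiamondFeedForward(192)(x))      # [C, H]
```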
Additionally, to facilitate a comprehensive understanding of DTMGNet’s working principles, we provide detailed pseudocode outlining its implementation, as shown in Algorithm 2.
Algorithm 2: DTMGNet Architecture
Require: input lookback time series X ∈ R^{L×C}; input length L; the number of channels C; predicted length O; embedding dimension E; the number of blocks N.
01: Xi = StandardScaler(X^T) {Xi ∈ R^{C×L}}
02: for i in {1, ..., N}: {Run through the backbone.}
03:   Ci = DTM block(X) {Ci ∈ R^{C×L}}
04:   ▷ Add the gate mechanism to the input stream.
05:   XGi = G(Xi, Ci) {XGi ∈ R^{C×L}}
06:   ▷ The feed-forward layer performs nonlinear transformations.
07:   XFi = FeedForward(XGi) {XFi ∈ R^{C×L}}
08:   ▷ Subtract the previously learned output.
09:   Xi+1 = XFi − Xi {Xi+1 ∈ R^{C×L}}
10: end for
11: ▷ A linear layer is adopted as the prediction head.
12: Oi = Linear(Xi+1) {Oi ∈ R^{C×O}}
13: Output: O = InvertedScaler(Oi^T) {Output the final prediction results O ∈ R^{O×C}}
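To complement Algorithm 2, the following is a compact PyTorch-style transcription of the backbone control flow (gate, feed-forward layer, residual subtraction, linear head), with the DTM block, gate, and feed-forward layer abstracted as callables. It is a sketch under our assumptions, not the released implementation; the placeholder callables in the usage example are purely illustrative.

```python
import torch
import torch.nn as nn

class DTMGNetSketch(nn.Module):
    """Control-flow sketch of Algorithm 2: gate -> feed-forward -> residual subtraction."""
    def __init__(self, lookback: int, horizon: int, n_blocks: int, dtm_blocks, gate, ffn):
        super().__init__()
        self.n_blocks = n_blocks
        self.dtm_blocks = dtm_blocks        # list of callables: [C, L] -> [C, L] control streams
        self.gate = gate                    # callable: (control, x) -> [C, L]
        self.ffn = ffn                      # callable: [C, L] -> [C, L]
        self.head = nn.Linear(lookback, horizon)   # linear prediction head

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: [C, L], already normalized
        raw = x
        for i in range(self.n_blocks):
            control = self.dtm_blocks[i](raw)              # deep multi-scale control stream
            gated = self.gate(control, x)                  # inject control into the input stream
            transformed = self.ffn(gated)                  # nonlinear temporal transformation
            x = transformed - x                            # subtract the previously learned output
        return self.head(x)                                # [C, H]

blocks = [lambda t: t for _ in range(4)]                   # placeholder control streams
model = DTMGNetSketch(192, 720, 4, blocks, lambda c, x: c * x, nn.Linear(192, 192))
print(model(torch.randn(137, 192)).shape)                  # torch.Size([137, 720])
```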

5. Experimental Validation and Analysis

We first validate the performance of DTMGNet on the real-world solar energy benchmark dataset across four prediction windows. The results are then compared with 12 mainstream baseline models. Subsequently, we verify the effectiveness of the DTM block and conduct detailed tests on DTMGNet’s performance with varying depths of stacking.

5.1. Experimental Setup

Dataset: We performed experiments on the well-established benchmark dataset, solar energy [27], which records solar power generation from 137 PV facilities in Alabama in 2006, with data collected every 10 min. Detailed descriptions of the dataset are provided in Table 1, where the forecastability values are calculated as 1 minus the entropy of the time series' Fourier decomposition [50]; higher values indicate better predictability. It can be observed that the forecastability values presented in Table 1 are relatively low, indicating that this benchmark dataset poses significant forecasting challenges.
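For illustration, the sketch below computes a forecastability score as 1 minus the normalized entropy of the power spectrum, which is our reading of the measure in [50]; the exact estimator used to produce Table 1 may differ.

```python
import numpy as np

def forecastability(x: np.ndarray) -> float:
    """1 - normalized entropy of the power spectrum; higher means easier to forecast."""
    spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
    p = spectrum / spectrum.sum()                        # spectral "probability" distribution
    entropy = -np.sum(p * np.log(p + 1e-12))             # small constant guards against log(0)
    return 1.0 - entropy / np.log(len(p))                # normalize entropy to [0, 1]

rng = np.random.default_rng(0)
t = np.arange(1000)
print(forecastability(np.sin(2 * np.pi * t / 144)))      # high: strongly periodic signal
print(forecastability(rng.standard_normal(1000)))        # low (near 0): white noise
```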
Implementation details: Our experiments were performed using the PyTorch framework (v2.2.2) [51] and conducted on a single NVIDIA A100 40 GB GPU. The ADAM optimizer [52] was used to train the model, with the initial learning rate set to 0.0005. An early stopping strategy was employed to minimize the Mean Squared Error (MSE) loss function. The batch size was fixed at 32. The training, validation, and test sets were split following the same scheme as previous works, with a ratio of 7:1:2. The specific sizes are detailed in Table 1 under “Dataset Size”.
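The training configuration above can be summarized in a short sketch (Adam with an initial learning rate of 0.0005, MSE loss, early stopping, batch size 32 set in the dataloaders); the patience value and the loop scaffolding are our assumptions, not values reported in the paper.

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, epochs: int = 50, patience: int = 3):
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)   # initial learning rate 0.0005
    criterion = nn.MSELoss()
    best_val, bad_epochs = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:                               # batch size 32 in the dataloader
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val = sum(criterion(model(x), y).item() for x, y in val_loader) / len(val_loader)
        if val < best_val:
            best_val, bad_epochs = val, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:                           # early stopping
                break
    return best_val
```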
Metrics: Consistent with prior work, mean squared error (MSE) and mean absolute error (MAE) were used as evaluation metrics for long-term forecasting [14].
(1) MSE: The most commonly used metric for regression tasks. It calculates the average squared difference between the predicted values and the true values. The formula is as follows:
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$
(2) MAE: Another widely used metric for regression tasks, MAE calculates the average absolute difference between the predicted values and the true values. The formula is as follows:
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$$
where $n$ denotes the number of samples, $y_i$ denotes the true value of the $i$-th sample, and $\hat{y}_i$ denotes the predicted value of the $i$-th sample.
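Both metrics in code form, matching the formulas above; the small example arrays are purely illustrative.

```python
import numpy as np

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean squared error over all forecast points and channels."""
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error over all forecast points and channels."""
    return float(np.mean(np.abs(y_true - y_pred)))

y_true = np.array([[1.0, 2.0], [3.0, 4.0]])
y_pred = np.array([[1.5, 2.0], [2.0, 4.5]])
print(mse(y_true, y_pred))   # (0.25 + 0 + 1.0 + 0.25) / 4 = 0.375
print(mae(y_true, y_pred))   # (0.5 + 0 + 1.0 + 0.5) / 4 = 0.5
```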
Baselines: We compared DTMGNet against 12 baseline models, including state-of-the-art long-term forecasting models such as TimeMixer (ICLR 2024) [44], iTransformer (ICLR 2024) [32], and PatchTST (ICLR 2023) [30]. Other competitive models included TimesNet [35], Crossformer [53], MICN [54], FiLM [55], DLinear [39], FEDformer [29], Stationary [56], Autoformer [28], and Informer [14].
Among these, Crossformer, PatchTST and iTransformer are recent representative works based on the transformer architecture. Crossformer utilizes channel-dependent methods, while PatchTST and iTransformer employ channel-independent methods. Crossformer utilizes cross-dimensional dependencies for multivariate time series prediction, embedding the input into a two-dimensional vector array through a novel dimensional segment embedding, and then employing a two-stage attention layer to effectively capture dependencies across time and dimensions. PatchTST adopts a channel-independent approach, sharing the same embedding across all sequences, and employs a subsequence-level patch design to enhance its performance in long-term time series prediction tasks. iTransformer treats independent time series as tokens, capturing multivariate correlations through self-attention mechanisms, and utilizes layer normalization and feed-forward network modules to learn better global sequence representations.
TimesNet and MICN are representative works based on CNN architectures, and both utilize channel-dependent methods. TimesNet transforms time series into two-dimensional representations according to periods extracted by the discrete Fourier transform and then processes these 2D representations with convolutional networks for prediction. MICN adopts a multi-scale branch structure to model different latent patterns separately. Each branch uses downsampling convolution and dilated convolution to extract local features and global correlations, capturing an overall view of the time series.
Additionally, DLinear and TimeMixer are representative works based on MLP architectures, and both utilize channel-independent methods. DLinear models the trend and periodic sequences through two groups of single-layer linear layers and ultimately adds the two sequences to form the final prediction. Despite its simple design, it outperforms ordinary transformer-based models. TimeMixer mixes decomposed seasonal and trend components in both fine-to-coarse and coarse-to-fine directions, thereby aggregating microscopic seasonal and macroscopic trend information sequentially. It further introduces multiple predictors to leverage the complementary predictive capabilities of multi-scale sequences.
We tested performance at four prediction lengths T ∈ {96, 192, 336, 720}. For the look-back window size L, we ran three different sizes, L ∈ {96, 192, 336}, for DTMGNet and iTransformer, always selecting the best result, to address the fact that not all models are suited to the same look-back window size. The remaining baseline models used the parameter-search results from the TimeMixer work. This makes the comparative results of DTMGNet more convincing, because DTMGNet demonstrated strong performance without relying on extensive parameter search.

5.2. Experimental Results

Table 2 presents a performance comparison between DTMGNet and the baseline models. DTMGNet achieves its best results with a look-back window of L = 192, and iTransformer with L = 336. It can be observed that DTMGNet achieves SOTA performance in the overall metrics (the Avg indicator), obtaining the best result in 8 of the 10 evaluation metrics.
In specific prediction length configurations, DTMGNet outperforms or matches the current leading model, TimeMixer, as shown in Figure 5a. Notably, for the extended prediction lengths of 336 and 720, DTMGNet is the first to reduce the MSE metric below 0.2. In the ultra-long 720-point prediction, this represents a significant breakthrough (from 0.215 to 0.199), a reduction of 7.44%. Compared to representative transformer-based models such as PatchTST and iTransformer, which exhibit excellent overall performance, DTMGNet consistently demonstrates superior performance across the various prediction length configurations, as shown in Figure 5c. Additionally, compared to representative convolutional models such as TimesNet and MICN, DTMGNet also achieves consistently leading performance across all metrics, as illustrated in Figure 5b.
In summary, the experimental results confirm that the proposed DTMGNet exhibits outstanding predictive performance for different long-term prediction lengths on the solar energy dataset, highlighting its effectiveness in handling multivariate time series forecasting tasks. Simultaneously, the experiments also demonstrate the superiority of our proposed variable–adaptive channel-independent architecture.
Additionally, as illustrated in Figure 6, prediction examples from four prediction windows of three cases show that the predicted values (orange line) align closely with the actual values (blue line) across different multivariate scenarios. Although the model's sensitivity to peak values is not perfectly precise, DTMGNet handles various patterns, particularly trends, very well. This is evident in how the orange prediction line closely follows the overall shape and direction of the blue ground-truth line. The model effectively captures long-term dependencies in the temporal dimension and the mapping between past and future, even over the longer forecast windows, providing highly valuable foresight for long-term forecasting scenarios.

5.3. Ablation Study and Analysis

To validate the effectiveness of the DTM block as a specific implementation of the variable–adaptive channel-independent architecture, we conducted a comprehensive ablation study on this core component of DTMGNet, including component replacement and removal experiments. The “Normal Gate” variant removes the tree-structured multi-scale processing and the grouped modeling of the input information, feeding the raw time series directly into a single-group gating block. The “Without Adaption” variant removes all DTM blocks. The final experimental results are shown in Table 3.
As shown in Figure 7, both component replacement and removal degrade the model's performance. Notably, when the component is replaced with a conventional gating input, the performance is even worse than that of the model without any DTM block. This indicates the significant challenge of adaptive channel processing and, conversely, demonstrates the success and effectiveness of the deep tree-structured multi-scale gated component design.

5.4. Deep Stacking Experiments and Analysis

To evaluate the influence of different deep stacking configurations on DTMGNet, we conducted a series of experiments with varying numbers of stacked layers in the DTMGNet backbone. The experimental results are presented in Table 4.
Compared to iTransformer, a representative transformer-based model that tends to suffer from overfitting as model depth increases [3], DTMGNet effectively mitigates this risk. This is due to its integration of the high robustness of channel-independent architectures with the enhanced representational capacity brought by channel dependency. As shown in Figure 8, as the depth of the model increases, the evaluation metrics improve consistently. This demonstrates that deep stacking in DTMGNet effectively enhances the model's representational capacity and allows deeper designs that yield better performance.

6. Conclusions

In this paper, we propose a variable–adaptive channel-independent architecture that integrates the advantages of channel independence and channel dependence. This architecture addresses the common issues of insufficient training data and multi-channel dependency in long-term multivariate prediction models for solar power plants. A deep tree-structured multi-scale gated component (the DTM block) and a prediction model (DTMGNet) were then designed based on this architecture. DTMGNet includes two data streams: the input stream passing through the residual stacking layers, and the deep multi-scale control stream provided by the DTM block, which adaptively perceives different variables and input features. Ablation experiments demonstrate that deep stacking in DTMGNet effectively enhances model performance, while also validating the effectiveness of the DTM block's design and the success of the variable–adaptive channel-independent architecture.
Furthermore, in comparison with other models on the real-world benchmark dataset, solar energy, DTMGNet achieved SOTA performance. Specifically, when compared to TimeMixer, the current best-performing prediction model, DTMGNet demonstrated average reductions of 1.56% in MSE and 2.46% in MAE across four different prediction window scales. Notably, for the extended 720-step prediction horizon, we observed a substantial 7.44% decrease in MSE. This also demonstrates the superiority of our proposed variable–adaptive channel-independent architecture.
Based on the results, we believe that in the field of multivariate long-term prediction, besides innovations at the module level and in data processing, the success of the variable–adaptive channel-independent architecture demonstrates the importance of innovation at the architectural level. Moreover, we consider that exploring multivariate long-term prediction based on optimizing modules and data processing on top of an innovative architecture is a valuable approach worth emulating.

7. Limitations and Future Directions

Although this study proposes a novel architecture and develops a prediction model that achieves superior performance, our research has primarily focused on improving prediction accuracy. It is noteworthy that in multivariate long-term forecasting, given the large-scale input and output data, the parameter scale of a deeply stacked DTMGNet means that its training and inference times may not be advantageous compared with models such as TimeMixer and iTransformer. Furthermore, constrained by the limitations of existing public datasets, we were unable to conduct more in-depth explorations of data augmentation and online learning.
Therefore, the following three aspects will be essential research priorities for our future continuous optimization of this research, as well as extremely important avenues for further exploration of multivariate long-term forecasting models:
  • Optimizing the model processing architecture to enhance computational efficiency;
  • Exploring the application of data augmentation techniques in multivariate and long-term prediction by mining time series data characteristics;
  • Integrating online learning mechanisms by collecting the latest time series data to continuously optimize the model in real-time.

Author Contributions

Methodology, S.F. and R.C.; software, R.C.; validation, S.F. and M.H.; writing—original draft preparation, S.F. and R.C.; writing—review and editing, M.H., Y.W. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Key R&D Project of Hainan Province (ZDYF2024SHFZ264) and the National Key R&D Project 2020YFB2104403.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank all reviewers for their helpful comments and suggestions regarding this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Petroleum, B. Statistical Review of World Energy 2021. 2020. Available online: https://www.bp.com/content/dam/bp/business-sites/en/global/corporate/pdfs/energy-economics/statistical-review/bp-stats-review-2021-full-report.pdf (accessed on 1 March 2023).
  2. Halder, P.; Paul, N.; Joardder, M.U.; Sarker, M. Energy scarcity and potential of renewable energy in Bangladesh. Renew. Sustain. Energy Rev. 2015, 51, 1636–1649. [Google Scholar] [CrossRef]
  3. Gielen, D.; Gorini, R.; Leme, R.; Prakash, G.; Wagner, N.; Janeiro, L.; Collins, S.; Kadir, M.; Asmelash, E.; Ferroukhi, R. World Energy Transitions Outlook: 1.5 °C Pathway; International Renewable Energy Agency: Abu Dhabi, United Arab Emirates, 2021; Available online: https://www.irena.org/publications/2021/Jun/World-Energy-Transitions-Outlook (accessed on 1 March 2023).
  4. Attanayake, K.; Wickramage, I.; Samarasinghe, U.; Ranmini, Y.; Ehalapitiya, S.; Jayathilaka, R.; Yapa, S. Renewable energy as a solution to climate change: Insights from a comprehensive study across nations. PLoS ONE 2024, 19, e0299807. [Google Scholar] [CrossRef] [PubMed]
  5. Obaideen, K.; Olabi, A.G.; Al Swailmeen, Y.; Shehata, N.; Abdelkareem, M.A.; Alami, A.H.; Rodriguez, C.; Sayed, E.T. Solar energy: Applications, trends analysis, bibliometric analysis and research contribution to sustainable development goals (SDGs). Sustainability 2023, 15, 1418. [Google Scholar] [CrossRef]
  6. Mellit, A.; Kalogirou, S. Artificial intelligence and internet of things to improve efficacy of diagnosis and remote sensing of solar photovoltaic systems: Challenges, recommendations and future directions. Renew. Sustain. Energy Rev. 2021, 143, 110889. [Google Scholar] [CrossRef]
  7. Ekström, J.; Koivisto, M.; Millar, J.; Mellin, I.; Lehtonen, M. A statistical approach for hourly photovoltaic power generation modeling with generation locations without measured data. Sol. Energy 2016, 132, 173–187. [Google Scholar] [CrossRef]
  8. Piccolo, D. A distance measure for classifying ARIMA models. J. Time Ser. Anal. 1990, 11, 153–164. [Google Scholar] [CrossRef]
  9. Gardner, E.S., Jr. Exponential smoothing: The state of the art. J. Forecast. 1985, 4, 1–28. [Google Scholar] [CrossRef]
  10. Li, X.; Wang, K.; Wang, W.; Li, Y. A multiple object tracking method using Kalman filter. In Proceedings of the 2010 IEEE International Conference on Information and Automation, Harbin, China, 20–23 June 2010; pp. 1862–1866. [Google Scholar]
  11. Liang, D.; Zhang, H.; Yuan, D.; Zhang, B.; Zhang, M. Minusformer: Improving Time Series Forecasting by Progressively Learning Residuals. arXiv 2024, arXiv:2402.02332. [Google Scholar]
  12. Zsiborács, H.; Pintér, G.; Vincze, A.; Birkner, Z.; Baranyai, N.H. Grid balancing challenges illustrated by two European examples: Interactions of electric grids, photovoltaic power generation, energy storage and power generation forecasting. Energy Rep. 2021, 7, 3805–3818. [Google Scholar] [CrossRef]
  13. Faraji, J.; Hashemi-Dezaki, H.; Ketabi, A. Multi-year load growth-based optimal planning of grid-connected microgrid considering long-term load demand forecasting: A case study of Tehran, Iran. Sustain. Energy Technol. Assess. 2020, 42, 100827. [Google Scholar] [CrossRef]
  14. Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. Proc. AAAI Conf. Artif. Intell. 2021, 35, 11106–11115. [Google Scholar] [CrossRef]
  15. Han, L.; Ye, H.-J.; Zhan, D.-C. The capacity and robustness trade-off: Revisiting the channel independent strategy for multivariate time series forecasting. IEEE Trans. Knowl. Data Eng. 2024, 1–14. [Google Scholar] [CrossRef]
  16. Cai, W.; Liang, Y.; Liu, X.; Feng, J.; Wu, Y. Msgnet: Learning multi-scale inter-series correlations for multivariate time series forecasting. Proc. AAAI Conf. Artif. Intell. 2024, 38, 11141–11149. [Google Scholar] [CrossRef]
  17. Mohamad Radzi, P.N.L.; Akhter, M.N.; Mekhilef, S.; Mohamed Shah, N.J.S. Review on the application of photovoltaic forecasting using machine learning for very short-to long-term forecasting. Sustainability 2023, 15, 2942. [Google Scholar] [CrossRef]
  18. Wu, Y.-K.; Huang, C.-L.; Phan, Q.-T.; Li, Y.-Y. Completed review of various solar power forecasting techniques considering different viewpoints. Energies 2022, 15, 3320. [Google Scholar] [CrossRef]
  19. Niccolai, A.; Dolara, A.; Ogliari, E. Hybrid PV power forecasting methods: A comparison of different approaches. Energies 2021, 14, 451. [Google Scholar] [CrossRef]
  20. Chapman, L.; Thornes, J.E. The use of geographical information systems in climatology and meteorology. Prog. Phys. Geogr. 2003, 27, 313–330. [Google Scholar] [CrossRef]
  21. Lin, C.; Mao, X.; Qiu, C.; Zou, L.; Sensing, R. DTCNet: Transformer-CNN Distillation for Super-Resolution of Remote Sensing Image. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 11117–11133. [Google Scholar] [CrossRef]
  22. Gaboitaolelwe, J.; Zungeru, A.M.; Yahya, A.; Lebekwe, C.K.; Vinod, D.N.; Salau, A.O. Machine learning based solar photovoltaic power forecasting: A review and comparison. IEEE Access 2023, 11, 40820–40845. [Google Scholar] [CrossRef]
  23. Holt, C.A.; Shobe, W.M. Reprint of: Price and quantity collars for stabilizing emission allowance prices: Laboratory experiments on the EU ETS market stability reserve. J. Environ. Econ. Manag. 2016, 80, 69–86. [Google Scholar] [CrossRef]
  24. Zang, H.; Cheng, L.; Ding, T.; Cheung, K.W.; Liang, Z.; Wei, Z.; Sun, G. Hybrid method for short-term photovoltaic power forecasting based on deep convolutional neural network. IET Gener. Transm. Distrib. 2018, 12, 4557–4567. [Google Scholar] [CrossRef]
  25. Campos, F.D.; Sousa, T.C.; Barbosa, R.S. Short-Term Forecast of Photovoltaic Solar Energy Production Using LSTM. Energies 2024, 17, 2582. [Google Scholar] [CrossRef]
  26. Li, R.; Wang, M.; Li, X.; Qu, J.; Dong, Y. Short-term photovoltaic prediction based on CNN-GRU optimized by improved similar day extraction, decomposition noise reduction and SSA optimization. IET Renew. Power Gener. 2024, 18, 908–928. [Google Scholar] [CrossRef]
  27. Lai, G.; Chang, W.-C.; Yang, Y.; Liu, H. Modeling long-and short-term temporal patterns with deep neural networks. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Ann Arbor, MI, USA, 8–12 July 2018; pp. 95–104. [Google Scholar]
  28. Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Adv. Neural Inf. Process. Syst. 2021, 34, 22419–22430. [Google Scholar]
  29. Zhou, T.; Ma, Z.; Wen, Q.; Wang, X.; Sun, L.; Jin, R. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. In Proceedings of the International Conference on Machine Learning, Baltimore, MD, USA, 17–23 July 2022; pp. 27268–27286. [Google Scholar]
  30. Nie, Y.; Nguyen, N.H.; Sinthong, P.; Kalagnanam, J. A time series is worth 64 words: Long-term forecasting with transformers. arXiv 2022, arXiv:2211.14730. [Google Scholar]
  31. Lin, S.; Lin, W.; Wu, W.; Wang, S.; Wang, Y. Petformer: Long-term time series forecasting via placeholder-enhanced transformer. arXiv 2023, arXiv:2308.04791. [Google Scholar]
  32. Liu, Y.; Hu, T.; Zhang, H.; Wu, H.; Wang, S.; Ma, L.; Long, M. Itransformer: Inverted transformers are effective for time series forecasting. arXiv 2023, arXiv:2310.06625. [Google Scholar]
  33. Lira, H.; Martí, L.; Sanchez-Pi, N. A graph neural network with spatio-temporal attention for multi-sources time series data: An application to frost forecast. Sensors 2022, 22, 1486. [Google Scholar] [CrossRef] [PubMed]
  34. Liu, M.; Zeng, A.; Chen, M.; Xu, Z.; Lai, Q.; Ma, L.; Xu, Q. Scinet: Time series modeling and forecasting with sample convolution and interaction. Adv. Neural Inf. Process. Syst. 2022, 35, 5816–5828. [Google Scholar]
  35. Wu, H.; Hu, T.; Liu, Y.; Zhou, H.; Wang, J.; Long, M. TimesNet: Temporal 2D-variation modeling for general time series analysis. arXiv 2022, arXiv:2210.02186. [Google Scholar]
  36. Lin, S.; Lin, W.; Wu, W.; Zhao, F.; Mo, R.; Zhang, H. Segrnn: Segment recurrent neural network for long-term time series forecasting. arXiv 2023, arXiv:2308.11200. [Google Scholar]
  37. Huang, Q.; Shen, L.; Zhang, R.; Ding, S.; Wang, B.; Zhou, Z.; Wang, Y. Crossgnn: Confronting noisy multivariate time series via cross interaction refinement. Adv. Neural Inf. Process. Syst. 2023, 36, 46885–46902. [Google Scholar]
  38. Shao, Z.; Zhang, Z.; Wang, F.; Wei, W.; Xu, Y. Spatial-temporal identity: A simple yet effective baseline for multivariate time series forecasting. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA, 17–21 October 2022; pp. 4454–4458. [Google Scholar]
  39. Zeng, A.; Chen, M.; Zhang, L.; Xu, Q. Are transformers effective for time series forecasting? Proc. AAAI Conf. Artif. Intell. 2023, 37, 11121–11128. [Google Scholar] [CrossRef]
  40. Yi, K.; Zhang, Q.; Fan, W.; Wang, S.; Wang, P.; He, H.; An, N.; Lian, D.; Cao, L.; Niu, Z. Frequency-domain MLPs are more effective learners in time series forecasting. Adv. Neural Inf. Process. Syst. 2023, 36, 76656–76679. [Google Scholar]
  41. Das, A.; Kong, W.; Leach, A.; Mathur, S.; Sen, R.; Yu, R. Long-term forecasting with tide: Time-series dense encoder. arXiv 2023, arXiv:2304.08424. [Google Scholar]
  42. Ekambaram, V.; Jati, A.; Nguyen, N.; Sinthong, P.; Kalagnanam, J. Tsmixer: Lightweight mlp-mixer model for multivariate time series forecasting. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023; pp. 459–469. [Google Scholar]
  43. Xu, Z.; Zeng, A.; Xu, Q. FITS: Modeling Time Series with 10k Parameters. arXiv 2023, arXiv:2307.03756. [Google Scholar]
  44. Wang, S.; Wu, H.; Shi, X.; Hu, T.; Luo, H.; Ma, L.; Zhang, J.Y.; Zhou, J. Timemixer: Decomposable multiscale mixing for time series forecasting. arXiv 2024, arXiv:2405.14616. [Google Scholar]
  45. Zhao, L.; Shen, Y. Rethinking Channel Dependence for Multivariate Time Series Forecasting: Learning from Leading Indicators. arXiv 2024, arXiv:2401.17548. [Google Scholar]
  46. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  47. Lin, S.; Lin, W.; Wu, W.; Chen, H.; Yang, J. SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters. arXiv 2024, arXiv:2405.00946. [Google Scholar]
  48. Zhang, J.; Wang, J.; Qiang, W.; Xu, F.; Zheng, C.; Sun, F.; Xiong, H. Intriguing Properties of Positional Encoding in Time Series Forecasting. arXiv 2024, arXiv:2404.10337. [Google Scholar]
  49. Ni, R.; Lin, Z.; Wang, S.; Fanti, G. Mixture-of-Linear-Experts for Long-term Time Series Forecasting. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Valencia, Spain, 2–4 May 2024; pp. 4672–4680. [Google Scholar]
  50. Goerg, G. Forecastable component analysis. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 17–19 June 2013; pp. 64–72. [Google Scholar]
  51. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L. Pytorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32, 8026–8037. [Google Scholar]
  52. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  53. Zhang, Y.; Yan, J. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In Proceedings of the Eleventh International Conference on Learning Representations, Kigali, Rwanda, 1–5 May 2023. [Google Scholar]
  54. Wang, H.; Peng, J.; Huang, F.; Wang, J.; Chen, J.; Xiao, Y. Micn: Multi-scale local and global context modeling for long-term series forecasting. In Proceedings of the Eleventh International Conference on Learning Representations, Virtual Event, 25–29 April 2022. [Google Scholar]
  55. Zhou, T.; Ma, Z.; Wen, Q.; Sun, L.; Yao, T.; Yin, W.; Jin, R. Film: Frequency improved legendre memory model for long-term time series forecasting. Adv. Neural Inf. Process. Syst. 2022, 35, 12677–12690. [Google Scholar]
  56. Liu, Y.; Wu, H.; Wang, J.; Long, M. Non-stationary transformers: Exploring the stationarity in time series forecasting. Adv. Neural Inf. Process. Syst. 2022, 35, 9881–9893. [Google Scholar]
Figure 1. CD, CI and VACI architecture.
Figure 2. The framework of DTMGNet.
Figure 3. The schematic diagram of tree-structured multi-scale processing. The horizontal axis labeled “Time” shows processing in the temporal dimension. The data are divided into patches for processing. The vertical axis labeled “Multi-scale” shows different processing scales for inputs of different depths.
Figure 4. The diagram of diamond encoder–decoder structure.
Figure 5. Performance comparison of different models. The radar charts display the performance comparison of different models across 10 different metrics. The concentric circles represent different values, with lower values (better performance) towards the center and higher values (worse performance) towards the edge. ‘XX_YY’ indicates the XX metric under the YY prediction window.
Figure 6. Prediction cases under the input-192 setting. Each case is represented by a column of four graphs, demonstrating the model’s predictive capabilities as the forecast horizon extends (96, 192, 336, 720). The graphs compare the ground truth (blue line) with the model’s predictions (orange line) for various time series patterns.
Figure 7. Performance comparison of three models in ablation experiments. The radar chart displays the performance comparison across 10 different metrics. The concentric circles represent different values, with lower values (better performance) towards the center and higher values (worse performance) towards the edge. ‘XX_YY’ indicates the XX metric under the YY prediction window.
Figure 8. Performance comparison of different depth stacking. The radar chart displays the performance comparison across 10 different metrics. The concentric circles represent different values, with lower values (better performance) towards the center and higher values (worse performance) towards the edge. ‘XX_YY’ indicates the XX metric under the YY prediction window.
Table 1. Dataset detailed descriptions.

| Dataset | Dim | Dataset Size (Train, Val, Test) | Frequency | Forecastability |
|---|---|---|---|---|
| Solar Energy | 137 | (36601, 5161, 10417) | 10 min | 0.33 |
Table 2. Experiment results.

| Model | 96 MSE | 96 MAE | 192 MSE | 192 MAE | 336 MSE | 336 MAE | 720 MSE | 720 MAE | Avg MSE | Avg MAE | 1st Count |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DTMGNet | 0.167 | 0.215 | 0.194 | 0.241 | 0.197 | 0.247 | 0.199 | 0.250 | 0.189 | 0.238 | 8 |
| TimeMixer | 0.167 | 0.220 | 0.187 | 0.249 | 0.200 | 0.258 | 0.215 | 0.250 | 0.192 | 0.244 | 3 |
| iTransformer | 0.193 | 0.243 | 0.217 | 0.269 | 0.227 | 0.280 | 0.239 | 0.298 | 0.219 | 0.272 | 0 |
| PatchTST | 0.224 | 0.278 | 0.253 | 0.298 | 0.273 | 0.306 | 0.272 | 0.308 | 0.256 | 0.298 | 0 |
| TimesNet | 0.219 | 0.314 | 0.231 | 0.322 | 0.246 | 0.337 | 0.280 | 0.363 | 0.244 | 0.334 | 0 |
| Crossformer | 0.181 | 0.240 | 0.196 | 0.252 | 0.216 | 0.243 | 0.220 | 0.256 | 0.204 | 0.248 | 1 |
| MICN | 0.188 | 0.252 | 0.215 | 0.280 | 0.222 | 0.267 | 0.226 | 0.264 | 0.213 | 0.266 | 0 |
| FiLM | 0.320 | 0.339 | 0.360 | 0.362 | 0.398 | 0.375 | 0.399 | 0.368 | 0.369 | 0.361 | 0 |
| DLinear | 0.289 | 0.377 | 0.319 | 0.397 | 0.352 | 0.415 | 0.356 | 0.412 | 0.329 | 0.400 | 0 |
| FEDformer | 0.201 | 0.304 | 0.237 | 0.337 | 0.254 | 0.362 | 0.280 | 0.397 | 0.243 | 0.350 | 0 |
| Stationary | 0.321 | 0.380 | 0.346 | 0.369 | 0.357 | 0.387 | 0.335 | 0.384 | 0.340 | 0.380 | 0 |
| Autoformer | 0.456 | 0.446 | 0.588 | 0.561 | 0.595 | 0.588 | 0.733 | 0.633 | 0.593 | 0.557 | 0 |
| Informer | 0.200 | 0.247 | 0.220 | 0.251 | 0.260 | 0.287 | 0.244 | 0.301 | 0.231 | 0.272 | 0 |
The “1st Count” represents the number of times a model achieved the best result among all metrics. Lower values of MSE and MAE indicate better performance.
Table 3. Results of ablation experiment.

| Model | 96 MSE | 96 MAE | 192 MSE | 192 MAE | 336 MSE | 336 MAE | 720 MSE | 720 MAE | Avg MSE | Avg MAE |
|---|---|---|---|---|---|---|---|---|---|---|
| DTMGNet | 0.167 | 0.215 | 0.194 | 0.241 | 0.197 | 0.247 | 0.199 | 0.250 | 0.189 | 0.238 |
| Normal Gate | 0.181 | 0.227 | 0.197 | 0.245 | 0.203 | 0.252 | 0.202 | 0.252 | 0.196 | 0.244 |
| Without Adaption | 0.180 | 0.231 | 0.193 | 0.246 | 0.200 | 0.252 | 0.200 | 0.253 | 0.193 | 0.245 |
Table 4. Results of Deep Stacking.

| Depth | 96 MSE | 96 MAE | 192 MSE | 192 MAE | 336 MSE | 336 MAE | 720 MSE | 720 MAE | Avg MSE | Avg MAE |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.184 | 0.233 | 0.196 | 0.245 | 0.203 | 0.251 | 0.206 | 0.254 | 0.197 | 0.246 |
| 2 | 0.175 | 0.223 | 0.195 | 0.243 | 0.201 | 0.250 | 0.201 | 0.251 | 0.193 | 0.242 |
| 3 | 0.173 | 0.218 | 0.195 | 0.242 | 0.199 | 0.248 | 0.199 | 0.250 | 0.191 | 0.240 |
| 4 | 0.167 | 0.215 | 0.194 | 0.241 | 0.197 | 0.247 | 0.199 | 0.250 | 0.189 | 0.238 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Feng, S.; Chen, R.; Huang, M.; Wu, Y.; Liu, H. Multisite Long-Term Photovoltaic Forecasting Model Based on VACI. Electronics 2024, 13, 2806. https://doi.org/10.3390/electronics13142806
