Article

GCN-Transformer-Based Spatio-Temporal Load Forecasting for EV Battery Swapping Stations under Differential Couplings

1 School of Electrical Engineering, Northeast Electric Power University, Jilin 132012, China
2 China Mobile Communications Co., Ltd., Beijing 102206, China
3 China Southern Power Grid Yunnan Electric Power Research Institute, Kunming 650217, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(17), 3401; https://doi.org/10.3390/electronics13173401
Submission received: 10 July 2024 / Revised: 23 August 2024 / Accepted: 24 August 2024 / Published: 27 August 2024

Abstract: To address the challenge of power absorption in grids with high renewable energy integration, electric vehicle battery swapping stations (EVBSSs) serve as critically important flexible resources. Current research on load forecasting for EVBSSs primarily employs Transformer models, which increasingly struggle to adapt to the rapid growth in the scale and complexity of EVBSS networks. This paper proposes a novel data-driven forecasting model that combines the geographical feature extraction capability of graph convolutional networks (GCNs) with the multitask learning capability of Transformers. The GCN-Transformer model first leverages Spearman’s rank correlation to create a multinode feature set encompassing date, weather, and historical load data. It then employs data-adaptive graph generation for dynamic spatio-temporal graph construction and graph convolutional layers for spatial aggregation tailored to each node. Unique swapping patterns are identified through node-adaptive parameter learning, while the temporal dynamics of multidimensional features are managed by the Transformer’s components. Numerical results demonstrate enhanced accuracy and efficiency in load forecasting for multiple and widely distributed EVBSSs.

1. Introduction

Electric vehicles (EVs) have gained prominence in urban transportation and logistics due to their lower carbon emissions and reduced operational costs. Governments globally are promoting the shift from traditional diesel vehicles to electric ones in urban freight transport. However, the unregulated integration of a large number of EVs has resulted in increased volatility and unpredictability in regional grid loads, posing potential risks to the grid system and creating new challenges for load forecasting. Precise forecasting of EV station loads is essential for grid load planning and is also a prerequisite for guiding EV charging strategies, which is vital for the strategic planning of charging infrastructure.
Currently, methods for EV station load forecasting are primarily categorized into two approaches: statistical learning and machine learning (ML) techniques. Examples include Monte Carlo simulations, Sigmoid cloud models, and graph WaveNets. Statistical learning typically involves analyzing historical data from EV stations using statistical techniques to forecast future loads. For instance, a prediction model [1] considers variables such as vehicle type, charging duration, and initial state of charge, and achieves high accuracy. This model leverages Monte Carlo simulations to understand user driving patterns and create a scalable demand model for EV stations. However, these statistical methods are often constrained by data quality and sample size, which can limit the accuracy of load predictions [2]. They also assume data follow certain statistical laws, making it difficult to capture nonlinear patterns and complex interactions in EVBSS loads, especially when influenced by unpredictable factors.
As AI technology advances, ML methods are increasingly applied to charging load forecasting, offering solutions in various EV-related fields such as traffic balance and charging management [3]. Traditional forecasting often overlooks the characteristics of transportation networks. To address this, a traffic balance-based simulation method for charging load has been proposed, along with a range prediction method that considers multiple correlated daily scenarios. While previous studies have yielded good results by focusing on individual microgrids, they have not fully accounted for the spatio-temporal coupling between different loads, suggesting there is room for improvement in prediction accuracy. Incorporating spatio-temporal coupling in load prediction can lead to more precise and practical outcomes.
A spatio-temporal prediction method using an expanded causal convolutional 2D neural network inputs the load of each EVBSS as a heatmap to capture the spatio-temporal distribution effectively [4]. This approach explores the spatio-temporal coupling between geographically adjacent EVBSSs, yielding promising results. However, real-world scenarios may involve differentiated coupling relationships that are not solely based on geographic proximity [5]. The model’s limitation in processing sequential Euclidean space data may lead to inaccuracies in predicting the spatio-temporal distribution of loads.
ML approaches can address space-time coupling characteristics but face challenges in accurately modeling complex interactions between multiple stations, especially when geographic and environmental factors influence the predictive model. Integrating and extracting useful features from this information remains a significant research challenge. GCNs offer a novel approach to data representation and feature extraction within the power system [6]. They have been applied to various power system tasks, including wind speed and photovoltaic output prediction. A GCN-based spatio-temporal load prediction method that improves accuracy by sharing a set of GCN parameters across nodes was proposed [7]. However, the load at public EVBSSs is highly influenced by user electricity consumption behavior, which varies by node. Using uniform GCN parameters may fail to capture these differentiated patterns, affecting the effectiveness of spatio-temporal load prediction.
The Transformer model, as initially introduced, has garnered significant acclaim across diverse domains, including natural language processing, image recognition, video analysis, and audio processing. In recent years, attention mechanism-driven Transformer models have demonstrated superior performance over recurrent neural networks (RNNs) in a multitude of tasks. The attention mechanism facilitates the modeling of relationships between input and output sequences irrespective of their sequence distance, transcending the RNN’s limitation of sequential processing [8]. In comparison to RNNs, Transformers exhibit a more flexible architecture, broader correlation scope, and enhanced versatility.
GCNs adeptly manage and encapsulate spatial data within power grids, especially concerning the spatial interdependencies. GCNs model the spatial arrangement and interactions of EVBSSs, enabling the model to discern and exploit the geographic contiguity and network structure among these stations [9]. The Transformer model’s attention mechanism adeptly addresses temporal dependencies across various time points, unencumbered by sequence distance constraints [10]. This capability is pivotal for capturing the long-term temporal dynamics in time series data, crucial for anticipating the periodic and seasonal fluctuations in electricity demand.
The amalgamation of GCNs and Transformers, known as the GCN-Transformer, integrates spatial and temporal data dimensions, offering a holistic load forecasting approach. This methodology surpasses conventional time series or spatial analyses by providing more nuanced predictions. Unlike traditional methods that may be constrained by data quality and volume, the GCN-Transformer capitalizes on deep learning’s proficiency to uncover intricate patterns even from modest datasets. Furthermore, GCNs adeptly process non-Euclidean data types, such as network data, a challenge that often impedes traditional machine learning algorithms.
In Section 2, the factors influencing the load forecasting results are analyzed, and the more significant factors are selected. In Section 3, this article introduces the structure of the proposed model and describes its workflow. To ground the model description, Section 4 presents representative samples from the dataset, lists the model’s relevant parameters and forecasting results, and compares those results with those of several other models.
This paper presents its principal contributions as follows:
(1) An innovative data-driven GCN-Transformer model is proposed, which is particularly effective in managing multitask forecasting challenges arising from the wide distribution of multiple EVBSSs. It is designed to analyze complex scenarios involving a broad spectrum of multinode loads with significant interdependencies. Unlike traditional Transformer methods, which often inadequately account for mutual influences among loads in multinode forecasting scenarios, the GCN-Transformer model seamlessly integrates these considerations, offering a more sophisticated and comprehensive solution.
(2) The graph network structure is further refined using a novel Spearman’s rank correlation coefficient, which integrates multidimensional node features by optimally selecting influential factors. In contrast to traditional graph convolutional neural networks, which focus solely on node information and are constrained by the inherent attributes of the data, the incorporation of the Transformer equips the model with a multitude of reference factors, thereby enhancing its performance in handling time series problems.

2. Influence Factor Analysis

In the context of significant interdependencies among loads at various EVBSSs, short-term load forecasting is influenced by numerous factors, including historical load data, regional characteristics, meteorological conditions, and temporal dynamics. To comprehensively account for the aggregate impact of these elements, historical load data undergo a preprocessing phase. Subsequently, separate quantitative models are developed to represent regional, meteorological, and temporal factors. This methodology ensures a thorough integration of the synergistic effects of these factors, enabling the quantification and incorporation of elements that may not be directly included in the predictive model.

2.1. Historical Load Data

Historical load data play a pivotal role in load forecasting models. Prior to model integration, data cleansing procedures are meticulously executed. Any defective or missing values within the dataset are substituted with the average load value from the preceding day, as referenced in [11]. Subsequently, the data undergo min-max normalization, a transformation that scales the load values to a range of [0, 1]. This scaling technique accelerates the model’s convergence rate. The normalized value x^{*} may be mathematically represented as follows:

x^{*} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}  (1)

where x_{\max} is the maximum value in the sample data and x_{\min} is the minimum value in the sample data.
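As a concrete illustration, the following Python sketch implements both the missing-value rule and Equation (1); it assumes a pandas DataFrame with an hourly DatetimeIndex and a "load" column, which are illustrative choices rather than details taken from the paper.

```python
import pandas as pd

def preprocess_load(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch: df has an hourly DatetimeIndex and a 'load' column (assumed names)."""
    df = df.copy()
    # Mean load of the preceding day, broadcast back to hourly timestamps.
    daily_mean = df["load"].resample("D").mean().shift(1)
    fill = daily_mean.reindex(df.index, method="ffill")
    df["load"] = df["load"].fillna(fill)
    # Min-max normalization to [0, 1], as in Equation (1).
    x_min, x_max = df["load"].min(), df["load"].max()
    df["load_norm"] = (df["load"] - x_min) / (x_max - x_min)
    return df
```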

2.2. Regional Factors

Electricity usage patterns vary significantly across different regions, influencing the load dynamics at EVBSSs [12]. Given the diverse topological connections among EVBSSs, the resulting graph structure is irregular, classifying it as a non-Euclidean entity. To capture these regional topological nuances, an adjacency matrix is employed to quantify the load relationships of EVBSSs. In the realm of graph theory, the adjacency matrix delineates the connectivity between nodes within a graph, disregarding the spatial distances between them. The adjacency matrix, denoted as A, can be mathematically formulated as follows:
A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \in \mathbb{R}^{n \times n}  (2)

where a_{11}, \ldots, a_{nn} are the connection relationships between nodes: if two nodes are connected, the corresponding element is 1; otherwise, it is 0.
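A minimal sketch of how Equation (2) can be assembled in code; the edge list here is a hypothetical toy example, whereas in the paper the connections come from the actual topology of the EVBSS network.

```python
import numpy as np

def build_adjacency(n_nodes: int, edges: list) -> np.ndarray:
    """Build the 0-1 adjacency matrix A of Equation (2) from an edge list."""
    A = np.zeros((n_nodes, n_nodes), dtype=np.float32)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0  # symmetric: connections are undirected
    return A

A = build_adjacency(4, [(0, 1), (1, 2), (2, 3)])  # a hypothetical 4-station chain
```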

2.3. Meteorological Factors

The aggregate load of multiple public EVBSSs is significantly influenced by meteorological factors and exhibits considerable variability. It is essential to examine the correlation between the multivariate coupling load and meteorological factors to judiciously select the input variables for short-term multivariate load forecasting models.
This study employs the Spearman rank correlation coefficient to assess the relationship between the coupling load of multiple public EVBSSs and meteorological factors. Influential factors for multivariate load forecasting are identified based on the values of the correlation coefficients.
This research utilizes a public dataset from a city in Northeast China, encompassing load data from public EVBSSs for the first quarter of 2022, sampled at 60 min intervals. Concurrent local meteorological data, including temperature and humidity, are also collected. The Spearman rank correlation coefficient is applied to measure the correlation between the multivariate load of the public EVBSSs and meteorological data. A visual representation of the Spearman rank correlation coefficient calculation is depicted in Figure 1, where a higher absolute value signifies a stronger correlation.
The correlation among multivariate loads at EVBSSs is subject to variation, and the extent of correlation with meteorological factors is not uniform. Consequently, when formulating models for multivariate load forecasting at these stations, it is imperative to account for the distinct coupling relationships among different load types. In this paper, meteorological factors satisfying |\rho_s| \geq 0.4 are chosen as input features for model training.
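The screening step can be expressed compactly with SciPy’s spearmanr; the variable names and dictionary layout below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def select_weather_features(load: np.ndarray, weather: dict, threshold: float = 0.4) -> dict:
    """Keep meteorological series whose Spearman correlation with the load
    satisfies |rho_s| >= threshold. `weather` maps name -> 1-D array."""
    keep = {}
    for name, series in weather.items():
        rho, _ = spearmanr(load, series)
        if abs(rho) >= threshold:
            keep[name] = rho
    return keep
```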

2.4. Date Factors

Date-related factors significantly influence the short-term load forecasting of EVBSSs. In China, where the electrical load is predominantly industrial, there is a noticeable decrease in load during holidays as opposed to working days. The date factors primarily encompass working days, holidays, and seasonal variations, the latter of which was discussed in a previous section. This study employs binary (0–1) variables to represent the presence or absence of working days and holidays within the date factors.
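A hedged sketch of the 0-1 date encoding follows; the holiday set shown is a small hypothetical placeholder for the actual Chinese public holiday calendar used in the study.

```python
import pandas as pd

HOLIDAYS = {"2022-01-01", "2022-01-31", "2022-02-01"}  # hypothetical subset

def date_features(index: pd.DatetimeIndex) -> pd.DataFrame:
    """Binary working-day and holiday flags for each timestamp."""
    is_holiday = index.strftime("%Y-%m-%d").isin(HOLIDAYS)
    is_working = (index.dayofweek < 5) & ~is_holiday  # Mon-Fri, non-holiday
    return pd.DataFrame(
        {"working_day": is_working.astype(int), "holiday": is_holiday.astype(int)},
        index=index,
    )
```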
To encapsulate, the determinants of short-term load forecasting for EVBSSs comprise regional, meteorological, and date-related factors. A detailed representation of these factors is illustrated in Figure 2.

3. GCN-Transformer Multiswap Station Load Joint Forecasting Model

The battery swap station load forecasting model introduced in this paper initially harnesses the power of GCNs to manage spatial dependencies. GCNs enable the model to capture the structural nuances within the swap station network, including physical linkages and functional relationships between locations, thereby adeptly learning spatial characteristics inherent in load data. Following this, the Transformer module processes time series data, incorporating historical load information, meteorological influences, date-related factors, and regional specifics. The self-attention mechanism within the Transformer is adept at identifying dependencies across various time points, assisting the model in discerning trends and variations in load patterns over time.
By integrating these methodologies, the GCN-Transformer model adeptly manages and synthesizes intricate spatial and temporal dynamics, providing a precise multidimensional analysis for load prediction at battery swap stations. This approach empowers the model to not only discern individual station load patterns but also understand the interplay among different stations, thereby bolstering the forecast’s precision and dependability. The structure of the model is shown in Figure 3. Initially, multisource data are extracted from historical load data, meteorological factors, date factors, and regional factors, which are represented through a node feature vector. In the graph adaptive generation module, a spatio-temporal graph is constructed based on these input data, and the structure of the graph is dynamically adjusted via node-adaptive parameter learning to better capture complex spatio-temporal dependencies. Subsequently, the GCN is utilized to perform convolution operations on the graph structure to extract internode feature relationships. These features are further processed by the Transformer model to capture long-range dependencies and enhance prediction accuracy. The model outputs are used to perform both short-term and long-term load forecasting tasks, with the model’s performance evaluated by calculating the mean absolute percentage error (MAPE) and forecasting accuracy (FA). This framework effectively combines multisource data with spatio-temporal information, improving the precision of load forecasting.

3.1. GCN Model

GCNs, a class of convolutional neural networks tailored for graph data, were pioneered by Bruna et al. in 2013. In the realm of battery swap station load forecasting, the intricate interactions and linkages between stations form a complex, non-Euclidean topology, as characterized by graph theory and depicted in Figure 4. The graph convolutional framework of GCNs presents a potent methodology for distilling spatial characteristics from these non-Euclidean topologies [13]. This capability is especially crucial for gaining insights into and examining the load distribution and fluctuations across the network of swap stations. The architectural composition of GCNs, which includes input, hidden, and output layers, facilitates an exhaustive examination and application of these spatial dynamics, thereby providing robust analytical support for the precise prediction of loads at battery swap stations [14].
Taking the node feature vector output y(x) and the spatial relationships between nodes as the input of the GCN layer, as shown in Figure 5, the output z(x) is

z(x) = f[y(x), A]  (3)

where A is the adjacency matrix, whose elements represent the topological connections between nodes (0 for no connection, 1 for a connection), and f is the processing function of the GCN, whose forward-propagation formula is

H^{(l+1)} = \sigma\left(\tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}\,H^{(l)} W^{(l)}\right)  (4)

where I is the identity matrix and \tilde{A} = A + I; \tilde{D} is the diagonal degree matrix with \tilde{D}_{ii} = \sum_{j} \tilde{A}_{ij}; H^{(l)} is the output of the l-th layer; W^{(l)} is the trainable parameter matrix of the l-th layer; and \sigma is the activation function.
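The following PyTorch layer is a minimal sketch of Equation (4); it is an illustrative implementation with ReLU standing in for \sigma, not the authors’ exact code.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution following Equation (4)."""
    def __init__(self, in_feats: int, out_feats: int):
        super().__init__()
        self.W = nn.Linear(in_feats, out_feats, bias=False)

    @staticmethod
    def normalize(A: torch.Tensor) -> torch.Tensor:
        A_tilde = A + torch.eye(A.size(0))           # A~ = A + I
        d = A_tilde.sum(dim=1)                       # D~_ii = sum_j A~_ij
        d_inv_sqrt = torch.diag(d.pow(-0.5))
        return d_inv_sqrt @ A_tilde @ d_inv_sqrt     # D~^(-1/2) A~ D~^(-1/2)

    def forward(self, H: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # H: (nodes, in_feats); returns sigma(normalized aggregation x weights)
        return torch.relu(self.normalize(A) @ self.W(H))
```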

3.2. Transformer Model

The Transformer architecture is a neural network framework that capitalizes on the self-attention mechanism, rendering it exceptionally well suited for sequence data processing. Unlike traditional recurrent neural networks (RNNs) or long short-term memory (LSTM) models, the Transformer architecture parallelizes the processing of input sequences, thereby more effectively capturing overarching relationships and dependencies [15]. This model is composed of an encoder and a decoder, with the encoder being the focal component in the context of electric vehicle battery swap station load forecasting analysis [16]. The Transformer’s function within this framework is to distill global patterns and trends from time series data, enhancing the comprehension of correlations across various time steps. It operates at a higher level of abstraction relative to time series data, adeptly capturing broader patterns and trends, thereby accelerating the training process, offering multiscale insights, stabilizing the optimization procedure, and refining hyperparameter tuning.
The attention mechanism is an algorithmic approach that emulates the characteristics of human attention. When an individual initially views a painting, they perceive the overall composition; however, upon closer inspection, their focus narrows to particular areas of interest. This selective concentration is akin to the attention mechanism’s operation, where the processing is not evenly distributed across all inputs but is instead allocated preferentially to salient features. By assigning unique weight parameters to each element within the input, the mechanism can dynamically prioritize relevant information and concurrently filter out nonessential details. The prevalent attention functions are additive attention and dot-product attention, the latter of which is also known as multiplicative attention. The dot-product attention, which our algorithm employs, offers practical advantages such as increased computational speed and space efficiency due to its implementation through highly optimized matrix multiplication operations.
The dot-product attention function is designed to map a query along with associated key-value pairs to an output matrix. In this framework, the query, keys, and values are all represented as matrices. The architecture of the dot-product attention as applied in this study is depicted in Figure 6, and the computational procedure is detailed in Equations (5)–(8):
Q = W_q X_{in}, \quad K = W_k X_{in}, \quad V = W_v X_{in}  (5)

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{d_k}}\right) V  (6)

\mathrm{head}_i = \mathrm{Attention}(Q W_i^{Q},\, K W_i^{K},\, V W_i^{V})  (7)

\mathrm{Multihead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_h)\, W^{O}  (8)

where the projections are parameter matrices W_i^{Q} \in \mathbb{R}^{d_{model} \times d_Q}, W_i^{K} \in \mathbb{R}^{d_{model} \times d_K}, W_i^{V} \in \mathbb{R}^{d_{model} \times d_V}, and W^{O} \in \mathbb{R}^{h d_V \times d_{model}}.
In the encoding layer’s multihead attention module as well as the first multihead attention module of the decoding layer, all keys, values, and queries originate from the same source, signifying the operation of self-attention within these modules. However, in the decoding layer’s second multihead attention module, the queries are derived from the preceding decoder layer, while the memory keys and values are obtained from the encoder’s output. This configuration indicates that the attention module serves as a cross-attention mechanism, integrating the multifaceted coupling interactions extracted by the encoder with the distinct load characteristic information also mined by the encoder.
The multihead attention mechanism is an innovative approach that enhances the traditional attention mechanism by dividing it into multiple heads. Each attention head focuses on different positions of the input sequence to capture different aspects of the information. This parallel processing allows the model to jointly attend to information from different representational subspaces at different positions, which in turn enriches the representational power of the model.
As shown in Figure 7, in practice, the input is split into multiple heads, each processing a subset of the input data. The outputs from these heads are then combined to form a single output. This technique is particularly useful in natural language processing tasks.
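For reference, the dot-product attention of Equation (6) can be written in a few lines of PyTorch, and the multihead form of Equations (7) and (8) is available as a built-in module; the tensor sizes below are toy values, not the paper’s configuration.

```python
import math
import torch

def dot_product_attention(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
    """Equation (6): softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ V

# Multihead attention (Equations (7)-(8)) as provided by PyTorch.
mha = torch.nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
x = torch.randn(2, 24, 64)        # (batch, time steps, model dimension), toy sizes
out, attn_weights = mha(x, x, x)  # self-attention: Q = K = V = x
```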
The original Transformer model, as introduced by [17], was adeptly enhanced to accommodate multivariate inputs, thereby enabling the integration of supplementary data such as meteorological information. This refinement was executed by adjusting the input dimension of the model’s initial linear layer, equipping the model to handle inputs of varying dimensions and underscoring the adaptability inherent in its design. The Transformer model outlined in this paper is tailored specifically for load forecasting at EV stations; it reduces memory consumption, permitting the processing of extended sequences and facilitating predictions informed by an expanded historical context. Figure 8 illustrates the comprehensive architecture of the model.
This figure illustrates a typical Transformer-based model architecture used for time series forecasting, comprising an encoder, decoder, and forecasting output layer. The encoder component first maps the input time series data to a specific dimension through a linear layer and incorporates positional encoding to embed sequential information, addressing the Transformer’s inherent lack of sequence awareness. The encoder layer consists of multiple repeated units, each including a multihead [18] sparse attention mechanism to focus on critical information within the time series, and a feedforward neural network block to enhance the model’s representational capacity. The decoder component takes the output from the encoder and prior predictions as input, using multihead self-attention mechanisms to handle dependencies between time steps while also leveraging multihead attention to compare the encoder’s output with the decoder’s input features, thereby capturing more complex time series patterns. Finally, the decoder’s output passes through a series of linear transformations, including nonlinear activations, to generate the forecasting output. This architecture, through its encoder–decoder design, effectively captures complex patterns and long-range dependencies within time series data, making it suitable for multivariate time series forecasting tasks.
Let L denote the length of the input sequence, d represent the embedding dimension of the Transformer, N indicate the length of the output sequence, and M signify the internal intermediate length utilized by the decoder’s initial sequence generator.
The encoder is structured with a series of encoder layers, each consisting of a multihead self-attention mechanism and a feedforward network, as depicted in Figure 9. The direct output of the encoder, known as the encoded memory, mirrors the shape of the input to the Transformer. These memory values are subsequently refined by the decoder to produce the forecasted output.
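To make the data flow concrete, the following self-contained PyTorch sketch wires spatial aggregation over a pre-normalized adjacency matrix into a Transformer encoder with a linear forecasting head. All sizes, the head count, the single graph-convolution step, and the encoder-only design are illustrative assumptions; the paper’s full model additionally includes adaptive graph generation and a decoder.

```python
import torch
import torch.nn as nn

class GCNTransformerSketch(nn.Module):
    """Illustrative pipeline: spatial aggregation, then temporal attention."""
    def __init__(self, in_feats: int, d_model: int = 64, horizon: int = 24):
        super().__init__()
        self.proj = nn.Linear(in_feats, d_model)
        enc = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        self.head = nn.Linear(d_model, horizon)

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, nodes, feats); a_hat: (nodes, nodes), pre-normalized.
        h = torch.relu(self.proj(torch.einsum("ij,btjf->btif", a_hat, x)))
        b, t, n, d = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b * n, t, d)  # one sequence per node
        h = self.encoder(h)                              # temporal dependencies
        return self.head(h[:, -1]).reshape(b, n, -1)     # per-node forecast

model = GCNTransformerSketch(in_feats=8)
x = torch.randn(4, 168, 46, 8)   # a week of hourly features for 46 stations
a_hat = torch.eye(46)            # placeholder normalized adjacency
y = model(x, a_hat)              # output shape: (4, 46, 24)
```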

3.3. GCN-Transformer Model Evaluation Index

To evaluate the prediction performance, the mean absolute percentage error y_{MAPE} and the forecasting accuracy y_{FA} are adopted as load forecasting indices for the battery swap stations, expressed as

y_{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{X_{act}(i) - X_{pred}(i)}{X_{act}(i)} \right| \times 100\%  (9)

y_{FA} = \left( 1 - \frac{\left| X_{act}(i) - X_{pred}(i) \right|}{X_{act}(i)} \right) \times 100\%  (10)

where n is the total number of predictions and X_{act}(i) and X_{pred}(i) are the true value and the forecasted value of the load at time i, respectively.
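Both indices translate directly into code; in the sketch below, y_FA is averaged over all time points to yield a single scalar, which is an assumption about how the tabulated values are aggregated.

```python
import numpy as np

def y_mape(x_act: np.ndarray, x_pred: np.ndarray) -> float:
    """Equation (9): mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((x_act - x_pred) / x_act)) * 100.0)

def y_fa(x_act: np.ndarray, x_pred: np.ndarray) -> float:
    """Equation (10), averaged over time points (an aggregation assumption)."""
    return float(np.mean(1.0 - np.abs(x_act - x_pred) / x_act) * 100.0)
```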

4. Case Study

In this section, the findings of the model are detailed. Section 4.1 sets the stage by describing the experimental environment and the meticulously selected parameter configurations. Section 4.2 then delves into the specifics of the experimental dataset utilized. The subsequent Section 4.3 presents the experimental results, offering a clear view of the model’s performance. Finally, Section 4.4 concludes with an insightful discussion on the model’s effectiveness as proposed in this study.

4.1. Experimental Environment

The hardware platform was a Tencent Cloud GN8 GPU computing server (six cores, 56 GB of RAM, 5 Mbps bandwidth); the model was programmed in Python and implemented with the PyTorch library.
It is necessary to select network parameters such as activation functions and set appropriate iteration times and learning rates. Specific network parameter settings are shown in Table 1.
The activation function is ReLU. The optimizer is Adam, and the number of iterations is 1500; the learning rate is set to 0.001, and the batch training size is 64 to keep the total training time within a reasonable range.
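The following hedged sketch shows how this configuration maps onto a PyTorch training loop; `model` and `train_loader` are assumed to be defined elsewhere, and the MSE loss is an illustrative choice, as the paper does not state its loss function.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, lr from Table 1
criterion = torch.nn.MSELoss()  # loss choice is an assumption

for epoch in range(1500):                         # maximum epoch
    for x_batch, a_hat, y_batch in train_loader:  # batch size 64
        optimizer.zero_grad()
        loss = criterion(model(x_batch, a_hat), y_batch)
        loss.backward()
        optimizer.step()
```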

4.2. Experimental Datasets

The model underwent training utilizing a comprehensive set of hourly electricity consumption data from battery swap stations located in a city within Northeast China [19]. The data cover the span from 1 March 2022, to 1 March 2023, capturing a full year’s worth of hourly readings. Concurrently, the training process integrated actual meteorological data sourced from the city’s weather stations over the identical timeframe. The rationale behind opting for a year-long dataset lies in the pronounced climatic variations characteristic of Northeast China. A one-year time span ensures the model accounts for the full spectrum of load changes from spring through to winter, enhancing the robustness of the training process.
Referencing Figure 10, the map accurately portrays the geographical coordinates (latitude and longitude) of the battery swap stations, arranged according to the spatial relationships delineated in the figure. For the experiment, historical load data were sourced from forty-six uniquely positioned battery swap stations scattered throughout the city, providing a comprehensive dataset for analysis.
In the experimental setup, the annual dataset was segmented into 8760 discrete data points, each representing an hourly interval. The load curve occasionally shows intervals of zero load values, which can be attributed to the maintenance schedule of the station itself.
To illustrate the magnitude of load fluctuations, Figure 11 presents the load curve of a specific battery swap station. The graph indicates that the highest load peaks are experienced during the winter months, characterized by substantial and pronounced fluctuations.
Figure 12 graphically represents the temperature trends for two distinct months in the city. The line chart delineates the daily temperature extremes, with yellow markers indicating the lowest temperatures and blue markers signifying the highest temperatures of each day.
Observations from Figure 12 reveal substantial fluctuations in temperature throughout the two consecutive months. This variability underscores the rationale for incorporating annual temperature data as a critical influencing factor in the model.

4.3. Experimental Results and Analysis

As shown in Figure 13, the GCN-Transformer model is effective in forecasting the power load for multiple electric vehicle battery swapping stations (EVBSSs). The horizontal axis represents the unique identifiers for each station, while the vertical axis displays a comparative analysis between the predicted and actual power values, measured in kilowatts. The solid line, labeled “Prediction”, shows the power estimates provided by the model, and the dashed line, labeled “Actual”, corresponds to the empirically observed power levels. The close alignment of the predicted power trajectory with the actual measurements highlights the model’s accuracy in load forecasting across the swap stations. This level of precision is of great significance for grid operators, offering a strategic asset for optimizing energy distribution and operational efficiency. It is especially critical during periods of peak demand, where it aids in balancing energy supply and demand and in refining grid load management strategies.
The model, after incorporating factors such as geographical location, weather, and time, produced a multitude of outcomes; due to their extensiveness, it is impractical to enumerate each one. Instead, Figure 13 presents a comparative analysis of the forecasted and actual load values for 46 battery swap stations at a particular moment. Given the large number of stations, a subset of 3 was randomly selected from the 46 for a detailed evaluation of the forecast results. To further demonstrate the reliability of the model, other models were also applied to the same load forecasting task. The specific numerical data for this evaluation are provided in Table 2.

4.4. GCN-Transformer Effectiveness Analysis

This section aims to illustrate the effectiveness of the proposed GCN-Transformer with respect to feature dimensionality and training time. A comparative analysis of the performance of the GCN-Transformer model is presented alongside several prevalent models, namely, Transformer, RNN, TCN-LSTM, and CNN-LSTM, as detailed in Table 3.
The RNN model, which solely incorporates historical load data, has a feature dimension of 1, resulting in a relatively brief training period. Conversely, the CNN-LSTM and TCN-LSTM models analyze data that encompass temporal indicators, such as date information, as well as meteorological variables. The date factors are categorized into weekdays and holidays, while the meteorological factors consist of temperature, dew point, humidity, and air pressure. Consequently, these models have a feature dimension of 7. The proposed GCN-Transformer model also takes into account the spatial interdependencies between the current region and others, elevating the feature dimension to 8. In this comparative experiment, we employed the early stopping method to set the model iterations.
The training criteria for each model listed in the table are determined by the early stopping mechanism on the validation set. Specifically, the model training is halted when the loss on the validation set no longer decreases over several consecutive training rounds. This method helps prevent overfitting and ensures the robustness of the model’s performance. When discussing the comparisons in the table, we did indeed take into account the training time of different models. Since all models were trained on the same dataset and used the same early stopping mechanism, these comparisons are reasonable and valid. Furthermore, in our experimental design, we also ensured that the hyperparameters of all models (such as learning rate, batch size, etc.) were kept as consistent as possible to minimize the impact of these factors on training time.
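A minimal sketch of this early stopping rule, assuming helper functions `train_one_epoch` and `evaluate` and an illustrative patience of 10 epochs:

```python
# Stop when the validation loss has not improved for `patience` consecutive epochs.
best_val, patience, wait = float("inf"), 10, 0
for epoch in range(1500):
    train_one_epoch(model, train_loader, optimizer)  # assumed helper
    val_loss = evaluate(model, val_loader)           # assumed helper
    if val_loss < best_val:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            break  # validation loss stopped decreasing
```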
The RNN network, characterized by a lower feature dimension, is associated with a shorter training time. This advantage stems from its reliance on historical load data alone. In contrast, the hybrid models, CNN-LSTM [20] and TCN-LSTM, process time series data enriched with additional temporal and meteorological features, leading to higher feature dimensions. Consequently, these models demand a longer training time when compared to the RNN. Nonetheless, the model introduced in this paper, despite incorporating graph information and expanding the feature dimension to eight, did not encounter a substantial increase in training time. This outcome highlights the efficiency of the proposed model in handling complex data structures while maintaining a manageable training duration.
To ascertain the efficacy of the proposed model in enhancing load forecasting accuracy, predicated on the direct spatial connectivity between disparate regions, a comparative analysis was conducted with the single-task load forecasting model, namely, the Transformer. The comparative outcomes, reflected in terms of the mean absolute percentage error (MAPE) and prediction accuracy, are presented in Table 4, showcasing the performance of both the proposed model and the Transformer on a test dataset.
In addition to the Transformer, the proposed model was also compared with the aforementioned RNN, TCN-LSTM, and CNN-LSTM models. Since these three models performed noticeably worse than the Transformer, they are reported together in the comparison with the proposed model.
Beyond the aforementioned Transformer model, a comparative study was conducted to showcase the predictive performance of various established models on the charging station load forecasting task. The comparative results are presented in Figure 14 and Figure 15. To ensure the generalizability of the findings, the station load values at a randomly selected moment from the load time curves of the 46 charging stations were used for an overall indicator comparison, with the detailed accuracy data delineated in Table 5. The findings indicate that when the scale of the charging station network escalates to incorporate complex multinode interactions, the models currently in extensive use exhibit notable deficiencies.
As shown in Figure 14 and Figure 15, the traditional RNN-based forecasting method studies only one characteristic, yielding a prediction accuracy of only 87.98%, which indicates poor model performance. When a TCN or CNN is hybridized with LSTM, accuracy improves to varying degrees, though the gains differ between the two hybrids; despite these improvements, the accuracy still falls short of requirements. The model proposed in this article, the GCN-Transformer, further learns spatial features effectively, achieving a forecast accuracy of 95.27%.
In the predicted 24 h load, when predicting multiple battery swap stations simultaneously, the GCN-Transformer model, which incorporates various factors such as meteorological conditions and geographical locations for the forecasting task, demonstrated superior performance. The model in this paper has higher accuracy in the load prediction of EVBSSs.

5. Conclusions

As power systems expand, the scope of the EVBSS grows accordingly, surpassing the capabilities of traditional single-task load forecasting models. This paper proposes an innovative GCN-Transformer model that excels in load forecasting for EVBSSs characterized by extensive multinode structures and diverse topologies. The model overcomes the limitations of existing graph convolutional neural networks in handling time series data and addresses the Transformer’s shortcoming in accounting for the intricate interdependencies within multitask load forecasting. By integrating a comprehensive range of data—including geographical coordinates, meteorological details, and load profiles from station nodes—the model is rigorously trained to forecast loads with a holistic perspective that considers both weather and locational influences. Furthermore, the strategic application of Spearman’s rank correlation coefficient is used to curate and prioritize influential weather factors, thereby enhancing the model’s predictive accuracy.
Head-to-head comparisons with existing models in the load forecasting domain demonstrated the superior precision of the proposed model. Case studies further validated this, highlighting the model’s exceptional accuracy in forecasting loads for an ensemble of stations, even amid fluctuating weather conditions and geographical variations. The model’s outstanding performance attests to its effectiveness in addressing the multifaceted challenges of multitask forecasting within expansive, multinode, and multitopological systems.

Author Contributions

Investigation and methodology, X.H. (Xuehao He) and X.H. (Xiao Hu); validation, writing, review, and editing, X.H. (Xiao Hu), Z.Z., J.Y. (Jinduo Yang), J.Y. (Jiaquan Yang), and S.L.; supervision, X.H. (Xiao Hu) and Z.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key R&D Program of China (2023YFB2407300), the Science and Technology Development Plan Project of Jilin Province, China (20220508009RC), and the National Scholarship Fund of China (202307790010).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

Author Zhiyu Fan was employed by the company China Mobile Communications Co., Ltd. Authors Jiaquan Yang and Xuehao He were employed by the company China Southern Power Grid Yunnan Electric Power Research Institute. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

1. Almaghrebi, A.; James, K.; Al Juheshi, F.; Alahmad, M. Insights into household electric vehicle charging behavior: Analysis and predictive modeling. Energies 2024, 17, 925.
2. Bampos, Z.N.; Laitsos, V.M.; Afentoulis, K.D.; Vagropoulos, S.I.; Biskas, P.N. Electric vehicles load forecasting for day-ahead market participation using machine and deep learning methods. Appl. Energy 2024, 360, 122801.
3. Chan, J.W.; Yeo, C.K. Electrical power consumption forecasting with transformers. In Proceedings of the 2022 IEEE Electrical Power and Energy Conference (EPEC), Victoria, BC, Canada, 5–7 December 2022; pp. 255–260.
4. Dai, Q.; Huo, X.; Hao, Y.; Yu, R. Spatio-temporal prediction for distributed PV generation system based on deep learning neural network model. Front. Energy Res. 2023, 11, 1204032.
5. Gnanavendan, S.; Selvaraj, S.K.; Dev, S.J.; Mahato, K.K.; Swathish, R.S.; Sundaramali, G.; Accouche, O.; Azab, M. Challenges, solutions and future trends in EV-technology: A review. IEEE Access 2024, 12, 17242–17260.
6. Gupta, A.K.; Singh, R.K. Short-term day-ahead photovoltaic output forecasting using PCA-SFLA-GRNN algorithm. Front. Energy Res. 2022, 10, 1029449.
7. He, L.; Li, L.; Li, M.; Li, Z.; Wang, X. A deep learning approach to the transformer life prediction considering diverse aging factors. Front. Energy Res. 2022, 10, 930093.
8. Huang, N.; He, Q.; Qi, J.; Hu, Q.; Wang, R.; Cai, G.; Yang, D. Multinodes interval electric vehicle day-ahead charging load forecasting based on joint adversarial generation. Int. J. Electr. Power Energy Syst. 2022, 143, 108404.
9. Kumar, R.R.; Bharatiraja, C.; Udhayakumar, K.; Devakirubakaran, S.; Sekar, K.S.; Mihet-Popa, L. Advances in batteries, battery modeling, battery management system, battery thermal management, SOC, SOH, and charge/discharge characteristics in EV applications. IEEE Access 2023, 11, 105761–105809.
10. Li, S.; Hu, W.; Cao, D.; Dragicevic, T.; Huang, Q.; Chen, Z.; Blaabjerg, F. Electric vehicle charging management based on deep reinforcement learning. J. Mod. Power Syst. Clean Energy 2022, 10, 719–730.
11. Li, S.; Zhao, P.; Gu, C.; Li, J.; Cheng, S.; Xu, M. Battery protective electric vehicle charging management in renewable energy system. IEEE Trans. Ind. Inform. 2023, 19, 1312–1321.
12. Li, Y.; Wei, Y.; Zhu, F.; Du, J.; Zhao, Z.; Ouyang, M. The path enabling storage of renewable energy toward carbon neutralization in China. eTransportation 2023, 16, 100226.
13. Liang, Y.; Ding, Z.; Zhao, T.; Lee, W.-J. Real-time operation management for battery swapping-charging system via multi-agent deep reinforcement learning. IEEE Trans. Smart Grid 2023, 14, 559–571.
14. Liu, K.; Li, K.; Peng, Q.; Zhang, C. A brief review on key technologies in the battery management system of electric vehicles. Front. Mech. Eng. 2019, 14, 47–64.
15. Song, H.; Al Khafaf, N.; Kamoona, A.; Sajjadi, S.S.; Amani, A.M.; Jalili, M.; Yu, X.; McTaggart, P. Multitasking recurrent neural network for photovoltaic power generation prediction. Energy Rep. 2023, 9, 369–376.
16. Sun, X.; Fu, J.; Yang, H.; Xie, M.; Liu, J. An energy management strategy for plug-in hybrid electric vehicles based on deep learning and improved model predictive control. Energy 2023, 269, 126772.
17. Wang, Q.; Wang, Z.; Zhang, L.; Liu, P.; Zhou, L. A battery capacity estimation framework combining hybrid deep neural network and regional capacity calculation based on real-world operating data. IEEE Trans. Ind. Electron. 2023, 70, 8499–8508.
18. Xing, Y.; Li, F.; Sun, K.; Wang, D.; Chen, T.; Zhang, Z. Multi-type electric vehicle load prediction based on Monte Carlo simulation. Energy Rep. 2022, 8, 966–972.
19. Zahoor, A.; Mehr, F.; Mao, G.; Yu, Y.; Sápi, A. The carbon neutrality feasibility of worldwide and in China’s transportation sector by E-car and renewable energy sources before 2060. J. Energy Storage 2023, 61, 106696.
20. Zhong, B. Deep learning integration optimization of electric energy load forecasting and market price based on the ANN–LSTM–transformer method. Front. Energy Res. 2023, 11, 1292204.
Figure 1. Multivariate load correlation analysis of meteorological factors.
Figure 2. Influencing factors of short-term load forecasting of the battery swapping station.
Figure 3. GCN-Transformer multiswap station load joint forecasting model framework.
Figure 4. Euclidean structured data (left) vs. non-Euclidean structured data (right).
Figure 5. Multilayer GCN message aggregation model.
Figure 6. Dot-product attention in attention mechanism.
Figure 7. Multihead attention in attention mechanism.
Figure 8. Overall structure of the proposed Transformer-based model.
Figure 9. Transformer structure, encoder side. Dashed lines indicate layers which are shared between inputs.
Figure 10. Geographical location schematic diagram of the battery swapping stations.
Figure 11. Historical load data chart of battery swap station A.
Figure 12. Daily temperature extremes of the city over two consecutive months.
Figure 13. Comparison of model output and actual value.
Figure 14. The last 24 steps of real data compared with predictions from different models.
Figure 15. Prediction errors of different models (last 24 steps).
Table 1. GCN-Transformer network parameter settings.

Item                  Parameter
Activation function   ReLU
Optimizer             Adam
Maximum epoch         1500
Learning rate         0.001
Batch size            64
Table 2. The accuracy of the load forecast for some battery swap stations by the model.

Load                              Evaluation Index (%)   Proposed Model   TCN-LSTM   CNN-LSTM
Battery swapping station load A   y_MAPE                 1.04             3.36       4.66
                                  y_FA                   98.96            96.64      95.34
Battery swapping station load B   y_MAPE                 3.23             5.24       6.78
                                  y_FA                   97.77            94.76      93.22
Battery swapping station load C   y_MAPE                 8.42             11.95      13.75
                                  y_FA                   98.58            88.05      86.25
Table 3. Comparison of neural networks in terms of feature dimension and training time.

Neural Network   Feature Dimension   Training Time (s)
RNN              1                   998.7
TCN-LSTM         7                   3294.0
CNN-LSTM         7                   2677.13
Transformer      1                   2988.07
Proposed model   8                   3372.46
Table 4. Comparison between the proposed model and single load forecasting model.

Load                            Evaluation Index (%)   Transformer   Proposed Model
Battery swapping station load   y_MAPE                 6.54          4.84
                                y_FA                   93.66         95.16
Table 5. Comparison between the proposed model and other load forecasting models.

Load                            Evaluation Index (%)   RNN     TCN-LSTM   CNN-LSTM   Proposed Model
Battery swapping station load   y_MAPE                 12.02   9.67       7.23       4.73
                                y_FA                   87.98   90.33      92.77      95.27
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
