Article

GRU- and Transformer-Based Periodicity Fusion Network for Traffic Forecasting

Yazhe Zhang, Shixuan Liu, Ping Zhang and Bo Li

1 School of Artificial Intelligence, Hebei University of Technology, Tianjin 300130, China
2 State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300130, China
3 Hebei Province Key Laboratory of Big Data Calculation, Hebei University of Technology, Tianjin 300130, China
4 School of Electrical Engineering, Hebei University of Technology, Tianjin 300130, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(24), 4988; https://doi.org/10.3390/electronics12244988
Submission received: 23 November 2023 / Revised: 8 December 2023 / Accepted: 11 December 2023 / Published: 13 December 2023
(This article belongs to the Special Issue Intelligent Big Data Analysis for High-Dimensional Internet of Things)

Abstract

Accurate traffic prediction is vital for traffic management, control, and urban construction. Extensive research efforts have diligently focused on capturing the intricate spatio-temporal relationships inherent in traffic data. However, only a limited number of studies have fully exploited the potential of periodicity, a distinctive and valuable characteristic of transportation systems. In this paper, we propose a novel GRU- and Transformer-Based Periodicity Fusion Network (GTPFN) to distinguish the effects of different types of periodic data and integrate them seamlessly and effectively. Initially, the proposed model captures dynamic spatio-temporal correlations and obtains the candidate prediction result by employing a GRU encoder–decoder with spatial attention, focusing on the hourly data. Subsequently, we design the Pattern Induction Block, based on GRU layers, to extract regular traffic patterns from daily and weekly data. Finally, the Pattern Fusion Transformer integrates these patterns, followed by a Feedforward layer, to yield the final prediction output. Experiments on the Caltrans Performance Measurement System (PEMS) datasets illustrate that the proposed model outperforms state-of-the-art baseline models on most prediction horizons.

1. Introduction

In recent years, the field of intelligent transportation systems (ITS) has gained increasing attention and experienced rapid development. Traffic forecasting is a crucial component of ITS, which enhances urban road utilization, allowing traffic department personnel to optimize the traffic flow and allocate resources more efficiently. Accurate traffic forecasting is essential for predicting and preventing road accidents, thus improving the overall traffic control capabilities. Moreover, it empowers citizens to plan their travel routes effectively, leading to time savings, reduced emissions, and an improved quality of life. Furthermore, it impacts industries like navigation, autonomous driving [1], and traffic monitoring [2], where precise traffic prediction is essential for operational optimization and efficiency improvements. Therefore, the development and advancement of accurate traffic forecasting have significant implications for both the public welfare and economic sectors.
Capturing intricate and dynamically evolving spatio-temporal relationships is a distinguishing characteristic and challenge of traffic forecasting. In recent decades, researchers have achieved notable advancements in tackling this issue. Initially, classical statistical methodologies, such as the autoregressive integrated moving average (ARIMA) [3] and History Average (HA), were extensively explored and implemented within this field. However, they focus primarily on temporal relations while disregarding the equally significant nonlinear factors and spatial relationships. This leads to insufficient performance in traffic prediction tasks, especially in long-term scenarios. Subsequently, to achieve better predictive accuracy, traditional machine learning techniques such as k-nearest neighbors (KNN) [4] and support vector regression (SVR) [5] were adopted to process voluminous traffic data, enabling a more effective capture of the dynamic spatio-temporal associations. Nonetheless, owing to their stringent feature engineering prerequisites and limited generalizability, the practical applicability of these traditional machine learning models in traffic prediction remains constrained. With the progression of computing power and the abundance of traffic data, many deep learning approaches have emerged as potential remedies for the above-mentioned challenges.
In the domain of deep learning models for traffic prediction, the initial approaches revolved around Convolutional Neural Networks (CNNs), which are designed to extract spatial dependencies; in parallel, Recurrent Neural Networks (RNNs) and their variants were employed to capture temporal dependencies [6,7,8,9]. The inherent limitation of CNNs is that they are solely suitable for grid-based maps, while traffic road maps adhere to a graph-based structure. Consequently, CNNs do not align with road maps' essential characteristics, adversely impacting their ability to capture the spatial relationships among nodes. Subsequently, the introduction of Graph Neural Networks (GNNs) effectively addressed this challenge by leveraging the structural attributes of road maps to facilitate spatial feature fusion, consequently yielding substantial advancements within this field [10,11,12]. Nevertheless, the prevailing deep learning models still contend with two unresolved issues that detrimentally influence the prediction accuracy.
In the GNNs used in most studies, the node relationship matrix is fixed (based on connection, distance, or similarity) and can only be utilized to extract neighbor information or similar functional area information [13,14,15]. In reality, however, the dependencies between nodes constantly change at each time step and cannot be captured by a fixed matrix, limiting the model's ability to capture spatial connections. Therefore, determining how to construct a node relationship matrix that fully discovers the node relationships at each time point is a critical issue. Another challenge is that most existing models overlook the utilization of periodic traffic characteristics. Periodicity is one of the most apparent traffic characteristics, and historical data from the same period often exhibit a high level of similarity; periodic data can therefore predict future traffic trends more accurately. Some studies have utilized periodic data and obtained relatively excellent experimental results [16,17,18]. However, a significant deficiency observed across numerous studies lies in their inadequate utilization of periodicity. Essentially, these studies feed the periodic data into the same model architecture to derive corresponding outputs, followed by simplistic information aggregation methods such as linear layer fusion or concatenation to generate the final prediction results. This approach lacks a more profound exploration of periodic data. The utilization of hourly data primarily aims at prediction, whereas daily and weekly data serve analogical purposes; hence, a solitary model architecture fails to address these two functions simultaneously. Furthermore, such periodic fusion methods hinder the comprehensive exploration of interconnections within periodic data. Thus, effectively harnessing periodic features and optimizing the information fusion process across different periods pose formidable challenges.
In response to the above-mentioned challenges, we propose a novel deep learning framework named the GRU- and Transformer-Based Periodicity Fusion Network (GTPFN) as a potential solution. This model leverages hourly data to forecast forthcoming traffic patterns utilizing a GRU encoder–decoder with spatial attention. Additionally, the regular traffic patterns of future periods are induced from daily and weekly data by the Pattern Induction Block. The Pattern Fusion Transformer subsequently integrates these distinct outputs, and a Feedforward layer derives the ultimate output. The main contributions of this paper can be summarized as follows:
  • We present a novel and interpretable perspective for handling periodic traffic prediction data, aiming to fully exploit the distinct characteristics of the various types of periodic data. Specifically, we utilize hourly data to forecast the basic future traffic pattern and introduce the Pattern Induction Block, which enables the induction of regular future traffic patterns from daily and weekly data. Furthermore, we propose the Pattern Fusion Transformer to consolidate these disparate outputs effectively.
  • We propose the Spatial Attention GRU encoder–decoder to simultaneously consider spatial and temporal relationships. This spatial attention mechanism facilitates the dynamic computation of inter-node relationships at each time step. Consequently, it enhances the representation of the current traffic status while effectively capturing the evolving spatial correlations.
  • We conduct extensive experimental evaluations to assess the model’s performance on four PEMS datasets. The resulting experimental findings reveal that GTPFN performs better than state-of-the-art baselines on most horizons.
The rest of this paper is organized as follows: An overview of the relevant work is provided in Section 2, the definition of the problem is provided in Section 3, the various parts of the model are discussed in detail in Section 4, and Section 5 provides our comparison with other baselines on PEMS datasets as well as sufficient ablation experiments, hyperparameter experiments, and their comparison results. Finally, we conclude in Section 6.

2. Related Work

2.1. Periodicity

In traffic prediction, recognizing and accommodating recurring traffic patterns, including daily rush hours, weekly fluctuations between weekdays and weekends, and seasonal variations, is paramount. These periodic features are instrumental for enhancing the performance of traffic prediction models. Consequently, researchers are dedicated to exploring methods for effectively capturing and utilizing this periodic information for traffic forecasting.
The Attention-Based Spatio-Temporal Graph Convolutional Network (ASTGCN) [16] puts periodic information into corresponding stacked spatio-temporal blocks composed of convolution and attention. It combines their outputs with a linear layer to obtain the final result. The Transformer-Graph Convolutional Attention Network (TRGCAT) [17] encodes periodic features based on temporal transformer layers in parallel and then concatenates and decodes them with the spatial feature through the Graph Convolutional Attention Network to obtain the final output. The Multi-View Dynamic Graph Convolution Network (MVDGCN) [18] puts periodic information into the GRU encoder–decoder with coupled graph convolution and fuses these results with linear layers. Multiple Information Spatio–Temporal Attention-Based Graph Convolution Networks (MISTAGCNs) [15] put the periodic information into the corresponding spatio-temporal blocks and then stack these results and put them into multiple spatio-temporal blocks to obtain the final result.
Nonetheless, many contemporary models incorporating periodicity, including the aforementioned ones, commonly integrate hourly, daily, and weekly data within a single module to capture spatio-temporal relationships, fusing the results through straightforward mechanisms. These approaches neglect the characteristics inherent in distinct periodic information, thereby limiting the potential enhancement of the model's performance. Specifically, the daily and weekly data that share the prediction period are more conducive to induction. Hourly data collected immediately before the prediction period, on the other hand, are better suited for capturing the spatio-temporal relationships that are essential for predictive analytics. Furthermore, prevalent periodic data fusion techniques such as concatenation and linear layers, as employed in the aforementioned models, are too simplistic to comprehensively integrate and leverage periodic information.

2.2. Transformer

Transformer is a sequence-to-sequence model based on the self-attention mechanism [19] that is widely used in natural language processing tasks, such as machine translation, text summarization, and language generation [20,21], where it has achieved significant results in a short period of time. Compared to the traditional Recurrent Neural Network (RNN) model, Transformer adopts a new architecture: it does not need to process the elements of a sequence one by one, which makes the training process highly parallel and accelerates the training of the model. The self-attention mechanism is the core idea of Transformer. It can model the relationship between any two positions in a sequence, thereby capturing global contextual information. Each position interacts with all other positions through a computed attention weight matrix, and the representations of the various positions are then combined and aggregated according to their respective attention weights. This method enables the model to better understand the dependency relationships between different positions in a sentence. In addition to the self-attention mechanism, Transformer introduces residual connections and layer normalization to address gradient vanishing and stability issues during training.
Transformer has made significant achievements in traffic prediction due to its excellent ability to capture complex spatio-temporal relationships in historical data. Spatio-Temporal Transformer Networks (STTNs) [22] stack spatial transformers and temporal transformers to fuse spatio-temporal information. Autoformer [23] designs a decomposition architecture to deal with long-term temporal dependencies and creates an autocorrelation mechanism to improve the computational efficiency and data utilization. The Non-Stationary Transformer [24] addresses the over-stationarization problem, which deteriorates Transformer's performance in non-stationary time series forecasting; it introduces two key modules, "Series Stationarization" to enhance predictability by standardizing input statistics and "De-Stationary Attention" to restore non-stationary information into temporal dependencies. The Propagation Delay-Aware Dynamic Long-Range Transformer (PDFormer) [25] introduces a spatial self-attention module incorporating distinct graph-masking techniques to capture local geographic and global semantic neighborhoods. Moreover, a traffic delay perception feature conversion module is devised to explicitly model temporal delays in the propagation of spatial information.
However, although various studies have demonstrated Transformer's strong ability to capture spatio-temporal relationships, the vast memory consumption caused by its excessive parameters must be addressed, especially when the multi-head attention mechanism exacerbates this situation. Some models simply stack Transformer blocks to achieve better prediction results while ignoring the resulting memory consumption. Determining how to utilize transformers more efficiently, reducing memory usage while improving the model's accuracy, remains a challenging issue and is a key concern of this work. We therefore use transformer blocks for pattern fusion rather than for capturing spatio-temporal dependencies, which significantly reduces the number of transformer blocks required without harming the model's performance.

3. Preliminaries

In this section, we provide the basic definitions and statements related to this work.

3.1. Road Network

The road map $G = (V, E, A)$ is used to display the connection relationship of road segments. $V = \{v_i \mid i = 1, 2, \ldots, N\}$ is the set of nodes on the road, and $N$ represents the number of nodes. $E = \{e_{ij}\}$ is the set of edges indicating connectivity between node $i$ and node $j$. $A \in \mathbb{R}^{N \times N}$ describes the connectivity between nodes and is defined as follows:
$$A_{ij} = \begin{cases} 1, & \text{if } v_i \text{ and } v_j \text{ are connected}, \\ 0, & \text{otherwise}. \end{cases} \quad (1)$$
Although $A$ is defined for the standard traffic network graph, our model captures changing spatial relationships with dynamic matrices rather than with a fixed matrix such as $A$.
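As a concrete illustration of the definition in Equation (1), the following sketch (our own illustration with a hypothetical edge list, not the authors' code) builds the binary matrix $A$ from a list of connected node pairs:

```python
import numpy as np

def build_adjacency(num_nodes, edges):
    """Binary adjacency matrix A following Equation (1)."""
    A = np.zeros((num_nodes, num_nodes), dtype=np.float32)
    for i, j in edges:
        A[i, j] = 1.0  # v_i and v_j are connected
        A[j, i] = 1.0  # assuming an undirected road graph
    return A

# Hypothetical 4-node chain of road segments: 0-1-2-3
A = build_adjacency(4, [(0, 1), (1, 2), (2, 3)])
```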

3.2. Traffic Feature Matrix

Assuming that a sequence of time $T$ records the traffic features of each node in the road network $G$ during this time period, we use $x_t^{c,i} \in \mathbb{R}$ to indicate the value of the $c$-th feature of node $i$ at time $t$. $X_t^i \in \mathbb{R}^F$ denotes the values of all features of node $i$ at time $t$, where $F$ is the number of traffic features. $X_t = (X_t^1, X_t^2, \ldots, X_t^N) \in \mathbb{R}^{N \times F}$ gives the values of all nodes at time $t$, and $\mathcal{X} = (X_1, X_2, \ldots, X_T) \in \mathbb{R}^{N \times T \times F}$ denotes the values of all features of all nodes over the period $T$. In the experiments, we predict traffic flow for the PEMS04, PEMS07, and PEMS08 datasets, while for the PEMS-Bay dataset, we predict speed.

3.3. Problem Definition

To predict the traffic information $Y = (y^1, y^2, \ldots, y^N) \in \mathbb{R}^{N \times T_p}$ for a period in the future, we utilize the $\mathcal{X}$ of specific periods in the past. $Y$ represents the characteristics of all nodes in the next $T_p$ time steps, and $y^i = (y_1^i, y_2^i, \ldots, y_{T_p}^i) \in \mathbb{R}^{T_p}$ represents one traffic feature of a particular node $i$ during the predicted period.

4. Methodology

4.1. Data Preparation and Processing

Before formally introducing the modules in our model, we need to introduce the data required by each module. As shown in Figure 1, $X_h \in \mathbb{R}^{N \times T_h \times F_i}$ denotes hourly data, and $T_h$ is the length of its time steps. $U_d = (X_d^1, \ldots, X_d^{P_d}) \in \mathbb{R}^{P_d \times N \times T_p \times F_i}$ denotes daily data, namely the data with the same time period as the predicted time in the previous $P_d$ days. Similarly, $U_w = (X_w^1, \ldots, X_w^{P_w}) \in \mathbb{R}^{P_w \times N \times T_p \times F_i}$ denotes weekly data, namely the data with the same time period as the predicted time in the previous $P_w$ weeks. For instance, as Figure 2 shows, if we want to predict the traffic features from 8:00 to 8:55 on Friday, 24 March, we need the data from 7:00 to 7:55 on 24 March ($X_h$), the data from 8:00 to 8:55 on 23 March and 22 March ($U_d$), and the data from 8:00 to 8:55 on 17 March and 10 March ($U_w$). The exact values of $P_d$ and $P_w$ depend on the specific situation. It is important to emphasize that $F_i$ represents the input feature dimension, obtained from the original matrix containing only one traffic feature through a linear layer; this allows us to capture more complex relationships from the input features.
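To make the slicing concrete, here is a minimal sketch (our own, with hypothetical helper names) of how $X_h$, $U_d$, $U_w$, and the target can be cut from a full series recorded at 5-min granularity, assuming $T_h = T_p = 12$ and $P_d = P_w = 2$ as in our experiments:

```python
import numpy as np

STEPS_PER_DAY = 288                # 5-min granularity: 24 * 60 / 5
STEPS_PER_WEEK = 7 * STEPS_PER_DAY

def make_inputs(X, t0, T_h=12, T_p=12, P_d=2, P_w=2):
    """X: full series of shape (T_total, N, F); t0: first predicted step."""
    X_h = X[t0 - T_h:t0]                               # the hour just before t0
    U_d = np.stack([X[t0 - d * STEPS_PER_DAY:
                      t0 - d * STEPS_PER_DAY + T_p]    # same slot, d days back
                    for d in range(P_d, 0, -1)])       # ordered old -> new
    U_w = np.stack([X[t0 - w * STEPS_PER_WEEK:
                      t0 - w * STEPS_PER_WEEK + T_p]   # same slot, w weeks back
                    for w in range(P_w, 0, -1)])
    Y = X[t0:t0 + T_p]                                 # ground truth to predict
    # note: slices here are (T_p, N, F); transpose if (N, T_p, F) is preferred
    return X_h, U_d, U_w, Y

# Hypothetical example: 4 weeks of data, 307 nodes, 1 feature
X = np.random.rand(4 * STEPS_PER_WEEK, 307, 1)
X_h, U_d, U_w, Y = make_inputs(X, t0=3 * STEPS_PER_WEEK)
```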

4.2. Overview

As shown in Figure 1, the GTPFN mainly consists of three modules. The first module is a Spatial Attention GRU encoder–decoder for hourly data; we integrate the GRU and a spatial attention mechanism to capture the spatio-temporal relationship at each time step. The second module is the Pattern Induction Block for daily and weekly data; we utilize the GRU gate mechanism to induce the regular traffic pattern for the predicted period. The third module is the Pattern Fusion Transformer for periodic result fusion; we use the multi-head self-attention mechanism of Transformer to deeply fuse the periodic results generated by the above two modules.

4.3. Spatial Attention GRU Encoder–Decoder

Firstly, we feed $X_h \in \mathbb{R}^{N \times T_h \times F_i}$ into a Spatial Attention GRU encoder–decoder to generate the candidate prediction outcome $\tilde{Y}_0$. Compared with the Long Short-Term Memory (LSTM) network [26], the Gated Recurrent Unit (GRU) network has gained widespread adoption for its parameter efficiency and competitive ability to capture long-term temporal relationships [27], thereby serving as a preferred alternative to the LSTM. The basic formulas are as follows:
$$R_t = \sigma(I_t W_{xr} + H_{t-1} W_{hr} + b_r) \quad (2)$$
$$Z_t = \sigma(I_t W_{xz} + H_{t-1} W_{hz} + b_z) \quad (3)$$
$$\tilde{H} = \tanh(I_t W_{xh} + (R_t \odot H_{t-1}) W_{hh} + b_h) \quad (4)$$
$$H_t = Z_t \odot H_{t-1} + (1 - Z_t) \odot \tilde{H} \quad (5)$$
where $R_t$ and $Z_t$ represent the reset gate and the update gate, respectively, which are the core of the GRU. $I_t$ is the input at the current time $t$, $H_{t-1}$ is the hidden state of the previous time step, $\tilde{H}$ is the candidate hidden state, and $H_t$ is the final hidden state of time step $t$. $\odot$ represents the Hadamard product. The reset gate $R_t$ controls the impact of historical information on the current hidden state: when it is 0, historical information is ignored completely; when it is 1, more historical information is retained. The update gate $Z_t$ determines the weights of the previous hidden state and the current candidate hidden state: when it approaches 1, the hidden state of the previous time step has a greater impact on the current time step than the current input data.
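For concreteness, Equations (2)–(5) can be rendered literally in PyTorch as below; this sketch is equivalent to torch.nn.GRUCell up to parametrization details and is written only to mirror the notation in the text:

```python
import torch
import torch.nn as nn

class GRUCell(nn.Module):
    """Literal rendering of Equations (2)-(5)."""
    def __init__(self, f_in, f_h):
        super().__init__()
        self.W_xr, self.W_hr = nn.Linear(f_in, f_h), nn.Linear(f_h, f_h, bias=False)
        self.W_xz, self.W_hz = nn.Linear(f_in, f_h), nn.Linear(f_h, f_h, bias=False)
        self.W_xh, self.W_hh = nn.Linear(f_in, f_h), nn.Linear(f_h, f_h, bias=False)

    def forward(self, I_t, H_prev):
        R_t = torch.sigmoid(self.W_xr(I_t) + self.W_hr(H_prev))         # Eq. (2), reset gate
        Z_t = torch.sigmoid(self.W_xz(I_t) + self.W_hz(H_prev))         # Eq. (3), update gate
        H_tilde = torch.tanh(self.W_xh(I_t) + self.W_hh(R_t * H_prev))  # Eq. (4)
        return Z_t * H_prev + (1 - Z_t) * H_tilde                       # Eq. (5)
```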
In order to effectively model the dynamic spatial dependencies over time while considering the temporal dependencies, we replace the above formulas with the following:
$$S = \mathrm{LeakyReLU}\left((X_t^h W_{xa})(H_{t-1} W_{ha})^{T}\right) \quad (6)$$
$$S'_{i,j} = \frac{\exp(S_{i,j})}{\sum_{j=1}^{N} \exp(S_{i,j})} \quad (7)$$
$$\hat{X}_t^h = S' X_t^h, \qquad R_t = \sigma(\hat{X}_t^h W_{xr} + b_r) \quad (8)$$
$$Z_t = \sigma(\hat{X}_t^h W_{xz} + b_z) \quad (9)$$
$$\tilde{H} = \tanh(\hat{X}_t^h W_{xh} + (R_t \odot H_{t-1}) W_{hh} + b_h) \quad (10)$$
$$H_t = Z_t \odot H_{t-1} + (1 - Z_t) \odot \tilde{H} \quad (11)$$
where $X_t^h \in \mathbb{R}^{N \times F_i}$ is the hourly input of the current step, $S \in \mathbb{R}^{N \times N}$ is the attention score between nodes at time $t$, and $S' \in \mathbb{R}^{N \times N}$ is the result of normalization through the Softmax function. $\hat{X}_t^h$ is the result after spatial attention. $\tilde{H}, H_{t-1}, H_t \in \mathbb{R}^{N \times F_h}$ denote the candidate hidden state, the hidden state of the previous time step, and the hidden state of the current time step, respectively. $W_{xa}, W_{xr}, W_{xz}, W_{xh} \in \mathbb{R}^{F_i \times F_h}$, $W_{hr}, W_{hz}, W_{hh} \in \mathbb{R}^{F_h \times F_h}$, and the biases $b \in \mathbb{R}^{N \times F_h}$ are learnable parameters. $F_h$ is the feature dimension of the hidden state.
As shown in Equations (6)–(11), we first calculate the attention score from $X_t^h$ and $H_{t-1}$ and then apply spatial attention to obtain $\hat{X}_t^h$, which captures the spatial relationships. This design enables the independent utilization of spatial information at each time step; as a result, our model incorporates spatial attention into the GRU computations, enhancing its capacity to capture dynamic spatial correlations effectively.
We use this Spatial Attention GRU encoder to obtain the hidden state of the historical time series, which contains essential historical contextual information. Then, we use the Spatial Attention GRU decoder to generate the candidate autoregressive prediction result $\tilde{Y}_0 \in \mathbb{R}^{T_p \times N \times F_h}$ based on this hidden state.
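One encoder step of this mechanism can be sketched in PyTorch as follows; the class name and per-step shapes ($X_t$ of size $(N, F_i)$, $H_{prev}$ of size $(N, F_h)$) are our reconstruction of Equations (6)–(11), not the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionGRUCell(nn.Module):
    """One step of the Spatial Attention GRU, Equations (6)-(11)."""
    def __init__(self, f_in, f_h):
        super().__init__()
        self.W_xa = nn.Linear(f_in, f_h, bias=False)
        self.W_ha = nn.Linear(f_h, f_h, bias=False)
        self.W_xr = nn.Linear(f_in, f_h)
        self.W_xz = nn.Linear(f_in, f_h)
        self.W_xh = nn.Linear(f_in, f_h)
        self.W_hh = nn.Linear(f_h, f_h, bias=False)

    def forward(self, X_t, H_prev):
        # Eqs. (6)-(7): dynamic node-to-node attention at this time step
        S = F.leaky_relu(self.W_xa(X_t) @ self.W_ha(H_prev).T)  # (N, N)
        S = F.softmax(S, dim=-1)
        X_t = S @ X_t                                # Eq. (8): spatially attended input
        # Eqs. (8)-(11): GRU update driven by the attended input
        R_t = torch.sigmoid(self.W_xr(X_t))
        Z_t = torch.sigmoid(self.W_xz(X_t))
        H_tilde = torch.tanh(self.W_xh(X_t) + self.W_hh(R_t * H_prev))
        return Z_t * H_prev + (1 - Z_t) * H_tilde
```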

4.4. Pattern Induction Block

The Spatial Attention GRU encoder–decoder employs hourly data as the foundation for prediction, while the Pattern Induction Block harnesses daily and weekly historical information to abstract the regular traffic pattern of the predicted timeframe. The Pattern Induction Block is composed of GRU layers; the number of layers is $P_d$ for daily data and $P_w$ for weekly data. The formulas of these GRU layers are very similar to Equations (2)–(5); we only replace $I_t$ with $X_t \in \mathbb{R}^{N \times T_p \times F_i}$. The GRU layer is used here to leverage its gate mechanism to effectively eliminate outliers within the periodic information, with the aim of obtaining the regular traffic pattern of the predicted period, rather than to capture spatio-temporal relationships as usual. Taking the daily part as an example, we introduce the specific working method of the Pattern Induction Block below.
The daily data are $U_d \in \mathbb{R}^{P_d \times N \times T_p \times F_i}$, and the corresponding Pattern Induction Block has $P_d$ GRU layers, corresponding to the past $P_d$ days. Each $X_d^i$ ($i = 1, 2, \ldots, P_d$) in $U_d$ is fed into the GRU layers from old to new, and finally we obtain the daily traffic pattern $H_d \in \mathbb{R}^{N \times T_p \times F_h}$. Similarly, the weekly regular pattern $H_w \in \mathbb{R}^{N \times T_p \times F_h}$ is obtained using the same method.
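The following sketch captures our reading of this procedure: a gated recurrence over the period index, in which the input at each "step" is an entire slice $X_d^i$ of shape $(N, T_p, F_i)$, fed oldest first. The class name and shape conventions are ours, offered as a hedged reconstruction rather than the released code:

```python
import torch
import torch.nn as nn

class PatternInductionBlock(nn.Module):
    """Induces a regular pattern H from P period slices: Equations (2)-(5)
    with I_t replaced by a whole slice X of shape (N, T_p, F_i)."""
    def __init__(self, f_in, f_h):
        super().__init__()
        self.f_h = f_h
        self.W_xr, self.W_xz, self.W_xh = (nn.Linear(f_in, f_h) for _ in range(3))
        self.W_hr, self.W_hz, self.W_hh = (nn.Linear(f_h, f_h, bias=False) for _ in range(3))

    def forward(self, U):                  # U: (P, N, T_p, f_in), oldest slice first
        H = U.new_zeros(U.shape[1], U.shape[2], self.f_h)
        for X in U:                        # one gated update per past day/week
            R = torch.sigmoid(self.W_xr(X) + self.W_hr(H))
            Z = torch.sigmoid(self.W_xz(X) + self.W_hz(H))
            H_tilde = torch.tanh(self.W_xh(X) + self.W_hh(R * H))
            H = Z * H + (1 - Z) * H_tilde  # gates damp out outlier slices
        return H                           # induced pattern: (N, T_p, f_h)
```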

4.5. Pattern Fusion Transformer

After obtaining the candidate prediction result $\tilde{Y}_0$, the daily traffic pattern $H_d$, and the weekly traffic pattern $H_w$ based on the periodic information, we fuse them sequentially in the Pattern Fusion Transformer. The Transformer mainly consists of position encoding, a multi-head self-attention mechanism, a Feedforward layer, and a residual connection and normalization layer, as shown in Figure 3.
In contrast to Recurrent Neural Networks (RNNs), which inherently exhibit a natural temporal sequencing of data and iterate input following this temporal order, the Transformer architecture’s self-attention mechanism lacks an intrinsic awareness of input sequences’ temporal relationships. The time order is important for the fusion of regular traffic patterns and candidate prediction results, because they should pay more attention to the information of nearby time points rather than the information of distant time points. Consequently, it becomes imperative to introduce position encoding as a remedy for this limitation. Position encoding leverages sine (sin) and cosine (cos) functions to encode and represent the temporal relationships among input samples. The computational procedure for position encoding is elucidated by Formulas (12) and (13).
$$PE_{(pos,\,2i)} = \sin\!\left(\frac{pos}{10000^{2i/d_{model}}}\right) \quad (12)$$
$$PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{model}}}\right) \quad (13)$$
In this context, $pos$ denotes the position of the current sample within the input sequence, $d_{model}$ is the feature dimension of each sample, and $i$ denotes the position of the current feature within the sample. The position encoding matrix $PE$, of dimension $pos \times d_{model}$, is constructed by iteratively encoding the sample and feature positions with the sine and cosine functions. The rationale behind employing sine and cosine functions is their ability to capture relative position relationships effectively: any row $PE_{pos+k}$ of the encoding matrix can be expressed as a linear function of $PE_{pos}$, as shown in Equations (14) and (15). This property simplifies the extraction of relative positional information between two positions.
$$\sin(pos + k) = \sin(pos)\cos k + \cos(pos)\sin k \quad (14)$$
$$\cos(pos + k) = \cos(pos)\cos k - \sin(pos)\sin k \quad (15)$$
Finally, the outcome of the positional encoder is added to the input, so that the temporal relationship is preserved for further calculation.
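For reference, the textbook construction of this encoding is shown below (assuming an even $d_{model}$):

```python
import torch

def positional_encoding(length, d_model):
    """Sinusoidal encoding of Equations (12)-(13); d_model must be even."""
    pos = torch.arange(length, dtype=torch.float32).unsqueeze(1)  # (length, 1)
    i = torch.arange(0, d_model, 2, dtype=torch.float32)          # even indices 2i
    angle = pos / torch.pow(10000.0, i / d_model)                 # (length, d_model/2)
    pe = torch.zeros(length, d_model)
    pe[:, 0::2] = torch.sin(angle)
    pe[:, 1::2] = torch.cos(angle)
    return pe  # added to the input: X + pe

pe = positional_encoding(12, 32)  # e.g., T_p = 12 steps, F_h = 32 channels
```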
After applying positional encoding, we utilize the self-attention mechanism to uncover the deep-seated interconnections within the periodic data. Our goal is to enable the model to adeptly capture and encapsulate the inherent periodic correlations in the time series data, thereby enhancing the understanding of the relationships between the candidate prediction outcome and the regular traffic patterns. This, in turn, facilitates the generation of more comprehensive representations and precise analytical insights for traffic prediction. The self-attention formulas of the block are as follows (here, we collectively refer to $H_d$ and $H_w$ as $H$):
$$Q = \tilde{Y}_{i-1} W_q \quad (16)$$
$$K = H W_k \quad (17)$$
$$V = H W_v \quad (18)$$
$$\mathrm{SelfAttention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{T}}{\sqrt{F_k}}\right) V \quad (19)$$
where $Q$, $K$, and $V$ represent the query, key, and value, respectively, and $W_q, W_k, W_v \in \mathbb{R}^{F_h \times F_k}$ are learnable parameters. $F_k$ is the dimension of the key, used to scale the similarity calculation between queries and keys. The $\mathrm{softmax}$ function computes the correlation coefficients, ensuring that they are strictly positive and collectively sum to 1. The value matrix $V$ then undergoes a weighted aggregation using these weight coefficients, yielding the self-attention output.
This module aims to integrate periodic traffic patterns into the candidate prediction results. Therefore, for the calculation of $Q$ and $V$, we choose to use the periodic traffic pattern $H$, and for the calculation of $K$, we choose to use $\tilde{Y}_{i-1}$. In order to capture the correlation and dependency between the patterns and the candidate prediction results at a deeper level, we also use the multi-head attention mechanism, with the following formulas:
$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(h_1, \ldots, h_n) \quad (20)$$
$$h_j = \mathrm{SelfAttention}(H W_j^Q, \tilde{Y}_{i-1} W_j^K, H W_j^V) \quad (21)$$
The multi-head self-attention mechanism is an amalgamation of multiple self-attention operations that employs several attention heads to capture distinct subspaces of information. The attention values from each head are concatenated and subjected to a linear transformation, yielding the final attention representation. Taking the $n$-head mechanism as an example, the input features are partitioned along the feature dimension into $n$ sub-feature sequences of size $F_h \,//\, n$, where $//$ denotes division retaining only the integer part. Each sub-sequence independently computes its attention, and the results are merged through concatenation into an output sequence denoted as $O$.
It should be emphasized that, after obtaining the output sequence $O \in \mathbb{R}^{N \times T_p \times F_h}$, we add the periodic fusion information $\tilde{Y}_{i-1}$ from the previous step in the form of a residual connection and then apply LayerNorm to improve the stability of gradient propagation and enhance the feature representation ability. The final output $\tilde{Y}_i$ of this Pattern Fusion Transformer is obtained through LayerNorm and a Feedforward layer:
$$L_i = \mathrm{LayerNorm}(O + \tilde{Y}_{i-1}) \quad (22)$$
$$\tilde{Y}_i = \mathrm{ReLU}(\mathrm{ReLU}(L_i W_i^a + b_i^a) W_i^b + b_i^b) \quad (23)$$
where $W_i^a \in \mathbb{R}^{F_h \times 2F_h}$ and $W_i^b \in \mathbb{R}^{2F_h \times F_h}$ are learnable parameters.
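Putting Equations (16)–(23) together, one Pattern Fusion Transformer block can be sketched as follows. We lean on torch.nn.MultiheadAttention and, given the differing $Q$/$K$/$V$ assignments between Equations (16)–(18) and Equation (21), adopt the cross-attention reading in which the running fusion result queries the induced pattern; treat this assignment, and the names, as our interpretation rather than the released code:

```python
import torch
import torch.nn as nn

class PatternFusionBlock(nn.Module):
    """One Pattern Fusion Transformer block, Equations (16)-(23)."""
    def __init__(self, f_h, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(f_h, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(f_h)
        self.ffn = nn.Sequential(nn.Linear(f_h, 2 * f_h), nn.ReLU(),
                                 nn.Linear(2 * f_h, f_h), nn.ReLU())  # Eq. (23)

    def forward(self, Y_prev, H, pe):
        # inputs: (N, T_p, F_h); nodes act as the batch dimension
        Y_prev, H = Y_prev + pe, H + pe   # position encoding, Eqs. (12)-(13)
        O, _ = self.attn(Y_prev, H, H)    # multi-head attention, Eqs. (19)-(21)
        L = self.norm(O + Y_prev)         # residual + LayerNorm, Eq. (22)
        return self.ffn(L)                # fused output, Eq. (23)
```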
In this way, we use only two Transformer blocks to deeply capture the internal connections between the periodic information and combine them, reducing memory usage while preserving the model's excellent predictive power.
After passing through the Weekly Pattern Fusion Transformer, the output $\tilde{Y}_2$ is obtained. It passes through a final Feedforward layer to adjust the feature dimension and perform feature extraction, resulting in the final model output $Y$. The specific formula for this last Feedforward network is as follows:
$$Y = \mathrm{ReLU}(\tilde{Y}_2 W_y^a + b_y^a) W_y^b + b_y^b \quad (24)$$
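End to end, the three modules compose as sketched below; the function and attribute names are placeholders of ours, mirroring Figure 1 rather than the released code:

```python
def gtpfn_forward(model, X_h, U_d, U_w, pe):
    Y0 = model.sagru(X_h)                  # candidate prediction, Section 4.3
    H_d = model.daily_induction(U_d)       # daily pattern, Section 4.4
    H_w = model.weekly_induction(U_w)      # weekly pattern, Section 4.4
    Y1 = model.daily_fusion(Y0, H_d, pe)   # Daily Pattern Fusion Transformer
    Y2 = model.weekly_fusion(Y1, H_w, pe)  # Weekly Pattern Fusion Transformer
    return model.output_ffn(Y2)            # final Feedforward layer, Eq. (24)
```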

5. Experiment

5.1. Datasets

Our experiments use four Caltrans Performance Measurement System (PEMS) datasets to evaluate our model: PEMS-Bay, PEMS04, PEMS07, and PEMS08 [28]. PEMS provides a unified database of traffic data collected by Caltrans on California's highways, along with datasets from Caltrans and partner agencies. The PEMS04, PEMS07, and PEMS08 datasets contain three traffic features (flow, speed, and occupancy), while the PEMS-Bay dataset contains only speed. The relevant information on these datasets is shown in Table 1.

5.2. Baselines

In order to substantiate the efficacy of our proposed method, we compare our method with the following baseline methods:
  • HA: A statistical method that employs historical data averages to forecast forthcoming values.
  • ARIMA [29]: A methodology that integrates autoregressive and moving average models to address time series forecasting challenges.
  • VAR [30]: A statistical method used for modeling and analyzing the dynamic relationships among multiple time series variables.
  • FC-LSTM [31]: A neural network architecture that combines fully connected layers with Long Short-Term Memory (LSTM) layers to handle sequential and non-sequential data.
  • DCRNN [32]: A model that combines the bi-directional random walk on the distance-based graph with GRU in an encoder–decoder manner.
  • Graph WaveNet [33]: A framework that combines the adaptive adjacency matrix into graph convolution with 1D dilated convolution.
  • ASTGCN [16]: A model which utilizes attention and convolution to capture the spatio-temporal relationship with periodicity fusion.
  • STGCN [13]: A method that utilizes graph convolution and causal convolution to learn the spatial and temporal dependencies.
  • STSGCN [34]: A network that utilizes the localized spatio-temporal subgraph module to model localized correlations independently.
  • STID [35]: A framework that leverages Spatial and Temporal IDentity information (STID) to address samples’ indistinguishability in the spatial and temporal dimensions based on multi-layer perceptrons.

5.3. Evaluation Metrics

To facilitate a quantitative comparison of these methodologies, we employ three distinct metrics to comprehensively evaluate the model’s performance in traffic forecasting. Specifically, these metrics encompass the Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE), each of which is elucidated as follows:
$$MAE = \frac{1}{NT} \sum_{i=1}^{N} \sum_{t=1}^{T} \left| \hat{y}_t^i - y_t^i \right|$$
$$MAPE = \frac{100\%}{NT} \sum_{i=1}^{N} \sum_{t=1}^{T} \left| \frac{\hat{y}_t^i - y_t^i}{y_t^i} \right|$$
$$RMSE = \sqrt{\frac{1}{NT} \sum_{i=1}^{N} \sum_{t=1}^{T} \left( \hat{y}_t^i - y_t^i \right)^2}$$
where $\hat{y}_t^i$ is an element of the predicted result $\hat{Y}$, and $y_t^i$ is the corresponding element of the ground truth $Y$.
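Direct NumPy implementations of the three metrics are given below (the small epsilon guarding division by zero ground-truth values is an implementation detail of ours):

```python
import numpy as np

def mae(y_hat, y):
    return np.mean(np.abs(y_hat - y))

def rmse(y_hat, y):
    return np.sqrt(np.mean((y_hat - y) ** 2))

def mape(y_hat, y, eps=1e-8):
    return 100.0 * np.mean(np.abs((y_hat - y) / (y + eps)))
```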

5.4. Experiment Setting

All experimental assessments are conducted on a single NVIDIA RTX 3090 GPU with 24 GB of memory. The proposed neural network architecture is implemented in PyTorch. The maximum number of training epochs is 300, with an early stopping mechanism, and the default batch size is 64. The key hyperparameters are configured as follows: $T_h$ is 12, equivalent to the past hour, and $P_d$ and $P_w$ are both 2, corresponding to the past 2 days and 2 weeks, respectively. The GRU layers have 32 hidden channels, and the number of heads in the multi-head attention is 8. The learning rate is initialized at 0.01 and reduced to 0.001 after 150 training epochs, and the weight decay is 0.0001. The training, validation, and test data are split in a ratio of 6:2:2. The model is trained with the Adam optimizer using the SmoothL1 loss. We utilize the above-mentioned hourly, daily, and weekly data to forecast the subsequent 12 time steps (i.e., one hour).
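These settings translate into PyTorch roughly as follows; the linear model and random batch are stand-ins so the snippet runs, and the real loop iterates over the dataset loaders with early stopping:

```python
import torch
import torch.nn as nn

model = nn.Linear(12, 12)  # stand-in for the GTPFN
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=0.0001)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150], gamma=0.1)
criterion = nn.SmoothL1Loss()

for epoch in range(300):                             # max 300 epochs
    x, y = torch.randn(64, 12), torch.randn(64, 12)  # one dummy batch of size 64
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()                                 # lr drops 0.01 -> 0.001 at epoch 150
```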

5.5. Main Results

Table 2 shows a comparison with the baselines for predictions 15 min (horizon = 3), 30 min (horizon = 6), and 60 min (horizon = 12) ahead on the PEMS datasets. We observe that (1) deep learning techniques, exemplified by STSGCN and DCRNN, consistently yield superior outcomes compared to conventional time series methodologies, such as the ARIMA and VAR models. This result substantiates the efficacy of incorporating both spatial and temporal correlations in traffic forecasting. (2) STID achieves promising results on all four datasets, indicating the importance of considering the spatio-temporal indistinguishability of samples. (3) The GTPFN model yields commendable outcomes compared to preceding state-of-the-art models across the four datasets. This performance underscores the effectiveness of the methodology, which incorporates periodic information for predictive and inductive purposes alongside the seamless integration of Pattern Fusion Transformers. Such an approach is demonstrably efficacious in bolstering the precision of both short-term and long-term forecasting.

5.6. Ablation Study

We conducted ablation studies on the PEMS04 dataset to validate the effectiveness of the key components of our proposed model, GTPFN. We name the GTPFN variants with individual components removed as follows:
  • GTPFN w/o P: Removes the utilization of daily data and weekly data and only uses hourly data for predictions.
  • GTPFN w/o T: Removes the Pattern Fusion Transformer and fuses the periodical data by linear layers instead.
  • GTPFN w/o H: Removes the utilization of hourly data and only uses daily data and weekly data to induce the pattern.
  • GTPFN w/o A: Removes the attention mechanism from the SAGRU encoder–decoder.
Figure 4 shows the results of the ablation experiment. Evidently, the Pattern Induction Block exerts the most substantial influence on the entire model, particularly in the context of long-term predictions. This observation underscores the pivotal role that Pattern Induction Blocks play in mitigating the cumulative error impact of the GRU encoder–decoder. The second-most influential factor affecting the model’s performance is the Pattern Fusion Transformer, underscoring the imperative of delving into deeper levels to consider the interplay of periodic information. Importantly, it is noteworthy that when employing only the Pattern Induction Block for prediction, the loss values exhibit remarkable uniformity across various time points. This outcome aligns coherently with our expectations, as utilizing the induced regular traffic patterns as predictive outcomes does not entail the stepwise accumulation of losses that is characteristic of conventional prediction models.

5.7. Hyperparameter Experiments

In this subsection, we conduct hyperparameter experiments using the PEMS04 dataset to determine the optimal values for P d and P w . The outcomes are visually represented in Figure 5. The most favorable prediction results are obtained when the values of P d and P w are both 2. This observation underscores the significance of amalgamating weekly and daily information to enhance the prediction accuracy. Notably, the model’s performance is the least favorable when P d is 2 and P w is 0, while it is significantly improved when P d is 0 and P w is 2. This discrepancy implies that the weekly periodicity within the PEMS04 dataset holds greater prominence compared to the daily periodicity.

6. Conclusions

This paper proposes a novel GRU- and Transformer-Based Periodicity Fusion Network (GTPFN). The proposed model includes the Spatial Attention GRU encoder–decoder. It captures dynamic spatio-temporal relationships at each time step and makes basic predictions based on hourly data. Additionally, the model incorporates Pattern Induction Blocks based on GRU layers. This block induces regular traffic patterns using daily and weekly data. Furthermore, the model utilizes Pattern Fusion Transformers to integrate the output from the above-mentioned modules, followed by a Feedforward layer to generate the final output. The extensive experiments on PEMS datasets demonstrate the superiority of the proposed method.
Nevertheless, this model exhibits limited responsiveness towards outliers. In future investigations, we intend to explore methodologies to integrate external influences from weather conditions, events, and accidents into the model, thereby fostering enhanced sensitivity towards outliers. However, integrating these external influences poses potential challenges that warrant careful consideration, including data quality issues, the dynamic nature of external factors, and the need for real-time updates. Addressing these challenges is crucial for ensuring the robustness of our predictive model. If the model can successfully address them, it will be more sensitive to traffic flow in emergencies, thereby achieving a better performance.

Author Contributions

Conceptualization, Y.Z. and S.L.; methodology, Y.Z. and B.L.; software, P.Z.; validation, P.Z., S.L. and Y.Z.; formal analysis, B.L.; investigation, Y.Z. and B.L.; resources, S.L.; data curation, P.Z.; writing—original draft preparation, Y.Z. and S.L.; writing—review and editing, P.Z.; visualization, B.L.; supervision, P.Z.; project administration, P.Z.; funding acquisition, P.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data are available publicly online; they are also available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guo, K.; Wu, Z.; Wang, W.; Ren, S.; Zhou, X.; Gadekallu, T.R.; Luo, E.; Liu, C. GRTR: Gradient Rebalanced Traffic Sign Recognition for Autonomous Vehicles. IEEE Trans. Autom. Sci. Eng. 2023, 1–13. [Google Scholar] [CrossRef]
  2. Yang, Y.; Wang, W.; Liu, L.; Dev, K.; Qureshi, N.M.F. AoI Optimization in the UAV-Aided Traffic Monitoring Network Under Attack: A Stackelberg Game Viewpoint. IEEE Trans. Intell. Transp. Syst. 2023, 24, 932–941. [Google Scholar] [CrossRef]
  3. Zhang, G.P. Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing 2003, 50, 159–175. [Google Scholar] [CrossRef]
  4. Zhang, L.; Liu, Q.C.; Yang, W.; Wei, N.; Dong, D. An Improved K-nearest Neighbor Model for Short-term Traffic Flow Prediction. Procedia Soc. Behav. Sci. 2013, 96, 653–662. [Google Scholar] [CrossRef]
  5. Zivot, E.; Wang, J. Vector Autoregressive Models for Multivariate Time Series. In Modeling Financial Time Series with S-Plus®; Springer: New York, NY, USA, 2003; pp. 369–413. [Google Scholar] [CrossRef]
  6. Ma, X.; Dai, Z.; He, Z.; Na, J.; Wang, Y.; Wang, Y. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction. Sensors 2017, 17, 818. [Google Scholar] [CrossRef] [PubMed]
  7. Yao, H.; Wu, F.; ke, J.; Tang, X.; Jia, Y.; Lu, S.; Gong, P.; Ye, J. Deep Multi-View Spatial-Temporal Network for Taxi Demand Prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar] [CrossRef]
  8. Zhang, J.; Zheng, Y.; Qi, D. Deep Spatio-Temporal Residual Networks for Citywide Crowd Flows Prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 31. [Google Scholar] [CrossRef]
  9. Ubal Núñez, C.; Di-Giorgi, G.; Contreras-Reyes, J.; Salas, R. Predicting the Long-Term Dependencies in Time Series Using Recurrent Artificial Neural Networks. Mach. Learn. Knowl. Extr. 2023, 5, 1340–1358. [Google Scholar] [CrossRef]
  10. Zhao, L.; Song, Y.; Zhang, C.; Liu, Y.; Wang, P.; Lin, T.; Deng, M.; Li, H. T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction. IEEE Trans. Intell. Transp. Syst. 2020, 21, 3848–3858. [Google Scholar] [CrossRef]
  11. Ye, J.; Sun, L.; Du, B.; Fu, Y.; Xiong, H. Coupled Layer-wise Graph Convolution for Transportation Demand Prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–7 February 2021; Volume 35, pp. 4617–4625. [Google Scholar] [CrossRef]
  12. Xu, Y.; Cai, X.; Wang, E.; Liu, W.; Yang, Y.; Yang, F. Dynamic traffic correlations based spatio-temporal graph convolutional network for urban traffic prediction. Inf. Sci. 2022, 621, 580–595. [Google Scholar] [CrossRef]
  13. Yu, B.; Yin, H.; Zhu, Z. Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, Stockholm, Sweden, 13–19 July 2018; AAAI Press Inc.: Menlo Park, CA, USA, 2018; pp. 3634–3640. [Google Scholar] [CrossRef]
  14. Zhang, W.; Zhang, C.; Tsung, F. Transformer Based Spatial-Temporal Fusion Network for Metro Passenger Flow Forecasting. In Proceedings of the 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE), Lyon, France, 23–27 August 2021; pp. 1515–1520. [Google Scholar]
  15. Tao, S.; Zhang, H.; Yang, F.; Wu, Y.; Li, C. Multiple Information Spatial-Temporal Attention based Graph Convolution Network for traffic prediction. Appl. Soft Comput. 2023, 136, 110052. [Google Scholar] [CrossRef]
  16. Guo, S.; Lin, Y.; Feng, N.; Song, C.; Wan, H. Attention Based Spatial-Temporal Graph Convolutional Networks for Traffic Flow Forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 922–929. [Google Scholar] [CrossRef]
  17. Zhu, C.; Yu, C.X.; Huo, J. Research on Spatio-Temporal Network Prediction Model of Parallel-Series Traffic Flow Based on Transformer and Gcat. SSRN Electron. J. 2022. [Google Scholar] [CrossRef]
  18. Huang, X.; Ye, Y.; Yang, X.; Xiong, L. Multi-view dynamic graph convolution neural network for traffic flow prediction. Expert Syst. Appl. 2023, 222, 119779. [Google Scholar] [CrossRef]
  19. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All You Need. In Proceedings of the NIPS’17: 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 6000–6010. [Google Scholar]
  20. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 9992–10002. [Google Scholar]
  21. Dai, Z.; Yang, Z.; Yang, Y.; Carbonell, J.G.; Le, Q.V.; Salakhutdinov, R. Transformer-XL: Attentive Language Models beyond a Fixed-Length Context. arXiv 2019, arXiv:1901.02860. [Google Scholar]
  22. Xu, M.; Dai, W.; Liu, C.; Gao, X.; Lin, W.; Qi, G.J.; Xiong, H. Spatial-Temporal Transformer Networks for Traffic Flow Forecasting. arXiv 2020, arXiv:2001.02908. [Google Scholar]
  23. Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting. arXiv 2022, arXiv:2106.13008. [Google Scholar]
  24. Liu, Y.; Wu, H.; Wang, J.; Long, M. Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting. arXiv 2022, arXiv:2205.14415. [Google Scholar]
  25. Jiang, J.; Han, C.; Zhao, W.X.; Wang, J. PDFormer: Propagation Delay-Aware Dynamic Long-Range Transformer for Traffic Flow Prediction. arXiv 2023, arXiv:2301.07945. [Google Scholar] [CrossRef]
  26. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  27. Chung, J.; Gülçehre, Ç.; Cho, K.; Bengio, Y. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv 2014, arXiv:1412.3555. [Google Scholar]
  28. Chen, C.; Varaiya, P.P. Freeway Performance Measurement System (Pems); PATH Research Report; Sage: Thousand Oaks, CA, USA, 2002. [Google Scholar]
  29. Box, G.E.P.; Jenkins, G. Time Series Analysis, Forecasting and Control; Holden-Day, Inc.: Oakland, CA, USA, 1990. [Google Scholar]
  30. Brockwell, P.J.; Davis, R.A. Time Series: Theory and Methods; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  31. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to Sequence Learning with Neural Networks. arXiv 2014, arXiv:1409.3215. [Google Scholar]
  32. Li, Y.; Yu, R.; Shahabi, C.; Liu, Y. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. arXiv 2017, arXiv:1707.01926. [Google Scholar]
  33. Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Zhang, C. Graph WaveNet for Deep Spatial-Temporal Graph Modeling. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, Macao, China, 10–16 August 2019; AAAI Press Inc.: Menlo Park, CA, USA, 2019; pp. 1907–1913. [Google Scholar] [CrossRef]
  34. Song, C.; Lin, Y.; Guo, S.; Wan, H. Spatial-Temporal Synchronous Graph Convolutional Networks: A New Framework for Spatial-Temporal Network Data Forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 914–921. [Google Scholar] [CrossRef]
  35. Shao, Z.; Zhang, Z.; Wang, F.; Wei, W.; Xu, Y. Spatial-Temporal Identity: A Simple yet Effective Baseline for Multivariate Time Series Forecasting. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA, 17–21 October 2022. [Google Scholar]
Figure 1. Framework of GTPFN.
Figure 2. An example of constructing the input of the time series segments (suppose the size of the prediction window is 1 h, $T_h = T_p$, and $P_d$ and $P_w$ both have values of 2).
Figure 3. Overall structure of the Pattern Fusion Transformer.
Figure 4. Results of the ablation experiment. (a) MAE Loss. (b) RMSE Loss. (c) MAPE Loss.
Figure 5. Results of the hyperparameter experiment. (a) MAE Loss. (b) RMSE Loss. (c) MAPE Loss.
Table 1. Dataset description and statistics.

| Dataset  | #Sensors | Granularity | #Time Steps | Time Range            |
|----------|----------|-------------|-------------|-----------------------|
| PEMS-Bay | 325      | 5 min       | 52,116      | 01/01/2017–06/30/2017 |
| PEMS04   | 307      | 5 min       | 16,992      | 01/01/2018–02/28/2018 |
| PEMS07   | 883      | 5 min       | 28,224      | 05/01/2017–08/31/2017 |
| PEMS08   | 170      | 5 min       | 17,856      | 07/01/2016–08/31/2016 |
Table 2. Traffic forecasting result comparison on different datasets (H3, H6, and H12 denote prediction horizons 3, 6, and 12).

| Dataset | Method | H3 MAE | H3 RMSE | H3 MAPE | H6 MAE | H6 RMSE | H6 MAPE | H12 MAE | H12 RMSE | H12 MAPE |
|---|---|---|---|---|---|---|---|---|---|---|
| PEMS-Bay | HA | 2.88 | 5.59 | 6.77% | 2.88 | 5.59 | 6.77% | 2.88 | 5.59 | 6.77% |
| PEMS-Bay | ARIMA | 1.62 | 3.30 | 3.50% | 2.33 | 4.76 | 5.40% | 3.38 | 6.50 | 8.30% |
| PEMS-Bay | VAR | 1.74 | 3.16 | 3.60% | 2.32 | 4.25 | 5.00% | 2.93 | 5.44 | 6.50% |
| PEMS-Bay | FC-LSTM | 2.05 | 4.19 | 4.80% | 2.20 | 4.55 | 5.20% | 2.37 | 4.96 | 5.70% |
| PEMS-Bay | DCRNN | 1.39 | 2.80 | 2.73% | 1.66 | 3.81 | 3.75% | 1.98 | 4.64 | 4.75% |
| PEMS-Bay | Graph WaveNet | 1.39 | 2.80 | 2.69% | 1.65 | 3.75 | 3.65% | 1.97 | 4.58 | 4.63% |
| PEMS-Bay | ASTGCN | 1.52 | 3.13 | 3.22% | 2.01 | 4.27 | 4.48% | 2.61 | 5.42 | 6.00% |
| PEMS-Bay | STGCN | 1.35 | 2.86 | 2.86% | 1.69 | 3.83 | 3.85% | 2.00 | 4.56 | 4.74% |
| PEMS-Bay | STSGCN | 1.44 | 3.01 | 3.04% | 1.83 | 4.18 | 4.17% | 2.26 | 5.21 | 5.40% |
| PEMS-Bay | STID | 1.30 | 2.81 | 2.73% | 1.62 | 3.72 | 3.68% | 1.89 | 4.40 | 4.47% |
| PEMS-Bay | GTPFN | 1.31 | 2.75 | 2.65% | 1.62 | 3.65 | 3.52% | 1.90 | 4.35 | 4.29% |
| PEMS04 | HA | 30.26 | 60.93 | 72.24% | 30.26 | 60.93 | 72.24% | 30.26 | 60.93 | 72.24% |
| PEMS04 | ARIMA | 21.98 | 35.21 | 16.52% | 25.38 | 39.21 | 21.03% | 26.67 | 40.74 | 22.43% |
| PEMS04 | VAR | 21.94 | 34.40 | 16.42% | 23.72 | 36.58 | 18.02% | 26.76 | 40.28 | 20.94% |
| PEMS04 | FC-LSTM | 21.37 | 33.31 | 15.21% | 23.72 | 36.58 | 18.02% | 26.76 | 40.28 | 20.94% |
| PEMS04 | DCRNN | 19.65 | 31.29 | 15.17% | 21.80 | 34.11 | 16.83% | 26.20 | 39.91 | 18.43% |
| PEMS04 | Graph WaveNet | 18.75 | 29.80 | 14.14% | 20.40 | 31.91 | 15.85% | 23.21 | 35.41 | 19.43% |
| PEMS04 | STGCN | 19.70 | 31.15 | 14.83% | 20.70 | 32.86 | 15.28% | 22.14 | 34.99 | 16.92% |
| PEMS04 | ASTGCN | 20.16 | 31.53 | 14.13% | 22.29 | 34.27 | 15.65% | 26.23 | 40.12 | 19.19% |
| PEMS04 | STSGCN | 19.80 | 31.58 | 13.41% | 21.30 | 33.84 | 14.27% | 24.47 | 38.84 | 16.27% |
| PEMS04 | STID | 17.52 | 28.48 | 12.00% | 18.29 | 29.86 | 12.46% | 19.58 | 31.79 | 13.38% |
| PEMS04 | GTPFN | 17.72 | 29.74 | 11.82% | 18.51 | 31.24 | 12.18% | 19.87 | 33.42 | 13.00% |
| PEMS07 | HA | 37.59 | 51.65 | 21.83% | 37.59 | 51.65 | 21.83% | 37.59 | 51.65 | 21.83% |
| PEMS07 | ARIMA | 32.02 | 48.83 | 18.30% | 35.18 | 52.91 | 20.54% | 38.12 | 55.64 | 20.77% |
| PEMS07 | VAR | 20.09 | 32.13 | 13.61% | 25.58 | 40.41 | 17.44% | 32.86 | 52.05 | 26.00% |
| PEMS07 | FC-LSTM | 20.42 | 33.21 | 8.79% | 23.18 | 37.54 | 9.80% | 28.73 | 45.63 | 12.23% |
| PEMS07 | DCRNN | 19.45 | 31.39 | 8.29% | 21.18 | 34.42 | 9.01% | 24.14 | 38.84 | 10.42% |
| PEMS07 | Graph WaveNet | 18.69 | 30.69 | 8.02% | 20.26 | 33.37 | 8.56% | 22.79 | 37.11 | 9.73% |
| PEMS07 | STGCN | 20.33 | 32.73 | 8.68% | 21.66 | 35.35 | 9.16% | 22.74 | 37.94 | 9.71% |
| PEMS07 | ASTGCN | 21.36 | 32.91 | 8.87% | 22.63 | 36.45 | 9.86% | 24.51 | 37.97 | 11.03% |
| PEMS07 | STSGCN | 20.21 | 31.65 | 8.46% | 21.45 | 33.95 | 8.96% | 23.99 | 39.36 | 10.13% |
| PEMS07 | STID | 18.31 | 30.39 | 7.72% | 19.59 | 32.90 | 8.30% | 21.52 | 36.29 | 9.15% |
| PEMS07 | GTPFN | 17.32 | 29.88 | 7.16% | 18.38 | 31.96 | 7.56% | 20.00 | 34.74 | 8.32% |
| PEMS08 | HA | 29.52 | 44.03 | 16.59% | 29.52 | 44.03 | 16.59% | 29.52 | 44.03 | 16.59% |
| PEMS08 | ARIMA | 19.56 | 29.78 | 12.45% | 22.35 | 33.43 | 14.43% | 26.27 | 38.86 | 17.38% |
| PEMS08 | VAR | 19.52 | 29.73 | 12.54% | 22.25 | 33.30 | 14.23% | 26.17 | 38.97 | 17.32% |
| PEMS08 | FC-LSTM | 17.38 | 26.27 | 12.63% | 21.22 | 31.97 | 17.32% | 30.96 | 43.96 | 25.72% |
| PEMS08 | DCRNN | 16.62 | 25.48 | 10.04% | 17.88 | 17.63 | 11.38% | 22.51 | 34.21 | 14.17% |
| PEMS08 | Graph WaveNet | 14.22 | 22.96 | 9.45% | 15.94 | 24.72 | 9.77% | 17.27 | 26.77 | 11.26% |
| PEMS08 | STGCN | 15.45 | 25.13 | 9.98% | 17.79 | 27.38 | 11.03% | 21.46 | 33.71 | 13.34% |
| PEMS08 | ASTGCN | 16.45 | 25.18 | 11.13% | 18.76 | 28.57 | 12.33% | 22.53 | 33.69 | 15.34% |
| PEMS08 | STSGCN | 16.65 | 25.40 | 10.90% | 17.82 | 27.31 | 11.60% | 19.77 | 31.43 | 13.12% |
| PEMS08 | STID | 13.28 | 21.66 | 8.62% | 14.21 | 23.57 | 9.24% | 15.58 | 25.89 | 10.33% |
| PEMS08 | GTPFN | 12.95 | 21.93 | 8.94% | 13.57 | 23.28 | 9.41% | 14.47 | 25.40 | 10.34% |