Article

Enhancing Predictive Models for On-Street Parking Occupancy: Integrating Adaptive GCN and GRU with Household Categories and POI Factors

by Xiaohang Zhao 1,2 and Mingyuan Zhang 1,*
1 Department of Construction Management, Dalian University of Technology, Dalian 116024, China
2 Department of Architecture and Civil Engineering, City University of Hong Kong, Kowloon, Hong Kong 999077, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(18), 2823; https://doi.org/10.3390/math12182823
Submission received: 12 August 2024 / Revised: 7 September 2024 / Accepted: 10 September 2024 / Published: 11 September 2024

Abstract

Accurate predictions of parking occupancy are vital for navigation and autonomous transport systems. This research introduces a deep learning model, AGCRU, which integrates Adaptive Graph Convolutional Networks (GCNs) with Gated Recurrent Units (GRUs) for predicting on-street parking occupancy. By leveraging real-world data from Melbourne, the proposed model utilizes on-street parking sensors to capture both temporal and spatial dynamics of parking behaviors. The AGCRU model is enhanced with the inclusion of Points of Interest (POIs) and housing data to refine its predictive accuracy based on spatial relationships and parking habits. Notably, the model demonstrates a mean absolute error (MAE) of 0.0156 at 15 min, 0.0330 at 30 min, and 0.0558 at 60 min; root mean square error (RMSE) values are 0.0244, 0.0665, and 0.1003 for these intervals, respectively. The mean absolute percentage error (MAPE) for these intervals is 1.5561%, 3.3071%, and 5.5810%. These metrics, considerably lower than those from traditional and competing models, indicate the high efficiency and accuracy of the AGCRU model in an urban setting. This demonstrates the model's value as a tool for enhancing urban parking management and planning strategies.

1. Introduction

As urbanization progresses and private vehicle ownership rises, urban parking challenges have become increasingly acute, particularly during peak periods in both commercial and residential districts. Insufficient parking management exacerbates traffic congestion and increases the time drivers spend searching for available parking spaces [1]. This not only boosts carbon dioxide emissions but could also lead to suboptimal utilization of urban space [2]. Current parking resource management strategies fail to effectively address this dynamically varying demand, frequently resulting in parking shortages during peak times and exacerbating urban traffic conditions. Consequently, the inadequacies of the current management system in forecasting and allocating parking resources highlight the inefficiencies in urban parking distribution, emphasizing the urgent need for enhancements to improve the overall effectiveness of urban transportation systems [3].
The performance of short-term parking occupancy prediction models can be strongly affected by the development of machine learning technologies and the algorithms employed. These models draw on various advanced algorithms, such as the long short-term memory (LSTM) neural network [4] and deep learning models [5], which have significantly enhanced the management of parking resource allocation. For instance, Gong, et al. [6] developed a temporal Graph Convolutional Network model (CPPM) that integrates Convolutional Neural Networks (CNNs) for analyzing street view images with Bayesian probabilities, as well as spatiotemporal feature mining using Graph Convolutional Networks (GCNs) and Gated Recurrent Units (GRUs). Their results demonstrated that this model excels in forecasting parking lot occupancy across various cities. Feng, et al. [7] predicted parking lot occupancy by integrating temporal and weather information. Liu, et al. [8] also suggested a deep learning model that combines Graph Convolutional Networks (GCNs) and long short-term memory (LSTM) networks. Ye, et al. [9] identified spatial dependencies and temporal characteristics, incorporating periodic changes into their analysis, and used real shared parking data from Chengdu to demonstrate the model's predictive performance. These models typically utilize historical data to train prediction algorithms, enabling them to accurately forecast short-term parking demand and occupancy rates.
However, despite significant progress in prediction accuracy achieved by machine learning methods, there are still several issues and challenges. First, existing models often focus on macro factors such as time and geographic location but do not adequately consider parking behavior preferences and habits, which might affect the generalizability and applicability of the models. Second, the adaptability and transferability of current models remain a concern. A model that performs well in a specific city or region might not be directly applicable in other areas, as differences in traffic conditions, cultural backgrounds, and urban layouts can greatly influence the effectiveness of the model. Therefore, improving models to include more influencing factors and enhancing their adaptability and accuracy in various environments are crucial challenges that need to be addressed in current curbside parking prediction technologies.
In this study, an on-street parking prediction model, AGCRU, that integrates Adaptive Graph Convolutional Networks (Adaptive GCNs) with Gated Recurrent Units (GRUs) was developed and validated, aiming to significantly improve the accuracy and computational efficiency of predictions. The model incorporates micro influencing factors, such as individual parking habits, nearby Points of Interest (POIs), and diverse residential types, to analyze how these variables shape parking patterns and achieve more accurate demand predictions. Considering the impact of surrounding facilities and residential types on parking behavior, it is crucial that our model includes Points of Interest (POIs) and various residential categories. POIs such as schools, hospitals, and shopping areas directly affect nearby parking demand, as they frequently attract visitors and have specific peak times. Similarly, residential types indicate the density of inhabitants and potential parking behaviors in an area, factors that are vital for more accurate predictions of parking demand. By integrating these specific spatial factors, our model not only captures the necessary temporal and spatial dynamics but also adapts its predictive capabilities to the unique characteristics of each urban area, thereby improving accuracy and applicability. The Adaptive GCN at the core of our model boasts a critical advantage: it can dynamically adjust its structure based on the specific context and needs of different cities, ensuring optimal model configuration. This feature notably boosts the model's generalization capabilities across complex data environments. In summary, this paper makes the following contributions:
  • This study introduces the novel AGCRU model, which combines the strengths of Adaptive Graph Convolutional Networks (GCNs) and Gated Recurrent Units (GRUs). This model is further enhanced by integrating Points of Interest (POIs) and household data, significantly improving the prediction accuracy of on-street parking occupancy.
  • A unique feature of the AGCRU model is the implementation of an adaptive adjacency matrix. In contrast to traditional static graph models, the adjacency relationships are dynamically adjusted by the adaptive GCN component based on real-time data changes. This significantly enhances the model's ability to learn accurately and to capture the spatial dependencies inherent in urban environments.
  • The AGCRU model was validated using a large-scale real-world dataset from Melbourne, demonstrating its performance over existing models through comprehensive testing across multiple time intervals.
The structure of this paper is as follows. Section 2 surveys the works related to parking behavior predictions. Section 3 outlines our method employed for parking prediction. Section 4 presents a case study from the city of Melbourne, Australia, where we apply our methodology to forecast parking occupancy. Section 5 analyzes and discusses the results. This paper is concluded in Section 6, where we summarize our findings and contributions.

2. Related Work

2.1. On-Street Parking Occupancy Feature

The analysis of spatiotemporal features of parking occupancy is a key component in predicting parking demand [9]. Research has shown that on-street parking occupancy information exhibits significant spatiotemporal correlation and heterogeneity [1]. Despite varying backgrounds across different cities or locations, parking occupancy data still demonstrate similar spatiotemporal distribution patterns [10,11,12,13]. Studies reveal that in cities with similar population structures and commercial activities, parking occupancy displays consistent periodicity and trends [14]. By subdividing the dataset into weekly, weekday, and weekend segments, that study analyzed the occupancy trends at different parking sites during various time periods. Moreover, Zhang, et al. [15] developed an innovative periodic weather-aware LSTM model (PewLSTM), which integrates the effects of periodic weather changes and special events, precisely predicting parking behavior during various periods including regular days, holidays, and the COVID-19 pandemic, highlighting the importance of temporal heterogeneity in parking prediction.
Parking occupancy is influenced by a variety of factors including regional land use types, population density, and commercial activities, demonstrating clear spatial heterogeneity. For instance, Nourinejad and Roorda [16] explored the spatiotemporal heterogeneity of parking demand in different urban functional areas (such as residential and office areas). The study revealed how parking needs in these areas vary significantly by location and time. Specifically, the parking demand patterns in residential and office areas are distinctly different, reflecting the primary functions and user behaviors of these areas.
In summary, the core challenge in analyzing parking occupancy prediction lies in accurately capturing the characteristics of time and space. In the temporal dimension, we must not only focus on the periodicity, consistency, and trends in parking occupancy but also consider its temporal heterogeneity to identify variations across different time periods. Similarly, in the spatial dimension, besides analyzing the heterogeneity between regions, we should also focus on the periodicity, consistency, and trends within each area. By comprehensively considering these spatiotemporal characteristics, we can more accurately predict parking demand.

2.2. Methodologies of Parking Occupancy Prediction

The methodological progression in parking occupancy prediction research has witnessed a paradigm shift from statistical approaches to machine learning and deep learning paradigms [17]. Initial investigations predominantly employed statistical techniques such as linear regression and time series analysis. For instance, Ho, et al. [18] developed a parking demand forecasting model utilizing a linear regression framework, while Chen, et al. [19] applied the ARIMA model for parking space occupancy prediction, both demonstrating the efficacy of these methods in enhancing predictive accuracy. However, these traditional approaches exhibited limitations in capturing complex interactions and nonlinear phenomena inherent in the data.
As the field matured, researchers gravitated towards more intricate time series analyses and queueing theory applications. Tavafoghi, et al. [20] formulated a queueing model for real-time estimation of parking lot occupancy, whereas Hess [21] introduced the PARKFIT algorithm, leveraging high-resolution spatial data to analyze urban parking distribution, thus augmenting the spatial precision of parking demand forecasts.
The advent of technological innovations has heralded the emergence of AI-driven parking prediction infrastructures. Yu, et al. [22] presented an adaptive system capable of dynamically modifying prediction strategies based on real-time data inputs. Neural network models, such as those proposed in Rajabioun and Ioannou [13], offer novel perspectives on deciphering the dynamic complexities of urban parking demand by integrating spatiotemporal attributes. Similarly, the Adaptation and Learning to Learn (ALL) approach integrates advanced deep learning and federated learning techniques to significantly improve small-sample parking occupancy prediction accuracy, as demonstrated by Qu, et al. [23]. The study conducted by Yang, et al. [24] elucidated that the amalgamation of deep learning architectures (e.g., Graph Convolutional Neural Networks and Recurrent Neural Networks) with multi-source spatiotemporal datasets substantially enhanced the accuracy of real-time predictions for urban street-level parking occupancy.
In essence, the methodologies of parking occupancy prediction have transitioned from rudimentary statistical analyses to machine learning and deep learning techniques. These sophisticated approaches have not only amplified the capacity of models to handle complexity but also enhanced the interpretability of intricate spatiotemporal data. Notwithstanding the breakthroughs achieved through machine learning and deep learning techniques in parking prediction, challenges persist regarding the regional applicability and flexibility of these models. To address these issues, we explore a more adaptable methodology, Adaptive Graph Convolutional Networks [25], which possess the capability to autonomously modify their network architecture and parameters in response to specific regional conditions, thereby yielding more precise and reliable predictions.

2.3. Influencing Factors of Parking Prediction

Parking demand prediction is a crucial issue in urban planning and transportation management, involving multidimensional influencing factors that can be divided into two main aspects: macroscopic and microscopic factors [25]. Macroscopic demand factors focus on regional characteristics and urban planning elements that influence overall parking demand, including location conditions, building characteristics, land use patterns, and transportation systems. Hess [21] emphasized the importance of location, pointing out that parking pressure in high-demand areas such as city centers is much greater than in suburbs. Van Ommeren, et al. [26] explored the relationship between building scale and parking demand, finding that parking demand generated by large commercial centers is significantly higher than that of residential areas. Shoup [27] suggested that mixed-use areas, by optimizing daily travel distances, may reduce the demand for parking spaces. Pierce and Shoup [28] showed that efficient public transportation systems can effectively alleviate parking pressure in core business districts. Mei, et al. [29] analyzed the dynamic changes in parking demand across different times and regions in a city using a proxy model.
Microscopic demand distribution factors focus on individual-level demand characteristics and their impact on parking space allocation, including personal attributes, economic income, and spatiotemporal demand distribution. Hess [21] found a correlation between age and parking service preferences, while Inci [30] indicated that higher parking fees may encourage low-income groups to seek alternative travel modes. Chao, et al. [31] delved into the differences in parking behavior among individuals from various socioeconomic backgrounds. Bai, et al. [32] emphasized the influence of environmental awareness and information cognition on parking choices, and Van Ommeren, Wentink, and Rietveld [26] highlighted the impact of drivers’ psychological expectations on parking efficiency.
It is worth noting that there are complex interactions between macroscopic and microscopic factors. For example, Christiansen, et al. [33] examined the influence of parking facilities and the built environment on travel behavior, highlighting how accessibility and regulations at both home and the destination can significantly affect car usage patterns. Pierce and Shoup [28] emphasized how socioeconomic status and travel habits shape parking demand patterns in specific urban contexts. By understanding the interplay of these macroscopic and microscopic factors, urban planners can optimize parking resource allocation through the implementation of scientific policies and the utilization of modern technologies. As Zhou, et al. [34] noted, this optimization should be able to flexibly respond to dynamic changes in actual demand. Considering these macroscopic and microscopic factors comprehensively, urban planners can optimize parking resource allocation by implementing scientific policies (Zong, Zeng, and Yu [4]) and utilizing modern technologies to respond to changes in actual demand [35].
Predicting parking demand is a complex, multidimensional challenge that encompasses multiple layers, from the macroscopic urban layout to individual behavior. Increasingly, researchers are incorporating relevant influencing factors into their models, enhancing the accuracy and reliability of the predictions. This study aims to explore the interaction mechanisms of various influencing factors within different regional contexts, providing more comprehensive and precise decision support for optimizing parking resource allocation. Moreover, to effectively illustrate the development of related work, Table 1 has been presented.

3. Prediction Methodology

3.1. Overview of the Prediction Methodology

To accurately predict urban parking occupancy, this paper proposes an innovative method that integrates an Adaptive Graph Convolutional Network (Adaptive GCN) with a Gated Recurrent Unit (GRU). This approach enhances prediction accuracy by combining spatial adaptive graph convolution with time series analysis. The data, collected from urban sensors, include parking status, timestamps, and geographic locations. During the data preprocessing phase, anomaly detection, data cleansing, and feature normalization are conducted, with a particular emphasis on extracting temporal features from timestamps and spatial features from geographic data. The construction of the Adaptive GCN is based on the geographical proximity of parking spaces and other attributes such as Points of Interest (POIs), forming nodes and edges that capture complex spatial relationships. These relationships are dynamically adjusted in their connection weights to more accurately reflect actual spatial dependencies. Sequential data are processed by the GRU to capture temporal dependencies, utilizing historical parking data to predict future trends. In the model integration strategy, spatial feature outputs from the Adaptive GCN are combined with the temporal dynamics from the GRU through one or multiple fusion layers, enhancing the model’s ability to predict urban parking dynamics, thereby achieving precise predictions of parking occupancy. This integrated approach not only optimizes model performance but also enhances the efficiency and practicality of the predictions. The conceptual framework of this predictive methodology is illustrated in Figure 1.
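To make the fusion step concrete, the sketch below is a minimal PyTorch illustration; the implementation reported later in this paper uses TensorFlow 2.3, and the class and dimension names here are assumptions. It shows how a spatial embedding from a graph-convolution component and a temporal embedding from a GRU could be concatenated and passed through a fully connected fusion layer to produce occupancy forecasts.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Illustrative fusion of spatial (GCN) and temporal (GRU) embeddings.

    Assumes `spatial_feat` has shape [batch, num_nodes, d_spatial] and
    `temporal_feat` has shape [batch, num_nodes, d_temporal]; names are assumed.
    """
    def __init__(self, d_spatial, d_temporal, d_hidden, horizon):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(d_spatial + d_temporal, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, horizon),  # one occupancy value per forecast step
        )

    def forward(self, spatial_feat, temporal_feat):
        x = torch.cat([spatial_feat, temporal_feat], dim=-1)  # concatenate embeddings
        return self.fuse(x)                                   # [batch, num_nodes, horizon]

head = FusionHead(d_spatial=32, d_temporal=64, d_hidden=128, horizon=4)
out = head(torch.randn(8, 100, 32), torch.randn(8, 100, 64))  # -> shape [8, 100, 4]
```

With random inputs of shapes [8, 100, 32] and [8, 100, 64], the head returns a tensor of shape [8, 100, 4], i.e., one value per node per forecast step.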

3.2. Adaptive Spatial Correlation Relationship Model

3.2.1. Adaptive Adjacency Matrix

In this proposed Adaptive Graph Convolutional Network (Adaptive GCN) model, one of the core components is the dynamically learned adjacency matrix, whose processing details are shown in Algorithm 1, which enables the model to adjust the graph structure based on the complexity of the data and the specific requirements of the task. Initially, this adjacency matrix is not statically defined but is instead a model parameter that is dynamically updated through the training process. During the initialization phase of model training, the adjacency matrix is assigned random values and is adjusted in each iteration based on the gradient of the loss function using backpropagation. Through this process, the adjacency matrix learns to capture the underlying structure of the graph, automatically discovering effective connections between nodes.
Figure 1. Overview of the prediction methodology.
The specific process of updating the adjacency matrix involves calculating the normalized adjacency matrix, which is achieved through the degree matrix $D$ for each node, where each diagonal element $D_{ii}$ of $D$ represents the degree of node $i$ [36]. The normalized adjacency matrix $\hat{A}$ is computed as $\hat{A} = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ [37], which helps maintain the stability of network training and optimizes the propagation of graph signals. After normalization, we compute the Laplacian matrix $L = I - \hat{A}$ [38], where $I$ is the identity matrix. This step is crucial for graph convolution operations, enabling effective propagation of node features within the graph. After updating the adjacency matrix and computing the Laplacian matrix, the model inputs the node features and the Laplacian matrix into the Chebyshev polynomial filter for feature transformation.
Algorithm 1: Calculate Learnable Adjacency Matrix in Adaptive GCN
Input: A learnable adjacency matrix parameter A.
Output: The normalized adjacency matrix $\hat{A}$ and the Laplacian matrix $L$.
1. Normalize Adjacency Matrix
   Calculate the degree matrix D by summing each row of the adjacency matrix A.
   Calculate the inverse square root of D; replace any infinite values with zeros.
   Construct the diagonal matrix $D^{-\frac{1}{2}}$.
   Normalize A to obtain the normalized adjacency matrix $\hat{A} = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$.
2. Compute Laplacian Matrix
   Create the identity matrix I of shape (V, V) with the same data type and device as A.
   Compute the graph Laplacian matrix $L = I - \hat{A}$.
3. Return
   Output the normalized adjacency matrix $\hat{A}$ and the Laplacian matrix $L$.
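As a minimal sketch of Algorithm 1 (in PyTorch; variable names are illustrative, and the learnable adjacency is simply stored as an nn.Parameter, whereas the paper reports a TensorFlow 2.3 implementation), the following code performs the symmetric normalization and Laplacian computation described above.

```python
import torch

def normalized_adjacency_and_laplacian(A: torch.Tensor):
    """Given an adjacency matrix A (V x V), return the symmetrically
    normalized adjacency A_hat = D^{-1/2} A D^{-1/2} and the graph
    Laplacian L = I - A_hat, as in Algorithm 1."""
    deg = A.sum(dim=1)                                  # degree of each node
    d_inv_sqrt = deg.pow(-0.5)
    d_inv_sqrt[torch.isinf(d_inv_sqrt)] = 0.0           # guard isolated nodes
    D_inv_sqrt = torch.diag(d_inv_sqrt)
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt
    L = torch.eye(A.size(0), dtype=A.dtype, device=A.device) - A_hat
    return A_hat, L

# The adjacency itself can be a trainable parameter updated by backpropagation:
V = 50
A_learn = torch.nn.Parameter(torch.rand(V, V))
A_hat, L = normalized_adjacency_and_laplacian(A_learn.abs())  # keep weights non-negative
```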

3.2.2. Enhanced Feature Representation

In the Adaptive Graph Convolutional Network (Adaptive GCN), the updated adjacency matrix and the computed Laplacian matrix are used for graph convolution operations. The model inputs the node features along with the Laplacian matrix into the Chebyshev polynomial filter for feature transformation. These transformed features are processed through a series of convolutional layers, each containing an activation function to introduce nonlinearity and enhance the model’s expressive power.
Specifically, these convolutional layers operate as follows: each layer applies a graph convolution operation, convolving the node features with the corresponding convolutional kernel to generate new node feature representations. The convolution operation typically involves matrix multiplication, multiplying the input features with the convolutional kernel [37]. The mathematical expression is as follows:
$$H^{(l+1)} = \sigma\left(\sum_{k=1}^{K} \alpha_k \cdot \hat{D}_k^{-\frac{1}{2}} \hat{A}_k \hat{D}_k^{-\frac{1}{2}} H^{(l)} W^{(l)}\right)$$
where $\alpha_k$ are the adaptive weights that have learned to reflect the importance of different types of connections, such as edge types or directions. Each $A_k$ and $D_k$ represents a different type of adjacency and degree matrix, allowing the model to operate on various graph structures.
After each convolution operation, we employ ReLU as a nonlinear activation function to better capture complex feature relationships. The activation function enhances the model’s expressive power, enabling it to learn more intricate feature representations. Additionally, we insert max pooling layers between the convolution layers to reduce feature dimensionality, decrease computational load, and help extract more robust features. Pooling layers effectively summarize local information, aiding the model in better understanding the global structure of the graph.
Moreover, we apply batch normalization (BN) techniques after the convolutional layers to accelerate training and improve model stability. Batch normalization standardizes the output features of each layer, ensuring a mean of 0 and a variance of 1, and then learns adaptive scaling and shifting parameters. The mathematical expression is as follows:
$$\hat{H}^{(l)} = \frac{H^{(l)} - \mu}{\sqrt{\sigma^2 + \varepsilon}}$$
where $\mu$ and $\sigma^2$ are the mean and variance of the current batch, respectively, and $\varepsilon$ is a small value to prevent division by zero [39].
These transformed features, after undergoing the processing steps, are ultimately used for the parking occupancy prediction task. This multi-layer processing approach enables the model to learn richer and more complex node feature representations, thereby improving prediction accuracy and overall model performance. By incorporating the support feature matrix, Adaptive GCN can capture both local and global information within the graph structure and flexibly adapt to different graph structures, significantly enhancing the model's expressive power and predictive performance. The processing detail is shown in Algorithm 2.
Algorithm 2: Adaptive Graph Convolutional Network (Adaptive GCN) Operations
Input: Node feature matrix X, updated adjacency matrix A, degree matrix D, and layer weights W.
Output: Transformed features suitable for parking occupancy prediction.
1. Initialize Graph Convolution Layers:
   Adjacency and degree matrices: set $A_k$ and $D_k$ for each type of connection, considering edge types or directions.
2. Chebyshev Polynomial Filter Application:
   Input: node feature matrix X along with the Laplacian matrix L.
   Process: apply the Chebyshev polynomial filter to transform the features.
3. Graph Convolution Operations:
   For each layer l, perform:
   • Convolution: convolve node features with the convolutional kernel to generate new representations.
   • Activation function: apply ReLU to introduce nonlinearity and enhance expressive power.
4. Feature Pooling and Normalization:
   Max pooling: insert max pooling layers between convolution layers to reduce feature dimensionality and computational load.
   Batch normalization: apply batch normalization to standardize the features of each layer.
5. Output Processing:
   Integration: combine all processed features into the final output for parking occupancy prediction tasks.
6. Return the processed feature data ready for application in prediction tasks.
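The following sketch illustrates one adaptive graph-convolution layer in the spirit of the equation above and Algorithm 2: each connection type k has its own normalized adjacency $\hat{A}_k$ and a learnable scalar weight $\alpha_k$, followed by ReLU and batch normalization. This is an illustrative PyTorch approximation with assumed layer sizes, not the authors' exact implementation; the Chebyshev polynomial expansion is omitted for brevity.

```python
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    """One adaptive GCN layer: H' = BN(ReLU( sum_k alpha_k * A_hat_k @ H @ W ))."""
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)   # shared W^(l)
        self.alpha = nn.Parameter(torch.ones(num_relations))   # adaptive weights alpha_k
        self.bn = nn.BatchNorm1d(out_dim)

    def forward(self, H, A_hats):
        # H: [num_nodes, in_dim]; A_hats: list of normalized adjacencies, each [V, V]
        out = 0.0
        for k, A_hat in enumerate(A_hats):
            out = out + self.alpha[k] * (A_hat @ self.weight(H))
        out = torch.relu(out)       # nonlinearity after aggregation
        return self.bn(out)         # standardize the layer's output features

layer = AdaptiveGraphConv(in_dim=8, out_dim=16, num_relations=2)
H = torch.randn(50, 8)
A_hats = [torch.rand(50, 50), torch.rand(50, 50)]
H_next = layer(H, A_hats)           # -> shape [50, 16]
```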

3.2.3. Adaptive GCN Generation

In this study, the Adaptive Graph Convolutional Network (Adaptive GCN) is introduced, a model specifically designed to accurately process and capture complex spatial relationships. Compared to traditional Graph Convolutional Networks (GCNs) [37], the significant innovation of Adaptive GCN lies in its dynamic learning mechanism, enabling the model not only to process static graph structures but also to dynamically adjust the adjacency relationships between nodes according to data changes [40].
The Graph Convolutional Network (GCN), initially proposed by Kipf and Welling in 2017 [37], aims to extend the concept of traditional Convolutional Neural Networks to graph structured data. Within the GCN model, graph convolution is defined as the product of the adjacency matrix and the feature matrix, enabling information propagation between nodes. Specifically, a single layer of GCN can be represented as follows:
$$H^{(l+1)} = \sigma\left(\hat{D}^{-\frac{1}{2}} \hat{A} \hat{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right),$$
where $H^{(l)}$ represents the node feature matrix at the $l$th layer, containing the feature information for each node at that layer. $A$ denotes the adaptive adjacency matrix, which describes the connectivity between nodes in the graph; its values can be dynamically adjusted during the learning process. $D$ is the degree matrix of $A$, where each element $D_{ii}$ represents the degree of node $i$, which is the number of nodes directly connected to node $i$. $W^{(l)}$ denotes the weight matrix at the corresponding layer, used for transforming node features during the graph convolution operation. $\sigma$ is the nonlinear activation function, ReLU, used to enhance the nonlinearity of the model and help capture more complex data structures. This formulation follows the framework introduced by Kipf and Welling [37], which extends Convolutional Neural Networks to graph-structured data.
The self-attention mechanism, originally introduced by Vaswani, et al. [41] in the Transformer model, has profoundly influenced the field of sequence data processing by allowing models to dynamically focus on different parts of a sequence according to their relative importance. Incorporating principles of self-attention into the GCN, though not a direct implementation of self-attention layers, enhances the learnability and adaptability of node relationships within the network.
Adaptive GCN leverages dynamic weights α κ , inspired by self-attention concepts, to control the strength of information propagation across different edges, optimizing the integration of node information. This adaptation enables a flexible framework where an Adaptive GCN layer can be described as follows:
$$H^{(l+1)} = \sigma\left(\sum_{k=1}^{K} \alpha_k \cdot \hat{D}_k^{-\frac{1}{2}} \hat{A}_k \hat{D}_k^{-\frac{1}{2}} H^{(l)} W^{(l)}\right),$$
where $\alpha_k$ are the adaptive weights associated with each type of connection in the graph. These weights are learned during training and are used to modulate the strength of information propagation between nodes, reflecting the importance of different types of edges, such as their types or directions. Each $A_k$ denotes the adjacency matrix for a specific type of connection within the graph; each type of adjacency matrix corresponds to a specific kind of relationship or interaction between nodes, allowing the model to adapt to various structural patterns within the data. $D_k$ is the degree matrix corresponding to $A_k$: a diagonal matrix in which each diagonal element $D_{k,ii}$ represents the degree of node $i$ with respect to the $k$th type of connection, i.e., the sum of the weights of all edges connected to node $i$ in $A_k$. This approach is based on the methodology discussed in Zhou, et al. [42], who review the methods and applications of graph neural networks, highlighting the utility of dynamic adjustments in network architecture to accommodate diverse data environments.
As illustrated in Figure 2, the input layer of the model integrates multidimensional data sources, including distributions of Points of Interest (POIs), residential housing, parking sensors, and related parking features, organized in the form of feature matrices and dynamic adjacency matrices. The Adaptive GCN layer performs adaptive convolution operations by combining feature and adjacency matrices, effectively capturing and aggregating neighborhood features. This layer also includes mechanisms for weight sharing and adjustment to optimize the learning process and enhance the model’s generalization ability. Subsequently, the activation and normalization stages refine the feature representation through ReLU functions and batch processing techniques, enhancing the model’s nonlinear fitting capabilities. Finally, through global pooling and feature readout operations, high-dimensional features are transformed into the final prediction results required by the output layer, achieving efficient learning and prediction of complex spatial relationships. This structure not only optimizes the handling of information flow but also enhances the model’s adaptability to dynamic and complex data environments.
In summary, the core features of Adaptive GCN are centered around two major technological innovations: First, the adjacency matrix is dynamically updated during the model training process, allowing for real-time capture of changes in the graph structure and more accurately reflecting the dependencies and influence relationships among nodes [42]. Second, the convolution operation enhances the capture of crucial features by dynamically adjusting the weights of the convolution kernels, rather than simply aggregating information from neighboring nodes [43].

3.3. Temporal Correlation Relationship Model Structure

3.3.1. Temporal Dynamics in Parking Demand

In urban roadside parking areas, the demand for parking exhibits significant temporal correlations. Such demand is influenced not only by the continuity of recent historical parking data but also by discernible periodic patterns. For instance, during daily peak hours, the need for parking spaces typically intensifies; concurrently, a distinction in parking patterns between weekdays and weekends is observed. Despite the availability of extensive historical parking time data, the length of the input variables for the predictive model is constrained. Hence, to effectively capture these temporal dependencies, a strategy for key timeframe extraction has been employed.
Setting the current time as $t$ and selecting an appropriate time aggregation interval $q$, we compute the parking occupancy information within the time window $[t, t+q]$ to estimate the forthcoming parking occupancy intensity. Moreover, as demonstrated in Figure 3, we extract key temporal segments from the time series data that represent the hourly interval $T_h$, daily interval $T_d$, and weekly interval $T_w$. To ensure data consistency and processability, the data undergo feature scaling and sequence creation, normalizing them into fixed-length time series.
By extracting relevant feature vectors from the time series and aggregating them, we construct a feature matrix X that includes features from multiple regions. This matrix is then fed into a GRU (Gated Recurrent Unit) layer. The GRU layer, through its internal update gate and reset gate mechanisms, further extracts temporal features from each region and optimizes the information flow. The update gate regulates how much of the old state information should be retained while receiving new input, balancing the retention of historical information with the updating of current information. The reset gate determines how much past state information should be ignored when forming the candidate value of the current state, allowing the model to flexibly forget irrelevant or misleading historical data when new data arrive. Through these mechanisms, the GRU layer effectively processes the input feature matrix, identifying key time-dependent features such as the increase in parking demand during peak hours and the variations in parking patterns between weekdays and weekends. Ultimately, the GRU layer generates outputs that accurately predict short-term parking occupancy rates for various time intervals.
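A minimal numpy sketch of the key-timeframe extraction described above: for a series aggregated at a fixed interval (e.g., 15 min), windows at the hourly, daily, and weekly lags ($T_h$, $T_d$, $T_w$) are stacked into a fixed-length feature matrix. Window lengths and function names are illustrative assumptions.

```python
import numpy as np

def extract_key_segments(series, t, q=4, steps_per_day=96, steps_per_week=672):
    """Return recent (hourly-scale), daily-lag, and weekly-lag windows ending at t.

    `series` is a 1-D occupancy array aggregated at a fixed interval
    (e.g., 15 min, so 96 steps/day and 672 steps/week); q is the window length.
    """
    recent = series[t - q:t]                                   # T_h: most recent window
    daily = series[t - steps_per_day - q:t - steps_per_day]    # T_d: same window one day earlier
    weekly = series[t - steps_per_week - q:t - steps_per_week] # T_w: same window one week earlier
    return np.stack([recent, daily, weekly], axis=0)           # shape [3, q]

# Example: one year of 15-min occupancy for a single parking segment
occupancy = np.random.rand(35040)
X_t = extract_key_segments(occupancy, t=10000)
print(X_t.shape)   # (3, 4)
```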

3.3.2. GRU Model Generation

In this study, Gated Recurrent Units (GRUs) were employed to process the time series features of parking occupancy, as shown in Algorithm 3. GRUs are an improved version of Recurrent Neural Networks (RNNs) [43] that effectively address the common issues of gradient vanishing and exploding by introducing update gates and reset gates [44]. Specifically, GRUs can capture long-term dependencies in time series data while reducing information loss and computational complexity during training. Therefore, GRUs are particularly suitable for analyzing long time series data.
In the proposed model, the parking sensor data are preprocessed to generate an input tensor, which represents multiple timesteps within each batch and the features of each timestep. The input data are passed through two GRU layers for processing. The first GRU layer receives the raw input data and extracts initial temporal dependencies. Its output is then used as the input for the second GRU layer, which further refines the high-level features. Each GRU layer regulates the flow of information through update gates and reset gates, generating candidate hidden states and final hidden states. The final hidden state is mapped to the desired output feature space through a fully connected layer.
The operation of GRUs can be described by the following mathematical expressions [44]:
$$\begin{aligned}
r_t &= \sigma(W_r X_t + U_r h_{t-1} + b_r) \\
z_t &= \sigma(W_z X_t + U_z h_{t-1} + b_z) \\
c_t &= \tanh(W_c X_t + U_c (r_t \odot h_{t-1}) + b_c) \\
h_t &= z_t \odot h_{t-1} + (1 - z_t) \odot c_t
\end{aligned}$$
Here, $\sigma$ denotes the sigmoid activation function, which compresses the input values into the range between 0 and 1, making it suitable for gating mechanisms in the network. It is used to calculate the update gate $z_t$ and reset gate $r_t$, determining how much of the past information to forget and how much new information to add. $W_r$, $W_z$, and $W_c$ are weight matrices associated with the input for the reset gate, update gate, and candidate activation, respectively; these matrices transform the input $X_t$ at each timestep $t$ before it is combined with the previous hidden state. $U_r$, $U_z$, and $U_c$ are weight matrices that transform the previous hidden state $h_{t-1}$; they are crucial in incorporating information from the past into the current state's computations. $b_r$, $b_z$, and $b_c$ are bias vectors for the reset gate, update gate, and candidate activation, respectively; these vectors add an additional layer of abstraction and help the model avoid vanishing gradient problems by adjusting the thresholds at which activations occur. $\odot$ represents element-wise multiplication, used here to combine the results of different gates with the hidden states and candidate activations; this operation is essential for merging old and new information in a controlled manner. $c_t$ is the candidate hidden state, calculated using the hyperbolic tangent function, tanh, which scales values between −1 and 1; it provides a nonlinear transformation of the inputs, contributing to the ability of the GRU to capture complex patterns. $h_t$ is the new hidden state for the current timestep, computed by interpolating between the old hidden state $h_{t-1}$ and the candidate hidden state $c_t$, as modulated by the update gate $z_t$.
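Expressed directly in code, one GRU update step following the equations above could look like the numpy sketch below; weight shapes and names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_r, U_r, b_r, W_z, U_z, b_z, W_c, U_c, b_c):
    """One GRU update following the gate equations above (illustrative shapes)."""
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev + b_r)            # reset gate
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev + b_z)            # update gate
    c_t = np.tanh(W_c @ x_t + U_c @ (r_t * h_prev) + b_c)    # candidate hidden state
    h_t = z_t * h_prev + (1.0 - z_t) * c_t                   # new hidden state
    return h_t

d_in, d_h = 3, 8
rng = np.random.default_rng(0)
params = [rng.normal(size=s) for s in [(d_h, d_in), (d_h, d_h), (d_h,)] * 3]
h = gru_step(rng.normal(size=d_in), np.zeros(d_h), *params)
```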
Algorithm 3: Time Series Feature Extraction using GRU
Input: A tensor of time series data with shape [batch_size, sequence_length, input_dim].
Output: A tensor of processed sequence data with shape [batch_size, output_dim].
1. Initialize the GRU model with input dimension D, hidden size, number of GRU layers, output dimension, and dropout rate.
2. Define the first GRU layer:
   Input dimension: D.
   Hidden size: hidden_size.
3. Define the second GRU layer:
   Input dimension: hidden_size.
   Hidden size: hidden_size.
4. Define the fully connected layer:
   Input dimension: hidden_size.
   Output dimension: output_dim.
5. For each batch of input data:
   (1) Initialize the hidden state h01 for the first GRU layer with zeros.
   (2) Perform the forward pass through the first GRU layer:
       Input: tensor of shape [batch_size, sequence_length, D].
       Output: tensor out1 and its hidden state.
   (3) Initialize the hidden state h02 for the second GRU layer with zeros.
   (4) Perform the forward pass through the second GRU layer:
       Input: tensor out1 from the first GRU layer.
       Output: tensor out2 and its hidden state.
   (5) Apply the fully connected layer to the output of the second GRU layer:
       Input: tensor out2 of shape [batch_size, sequence_length, hidden_size].
       Output: tensor of shape [batch_size, output_dim].
6. Return the processed sequence data.
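A compact PyTorch sketch of Algorithm 3, assuming two stacked GRU layers, dropout between them, and a fully connected readout applied to the last timestep; the paper's implementation is reported in TensorFlow 2.3, so this is an illustrative equivalent with assumed dimension names.

```python
import torch
import torch.nn as nn

class TwoLayerGRU(nn.Module):
    """Two stacked GRU layers followed by a fully connected readout (Algorithm 3)."""
    def __init__(self, input_dim, hidden_size, output_dim, dropout=0.2):
        super().__init__()
        self.gru1 = nn.GRU(input_dim, hidden_size, batch_first=True)
        self.gru2 = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.drop = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_size, output_dim)

    def forward(self, x):                      # x: [batch, seq_len, input_dim]
        out1, _ = self.gru1(x)                 # initial temporal dependencies
        out2, _ = self.gru2(self.drop(out1))   # refined high-level features
        return self.fc(out2[:, -1, :])         # map last hidden state to output

model = TwoLayerGRU(input_dim=3, hidden_size=64, output_dim=1)
y = model(torch.randn(8, 12, 3))               # -> shape [8, 1]
```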

3.4. Model Validation

To accurately evaluate the performance of our proposed model, we conducted a series of comparative analyses with established models. These comparison models included the autoregressive integrated moving average (ARIMA) model, as detailed by Kumar and Vanajakshi [45], which is widely used in time series data analysis as a classic model; the long short-term memory network (LSTM) described by Siami-Namini, et al. [46], which is highly regarded for its advantages in handling temporal dependencies; and the spatiotemporal Graph Convolutional Networks (ST-GCNs) proposed by Yu, Yin and Zhu [22], Attention-Based Spatial–Temporal Graph Convolutional Networks (ASTGCNs) [47], and the hybrid spatial–temporal Graph Convolution Networks (HST-GCNs) [48], all of which excel in capturing spatial and temporal dynamic information. We chose these models for comparison because they each perform exceptionally well in specific data processing scenarios, providing a comprehensive benchmark for evaluating our model's performance under different conditions.
To comprehensively evaluate model performance, we selected three key evaluation metrics: root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). These three metrics are widely used in evaluating predictive model performance. RMSE emphasizes the squared differences in prediction errors, making it sensitive to large errors; MAE provides the average absolute value of prediction errors, offering greater stability; and MAPE expresses errors as a percentage, intuitively reflecting the deviation of predicted values from actual values. This multidimensional evaluation approach helps us more accurately understand the model’s performance in practical applications.
During the model training process, we used the cross-entropy loss function [49] for iterative training. This loss function is particularly suitable for handling classification problems, evaluating the difference between the predicted probability distribution and the actual labels at each iteration. By continuously optimizing this loss value, we improve the model’s prediction accuracy. The choice of cross-entropy is due to its direct optimization of the model’s output probabilities, effectively promoting the model’s learning efficiency in multi-classification problems.
  • Root mean square error:
    $$RMSE = \sqrt{\frac{1}{TM} \sum_{j=1}^{T} \sum_{i=1}^{M} \left( y_{ij} - \hat{y}_{ij} \right)^2},$$
  • Mean absolute error:
    $$MAE = \frac{1}{TM} \sum_{j=1}^{T} \sum_{i=1}^{M} \left| y_{ij} - \hat{y}_{ij} \right|,$$
  • Mean absolute percentage error:
    $$MAPE = \frac{100\%}{TM} \sum_{j=1}^{T} \sum_{i=1}^{M} \left| \frac{y_{ij} - \hat{y}_{ij}}{y_{ij}} \right|,$$
In this context, T and M denote the number of samples in the time series and the number of parking lots, respectively. In addition, $y_{ij}$ and $\hat{y}_{ij}$ represent the actual and predicted values of the $i$th parking occupancy outcome at time $j$.
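For reference, the three metrics can be computed as in the following numpy sketch, assuming y_true and y_pred are arrays of shape [T, M] (time intervals by parking segments); the small epsilon guard is an illustrative addition.

```python
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred, eps=1e-8):
    # eps guards against division by zero when a segment is completely empty
    return 100.0 * np.mean(np.abs((y_true - y_pred) / (y_true + eps)))

y_true = np.random.rand(1000, 50)   # T = 1000 intervals, M = 50 parking segments
y_pred = y_true + np.random.normal(0, 0.02, y_true.shape)
print(rmse(y_true, y_pred), mae(y_true, y_pred), mape(y_true, y_pred))
```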

4. Case Study

4.1. Research Area

Melbourne, as the capital and largest city of the Australian state of Victoria, with a population of 159,810 as of 30 June 2022 [50], boasts a diverse economy and vibrant cultural life. With the city’s population growth and economic expansion, approximately 1 million people visit Melbourne for work, education, travel, or tourism [50].
To alleviate parking pressure in the Central Business District (CBD) and other high-demand areas, the Melbourne City Council has implemented measures including peak-hour parking restrictions, differentiated parking pricing, and timed curbside parking fees. The aim of these policies is to allocate parking resources reasonably, reduce traffic congestion, and encourage the use of public transport [51]. The streetscape of on-street parking in Melbourne is shown in Figure 4.
However, parking difficulties remain prominent, especially in commercial districts and tourist hotspots. These challenges highlight the importance of accurately forecasting short-term parking demand, which not only optimizes the use of parking resources but also supports urban planning and traffic management [27]. The consideration of individual travel preferences plays a key role in this process, as it affects patterns and the elasticity of parking demand [52]. For example, drivers who are sensitive to parking fees may switch to other modes of transportation in response to higher costs, while those who value time might be willing to pay extra for the convenience of parking.
Therefore, incorporating individual travel preferences into parking demand prediction models is crucial for developing effective parking strategies and management measures. This approach helps to reduce the time vehicles spend cruising for parking spots on streets, which, in turn, decreases traffic congestion and increases the efficiency of parking resource allocation [53].

4.2. Data Description

A variety of information is included to provide valuable insights into urban dynamics and how urban spaces are utilized. The data are divided into three main categories: POI, residential dwelling, and parking data. Each category has its own unique value and function in understanding urban living patterns and individual parking preferences.

4.2.1. On-Street Parking Data

To analyze parking characteristics, this study selected sensor data from the city of Melbourne for the year 2019 [54]. The collected dataset, shown in Table 2, encompasses information from in-ground parking bay sensors across the city, recording event-based occupancy statuses of the parking spaces, the spatial coordinates of the sensors, and the curbside ID [55].

4.2.2. Land Use (POI)

The POI data for the city of Melbourne are sourced from the City of Melbourne Open Data platform [56], which provides comprehensive information on land use and is regularly updated. This study selected three specific subsets for analysis: landmarks and places of interest, including schools, theaters, health service facilities, sports facilities, places of worship, galleries, and museums [56]; the distribution of cafés, restaurants, and bistros [57]; and the location and industry classification of business establishments.
1.
Landmarks and places of interest
The dataset compiles exhaustive information on various Points of Interest within the city of Melbourne, shown in Table 3, including detailed descriptions and geographic coordinates of each location. The collection encompasses 49 distinct themes, covering a broad spectrum of categories such as community use, educational centers, health services, leisure, mixed use, offices, and places of assembly. Additionally, the subthemes include art galleries, churches, functions, informal outdoor facilities, major sports and recreation facilities, and offices.
2.
Distribution of cafés, restaurants, and bistros
The dataset provides a detailed distribution of seats in cafes, restaurants, and bistros within the city of Melbourne, covering historical data from 2002 to 2022, displayed in Table 4. For each commercial establishment at a specific location, the data meticulously capture the business name, precise operating address, industry classification according to Australian and New Zealand Standard Industrial Classification [58], number of dining seats (separated into indoor and outdoor seats), exact location, and small block code and small area name corresponding to central land use and employment [59]. We selected the data from 2017 for our analysis. The detailed description is shown in the following table.
3.
Location and industry classification of business establishments
The dataset, shown in Table 5, which spans the years 2002–2022, comes from the Census of Land Use and Employment (CLUE) [59]. The data include the addresses of commercial establishments, their industry classification according to the Australian and New Zealand Standard Industrial Classification (ANZSIC4) [58], their geographical locations, and their allocation within CLUE-designated blocks and small areas. Within this dataset, a commercial establishment is defined as a commercial occupant within a building, a separate land use entity, or any permanent site of economic activity as per the ANZSIC standards.

4.2.3. Residential Dwelling Data

In examining the factors influencing parking demand preferences, we incorporated residential information from Melbourne as a significant variable. These data are derived from the municipal property tax rate database, employing a streamlined classification framework that categorizes residences into three types: “residential apartments”, “houses/townhouses”, and “student apartments”. Additionally, the dataset meticulously documents the number of dwelling units within each residential building, providing foundational data to support our analysis of the relationship between residential types and parking demand. The detailed information is presented in Table 6.

4.3. Hyperparameters

Experiments were conducted on a robust platform equipped with Windows 10, an Intel i7 processor, and 16 GB RAM. The development and implementation of the model were executed using Python on TensorFlow 2.3, on a server equipped with an NVIDIA RTX 3060 GPU. L2 loss was utilized for training, and early stopping was implemented on the validation set to enhance the model's generalization capabilities.
To ensure the effectiveness of the proposed model’s training and testing, we divided the dataset into training, testing, and validation sets in a ratio of 0.8, 0.1, and 0.1, respectively. The optimization of our model was performed using the Adam optimizer, initiated with an initial learning rate of 0.001, which was dynamically adjusted in response to reductions in validation loss to optimize training efficiency. Categorical cross-entropy, well-suited for multi-class prediction tasks, was employed as the loss function.
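A rough sketch of this training configuration is shown below. The paper reports TensorFlow 2.3 with categorical cross-entropy; this PyTorch sketch substitutes a regression-style MSE loss and a simple stand-in model purely to illustrate the 0.8/0.1/0.1 split, the Adam optimizer with an initial learning rate of 0.001, validation-driven learning-rate reduction, and early stopping. All names and thresholds are assumptions.

```python
import torch
import torch.nn as nn

# Stand-in data: 10,000 windows of 12 timesteps x 3 features -> 1 target each
X, y = torch.randn(10_000, 12, 3), torch.randn(10_000, 1)

# 0.8 / 0.1 / 0.1 split into training, validation, and test sets
perm = torch.randperm(len(X))
tr, va, te = perm[:8_000], perm[8_000:9_000], perm[9_000:]

model = nn.Sequential(nn.Flatten(), nn.Linear(12 * 3, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)          # initial learning rate 0.001
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=3)
loss_fn = nn.MSELoss()

best_val, patience, wait = float("inf"), 10, 0               # simple early stopping
for epoch in range(100):
    model.train()
    opt.zero_grad()
    loss = loss_fn(model(X[tr]), y[tr])
    loss.backward()
    opt.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(X[va]), y[va]).item()
    sched.step(val_loss)                                     # lower lr when validation stalls
    if val_loss < best_val:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            break                                            # stop before overfitting
```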
Prediction time periods of 15 min, 30 min, and 60 min were chosen to capture different aspects of parking demand: 15 min for rapid changes, which are essential for real-time parking guidance and traffic control; 30 min for mid-term demand forecasting for short-term planning and resource allocation; and 60 min for long-term forecasting for parking facility expansion and policy adjustments. This setup comprehensively reflects parking demand variations over different time frames, enhancing the model’s practical applicability and predictive accuracy.

4.4. Data Processing

In this study, cleaning of the collected parking data was conducted, including the removal of errors and outliers, such as records of anomalous parking durations, to ensure the accuracy of subsequent analyses. Specifically, records with abnormal parking times were identified and eliminated. Based on this purified raw data, we developed a feature extraction process utilizing parking sensor data combined with the timestamps of vehicle arrivals and departures, which enabled the extraction of crucial temporal features.
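As an illustration of this cleaning and temporal feature extraction, the following pandas sketch uses assumed column names for the event-based sensor records (bay identifier, arrival and departure timestamps) and an assumed duration threshold for anomalous records.

```python
import pandas as pd

# Illustrative cleaning of event-based sensor records (column names assumed)
events = pd.DataFrame({
    "bay_id": [101, 101, 102],
    "arrival_time": pd.to_datetime(["2019-03-01 08:05", "2019-03-01 09:40", "2019-03-01 08:20"]),
    "departure_time": pd.to_datetime(["2019-03-01 09:10", "2019-03-01 10:05", "2019-03-01 08:50"]),
})
events["duration_min"] = (events["departure_time"] - events["arrival_time"]).dt.total_seconds() / 60

# Drop anomalous durations (e.g., negative or implausibly long stays)
events = events[(events["duration_min"] > 0) & (events["duration_min"] < 24 * 60)]

# Temporal features extracted from the arrival timestamp
events["hour"] = events["arrival_time"].dt.hour
events["weekday"] = events["arrival_time"].dt.dayofweek
```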
To manage features of varying scales, we standardized the data using Min-Max normalization and Standard Deviation normalization, aligning all features to the same scale to facilitate model computation and comparison. During the spatial data processing phase, we constructed an adjacency matrix, as depicted in Figure 5c, based on geographical location, illustrated in Figure 5a, and functional similarities, demonstrated in Figure 5b. This matrix serves as the foundational component of the Graph Convolutional Network. By calculating the Euclidean distances and cosine similarities between different locations, we defined the connection strength between nodes.
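The adjacency construction from geographic proximity and functional similarity could be sketched as follows (numpy), assuming each parking segment has planar coordinates and a POI/land-use feature vector; the distance scaling and the equal-weight fusion of the two similarity terms are illustrative choices.

```python
import numpy as np

def build_adjacency(coords, poi_feats, dist_scale=500.0):
    """Combine geographic proximity (Euclidean distance, here in metres) and
    functional similarity (cosine similarity of POI features) into one
    adjacency matrix; the weighting scheme is an illustrative choice."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)                 # pairwise Euclidean distances
    geo_w = np.exp(-dist / dist_scale)                   # closer segments -> stronger link

    norm = np.linalg.norm(poi_feats, axis=1, keepdims=True) + 1e-8
    unit = poi_feats / norm
    cos_sim = unit @ unit.T                              # pairwise cosine similarity

    A = 0.5 * geo_w + 0.5 * cos_sim                      # simple equal-weight fusion
    np.fill_diagonal(A, 0.0)                             # no self-loops
    return A

coords = np.random.rand(100, 2) * 1000                   # 100 segments in a 1 km square
poi_feats = np.random.rand(100, 8)                       # 8 POI/land-use categories
A = build_adjacency(coords, poi_feats)
```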
Ultimately, the processed feature data and adjacency matrices were used as inputs for training the Graph Convolutional Network (GCN) and Gated Recurrent Unit (GRU), providing a robust foundation for our parking prediction model. This ensured that the model could effectively learn and predict urban parking dynamics. This detailed preparation of data is a critical prerequisite for the model’s accurate prediction of parking occupancy rates.

5. Results Analysis and Discussion

5.1. The Accuracy of the Parking Occupancy Prediction Results

In the evaluation of predictive performance, this study introduced a triad of statistical metrics: mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE). These metrics were employed to benchmark the adaptive GCN and GRU network (AGCRU) model against a suite of established forecasting models, namely the autoregressive integrated moving average (ARIMA) [45], long short-term memory network (LSTM) [15], spatiotemporal Graph Convolutional Network (ST-GCN) [22], Attention-Based Spatial–Temporal Graph Convolutional Networks (ASTGCNs) [47], and the hybrid spatial–temporal Graph Convolution Networks (HST-GCNs) [48], across temporal intervals of 15, 30, and 60 min. The AGCRU model exhibited MAE values of 0.0156, 0.0330, and 0.0558; RMSE values of 0.0244, 0.0665, and 0.1003; and MAPE values of 1.5561%, 3.3071%, and 5.5810%, respectively, for the corresponding forecasting horizons. These metrics were appreciably lower than the comparative figures registered by the ARIMA, LSTM, ST-GCN, ASTGCN, and HST-GCN models. The empirical evidence thus underscores the enhanced accuracy and robustness of the AGCRU model, underscoring its superior performance in the domain of short-term on-street parking prediction. The detailed information is presented in Table 7.
Figure 6 presents a comparative analysis of predicted and actual values over time periods of 15 min, 30 min, and 60 min. To manage the vast amount of data, we visualized only the data from the first half of the year to thoroughly assess the predictive performance of the model. In all the charts, actual values are represented by a blue line, while predicted values are indicated by a red dashed line. From the charts, it is observed that within the time period of 15 min, the model demonstrates extremely high precision, closely matching the actual data, which reflects its exceptional ability to capture short-term data trends. In the time period of 30 min, although the fluctuations in predicted values are more pronounced, the model still closely follows the overall trend in the actual values, indicating good stability and adaptability in mid-term trend prediction. In the time period of 60 min, despite deviations in the prediction of peaks and troughs, the model overall captures the long-term trend fluctuations, showing its potential to identify data patterns over longer time spans. This suggests that the model has the capacity to handle longer-term data changes, particularly suited for applications that rely on long-term prediction accuracy.

5.2. The Performance of the Model

According to Table 8, four different Graph Convolutional Network models demonstrate varying levels of training time consumption. These models include ST-GCN, AST-GCN, HST-GCN, and AGCRU. The data reveal that AST-GCN requires the longest training time, totaling 623.77 s, significantly exceeding the training times of the other three models. The training times for ST-GCN and HST-GCN are closely matched, at 151.83 s and 152.05 s, respectively. The AGCRU model records the lowest training time consumption, at 147.37 s. This indicates that AGCRU may offer superior training efficiency compared to the other models.
Moreover, with the model training process displayed in Figure 7, we can observe the dynamic relationship between training loss, test loss, and epochs. For the predictions of parking occupancy in the next 15 min, 30 min, and 60 min, as the number of training epochs increases, both training and test losses exhibit a downward trend without any signs of increase. This trend indicates that the model continuously improves during the learning process, with parameter optimizations that not only adapt to the training data but also effectively generalize to the prediction. The rapid decrease in loss values also implies that the model converges to a state of high performance. The training curves in Figure 7 demonstrate that the model achieves a steady enhancement in performance on both the training and test datasets while avoiding overfitting.

6. Conclusions

In this study, a novel on-street parking prediction model is proposed, which integrates Adaptive Graph Convolutional Networks (GCNs) with Gated Recurrent Units (GRUs), further enhanced by incorporating Points of Interest (POIs) and household data. The findings show that the AGCRU model significantly outperforms existing models, with lower mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) across various time intervals. These results highlight the practicality of combining spatiotemporal dynamics with real-time urban data to improve parking management systems.
The theoretical ramifications of this study indicate that the integration of adaptive learning mechanisms with urban geographical data can yield significant insights into urban planning and space management. From a practical standpoint, these insights have the potential to assist urban planners and policymakers in formulating more effective parking solutions, thereby alleviating urban congestion and enhancing land use optimization.
Future research should consider investigating the applicability of the AGCRU model across varied urban environments, assessing its efficacy within diverse geographical and cultural milieus. Furthermore, the incorporation of dynamic variables could enhance the precision of the predictions of the model. Advancements in these areas may facilitate the development of more intelligent, adaptable urban transportation systems that dynamically respond to urban changes in real time.

Author Contributions

Conceptualization, X.Z.; Data curation, X.Z.; Investigation, X.Z.; Methodology, X.Z.; Visualization, X.Z.; Software, X.Z.; Writing—original draft, X.Z.; Writing—review and editing, M.Z.; Funding acquisition, M.Z.; Supervision, M.Z.; Resources, M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Liaoning, grant number 2023-MS-113.

Data Availability Statement

The datasets used in this research were downloaded from the Melbourne Open Data platform. The dataset of on-street parking events in 2019 is available at https://data.melbourne.vic.gov.au/explore/dataset/on-street-car-parking-sensor-data-2019/information/ (accessed on 7 October 2023); the dataset of landmarks and places of interest is available at https://data.melbourne.vic.gov.au/explore/dataset/landmarks-and-places-of-interest-including-schools-theatres-health-services-spor/table/?location=15,-37.82075,144.96226&basemap=mbs-7a7333 (accessed on 29 November 2023); the dataset of the distribution of cafés, restaurants, and bistros is available at https://data.melbourne.vic.gov.au/explore/dataset/cafes-and-restaurants-with-seating-capacity/information/ (accessed on 15 December 2023); the dataset of the location and industry classification of business establishments is available at https://data.melbourne.vic.gov.au/explore/dataset/business-establishments-with-address-and-industry-classification/information/ (accessed on 15 December 2023); and the dataset of residential dwellings is available at https://data.melbourne.vic.gov.au/explore/dataset/residential-dwellings/table/?sort=-dwelling_number (accessed on 15 December 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fulman, N.; Benenson, I.; Ben-Elia, E. Modeling parking search behavior in the city center: A game-based approach. Transp. Res. Part C Emerg. Technol. 2020, 120, 102800. [Google Scholar] [CrossRef]
  2. Xiao, R.I.; Jaller, M. Prediction framework for parking search cruising time and emissions in dense urban areas. Transportation 2023, 1–29. [Google Scholar] [CrossRef]
  3. Hampshire, R.C.; Jordon, D.; Akinbola, O.; Richardson, K.; Weinberger, R.; Millard-Ball, A.; Karlin-Resnik, J. Analysis of parking search behavior with video from naturalistic driving. Transp. Res. Rec. 2016, 2543, 152–158. [Google Scholar] [CrossRef]
  4. Zong, F.; Zeng, M.; Yu, P. A parking pricing scheme considering parking dynamics. Transportation 2023, 51, 1349–1371. [Google Scholar] [CrossRef]
  5. Kaur, R.; Roul, R.K.; Batra, S. A hybrid deep learning CNN-ELM approach for parking space detection in Smart Cities. Neural Comput. Appl. 2023, 35, 13665–13683. [Google Scholar] [CrossRef]
  6. Gong, S.; Qin, J.; Xu, H.; Cao, R.; Liu, Y.; Jing, C.; Hao, Y.; Yang, Y. Spatio-temporal parking occupancy forecasting integrating parking sensing records and street-level images. Int. J. Appl. Earth Obs. Geoinf. 2023, 118, 103290. [Google Scholar] [CrossRef]
  7. Feng, N.; Zhang, F.; Lin, J.; Zhai, J.; Du, X. Statistical analysis and prediction of parking behavior. In Proceedings of the Network and Parallel Computing: 16th IFIP WG 10.3 International Conference, NPC 2019, Hohhot, China, 23–24 August 2019; Proceedings 16. Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 93–104. [Google Scholar]
  8. Liu, Y.; Liu, C.; Luo, X. Spatiotemporal deep-learning networks for shared-parking demand prediction. J. Transp. Eng. Part A Syst. 2021, 147, 04021026. [Google Scholar] [CrossRef]
  9. Ye, W.; Kuang, H.; Lai, X.; Li, J. A multi-view approach for regional parking occupancy prediction with attention mechanisms. Mathematics 2023, 11, 4510. [Google Scholar] [CrossRef]
  10. Yang, S.; Ma, W.; Pi, X.; Qian, S. A deep learning approach to real-time parking occupancy prediction in transportation networks incorporating multiple spatio-temporal data sources. Transp. Res. Part C Emerg. Technol. 2019, 107, 248–265. [Google Scholar] [CrossRef]
  11. Kuo, P.-F.; Hsu, W.-T.; Putra, I.G.B.; Sulistyah, U.D. The proposed model for analyzing off-street parking Dynamics: A case study of Taipei City. Transp. Res. Part A Policy Pract. 2024, 180, 103965. [Google Scholar] [CrossRef]
  12. Zhao, D.; Ju, C.; Zhu, G.; Ning, J.; Luo, D.; Zhang, D.; Ma, H. MePark: Using meters as sensors for citywide on-street parking availability prediction. IEEE Trans. Intell. Transp. Syst. 2021, 23, 7244–7257. [Google Scholar] [CrossRef]
  13. Rajabioun, T.; Ioannou, P.A. On-street and off-street parking availability prediction using multivariate spatiotemporal models. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2913–2924. [Google Scholar] [CrossRef]
  14. Shao, W.; Zhang, Y.; Xiao, P.; Qin, K.K.; Rahaman, M.S.; Chan, J.; Guo, B.; Song, A.; Salim, F.D. Transferrable contextual feature clusters for parking occupancy prediction. Pervasive Mob. Comput. 2024, 97, 101831. [Google Scholar] [CrossRef]
  15. Zhang, F.; Liu, Y.; Feng, N.; Yang, C.; Zhai, J.; Zhang, S.; He, B.; Lin, J.; Zhang, X.; Du, X. Periodic weather-aware LSTM with event mechanism for parking behavior prediction. IEEE Trans. Knowl. Data Eng. 2021, 34, 5896–5909. [Google Scholar] [CrossRef]
  16. Nourinejad, M.; Roorda, M.J. Impact of hourly parking pricing on travel demand. Transp. Res. Part A Policy Pract. 2017, 98, 28–45. [Google Scholar] [CrossRef]
  17. Torres-López, R.; Casillas-Pérez, D.; Pérez-Aracil, J.; Cornejo-Bueno, L.; Alexandre, E.; Salcedo-Sanz, S. Analysis of machine learning approaches’ performance in prediction problems with human activity patterns. Mathematics 2022, 10, 2187. [Google Scholar] [CrossRef]
  18. Ho, P.W.; Ghadiri, S.M.; Rajagopal, P. Future Parking Demand at Rail Stations in Klang Valley. Proc. MATEC Web Conf. 2017, 103, 09001. [Google Scholar] [CrossRef]
  19. Chen, H.; Yang, M.; Zhang, Y.; Wen, H. Parking Spaces Demand Forecasting Based on ARIMA Model. In Proceedings of the 2023 15th International Conference on Computer Modeling and Simulation, Dalian, China, 16–18 June 2023; pp. 133–137. [Google Scholar]
  20. Tavafoghi, H.; Poolla, K.; Varaiya, P. A queuing approach to parking: Modeling, verification, and prediction. arXiv 2019, arXiv:1908.11479. [Google Scholar]
  21. Hess, D.B. Access to public transit and its influence on ridership for older adults in two US cities. J. Transp. Land Use 2009, 2, 3–27. [Google Scholar] [CrossRef]
  22. Yu, B.; Yin, H.; Zhu, Z. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. arXiv 2017, arXiv:1709.04875. [Google Scholar]
  23. Qu, H.; Liu, S.; Li, J.; Zhou, Y.; Liu, R. Adaptation and learning to learn (all): An integrated approach for small-sample parking occupancy prediction. Mathematics 2022, 10, 2039. [Google Scholar] [CrossRef]
  24. Yang, H.; Ke, R.; Cui, Z.; Wang, Y.; Murthy, K. Toward a real-time smart parking data management and prediction (SPDMP) system by attributes representation learning. Int. J. Intell. Syst. 2022, 37, 4437–4470. [Google Scholar] [CrossRef]
  25. Haomin, G. Analysis of Space-Time Demand Characteristics and State Evaluation of Large Scale Complex Parking Lots Based on Parking Space. Master’s Thesis, Southeast University, Nanjing, China, 2021. [Google Scholar]
  26. Van Ommeren, J.N.; Wentink, D.; Rietveld, P. Empirical evidence on cruising for parking. Transp. Res. Part A Policy Pract. 2012, 46, 123–130. [Google Scholar] [CrossRef]
  27. Shoup, D. High Cost of Free Parking; Routledge: London, UK, 2021. [Google Scholar]
  28. Pierce, G.; Shoup, D. Getting the prices right: An evaluation of pricing parking by demand in San Francisco. J. Am. Plan. Assoc. 2013, 79, 67–81. [Google Scholar] [CrossRef]
  29. Mei, Z.; Zhang, W.; Zhang, L.; Wang, D. Optimization of reservation parking space configurations in city centers through an agent-based simulation. Simul. Model. Pract. Theory 2020, 99, 102020. [Google Scholar] [CrossRef]
  30. Inci, E. A review of the economics of parking. Econ. Transp. 2015, 4, 50–63. [Google Scholar] [CrossRef]
  31. Chao, Z.; Zihao, C.; Hao, Q. Time-Domain Characteristics of Residential Parking and SEM-BL Integration Model of Parking Method Choice Behaviour. Math. Probl. Eng. 2022, 2022, 5164257. [Google Scholar] [CrossRef]
  32. Bai, L.; Sze, N.; Liu, P.; Haggart, A.G. Effect of environmental awareness on electric bicycle users’ mode choices. Transp. Res. Part D Transp. Environ. 2020, 82, 102320. [Google Scholar] [CrossRef]
  33. Christiansen, P.; Engebretsen, Ø.; Fearnley, N.; Hanssen, J.U. Parking facilities and the built environment: Impacts on travel behaviour. Transp. Res. Part A Policy Pract. 2017, 95, 198–206. [Google Scholar] [CrossRef]
  34. Zhou, C.; Jia, H.; Juan, Z.; Fu, X.; Xiao, G. A data-driven method for trip ends identification using large-scale smartphone-based GPS tracking data. IEEE Trans. Intell. Transp. Syst. 2016, 18, 2096–2110. [Google Scholar] [CrossRef]
  35. Shoup, D. Pricing curb parking. Transp. Res. Part A Policy Pract. 2021, 154, 399–412. [Google Scholar] [CrossRef]
  36. Defferrard, M.; Bresson, X.; Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. Adv. Neural Inf. Process. Syst. 2016, 29, 3844–3852. [Google Scholar]
  37. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
  38. Hammond, D.K.; Vandergheynst, P.; Gribonval, R. Wavelets on graphs via spectral graph theory. Appl. Comput. Harmon. Anal. 2011, 30, 129–150. [Google Scholar] [CrossRef]
  39. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
  40. Li, R.; Wang, S.; Zhu, F.; Huang, J. Adaptive graph convolutional neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [Google Scholar]
  41. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 6000–6010. [Google Scholar]
  42. Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph neural networks: A review of methods and applications. AI Open 2020, 1, 57–81. [Google Scholar] [CrossRef]
  43. Li, Y.; Yu, R.; Shahabi, C.; Liu, Y. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. arXiv 2017, arXiv:1707.01926. [Google Scholar]
  44. Dey, R.; Salem, F.M. Gate-variants of gated recurrent unit (GRU) neural networks. In Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, USA, 6–9 August 2017; pp. 1597–1600. [Google Scholar]
  45. Kumar, S.V.; Vanajakshi, L. Short-term traffic flow prediction using seasonal ARIMA model with limited input data. Eur. Transp. Res. Rev. 2015, 7, 21. [Google Scholar] [CrossRef]
  46. Siami-Namini, S.; Tavakoli, N.; Namin, A.S. A comparison of ARIMA and LSTM in forecasting time series. In Proceedings of the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; pp. 1394–1401. [Google Scholar]
  47. Guo, S.; Lin, Y.; Feng, N.; Song, C.; Wan, H. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 922–929. [Google Scholar]
  48. Xiao, X.; Jin, Z.; Hui, Y.; Xu, Y.; Shao, W. Hybrid spatial–temporal graph convolutional networks for on-street parking availability prediction. Remote Sens. 2021, 13, 3338. [Google Scholar] [CrossRef]
  49. Ho, Y.; Wookey, S. The real-world-weight cross-entropy loss function: Modeling the costs of mislabeling. IEEE Access 2019, 8, 4806–4813. [Google Scholar] [CrossRef]
  50. Melbourne’s Public Transport System. Available online: https://transport.vic.gov.au (accessed on 29 November 2023).
  51. Parking in Melbourne: Strategy and Policies. Available online: https://www.melbourne.vic.gov.au (accessed on 10 July 2023).
  52. Yan, X.; Levine, J.; Marans, R. The effectiveness of parking policies to reduce parking demand pressure and car use. Transp. Policy 2019, 73, 41–50. [Google Scholar] [CrossRef]
  53. Arnott, R. Spatial competition between parking garages and downtown parking policy. Transp. Policy 2006, 13, 458–469. [Google Scholar] [CrossRef]
  54. City of Melbourne Open Data Team. On-Street Parking Bay Sensors; City of Melbourne Open Data Team: Melbourne, Australia, 2024.
  55. City of Melbourne. On-Street Car Parking Sensor Data—2019. Available online: https://data.melbourne.vic.gov.au/explore/dataset/on-street-car-parking-sensor-data-2019/information/ (accessed on 7 October 2023).
  56. Landmarks and Places of Interest, Including Schools, Theatres, Health Services, Sports Facilities, Places of Worship, Galleries, and Museums. Available online: https://data.melbourne.vic.gov.au/explore/dataset/landmarks-and-places-of-interest-including-schools-theatres-health-services-spor/table/?location=15,-37.82075,144.96226&basemap=mbs-7a7333 (accessed on 29 November 2023).
  57. Café, Restaurant, Bistro Seats. Available online: https://data.melbourne.vic.gov.au/explore/dataset/cafes-and-restaurants-with-seating-capacity/information/ (accessed on 15 December 2023).
  58. Pink, D.T.B. Australian and New Zealand Standard Industrial Classification. Available online: https://www.abs.gov.au/statistics/classifications/australian-and-new-zealand-standard-industrial-classification-anzsic/latest-release (accessed on 8 July 2023).
  59. Central Geelong Census of Land Use and Employment (CLUE). Available online: https://www.geelongdataexchange.com.au/pages/cluev2/#:~:text=The%20Central%20Geelong%20Census%20of%20Land%20Use%20and,land%20use%2C%20and%20identifies%20key%20trends%20in%20employment (accessed on 15 December 2023).
Figure 2. Adaptive GCN Structure.
Figure 3. Temporal prediction interval.
Figure 4. Sample of On-Street Parking in Melbourne.
Figure 5. Visualization of data matrix: (a) Distance Matrix Visualization; (b) Cosine Similarity Matrix; (c) Adjacency Matrix Visualization.
Figure 6. Comparison between actual and predicted value in 15, 30, 60 min.
Figure 7. Training and test loss per epoch.
Table 1. Summary of related works.
Theme | Study Reference | Methodology | Main Findings | Research Gaps
On-Street Parking Occupancy Feature | [10,11,12,13] | Spatiotemporal distribution patterns in similar cities | Consistent periodicity and trends in cities with similar population structures and commercial activities | Limited discussion on varying urban layouts and cultural backgrounds
 | [14] | Periodicity and trends in parking occupancy | Consistent periodicity and trends in parking occupancy | Model complexity and computational demands not addressed
 | [15] | Periodic weather-aware LSTM model (PewLSTM) | Precisely predicts parking behavior during various periods including holidays and pandemics | Challenges in handling sudden non-periodic events
 | [16] | Spatiotemporal heterogeneity in different urban areas | Significant variation in parking needs by location and time in residential and office areas | Further research needed on micro-level influences like individual preferences
Theme | Reference | Technique/Focus | Main Findings | Research Gaps
Methodologies of parking occupancy prediction | [18] | Linear Regression | Developed a parking demand forecasting model | Limitations in capturing complex interactions and nonlinear phenomena
 | [19] | ARIMA Model | Applied for parking space occupancy prediction | Limitations in capturing complex interactions and nonlinear phenomena
 | [20] | Queueing Model | Real-time estimation of parking lot occupancy | Challenges persist regarding regional applicability and flexibility
 | [21] | PARKFIT Algorithm | Analyzed urban parking distribution with high-resolution data | Challenges persist regarding regional applicability and flexibility
 | [23] | ALL Approach (Deep Learning, Federated Learning) | Improved small-sample parking occupancy prediction accuracy | Challenges persist regarding regional applicability and flexibility
 | [24] | Deep Learning Architectures (Graph Convolutional Neural Networks, Recurrent Neural Networks) | Enhanced the accuracy of real-time predictions for urban street-level parking occupancy | Challenges persist regarding regional applicability and flexibility
Theme | Reference | Focus Area | Main Findings | Research Gap
Influencing factors of parking prediction | [26] | Building Scale and Parking Demand | Larger commercial centers have higher parking demand than residential areas | Does not account for changes over time or special events
 | [27] | Mixed-Use Areas | Suggests mixed-use areas can reduce parking demand by optimizing travel | Limited empirical evidence to support claim
 | [28] | Public Transportation and Parking Pressure | Efficient public transportation can alleviate parking pressure in business districts | Focuses only on core business districts, not wider urban areas
 | [29] | Dynamic Changes in Parking Demand | Analyzed parking demand variability across times and regions | Focus primarily on proxy modeling without extensive real-world validation
 | [30] | Economic Influence on Parking Choices | Higher fees may drive low-income groups to alternate travel modes | Does not address middle or high-income impacts
 | [31] | Socioeconomic Background and Parking Behavior | Varied parking behavior among socioeconomic groups | Limited scope to specific demographic data
 | [32] | Environmental Awareness | Environmental awareness influences parking choices | Lack of broader demographic applicability
 | [33] | Parking Facilities and Built Environment | Accessibility and regulations significantly affect car usage patterns | Limited to parking facilities' impact without considering broader transportation network impacts
 | [34] | Optimization of Parking Resources | Should respond flexibly to dynamic demand changes | 
Table 2. Parking sensor data.
Feature | Description
Parking lot ID | The unique location code of the parking space sensor
Arrival time | The moment at which the sensors registered the arrival of vehicles
Departure time | The moment at which the sensors registered the departure of vehicles
Location | Geographical information (longitude/latitude)
Status description | Unoccupied/Occupied
Table 3. Landmarks and POIs.
Feature | Description
Theme | Community Use, Education Center, Health Services, Leisure/Recreation, Mixed Use, Office, Place of Assembly, Place of Worship, Purpose Built, Retail, Transport, Vacant Land
Subtheme | Art Gallery/Museum, Church, Function/Conference/Exhibition Center, Informal Outdoor Facility (Park/Garden/Reserve), Major Sports and Recreation Facility, Office, Public Buildings, Public Hospital, Railway Station, Retail/Office/Carpark, Tertiary (University), Theater Live
Feature name | Name of landmarks and places of interest
Location | Geographical information (longitude/latitude)
Table 4. Distribution of cafés, restaurants, and bistros.
Feature | Description
Year | The recording year of food service venues
Building address | The detailed address of food service venues
Trading name | The name of food service venues
Industry description | Pubs, Taverns, Bar, Takeaway food service, Bakery product Manufacturing, Accommodation
Seating type | Outdoor/Indoor
Number of seats | The capacity of food service venues
Location | Geographical information (longitude/latitude)
Table 5. Location and industry classification of business establishments.
Feature | Description
Year | The recording year of business establishments
Trading name | Name of building
Business address | The detailed address of the business building
Industry description | Category of industry
Location | Geographical information (longitude/latitude)
Table 6. Residential dwelling data.
Feature | Description
Date | The update time of the building location
Building address | Detailed address of the building
Dwelling type | Residential Apartment, House/Townhouse, Student Apartment
Location | Geographical information (longitude/latitude)
Table 7. The accuracy of the parking occupancy prediction results.
MAE | AGCRU | ARIMA | LSTM | ST-GCN | AST-GCN | HST-GCN
15 min | 0.0156 | 0.0544 | 0.0644 | 0.0355 | 0.0351 | 0.0345
30 min | 0.0330 | 0.0696 | 0.0769 | 0.0467 | 0.0466 | 0.0456
60 min | 0.0558 | 0.0982 | 0.1004 | 0.0630 | 0.0627 | 0.0593
RMSE | AGCRU | ARIMA | LSTM | ST-GCN | AST-GCN | HST-GCN
15 min | 0.0244 | 0.0711 | 0.0886 | 0.0494 | 0.0518 | 0.0487
30 min | 0.0665 | 0.0906 | 0.1037 | 0.0639 | 0.0665 | 0.0632
60 min | 0.1003 | 0.1236 | 0.1329 | 0.0851 | 0.0858 | 0.0801
MAPE (%) | AGCRU | ARIMA | LSTM | ST-GCN | AST-GCN | HST-GCN
15 min | 1.5561 | 10.4794 | 10.5566 | 7.2983 | 9.9607 | 7.1222
30 min | 3.3071 | 13.5052 | 12.9892 | 9.7159 | 13.3459 | 9.3656
60 min | 5.5810 | 19.1953 | 17.5452 | 12.9341 | 19.2274 | 12.2568
Table 8. Training time consumption (s).
ST-GCN | AST-GCN | HST-GCN | AGCRU
151.83 | 623.77 | 152.05 | 147.37
