Article

VE-GCN: A Geography-Aware Approach for Polyline Simplification in Cartographic Generalization

1 School of Geography and Information Engineering, China University of Geosciences, Wuhan 430074, China
2 State Key Laboratory of Geo-Information Engineering, Xi’an 710054, China
3 National Engineering Research Center of Geographic Information System, China University of Geosciences, Wuhan 430074, China
4 School of Computer Science, China University of Geosciences, Wuhan 430074, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2025, 14(2), 64; https://doi.org/10.3390/ijgi14020064
Submission received: 23 December 2024 / Revised: 3 February 2025 / Accepted: 4 February 2025 / Published: 6 February 2025

Abstract

Polyline simplification is a critical process in cartographic generalization, but the existing methods often fall short in considering the overall geographic morphology or local edge and vertex information of polylines. To enhance the graph convolutional structure for capturing crucial geographic element features and simultaneously learning vertex and edge features within map polylines, this study introduces a joint vertex–edge feature graph convolutional network (VE-GCN). The VE-GCN extends the graph convolutional operator from vertex features to edge features and integrates edge and vertex features through a feature transformation layer, enhancing the model’s capability to represent the shapes of polylines. To further improve this capability, the VE-GCN incorporates an architecture for retaining crucial geographic information. This architecture is composed of a structure for retaining local positional information and another for extracting multi-scale features. These components capture both low- and high-dimensional features across small and large scales, contributing to a comprehensive local and global representation of polylines. The experimental results on road and coastline datasets verified the effectiveness of the proposed network in maintaining the overall shape characteristics of simplified polylines. After fusing the edge features, the differential distance between the roads before and after simplification decreased from 1.06 to 0.18. The network ensures invariant global spatial relationships, making the simplified data well suited for cartographic generalization applications, especially in simplifying vector map elements.

1. Introduction

Map generalization is a process of abstracting and simplifying geospatial information [1]. During cartographic processes, to ensure that maps can clearly and accurately display details, it is necessary to abstract and simplify map elements, highlighting their essential characteristics [2]. With the advancements in information technology, digital maps have been widely applied in various fields, such as navigation systems, emergency response, urban planning, and environmental monitoring. Among the core issues in map generalization is the multi-scale representation of polylines, which has been a primary focus of cartographic research [3]. Ensuring the readability of maps and the accuracy of information, while reasonably simplifying polylines and retaining their key characteristics, are major challenges in map generalization. These challenges have attracted considerable attention from both academic researchers and experts in applied fields [4].
Due to the complexity of the overall structure of polylines in maps, the current line simplification algorithms lack intelligent support and still rely heavily on human–computer interaction or manual editing [5]. With the continuous development of computer technology, several automated polyline generalization algorithms based on geometric and mathematical methods have been proposed, achieving some success [6]. Among these, the Douglas–Peucker (D–P) algorithm and its variants [7,8,9] remain widely used for various simplification tasks due to their low computational complexity and easily interpretable results. However, in practical map generalization, cartographic experts must consider multiple factors with intricate relationships, including complex geometric, topological, and geographic consistencies, which are difficult to accurately capture with a single mathematical model or algorithm. Moreover, the effectiveness of these algorithms depends heavily on parameters and thresholds, lacking self-learning capabilities and the ability to extract implicit geographic information and knowledge from existing expert outcomes.
In recent years, the rapid development of artificial intelligence has enabled the extraction of fuzzy knowledge from data through machine learning models, accelerating the decision-making processes in polyline simplification and enabling rapid simplification of complex polylines in large-scale scenarios. Due to the complexity of geographic data, early machine-learning-based polyline simplification methods often relied on raster images, learning the features and knowledge of multi-scale polylines within the images to achieve simplification. Examples include genetic algorithms [10] and self-organizing map algorithms [11]. However, converting vector polylines into raster images can lead to the loss of topological relationships between vertices, and convolutional neural networks may overlook the importance of key vertices in polylines during feature extraction, making it difficult to effectively preserve critical geographic information.
To address these limitations, researchers have introduced graph structures for the representation and analysis of vector map data, applicable in fields such as traffic forecasting [12] and building pattern recognition [13]. Graph structures can represent various irregular data types, enabling the representation of geographic features without the information loss caused by conversion or sampling. This provides a new solution for the simplification of map polylines, preserving key geographic information more effectively during the generalization process.
From a cognitive perspective, in the artificial simplification of polylines, vertices and edges serve as the basic building blocks and are the primary focus of decision-making. Considering the feature attributes of points and edges as decisive factors for intelligent polyline simplification aligns with cognitive principles. From a geographic perspective, the structured information inherent in polyline features is primarily expressed through the basic shapes formed by vertices and edges, and the simplification process should preserve the integrity of geographical features. However, directly applying existing graph convolutional models to polyline simplification presents several limitations:
(1)
Limitations of vertex features: In graph structures, edges represent connections between vertices and lack physical entities. However, in map polylines, edges are also fundamental elements. Cartographic generalization is a constrained, cognition-driven process, and intelligent polyline simplification should fully consider the geometric and geographic properties of both vertices and edges to better express cognitive rules and cartographic principles.
(2)
Extraction of crucial geographic features: Geographical features refer to the composition, distribution, and interrelationships of the geographic elements in an area. Due to the lack of directional properties in graph convolutional kernels, unlike raster convolutional kernels, graph convolutional layers merely aggregate information. This results in graph convolutional models having a lower capacity to learn features compared to convolutional networks.
In summary, applying existing graph convolutional models to polyline simplification presents two key limitations: falling short of intuitively representing the positional relationship between vertices and failing to appropriately retain or eliminate crucial geographical feature vertices during simplification.
To enhance the learning capability of graph convolutional models for edge features in vector polylines and address the shortcomings in geographic feature extraction and crucial geographic element recognition, this study proposes a joint vertex–edge feature graph convolutional network (VE-GCN) that considers crucial geographic features. This model expands the graph convolutional layer from vertex features to edge features and employs a feature transform layer that facilitates the conversion between vertex and edge features, enabling these distinct feature types to merge into unified polyline detail characteristics. The local detail features extracted by the two different convolutional layers, combined with the overall shape characteristics obtained from the fused features, enhance the model’s ability to express the relationship between the overall shape and local details within polylines. This significantly improves the model’s capacity for geographic feature extraction, thus aiding decision-making in polyline simplification tasks. Additionally, the model incorporates a crucial geographic information retention architecture that learns the low-dimensional position information and high-dimensional shape features of crucial geographic elements within polylines. This approach ensures the cognitive representation of these elements during polyline simplification.

2. Related Works

Currently, polyline simplification methods have been extensively studied and applied in cartographic research. Based on the level of automation, experts and researchers generally classify automatic polyline simplification methods into two categories: geometry-based simplification methods [10,14] and machine-learning-based simplification methods [15,16].

2.1. Geometry-Based Simplification Methods

Geometry-based polyline simplification methods rely on cartographic expertise to set simplification factors, which are then used to decide whether to retain the vertices and curved segments within polylines. These manually set simplification factor methods can be further divided into vertex-based algorithms [6] and bend-based algorithms [17,18].
Vertex-based Algorithms: These methods consider polyline vertices as the minimum unit and simplify polylines by deleting redundant points among these vertices. These algorithms assess the importance of each vertex within the polyline to retain critical vertices and eliminate less significant ones, thereby achieving simplification. Classic examples of this approach include the Visvalingam–Whyatt algorithm [19], Douglas–Peucker (D–P) algorithm [7,8,9], Li–Openshaw algorithm [20], arc ratio chord method [21], vertical ratio chord method [22], and their respective improved versions [23,24,25,26,27]. The D–P algorithm, representative of vertex-based methods, is widely used due to its relative efficiency and stability. However, it can cause significant deformations and, in some special cases, produce errors such as self-intersections.
Bend-based Algorithms: These methods are designed based on human cognitive habits, aiming to better extract the geographical features embedded in vector polylines and avoid self-intersections in the simplification results. Bend-based algorithms treat vector polylines as composed of a series of continuous bends, simplifying the polyline by calculating shape characteristics and deleting bends that do not meet specific conditions. Those algorithms emphasize the analysis of shape characteristics to improve the preservation of critical bends during simplification [28]. Notable examples include the polyline simplification algorithm supported by Delaunay triangulation [29], binary tree representation of curve bending depth hierarchy [30], line generalization based on composite bending analysis [17], line simplification using oblique bend division [18], line simplification preserving the curve bend characteristics [31], and polyline simplification based on ternary bend groups [32]. Although bend-based algorithms improve the preservation of geographic features to some extent, they still fail to fully account for the critical feature vertices and spatial relationships within the polyline, potentially leading to the removal of important characteristic points during the simplification process.
In summary, although traditional geometry-based methods have significant advantages in efficiency and stability, they have some limitations in geographic feature extraction. Geometry-based methods often rely on predefined rules to filter vertices, lacking flexible handling of complex geometric details and making it difficult to effectively extract key features. Their decision-making is based on local features, which can easily ignore the overall spatial structure and lead to the destruction of topological relationships.

2.2. Machine-Learning-Based Simplification Methods

With the rapid development of artificial intelligence, deep learning has been widely applied in fields such as image processing, speech recognition, and text analysis. However, established machine learning models like CNNs and RNNs are designed for regularly structured data and cannot directly handle irregular vector map data. Early machine-learning-based polyline simplification methods sought to standardize polyline data through transformation or sampling. The most common approach was converting vector data into raster images, enabling the simplification of polylines by learning multi-scale line features and knowledge from images, using methods such as genetic algorithms [10] and self-organizing maps [11]. These methods primarily simplify polylines by learning the features of raster element contours, tracing simplification trajectories, and matching simplification templates [16,33].
With the successful application of deep learning technology in image translation and style transfer [34], approaches that learn and simulate transformations from pre-simplification to post-simplification have emerged. These methods construct deep learning models with strong image feature learning capabilities, designed specifically for complex polyline elements, and generate raster samples [4,35]. The simplification process involves inputting small-scale polyline raster images into the model, which then optimizes and generates simplified large-scale polyline raster images by extracting features and optimizing loss functions. However, converting vector polylines to raster images for transformation leads to the loss of topological spatial relationships between vertices, and the feature extraction process using convolutional neural networks often overlooks the importance of crucial vertices in polylines.
Despite these limitations, recent research has begun to address vector data structures. For instance, Jiang and colleagues utilized the established R-CNN model [36], treating rectangles formed by any two vertices as extraction boxes, effectively transforming the polyline simplification problem into an object detection task. By extracting feature maps from raster images, the method determines whether to retain the extraction box, which corresponds to retaining the two vertices within it. While this approach preserves the topological relationships of polylines, the feature extraction remains raster-based, and treating vertex selection as an object detection task still leads to conflicts in vertex retention decisions. Yu and Chen [37] conceptualized the multi-scale representation of polylines in cartographic generalization as a multi-scale encoding process within neural network feature maps. Each progressive feature layer represents the features at different scales of detail. To facilitate end-to-end polyline simplification and ensure a consistent number of vertices, the study regularizes the data by uniformly sampling the vertices within vector polyline elements, thereby extracting geographic characteristics at different scales. However, the multi-scale feature extraction of uniformly sampled vertices complicates the expression of crucial vertices in polylines. The lack of crucial vertex information results in the loss of geographic characteristics in simplified polylines.
Additionally, some studies have employed support vector machines [15] or fully connected neural networks [38] for polyline simplification, determining vertex retention based on the geometric features of each vertex. Although these methods avoid data loss during input, they are limited to focusing solely on the geometric details of polylines and fail to consider their overall geographic features, leading to suboptimal simplification results, particularly for complex datasets.
Graph convolutional networks (GCNs) offer promising capabilities for polyline simplification by effectively modeling spatial and topological relationships, but they also face significant challenges. A major issue is the difficulty in capturing complex spatial relationships, such as ensuring global topological consistency across intersecting or overlapping geometries, which can lead to errors like disconnected networks [39,40]. GCNs are highly dependent on high-quality annotated data, yet cartographic datasets are often scarce and unevenly distributed, hindering generalization. Additionally, the importance of critical vertices, such as sharp bends or key landmarks, may be diluted during the message-passing process, leading to the loss of essential geographic details. Addressing these issues is crucial for unlocking the full potential of GCNs in polyline simplification.

3. Materials and Methods

Graphs are data structures consisting of vertices connected by edges, capable of representing various types of irregular data. However, in conventional graph structures, edges merely represent relationships between vertices and do not possess physical entities, with at most a single weight stored on the edge. In contrast, in geographic map data, both vertices and edges are fundamental geometric elements that collectively define polylines, and both encapsulate crucial geographic information. To enhance the graph convolutional structure for capturing crucial geographic element features and simultaneously learning vertex and edge features within map polylines, this study proposes a joint vertex–edge feature graph convolutional network (VE-GCN). The VE-GCN comprises two main components: a graph convolutional layer adapted for geographic structures and an architecture for retaining crucial geographic features. The adapted graph convolutional layer focuses on both vertex and edge features of polylines, integrating these features into unified geographic characteristics. The architecture for retaining crucial geographic features includes a structure for retaining local positional information and another for extracting multi-scale geographic information. By leveraging residual cascades and multi-scale mechanisms, this structure further enhances the model’s ability to extract geographic features. The overall process of the proposed VE-GCN for polyline simplification is illustrated in Figure 1.

3.1. Improved Graph Convolutional Layer for Geographic Structure

In conventional graph convolution, the graph space is typically constructed on the Laplacian matrix formed by vertices, where edge features can only be processed by attaching them to vertices as vertex features. However, as graph convolution does not have the directionality of image convolutional kernels, incorporating edge features directly as vertex information can result in the loss of critical information (e.g., direction and length) inherent in the edge features. This approach disrupts the intrinsic relationships between vertices and edges, leading to a substantial loss of geographic characteristics in polylines. To address this issue and enhance the graph convolutional core, this study introduces a method of separately constructing edge features and merging them with vertex features. This approach aims to mitigate the inadequate shape characteristics and the incomplete geographic characteristics in polyline simplification caused by the absence of edge features.

3.1.1. Construction of Geographic Graph Structure

Graph-structure data comprise attributes and structural features. Attribute features store spatial positions and other supplementary attributes of vertices. In graph-structure data or map vector data, only the attribute features of vertices are typically included. However, for polylines with strong spatial correlations, constructing a graph structure solely on the basis of vertex attribute features cannot intuitively capture the positional relationship between each vertex. Consequently, introducing edge features into the graph convolutional structure becomes indispensable.
Vertices in map data represent explicit spatial locations, often characterized by coordinates, k-adjacency relationships, and local geometric features such as curvature or prominence within a polyline. These attributes are essential for preserving key inflection points in line generalization. Conversely, edges in polylines represent the continuous segments connecting vertices and contain vital spatial properties such as length, orientation, directionality, and even contextual terrain-related characteristics. Unlike conventional graphs, where edges are abstract relational links, edges in geospatial polylines carry rich geometric information, playing a significant role in defining the shape and continuity of spatial features.
In this study, the vertex features comprise attributes and spatial characteristics, including coordinates and k-adjacency average distance. These details serve as the initial input for the graph convolutional network. For the fusion construction of vertex and edge features, the study incorporates the attribute features of edges into the graph convolutional construction. Edge features encompass edges’ attributes and spatial characteristics, such as coordinates, length, and trigonometric functions. Specifically, trigonometric functions are employed to encode geometric relationships, such as the orientation or angular relationships between edges, which can provide critical information about the spatial configuration of the polyline. The construction of edge and vertex attribute features is illustrated in Figure 2. In Figure 2a, $v_n$ represents the $N$ vertices in the graph structure, each with $Q$ attributes; in Figure 2b, $e_m$ represents the $M$ edges in the graph structure, each with $P$ attributes.
The structural feature is another crucial aspect of graph-structure data. To represent the geometric (topological) relationships in the polyline data, this study utilizes the adjacency matrix to depict the relationship between vertices, the incidence matrix to represent the relationship between vertices and edges, and the adjacency matrix to generate the Laplacian matrix used to construct the graph convolutional core. The adjacency matrix and incidence matrix are preserved using sparse matrix representation. This approach substantially reduces the space cost of data storage and effectively represents the connectivity between vertices and edges. To further characterize the topological structure features, this study also considers the associations between edges within the graph structure. This method extends the Laplacian matrix to the edges, thereby representing the correlation between edges in the graph structure. As illustrated in Figure 2b, taking the edges of the polyline as vertices, a graph structure of the edge elements is constructed on the basis of the connectivity of the edges, and the adjacency matrix of edge elements is stored.
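To make the construction above concrete, the following sketch builds the vertex adjacency matrix, the vertex–edge incidence matrix, and the edge adjacency matrix for a single open polyline with sparse storage. It is only an illustration under that assumption; the function names and the SciPy-based implementation are ours, not the authors' code.

```python
import numpy as np
import scipy.sparse as sp

def polyline_graph_structures(num_vertices: int):
    """Adjacency A, incidence B, and edge adjacency A_E for one open polyline."""
    n = num_vertices
    m = n - 1                                    # consecutive vertices share one edge
    # Vertex adjacency: vertex i is connected to vertex i + 1.
    rows = np.arange(m)
    A = sp.coo_matrix((np.ones(m), (rows, rows + 1)), shape=(n, n))
    A = (A + A.T).tocsr()                        # symmetric, sparse storage
    # Incidence matrix B (n x m): edge j joins vertices j and j + 1.
    cols = np.arange(m)
    B = sp.coo_matrix((np.ones(2 * m), (np.r_[cols, cols + 1], np.r_[cols, cols])),
                      shape=(n, m)).tocsr()
    # Edge adjacency A_E (m x m): two edges are adjacent if they share a vertex,
    # i.e., consecutive segments of the polyline (the edge-element graph of Figure 2b).
    e_rows = np.arange(m - 1)
    A_E = sp.coo_matrix((np.ones(m - 1), (e_rows, e_rows + 1)), shape=(m, m))
    A_E = (A_E + A_E.T).tocsr()
    return A, B, A_E

def normalized_laplacian(A):
    """L_sym = I - D^{-1/2} A D^{-1/2}, the form fed to Chebyshev-type graph filters."""
    d = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return sp.eye(A.shape[0]) - d_inv_sqrt @ A @ d_inv_sqrt
```

Applying `normalized_laplacian` to `A` gives the Laplacian for vertex features, and applying it to `A_E` gives the edge-element Laplacian used below.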

3.1.2. Integration of Vertex and Edge Features

Conventional graph convolutions typically operate on the Laplacian matrix constructed on the basis of vertices, treating edge features as vertex features by attaching them to the vertices. However, this approach can lead to the loss of edge feature information during the graph convolutional operation. The integrity of the geographic characteristics of polylines is compromised, severely affecting feature extraction and simplification. Therefore, this study introduces an extended Laplacian matrix that encompasses edge elements while utilizing conventional graph convolutional layers for vertex features. This extension enables graph convolutional operations on edge features, allowing the layers to directly compute features using edge information. It breaks through the limitation of conventional GCNs, enabling direct input of edge features into the model for convolutional operations (Figure 3) and thus reducing the effect of information loss during convolution.
The equation of the graph convolutional layer for vertex features in this study is expressed as follows:
$$Y_V = \sigma\left( \sum_{k=0}^{K-1} W_k\, T_k(\tilde{L})\, X_V + b \right)$$
where $X_V$ represents the input vertex features, $Y_V$ represents the output vertex features, $W_k$ is the learnable parameter with dimensions $I_V \times J_V$, $I_V$ is the number of feature layers of the input vertex features $X_V$, and $J_V$ is the number of feature layers of the output vertex features $Y_V$. $b$ is the offset term, and $\sigma$ is the activation function. $T_k(\tilde{L})$ denotes the $k$-th order Chebyshev polynomial of the Laplacian matrix, where $\tilde{L} = \frac{2 L_{sym}}{\lambda_{max}} - I$ is the normalized form of the Laplacian matrix. Following this weighting process, the vertex's own features are primarily governed by $W_0$ (the identity term), the first-order neighbor features by $W_1$, and the second-order neighbor features by $W_2$ [41]. This increased spatial correlation of the convolutional kernel makes the convolutional layer suitable for map features.
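A minimal sketch of this Chebyshev-weighted graph convolution as a TensorFlow 2 layer (the framework used in this study) is given below. The layer name, the dense-matrix simplification, and the default order K = 3 are our assumptions; the same layer applied with the edge Laplacian $\tilde{L}_E$ yields the edge-feature convolution introduced next.

```python
import tensorflow as tf

class ChebGraphConv(tf.keras.layers.Layer):
    """Graph convolution with Chebyshev polynomial filters over a normalized Laplacian."""

    def __init__(self, out_features: int, K: int = 3, activation="relu"):
        super().__init__()
        self.out_features, self.K = out_features, K
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        in_features = int(input_shape[-1])
        # One learnable W_k of shape (I, J) per Chebyshev order, plus a shared bias b.
        self.W = self.add_weight(name="W", shape=(self.K, in_features, self.out_features))
        self.b = self.add_weight(name="b", shape=(self.out_features,), initializer="zeros")

    def call(self, X, L_tilde):
        # Chebyshev recurrence: T_0(L~) X = X, T_1(L~) X = L~ X, T_k = 2 L~ T_{k-1} - T_{k-2}.
        Tx = [X, tf.matmul(L_tilde, X)]
        for _ in range(2, self.K):
            Tx.append(2.0 * tf.matmul(L_tilde, Tx[-1]) - Tx[-2])
        Y = self.b
        for k in range(self.K):
            Y = Y + tf.matmul(Tx[k], self.W[k])
        return self.activation(Y)
```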
To construct a VE-GCN, this study extends the Laplacian matrix through the graph structure of edge elements (Figure 2b). This expansion enables the capture of interconnectivity between edges in the graph structure, similar to the Laplacian matrix $L$, resulting in the Laplacian matrix $L_E$ for edge elements.
Utilizing the expanded Laplacian matrix and building on the graph convolutional layer for vertex features, our method extends to a convolutional layer for edge features. The convolutional computation formula for the latter is expressed as follows:
$$Y_E = \sigma\left( \sum_{k=0}^{K-1} W_k\, T_k(\tilde{L}_E)\, X_E + b \right)$$
where $X_E$ represents the input edge features, $Y_E$ represents the output edge features, $W_k$ is the learnable parameter with dimensions $I_E \times J_E$, $I_E$ is the number of feature layers of the input edge features $X_E$, and $J_E$ is the number of feature layers of the output edge features $Y_E$. $b$ is the offset term, and $\sigma$ is the activation function.
By employing the graph convolutional layer for vertex features and the improved convolutional layer for edge features, our method can learn the vertex and edge features of map polyline data, encompassing a comprehensive set of geographic information. Specifically, vertex information captures localized spatial characteristics such as curvature and proximity, while edge features contribute to an understanding of connectivity patterns, geometric transitions, and directional variations. This approach is advantageous for extracting the spatial and geographic characteristics of polylines.
However, to simultaneously utilize these two types of features and apply them to polyline simplification, an integration mechanism is required. To integrate vertex and edge features into unified features within the model, this study introduces a convolutional layer for feature transformation (Figure 4). It uses spatial correlation methods to enable the conversion and fusion of vertex and edge features. In this way, the model can effectively utilize both types of features to obtain accurate spatial characteristics of polylines for polyline simplification.
The formula for converting edge features to vertex features is
$$V = \sigma\left( w B E + b \right)$$
where $E$ represents the input edge features, $V$ represents the output vertex features, $B$ is the vertex–edge incidence matrix, and $w$ is the learnable parameter with dimensions $I_E \times J_V$. $I_E$ is the number of input edge features $E$, $J_V$ is the number of output vertex features $V$, $b$ is the offset term, and $\sigma$ is the activation function.
Owing to the property of the incidence matrix, the features of the vertex $v_n$ are calculated from the features of all adjacent edges. The $j$-th feature of vertex $v_n$, denoted $v_{n,j}$, is
$$v_{n,j} = \sum_{e_m \in N(v_n)} \sum_{i=0}^{I} w_{i,j}\, e_{m,i}$$
where $e_m$ ranges over all the edges adjacent to the vertex $v_n$.
Similarly, the formula for converting vertex features into edge features is
$$E = \sigma\left( w B^{T} V + b \right)$$
where $V$ represents the input vertex features, $E$ represents the output edge features, $w$ is the learnable parameter with dimensions $I_V \times J_E$, $I_V$ is the number of input vertex features $V$, $J_E$ is the number of output edge features $E$, $b$ is the offset term, and $\sigma$ is the activation function.
Owing to the property of the incidence matrix, the features of edge $e_m$ are calculated from the features of all adjacent vertices. The $j$-th feature of edge $e_m$, denoted $e_{m,j}$, is
$$e_{m,j} = \sum_{v_n \in N(e_m)} \sum_{i=0}^{I} w_{i,j}\, v_{n,i}$$
where $v_n$ ranges over all the vertices adjacent to edge $e_m$.
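A hedged sketch of these feature-transformation layers follows: edge features are aggregated onto vertices through the incidence matrix $B$, and vertex features onto edges through $B^T$, each followed by a learnable linear map and activation. Class names and the dense-matrix formulation are illustrative assumptions.

```python
import tensorflow as tf

class EdgeToVertex(tf.keras.layers.Layer):
    """Aggregate edge features onto vertices via the incidence matrix B, then mix channels."""

    def __init__(self, out_features: int, activation="relu"):
        super().__init__()
        self.out_features = out_features
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        self.w = self.add_weight(name="w", shape=(int(input_shape[-1]), self.out_features))
        self.b = self.add_weight(name="b", shape=(self.out_features,), initializer="zeros")

    def call(self, E, B):
        # B (N x M) sums, for every vertex, the features of its incident edges;
        # w then maps the I_E input channels to J_V output channels.
        return self.activation(tf.matmul(tf.matmul(B, E), self.w) + self.b)


class VertexToEdge(EdgeToVertex):
    """Symmetric direction: B^T sums, for every edge, the features of its endpoint vertices."""

    def call(self, V, B):
        return self.activation(tf.matmul(tf.matmul(B, V, transpose_a=True), self.w) + self.b)
```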
By combining graph convolutional layers that incorporate vertex and edge features, as well as convolutional layers for feature transformation, we can construct the VE-GCN. The network is trained to learn the joint geographic characteristics of polylines by integrating these vertex and edge features into a unified spatial representation. The fused features effectively enhance polyline simplification decision-making by simultaneously expressing the relationship between overall shape characteristics and local geometric features. Through this approach, VE-GCN ensures that the simplification process preserves critical topological structures while effectively reducing redundancy, resulting in improved geometric fidelity and spatial coherence in generalized polylines.

3.2. Architecture for Retaining Crucial Geographic Features

Expressing the geographic characteristics of polylines involves constructing attributes and structural features between vertices and edges. The accurate representation of edge and vertex features during simplification forms the foundational data for polyline simplification research. Determining the simplification results at a given position from these edge and vertex features relies on the structure of the simplification model. In real-world geographic scenarios, the model’s ability to retain crucial geographic elements depends on effectively managing overall shape and local details. Loss of crucial geographic information while extracting features can lead to the inaccurate retention of representative geographic elements, such as capes and docks in coastlines. Therefore, this study utilizes position information and multi-scale geographic features to determine whether vertices are preserved. The structure of the polyline simplification model is illustrated in Figure 5.

3.2.1. Structure for Retaining Local Positional Information

The lack of local directionality in the graph convolutional layer, functioning as an information aggregator, results in a substantial loss of upper-level feature information during convolutional operations. As the number of graph convolutional layers increases, nearly every vertex aggregates information from the entire graph, diminishing the diversity of the local network structure for each vertex. This undermines the effective learning of vertex features. Additionally, the positional information crucial for describing the geographic location of polylines tends to be inadequately preserved in deep graph convolutional operations. This deficiency results in a failure to judge edge and vertex positional information during polyline simplification, overlooking key geographic details. Consequently, the ability of polyline simplification models based on graph convolution to preserve crucial geographic elements is compromised.
Residual structures, with their skip connections, enable the fusion of shallow and deep features, improving the network’s convergence speed and mitigating the gradient vanishing problem in deep networks. Therefore, this study incorporates residual structures to retain low-dimensional features and local positional information, enhancing the model’s ability to preserve critical geographical information within polyline elements. This approach addresses the issue of information loss during the extraction of vertex and edge features. Figure 6 illustrates the proposed graph convolutional module with the integrated residual structure.
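As a rough illustration (not the exact module of Figure 6), a residual graph convolution block can be sketched as follows; `ChebGraphConv` refers to the illustrative layer sketched in Section 3.1.2, and the linear projection for matching channel widths is our addition.

```python
import tensorflow as tf

class ResidualGraphBlock(tf.keras.layers.Layer):
    """Graph convolution with a skip connection so shallow positional features survive depth."""

    def __init__(self, features: int, K: int = 3):
        super().__init__()
        self.conv = ChebGraphConv(features, K=K)
        self.project = tf.keras.layers.Dense(features, use_bias=False)  # match channel widths

    def call(self, X, L_tilde):
        # Deep shape features plus (projected) low-dimensional positional features.
        return self.conv(X, L_tilde) + self.project(X)
```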

3.2.2. Structure for Extracting Multi-Scale Geographic Information

In addition to enhancing the extraction of positional information in the simplification model to address the challenge of retaining crucial geographic features, the accurate representation of multi-scale features in polylines imposes constraints on vertex retention during simplification.
In graph convolution, the size of the receptive field determines the range of information acquisition in polyline simplification. As the distance between crucial vertices and the current vertices varies in polyline simplification, improving the retention of crucial geographic features hinges on expressing local and global information of the current vertex through multi-scale feature extraction. If the receptive field is too small, the simplified model can only capture the local features of the current vertex, overlooking the correlation between adjacent vertices. If the receptive field is too large, the simplified model may include redundant information and struggle to discern local details. Thus, this study employs a multi-scale graph convolutional structure (Figure 7) and utilizes varying receptive fields during convolution. The multi-scale structure can extract features at different scales simultaneously, augmenting the polyline simplification model’s ability to extract geographic features and retain crucial geographic elements. It can distinguish vertices from local details and global perspectives.
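One possible realization of this multi-scale structure, again only a sketch under our assumptions, runs parallel Chebyshev graph convolutions with different orders K (and hence different receptive fields) and concatenates their outputs per vertex.

```python
import tensorflow as tf

class MultiScaleGraphConv(tf.keras.layers.Layer):
    """Parallel graph convolutions with different receptive fields, concatenated per vertex."""

    def __init__(self, features: int, scales=(1, 2, 3)):
        super().__init__()
        # K = 1 sees only the vertex itself, K = 2 adds direct neighbours, K = 3 adds two-hop context.
        self.branches = [ChebGraphConv(features, K=k) for k in scales]

    def call(self, X, L_tilde):
        return tf.concat([branch(X, L_tilde) for branch in self.branches], axis=-1)
```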

3.3. Loss Function

The polyline simplification in this study is achieved by determining whether each vertex is retained, and the cross-entropy loss commonly employed in classification tasks is used to optimize the model. However, due to the imbalance between the numbers of retained and non-retained vertices during the simplification process, the model tends to favor the class with more samples, leading to abnormal simplification results. To address this issue, a weighted binary cross-entropy loss function is adopted, introducing weighting coefficients to balance the impact of positive and negative samples on the loss. The optimization function is expressed as follows:
$$L = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log p_i + w \left( 1 - y_i \right) \log \left( 1 - p_i \right) \right]$$
where $N$ is the number of samples, $y_i$ represents the true label (0 or 1), $p_i$ represents the probability of the model predicting the positive class, and $w$ is the weighting coefficient that balances the two classes.
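A compact sketch of this weighted binary cross-entropy in TensorFlow 2 is shown below; the default weight of 5 follows the coastline experiment reported later, and the clipping constant is our addition for numerical stability.

```python
import tensorflow as tf

def weighted_bce(y_true, p_pred, w=5.0, eps=1e-7):
    """Weighted binary cross-entropy; w rebalances the (1 - y) term as in the formula above."""
    p = tf.clip_by_value(p_pred, eps, 1.0 - eps)  # avoid log(0)
    per_vertex = -(y_true * tf.math.log(p) + w * (1.0 - y_true) * tf.math.log(1.0 - p))
    return tf.reduce_mean(per_vertex)
```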

4. Experiments

This study constructs the aforementioned VE-GCN using the TensorFlow 2 framework. Two datasets, one for roads and another for coastlines, are established to conduct experiments and analyze the model.

4.1. Evaluation Metrics

Evaluating the results of map generalization has long been a significant challenge in cartography, with no perfect quantitative metric established over the years. In current polyline simplification research, the quality of simplification is commonly assessed by measuring the similarity between two polylines. Key metrics for evaluating curve similarity include the Hausdorff distance and the Fréchet distance. The Hausdorff distance quantifies the similarity between two sets of vertices. Assuming there are two sets $A = \{a_1, \dots, a_p\}$ and $B = \{b_1, \dots, b_q\}$, the Hausdorff distance between these two vertex sets is defined as
$$H(A, B) = \max\{\, h(A, B),\ h(B, A) \,\}$$
where
$$h(A, B) = \max_{a \in A} \min_{b \in B} \lVert a - b \rVert$$
The Hausdorff distance measures the maximum mismatch between two sets of vertices, considering all points of one curve as a set and calculating the distance between the two sets of curve vertices.
The Fréchet distance characterizes path–space similarity between two continuous curves $A$ and $B$ in a metric space $S$, i.e., $A: [0, 1] \to S$, $B: [0, 1] \to S$. Let $\alpha$ and $\beta$ be continuous, non-decreasing reparameterization functions of the unit interval, i.e., $\alpha: [0, 1] \to [0, 1]$, $\beta: [0, 1] \to [0, 1]$. The Fréchet distance between curves $A$ and $B$ is then defined as
$$F(A, B) = \inf_{\alpha, \beta}\ \max_{t \in [0, 1]} d\big( A(\alpha(t)),\, B(\beta(t)) \big)$$
where d is the metric function on S. The Fréchet distance represents the shortest maximum distance between two directed curves that cannot backtrack.
Although Hausdorff and Fréchet distances are commonly used to measure curve similarity, both are highly sensitive to extreme values (outliers). The presence of a significantly deviant noise point on a curve can overshadow the calculation results of other segments, hindering the measurement of the average distance between the two curves. Therefore, this study primarily uses the differential distance to express the similarity between polylines before and after simplification. The differential distance is the area between the two polylines before and after simplification divided by the length, representing the differential average distance between the two polylines.
$$\mathrm{differential\ distance} = \frac{S_{C_1 C_2}}{\mathrm{ave}\left( L_{C_1}, L_{C_2} \right)}$$
where $S_{C_1 C_2}$ is the area enclosed between the two polylines $C_1$ and $C_2$, and $L_{C_1}$ and $L_{C_2}$ are their lengths.
This study also uses the vertex retention rate to quantify the simplification rates of polylines. The vertex retention rate is the ratio of retained points after simplification to the total number of vertices before simplification. The formula is as follows:
$$\mathrm{vertex\ retention\ rate} = \frac{N_{C_2}}{N_{C_1}}$$
where $N_{C_1}$ and $N_{C_2}$ are the numbers of vertices before and after simplification, respectively.
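For reference, the metrics above can be computed along the following lines. The use of SciPy's directed Hausdorff routine and the Shapely-based approximation of the enclosed area are our assumptions about one possible implementation, not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from shapely.geometry import LineString, Polygon

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two (n, 2) vertex arrays."""
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])

def vertex_retention_rate(original, simplified):
    return len(simplified) / len(original)

def differential_distance(original, simplified):
    # Close the two polylines into one ring and take the enclosed area; buffer(0)
    # repairs self-intersections. Dividing by the average length gives the metric above.
    ring = np.vstack([original, simplified[::-1]])
    area = Polygon(ring).buffer(0).area
    ave_len = 0.5 * (LineString(original).length + LineString(simplified).length)
    return area / ave_len
```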
Similarity rate and simplification rate can only indirectly reflect the geometric accuracy of the simplification results. However, these metrics do not fully capture the preservation of essential geographic spatial features and the overall cartographic quality of the simplification results. Given that polyline simplification aims not only to reduce data complexity but also to maintain the spatial readability and functional accuracy of geographic features, relying solely on similarity-based metrics is a pragmatic yet incomplete approach. Therefore, a comprehensive evaluation should integrate qualitative assessments, such as visual comparisons, expert reviews, and perceptual analysis, to ensure that the simplification retains critical spatial patterns and remains effective for practical cartographic applications.

4.2. Road Simplification Experiment

As roads are among the most representative polyline features on maps, road simplification has become an integral component of multi-scale urban modeling and cartographic generalization research. To validate the effectiveness of the method presented in this study, we construct a road dataset using Douglas–Peucker simplification and conduct polyline simplification experiments on it using the proposed VE-GCN.

4.2.1. Data Source

The urban road data used in this study are sourced from “The Master Plan of Development for Beijing (2016–2035)”, published by the Beijing Municipal Government. The data include the planned road network system map of the central urban area, the planned road network of regional trunk highways, and the planned road network of highway hubs. After vectorization, the road network data of Beijing city are obtained (Figure 8). The central urban area road network includes urban expressways, main roads, and secondary roads, whereas the regional trunk highway network includes highways, primary roads, and secondary roads. The road network data consist of a total of 2216 roads, with a total length of approximately 14,968 km, covering a range of road lengths from 98 m to 187.6 km.

4.2.2. Result

In this experiment, the loss weight is set to 10, resulting in a vertex retention rate of approximately 0.42 for the simplified roads. The differential distance to the ground truth (GT) is 0.04, although the Hausdorff and Fréchet distances are approximately 0.64. To vividly demonstrate the effectiveness of our method in road line simplification, Figure 9 presents detailed views of the road lines before and after simplification. From left to right, the images show the input original large-scale road data, the roads simplified by the D–P algorithm (label), and the road simplification results output by our model.
Upon closer inspection of local details, our method substantially reduces the number of vertices by preserving only one vertex at the bends of the road after the simplification. This enables us to maintain the shape of the road while greatly reducing the amount of data.
From a holistic perspective, our method accurately maintains the curvature of roads, in curved and straight shapes, and effectively preserves the primary morphological features of roads at the target scale while removing minor details. This reduces the storage pressure of vector road data and complex road network data, achieving road line simplification.

4.3. Coastline Simplification Experiment

To further validate the practicality and versatility of our method, we use coastline data from the national geographic database to create a dataset for training the model. We then conduct experiments and further analysis by applying the VE-GCN for polyline simplification.

4.3.1. Data Source and Processing

The coastline data in this study are sourced from the 1:250,000 and 1:1,000,000 national primary geographic datasets in the topographic element database of the national basic geographic information database. Parts of Fujian, Guangdong, Guangxi, and Hainan provinces are included. A total of 2497 coastlines at the 1:250,000 scale and 576 coastlines at the 1:1,000,000 scale, with a total length of approximately 8215 km, are obtained (Figure 10). To save data processing time and keep the procedure standardized, we directly use the original national geographic information data without performing homogenization or other accuracy-improving operations.
Figure 11 displays a segment of the 1:1,000,000 coastline (shown in red) and the 1:250,000 coastline (shown in green) from the national basic geographic information database. However, we note a discrepancy between the two scales of coastlines. Following the principle of “approximate deviation for shorter distances” in geography’s first law, we manually select matching vertices between the two coastline scales (indicated by blue arrows in Figure 11) to obtain the corrected 1:1,000,000 coastline (shown in orange). Additionally, as the starting points of the coastlines in the database are usually inconsistent between the two scales, the data must be reorganized. Any excessive length needs to be truncated to construct the experimental dataset required for our method.

4.3.2. Result

The experimental results indicate that, when the loss weight is set to 5, the quality of the coastline dataset at a 1:1,000,000 scale simplified by our model reaches the same level as the labeled data (Figure 12). The vertex retention rate is approximately 0.26, the differential distance to the GT is 0.18, and the Hausdorff and Fréchet distances are 1.36 and 1.39, respectively. The coastline curve smoothly transitions from a complex large-scale coastline to a simple small-scale coastline, maintaining consistent polyline simplification levels at dense and sparse vertices. The simplified coastline retains the shape characteristics of the coastline at a large scale in the curved and straight parts of the lines and dense and sparse vertices. In regions with different vertex densities, the simplified coastline matches the vertex density at the same small scale. For example, the model deletes middle vertices in a sizeable straight range within the zoomed area, retaining apparent break vertices. The model preserves vertices in areas with significant curvature, such as the bend at the lower left corner of zoomed Region 2. Regardless of whether the regions are sparse or dense, the simplification maintains an appropriate number of vertices. In high-density areas (e.g., magnified Region 1 and Region 2), many redundant vertices are deleted, whereas, in less dense areas (e.g., magnified Region 3 and Region 4), fewer vertices are removed. This approach ensures a relatively consistent vertex density across different regions after simplification, preserving the natural variability and spatial characteristics of the coastline. Thus, the vector polyline simplification model proposed in this study not only considers the overall structure consistency of the polylines but also accounts for vertex density variations in different regions. The model achieves a simplified coastline at the 1:1,000,000 scale that matches the quality of the original labeled data.

5. Discussion

5.1. Parameter Discussion

Our method utilizes a weighted cross-entropy loss for model optimization. However, the issue of class imbalance makes the weight of the cross-entropy loss crucial for the degree of line simplification. The aforementioned issue stems from a considerable difference in the number of retained versus deleted vertices in coastline simplification. Therefore, in the coastline experiment, we conduct trials with different weights to optimize the model. The aim is to identify the most suitable weight for simplifying the data of the coastline at the 1:250,000 scale to a 1:1,000,000 scale. As illustrated in Figure 13, the greater the weight, the less the degree of simplification by the model and vice versa.
To determine the weight for weighted cross-entropy, ten different weights are tested in the experiment. Figure 14 displays the average vertex retention rate and differential distance on the test set after simplification by models trained with different cross-entropy weights. The dotted line represents the label values. These two indicators collectively reflect the model’s simplification ability. As the weight increases, the degree of simplification decreases, the vertex retention rate increases, and the differential distance decreases. The model with a weight of 3 to 4 is closest to the label in vertex retention rate, and that with a weight of 6 to 7 is closest to the label in differential distance. Considering the vertex retention rate and differential distance, we select 5 as the weighted cross-entropy loss weight for our method. Figure 15 illustrates that the simplification result with a weight of 5 is the closest to the 1:1,000,000 coastline data (labels) in terms of shape characteristics and bending degree. Particularly, the simplification results obtained with a weight of 5 correspond closely to the level of simplification and detail retention observed in the coastline drawn at the 1:1,000,000 scale by experts in the bay and cape areas. This further confirms the effectiveness of the VE-GCN proposed in this study in crucial geographic locations.
Similarly, for simplification requirements of other scales, the optimal simplification effect can also be achieved by adjusting the weights.

5.2. Comparison with Existing Methods

To demonstrate the effectiveness and progressiveness of our approach, we compare it with traditional algorithms, including the D–P algorithm [7], the hexagon clustering method [42], the ternary bend group simplification method [31], and the support vector machine (SVM) method [15]. Table 1 provides a comparative analysis of the evaluation metrics between our proposed method and the existing ones. It highlights the superior data compression capabilities of the existing methods and demonstrates their stable performance with minimal significant errors. Nevertheless, the SVM method encounters difficulties in fitting during training, and the distance before and after simplification is relatively large.
However, similarity metrics cannot fully measure the quality of map generalization. From the perspective of map visualization, traditional methods still lack the level of intelligence required for practical cartographic generalization, often failing to handle geographical shapes in a targeted and intelligent manner. Figure 15 shows the visualization results of different algorithms applied to the coastline simplification task. During the map generalization process, narrow river estuaries are often simplified away and replaced by river elements. In Region 1, the river estuary located in the central part of the coastline (highlighted by the red circle) is not successfully simplified by any of the four existing algorithms, and sharp angles are introduced. Since the river estuary’s depth is greater than the adjacent coastline segments, increasing the simplification threshold leads to the loss of critical vertices in the middle. However, our proposed method successfully simplifies the river estuary while preserving the overall shape of the coastline. For wider estuaries, map generalization typically retains the outer wider sections and simplifies the narrower inner parts into river elements. In Region 2, most of the coastline consists of river estuaries. Our method successfully retains the wider outer parts of the estuary while simplifying the inner sections (highlighted by the red circle), a result not achieved by the four existing algorithms. For larger or more significant features and phenomena, map generalization usually prioritizes preserving these critical characteristics. Region 3 shows the simplification of the important deep-water port of Yangpu Port in Hainan (highlighted by the red circle). None of the four existing algorithms retain the port’s shape, with the Douglas–Peucker method even introducing severe self-intersections. In contrast, our approach successfully preserves this critical geographical feature. In this test, the SVM-based method fails to effectively simplify polylines, showing noticeable imbalance—retaining excessive details in some areas while missing critical regions entirely. Moreover, errors such as the incorrect removal of start and end vertices result in incomplete polylines.
The results indicate that the traditional geometry methods fail to consider the geographic characteristics of map data, resulting in an inability to reasonably retain or delete geographic features during polyline simplification. This limitation ultimately disrupts the fundamental morphology of coastlines. Although the SVM method is commonly used in machine learning, its overly simplistic structure is incapable of learning the overall geographic features of polylines and the simplification rules employed by cartographic experts during cartographic generalization. Consequently, it cannot effectively simplify polylines. By contrast, the polyline simplification method proposed in this study takes a comprehensive approach by considering the geographic information embedded in the polylines. The method learns the geometric characteristics, geographic features, and spatial positions of the polylines. By selectively preserving various cartographic objects, such as representative geographic elements (e.g., wharves and capes), the map generalization becomes rational.
The simplification results better align with the intended simplification by cartographic experts, achieving effective knowledge transfer from expert-driven simplification results to intelligent polyline simplification. Furthermore, this approach reduces the need for additional operations, such as manual parameter adjustments, enhancing the automation and intelligence of the polyline simplification.

5.3. Ablation Experiment for Integrating Edge Features

This study employs an enhanced graph convolutional layer to formulate the VE-GCN, providing a more comprehensive incorporation of edge features than conventional graph convolutions. To thoroughly validate the effectiveness of the proposed VE-GCN, an ablation analysis of the integrated edge features is conducted. Table 2 and Figure 16 present a comparison of the simplification results between the baseline graph convolutional model using only vertex features and the proposed method. The data in the table indicate that, when using a weighted cross-entropy loss with a weight of 5, both the baseline graph convolution utilizing only vertex features and the proposed method have a vertex retention rate of approximately 0.25. The differential distance between the results of the baseline graph convolutional network using only vertex features and the GT is 1.06, whereas that of our method is 0.18. At the same level of simplification, the proposed method yields a higher similarity to the GT.
Figure 16 compares the original graph convolutional model with our proposed method in the coastline simplification task. In Region 1, our method successfully retains several critical inflection vertices during the simplification process, preserving the overall shape of the coastline. In contrast, the graph convolutional model that relies solely on vertex features retains incorrect inflections, leading to significant changes in the coastline’s shape. In Region 2, our method effectively maintains the curvature of the coastline, whereas the vertex-feature-only graph convolutional model removes all the intermediate vertices, resulting in an unrealistic straight line. In Region 3, our approach successfully preserves the port, a critical geographical feature, during simplification, while the vertex-feature-only model fails to correctly retain the island’s shape.
Overall, under the same level of simplification, our method preserves more shape features. By treating vertices as fundamental elements for coastline simplification and fully integrating the descriptions of both vertex and edge features, our approach achieves better preservation of the coastline’s geometric characteristics, yielding smoother simplification results. Conventional graph convolutional networks perform convolutional operations using only vertex features, whereas our proposed VE-GCN performs separate convolutions on vertex and edge features, merging these two feature types to better suit the simplification of vector polyline segments. By making fuller use of edge features in polylines, our method outperforms graph convolutional networks that use only vertex features, particularly in extracting geographic features of coastlines and achieving reasonable simplifications.

5.4. Ablation Experiment for the Architecture for Retaining Crucial Geographic Features

In this study, the VE-GCN incorporates an architecture for retaining crucial geographic features, comprising a structure for retaining local positional information and another for extracting multi-scale geographic information. To thoroughly assess the efficacy of this architecture in retaining crucial geographic features during coastline simplification, an ablation analysis is conducted. Table 3 and Figure 17 present comparisons among various models: manually annotated coastlines at different scales (GT), a baseline model without the architecture for retaining crucial geographic features, models with only the structure for retaining local positional information (denoted as “L” in Table 3 and Figure 17), models with only the structure for extracting multi-scale geographic information (denoted as “M” in Table 3 and Figure 17), and models optimized with both structures (our method). These comparisons illustrate the simplification results of each model. The data in the table indicate that, under the same condition of using a weight of 5, the complete architecture for retaining crucial geographic features substantially enhances the similarity of the model’s simplification results to the GT. The differential distance decreases from 0.83 to 0.18, indicating a marked improvement in the model’s ability to learn the characteristics of polyline data.
As shown in Figure 17, the visual comparison of coastline simplification results indicates that, compared to the baseline model, the model incorporating the structure for retaining local positional information performs significantly better at preserving the details of critical vertices along the coastline. Meanwhile, the model equipped with the structure for extracting multi-scale geographic features better retains the overall shape information of the coastline at smaller scales. However, the model with only the structure for retaining local positional information fails to effectively preserve the overall structural features of the coastline during simplification. The model with only the structure for extracting multi-scale geographic features struggles with detailed feature learning, resulting in excessive redundant structures and sharp angles being retained post-simplification.
Our proposed method combines the strengths of both structures, preserving the overall structural information of the coastline while making better decisions regarding critical vertices through local detail features, significantly reducing errors in the simplification process. Overall, the comparison of the coastline simplification results demonstrates that the proposed improved method can more comprehensively extract the geographical features of coastlines and exhibits higher efficiency and accuracy in simplification tasks.

6. Conclusions

This study proposes a vector polyline simplification method for map generalization based on a joint vertex–edge feature graph convolutional network (VE-GCN). To improve the adaptability of deep learning methods to map polyline data, we extended the graph convolutional layer to edge features, enabling the joint learning of vertex and edge features and thereby strengthening the model’s ability to extract geographic characteristics. We further developed an architecture for retaining crucial geographic features, which improves both feature extraction and simplification quality.
Experiments on real-world road and coastline datasets demonstrate that, compared with existing methods, our approach better accounts for the geographic characteristics of polylines during automated simplification. It effectively learns the properties of polylines and incorporates expert knowledge from map generalization. The simplified coastlines retain the original curve shapes and keep global spatial relationships intact, making vector map feature simplification more effective for cartographic generalization.
Although the proposed method achieves promising results, it is still at an early research stage. The model architecture is relatively simple, and there is room for further performance improvement; in particular, it still falls short in extracting higher-level features such as bends and entire roads. Moreover, roads usually appear as networks on maps, so future research should not be limited to simplifying individual polylines; we plan to extend this work to simplifying map features at the bend, road, and network levels.

Author Contributions

Conceptualization, Siqiong Chen, Yongyang Xu, and Zhong Xie; methodology, Siqiong Chen; software, Siqiong Chen; validation, Siqiong Chen; formal analysis, Siqiong Chen; investigation, Siqiong Chen; resources, Siqiong Chen; data curation, Siqiong Chen and Anna Hu; writing—original draft preparation, Siqiong Chen and Anna Hu; writing—review and editing, Siqiong Chen, Anna Hu, Yongyang Xu, and Haitao Wang; visualization, Siqiong Chen; supervision, Yongyang Xu and Zhong Xie; project administration, Siqiong Chen and Yongyang Xu; funding acquisition, Yongyang Xu and Zhong Xie. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 42371454.

Data Availability Statement

The data presented in this study are openly available on FigShare at https://doi.org/10.6084/m9.figshare.21801400.v9 (accessed on 12 October 2024), reference number 21801400.

Acknowledgments

The authors would like to express special thanks to the editor and all the anonymous reviewers for their valuable comments that helped to improve the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yin, H.M. A generalization of geographic conditions maps constrained by both spatial and semantic scales. Acta Geod. Cartogr. Sin. 2021, 50, 426. [Google Scholar]
  2. Lan, Q.P. Research on Multi-Scale Concatenated Update Methods for Map Data. Ph.D. Thesis, Wuhan University, Wuhan, China, 2010. [Google Scholar]
  3. Shen, Y.L. Simplified Representation of Map Elements from Computer Vision Perspective. Ph.D. Thesis, Wuhan University, Wuhan, China, 2019. [Google Scholar]
  4. Du, J.W.; Wu, F.; Xing, R.X.; Gong, X.R.; Yu, L.Y. Segmentation and sampling method for complex polyline generalization based on a generative adversarial network. Geocarto Int. 2021, 37, 4158–4180. [Google Scholar] [CrossRef]
  5. Wu, F.; Gong, X.Y.; Du, J.W. Overview of the Research Progress in Automated Map Generalization. Acta Geod. Cartogr. Sin. 2017, 46, 1645–1664. [Google Scholar]
  6. Li, J.; Ma, J.S.; Shen, J.; Yang, M.M.; Liu, L. Improvements of linear features simplification algorithm based on vertexes clustering. J. Geomat. Sci. Technol. 2013, 30, 525–529, 534. [Google Scholar]
  7. Douglas, D.H.; Peucker, T.K. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartogr. Int. J. Geogr. Inf. Geovis. 1973, 10, 112–122. [Google Scholar] [CrossRef]
  8. Amiraghdam, A.; Diehl, A.; Pajarola, R. LOCALIS: Locally-adaptive Line Simplification for GPU-based Geographic Vector Data Visualization. Comput. Graph. Forum 2020, 39, 443–453. [Google Scholar] [CrossRef]
  9. Liu, B.; Liu, X.C.; Li, D.J.; Shi, Y.T.; Fernandez, G.; Wang, Y.D. A Vector Line Simplification Algorithm Based on the Douglas-Peucker Algorithm, Monotonic Chains and Dichotomy. ISPRS Int. J. Geo-Inf. 2020, 9, 251. [Google Scholar] [CrossRef]
  10. Wu, F.; Deng, H.Y. Using Genetic Algorithms for Solving Problems in Automated Line Simplification. Acta Geod. Cartogr. Sin. 2003, 32, 349–355. [Google Scholar]
  11. Jiang, B.; Nakos, B. Line Simplification Using Self-Organizing Maps. In Proceedings of the ISPRS Workshop on Spatial Analysis and Decision Making, Hong Kong, China, 3–5 December 2003. [Google Scholar]
  12. Yu, B.; Yin, H.T.; Zhu, Z.X. Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 3634–3640. [Google Scholar]
  13. Yan, X.F.; Ai, T.H.; Yang, M.; Yin, H.M. A graph convolutional neural network for classification of building patterns using spatial vector data. ISPRS J. Photogramm. Remote Sens. 2019, 150, 259–273. [Google Scholar] [CrossRef]
  14. Zheng, C.Y.; Guo, Q.S.; Hu, H.K. The Simplification Model of Linear Objects Based on Ant Colony Optimization Algorithm. Acta Geod. Cartogr. Sin. 2011, 40, 635–638. [Google Scholar]
  15. Duan, P.X.; Qian, H.Z.; He, H.W.; Xie, L.M.; Luo, D.H. A Line Simplification Method Based on Support Vector Machine. Geomat. Inf. Sci. Wuhan Univ. 2020, 45, 744–752. [Google Scholar]
  16. Cheng, B.; Liu, Q.; Li, X.; Wang, Y. Building simplification using backpropagation neural networks: A combination of cartographers’ expertise and raster-based local perception. GISci. Remote Sens. 2013, 50, 527–542. [Google Scholar] [CrossRef]
  17. Zhang, Q.N.; Liao, K. Line Generalization Based on Structure Analysis. Acta Sci. Nat. Univ. Sunyatseni 2001, 40, 118–121. [Google Scholar]
  18. Qian, H.Z.; Wu, F.; Chen, B.; Zhang, J.H.; Wang, J.Y. Simplifying Line with Oblique Dividing Curve Method. Acta Geod. Cartogr. Sin. 2007, 36, 443–449, 456. [Google Scholar]
  19. Visvalingam, M.; Whyatt, J.D. Line generalisation by repeated elimination of points. Cartogr. J. 1993, 30, 46–51. [Google Scholar] [CrossRef]
  20. Li, Z.L.; Openshaw, S. Algorithms for the Automated Line Generalization Based on Natural Principle of Objective Generalization. Int. J. Geogr. Inf. Syst. 1992, 6, 373–389. [Google Scholar] [CrossRef]
  21. Nakos, B.; Mitropoulos, V. Critical Points Detection Using the Length Ratio (LR) for Line Generalization. Cartographica 2003, 40, 35–51. [Google Scholar] [CrossRef]
  22. Teh, C.H.; Chin, R.T. On the Detection of Dominant Points on Digital Curves. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 859–872. [Google Scholar] [CrossRef]
  23. Chrobak, T.A. Numerical Method for Generalizing the Linear Elements of Large-Scale Maps, Based on the Example of Rivers. Cartogr. Int. J. Geogr. Inf. Geovis. 2000, 37, 49–56. [Google Scholar] [CrossRef]
  24. Zhu, K.P.; Wu, F.; Wang, H.L.; Zhu, Q. Improvement and Assessment of Li-Openshaw Algorithm. Acta Geod. Cartogr. Sin. 2007, 36, 450–456. [Google Scholar]
  25. Liu, H.M.; Fan, Z.D.; Xu, Z.; Deng, M. An Improved Local Length Ratio Method for Curve Simplification and Its Evaluation. Geogr. Geo-Inf. Sci. 2011, 27, 45–48. [Google Scholar]
  26. Deng, M.; Chen, J.; Li, Z.L.; Xu, Z. An Improved Local Measure Method for the Importance of Vertices in Curve Simplification. Geogr. Geo-Inf. Sci. 2009, 25, 40–43. [Google Scholar]
  27. Shen, Y.L.; Ai, T.H.; He, Y.K. A new approach to line simplification based on image processing: A case study of water area boundaries. ISPRS Int. J. Geo-Inf. 2018, 7, 41. [Google Scholar] [CrossRef]
  28. Wang, Z.S.; Müller, J.-C. Line Generalization Based on Analysis of Shape Characteristics. Cartogr. Geogr. Inf. Syst. 1998, 25, 3–15. [Google Scholar] [CrossRef]
  29. Li, J.H.; Wu, F.; Du, J.W.; Gong, X.Y.; Xing, R.X. Chart Depth Contour Simplification Based on Delaunay Triangulation. Geomat. Inf. Sci. Wuhan Univ. 2019, 44, 778–783. [Google Scholar]
  30. Ai, T.H.; Guo, R.Z.; Li, Y.L. A Binary Tree Representation of Curve Hierarchical Structure in Depth. Acta Geod. Cartogr. Sin. 2001, 30, 343–348. [Google Scholar]
  31. Huang, B.H.; Wu, F.; Zhai, R.J.; Gong, X.Y.; Li, J.H. The Line Feature Simplification Algorithm Preserving Curve Bend Feature. J. Geomat. Sci. Technol. 2014, 31, 533–537. [Google Scholar]
  32. Qian, H.Z.; He, H.W.; Wang, X.; Hu, H.M.; Liu, C. Line Feature Simplification Method Based on Bend Group Division. Geomat. Inf. Sci. Wuhan Univ. 2017, 42, 1096–1103. [Google Scholar]
  33. Ma, L. Features extraction of buildings and generalization using deep learning. In Proceedings of the 28th International Cartographic Conference, Washington, DC, USA, 2–7 July 2017. [Google Scholar]
  34. Kang, Y.; Gao, S.; Roth, R.E. Transferring multiscale map styles using generative adversarial networks. Int. J. Cartogr. 2019, 5, 115–141. [Google Scholar] [CrossRef]
  35. Courtial, A.; Ayedi, A.; Touya, G.; Zhang, X. Exploring the potential of deep learning segmentation for mountain roads generalisation. ISPRS Int. J. Geo-Inf. 2020, 9, 338. [Google Scholar] [CrossRef]
  36. Jiang, B.D.; Xu, S.F.; Li, Z.W. Polyline simplification using a region proposal network integrating raster and vector features. GISci. Remote Sens. 2023, 60, 2275427. [Google Scholar] [CrossRef]
  37. Yu, W.H.; Chen, Y.X. Data-driven polyline simplification using a stacked autoencoder-based deep neural network. Trans. GIS 2022, 26, 2302–2325. [Google Scholar] [CrossRef]
  38. Du, J.W.; Wu, F.; Zhu, L.; Liu, C.Y.; Wang, A.D. An ensemble learning simplification approach based on multiple machine-learning algorithms with the fusion using of raster and vector data and a use case of coastline simplification. Acta Geod. Cartogr. Sin. 2022, 51, 373–387. [Google Scholar]
  39. Guo, X.; Liu, J.N.; Wu, F.; Qian, H.Z. A Method for Intelligent Road Network Selection Based on Graph Neural Network. Data 2022, 7, 10. [Google Scholar] [CrossRef]
  40. Buffelli, D.; Vandin, F. The Impact of Global Structural Information in Graph Neural Networks Applications. ISPRS Int. J. Geo-Inf. 2023, 12, 336. [Google Scholar] [CrossRef]
  41. Defferrard, M.; Bresson, X.; Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016. [Google Scholar]
  42. Raposo, P. Scale-specific automated line simplification by vertex clustering on a hexagonal tessellation. Cartogr. Geogr. Inf. Sci. 2013, 40, 427–443. [Google Scholar] [CrossRef]
Figure 1. Structure of the VE-GCN.
Figure 2. Structures and properties of vertices (a) and edges (b) in the graphs.
Figure 3. Graph convolutional layers of vertex features (a) and graph convolutional layers extended to edge features (b).
Figure 4. Convolutional layer for spatial conversion.
Figure 5. Structure of the polyline simplification model.
Figure 6. Graph convolutional module with residual structure. The arrows in the graph structure indicate the direction of information propagation.
Figure 7. Structure of the multi-scale graph convolution.
Figure 8. Road network data for Beijing. Different colors represent different levels of roads.
Figure 9. Example of road simplification results.
Figure 10. Example display of coastline data.
Figure 11. Example of coastline data correction: red represents the 1:1,000,000 coastline, green represents the 1:250,000 coastline, the blue arrow indicates the corresponding vertex between the two coastline scales, and orange represents the 1:1,000,000 coastline after correction.
Figure 12. Examples of coastline simplification results.
Figure 13. Simplification results under different optimization approaches for cross-entropy weights.
Figure 14. Weighted cross-entropy weight and model simplification level.
Figure 15. Result comparison between existing methods and our method.
Figure 16. Result comparison of ablation experiment for integrating edge features.
Figure 17. Result comparison of ablation experiment for the architecture for retaining crucial geographic features.
Table 1. Evaluation metric comparison between existing methods and our method.

Method              | Vertex Retention Rate | Differential Distance | Hausdorff Distance | Fréchet Distance
D–P                 | 0.23 | 0.11 | 1.08 | 1.11
Hexagon clustering  | 0.19 | 0.11 | 1.15 | 1.17
Ternary bend groups | 0.24 | 0.10 | 1.12 | 1.14
SVM                 | 0.23 | 0.26 | 1.59 | 1.61
Our method          | 0.26 | 0.18 | 1.36 | 1.39
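For reference, distance metrics of the kind reported in Tables 1–3 can be computed as in the sketch below, which evaluates the symmetric Hausdorff distance and the discrete Fréchet distance between a simplified polyline and its reference; the vertex retention rate is, roughly, the fraction of original vertices kept. The differential distance follows the paper’s own definition and is omitted here; the helper names and toy coordinates are illustrative only.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(p, q):
    """Symmetric Hausdorff distance between two (n, 2) vertex arrays."""
    return max(directed_hausdorff(p, q)[0], directed_hausdorff(q, p)[0])

def discrete_frechet(p, q):
    """Discrete Frechet distance via dynamic programming over vertex pairs."""
    n, m = len(p), len(q)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # pairwise vertex distances
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            prev = min(ca[i - 1, j] if i > 0 else np.inf,
                       ca[i, j - 1] if j > 0 else np.inf,
                       ca[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            ca[i, j] = max(prev, d[i, j])
    return ca[-1, -1]

# Toy example: a simplified polyline versus its reference.
simplified = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, 0.0]])
reference = np.array([[0.0, 0.0], [1.0, 0.6], [2.0, 1.0], [3.0, 0.5], [4.0, 0.0]])
print(hausdorff(simplified, reference), discrete_frechet(simplified, reference))
```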
Table 2. Evaluation metric comparison of ablation experiment for integrating edge features.

Method              | Vertex Retention Rate | Differential Distance | Hausdorff Distance | Fréchet Distance
Only vertex feature | 0.24 | 1.06 | 3.65 | 3.68
Our method          | 0.26 | 0.18 | 1.36 | 1.39
Table 3. Evaluation metric comparison of ablation experiment for the architecture for retaining crucial geographic features.

Method             | Vertex Retention Rate | Differential Distance | Hausdorff Distance | Fréchet Distance
Baseline           | 0.10 | 0.83 | 4.16 | 4.17
Only L             | 0.23 | 0.25 | 1.68 | 1.72
Only M             | 0.27 | 0.20 | 1.49 | 1.52
Our method (M + L) | 0.26 | 0.18 | 1.36 | 1.39

