Article

Mapping Land Use from High Resolution Satellite Images by Exploiting the Spatial Arrangement of Land Cover Objects

Mengmeng Li and Alfred Stein

1 Key Lab of Spatial Data Mining & Information Sharing of Ministry of Education, Academy of Digital China (Fujian), Fuzhou University, Fuzhou 350108, China
2 Faculty of Geoinformation Science and Earth Observation (ITC), University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(24), 4158; https://doi.org/10.3390/rs12244158
Submission received: 27 October 2020 / Revised: 12 December 2020 / Accepted: 15 December 2020 / Published: 18 December 2020

Abstract: Spatial information on the arrangement of land cover objects plays an important role in distinguishing land use types at the land parcel or local neighborhood level. This study investigates the use of graph convolutional networks (GCNs) for characterizing spatial arrangement features for land use classification from high resolution remote sensing images, with particular interest in comparing land use classifications between different graph-based methods and between different remote sensing images. We examine three kinds of graph-based methods, i.e., feature engineering, graph kernels, and GCNs. Based upon the extracted arrangement features and features describing the spatial composition of land cover objects, we formulated ten land use classifications. We tested these on two different remote sensing images, acquired from the GaoFen-2 (with a spatial resolution of 0.8 m) and ZiYuan-3 (2.5 m) satellites in 2020 over Fuzhou City, China. Our results showed that land use classifications based on the arrangement features derived from GCNs achieved higher classification accuracy than those using graph kernels and handcrafted graph features, for both images. We also found that the contribution of arrangement features to separating land use types varies between the GaoFen-2 and ZiYuan-3 images, due to the difference in spatial resolution. This study offers a set of approaches for effectively mapping land use types from (very) high resolution satellite images.

1. Introduction

Mapping land use using high resolution (HR) satellite images is important for urban planning, socio-economic analysis, and geo-information database updating [1,2,3,4]. In particular, rapid urbanization and population growth pose major challenges for sustainable urban development, requiring up-to-date and detailed land use information for decision making [5,6]. In the past decade, remote sensing technologies have developed rapidly, many satellites have been launched, and vast amounts of remote sensing imagery acquired from various platforms and sensors are now accessible. Among them, (very) high spatial resolution remote sensing images, usually with a resolution finer than 5 m and acquired from the Worldview, GeoEye, GaoFen-2 (GF2), and ZiYuan-3 (ZY3) satellites, provide detailed views of the earth. These images have a high potential for land use mapping at the land parcel or local neighborhood level [7,8,9,10,11].
Automatic image classification is an essential operation for mapping land use from remote sensing images, and many studies have addressed remote sensing image classification [12,13,14,15]. Initially, methods for land use image classification were developed for classifying land cover types based upon low-level image features, e.g., spectral, textural, and contextual features [16,17]. These low-level features can separate relatively coarse land use categories on low- and medium-resolution remote sensing images. However, they fail to effectively describe the properties of land use categories, e.g., residential, commercial, and industrial land, at the urban neighborhood level [18]. In particular, on a high-resolution satellite image, a homogeneous land use area exhibits various spectral signatures, corresponding to different types of land cover objects. Here, a land use unit refers to a homogeneous land use area, such as a land parcel or a street block. The spatial arrangement of land cover objects within a land use unit can be complicated, leading to difficulties in effectively characterizing the structural properties of land use units [9]. Thus, it is necessary to derive high-level image features to improve the separability between land use types, e.g., using landscape metrics [19], visual bag-of-words (BOW) [20,21], or the latent Dirichlet allocation model [22]. So far, however, the complex topological relations between land cover objects are insufficiently dealt with by these methods.
Existing studies have shown that mining the topological relations between land cover objects improves land use classification performance by differentiating complex urban structures [23,24]. The main assumption underlying land use classification is that land use units with similar structures are more likely to have similar functional properties. For example, Li [9] proposed a hierarchical Bayesian model for extracting land use information from very high resolution (VHR) remote sensing images, in which the type of land use is inferred from functional variables represented by the spatial arrangement of land cover objects. The land cover objects are preliminarily classified from images, for which many methods can be found in the literature [25,26,27,28,29]. A widely used approach is object-based land cover classification by traditional machine learning algorithms, such as support vector machines or random forests, based upon spectral, textural, and geometrical information [28,30]. The classification can be further improved by including more advanced features, e.g., derived from extended multi-attribute profiles [26]. Furthermore, recent studies have shown that state-of-the-art land cover classifications tend to be achieved by deep learning methods [25,27]. An effective technique for describing and quantifying the topology of neighboring land cover objects is based upon graph theory. In a graph, the land cover objects and their relationships can be encoded as node attributes and edges. By doing so, the problem of characterizing the spatial structure of land use is transformed into the derivation of structural features from a graph that represents the topology of land cover objects within land use units. Walde [24] investigated handcrafted graph features (by feature engineering) to measure the structural properties of city blocks for classifying urban structure types. Deriving graph features in a handcrafted manner, however, requires considerable expert knowledge, limiting the generalizability of a classification method using these features. Lehner [31] investigated a tree-structured framework based upon object-based image analysis for urban structure type classification, in which a (sub)tree represents the topology of objects. The construction of this framework for a different application also relies on expert knowledge.
By contrast, automatic derivation of high-level structural features from graph-structured data has been widely used in network analysis, information mining, and computer vision [32,33,34]. Graph kernels measure the similarity between graphs, and they can be integrated with a kernel-based classifier, like a support vector machine (SVM), for graph classification. Kriege [33] grouped existing graph kernels into neighborhood aggregation methods (e.g., based on the Weisfeiler–Lehman algorithm), assignment- and matching-based methods, subgraph patterns, walks and paths, and kernels for graphs with continuous labels. Another popular strategy for handling graph-structured data is based on graph neural networks (GNNs) [32,35], which achieve state-of-the-art performance in graph feature extraction and graph classification. An important variant of graph neural networks is the graph convolutional network (GCN) [36]. Recently, Li [11] applied GCNs to urban land use classification from VHR satellite images with 0.5 m spatial resolution, and obtained promising results.
This study is an extension of [11], with the aim to exploit the spatial arrangement of land cover objects by graph-based methods for land use mapping from high resolution satellite images. We focus on extracting high-level structural features of land use using GCNs, and compare them with methods using feature engineering and graph kernels. Previous studies investigated land use classification by GCNs using VHR images with a spatial resolution finer than 1 m. In this study, we also analyze the applicability of these graph-based methods to land use classification using high resolution remote sensing images with a spatial resolution of 2.5 m.
The remainder of this paper is organized as follows. Section 2 introduces the study area and data, Section 3 describes the graph-based methods used for graph feature learning and land use classification, and Section 4 presents the experimental results and corresponding analysis. The discussion and conclusions are provided in Section 5 and Section 6, respectively.

2. Study Area and Data

The study area is located in the core region of Fuzhou City, the capital of Fujian province, China (Figure 1). It covers 210 km² and contains a large variety of land cover and land use types. For this study area, we acquired two remote sensing images, one from the GF2 satellite on 18 February 2020 and one from the ZY3 satellite on 10 April 2020. The GF2 image was captured by a PMS1 sensor with four 2.5 m resolution multi-spectral bands and one 0.78 m resolution panchromatic band. The ZY3 image has four multi-spectral bands with 6.78 m resolution and one panchromatic band with 2.38 m resolution. We applied pansharpening to fuse the multi-spectral and panchromatic bands of both images [37], and resampled the pansharpened images to 0.8 m and 2.5 m, respectively, for experimental convenience. The processed GF2 image has a size of 18,678 × 17,703 pixels, and the ZY3 image 5978 × 5666 pixels.
We collected ground truth land use data over the study area from the local surveying department to delineate homogeneous land use units. The land use samples used to train and test the land use classification algorithms were collected based on land use units derived from these ground truth data, which stem from the 3rd National Land Survey Project and were produced at the end of 2019. The classification system of the ground truth land use data follows the (Chinese) National Land Use Classification Standard (GB/T 21010-2017), with 12 first-level land use classes and 73 second-level classes. These data are the most detailed and accurate land use data available and are thus suitable for this study. We re-organized the official land use system at the first level into seven land use classes, see Table 1. We distinguish six main land use classes from high resolution remote sensing images: high-density residential (RH), low-density residential (RL), commercial (CM), industrial and warehouses (IW), green space and entertainment land (GE), and undeveloped land (UN). We group the remaining land use classes into others (OT), including public management and services, transportation, water body, and land for special use or with multiple categories. Water bodies are mapped separately, because they can easily be identified based upon the ratio of water coverage.

3. Methods

Figure 2 illustrates the workflow of land use mapping using graph-based methods. Specifically, we model the topological relationships between neighboring land cover objects, preliminarily obtained from remote sensing images, as a planar graph. Three feature extraction methods, based upon GCNs, feature engineering, and graph kernels, are used to learn structural features from the graph-structured data. The extracted structural features, integrated with features describing the spatial composition of land use, are then used for land use classification.

3.1. Land Cover Classification

Land cover classification is first conducted to obtain the key types of land cover objects. We apply a deep learning method based upon UNet [38] to obtain the land cover map from an HR image. UNet is a popular network for semantic segmentation in computer vision, and it has been successfully used for land cover classification from remote sensing images. Compared with traditional image classification methods based on machine learning, UNet automatically derives high-level semantic features of images and performs semantic segmentation in an end-to-end way. In this study, the adopted UNet uses the four multispectral bands of the HR images to classify seven land cover types [39], i.e., trees, grass, shadow, water, bare soil, buildings, and others.
The main parameters of the UNet classification were set as follows: a stochastic gradient descent with momentum optimizer was used with a momentum of 0.9, and the initial learning rate was set to 0.05 with L2 regularization. The training dataset was collected by digitizing the HR images into ground truth land cover areas. Because the two images (GF2 and ZY3) were acquired in the same year, we collected the training dataset (i.e., reference land cover areas) mainly based on the GF2 image, and then re-used this dataset, with slight corrections, to classify land cover on the ZY3 image.
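The snippet below sketches this training configuration in PyTorch. It is a minimal illustration, not the authors' implementation: the tiny convolutional stand-in network, the weight-decay value, and the random batch are assumptions; in practice a full UNet [38] and the digitized training tiles would be used.

```python
import torch
import torch.nn as nn

# Stand-in for the UNet of [38]: any model mapping (B, 4, H, W) image tiles
# to (B, 7, H, W) class scores fits here.
model = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 7, kernel_size=1),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.05,            # initial learning rate (paper)
    momentum=0.9,       # SGDM momentum (paper)
    weight_decay=1e-4,  # L2 regularization; the exact value is assumed
)

# One illustrative training step on a random batch.
images = torch.randn(2, 4, 128, 128)          # four multispectral bands
labels = torch.randint(0, 7, (2, 128, 128))   # per-pixel land cover labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```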

3.2. Characterizing Urban Structures by Graphs

The arrangement of land cover objects and their structures plays an important role in separating land use types on an HR image. We use graph theory to model the topological relations between neighboring land cover objects, representing their pairwise relations in a graph. Let $G = (V, E)$ be a graph with a set of nodes $V = \{v_1, \ldots, v_n\}$ and a set of edges $E = \{e_1, \ldots, e_m\}$, where $n$ and $m$ are the numbers of nodes and edges, respectively. A node $v$ refers to either a land cover object or a land use unit, denoted $v_{LC}$ or $v_{LU}$, respectively, and an edge $e$ refers to the linkage of two nodes. In addition, features $x$ can be assigned to each node, leading to a feature matrix for all nodes, $X = [x_1, \ldots, x_n]^T$. High-level structural features are extracted from the graph-structured data and then used for land use classification.
To create a graph $G$, we first determine the graph nodes $V$, i.e., the basic spatial objects for analysis, and obtain the edges $E$ that model their pairwise relations. In this study, we used image objects created from image segmentation as nodes, while graph edges describe the spatial adjacency of image objects within a land use unit. More specifically, image segmentation was conducted by multi-resolution segmentation implemented in the eCognition software. For the GF2 image, we set the scale parameter to 60, and to 30 for the ZY3 image because of its lower spatial resolution. The obtained image objects were over-segmented to maintain fine spatial details [11]. The adjacency relationships between neighboring image objects were specified by an adjacency matrix, computed from a Delaunay triangulation built upon the centroids of the image segments (Figure 2). The use of Delaunay triangulation to determine the adjacency of neighboring spatial objects was previously investigated in [40].
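A minimal sketch of this graph construction step is given below, assuming segment centroids are already available from the segmentation; the function name and example coordinates are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def adjacency_from_centroids(centroids):
    """Connect image segments whose centroids share an edge in a
    Delaunay triangulation, yielding a symmetric adjacency matrix."""
    centroids = np.asarray(centroids, dtype=float)  # (n, 2) array of (x, y)
    tri = Delaunay(centroids)
    n = len(centroids)
    A = np.zeros((n, n), dtype=int)
    for simplex in tri.simplices:                   # each simplex is a triangle
        for i in range(3):
            a, b = simplex[i], simplex[(i + 1) % 3]
            A[a, b] = A[b, a] = 1                   # undirected edge
    return A

# Example: five segment centroids in map coordinates.
A = adjacency_from_centroids([(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)])
print(A.sum() // 2, "edges")
```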

3.2.1. Graph Features Derived via Graph Convolutional Networks

For a graph $G$, a GCN learns a hidden representation $h_v$ for each node $v$ by aggregating its neighborhood information. Let $h_i^{(l)}$ be the feature vector of node $v_i$ at the $l$th GCN layer, and $H^{(l)}$ the feature matrix of all nodes $V$. The $l$th GCN layer aggregates information via the following layer-wise propagation rule [36]:
$$H^{(l)} = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l-1)} W^{(l-1)}\right),$$
where $\tilde{A} = A + I$ is the adjacency matrix of graph $G$ with added self-connections, $I$ is the identity matrix, $\tilde{D}$ is the degree matrix of $\tilde{A}$ with $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$, $W^{(l-1)}$ is a learned weight matrix, $H^{(0)}$ equals $X$, and $\sigma$ represents an activation function, such as the ReLU function. The labels of the nodes $V$ can be predicted as $\hat{Y}$ at the last layer of the GCN using the softmax function:
$$\hat{Y} = \mathrm{softmax}\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l-1)} W^{(l)}\right),$$
where $\mathrm{softmax}(x_i) = \exp(x_i) / \sum_i \exp(x_i)$.
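The propagation rule and the softmax output are compact enough to sketch directly. The NumPy fragment below is an illustrative forward pass only: the toy graph and the weights are random here, whereas in the paper the weights are learned.

```python
import numpy as np

def gcn_layer(A, H, W, activation=lambda x: np.maximum(x, 0)):
    """One GCN layer: sigma(D^{-1/2} (A + I) D^{-1/2} H W) [36]."""
    A_tilde = A + np.eye(A.shape[0])              # add self-connections
    d = A_tilde.sum(axis=1)                       # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return activation(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # toy 3-node graph
X = rng.normal(size=(3, 5))                                   # node features
W1 = rng.normal(size=(5, 24))   # 24 hidden units (the paper's setting)
W2 = rng.normal(size=(24, 7))   # 7 land use classes
H1 = gcn_layer(A, X, W1)                                       # hidden layer (ReLU)
Y_hat = softmax(gcn_layer(A, H1, W2, activation=lambda x: x))  # class probabilities
```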
We use a two-layer GCN for land use classification [11]. More specifically, we create a subgraph $G_i$ for each land use unit based upon two types of nodes: a number of land cover nodes $v_{LC}$ and one land use node $v_{LU}$. A land cover node refers to a land cover object within the land use unit, and its edges model the linkage between adjacent land cover objects. For the land use node, its edges model the linkage between the land cover objects and the land use unit. In urban areas, the number of land cover objects varies between land use units, leading to large variation among the subgraphs $G_i$ representing land use units. Li [11] proposed compressing the graph that models the relations between land cover objects into a graph that models the relations between land cover types, by creating an adjacency unit matrix (AUM). We use the same strategy to compress the size of the graphs representing land use units.
Moreover, we use a Bayesian method to integrate information on spatial arrangement and spatial composition. Let $C$ be the class variable of land use types with $K_{LU}$ classes, $F_{SA}$ the attribute variable of spatial arrangement, and $F_{SC}$ the attribute variable of spatial composition. We assume that $F_{SA}$ is independent of $F_{SC}$. Based upon Bayes' theorem [11], the label of an unclassified land use unit is assigned by
$$C^* = \arg\max_{C_i} \; p(C_i)\, p(F_{SC} \mid C_i)\, p(F_{SA} \mid C_i), \quad i = 1, \ldots, K_{LU}.$$
Here, the prior probability $p(C_i)$ was set equal for all land use classes, and the conditional probability $p(F_{SA} \mid C_i)$ was approximated by the probabilistic output $\hat{Y}$ of the GCN. Regarding the parameter settings of the GCN model, we set the number of hidden units to 24, the maximum number of epochs to 400, the initial learning rate to 0.01 with an L2 loss, and the dropout rate to 0.5 [11]. The conditional probability $p(F_{SC} \mid C_i)$ was approximated by the probabilistic output of an SVM classification using a histogram intersection kernel [41], based on spatial composition features. More specifically, the spatial composition features refer to the coverage ratio and density of a specific land cover class within a land use unit [9].
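The fusion step can be sketched as follows. The histogram intersection kernel and the product of the two probabilistic outputs follow the description above; the feature dimensions and the random stand-in data are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def histogram_intersection(X, Y):
    """Histogram intersection kernel: k(x, y) = sum_j min(x_j, y_j) [41]."""
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

rng = np.random.default_rng(0)
# Hypothetical spatial composition features (coverage ratio and density per
# land cover class) for 40 training and 10 unclassified land use units.
X_train = rng.random((40, 14))
y_train = np.arange(40) % 7            # 7 land use classes, all present
X_new = rng.random((10, 14))

svm = SVC(kernel=histogram_intersection, probability=True).fit(X_train, y_train)
p_sc = svm.predict_proba(X_new)        # approximates p(F_SC | C_i)
p_sa = rng.dirichlet(np.ones(7), 10)   # stand-in for the GCN softmax p(F_SA | C_i)

# Equal priors p(C_i) cancel in the argmax of Bayes' rule.
labels = np.argmax(p_sc * p_sa, axis=1)
```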

3.2.2. Graph Features Derived via Graph Kernels

In machine learning and pattern recognition, kernel methods have also been widely used for handling graph-structured data, in the form of graph kernels, particularly in combination with the SVM classifier. We compare land use classifications between the structural features extracted by GCNs and by graph kernels.
Given data points $x, x' \in \mathcal{X}$, a kernel $k$ is a function $k(x, x') = \langle \phi(x), \phi(x') \rangle$, where $\phi$ denotes a mapping from $\mathcal{X}$ to a feature space $\mathcal{H}$, e.g., a Hilbert space [42]. We are interested in constructing kernels for graph-structured data, i.e., graph kernels $k(G, G')$, where $G$ and $G'$ are graphs. Recently, Kriege [33] conducted a comprehensive review of graph kernels, analyzing their expressivity, non-linear decision boundaries, accuracy, and agreement on benchmark graph-structured datasets. Among them, the Weisfeiler–Lehman subtree kernel [43] achieved the best overall performance and the state-of-the-art in graph classification, motivating our choice of graph kernel in this study. This kernel is a successful instance of the Weisfeiler–Lehman kernel framework [43], which is built upon the Weisfeiler–Lehman test of isomorphism [44]: two graphs $G$ and $G'$ are considered isomorphic if and only if each pair of nodes in $G$ is connected by an edge in the same way as the corresponding pair of nodes in $G'$. See [43] for a detailed description of the Weisfeiler–Lehman subtree kernel. For the subtree kernel, we set its main parameter, i.e., the number of iterations, to 6.
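For intuition, the kernel admits a compact implementation: nodes are iteratively relabeled by their own label plus the sorted labels of their neighbors, and the kernel value counts the labels shared across all iterations. The sketch below is a simplified version (using Python's hash for label compression, ignoring the rare possibility of collisions); a production implementation would follow [43]. The resulting Gram matrix over all land use unit graphs can then be passed to an SVM with a precomputed kernel.

```python
from collections import Counter

def wl_features(adj, labels, n_iter=6):
    """Multiset of node labels over all WL refinement iterations.
    adj: adjacency list; labels: initial node labels (e.g., land cover types)."""
    feats = Counter(labels)
    for _ in range(n_iter):
        # Compress (own label, sorted neighbor labels) into a new label.
        labels = [hash((labels[v], tuple(sorted(labels[u] for u in adj[v]))))
                  for v in range(len(adj))]
        feats.update(labels)
    return feats

def wl_subtree_kernel(g1, g2, n_iter=6):
    """k(G, G') = dot product of the WL label-count vectors [43]."""
    f1, f2 = wl_features(*g1, n_iter=n_iter), wl_features(*g2, n_iter=n_iter)
    return sum(f1[lab] * f2[lab] for lab in f1.keys() & f2.keys())

# Two toy unit graphs: a path and a triangle, labeled by land cover type.
g1 = ([[1], [0, 2], [1]], ["building", "grass", "tree"])
g2 = ([[1, 2], [0, 2], [0, 1]], ["building", "grass", "tree"])
print(wl_subtree_kernel(g1, g2))
```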

3.2.3. Graph Features Derived via Feature Engineering

Extracting graph features via feature engineering, referred to as handcrafted graph features, has also been studied in the past [24,45,46]. For example, Walde [24] investigated a number of graph features based on graph centrality, the AUM, graph connectivity, and geometry to classify urban structure types. These features are also used in our study for land use classification (Table 2). Based upon the handcrafted graph features, we use a random forest to classify land use units into the different land use classes, setting its main parameter, i.e., the number of trees, to 500.
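A small sketch of this route is given below; the three features computed are a subset of Table 2 chosen for brevity, and the toy graph and training matrix are placeholders.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def handcrafted_features(G):
    """A few Table-2-style features for one land use unit graph.
    Nodes carry an 'lc' (land cover type) attribute."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    beta_index = m / n                          # connectivity (beta index) [47]
    buildings = [v for v, d in G.nodes(data=True) if d["lc"] == "building"]
    mean_bld_degree = float(np.mean([G.degree(v) for v in buildings])) if buildings else 0.0
    # Share of buildings with at least one edge to a tree node (AUM-related).
    near_tree = sum(any(G.nodes[u]["lc"] == "tree" for u in G[v]) for v in buildings)
    pct_tree_adjacent = near_tree / len(buildings) if buildings else 0.0
    return [beta_index, mean_bld_degree, pct_tree_adjacent]

# Toy unit graph: a building adjacent to a tree, the tree adjacent to grass.
G = nx.Graph()
G.add_nodes_from([(0, {"lc": "building"}), (1, {"lc": "tree"}), (2, {"lc": "grass"})])
G.add_edges_from([(0, 1), (1, 2)])

X = np.array([handcrafted_features(G)] * 10)   # placeholder feature matrix
y = np.arange(10) % 2                          # placeholder land use labels
rf = RandomForestClassifier(n_estimators=500).fit(X, y)
```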

3.3. Performance Evaluation and Accuracy Assessment

We extracted high-level structural features by GCNs and applied the Bayesian classifier of Section 3.2.1 to land use classification on both the GF2 and ZY3 images. For convenience, we label this classification BN(SC, SA_GCN) (No. 1 in Table 3). To evaluate the performance of this method, we compared it with nine other methods using spatial arrangement and composition features, grouped into three categories (Table 3). The choice of these methods was motivated as follows.
  • SVM(SC): SVM classification using only spatial composition features. It is a traditional method and serves as a baseline against classifications using spatial arrangement features.
  • SVM(SA_LS): SVM classification using only landscape metrics: fractal dimension, landscape shape index, and Shannon's diversity index. These metrics were investigated in existing studies [19] and have demonstrated their effectiveness in characterizing spatial structures.
  • SVM(SC, SA_LS): SVM classification using both spatial composition features and landscape metrics [9]. The two methods above characterize the spatial properties of land use units from one aspect only, i.e., either spatial composition or spatial structure. We expect that integrating the two aspects can improve classification performance.
  • GCN(SA_GCN): GCN classification that automatically derives high-level structural features using a GCN. This method has recently been applied to land use classification from VHR remote sensing images [11], and it is of great interest to apply it to HR images as well.
  • SVM(SA_Ker): SVM classification using the Weisfeiler–Lehman subtree kernel, an automatic method for graph feature learning. Shervashidze [43] showed that the subtree kernel achieved the best overall performance among many alternatives; SVM(SA_Ker) can therefore be seen as a state-of-the-art method using graph kernels.
  • RF(SA_HD): random forest classification using handcrafted graph features [24]. Manually deriving features is common practice in graph-structured data analysis, and the adopted method achieved successful results in classifying urban structure types [24]. We therefore consider it a suitable benchmark for land use classification.
  • BN(SC, SA_Ker) and BN(SC, SA_HD): Bayesian classifications combining spatial composition features with the structural features derived from graph kernels and feature engineering, i.e., from SVM(SA_Ker) and RF(SA_HD).
  • BN(SC, SA_GCN)*: a variant of BN(SC, SA_GCN) that takes the type of building roofs into account when computing the spatial arrangement and composition features. The classified buildings were further divided into dark roof, gray roof, brick-color roof, blue roof, and bright roof based on spectral features, using an SVM classifier [9].
To evaluate the accuracy of the land cover maps derived from the high resolution remote sensing images, we compute a confusion matrix [49] based on sample points collected by visual interpretation using a stratified sampling strategy. The confusion matrix is also used to assess the accuracy of the land use classifications.
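Given reference and map labels at the sample points, the accuracy measures reported in Section 4 follow directly; a brief sketch (with made-up labels) is:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

# Hypothetical reference and mapped class labels at stratified sample points.
y_ref = np.array([0, 0, 1, 1, 2, 2, 2, 0, 1, 2])
y_map = np.array([0, 0, 1, 2, 2, 2, 1, 0, 1, 2])

cm = confusion_matrix(y_ref, y_map)    # rows: reference, columns: map
oa = np.trace(cm) / cm.sum()           # overall accuracy (OA)
pa = np.diag(cm) / cm.sum(axis=1)      # producer's accuracy per class
ua = np.diag(cm) / cm.sum(axis=0)      # user's accuracy per class
kappa = cohen_kappa_score(y_ref, y_map)
print(f"OA = {oa:.2f}, kappa = {kappa:.4f}")
```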

4. Results

4.1. Land Cover Classification

Figure 3 shows the classified land cover maps from the GF2 and ZY3 images. We used the training dataset (i.e., ground truth land cover areas) collected from two subset areas (red boxes in Figure 3) to train the UNet model. By visual inspection, the two classified maps show similar distributions of land cover types, although the map of the GF2 image has larger shadow areas than that of the ZY3 image, due to different azimuth angles. The two land cover maps were assessed based upon confusion matrices, using testing samples (in pixels) generated by stratified random sampling. More specifically, 60 testing samples were randomly generated for each land cover type and interpreted by visual inspection; the areas of the testing samples are shown in the black boxes of Figure 3. Table 4 and Table 5 give the confusion matrices of the two classified land cover maps on the GF2 and ZY3 datasets. In general, both datasets were classified with relatively high overall accuracy (OA) and kappa coefficient κ: the OA was 0.91 and 0.90 for the GF2 and ZY3 datasets, and κ was 0.8972 and 0.8806, respectively. The land cover map of the GF2 image thus has a slightly higher accuracy than that of the ZY3 image. In particular, the increase in spatial resolution helps in distinguishing grass from trees: the user accuracy (UA) of grass obtained from the GF2 image is higher than that of the ZY3 image, 0.93 vs. 0.63. However, we also observed that the UA of the water class on the ZY3 dataset is higher than on the GF2 dataset, where more shadow pixels were misclassified as water. This is because water and shadow have a similar spectral response on HR images.

4.2. Land Use Classification and Accuracy Assessment

For land use classification, we selected 80 land use units as samples for each land use class, based upon the ground truth land use map and visual inspection of the HR images (Figure 4). The samples were randomly partitioned into training and testing datasets of equal size, i.e., 280 sample land use units for each dataset. We then performed the 10 different land use classifications using the GF2 and ZY3 images.
We first conducted a 10-fold cross validation on the training dataset for each classification method to evaluate the performance of the different land use classifications. Figure 5 plots the distribution of the overall accuracy across the 10-fold cross validation. The figure shows that the classifications using structural features derived from graph kernels and GCNs, i.e., SVM(SA_Ker) and GCN(SA_GCN), had higher accuracies than the one using handcrafted graph features, i.e., RF(SA_HD), on both study datasets. The lowest classification accuracy was produced by SVM(SA_LS), which uses the three landscape metrics alone. In general, the accuracy of the land use classification using the GF2 image was higher than the corresponding classification using the ZY3 image, due to the higher spatial resolution. Moreover, the integration of structural features with spatial composition features further improved the classification accuracies. The highest classification accuracy was achieved by BN(SC, SA_GCN)*, which takes building roof information into account, with the classified buildings further divided into dark roof, gray roof, brick-color roof, blue roof, and bright roof. We also evaluated the 10 land use classifications using the testing dataset. Table 6 lists the OA and κ coefficient of these classifications. The BN(SC, SA_GCN)* classification achieved an OA of 0.8750 and 0.8500, and a κ of 0.8542 and 0.8250, for the GF2 and ZY3 images, respectively.
Table 7 and Table 8 show the confusion matrices of the land use classifications using the BN(SC, SA_GCN)* method on the GF2 and ZY3 images. The user accuracies (UAs) of all land use classes, except the class others, are larger than 0.80 for both images. More specifically, for the GF2 dataset, the highest UA was obtained by high-density residential with a UA of 1.00, followed by green space and entertainment land and low-density residential, owing to their clear spatial patterns, which can be seen from their adjacency unit matrices (AUMs) (Figure 6). The lowest UA was obtained for the class others. This is reasonable, because this class is usually composed of multiple functions and used for multiple purposes. For the ZY3 dataset, the highest UA was obtained by industrial and warehouses with a UA of 1.00, followed by green space and entertainment land, while high-density residential was classified with a UA of 0.83. This may be explained by the facts that (1) the ZY3 image has a spatial resolution of 2.5 m, at which it is hard to identify individual land cover objects, particularly buildings in highly populated areas (Figure 6); and (2) on a ZY3 image, buildings used for industry and warehouses can be more easily identified than buildings used for other land use types, because of their more homogeneous spectral reflectance and more regular shapes.
In the study area, two large rivers (the Min River and the Wulong River) cross the city, resulting in some land use units filled with water. These land use units can easily be identified based upon the coverage ratio of water. We created an additional land use class, water body, for those land use units with a large portion of water, setting the threshold for the coverage ratio of water to 0.85. Figure 7 shows the land use maps derived using the BN(SC, SA_GCN)* method from the GF2 and ZY3 images.
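This post-classification rule is a one-liner; the sketch below shows it explicitly (function and variable names are illustrative):

```python
def final_land_use(water_ratio, classifier_label, threshold=0.85):
    """Label a unit as water body when mostly covered by water; otherwise
    keep the label from the graph-based classifier (rule of Section 4.2)."""
    return "water body" if water_ratio > threshold else classifier_label

print(final_land_use(0.92, "GE"))   # -> water body
print(final_land_use(0.10, "GE"))   # -> GE
```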

5. Discussion

This study focused on the comparison of land use classifications between different graph-based methods and between different high resolution remote sensing images. We investigated three kinds of graph-based methods, i.e., the handcrafted method (feature engineering), graph kernels, and graph convolutional networks (GCNs), to extract structural features for the classification. GCNs and graph kernels have recently been used as state-of-the-art methods for dealing with graph-structured data. We compared ten different land use classifications based upon the extracted structural features, and experimented on two remote sensing images, i.e., a very high resolution (VHR) image from the GF2 satellite with a spatial resolution of 0.8 m and a high resolution (HR) image from the ZY3 satellite with a spatial resolution of 2.5 m. Our results reveal that the structural features derived from GCNs and graph kernels provide better classification performance than handcrafted features, and that they have a high potential for different applications. In general, the classification accuracies obtained on the GF2 image are higher than those on the ZY3 image, due to its higher spatial resolution. Nonetheless, the ZY3 satellite has a larger swath and can be used for collecting images with broader coverage, facilitating land use mapping over large areas. Previous research has highlighted the importance of the spatial arrangement of land cover objects for land use mapping from VHR images [9], and has shown that graph-based methods like GCNs [11] can effectively model this spatial arrangement information. However, to the best of our knowledge, little research has compared land use classifications between popular graph-based methods using VHR images, and even less using HR images. This study contributes to filling that gap.
We model the pairwise relations between neighboring land cover objects as a graph. Thus, it is essential to identify the key types of land cover objects beforehand. This study used a deep learning method based upon the UNet model to classify land cover from the GF2 and ZY3 images. The classified land cover maps from the two images show similar attribute accuracies in terms of the confusion matrix when using a point-by-point evaluation. Regarding the geometrical accuracy, the classified land cover map of the GF2 image delineates the boundaries of individual land cover objects better than that of the ZY3 image, due to the higher spatial resolution (Figure 6). For example, on a ZY3 image of 2.5 m resolution, small buildings and buildings in densely populated areas are difficult to delineate. This difficulty may affect the modelling of urban structures based on the pairwise relations of land cover objects. It could also explain why the highest user accuracy of the classified land use map of the ZY3 image was obtained for the industrial and warehouses class, where the buildings usually have relatively homogeneous spectral reflectance and regular shapes. Moreover, a previous study showed that the classification accuracy of land use is positively correlated with the accuracy of the classified land cover [9].
A number of methods using structural features extracted from graph-structured data were applied to land use classification. Among them, we highlighted the use of graph kernels and GCNs for automatically learning high-level structural features. Recent studies investigated the effectiveness of GCNs for feature learning from graph-structured data, and provided a technique for data compression by the adjacency unit matrix (AUM) [11]. As follow-up research of [11], this study added the extraction of high-level structural features based on a state-of-the-art graph kernel, i.e., the Weisfeiler–Lehman subtree kernel [43]. Our results showed that both GCNs and graph kernels performed better than the handcrafted method in terms of classification accuracy, with GCNs achieving the best performance overall. On the other hand, graph kernels can be naturally integrated with a support vector machine, which is a merit from the perspective of applicability. In this study, the parameters associated with the feature learning and classification methods were set according to the literature; further work can analyze the effect of key parameters on classification performance. It would also be interesting to compare land use classifications using high-level structural features learnt from graph-structured data with those using deep image features learnt from grid-structured data [50,51,52], which we leave to future study. Moreover, from a statistical point of view, further improvement can be made by increasing the number of training and testing land use samples and using a more sophisticated sampling strategy for accuracy assessment and performance evaluation.
We tested the effectiveness of the proposed method for land use mapping on one study area in Fuzhou, China. One may wonder about the applicability of the method to other areas of the world. The method follows the hierarchical land use classification framework proposed in [9], which starts from the classification of land cover, proceeds to the characterization of the spatial arrangement and composition of land use, and ends at the classification of land use. It also highlights the importance of characterizing spatial arrangement effectively. This study used a data-driven method based on GCNs to automatically learn high-level structural features that characterize the spatial arrangement. Hence, we expect that this method can also be applied in other areas. Although the proposed land use classification achieved satisfactory results in this study with 40 land use samples per class, it would be helpful to conduct a sensitivity analysis of the effect of the number of samples on the learning of spatial arrangement features by the GCN and on the subsequent land use classification; this is left for future study.
The derivation of homogeneous land use units is essential for classifying land use with high accuracy, because errors in the delineation of land use boundaries affect classification accuracy [53]. We used official data on land use boundaries from the local surveying department to obtain land use units. By doing so, the produced land use maps maintain high geometric accuracy. On the other hand, this practice may constrain the use of the classification method in cases without existing land use boundary data. For such cases, it is important to investigate an automatic method to obtain land use units directly from remote sensing images, which is out of the scope of this study. Such an investigation remains a challenging topic that is insufficiently addressed in the literature.
Last but not least, in this study we distinguished six main land use classes, i.e., high-density residential, low-density residential, commercial, industrial and warehouses, green space and entertainment land, and undeveloped land, from high resolution remote sensing images, while grouping the rest into others. Within the class others, land use may involve different subclasses, such as land for special use or with multiple categories, showing a large variety of characteristics. Future efforts can be devoted to refining the definition of land use classes, or to finding more effective strategies to deal with mixed land use types.

6. Conclusions

This study investigated the use of graph convolutional networks (GCNs) to extract high-level structural features for land use mapping from high resolution satellite images. We compared GCNs with two other popular graph-based methods, feature engineering and graph kernels, and evaluated all methods on two remote sensing images with 0.8 m and 2.5 m spatial resolution, respectively. Our results showed that the structural features derived from GCNs and graph kernels are more effective than handcrafted features for both images, with GCNs achieving the best classification accuracy. Combining spatial composition features with the structural features further improved the accuracy. Moreover, the improvement by the structural features varies between land use classes for the two images, due to the difference in spatial resolution. By comparing ten different land use classification methods, we conclude that integrating the structural features derived from GCNs and graph kernels with spatial composition features serves as an effective means for land use mapping from high resolution remote sensing images, with high potential for different applications due to the automation of feature extraction.

Author Contributions

Conceptualization, M.L.; methodology, M.L.; software, M.L.; validation, M.L.; formal analysis, M.L. and A.S.; investigation, M.L.; resources, M.L.; data curation, M.L.; writing—original draft preparation, M.L.; writing—review and editing, M.L. and A.S.; visualization, M.L.; project administration, M.L.; funding acquisition, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC) with project (No. 42001283), “Fine-category urban land use classification from very high resolution remote sensing images integrated with element-structure-function hierarchical representations”.

Acknowledgments

We thank Kirsten M. de Beurs from Oklahoma University for providing helpful comments, the Fujian Nebula Big Data Application Service Co., Ltd. for providing GF-2 imagery, and Xiaoqin Wang from Fuzhou University for the support on the collection of ZY3 imagery.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Seto, K.C.; Reenberg, A. Rethinking Global Land Use in an Urban Era; MIT Press: Cambridge, MA, USA, 2014; Volume 14.
  2. Zhu, Z.; Zhou, Y.; Seto, K.C.; Stokes, E.C.; Deng, C.; Pickett, S.T.; Taubenböck, H. Understanding an urbanizing planet: Strategic directions for remote sensing. Remote Sens. Environ. 2019, 228, 164–182.
  3. Dong, L.; Ratti, C.; Zheng, S. Predicting neighborhoods’ socioeconomic attributes using restaurant data. Proc. Natl. Acad. Sci. USA 2019, 116, 15447–15452.
  4. Zhang, X.; Du, S.; Du, S.; Liu, B. How do land-use patterns influence residential environment quality? A multiscale geographic survey in Beijing. Remote Sens. Environ. 2020, 249, 112014.
  5. UN-Habitat. World Cities Report 2016: Urbanization and Development–Emerging Futures; UN-Habitat: Nairobi, Kenya, 2016.
  6. Ilieva, R.T.; McPhearson, T. Social-media data for urban sustainability. Nat. Sustain. 2018, 1, 553–565.
  7. Banzhaf, E.; Netzband, M. Monitoring urban land use changes with remote sensing techniques. In Applied Urban Ecology: A Global Framework; Wiley: Hoboken, NJ, USA, 2011; pp. 18–32.
  8. Novack, T.; Kux, H.; Feitosa, R.Q.; Costa, G.A. A knowledge-based, transferable approach for block-based urban land-use classification. Int. J. Remote Sens. 2014, 35, 4739–4757.
  9. Li, M.; Stein, A.; Bijker, W.; Zhan, Q. Urban land use extraction from Very High Resolution remote sensing imagery using a Bayesian network. ISPRS J. Photogramm. Remote Sens. 2016, 122, 192–205.
  10. Srivastava, S.; Vargas-Muñoz, J.E.; Tuia, D. Understanding urban landuse from the above and ground perspectives: A deep learning, multimodal solution. Remote Sens. Environ. 2019, 228, 129–143.
  11. Li, M.; Stein, A.; de Beurs, K. A Bayesian characterization of urban land use configurations from VHR remote sensing images. Int. J. Appl. Earth Obs. Geoinf. 2020, 92, 102175.
  12. Lu, D.; Weng, Q. A survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 2007, 28, 823–870.
  13. Ma, L.; Li, M.; Ma, X.; Cheng, L.; Du, P.; Liu, Y. A review of supervised object-based land-cover image classification. ISPRS J. Photogramm. Remote Sens. 2017, 130, 277–293.
  14. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817.
  15. Talukdar, S.; Singha, P.; Mahato, S.; Pal, S.; Liou, Y.A.; Rahman, A. Land-Use Land-Cover Classification by Machine Learning Classifiers for Satellite Observations—A Review. Remote Sens. 2020, 12, 1135.
  16. Gong, P.; Howarth, P.J. Frequency-based contextual classification and gray-level vector reduction for land-use identification. Photogramm. Eng. Remote Sens. 1992, 58, 423–437.
  17. Eyton, J.R. Urban land use classification and modelling using cover-type frequencies. Appl. Geogr. 1993, 13, 111–121.
  18. Barr, S.L.; Barnsley, M.J.; Steel, A. On the separability of urban land-use categories in fine spatial scale land-cover data using structural pattern recognition. Environ. Plan. B Plan. Des. 2004, 31, 397–418.
  19. Herold, M.; Liu, X.; Clarke, K.C. Spatial metrics and image texture for mapping urban land use. Photogramm. Eng. Remote Sens. 2003, 69, 991–1001.
  20. Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; pp. 270–279.
  21. Cheng, G.; Han, J.; Guo, L.; Liu, Z.; Bu, S.; Ren, J. Effective and efficient midlevel visual elements-oriented land-use classification using VHR remote sensing images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4238–4249.
  22. Văduva, C.; Gavăt, I.; Datcu, M. Latent Dirichlet allocation for spatial analysis of satellite images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2770–2786.
  23. Barnsley, M.J.; Barr, S.L. Distinguishing urban land-use categories in fine spatial resolution land-cover data using a graph-based, structural pattern recognition system. Comput. Environ. Urban Syst. 1997, 21, 209–225.
  24. Walde, I.; Hese, S.; Berger, C.; Schmullius, C. From land cover-graphs to urban structure types. Int. J. Geogr. Inf. Sci. 2014, 28, 584–609.
  25. Kwan, C.; Ayhan, B.; Budavari, B.; Lu, Y.; Perez, D.; Li, J.; Bernabe, S.; Plaza, A. Deep Learning for Land Cover Classification Using Only a Few Bands. Remote Sens. 2020, 12, 2000.
  26. Kwan, C.; Gribben, D.; Ayhan, B.; Bernabe, S.; Plaza, A.; Selva, M. Improving Land Cover Classification Using Extended Multi-Attribute Profiles (EMAP) Enhanced Color, Near Infrared, and LiDAR Data. Remote Sens. 2020, 12, 1392.
  27. Zhang, X.; Han, L.; Han, L.; Zhu, L. How well do deep learning-based methods for land cover classification and object detection perform on high resolution remote sensing imagery? Remote Sens. 2020, 12, 417.
  28. Kwan, C.; Gribben, D.; Ayhan, B.; Li, J.; Bernabe, S.; Plaza, A. An Accurate Vegetation and Non-Vegetation Differentiation Approach Based on Land Cover Classification. Remote Sens. 2020, 12, 3880.
  29. Ayhan, B.; Kwan, C.; Budavari, B.; Kwan, L.; Lu, Y.; Perez, D.; Li, J.; Skarlatos, D.; Vlachos, M. Vegetation Detection Using Deep Learning and Conventional Methods. Remote Sens. 2020, 12, 2502.
  30. Li, M.; Bijker, W.; Stein, A. Use of binary partition tree and energy minimization for object-based classification of urban land cover. ISPRS J. Photogramm. Remote Sens. 2015, 102, 48–61.
  31. Lehner, A.; Blaschke, T. A generic classification scheme for urban structure types. Remote Sens. 2019, 11, 173.
  32. Bronstein, M.M.; Bruna, J.; LeCun, Y.; Szlam, A.; Vandergheynst, P. Geometric deep learning: Going beyond Euclidean data. IEEE Signal Process. Mag. 2017, 34, 18–42.
  33. Kriege, N.M.; Johansson, F.D.; Morris, C. A survey on graph kernels. Appl. Netw. Sci. 2020, 5, 1–42.
  34. Johnson, J.; Gupta, A.; Li, F.-F. Image generation from scene graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1219–1228.
  35. Zhou, J.; Cui, G.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph neural networks: A review of methods and applications. arXiv 2018, arXiv:1812.08434.
  36. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
  37. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6,011,875, 4 January 2000.
  38. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
  39. Kemker, R.; Salvaggio, C.; Kanan, C. High-resolution multispectral dataset for semantic segmentation. arXiv 2017, arXiv:1703.01918.
  40. Anders, K.H.; Sester, M.; Fritsch, D. Analysis of settlement structures by graph-based clustering. Semant. Modellier. Smati 1999, 99, 41–49.
  41. Maji, S.; Berg, A.; Malik, J. Efficient classification for additive kernel SVMs. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 66–77.
  42. Shawe-Taylor, J.; Cristianini, N. Kernel Methods for Pattern Analysis; Cambridge University Press: Cambridge, UK, 2004.
  43. Shervashidze, N.; Schweitzer, P.; Van Leeuwen, E.J.; Mehlhorn, K.; Borgwardt, K.M. Weisfeiler-Lehman graph kernels. J. Mach. Learn. Res. 2011, 12, 2539–2561.
  44. Weisfeiler, B.; Lehman, A.A. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Tech. Inform. 1968, 2, 12–16.
  45. Borgatti, S.P.; Halgin, D.S. Analyzing affiliation networks. Sage Handb. Soc. Netw. Anal. 2011, 1, 417–433.
  46. Comber, A.J.; Brunsdon, C.F.; Farmer, C.J. Community detection in spatial networks: Inferring land use from a planar graph of land cover objects. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 274–282.
  47. Rodrigue, J.; Comtois, C.; Slack, B. The Geography of Transport Systems; Taylor & Francis: Abingdon, UK, 2013.
  48. Bogaert, J.; Rousseau, R.; Van Hecke, P.; Impens, I. Alternative area-perimeter ratios for measurement of 2D shape compactness of habitats. Appl. Math. Comput. 2000, 111, 71–85.
  49. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57.
  50. Huang, B.; Zhao, B.; Song, Y. Urban land-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery. Remote Sens. Environ. 2018, 214, 73–86.
  51. Zhang, C.; Sargent, I.; Pan, X.; Li, H.; Gardiner, A.; Hare, J.; Atkinson, P.M. Joint Deep Learning for land cover and land use classification. Remote Sens. Environ. 2019, 221, 173–187.
  52. Zhou, W.; Ming, D.; Lv, X.; Zhou, K.; Bao, H.; Hong, Z. SO–CNN based urban functional zone fine division with VHR remote sensing image. Remote Sens. Environ. 2020, 236, 111458.
  53. Castilla, G.; Hay, G. Uncertainties in land use data. Hydrol. Earth Syst. Sci. 2007, 11, 1857–1868.
Figure 1. Overview of the study area on GF2 (a) and ZY3 (b) satellite images.
Figure 2. Workflow of land use classification using graph-based methods from high resolution remote sensing images. LC and LU refer to land cover and land use, respectively.
Figure 3. (Top) Land cover map of the Fuzhou study area derived from the GF2 image, and (Bottom) from the ZY3 image. Each red or black box covers a subset area of 12 km².
Figure 4. Overview of the collected land use samples (in land use units) on the GF2 image.
Figure 5. Distributions of OA across 10 different implementations of the different land use classifications on the GF2 and ZY3 images.
Figure 6. Demonstration of land use samples of the GF2 and ZY3 images.
Figure 7. (a) Land use map of the Fuzhou study area derived from GF2 image, and (b) from ZY3 image.
Table 1. Land use classification system.

| Abbreviation | Classes |
| --- | --- |
| RH | High-density residential |
| RL | Low-density residential |
| CM | Commercial |
| IW | Industrial and warehouses |
| GE | Green space and entertainment land |
| UN | Undeveloped land |
| OT | Others (the rest of the above-mentioned classes, including public management and services, transportation, water body, and land for special use or with multiple categories) |
Table 2. Handcrafted graph features [24]. LC refers to land cover.

| Category | Features |
| --- | --- |
| Centrality measures | Degree centrality, i.e., LC type of the node with the highest node degree [45]. |
| | Betweenness centrality, i.e., LC type of the node with the highest betweenness centrality [45]. |
| | Mean node degree of building nodes. |
| AUM-related measures | Normalized number of edges to trees, grass, and others. |
| | Percentage of buildings with at least 1 edge to trees, grass, and others. |
| | Diversity around buildings. |
| | Percentage of buildings enclosed by trees, grass, and others. |
| Connectivity measures | Beta index $\beta = m/n$, which measures the level of connectivity in a graph, where $m$ and $n$ refer to the number of edges and the number of nodes [47]. Two distance ranges of the beta index were computed: 0–60 m and 40–150 m [24]. |
| Additional measures | Shape index of buildings $\gamma = A/P^2$, where $A$ and $P$ refer to the area and perimeter of a building object [48]. |
| | Building coverage ratio. |
Table 3. Different methods for land use classification. SA and SC refer to spatial arrangement and spatial composition.

| Categories | No. | Abbreviation | Method Description |
| --- | --- | --- | --- |
| | 1 | BN(SC, SA_GCN) ⋄ | Bayesian classification using both structural features (derived via GCN) and spatial composition features. |
| SC | 2 | SVM(SC) | SVM classification using spatial composition features. |
| | 3 | SVM(SA_LS) | SVM classification using landscape metrics, i.e., fractal dimension, landscape shape index, and Shannon’s diversity index [19]. |
| | 4 | SVM(SC, SA_LS) | SVM classification using both spatial composition features and landscape metrics [9]. |
| SA | 5 | GCN(SA_GCN) | GCN classification with deep graph features [11]. |
| | 6 | SVM(SA_Ker) | SVM classification using graph features derived via the Weisfeiler–Lehman subtree kernel [43]. |
| | 7 | RF(SA_HD) | Random forest classification using handcrafted graph features [24]. |
| SC+SA | 8 | BN(SC, SA_Ker) | Bayesian classification using both structural features (derived via the Weisfeiler–Lehman subtree kernel) and spatial composition features. |
| | 9 | BN(SC, SA_HD) | Bayesian classification using both structural features (derived via feature engineering) and spatial composition features. |
| | 10 | BN(SC, SA_GCN)* | A variant of BN(SC, SA_GCN), i.e., Bayesian classification using both structural features (derived via GCN) and spatial composition features, where building roof information is considered. * indicates a variant version. |

⋄ indicates the proposed method in Section 3.2.1.
Table 4. Confusion matrix of the classified land cover map using the GF2 image. UA, PA, and OA refer to user accuracy, producer accuracy, and overall accuracy. Rows are map classes; columns are reference classes.

| Map Class | Trees | Grass | Water | Shadow | Baresoil | Others | Buildings | UA * |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Trees | 56 | 0 | 0 | 3 | 1 | 0 | 0 | 0.93 ± 0.06 |
| Grass | 2 | 56 | 0 | 0 | 0 | 2 | 0 | 0.93 ± 0.06 |
| Water | 0 | 0 | 48 | 8 | 0 | 3 | 1 | 0.80 ± 0.10 |
| Shadow | 0 | 0 | 0 | 60 | 0 | 0 | 0 | 1.00 ± 0.00 |
| Baresoil | 0 | 0 | 0 | 1 | 56 | 3 | 0 | 0.93 ± 0.06 |
| Others | 0 | 0 | 0 | 6 | 1 | 50 | 3 | 0.83 ± 0.10 |
| Buildings | 0 | 0 | 0 | 1 | 0 | 2 | 57 | 0.95 ± 0.06 |
| PA * | 0.97 ± 0.05 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.76 ± 0.08 | 0.97 ± 0.05 | 0.83 ± 0.09 | 0.93 ± 0.06 | |

OA * = 0.91 ± 0.03, κ = 0.8972. * 95% confidence interval.
Table 5. Confusion matrix of the classified land cover map using the ZY3 image. UA, PA, and OA refer to user accuracy, producer accuracy, and overall accuracy. Rows are map classes; columns are reference classes.

| Map Class | Trees | Grass | Water | Shadow | Baresoil | Others | Buildings | UA * |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Trees | 57 | 3 | 0 | 0 | 0 | 0 | 0 | 0.95 ± 0.06 |
| Grass | 18 | 38 | 0 | 1 | 2 | 1 | 0 | 0.63 ± 0.12 |
| Water | 1 | 0 | 57 | 0 | 1 | 1 | 0 | 0.95 ± 0.06 |
| Shadow | 1 | 0 | 0 | 59 | 0 | 0 | 0 | 0.98 ± 0.03 |
| Baresoil | 3 | 0 | 0 | 0 | 54 | 3 | 0 | 0.90 ± 0.08 |
| Others | 2 | 0 | 0 | 1 | 0 | 55 | 2 | 0.92 ± 0.07 |
| Buildings | 0 | 0 | 0 | 0 | 1 | 2 | 57 | 0.95 ± 0.06 |
| PA * | 0.70 ± 0.07 | 0.93 ± 0.08 | 1.00 ± 0.00 | 0.97 ± 0.04 | 0.93 ± 0.06 | 0.89 ± 0.07 | 0.97 ± 0.05 | |

OA * = 0.90 ± 0.03, κ = 0.8806. * 95% confidence interval.
Table 6. Evaluation of the 10 land use classifications of the GF2 and ZY3 images on the testing datasets. SA and SC refer to spatial arrangement and spatial composition.

| Categories | No. | Methods | OA (GF2) | κ (GF2) | OA (ZY3) | κ (ZY3) |
| --- | --- | --- | --- | --- | --- | --- |
| | 1 | BN(SC, SA_GCN) ⋄ | 0.8464 | 0.8208 | 0.8393 | 0.8125 |
| SC | 2 | SVM(SC) | 0.8000 | 0.7667 | 0.7571 | 0.7167 |
| | 3 | SVM(SA_LS) | 0.5036 | 0.4208 | 0.3821 | 0.2791 |
| | 4 | SVM(SC, SA_LS) | 0.8000 | 0.7667 | 0.8000 | 0.7667 |
| SA | 5 | GCN(SA_GCN) | 0.7929 | 0.7583 | 0.7071 | 0.6583 |
| | 6 | SVM(SA_Ker) | 0.8000 | 0.7667 | 0.7107 | 0.6625 |
| | 7 | RF(SA_HD) | 0.6786 | 0.6250 | 0.6679 | 0.6125 |
| SA+SC | 8 | BN(SC, SA_Ker) | 0.8036 | 0.7708 | 0.7821 | 0.7458 |
| | 9 | BN(SC, SA_HD) | 0.8107 | 0.7791 | 0.7786 | 0.7417 |
| | 10 | BN(SC, SA_GCN)* | 0.8750 | 0.8542 | 0.8500 | 0.8250 |

⋄ indicates the proposed method.
Table 7. Confusion matrix of the classified land use map using the BN(SC, SA_GCN)* method based on the GF2 image. UA, PA, and OA refer to user accuracy, producer accuracy, and overall accuracy. RH, RL, CM, IW, GE, UN, and OT refer to the land use classes of high-density residential, low-density residential, commercial, industrial and warehouses, green space and entertainment land, undeveloped land, and others, respectively. Rows are map classes; columns are reference classes.

| Map Class | RH | RL | CM | IW | GE | UN | OT | UA * |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RH | 30 | 0 | 0 | 0 | 0 | 0 | 0 | 1.00 ± 0.00 |
| RL | 1 | 33 | 0 | 0 | 0 | 0 | 1 | 0.94 ± 0.08 |
| CM | 0 | 3 | 34 | 1 | 0 | 0 | 3 | 0.83 ± 0.12 |
| IW | 4 | 0 | 1 | 37 | 0 | 0 | 2 | 0.84 ± 0.11 |
| GE | 0 | 0 | 0 | 0 | 39 | 0 | 2 | 0.95 ± 0.07 |
| UN | 4 | 0 | 1 | 0 | 0 | 40 | 0 | 0.89 ± 0.09 |
| OT | 1 | 4 | 4 | 2 | 1 | 0 | 32 | 0.73 ± 0.13 |
| PA * | 0.75 ± 0.11 | 0.83 ± 0.10 | 0.85 ± 0.10 | 0.92 ± 0.08 | 0.98 ± 0.05 | 1.00 ± 0.00 | 0.80 ± 0.11 | |

OA * = 0.88 ± 0.04, κ = 0.8542. * 95% confidence interval.
Table 8. Confusion matrix of the classified land use map using the BN(SC, SA_GCN)* method based on the ZY3 image. UA, PA, and OA refer to user accuracy, producer accuracy, and overall accuracy. RH, RL, CM, IW, GE, UN, and OT refer to the land use classes of high-density residential, low-density residential, commercial, industrial and warehouses, green space and entertainment land, undeveloped land, and others, respectively. Rows are map classes; columns are reference classes.

| Map Class | RH | RL | CM | IW | GE | UN | OT | UA * |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RH | 35 | 1 | 1 | 3 | 0 | 1 | 1 | 0.83 ± 0.11 |
| RL | 0 | 36 | 1 | 0 | 0 | 0 | 5 | 0.86 ± 0.11 |
| CM | 2 | 0 | 30 | 0 | 1 | 0 | 4 | 0.81 ± 0.13 |
| IW | 0 | 0 | 0 | 35 | 0 | 0 | 0 | 1.00 ± 0.00 |
| GE | 0 | 0 | 0 | 0 | 37 | 1 | 1 | 0.95 ± 0.07 |
| UN | 3 | 0 | 1 | 1 | 2 | 37 | 1 | 0.82 ± 0.11 |
| OT | 0 | 3 | 7 | 1 | 0 | 1 | 28 | 0.70 ± 0.14 |
| PA * | 0.88 ± 0.10 | 0.90 ± 0.09 | 0.75 ± 0.11 | 0.88 ± 0.09 | 0.92 ± 0.08 | 0.92 ± 0.08 | 0.70 ± 0.12 | |

OA * = 0.85 ± 0.04, κ = 0.8250. * 95% confidence interval.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
