Article

A Morphological Feature-Oriented Algorithm for Extracting Impervious Surface Areas Obscured by Vegetation in Collaboration with OSM Road Networks in Urban Areas

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
2 Department of Health Administration and Policy, George Mason University, Fairfax, VA 22030, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(10), 2493; https://doi.org/10.3390/rs14102493
Submission received: 1 May 2022 / Revised: 19 May 2022 / Accepted: 19 May 2022 / Published: 23 May 2022

Abstract

Remote sensing is the primary means of extracting impervious surface areas (ISAs). However, obstruction by vegetation is a long-standing challenge that prevents the accurate extraction of urban ISAs, and no general, systematic method currently exists to solve the problem. In this paper, we present a morphological feature-oriented algorithm that uses OpenStreetMap (OSM) road network information to remove the obscuring effect of vegetation during ISA extraction. Very high resolution (VHR) images of Wuhan, China, were used in experiments to verify the effectiveness of the proposed algorithm. Experimental results show that the proposed algorithm improves the accuracy and completeness of the ISA extraction produced by our previous deep learning-based algorithm, achieving an overall accuracy (OA) of 86.64%. The results show that the proposed algorithm is feasible and can extract vegetation-obscured ISAs effectively and precisely.

Graphical Abstract

1. Introduction

Impervious surface refers to surface covered by natural or artificial impervious materials that prevent surface water from penetrating the soil. With global urbanization, vast impervious surface areas made of cement and asphalt have replaced natural surfaces dominated by flora and bare soil in recent decades, resulting in urban environmental problems. Early studies showed that the regional environment changes as the percentage of impervious surface increases [1]. In recent years, environmental issues related to urbanization (e.g., the heat island effect [2] and urban floods [3]) have received significant attention from researchers, and studies have shown that ISAs cause many environmental problems. Therefore, accurately estimating urban ISAs has become an essential topic in the urban remote sensing field. After more than 20 years of development, studies on impervious surface remote sensing have made significant progress, and many innovative remote sensing technologies and methods for retrieving impervious surface information have been proposed [4]. In recent years, machine learning has been successfully applied to high-resolution images to optimize the extraction of ISAs [5,6,7]. Shao et al. [5] fused optical and SAR data at the decision level to estimate urban ISAs based on a random forest (RF) classifier and evidence theory and analyzed the uncertainty levels of the resulting ISAs. However, neither conventional nor cutting-edge methods have achieved good results when ISAs are obscured by vegetation. Due to this obscuring effect, ISAs are often misclassified as vegetation in the classification results, so the extracted impervious surface area is less than the actual area. Akbari et al. [8] discovered the obscuring effect on ISA extraction when studying land cover.
Their results show that in many cities a large part of ISAs is not extracted, and in most of these cases trees cover streets, parking lots, grasslands, and sidewalks. In most non-residential areas, paved surfaces account for 50–70% of the area under the tree canopy; in residential areas, paved roads cover about 35% of the area under the canopy on average. Gray and Finster’s investigation [9] also demonstrates that the impervious cover rate varies dramatically in congested urban areas. This is because canopy coverage is not confined to pervious planting zones: street trees, for example, cover areas beyond the zones in which they are planted. A study by van der Linden et al. [10] quantified the occlusion effect of vegetation across the metropolitan area of Berlin, Germany, and found that more than 30% of the street pixels in the study area were classified as vegetation. These studies show that a substantial proportion of ISAs is obscured by vegetation in urban areas, and the resulting under-extraction reduces the completeness of ISA extraction results, an obstacle to the analysis of urban environmental issues.
At present, many studies have been published on the association between vegetation obstruction and ISA estimation, revealing a correlation between ISA extraction and season. Based on these results, many methods have been proposed to exploit seasonality to reduce vegetation obscuring effects. One study found an evolving inverse relationship between impervious and vegetation coverage [11]; consequently, seasonal change in plant quantity has a considerable impact on impervious surface estimation in a study area [12]. Gong et al. [13] used seasonal sensitivity as an influencing factor for impervious surface estimation and compared ISA extraction in different seasons for Pascagoula City along the Mississippi Gulf coast in the United States. Their results show that the summer model significantly reduces the average error (AE) of ISA estimation, indicating that the sensitivity of regional images to seasons can be detected by calculating AE. Season influences ISA extraction because when vegetation degrades with seasonal change, the vegetation that originally obscured ISAs disappears, and the exposed ISAs are easier to classify correctly than before [14]. This series of studies indicates that the influence of obstruction can be reduced or removed if the images used to estimate ISAs are acquired in vegetation degradation seasons. Nevertheless, this approach does not work when no images are available from such seasons, for example in countries or regions with subtropical or tropical climates where vegetation grows all year. That is why a generalized method is needed.
The systematic removal of the obscuring effect of vegetation on ISAs necessitates the use of external data [15]. Hung [15] argues that remote sensing mapping methods based on optical and near-infrared imagery face significant difficulties when the surface lies under a dense vegetation canopy, and proposed using LiDAR data to distinguish building and vegetation canopies and extract buildings covered by vegetation. However, collecting LiDAR data is costly and time-consuming over a large area. Zhang et al. [16] provide a simpler idea for addressing the obscuring effect. To solve the misclassification of ISAs and street trees, their study extracted a road map of Nanjing by combining roads from traffic maps with road widths obtained visually from VHR images. Street trees were extracted from the images by supervised classification and then overlaid with the road map to detect whether roads were misclassified. The study indicates a strong correlation between the distribution of street trees and road networks in urban images, which facilitates the extraction of vegetation-obscured ISAs. However, although Zhang et al. reveal this correlation, they do not extract vegetation-obscured ISAs. So far, there is still no universal method to extract vegetation-obscured ISAs.
This paper proposes an algorithm to extract vegetation-obscured ISAs in urban areas, overcoming the limitations of our previous deep learning-based methods. The proposed algorithm complements the deep learning results by recovering impervious surfaces covered by vegetation. It rests on the assumption that road network information and street trees are correlated. The proposed algorithm consists of three parts: vegetation detection, processing with mathematical morphology operations, and feature mining. Experiments showed that the proposed algorithm can extract vegetation-obscured ISAs effectively and precisely.

2. Study Area and Data Used

The vegetation obscuring effect is a common problem globally, as shown in Figure 1. In cities with different climates, vegetation-obscured ISAs, especially roads, are common in urban areas. In this study, we chose Wuhan, China, as the study area.

2.1. The Study Area

Wuhan, one of China’s megacities, is located between 113°41′–115°05′ E and 29°58′–31°22′ N and has a humid sub-tropical monsoon climate. Wuhan lies in the middle reaches of the Yangtze River and has abundant lakes and vegetation. The Yangtze River and its tributary, the Han River, divide Wuhan into three towns: Wuchang, Hankou, and Hanyang. Wuhan has a large-scale road network and a population of 12 million. In the urban area of Wuhan, street trees planted along both sides of the road are one of the critical green areas, which leads to significant vegetation obscuring effects.

2.2. Data Used

2.2.1. GF-2 Remotely Sensed Imagery

GF-2 is a civil land observation satellite with sub-meter spatial resolution developed by China, carrying two high-resolution multispectral cameras that achieve high positioning accuracy. GF-2 was successfully launched on 19 August 2014. Table 1 shows the parameters of the images. Note that a mosaicked data set is used.

2.2.2. OpenStreetMap Road Network

OpenStreetMap (OSM) is an open-source geographic data platform using the WGS84 coordinate system (https://www.openstreetmap.org, accessed on 14 September 2021). In this study, the road network of Wuhan comes from OSM. However, the quality of OSM data is not always high. Jasmeet [17] concluded that OSM quality parameters such as positional accuracy, completeness, attribute accuracy, and topological consistency are imperfect. From this perspective, two issues are obvious:
  • The road network is complete at large scales; however, as the scale decreases, the completeness of the road network decreases, and some roads disappear entirely at small scales.
  • Even where roads do not disappear, the road information is incomplete at small scales.
To address these issues, we propose feature mining to recover complete road information for use in extracting impervious areas covered by vegetation.

3. Methodology

The proposed algorithm aims to address the challenges faced by deep learning in extracting urban vegetation-obscured ISAs, as shown in Figure 2. The main idea is as follows: based on the ISAs extracted by our previous deep learning-based method, the proposed algorithm integrates the OSM road network to extract vegetation-obscured ISAs from VHR images and then fuses all ISAs. The extraction of vegetation-obscured ISAs consists of three steps. First, we detect vegetation, such as street trees, under which roads are likely to exist. Second, we thin the detected regions into skeletons while preserving their morphological features. Third, we combine features from the OSM road network with the skeletons and retain the skeleton lines that carry road features; buffers generated from the retained lines form the vegetation-obscured ISAs. The methods used in the three steps are introduced below.

3.1. Vegetation Detection Based on Image Segmentation and Color VI

Detecting vegetation in VHR images is the basis for identifying vegetation-obscured ISAs. Vegetation indices (VIs) are often used to detect vegetation in images. However, calculating a VI per pixel is inefficient, especially for VHR images. Image segmentation improves detection efficiency because it groups pixels into regions. Therefore, our vegetation detection involves two steps: image segmentation and computation of a VI.
Since a VHR image contains more detail than a low-resolution image, object-based segmentation has proved better [18]. For image segmentation, we use the tools provided by ENVI, an image processing software package, where object-based segmentation consists of two parts: segmentation and merging. In the segmentation part [19], Sobel edge detection is first used to calculate the gradient image. Then, the watershed algorithm [20] is applied to the gradient image; the algorithm floods from the uniform part of each object to its edge to identify boundaries with obvious features. Merging is used to solve the problems of feature misclassification and over-segmentation [21]; it combines spectral and spatial information and iteratively merges adjacent segments. Object-based image segmentation can segment objects at different scales. However, the choice of segmentation parameters is very important: poorly chosen parameters yield poor segmentation results, which in turn degrade ISA extraction. The exact parameter values should be chosen based on the growth and distribution of vegetation in the study area. In general, the segmentation parameter should be set to a large value and the merge parameter to a relatively small value to portray the objects in as much detail as possible.
After segmenting the objects, we compute a VI for each object. Many VIs have been defined and used to extract vegetation from images [22], most of which rely on the near-infrared (NIR) band. The most commonly used VI is NDVI, which is essentially a mathematical operation at the pixel level. In this study, due to the lack of an NIR band in the images, Color VI is used in place of NDVI to detect vegetation. Color VI is usually used for plant detection; it was originally proposed to use RGB bands to identify living plants among bare land, wheat straw, and other residues [23]. Owing to its ability to highlight a specific color, such as the green of plants, we use Color VI for vegetation detection in the remote sensing images. For each extracted object, we calculate the mean values of the RGB bands of its pixels and compute the Color VI for that object from these means. Color VI for each object is computed as [24]:
VI = (2g − r − b) − (1.4r − g)
where r, g, and b are chromatic coordinates computed from the mean band values (R̄, Ḡ, B̄) of the object:
r = R̄ / (R̄ + Ḡ + B̄)
g = Ḡ / (R̄ + Ḡ + B̄)
b = B̄ / (R̄ + Ḡ + B̄)
The Color VI values of objects are used to detect vegetation: if the Color VI of an object is greater than a threshold, the object is classified as a vegetation region. Although the probability that woodland or grassland obscures ISAs is low, the vegetation detected by this simple thresholding may still contain some woodlands and grasslands. Thus, the detected vegetation regions need post-processing to remove as many non-street trees as possible while retaining street trees.
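A minimal sketch of the per-object Color VI computation and thresholding; function names are illustrative, and the threshold of −0.12 is the value used for the GF-2 scene in Section 4.1:

```python
def color_vi(r_mean, g_mean, b_mean):
    """Color VI of an object from the mean values of its RGB bands."""
    total = r_mean + g_mean + b_mean
    r, g, b = r_mean / total, g_mean / total, b_mean / total
    # Excess-green minus excess-red form: (2g - r - b) - (1.4r - g)
    return (2 * g - r - b) - (1.4 * r - g)

def is_vegetation(mean_rgb, threshold=-0.12):
    """Simple thresholding: objects above the threshold count as vegetation."""
    return color_vi(*mean_rgb) > threshold
```

For a green object such as mean RGB (50, 120, 40), the index is strongly positive, while a gray building facade such as (100, 100, 100) falls below the −0.12 threshold.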
Post-processing includes area thresholding and rectangular fitting. Area thresholding removes objects with large areas, such as woodlands: ISAs beneath such large vegetated areas have little influence on the overall result, while retaining large-area objects would disturb the morphological analysis of street trees and ultimately bias the extracted ISAs. Rectangular fitting describes how well an image object fits a rectangle of similar size and proportions; with rectangular fitting, objects such as football fields can be removed.
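A sketch of the post-processing filter, assuming the rectangular fit degree is measured as the ratio of an object's pixel area to that of its bounding rectangle (the paper does not give an exact definition); the thresholds follow the values reported in Section 4.1:

```python
def rectangular_fit(object_area, bbox_area):
    """Ratio of the object's pixel area to that of its bounding rectangle;
    values close to 1 indicate near-rectangular objects."""
    return object_area / bbox_area

def keep_object(object_area, bbox_area, max_area=80000, max_fit=0.9):
    # Remove very large regions (woodland, farmland) and near-rectangular
    # objects (e.g., football fields); thresholds per Section 4.1.
    if object_area > max_area:
        return False
    return rectangular_fit(object_area, bbox_area) <= max_fit
```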

3.2. Skeleton Extraction

After obtaining the vegetation regions by segmentation, we thin them into skeletons. The skeleton is extracted by mathematical morphology operations, which are often used for road extraction from remote sensing images [25,26]. Since vegetation-obscured ISAs correlate strongly with road information in morphology, we use road morphology to design a mathematical morphology-based framework to extract skeleton lines of vegetation regions (see Figure 3). We used the bwmorph function provided by MATLAB to achieve this.

3.2.1. Data Preprocessing

Before thinning the detected regions, we design a data preprocessing step (see the top part of Figure 4) to avoid unnecessary deviation and fracture of the skeleton lines. In data preprocessing, different morphological operations are used to smooth the boundaries of vegetation regions, fill holes in the regions, and connect cracks between regions. First, area thresholding removes isolated vegetation regions with small areas. The threshold is determined by the type and growth status of local vegetation in the study area; in this study, it was set to the average size of a local individual tree canopy, and regions with smaller areas are removed. Next, a spur removal operation removes small branches in the regions or lines. Then, regional dilation and erosion are applied. Regional dilation expands a region's boundary outwards, filling voids and connecting fractures; erosion, on the contrary, shrinks the boundary inward, preventing regions from sticking together due to dilation and ensuring that the extracted skeleton line does not deviate significantly. Properly selecting the dilation and erosion parameters is important: the dilation parameters should be as small as possible while still filling gaps and connecting fractures, and the erosion parameters should restore the dilated regions to roughly their previous size to prevent positional deviation, so the difference between the two should not be too large. Between the regional dilation and erosion, we add a majority judgment to fill the small holes inside a region: in the 8-neighborhood of a pixel, if five or more neighboring pixels equal 1, the pixel's value is set to 1.
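The majority judgment can be sketched as follows, following the 8-neighborhood rule stated above; treating pixels outside the image as 0 is an assumption:

```python
def majority_fill(binary):
    """Set a pixel to 1 when at least 5 of its 8 neighbors are 1.

    `binary` is a list of rows of 0/1 values; pixels outside the image
    count as 0. A pure-Python sketch of the majority judgment step."""
    h, w = len(binary), len(binary[0])

    def px(i, j):
        return binary[i][j] if 0 <= i < h and 0 <= j < w else 0

    out = [row[:] for row in binary]
    for i in range(h):
        for j in range(w):
            neighbors = sum(px(i + di, j + dj)
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)
                            if (di, dj) != (0, 0))
            if neighbors >= 5:
                out[i][j] = 1  # fill hole pixel; existing 1s are never cleared
    return out
```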

3.2.2. Skeleton Extraction

After preprocessing, we use a thinning algorithm to extract the skeleton of each vegetation region. Thinning eliminates the influence of irregular contours, which makes it suitable for the irregular boundaries of vegetation regions, and the morphological features of the regions are retained during thinning. The thinning algorithm iteratively removes successive layers of boundary pixels until the width of the skeleton is one pixel. The deletion of a pixel is determined by its neighborhood. As shown in Figure 4, let P be the pixel to be deleted or preserved, let X1, X2, …, X8 constitute its neighborhood, and note that in a binary image the value of a pixel is 0 or 1. The thinning algorithm consists of two sub-iterations, which alternately detect the boundary pixels located in the north and east, then the south and west. In the first sub-iteration, P is deleted if it meets the conditions checked by procedures C1, C2, and C3 (see the explanation below), in which case the second sub-iteration is not performed on P. If P is preserved in the first sub-iteration, it is tested in the second sub-iteration, where P is deleted if the conditions checked by procedures C1, C2, and C3′ are met. The conditions are shown below [27], where ∧ and ∨ denote logical AND and OR, respectively:
Procedure C1: Procedure C1 calculates the crossing number X_H(p) defined by Hilditch (1969) for P. If X_H(p) equals 1, P lies on the boundary of the region:
X_H(p) = Σ_{i=1}^{4} b_i
where
b_i = 1, if x_{2i−1} = 0 and (x_{2i} = 1 or x_{2i+1} = 1); b_i = 0, otherwise
with x_9 = x_1.
After that, we will go to procedure C2 or stop here.
Procedure C2 is used for an endpoint check. In procedure C2, we first compute n_1(p) and n_2(p) using Equations (6) and (7); each breaks the neighborhood of P into four pairs of adjoining pixels and counts the pairs containing at least one pixel equal to 1. The condition is:
2 ≤ min(n_1(p), n_2(p)) ≤ 3
where
n_1(p) = Σ_{k=1}^{4} (x_{2k−1} ∨ x_{2k})
n_2(p) = Σ_{k=1}^{4} (x_{2k} ∨ x_{2k+1})
If the above condition is met, P is not an endpoint.
Procedure C3 is used to check the connectivity of the east and north of P using the following condition:
(x_2 ∨ x_3 ∨ x̄_8) ∧ x_1 = 0
where x̄_8 = 1 − x_8. If the condition is met, then P is located on the east or north of the region, and the connectivity of P's east and north is 0.
Procedure C3′ is similar to C3. If the following condition is met, then P is located on the west or south of the region, and the connectivity of P's west and south is 0:
(x_6 ∨ x_7 ∨ x̄_4) ∧ x_5 = 0
where x̄_4 = 1 − x_4.
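The per-pixel deletion test combining procedures C1, C2, and C3/C3′ can be sketched as below. The clockwise neighborhood numbering starting from x1 is an assumption, since the exact layout is given only in Figure 4:

```python
def deletable(x, sub_iteration):
    """Decide whether boundary pixel P may be deleted in one thinning
    sub-iteration, per procedures C1, C2, and C3/C3'.

    x = [x1, ..., x8] is P's 8-neighborhood (values 0/1); cyclically,
    x9 == x1. The numbering convention is an assumption."""
    def v(i):                       # 1-based, cyclic neighbor access
        return x[(i - 1) % 8]

    # C1: Hilditch crossing number X_H(p) must equal 1 (P on the boundary).
    xh = sum(1 for i in range(1, 5)
             if v(2 * i - 1) == 0 and (v(2 * i) == 1 or v(2 * i + 1) == 1))
    if xh != 1:
        return False

    # C2: endpoint check, 2 <= min(n1(p), n2(p)) <= 3.
    n1 = sum(v(2 * k - 1) | v(2 * k) for k in range(1, 5))
    n2 = sum(v(2 * k) | v(2 * k + 1) for k in range(1, 5))
    if not 2 <= min(n1, n2) <= 3:
        return False

    if sub_iteration == 1:          # C3: east/north connectivity test
        return ((v(2) | v(3) | (1 - v(8))) & v(1)) == 0
    else:                           # C3': west/south connectivity test
        return ((v(6) | v(7) | (1 - v(4))) & v(5)) == 0
```

An interior pixel surrounded entirely by 1s has a crossing number of 0 and is never deleted, preserving connectivity.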
In MATLAB, we generate a binary region and extract its skeleton using the thinning algorithm from bwmorph. As Figure 5 shows, the thinning algorithm ensures that the deletion of P does not change the connectivity of the image while producing a result of single-pixel width.
However, when a vegetation region is too large, thinning generates many small branches. Though the ground truth of vegetation-obscured ISAs is unknown, the main direction of a vegetation region is most likely the main direction of the underlying ISA, since vegetation and ISAs are generally planted or built along the same direction. To remove these branches, the spur removal operation described in Section 3.2.1 is used again.

3.3. Feature Mining

In Section 2.2.2, we discussed the advantages and disadvantages of the OSM road network. To compensate for the incomplete information on some roads in OSM, ISAs need to be extracted by analyzing the road-like structure of the skeleton lines; feature mining is used to achieve this goal. Feature mining involves two ways to extract vegetation-obscured ISAs: one combines the features of skeleton lines with OSM, and the other relies on skeleton lines alone. The former is called feature consistency, and the latter feature recognition.

3.3.1. Feature Consistency

Feature consistency consists of position consistency and direction consistency. By comparing an OSM road network with the skeleton lines under the same coordinate system on both criteria, the skeleton lines consistent with the OSM roads are retained. The retained lines are assigned the grade of the corresponding OSM roads to build buffers for ISAs.
Position consistency removes the skeleton lines with a large deviation from the OSM road network. The main steps are as follows: we first calculate the minimum distance from a skeleton line (corresponding to a vegetation region) to the OSM roads, then determine whether this minimum distance is below a preset threshold; if so, we keep the skeleton line, and otherwise we remove it. To compute the minimum distance from a skeleton to the road network, we designed a method for calculating the minimum distance between two polylines, shown in Figure 6: for two polylines P and Q, we calculate the distance from each point on P to each point on Q, perform the same calculation from each point on Q to P, and take the minimum of all these distances as the distance between the two lines.
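The minimum-distance test above can be sketched as follows, with the vertex-to-vertex minimum following the point-by-point description; function names are illustrative:

```python
from math import hypot

def polyline_min_distance(p_pts, q_pts):
    """Minimum vertex-to-vertex distance between two polylines, following
    the symmetric point-by-point comparison described in the text."""
    return min(hypot(px - qx, py - qy)
               for px, py in p_pts for qx, qy in q_pts)

def position_consistent(skeleton_pts, osm_roads, threshold):
    # Keep the skeleton line if its minimum distance to any OSM road
    # polyline is below the preset threshold.
    return min(polyline_min_distance(skeleton_pts, road)
               for road in osm_roads) < threshold
```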
Direction consistency calculates the azimuth of the endpoints of a polyline as its main direction in the projected coordinate system and classifies the direction into two classes: horizontal and vertical. If the class of a skeleton line matches that of a road line (the road line closest to the skeleton line), the skeleton line is retained; otherwise, it is removed. The classification is computed as follows, where x is the azimuth (in degrees) of the endpoints of the polyline, 0 represents the horizontal direction, and 1 represents the vertical direction:
ori = 1, if (135 ≤ x < 225) or (315 ≤ x < 360) or (0 ≤ x < 45)
ori = 0, if (45 ≤ x < 135) or (225 ≤ x < 315)
Additionally, the range of classification is shown in Figure 7, and the class of Polyline A is vertical.
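A minimal sketch of the two-class direction classification, assuming azimuths in degrees in [0, 360):

```python
def direction_class(azimuth_deg):
    """Classify a polyline's endpoint azimuth (degrees) into
    1 = vertical (north-south) or 0 = horizontal (east-west)."""
    x = azimuth_deg % 360
    if (0 <= x < 45) or (135 <= x < 225) or (315 <= x < 360):
        return 1
    return 0
```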
As shown in Figure 8, only the skeleton lines that meet both position consistency and direction consistency will be retained. For example, skeleton line A does not satisfy direction consistency, and skeleton line D does not satisfy position consistency. Thus, both are removed.
Based on the retained skeleton lines, buffers are built as the vegetation-obscured ISAs. Since vegetation is widely distributed and varies in size, the width of an obscured ISA also varies; the OSM road grade provides a reference for setting buffer width. The OSM road grades and the corresponding buffer widths in our study area are shown in Table 2. The width settings refer to the relevant Chinese road regulations and should be adjusted according to local regulations in other countries and regions. Note that the buffer width of roads without a grade is 4.5 m; in fact, most of these are residential or living streets.

3.3.2. Feature Recognition

Feature recognition analyzes the features of the skeleton lines and retains those conforming to a road network structure. According to prior knowledge, roads have an obvious network structure, and since street trees are planted on both sides of roads, the network structure of street trees is consistent with that of roads. Figure 9 shows the network structure of street trees, drawn by combining a GF-2 image and a ground survey. From Figure 9, we can see that vegetation whose structure resembles the road network is likely to consist of street trees.
We use two features to determine whether a skeleton line is part of the network structure of street trees: the number of connections of a skeleton line node (endpoint) to other skeleton lines, and the angle at which the skeleton line intersects other skeleton lines at the node. As shown in Figure 10, the number of connections for both node A and node B is 3. The angle between two polylines is taken as the acute angle. Let the angle between Polyline P_1 and Polyline P_2 be α (see Figure 10), where the endpoints of P_1 are (x_1, y_1) and (x_2, y_2) and the endpoints of P_2 are (x_3, y_3) and (x_4, y_4).
Under the projected coordinate system, the angle α is calculated as follows:
α = arctan |l|
where l is defined as:
l = (K_1 − K_2) / (1 + K_1 × K_2)
and the slopes K_1 and K_2 are defined as:
K_1 = (y_1 − y_2) / (x_1 − x_2)
K_2 = (y_3 − y_4) / (x_3 − x_4)
In general, the number of connections at a node is required to be ≥3 to describe the network structure, and the range of the angles is set as close to 90° as possible (for example, [80°, 90°]). In fact, not all road networks are approximately checkerboard-shaped; hence, the range of angles depends on the study area.
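The slope-based angle computation can be sketched as follows; note that vertical segments (equal x coordinates) and exactly perpendicular slope pairs (1 + K1·K2 = 0) would need special-casing, which this sketch omits:

```python
from math import atan, degrees

def acute_angle(seg1, seg2):
    """Acute intersection angle (degrees) between two segments, each given
    as a pair of endpoints ((x1, y1), (x2, y2)), via the slope-difference
    formula alpha = arctan|(K1 - K2) / (1 + K1*K2)|."""
    (x1, y1), (x2, y2) = seg1
    (x3, y3), (x4, y4) = seg2
    k1 = (y1 - y2) / (x1 - x2)  # slope of segment 1
    k2 = (y3 - y4) / (x3 - x4)  # slope of segment 2
    return degrees(atan(abs((k1 - k2) / (1 + k1 * k2))))
```

For instance, a horizontal segment and a 45° diagonal intersect at 45°.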
Based on the skeleton lines conforming to the network structure of street trees, buffers are built as the vegetation-obscured ISAs too. The width of buffers is set according to the grade of roads obscured by street trees in the study area. Generally, they are residential or living streets.

4. Results

4.1. Vegetation Detection from the GF-2 Image

To clearly show the experimental results, an area was selected from the whole study area located in the urban area of Wuhan, as shown in Figure 11. Figure 12a shows an original GF-2 subimage from the study area, and Figure 12b shows the result of image segmentation. In the study, image segmentation was performed using the Segment Only Feature Extraction Workflow tool in ENVI, where the scale level of the edge algorithm for segmentation is set to 35% and the level of the full lambda schedule algorithm for merging is set to 96%. After segmentation, the vegetation objects are appropriately separated from other feature objects (buildings, water, etc.). However, inside a vegetation object, different vegetation types (street trees, woods, grass) may merge into one object. After segmentation, we calculated the Color VI of each object and selected an appropriate threshold to extract vegetation; the result is shown in Figure 12c. In the study, the Color VI threshold was set to −0.12. Analyzing the results, vegetation accounted for most of the retained objects, but some misclassifications were still included, such as buildings with high brightness. We applied post-processing to solve these problems, and the result is presented in Figure 12d. In the post-processing, the area threshold was set to 80,000 pixels (objects with more pixels than the threshold are removed), and the threshold for the rectangular fit degree was set to 0.9 (objects with a degree above 0.9 are removed). The results show that the objects without obscuring effects, such as large-area woodland, farmland, and buildings, have been removed.

4.2. Skeleton Extraction Using Mathematical Morphology

After the data were preprocessed, we extracted the skeleton for further processing. Before skeleton extraction, to reduce information loss during dimension reduction, we performed some morphological operations on the segmentation result. First, we removed the isolated regions from the binary image of the vegetation regions (as shown in Figure 13a). In this step, regions with small areas are removed; the area threshold was determined by the canopy size of the street trees. In our study area, based on a field survey, the canopy was approximately a square with an area of 9 m² (15 pixels). The result after area thresholding is shown in Figure 13b. Then, we used morphological operations to remove the spurs; the result is shown in Figure 13c. Next, regional dilation was used to expand the regions, connecting the cracks between them and smoothing their boundaries; the result is shown in Figure 13d. For the dilation operation, the kernel size was set to 3 × 3, and we performed dilation three times for each image. After the dilation operations, majority judgment (introduced in Section 3.2.1) was performed to further bridge the gaps without over-filling the holes; the result is shown in Figure 13e. Finally, regional erosion was performed to recover the size of the regions, using the same kernel size but performing erosion four times. The result is shown in Figure 13f.
Thinning was performed on the image after regional erosion to obtain the skeleton. However, the skeleton obtained by the above processing still included many spurs (as shown in Figure 14a). We performed spur removal again on the skeleton and obtained the final skeleton, as shown in Figure 14b.

4.3. Feature Mining

Feature mining involves two parts: feature consistency and feature recognition. Figure 15 shows the result of feature consistency. By matching the position and direction features of the skeleton lines with the OSM road network (the red part in Figure 15), we kept the matched skeleton lines as the centerlines of vegetation-obscured ISAs. In Figure 15a, the red polylines are OSM roads, and the black polylines are skeleton lines. In fact, some OSM roads do not exactly match the roads in the GF-2 images. Therefore, to compensate for this error in the study area, we selected some intersections of OSM roads and measured their real coordinates using the real-time kinematic (RTK) technique. The deviation was taken as the average distance between the intersection coordinates of the OSM roads and the measured real coordinates. This average deviation was then added to the distance threshold when performing the position consistency check. Of the 8687 skeleton polylines, 2521 were kept after feature consistency matching, as shown in Figure 15b.
Although the skeleton lines associated with the OSM road network were retained, the poor completeness of OSM caused some skeleton lines belonging to vegetation-obscured ISAs to be removed. Thus, we further used feature recognition to find more vegetation-obscured ISAs. Figure 16 shows the result of feature recognition, in which skeleton lines with street tree structures were judged to be vegetation-obscured ISAs using the criteria described in Section 3.3.2. In the study area, we required nodes with at least three connections and skeleton line intersection angles in the range [80°, 90°]. Of the 8687 skeleton lines in Figure 16a, 3794 polylines were kept in Figure 16b, of which 1092 also appear in Figure 15b.
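The node-degree and angle criteria can be sketched as follows. The representation is assumed (lines as endpoint pairs, nodes as shared endpoints); the thresholds (degree ≥ 3, angles in [80°, 90°]) are the paper's.

```python
import math
from collections import defaultdict

def direction(line, node):
    """Direction vector of a line pointing away from the given node."""
    p, q = line
    other = q if p == node else p
    return (other[0] - node[0], other[1] - node[1])

def angle_between(v1, v2):
    """Angle between two direction vectors in degrees, folded into [0, 90]."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    a = math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))
    return min(a, 180 - a)

def recognize(lines, min_degree=3, angle_range=(80, 90)):
    """Keep skeleton lines that meet at a node with >= min_degree connections
    at a near-perpendicular angle, i.e. the street-tree network signature.
    Lines are ((x1, y1), (x2, y2)) tuples; shared endpoints define nodes."""
    node_lines = defaultdict(list)
    for ln in lines:
        for pt in ln:
            node_lines[pt].append(ln)
    kept = set()
    for node, incident in node_lines.items():
        if len(incident) < min_degree:
            continue
        for i, li in enumerate(incident):
            vi = direction(li, node)
            for lj in incident[i + 1:]:
                if angle_range[0] <= angle_between(vi, direction(lj, node)) <= angle_range[1]:
                    kept.add(li)
                    kept.add(lj)
    return kept

# Toy example: three lines meet at the origin (a T-junction); an isolated
# line elsewhere has only degree-1 nodes and is rejected.
A = ((0, 0), (10, 0)); B = ((0, 0), (-10, 0))
C = ((0, 0), (0, 10)); D = ((50, 50), (60, 50))
kept = recognize([A, B, C, D])
```

Because the test is purely local to each node, it reproduces the limitation discussed in Section 5.2: a line can pass at one junction while the network as a whole stays disconnected.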
By setting the retained lines as centerlines, we generated buffers for the vegetation-obscured ISAs, with buffer widths set according to Table 2 in Section 4.1. After generating the buffers, we merged the results of feature consistency and feature recognition as the result of the proposed algorithm. As Figure 17 presents, we combined the ISAs obtained by deep learning [28] with those from the proposed algorithm: Figure 17a shows the ground surface from the GF-2 image, and Figure 17b shows the vegetation-obscured ISAs from the proposed algorithm merged with the results of our previous deep learning algorithm.
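The width lookup behind the buffer generation can be written as a simple mapping mirroring Table 2. The dictionary below is an illustration of that table; the fallback of 4.5 m for classes not listed is an assumption of this sketch, not something the paper states.

```python
# Buffer width (m) per OSM road class, mirroring Table 2.
BUFFER_WIDTH = {
    "motorway": 1.25, "trunk": 1.25, "secondary": 1.25, "tertiary": 1.25,
    "trunk_link": 1.25, "secondary_link": 1.25, "cycleway": 1.25,
    "unclassified": 4.5, "residential": 4.5, "living_street": 4.5,
    "service": 4.5, None: 4.5,          # None = roads without a grade
    "pedestrian": 1.0, "footway": 1.0, "steps": 1.0, "path": 1.0,
    "construction": 0.0,                # under construction: not studied
}

def buffer_width(fclass):
    """Width (m) of the buffer generated around a retained centerline.
    Unlisted classes fall back to 4.5 m (an assumption of this sketch)."""
    return BUFFER_WIDTH.get(fclass, 4.5)
```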
To show the effectiveness of the proposed algorithm, we evaluated its producer’s accuracy (PA), user’s accuracy (UA), and overall accuracy (OA) in extracting the vegetation-obscured ISAs. As shown in Figure 18, the ground truth represents the vegetation-obscured ISAs in the study area, verified by ground survey. The results show that the previous deep learning algorithm extracted almost none of the vegetation-obscured ISAs. The accuracies of the proposed algorithm are shown in Table 3.
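The three accuracy measures follow the standard binary confusion-matrix definitions; the counts in the example below are illustrative only, not the paper's data.

```python
def accuracy_metrics(tp, fp, fn, tn):
    """Producer's, user's, and overall accuracy from a binary confusion
    matrix (positives = vegetation-obscured ISA pixels)."""
    pa = tp / (tp + fn)                  # producer's accuracy (recall)
    ua = tp / (tp + fp)                  # user's accuracy (precision)
    oa = (tp + tn) / (tp + fp + fn + tn)
    return pa, ua, oa

# Illustrative counts only (not the study's data).
pa, ua, oa = accuracy_metrics(tp=80, fp=63, fn=20, tn=837)
```

Note that OA can remain high even when UA is modest, because true negatives (pixels correctly left unlabeled) dominate the denominator; this is consistent with the pattern in Table 3.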
The result demonstrates that our previous deep learning algorithm was unable to extract the vegetation-obscured ISAs from optical remote sensing images, because spectral features alone cannot describe the layered, multi-level distribution of ground features. The proposed algorithm is clearly effective in this respect.

5. Discussion

5.1. Deviation of Extracted Vegetation-Obscured ISAs

Since the features of different vegetation types (street trees, woods, grass) are very similar, they are easily classified into one object in the image segmentation stage. Such objects can cause the extracted vegetation-obscured ISAs to deviate from the real ones. As shown in Figure 12b, the grass, woodland, and street trees were merged into one object, and Figure 14 shows that these objects led to deviation in the thinning. To remove deviated skeleton lines after thinning, the position consistency step in feature mining removed lines with large deviations by matching them with the OSM road network, which improves the accuracy. However, some skeleton lines with a small deviation were not removed, resulting in a lower UA.

5.2. Network Structure of Extracted Vegetation-Obscured ISAs

In feature recognition, the network structure of street trees is the reference for the extraction of vegetation-obscured ISAs. Therefore, the extracted vegetation-obscured ISAs should exhibit an obvious network structure. In fact, in Figure 17, some of the ISAs extracted by the proposed algorithm lost their network structure and were not connected to other ISAs. The reason is that, in feature mining, the network structure of street trees is described by local features (the number of connections of a skeleton line node to other skeleton lines, and the angle at which the skeleton line intersects other skeleton lines at the node) rather than global features. To retain more network information in the extracted vegetation-obscured ISAs, more knowledge about network features, such as graph-theoretic measures, needs to be added to feature recognition.

5.3. Overestimation of Extracted Vegetation-Obscured ISAs

Although the proposed algorithm has proved its effectiveness in extracting ISAs, the relatively low user’s accuracy means that some extracted ISAs do not actually exist; as a result, the extracted ISAs were overestimated. Analysis shows that: (1) areas obscured by vegetation similar to street trees are not always roads; sometimes there is only soil or another pervious surface underneath, and even actual roads may be constructed from sand, mud, or other pervious materials, so ISAs can be overestimated; (2) ISAs were extracted from grass and woodland. As analyzed in Section 5.1, deviation occurs for some ISAs, and some of these were not removed because the deviation was small. This deviation still leads to the overestimation of ISAs, which could be examined with the spatiotemporal-spectral observation model [29] or with observations from UAVs and mobile mapping vehicles for better urban mapping [30].

6. Conclusions

The obscuring effect of vegetation often affects the extraction of ISAs from remote sensing images, reducing the accuracy and connectivity of the extracted ISAs. When the structure of the vegetation-obscured areas is similar to that of street trees, it is feasible to extract the ISAs under the street trees. In this paper, we propose a morphological feature-oriented algorithm that forms a processing system including vegetation detection, mathematical morphology processing, and feature mining. The algorithm extracts vegetation-obscured ISAs by analyzing the morphological features of the OSM road network and street trees. The experiments prove the effectiveness of the proposed algorithm in extracting vegetation-obscured ISAs, with an overall accuracy of 86.64%.
The analysis shows that the deviation and disconnectedness of the extracted ISAs reduce the precision and recall. Therefore, the proposed algorithm needs further improvement. The features used in this paper are local features; the scope of the feature calculation could be extended to improve the accuracy of the extraction of vegetation-obscured ISAs, for example by using graph theory to exploit the network structure of features in greater depth. Additionally, in this paper, we evaluated the proposed algorithm on vegetation-obscured ISAs with linear elements (i.e., roads). In future work, we will exploit prior knowledge, deep learning, and other techniques to extend the proposed algorithm to more spatial elements, such as the ISAs of squares covered by tree canopy.

Author Contributions

In this paper, Y.F. and T.M. conceived the methodology and designed the experiments; T.M. performed the experiments; T.M. and Y.F. wrote the paper and S.Z. and J.T. revised this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Key R&D Program of China (No. 2018YFB2100501) and the National Natural Science Foundation of China (No. 41890823).

Acknowledgments

The authors would like to thank Cheng Tao for the data of ISAs extracted by deep learning and Xin Yuedong for the technical support of RTK.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shao, Z.; Fu, H.; Li, D.; Altan, O.; Cheng, T. Remote sensing monitoring of multi-scale watersheds impermeability for urban hydrological evaluation. Remote Sens. Environ. 2019, 232, 111338. [Google Scholar] [CrossRef]
  2. Coseo, P.; Larsen, L. Accurate Characterization of Land Cover in Urban Environments: Determining the Importance of Including Obscured Impervious Surfaces in Urban Heat Island Models. Atmosphere 2019, 10, 347. [Google Scholar] [CrossRef] [Green Version]
  3. Du, S.; Shi, P.; Van Rompaey, A.; Wen, J. Quantifying the impact of impervious surface location on flood peak discharge in urban areas. Nat. Hazards 2015, 76, 1457–1471. [Google Scholar] [CrossRef]
  4. Weng, Q. Remote sensing of impervious surfaces in the urban areas: Requirements, methods, and trends. Remote Sens. Environ. 2012, 117, 34–49. [Google Scholar] [CrossRef]
  5. Shao, Z.; Fu, H.; Fu, P.; Yin, L. Mapping Urban Impervious Surface by Fusing Optical and SAR Data at the Decision Level. Remote Sens. 2016, 8, 945. [Google Scholar] [CrossRef] [Green Version]
  6. Huang, F.; Yu, Y.; Feng, T. Automatic extraction of urban impervious surfaces based on deep learning and multi-source remote sensing data. J. Vis. Commun. Image Represent. 2019, 60, 16–27. [Google Scholar] [CrossRef]
  7. Parekh, J.R.; Poortinga, A.; Bhandari, B.; Mayer, T.; Saah, D.; Chishtie, F. Automatic Detection of Impervious Surfaces from Remotely Sensed Data Using Deep Learning. Remote Sens. 2021, 13, 3166. [Google Scholar] [CrossRef]
  8. Akbari, H.; Rose, L.S.; Taha, H. Analyzing the land cover of an urban environment using high-resolution orthophotos. Landsc. Urban Plan. 2003, 63, 1–14. [Google Scholar] [CrossRef]
  9. Gray, K.A.; Finster, M.E. The Urban Heat Island, Photochemical Smog and Chicago: Local Features of the Problem and Solution; Atmospheric Pollution Prevention Division, U.S. Environmental Protection Agency: Washington, DC, USA, 1999. [Google Scholar]
  10. Van der Linden, S.; Hostert, P. The influence of urban structures on impervious surface maps from airborne hyperspectral data. Remote Sens. Environ. 2009, 113, 2298–2305. [Google Scholar] [CrossRef]
  11. Weng, Q.; Lu, D. A sub-pixel analysis of urbanization effect on land surface temperature and its interplay with impervious surface and vegetation coverage in Indianapolis, United States. Int. J. Appl. Earth Obs. Geoinf. 2008, 10, 68–83. [Google Scholar] [CrossRef]
  12. Hu, X.; Weng, Q. Estimating impervious surfaces from medium spatial resolution imagery using the self-organizing map and multi-layer perceptron neural networks. Remote Sens. Environ. 2009, 113, 2089–2102. [Google Scholar] [CrossRef]
  13. Gong, C.; Wu, W. Comparisons of regression tree models for sub-pixel imperviousness estimation in a Gulf Coast city of Mississippi, USA. Int. J. Remote Sens. 2014, 35, 3722–3740. [Google Scholar] [CrossRef]
  14. Cai, C.; Li, P.; Jin, H. Extraction of Urban Impervious Surface Using Two-Season WorldView-2 Images: A Comparison. Photogramm. Eng. Remote Sens. 2016, 82, 335–349. [Google Scholar] [CrossRef]
  15. Hung, C.J.; James, L.A.; Hodgson, M.E. An Automated Algorithm for Mapping Building Impervious Areas from Airborne LiDAR Point-Cloud Data for Flood Hydrology. GIScience Remote Sens. 2018, 55, 793–816. [Google Scholar] [CrossRef]
  16. Zhang, J.; Li, P.; Mazher, A.; Liu, L. Impervious Surface Extraction with Very High Resolution Imagery In Urban Areas: Reducing Tree Obscuring Effect. In Proceedings of the Computer Vision in Remote Sensing (CVRS), Xiamen, China, 16–18 December 2012. [Google Scholar]
  17. Kaur, J.; Singh, J.; Sehra, S.S.; Rai, H.S. Systematic Literature Review of Data Quality within OpenStreetMap. In Proceedings of the International Conference on Next Generation Computing and Information Systems (ICNGCIS), Model Inst Engn & Technol, Jammu, India, 11–12 December 2017. [Google Scholar] [CrossRef]
  18. Li, P.; Guo, J.; Song, B.; Xiao, X. A Multilevel Hierarchical Image Segmentation Method for Urban Impervious Surface Mapping Using Very High-Resolution Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 103–116. [Google Scholar] [CrossRef]
  19. Jin, X. Segmentation-Based Image Processing System. U.S. US8260048 B2, 9 April 2012. [Google Scholar]
  20. Roerdink, J.B.T.M.; Meijster, A. The Watershed Transform: Definitions, Algorithms, and Parallelization Strategies. Fundam. Inform. 2000, 41, 187–228. [Google Scholar] [CrossRef] [Green Version]
  21. Robinson, D.J.; Redding, N.J.; Crisp, D.J. Implementation of a Fast Algorithm for Segmenting SAR Imagery; Scientific and Technical Report; Defense Science and Technology Organization: Australia, 1 January 2002. [Google Scholar]
  22. Luo, Y.; Xu, J.; Yue, W. Research on vegetation indices based on remote sensing images. Ecol. Sci. 2005, 24, 75–79. [Google Scholar]
  23. Woebbecke, D.M.; Meyer, G.E.; Vonbargen, K.; Mortensen, D.A. Color Indices for Weed Identification under Various Soil, Residue and Lighting Conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
  24. Meyer, G.E.; Neto, J.C. Verification of Color Vegetation Indices for Automated Crop Imaging Applications. Comput. Electron. Agric. 2008, 63, 282–293. [Google Scholar] [CrossRef]
  25. Ma, H.; Cheng, X.; Wang, X.; Yuan, J. Road Information Extraction from High-Resolution Remote Sensing Images Based on Threshold Segmentation and Mathematical Morphology. In Proceedings of the 6th International Congress on Image and Signal Processing (CISP), Hangzhou, China, 16–18 December 2013. [Google Scholar]
  26. Zhou, H.; Song, X.; Liu, G. Automatic Road Extraction from High Resolution Remote Sensing Image by Means of Topological Derivative and Mathematical Morphology. In Proceedings of the Remote Sensing Image Processing and Geographic Information Systems. 10th International Symposium on Multispectral Image Processing and Pattern Recognition (MIPPR)—Remote Sensing Image Processing, Geographic Information Systems, and Other Applications, Xiangyang, China, 28–29 October 2017. [Google Scholar] [CrossRef]
  27. Lam, L.; Lee, S.W.; Suen, C.Y. Thinning methodologies—A comprehensive survey. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 869–885. [Google Scholar] [CrossRef] [Green Version]
  28. Cai, B.; Wang, S.; Wang, L.; Shao, Z. Extraction of Urban Impervious Surface from High-Resolution Remote Sensing Imagery Based on Deep Learning. J. Geo-Inf. Sci. 2019, 21, 1420–1429. [Google Scholar]
  29. Shao, Z.; Cheng, G.; Li, D.; Huang, X.; Lu, Z.; Liu, J. Spatiotemporal-Spectral-Angular Observation Model that Integrates Observations from UAV and Mobile Mapping Vehicle for Better Urban Mapping. Geo-Spat. Inf. Sci. 2021, 24, 615–629. [Google Scholar] [CrossRef]
  30. Shao, Z.; Wu, W.; Li, D. Spatio-temporal-spectral observation model for urban remote sensing. Geo-Spat. Inf. Sci. 2021, 24, 372–386. [Google Scholar] [CrossRef]
Figure 1. Vegetation obscuring effects in cities (from Google satellite map).
Figure 2. The flowchart of the extraction of ISAs.
Figure 3. The flowchart of skeleton extraction.
Figure 4. Pixels of the neighborhood of P.
Figure 5. Thinning algorithm for binary region.
Figure 6. Computation of minimum distance.
Figure 7. The range of vertical and horizontal (N: north, E: east).
Figure 8. Consistency of position and direction.
Figure 9. The network structure of street trees.
Figure 10. Skeleton line network features.
Figure 11. The study area from the GF-2 image: (a) the image of the whole study area; (b) the subimage selected from the study area.
Figure 12. Experiments of vegetation detection: (a) The original sub-image from the study area (GF-2); (b) Result of image segmentation; (c) Classification result using Color VI; (d) Result of post-processing. The black pixels represent the objects of non-vegetation, or vegetation without obscuring effects in (c) and (d) while the white pixels represent the objects of the vegetation obscured ISAs.
Figure 13. Experiments of data preprocessing: (a) binary image of vegetation regions; (b) result of area threshold segmentation; (c) result of spur removal; (d) result of regional dilation; (e) result of majority judgment; (f) result of regional erosion. The letters A and A’, B and B’, C and C’, D and D’, E and E’, F and F’ represent subareas from the study area, showing the changes during processing in details. The black pixels represent the areas of non-vegetation, or vegetation without obscuring effects in the images while the white pixels represent the areas of the vegetation obscured ISAs.
Figure 14. Experiments of skeleton extraction: (a) result of thinning; (b) result of spur removal.
Figure 15. Experiments of feature consistency: (a) skeleton lines overlay with OSM (the red part is OSM roads, and the black one is the skeleton); (b) retained skeleton lines.
Figure 16. Experiments of feature recognition: (a) skeleton lines; (b) retained skeleton lines.
Figure 17. Fusion of ISAs from the proposed algorithm and deep learning: (a) GF-2 image; (b) vegetation-obscured ISAs and result from deep learning. 1, vegetation-obscured ISAs from the proposed algorithm, 2, ISAs from deep learning, 3, pervious surface area from deep learning, 4, water.
Figure 18. Extraction effectiveness of the proposed algorithm.
Table 1. The parameters of the GF-2 used in this paper.
Satellite Name: GF-2; Spatial Resolution: 0.8 m; Number of Bands: 3 (Red, Green, Blue); Coordinate System: WGS84; Range of Study Area: 30.50–30.54° N, 114.35–114.42° E; Acquisition Season: Summer 2020.
Table 2. Road grade attribute of OSM and corresponding buffer width.
Num 1–7 (Motorway, Trunk, Secondary, Tertiary, Trunk_link, Secondary_link, Cycleway): Mainly high-grade roads, on which vegetation usually covers the road shoulders or non-motorized lanes attached to high-grade roads; the coverage of vegetation is not broad. Buffer width: 1.25 m.
Num 8–12 (Unclassified, Residential, Living_street, Service, Null): Contains some special road types. It should be noted that “Unclassified” does not mean that the classification is unknown; it refers to the least important roads in a country’s system. “Null” denotes roads without grade. Buffer width: 4.5 m.
Num 13–16 (Pedestrian, Footway, Steps, Path): Only for people to walk. “Path” is a non-specific path that can be used as a cycleway or sidewalk. Buffer width: 1 m.
Num 17 (Construction): Roads under construction, not studied. Buffer width: 0 m.
Table 3. Accuracies for the proposed algorithm.
PA: 79.98%; UA: 56.08%; OA: 86.64%.