Article

An Index Based on Joint Density of Corners and Line Segments for Built-Up Area Detection from High Resolution Satellite Imagery

1 Institute of Photogrammetry and Remote Sensing, Chinese Academy of Surveying and Mapping, Beijing 100830, China; [email protected]
2 Key Laboratory of Mapping from Space, Chinese Academy of Surveying and Mapping, Beijing 100830, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2017, 6(11), 338; https://doi.org/10.3390/ijgi6110338
Submission received: 25 September 2017 / Revised: 13 October 2017 / Accepted: 1 November 2017 / Published: 2 November 2017

Abstract
Detection of built-up areas from Very High Spatial Resolution (VHSR) remote sensing images is a critical step in urbanization monitoring. This paper presents a method for extracting built-up areas from VHSR remote sensing imagery using feature-level fusion of right-angle corners, right-angle sides and road lane markings. The method has six main steps. First, line segments are detected. Second, Harris corner points are detected. Third, right-angle corners and right-angle sides are determined by cross-verification of the detected Harris corners and line segments. Fourth, potential road lane markings are detected by template matching. Fifth, a built-up index image is constructed. Finally, the built-up areas are extracted by binary thresholding of the index image. Three satellite images with wide coverage are employed to evaluate the proposed method. The experimental results suggest that the proposed method outperforms the classic PanTex method. On average, the completeness and quality of the proposed method are 17.94% and 13.33% better, respectively, than those of the PanTex method, while the two methods show no great difference in correctness.

1. Introduction

Very High Spatial Resolution (VHSR) aerial and satellite images provide valuable information [1] in diverse fields such as geography, cartography, surveillance, city planning, surveying and mapping. One of the most important pieces of information is about built-up regions. Built-up regions’ information such as area, shape, location, distribution, growth and characteristics greatly helps government agencies and urban planners in updating land use maps, forming long-term plans and monitoring urbanization. As a result, the monitoring of built-up regions has received increasing attention [2]. In this kind of monitoring, the first step is to detect built-up areas from the VHSR remote sensing images [3].
Generally, a built-up area represents a vital and highly dynamic environment, which is mainly composed of both man-made and natural objects [2]. For low and medium spatial resolution remote sensing images with abundant spectral bands, researchers have proposed many built-up area indices such as the Normalized Difference Built-up Index [4], the Enhanced Built-Up and Bareness Index [5] and the Combinational Built-Up Index [6]. However, the above indices may fail to accurately detect built-up areas from the VHSR images because the VHSR images may not contain the necessary spectral bands that are needed in computing these indices.
Recently, many automatic methods [7,8,9,10,11,12,13,14,15,16,17,18,19,20] have been proposed for built-up area detection from VHSR remotely-sensed images. Depending on whether training samples are used, these existing methods can be roughly divided into two groups [7]:
  • The first group detects built-up areas based on supervised classification methods. In this group, a large number of representative training samples is required to learn the patterns of built-up areas. For example, Benediktsson et al. classified built-up areas from panchromatic high-resolution data by using morphological and neural approaches [8]. Zhong and Wang presented an ensemble model of multiple conditional random fields to incorporate multiple features and learn their contextual information for urban detection [9]. Pesaresi et al. used a novel image classification method, called symbolic machine learning, for detailed urban land cover mapping [10]. Hu et al. presented a novel approach for built-up area detection from high spatial resolution remote sensing images, using a block-based multi-scale feature representation framework [11]. However, the detection accuracy of built-up areas varies with image types, study areas and the selection of training samples and classifiers.
  • The second group directly detects built-up areas without using any training data. With regard to the employed features, these methods are divided into four subcategories:
Texture-based approaches: PanTex [12,13], a contrast measure of texture derived from the gray-level co-occurrence matrix, has been widely used for global human settlement extraction. However, forested areas, which have high PanTex values due to tree shadows, are prone to being misclassified as built-up areas.
Building-density-based approaches: Huang and Zhang [14] proposed a building detection method based on the difference of morphological profiles, and the corresponding building-density-based feature is employed to extract built-up areas in [7]. However, building extraction itself is still a difficult problem, and such approaches often fail to extract built-up areas reliably.
Corner-density-based approaches: Local key point features such as SIFT (Scale Invariant Feature Transform) [15], local feature points extracted with Gabor filters [3], junctions [7] and Harris corners [16] are widely employed to detect built-up areas. To improve the detection accuracy, the literature presents some variants of corner detection methods such as improved Harris [2] and modified Harris for edges and corners [17]. However, corners are also abundant in farmland and along highways, so these areas may be wrongly labeled as human settlements.
Edge-density-based approaches: Edges are an important feature for image understanding. For example, Gong and Howarth [18] incorporate the edge-density feature in image classification to increase the accuracy by approximately 10%. Ünsalan and Boyer [19] introduce a set of measures based on straight lines to assess land development levels in high-resolution panchromatic satellite images. Recently, Chen et al. [20] extracted built-up areas from VHSR images using edge density features. However, edges are common even in natural scenes, which can cause built-up area extraction to fail.
Overall, the second group is more practical than the first. However, a major issue of the second group is that the features employed to extract built-up areas are not distinctive enough from those of non-built-up areas. Thus, in this article we focus on developing and selecting unique clues of built-up areas to improve their extraction accuracy.
In VHSR satellite images, built-up areas contain two dominant classes of man-made objects, namely buildings and roads. Furthermore, most building roofs have rectangular shapes, and they contain a large number of right-angle corners and right-angle sides. At the same time, road lane markings are abundant on road surfaces and are distinctive from the background. In this sense, both corner points and line segments (including right-angle sides of building roofs and line segments of road lane markings) can be used as unique features for built-up area detection. Thus, an Index using Joint Density of Corners and Line Segments (IJDCLS) is proposed to extract built-up areas from VHSR satellite images. Note that, in this paper, the term built-up area is used in a sense close to that of human settlement. The main contribution of this paper is the use of right-angle corners and qualified line segments in a spatial voting scheme to promote the robustness of built-up area extraction from VHSR images.
The remainder of this paper is organized as follows. Section 2 presents the overall framework of our proposed method. Section 3 demonstrates the experiments and analyses. Section 4 gives concluding remarks.

2. The Proposed Framework for Built-Up Area Extraction

As shown in Figure 1, our proposed method is composed of six main steps: (1) detection of straight line segments by a Line Segment Detector (LSD) [21]; (2) detection of corners by a Harris corner detector; (3) determination of right-angle corners and sides; (4) detection of the line segments of road lane markings by template matching; (5) the generation of a thematic image of Joint Density of Corners and Line Segments (IJDCLS) by spatial voting; and (6) binary thresholding by a trial-and-error method.

2.1. Detection of Line Segments

Intuitively, local edges in an image indicate spectral discontinuity and the existence of structural texture or objects [22]. For some types of man-made objects with strong edge boundaries, such as buildings, cars, boats and airplanes, the edge distribution is not only a strong indication of existing objects, but can also be used to locate their centers [22]. Furthermore, line segments are very important clues for building detection. For example, Lin and Nevatia [23] detected buildings and other structures in aerial images using line segment features. Moreover, in VHSR remotely-sensed images, straight-line structures are far more prevalent and regularly distributed in developed areas than in wilderness or rural areas [19]. Thus, line segment density and orientation have been used as clues to detect built-up areas [20].
Various line segment detectors, such as the Hough transform and the detector of Burns et al. [24], have been proposed. In this article, LSD, first proposed in [25], is adopted to detect line segments in VHSR images. LSD is fast and gives sub-pixel accurate results [21]. It is designed to work on any digital image without parameter tuning [21], controlling the number of false detections to, on average, one false alarm per image [25]. LSD builds on Burns et al. [24], and it also uses an a contrario validation approach according to the theory of Desolneux, Moisan and Morel [26,27]. For example, Figure 2a displays a VHSR image sample, and Figure 2b shows the line segments detected by the LSD algorithm. We can see that both built-up objects (e.g., buildings and roads) and natural objects (e.g., croplands and forests) generate many line segments.
To limit the lengths of the line segments, we use two line-segment length thresholds: the minimum (φ_1) and the maximum (φ_2). Among the detected line segments, a line segment is deleted if it is shorter than φ_1 or longer than φ_2. Figure 3a displays the original VHSR image, and Figure 3b displays the refined line segments when φ_1 = 2.00 m (four pixels) and φ_2 = 150.00 m (300 pixels).
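As an illustration of this step (not the authors' C++ implementation), the following Python sketch detects line segments with OpenCV's LSD wrapper and filters them by length; it assumes an OpenCV build that ships cv2.createLineSegmentDetector (e.g., 3.x or 4.5.1 and later), and the default thresholds are the values quoted above for the first test image.

```python
import cv2
import numpy as np

def detect_and_filter_segments(gray, gsd=0.5, phi1_m=2.0, phi2_m=150.0):
    """Detect LSD line segments and keep those whose length lies in (phi1_m, phi2_m) meters."""
    lsd = cv2.createLineSegmentDetector()
    lines, _, _, _ = lsd.detect(gray)              # N x 1 x 4 array of (x1, y1, x2, y2)
    if lines is None:
        return np.empty((0, 4))
    lines = lines.reshape(-1, 4)
    lengths_m = np.hypot(lines[:, 2] - lines[:, 0],
                         lines[:, 3] - lines[:, 1]) * gsd
    keep = (lengths_m > phi1_m) & (lengths_m < phi2_m)   # drop too-short / too-long segments
    return lines[keep]

# usage (hypothetical file name):
# gray = cv2.imread("geoeye_subset.tif", cv2.IMREAD_GRAYSCALE)
# segments = detect_and_filter_segments(gray, gsd=0.5)
```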

2.2. Detection of Harris Corners

In VHSR remotely-sensed images, built-up areas contain a number of corners from building roofs, roads and other man-made objects [16]. Many corner detection methods, such as Harris [28], FAST [29] and SUSAN [30], have been designed in the computer vision field. Among them, the classic Harris corner detector and its variants are popular for extracting man-made structures in urban areas [31]. The work done in [2,16,17] has proven that the classic Harris corner detector is an effective and robust algorithm for corner detection.
Therefore, we use the classic Harris corner detector to extract low-level corners in the VHSR images for built-up area detection. Figure 3c displays the Harris corners detected from the image. We can see that the detected corners are abundant in both man-made and natural environments.
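A minimal Python sketch of this step is shown below; the Harris block size, aperture and k value are illustrative defaults, not values prescribed by the paper.

```python
import cv2
import numpy as np

def detect_harris_corners(gray, quality=0.01):
    """Return (x, y) pixel coordinates of classic Harris corners."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=3, ksize=3, k=0.04)
    ys, xs = np.where(response > quality * response.max())   # keep strong responses only
    return np.column_stack([xs, ys])
```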

2.3. Verification of Harris Corners by Line Segments

In built-up area extraction from VHSR images, four situations exist:
(1)
Some textured areas, such as grassland and forested areas, contain many corners and few line segments, as shown in Figure 3c;
(2)
The farmland areas with a lattice distribution contain many corners and line segments, and their line segments are longer than those of the built-up areas;
(3)
The roads contain many corners and line segments; moreover, some corners have a longer line segment and a shorter line segment;
(4)
A building roof’s corner generally has two orthogonal line segments with medium lengths, as shown in Figure 3d.
The above four facts tell us that one single feature alone (Harris corners or line segments) is not well suited to detect built-up areas. Therefore, the fusion of Harris corners and their supporting line segments might contribute more to built-up area detection than just one single feature does.
In this article, a Harris corner with its supporting line segments is defined as a Harris corner with two orthogonal line segments, both of medium length. As shown in Figure 4, for each Harris corner p_i, let l_1 and l_2 be the two nearest line segments around p_i, and θ_i be the angle between l_1 and l_2. Moreover, the lengths of l_1 and l_2 are length_1 and length_2, respectively, and the distances between p_i and l_1 and l_2 are distance_1 and distance_2, respectively. The distance d between a Harris corner p_i and a line segment l is calculated as follows: through p_i, draw a line perpendicular to l; if the foot of the perpendicular lies between the two endpoints of l, d equals the distance between the foot and p_i; otherwise, d equals the distance between p_i and the nearer endpoint of l. Thus, a Harris corner with two supporting line segments should meet the following three criteria:
φ_1 < length_1 < φ_2 and φ_1 < length_2 < φ_2
|θ_i - 90°| < φ_3
distance_1 < φ_4 and distance_2 < φ_4
where φ_1, φ_2, φ_3 and φ_4 are four predefined thresholds; φ_1 and φ_2 are the two length thresholds introduced in Section 2.1. A Harris corner is deleted if it does not satisfy the above three criteria. Figure 3d shows the detected right-angle corners and sides when φ_3 = 10° and φ_4 = 1.00 m (two pixels herein).
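The sketch below implements this verification rule with the point-to-segment distance defined above; it is a simplified illustration (brute-force nearest-segment search, default thresholds taken from the first test image), not the authors' implementation.

```python
import numpy as np

def point_segment_distance(p, seg):
    """Distance from point p=(x, y) to segment seg=(x1, y1, x2, y2), following the rule in the text."""
    a, b = np.array(seg[:2], float), np.array(seg[2:], float)
    ab, ap = b - a, np.asarray(p, float) - a
    t = np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12)
    if 0.0 <= t <= 1.0:                              # foot of perpendicular falls on the segment
        return float(np.linalg.norm(ap - t * ab))
    return float(min(np.linalg.norm(np.asarray(p, float) - a),
                     np.linalg.norm(np.asarray(p, float) - b)))

def is_right_angle_corner(p, segments, gsd=0.5,
                          phi1=2.0, phi2=150.0, phi3=10.0, phi4=1.0):
    """Check the three criteria for corner p against its two nearest line segments."""
    if len(segments) < 2:
        return False
    d = np.array([point_segment_distance(p, s) for s in segments]) * gsd
    i, j = np.argsort(d)[:2]                         # two nearest segments
    l1, l2 = segments[i], segments[j]
    len1 = np.hypot(l1[2] - l1[0], l1[3] - l1[1]) * gsd
    len2 = np.hypot(l2[2] - l2[0], l2[3] - l2[1]) * gsd
    ang1 = np.degrees(np.arctan2(l1[3] - l1[1], l1[2] - l1[0]))
    ang2 = np.degrees(np.arctan2(l2[3] - l2[1], l2[2] - l2[0]))
    theta = abs(ang1 - ang2) % 180.0
    theta = min(theta, 180.0 - theta)                # acute angle between the two segments
    return (phi1 < len1 < phi2 and phi1 < len2 < phi2
            and abs(theta - 90.0) < phi3
            and d[i] < phi4 and d[j] < phi4)
```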
Note that we do not consider the use of vegetation or building indices to verify the Harris corners because not all of the images have sufficient bands to allow the calculation of these indices.

2.4. Detection of Potential Road Lane Markings

In VHSR satellite images, road lane markings are generally bright linear features. Based on this property, some methods have been presented for extracting road lane markings from aerial images. For example, Hinz and Baumgartner [32] constructed ideal models of markings. Zhang [33] extracted highways and main roads by detecting road lane markings and zebra crossings. Jin et al. [34] extracted road lane markings from VHSR images based on hierarchical image analysis and 2D Gabor filters. Tournaire and Paparoditis [35] built models of dashed lines using geometric, radiometric and relational characteristics, and proposed an automatic approach to detect dashed lines based on stochastic analysis. Lin et al. [36] tracked roads by template matching, with road lane markings serving as important clues for the tracking. However, the spatial resolution of the satellite images used in this article is lower than that of aerial images, and road lane markings in satellite images show different characteristics from those in aerial images. Thus, the methods mentioned above are not suitable for extracting road lane markings here.
In VHSR satellite panchromatic images, such as those from QuickBird and WorldView-2, a road lane marking is approximately one pixel in width, and its grey values are larger than those of its nearby background. Moreover, only straight-line-segment-shaped road lane markings are considered. In this article, template matching is employed to detect road markings. In [37], template matching using the correlation coefficient as a similarity measure was used to recognize road centerline points: given a binary bar-shaped reference template, as shown in Figure 5, the template matching correlates the reference template with the image to evaluate their similarity, and the centerline point candidates are detected at the local maxima of the correlation. We apply the same template matching to VHSR satellite images for the extraction of road lane markings.
Suppose a detected line segment has a length of l_line; a binary reference template t with the same length l_line and a width of three pixels is built (as shown in Figure 5). For each detected line segment, an image patch f of l_line × 3 pixels, centered on the segment, is extracted from the image and aligned with the reference template of the same size. Their correlation coefficient is estimated by:
$$r(t,f) = \frac{\sum_{n=1}^{N} t_n f_n - \frac{1}{N}\left(\sum_{n=1}^{N} t_n\right)\left(\sum_{n=1}^{N} f_n\right)}{\sqrt{\left[\sum_{n=1}^{N} t_n^2 - \frac{1}{N}\left(\sum_{n=1}^{N} t_n\right)^2\right]\left[\sum_{n=1}^{N} f_n^2 - \frac{1}{N}\left(\sum_{n=1}^{N} f_n\right)^2\right]}}$$
where N = 3·l_line is the number of pixels in the patch.
The correlation coefficient evaluates the similarity between the binary reference template t and the image patch f . The line segment is maintained if its correlation coefficient is larger than a predefined threshold φ 5 . Otherwise, it is deleted. Figure 3e shows the maintained road lane marking candidates when φ 5 = 0.6 .
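A small Python sketch of this matching step is given below; the sampling of the l_line × 3 patch along an arbitrarily oriented segment is omitted (only the scoring is shown), and the scoring follows the correlation formula above.

```python
import numpy as np

def correlation_coefficient(t, f):
    """Pearson correlation coefficient between template t and patch f (formula above)."""
    t = np.asarray(t, float).ravel()
    f = np.asarray(f, float).ravel()
    n = t.size
    num = np.sum(t * f) - np.sum(t) * np.sum(f) / n
    den = np.sqrt((np.sum(t ** 2) - np.sum(t) ** 2 / n) *
                  (np.sum(f ** 2) - np.sum(f) ** 2 / n))
    return num / den if den > 0 else 0.0

def is_road_marking(patch, phi5=0.6):
    """patch: l_line x 3 image patch centred on the candidate line segment."""
    template = np.zeros_like(patch, dtype=float)
    template[:, 1] = 1.0                    # bright one-pixel line flanked by darker background
    return correlation_coefficient(template, patch) > phi5
```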
Note that if the input original image has no obvious road lane markings, this step is skipped.

2.5. Construction of Built-Up Area Index

Generally, built-up areas contain a high density of right-angle corners, right-angle sides and road-lane-marking line segments in a spatial neighborhood, while non-built-up areas sparsely contain these features. This means that, if an image pixel ( x i , y i ) belongs to a built-up area, we expect that more corners and line segments exist in its neighborhood. Hence, a density map of the corners and line segments, computed by spatial voting, is utilized to identify built-up areas.
The spatial voting is performed as follows. Suppose that an image pixel (x_i, y_i) has n_1 Harris corner pixels and n_2 line segment pixels within a predefined distance threshold φ_6. The built-up likelihood of the pixel is defined by the following function:
$$Index(x_i, y_i) = \sum_{k=1}^{n_1} \frac{100}{\sqrt{2\pi}} \exp\left(-\frac{(x_i - x_k)^2 + (y_i - y_k)^2}{2}\right) + \sum_{j=1}^{n_2} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{(x_i - x_j)^2 + (y_i - y_j)^2}{2}\right)$$
where (x_k, y_k) represents the spatial coordinates of a corner, for k = 1, …, n_1, and (x_j, y_j) represents the spatial coordinates of a line segment pixel, for j = 1, …, n_2. The likelihood function highlights the built-up region in the pixel neighborhood. If a pixel is a good candidate for a built-up region, a high value of Index(x_i, y_i) is expected. Figure 3g shows the built-up area extraction results by the proposed IJDCLS when φ_6 = 150.50 m (301 pixels herein).
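The following sketch evaluates the index image by applying the voting function above literally (unit-variance Gaussian kernels, corner votes weighted 100×) within a window of radius φ_6; it is a straightforward, unoptimized illustration rather than the authors' implementation.

```python
import numpy as np

def spatial_voting(shape, corner_pts, segment_pts, phi6=301):
    """shape: (rows, cols); corner_pts / segment_pts: integer (x, y) pixel coordinates.
    phi6 is the voting radius in pixels (301 px is about 150.50 m for the first image)."""
    index = np.zeros(shape, dtype=np.float64)
    for pts, weight in ((corner_pts, 100.0), (segment_pts, 1.0)):
        for x, y in pts:
            y0, y1 = max(0, y - phi6), min(shape[0], y + phi6 + 1)
            x0, x1 = max(0, x - phi6), min(shape[1], x + phi6 + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            d2 = (xx - x) ** 2 + (yy - y) ** 2
            index[y0:y1, x0:x1] += weight / np.sqrt(2.0 * np.pi) * np.exp(-d2 / 2.0)
    return index
```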

2.6. Thresholding of Human-Settlement Index

Once the IJDCLS index image is constructed, a corresponding frequency histogram is formed to obtain a threshold φ_7. The built-up regions are generally the pixels whose Index(x_i, y_i) is larger than φ_7. Figure 3h shows the built-up area candidates detected from Figure 3g. Figure 3j shows the ground truth data. Visual inspection shows that the detection results fit the ground truth data closely.
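A minimal sketch of this final step: the frequency histogram is computed to help choose φ_7 by trial and error, and the index image is then binarized (the default value below is only the one used for the first test image).

```python
import numpy as np

def threshold_index(index, phi7=0.01, bins=256):
    """Return the binary built-up mask plus the histogram used to pick phi7."""
    hist, edges = np.histogram(index, bins=bins)   # inspect this distribution to choose phi7
    mask = index > phi7                            # built-up candidates
    return mask, hist, edges
```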
In short, our proposed method needs seven parameters in total. Among them, five parameters have physical meanings. For example, φ_1 and φ_2 are respectively the minimum and maximum length of a line segment candidate, because line segments belonging to building roofs or road lane markings can be neither too short nor too long. Moreover, φ_3 and φ_4 are respectively the tolerances of a right angle and a right-angle side. Additionally, φ_6 is the influence radius of a corner or side point, and it is related to the physical sizes of buildings and blocks, especially the average distance between two man-made objects such as two buildings. If the value of φ_6 is suitable, any two adjoining man-made objects will touch each other, and the built-up areas will have higher values in the feature image. Finally, the remaining two parameters, φ_5 and φ_7, do not have physical meanings, but they can be determined by the trial-and-error method [38].

3. Experiments and Analysis

A prototype system for built-up area extraction based on our proposed method is developed using the C++ language in Microsoft Visual Studio 2010. To demonstrate the performance of our method, we compare it with the PanTex [12,13] method. The PanTex code is from the Orfeo ToolBox (OTB) [39], which is an open-source project for state-of-the-art remote sensing. Additionally, all the experiments are conducted on a ThinkPad W520 laptop with an Intel Pentium 2.40-GHz CPU and 2.98 GB of RAM.

3.1. The Test Datasets

Three VHSR satellite images are used to evaluate the performance of our method; their basic information is given in Table 1. The first one is a GeoEye-One pan-sharpened multi-spectral image with a size of 9700 × 8856 pixels. The Ground Sampling Distance (GSD) is 0.50 m, and the covered area is located in the suburbs of Suzhou City, Zhejiang Province, China, as shown in Figure 3a. Note that only the red band is used in the experiments. The GeoEye data mainly contain two types of objects: built-up and forested areas. Moreover, a lake is contained in the built-up areas, as shown in Figure 3a. Some road lane markings on the highways are clearly visible, while those on the streets in the urban areas are not.
The second is a QuickBird panchromatic image with a size of 20,786 × 15,448 pixels. The GSD is 0.61 m, and the covered area is located around the central urban and suburban area of Tai’an City, Shandong Province, China, as shown in Figure 6a. Besides the built-up areas, the study area covers various types of objects, including forested mountains, croplands, rivers and lakes, as shown in Figure 6a. Most road lane markings are obviously visible on the highways and main streets in the urban area.
The third is a QuickBird panchromatic image with a size of 6904 × 6905 pixels. The GSD is 0.61 m, and the covered area is located in the central urban area of Linzhi City, Tibet Autonomous Region, China, as shown in Figure 7a. Similar to the second dataset, the third dataset covers built-up areas, rivers, forests, grassland and bare land, as shown in Figure 7a. Moreover, many islands are located in the rivers. No road lane markings are observed.
Additionally, to verify the performance of our method, reference built-up regions were carefully created by an experienced operator through manual delineation. The reference data for the three test images are shown in Figure 3j, Figure 6f and Figure 7f, respectively. Note that if lakes are located within the built-up areas (as shown in Figure 3j and Figure 7f), the lakes are regarded as built-up areas in the reference data. Similarly, if a forested park is contained within a built-up area, the park is also regarded as built-up area in the reference data, as shown in the center of Figure 6f.

3.2. Parameters Setting and Results

Both our proposed method and PanTex are tested on the three test images. For our proposed method, the values of the seven parameters for the three images are listed in Table 2. With these predefined thresholds, the experimental results are obtained. Figure 3f and Figure 6b show the extracted right-angle corners, right-angle sides and road lane markings of the first and second test images, respectively. Figure 3g and Figure 6c show the corresponding thematic images, and Figure 3h and Figure 6d show the final built-up area extraction results. For the third image, only the right-angle corners and right-angle sides are extracted (as shown in Figure 7b) to construct the index image (as shown in Figure 7c), and the final built-up area extraction results are shown in Figure 7d.
The data in Table 3 show that the built-up areas extracted by our proposed method for the three test images are 16.984289 km2, 83.012586 km2 and 7.424627 km2, respectively. For the first test image, visual evaluation suggests that our results are consistent with the reference data except for the lake area. For the other two test images, our method detected more built-up areas than the reference data contain (as shown in Figure 6d and Figure 7d, Table 3). Specifically, our method correctly rejects forested areas but may fail in some farmland areas. Overall, our method achieves a satisfactory performance, although it tends to recognize some croplands as built-up areas because these croplands have shapes similar to buildings (as suggested by Figure 6d).
As far as the PanTex method is concerned, the two input parameters for the three images are listed in Table 4; they are the window size and the binary threshold. The extracted results for the three test images are shown in Figure 3i, Figure 6e and Figure 7e, respectively. The data in Table 3 show that the built-up areas extracted by the PanTex method for the three test images are 16.070800 km2, 54.368011 km2 and 6.544921 km2, respectively. The detected results of the first test image are consistent with the reference data except for the lake region. Comparatively, the built-up extraction results of the second and third images differ considerably from their reference data. For example, the PanTex method mistakenly recognized forested areas as built-up areas in the second image, as shown in the top-left part of Figure 6e, because the forested areas have very high contrast values due to the presence of both trees and shadows; at the same time, the built-up areas also have very high contrast values due to the presence of both buildings and shadows. On the third image, most detected results are correct compared to the reference data, but many true built-up areas are missed because most buildings in those areas are very low, which results in fewer shadows and low contrast values. Overall, the PanTex method performs differently on the three test images.

3.3. Performance Evaluation

In this article, three indicators—correctness ( P e ), completeness ( P c ) and quality ( P q ) [2,40]—are used to evaluate the performance. They are defined as follows:
$$P_e = \frac{S_{auto \& manual}}{S_{auto}}, \qquad P_c = \frac{S_{auto \& manual}}{S_{manual}}, \qquad P_q = \frac{S_{auto \& manual}}{S_{auto \cup manual}}$$
where S_auto&manual is the area of the intersection of the automatically-extracted results and the reference results, S_auto is the area of the automatically-extracted results, S_manual is the area of the reference results and S_auto∪manual is the area of the union of the automatically-extracted results and the reference results.
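A short sketch of these indicators, computed here from binary raster masks (automatically extracted vs. reference) rather than vector areas; this is an illustrative equivalent, not the evaluation code used by the authors.

```python
import numpy as np

def evaluate(auto_mask, manual_mask):
    """Correctness, completeness and quality from two boolean masks."""
    auto = np.asarray(auto_mask, bool)
    manual = np.asarray(manual_mask, bool)
    inter = np.logical_and(auto, manual).sum()
    union = np.logical_or(auto, manual).sum()
    p_e = inter / auto.sum()        # correctness
    p_c = inter / manual.sum()      # completeness
    p_q = inter / union             # quality
    return p_e, p_c, p_q
```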
The evaluation results of the two methods on the three test datasets are listed in Table 5. On the first test image, both methods achieved pretty similar performance. For example, the quality values of our method and PanTex are 84.78% and 84.05%, respectively. However, for the other two test images, the statistics of the two methods are quite different. For example, the completeness values of our method and PanTex are 91.43% and 61.64%, respectively, on the second test image.
Moreover, the statistics in Table 5 reveal two interesting phenomena. First, all the correctness values of PanTex are slightly higher than those of our method on the three test images. On average, the correctness value of PanTex is 2.47% higher than that of our method, indicating that the difference in correctness between the two methods is small. Second, all completeness and quality values of our method are higher than those of PanTex on the three test images. On average, the completeness value of our method is 17.94% higher than PanTex's, and the quality value of our method is 13.33% higher than that of PanTex. On the whole, our proposed method outperforms PanTex with regard to completeness and quality, and the statistics in Table 5 demonstrate that it detects more built-up areas than the PanTex method on average.

3.4. Discussion

The above performance evaluation suggests that our proposed method achieved a better performance than the PanTex method on the three test images, although it needs more parameters than PanTex. However, most of the seven parameters listed in Table 2 are related to prior knowledge of the VHSR image. Five parameters (φ_1, φ_2, φ_3, φ_4 and φ_6) have very clear physical meanings, corresponding to the physical sizes, shapes and geometrical relationships of buildings, blocks and road lane markings. The other two parameters, φ_5 and φ_7, are related to the feature spaces, but they can be determined by trial and error.
As shown in the results, our proposed method has two shortcomings. First, the extracted results may have holes, as shown in Figure 7d. The holes are produced for two reasons: (1) there are no detected corners, sides or road markings around the holes; (2) the parameter φ 6 is not large enough. Second, our proposed method easily mislabels croplands as built-up areas. This problem may be solved by adopting the multispectral information of the satellite images.

4. Conclusions

In VHSR satellite images, there are more right-angle corners, right-angle sides and road lane markings in built-up areas than in natural environments, which our method uses as a unique clue for the extraction of built-up areas. We named the proposed method IJDCLS. Although IJDCLS needs seven parameters, most of them can be determined from prior knowledge of the image itself. Three VHSR satellite images were used to evaluate the proposed method and PanTex. The experimental results suggest that our proposed method outperforms PanTex. On average, the completeness and quality values of our method are larger by 17.94% and 13.33%, respectively, than those of PanTex, while the correctness values of our method and PanTex are very close, being 89.67% and 92.34%, respectively. These results suggest that our method has potential for engineering applications. On the other hand, our proposed method has some disadvantages. First, the final built-up polygons may have holes due to unreasonable parameter settings or complex topological relationships between natural and man-made objects. Second, our method may misclassify farmland as built-up areas when farmland parcels and buildings have very similar sizes.
Future work will include: (1) the fusion of big/open data [41] or elevation data [42] with VHSR images for accurate detection of built-up areas; (2) the optimization of the algorithms and the adoption of high-performance computing to improve efficiency; (3) the use of a vegetation index to better separate farmland from built-up areas; (4) the use of a building index [43,44] or extracted man-made objects [45] to highlight only the impervious surfaces in the images, which would increase the salience of the built-up areas.

Acknowledgments

This research was funded by: (1) the Basic Research Fund of the Chinese Academy of Surveying and Mapping under Grant 777161103; (2) the Foundation for Remote Sensing Young Talents of the National Remote Sensing Center of China; and (3) the General Program sponsored by the National Natural Science Foundation of China (NSFC) under Grant 41371405.

Author Contributions

Xiaogang Ning performed the experiments, analyzed the data and wrote the manuscript. Xiangguo Lin conceived of and designed the whole framework and revised the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sırmaçek, B.; Ünsalan, C. A probabilistic framework to detect buildings in aerial and satellite images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 211–221. [Google Scholar] [CrossRef]
  2. Tao, C.; Tan, Y.; Zou, Z.; Tian, J. Unsupervised detection of built-up areas from multiple high-resolution remote sensing images. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1300–1304. [Google Scholar] [CrossRef]
  3. Sirmacek, B.; Unsalan, C. Urban area detection using local feature points and spatial voting. IEEE Geosci. Remote Sens. Lett. 2010, 7, 146–150. [Google Scholar] [CrossRef]
  4. Xu, H. Extraction of urban built-up land features from Landsat imagery using a thematic oriented index combination technique. Photogramm. Eng. Remote Sens. 2007, 73, 1381–1391. [Google Scholar] [CrossRef]
  5. As-syakur, A.R.; Adnyana, I.W.S.; Arthana, I.W.; Nuarsa, I.W. Enhanced built-up and bareness index (EBBI) for mapping built-up and bare land in an urban area. Remote Sens. 2012, 4, 2957–2970. [Google Scholar] [CrossRef]
  6. Sun, G.; Chen, X.; Jia, X.; Yao, Y.; Wang, Z. Combinational build-up index (CBI) for effective impervious surface mapping in urban areas. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 9, 2081–2092. [Google Scholar] [CrossRef]
  7. Liu, G.; Xia, G.; Huang, X.; Yang, W.; Zhang, L. A perception-inspired building index for automatic built-up area detection in high-resolution satellite images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, Australia, 21–26 July 2013; pp. 3132–3135. [Google Scholar]
  8. Benediktsson, J.A.; Pesaresi, M.; Arnason, K. Classification and feature extraction for remote sensing images from urban areas based on morphological transformations. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1940–1949. [Google Scholar] [CrossRef]
  9. Zhong, P.; Wang, R. A multiple conditional random fields ensemble model for urban area detection in remote sensing optical images. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3978–3988. [Google Scholar] [CrossRef]
  10. Pesaresi, M.; Corbane, C.; Julea, A.; Florczyk, A.J.; Syrris, V.; Soille, P. Assessment of the added-Value of Sentinel-2 for detecting built-up areas. Remote Sens. 2016, 8, 299. [Google Scholar] [CrossRef] [Green Version]
  11. Hu, Z.; Li, Q.; Zhang, Q.; Wu, G. Representation of block-based image features in a multi-scale framework for built-up area detection. Remote Sens. 2016, 8, 155. [Google Scholar] [CrossRef]
  12. Pesaresi, M.; Gerhardinger, A.; Kayitakire, F. A robust built-up area presence index by anisotropic rotation-invariant textural measure. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2008, 6, 2410–2420. [Google Scholar] [CrossRef]
  13. Pesaresi, M.; Gerhardinger, A. Improved textural built-up presence index for automatic recognition of human settlements in arid regions with scattered vegetation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 1, 16–26. [Google Scholar] [CrossRef]
  14. Huang, X.; Zhang, L. Morphological building/shadow index for building extraction from high resolution imagery over urban areas. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 161–172. [Google Scholar] [CrossRef]
  15. Sirmacek, B.; Unsalan, C. Urban-area and building detection using sift keypoints and graph theory. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1156–1167. [Google Scholar] [CrossRef]
  16. Li, Y.; Tan, Y.; Deng, J.; Wen, Q.; Tian, J. Cauchy graph embedding optimization for built-up areas detection from high-resolution remote sensing images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2078–2096. [Google Scholar] [CrossRef]
  17. Kovacs, A.; Sziranyi, T. Improved harris feature point set for orientation-sensitive urban-area detection in aerial images. IEEE Geosci. Remote Sens. Lett. 2013, 10, 796–800. [Google Scholar] [CrossRef]
  18. Gong, P.; Howarth, P.J. The use of structural information for improving land-cover classification accuracies at the rural-urban fringe. Photogramm. Eng. Remote Sens. 1990, 56, 67–73. [Google Scholar]
  19. Ünsalan, C.; Boyer, K.L. Classifying land development in high-resolution panchromatic satellite images using straight-line statistics. IEEE Trans. Geosci. Remote Sens. 2004, 42, 907–919. [Google Scholar] [CrossRef]
  20. Chen, H.; Tao, C.; Zou, Z.; Shao, L. Extraction of built-up areas from high-resolution remote-sensing images using edge density features. J. Appl. Sci. 2014, 32, 537–542. [Google Scholar]
  21. Von Gioi, R.G.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A line segment detector. Image Process. Line 2012, 2, 35–55. [Google Scholar] [CrossRef]
  22. Hu, X.; Shen, J.; Shan, J.; Pan, L. Local edge distributions for detection of salient structure textures and objects. IEEE Geosci. Remote Sens. Lett. 2013, 10, 466–470. [Google Scholar] [CrossRef]
  23. Lin, C.; Nevatia, R. Building detection and description from a single intensity image. Comput. Vis. Image Understand. 1998, 72, 101–121. [Google Scholar] [CrossRef]
  24. Burns, J.B.; Hanson, A.R.; Riseman, E.M. Extracting straight lines. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 4, 425–455. [Google Scholar]
  25. Von Gioi, R.G.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732. [Google Scholar] [CrossRef] [PubMed]
  26. Desolneux, A.; Moisan, L.; Morel, J.M. Meaningful alignments. Int. J. Comput. Vis. 2000, 40, 7–23. [Google Scholar] [CrossRef]
  27. Desolneux, A.; Moisan, L.; Morel, J.M. From Gestalt Theory to Image Analysis, A Probabilistic Approach; Springer: New York, NY, USA, 2008; ISBN 0387726357. [Google Scholar]
  28. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151. [Google Scholar]
  29. Trajković, M.; Hedley, M. Fast corner detection. Image Vis. Comput. 1998, 16, 75–87. [Google Scholar]
  30. Smith, S.M.; Brady, J.M. SUSAN: A new approach to low level image processing. Int. J. Compu. Vis. 1997, 23, 45–78. [Google Scholar] [CrossRef]
  31. Fonte, L.M.; Gautama, S.; Philips, W.; Goeman, W. Evaluating corner detectors for the extraction of man-made structures in urban areas. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 25–29 July 2005; pp. 237–240. [Google Scholar]
  32. Hinz, S.; Baumgartner, A. Automatic extraction of urban road network from multi-view aerial imagery. ISPRS J. Photogramm. Remote Sens. 2003, 58, 83–98. [Google Scholar] [CrossRef]
  33. Zhang, C. Towards an operational system for automated updating of road databases by integration of imagery and geodata. ISPRS J. Photogramm. Remote Sens. 2003, 58, 166–186. [Google Scholar] [CrossRef]
  34. Jin, H.; Feng, Y.; Li, M. Towards an automatic system for road lane marking extraction in large-scale aerial images acquired over rural areas by hierarchical image analysis and Gabor filter. Int. J. Remote Sens. 2012, 33, 2747–2769. [Google Scholar] [CrossRef] [Green Version]
  35. Tournaire, O.; Paparoditis, N. A geometric stochastic approach based on marked point processes for road mark detection from high resolution aerial images. ISPRS J. Photogramm. Remote Sens. 2009, 64, 621–631. [Google Scholar] [CrossRef]
  36. Lin, X.; Zhang, R.; Shen, J. A template-matching based approach for extraction of roads from very high resolution remotely sensed imagery. Int. J. Image Data Fusion 2012, 3, 149–168. [Google Scholar] [CrossRef]
  37. Hu, X.; Tao, C.V. A reliable and fast ribbon road detector using profile analysis and model-based verification. Int. J. Remote Sens. 2005, 26, 887–902. [Google Scholar] [CrossRef]
  38. Jackson, R.R.; Fiona, R.C.; Chris, M.C. Geographic variation in a spider’s ability to solve a confinement problem by trial and error. Int. J. Comp. Psychol. 2006, 19, 282–296. [Google Scholar]
  39. Orfeo ToolBox. Available online: https://www.orfeo-toolbox.org/ (accessed on 1 December 2015).
  40. Zhang, J.; Duan, M.; Yan, Q.; Lin, X. Automatic vehicle extraction from airborne LiDAR data using an object-based point cloud analysis method. Remote Sens. 2014, 6, 8405–8423. [Google Scholar] [CrossRef]
  41. Long, Y.; Liu, L. Transformations of urban studies and planning in the big/open data era: A review. Int. J. Image Data Fusion. 2016, 7, 295–308. [Google Scholar] [CrossRef]
  42. Zhang, J.; Lin, X. Advances in fusion of optical imagery and LiDAR point cloud applied to photogrammetry and remote sensing. Int. J. Image Data Fusion 2017, 8, 1–31. [Google Scholar] [CrossRef]
  43. Zha, Y.; Gao, J.; Ni, S. Use of normalized difference built-up index in automatically mapping urban areas from TM imagery. Int. J. Remote Sens. 2003, 24, 583–594. [Google Scholar] [CrossRef]
  44. Xu, H.; Huang, S.; Zhang, T. Built-up land mapping capabilities of the ASTER and Landsat ETM+ sensors in coastal areas of southeastern China. Adv. Space Res. 2013, 52, 1437–1449. [Google Scholar] [CrossRef]
  45. Li, Z.; Shi, W.; Wang, Q.; Miao, Z. Extracting man-made objects from high spatial resolution remote sensing images via fast level set evolutions. IEEE Trans. Geosci. Remote Sens. 2015, 53, 883–899. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the proposed method.
Figure 2. The extraction of line segments by the Line Segment Detector. (a) A subset of a VHSR satellite image; (b) the detected line segments (marked with red color) and (c) the detected line segments (marked with red color) imposed on the original image.
Figure 3. The first test image; the intermediate results of our method and the extracted built-up areas. (a) The VHSR satellite image; (b) the detected line segments; (c) Harris corners; (d) the detected right angle corners and right angle sides; (e) the detected road marks; (f) the retained corners and line segments; (g) the constructed thematic image by spatial voting; (h) the final built-up area (by our proposed method) imposed on the original image; (i) the final built-up area by the PanTex method; (j) the reference built-up area.
Figure 4. The construction of a right angle corner and two right angle sides.
Figure 5. The binary reference template for the extraction of road marks.
Figure 6. The second test image; the results of our method and the extracted built-up areas. (a) The VHSR satellite image; (b) the retained corners and line segments; (c) the constructed thematic image by spatial voting; (d) the final built-up area (by our proposed method) imposed on the original image; (e) the final built-up area by the PanTex method; (f) the reference built-up area.
Figure 7. The third test image; the results of our method and the extracted built-up areas. (a) The VHSR satellite image; (b) the retained corners and line segments; (c) the constructed thematic image by spatial voting; (d) the final built-up area (by our proposed method) imposed on the original image; (e) the final built-up area by the PanTex method; (f) the reference built-up area.
Table 1. Information about the three test images.
| Image No. | Satellite Sensor | Bands | GSD (m) | Length × Width (Pixels × Pixels) | Location |
| First | GeoEye-One | Pan-sharpened RGB | 0.50 | 9700 × 8856 | Suzhou, China |
| Second | QuickBird | Panchromatic | 0.61 | 20,786 × 15,448 | Tai’an, China |
| Third | QuickBird | Panchromatic | 0.61 | 6904 × 6905 | Linzhi, China |
Table 2. Values of the seven parameters of our proposed method for the three test images.
| Image No. | φ_1 (m) | φ_2 (m) | φ_3 (°) | φ_4 (m) | φ_5 | φ_6 (m) | φ_7 |
| First | 2.00 | 150.00 | 10.00 | 1.00 | 0.6 | 150.50 | 0.01 |
| Second | 3.05 | 91.50 | 15.00 | 1.22 | 0.7 | 122.61 | 10.00 |
| Third | 3.05 | 91.50 | 15.00 | 1.22 | - | 122.61 | 200.00 |
Table 3. The areas of the built-up regions by the three methods on the three test images.
| Image No. | Reference (km2) | PanTex (km2) | Our Method (km2) |
| First | 17.352817 | 16.070800 | 16.984289 |
| Second | 75.689544 | 54.368011 | 83.012586 |
| Third | 4.115574 | 6.544921 | 7.424627 |
Table 4. The values of the two input parameters of the PanTex method for the three test images.
| Image No. | Window (Pixels × Pixels) | Binary Threshold |
| First | 100 × 100 | 0.25 |
| Second | 84 × 84 | 0.4 |
| Third | 84 × 84 | 0.37 |
Table 5. Statistics of the performance of two built-up extraction methods.
| Image No. | Method | P_e (%) | P_c (%) | P_q (%) |
| First | PanTex | 94.97 | 87.96 | 84.05 |
| First | Our Method | 92.75 | 90.78 | 84.78 |
| Second | PanTex | 85.56 | 61.64 | 55.69 |
| Second | Our Method | 83.37 | 91.43 | 77.32 |
| Third | PanTex | 96.49 | 60.68 | 59.37 |
| Third | Our Method | 92.88 | 81.88 | 76.99 |
