Article

Generating Topographic Map Data from Classification Results

Department of Development and Planning, Aalborg University, 9000 Aalborg, Denmark
Remote Sens. 2017, 9(3), 224; https://doi.org/10.3390/rs9030224
Submission received: 30 December 2016 / Accepted: 25 February 2017 / Published: 2 March 2017

Abstract
The use of classification results as topographic map data requires cartographic enhancement and checking of the geometric accuracy. Urban areas are of special interest. The conversion of the classification result into topographic map data of high thematic and geometric quality is the subject of this contribution. After a review of the existing literature on this topic, a methodology is presented. The extraction of point clouds belonging to line segments is solved by the Hough transform. The mathematics for deriving polygons of orthogonal, parallel, and general line segments by least squares adjustment is presented. A unique solution for polylines, in which the Hough parameters are optimized, is also given. By means of two data sets, land cover maps of six classes were produced and then enhanced by the proposed method. The classification used the decision tree method with a variety of attributes, including object heights derived from imagery. The cartographic enhancement was carried out at two different levels of quality. The user’s accuracies for the classes “impervious surface” and “building” were above 85% in the “Level 1” map of Example 1. The geometric accuracy of building corners in the “Level 2” maps was assessed by means of reference data derived from ortho-images. The obtained root mean square errors (RMSE) of the generated coordinates (x, y) were RMSEx = 1.2 m and RMSEy = 0.7 m (Example 1) and RMSEx = 0.8 m and RMSEy = 1.0 m (Example 2), using 31 and 62 check points, respectively. All processing for Level 1 (raster data) could be carried out with a high degree of automation. Level 2 maps (vector data) were compiled for the classes “building” and “road and parking lot”. For urban areas of large size and with numerous classes, universal algorithms are necessary to produce vector data fully automatically. Recent progress in sensors and machine learning methods will support the generation of topographic map data of high thematic and geometric accuracy.

1. Introduction

Great progress has been achieved in the classification of aerial and satellite images by means of machine learning methods. The classification results, however, are not topographic maps. Generalization of the map content and cartographic quality are necessary for topographic maps. The geometric accuracy should meet high demands. Urban areas are of special interest. It is also important that these maps can be produced quickly and updated at short intervals. This demand requires automatic processing.
In this contribution, we focus on automatic 2D mapping of urban areas with detailed content, including small objects. The positional accuracy of well-defined objects should be better than one meter. The graphical output must have cartographic quality, which includes simplification of the content and representation of man-made objects by straight and orthogonal lines. Such map data are in high demand. An overview of the status of mapping in the world was recently published in [1]. Information from 244 countries and territories was collected. The report states that map coverage in the scale range 1:25,000 and larger is 33.5%. There are large differences in the coverage with topographic maps in this scale range, e.g., 5% in Africa and 98% in Europe. In some of the areas covered by maps or databases, the data may be 10 to 35 years old. These facts demonstrate the need for improvements in the production and maintenance of such data. Private companies, e.g., Google and Microsoft, have created map data for navigation and location-based services. They often use volunteers to create these maps; OpenStreetMap (OSM) is one example. The contents and accuracy of such maps differ. Tests indicate that these maps may have a geometric accuracy of 1.6 m after correction of systematic errors [2].
Recent advances in sensor technology and processing methods offer new possibilities to produce and update maps and databases in the important category 1:25,000 and larger more quickly. Topographic objects of urban areas need to be mapped in the scale range 1:10,000 and larger. These vector maps may be derived manually or automatically from various source data. The cartographic enhancement and the geometric accuracy are very important issues when topographic maps are the goal; they are the focus of this work. In the past, research has been carried out to find solutions for this task. Some authors proposed methods using lidar data only [3,4,5]. In [3], the 3D point clouds are analyzed and features are derived for each 3D point using points in its neighborhood. By means of these features, the edge points of building roofs can be detected and then assembled into lines. The end points of the lines are considered corner points. A dense point cloud is necessary, which was achieved by multiple airborne scans and proper registration. Building boundaries were derived in [4] by means of tracing and regularization of laser point clouds. The applied methodology used filtering and segmentation to obtain 2D point clouds of individual buildings, for which boundaries were derived by a convex hull algorithm and a least squares adjustment. The assessment of the geometric accuracy at test sites revealed a standard deviation of about 20% of the spacing of the lidar points; a regular and consistent point distribution is necessary. In [5], a method based on aerial color (RGB) imagery and lidar was presented. The lidar point cloud is used to derive masks for buildings. Line segments and corner points are derived by edge and corner detectors. A post-classification uses a pseudo-vegetation index to eliminate line segments situated within trees. The achieved geometric accuracy was 1.92 m (RMSE).
This contribution has the goal of automatically producing urban 2D maps of high cartographic quality and of high geometric accuracy using aerial imagery only. The article is an updated and extended version of a contribution to GEOBIA 2016 [6]. The findings in this article are now based on two practical examples. The applied methodology for refinement of classification results was supplemented with a new method for polylines. New explanations, figures, tables, and references were added.
The structure of this paper is the following. Section 2 describes the characteristics of topographic maps and databases. The source data and the classification methods are discussed in Section 3 and Section 4. The cartographic enhancement of the classification result is dealt with in Section 5. A new methodology to improve the cartographic quality is presented in Section 6. Information on the assessment of the thematic and geometric accuracy is given in Section 7. Examples of cartographic enhancements are part of Section 8. Discussion and conclusions are in Section 9 and Section 10, respectively.

2. Characteristics of Large Scale Topographic Maps and Databases

Topographic maps of urban areas with many details are produced at scales of 1:10,000 and larger. The contents of these maps depend on the purpose of the map. Planning and management are important applications. The various object types are stored in different layers and can be displayed individually or in combination. Storage in databases also allows analysis of the map data. The generation and updating of such geographic information systems (GIS) is a major task of mapping organizations today. Details on the assessed accuracy, the time of acquisition, and other information are stored as metadata. Topographic maps are always georeferenced and may also contain elevations. The separation into a planimetric (2D) map and an accompanying digital elevation model (DEM) appears to be a trend in mapping, including updating. Another trend is the generation of three-dimensional maps and databases; their generation and updating are much more demanding regarding the level of detail and the GIS applications. In the following, we discuss 2D maps only.

2.1. 2D Maps of Urban Areas

2D maps of urban areas are sometimes also called technical maps. Such maps should have a high geometric accuracy and a high rate of updating. They are digital vector maps and are displayed on a computer screen in a range of scales, e.g., 1:1000 to 1:10,000. Printing of analog maps may occur on demand only. The production methods differ around the world. Manual digitizing of ortho-images is a fast and cheap method to produce and update digital vector maps. The level of detail and the accuracy of maps produced by this so-called heads-up digitizing depend very much on the resolution of the ortho-images. The smallest object that can be recognized should cover an area of two to three pixels, which must also have sufficient contrast to its surroundings. The accuracy of ortho-images should be about two ground sampling distances (GSD). The ortho-images are often used as background information for the map data in vector format. The automatic extraction of points, lines, and areas from ortho-images, also called vectorization, is the subject of this article.

2.2. Objects of Large Scale Topographic Maps

Objects of large scale topographic maps are buildings, car ports, walls, roads, parking lots, paths, bridges, trees, bushes, hedges, and many others. To be represented on a map, they should have a minimum size. This is given by the resolution of the human eye, which is about one minute of arc; lines of 0.05 mm in width can thus be recognized from a viewing distance of 30 cm. The resolution of a computer screen is given by its pixel size. For example, the pixel size of a 56-cm screen with 1680 × 1050 pixels is 0.28 mm, which corresponds to a resolution of 35 pixels/cm or 90 dpi. The thinnest line that can be displayed on such a screen has a width of 0.28 mm. The lines of man-made objects are often straight and orthogonal to each other. In the graphical representation, such lines should be straight and orthogonal. Furthermore, the lines should be without gaps, and the polygons of area objects must be closed.

3. Source Data of Classification

In this contribution, the generation of topographic maps is investigated by enhancing the results of classification. The resources are aerial images, auxiliary data and features (attributes) of topographic objects.

3.1. Type of Imagery

The images used for the classification of urban areas should have spectral bands in the visible (red, green, and blue) and in the non-visible (near-infrared) parts of the spectrum. The radiometric resolution should be better than eight bits, which correspond to 256 digital numbers (DN). Furthermore, the images should be metric, which means that accurate values for the camera constant, the distortion of the lens, and the position of the principal point must be determined by calibration. Modern aerial photogrammetric cameras meet these demands. The imagery must be taken with overlap, e.g., 60%, so that elevations can be derived. The ground sampling distance (GSD) should be small enough that the spatial positions of well-defined points can be determined with sub-meter accuracy. The aerial images must be georeferenced, which means that all images are connected by means of automatically derived homologous points. This process, called aerial triangulation, requires a few ground control points. Data from position and attitude sensors, recorded simultaneously with the images, support this process and permit accurate orientation data of the images. The aerial images can then be transformed into ortho-images. In this transformation, the aerial images are rectified for their tilts and corrected for differences in terrain elevation. The first task requires the orientation data of the images and the second one a digital elevation model. Two elevation models can be used: the digital surface model (DSM) or the digital terrain model (DTM). The DSM-based ortho-image depicts the buildings in their correct position, but their outlines are wriggly lines. In the DTM-based ortho-image, the outlines of buildings are sharp but displaced due to their heights above ground. The DTM is derived from the DSM by means of filtering; some manual editing may be required. The size of the ortho-image pixel can differ from the GSD value. Each pixel of the ortho-image may have coordinates in the reference system; the coordinate value is valid either for the center or for the upper left corner of a pixel. To achieve high geometric accuracy, these parameters should be known and used correctly in the processing.

3.2. Auxiliary Data

In the procedures of classification and graphic enhancement, other data can also be used to advantage. For example, the spatial coordinates of the perspective centers and the heights above ground may be applied to correct the position of buildings when DTM-based ortho-images are used in the classification. Existing maps may be helpful to detect objects. The classification can then be restricted to certain areas, e.g., to roads and parking lots when cars are to be detected and removed. Other data may be DEMs derived from airborne laser scanning (lidar) or other sensors. The source and auxiliary data are often supplemented by metadata, which inform about the accuracy, date of collection, etc. It may also be necessary to test whether such data are fit for purpose.

3.3. Attributes and Attribute Profiles

The objects of topographic maps can be detected automatically by means of attributes which characterize these objects. The average height of residential houses (dZ) in suburbs may be known in advance. Other attributes used in classification are the spectral signature and the normalized difference vegetation index (NDVI); they can be derived from imagery. In addition, attribute profiles may be used. These are derived from the standard attributes (dZ, NDVI); examples are the standard deviation of the intensities or of the elevations in the neighborhood of a pixel. More advanced attribute profiles are built from morphological operators [7].

4. Classification Methods

Many classification methods have been developed in the past. Besides the generation of land cover maps with several classes, the extraction of single objects is the subject of many studies. The extraction of building boundaries using high resolution images and lidar data has recently been published in [8]. Lidar data are used to produce a coarse boundary, which is then refined by means of edges extracted from stereo images; precise 3D boundaries of buildings are obtained by this methodology. In [9], 2D building outlines are generated by means of lidar data using elevations and intensities. Our investigation deals with the generation of 2D land cover maps of several classes using high resolution images only. The classification method applied in this research is the decision tree (DT). The theoretical background of the DT method is given in [10]; experiences with DT classification are published inter alia in [11,12]. The DT method uses training data. Thresholds for class attributes are derived automatically, splitting the training data into two parts. The splitting is repeated until the training data are separated into the selected classes. This recursive partitioning generates a tree structure, which is then used to assign a class to each unit of the land cover map. A minimal sketch of such a classification is given below.
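The sketch uses the rpart package in R; the data frame names “train” and “units” and the restriction to two attributes are assumptions for illustration, not the actual scripts of this study.

# Minimal sketch of a DT classification (rpart); "train" holds the
# attributes and the reference class of the training units, "units"
# holds the attributes of all map units (hypothetical names).
library(rpart)

# Recursive partitioning: thresholds for the attributes are derived
# automatically until the training data are separated into the classes.
fit <- rpart(class ~ ndsm + ndvi, data = train, method = "class")

# The derived tree structure assigns a class to each unit of the map.
pred <- predict(fit, newdata = units, type = "class")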
The applied classification method may influence the results of the cartographic enhancement. Such investigations can be extensive and are not part of this work.

5. Cartographic Enhancement of the Classification Result

To produce topographic maps from classification results, several steps must be carried out. Objects that are not part of topographic maps must be removed and topographic objects should be represented with cartographic quality. The thematic and geometric accuracies of the result should be assessed.

5.1. Removal of Non-Topographic Objects

Topographic maps contain permanent objects only. Cars, boats, people, animals, tents, trampolines, haystacks, and other non-permanent objects are not part of topographic maps. The classification results may have inhomogeneous areas representing more than one class; buildings, roads, etc. should each be represented by one class only. No-data areas may also be present in the classification result. These areas should be transferred into a proper class.

5.2. Cartographic Refinements

Very small objects like garages, oriels, sheds, and cellar entrances should be removed. A minimum size must be specified, e.g., area objects should cover at least 25 m² on the ground. This means that a simplification of the map content must take place. The degree of this generalization varies for different map types, which are characterized by the number of objects and their level of detail.
Man-made objects like buildings, walls, roads, etc. must be represented in the maps or databases by regular lines. The outlines of buildings mainly consist of orthogonal and parallel straight lines. Even small deviations from linearity, orthogonality, and parallelism are easily noticed by the map user and should therefore be avoided. These improvements constitute the cartographic refinement. The cartographic refinement of classification results has not been dealt with much in the literature. In computer-assisted direct mapping, solutions for the generation of orthogonality and parallelism at buildings were given in [13,14,15]. The corner points, however, were identified and measured by a human operator who could examine the calculated results. In the automatic generation of building outlines, the various situations in the images must be handled by software; this task is much more complex. A methodology for an automated solution was presented in [16]. A building mask is derived by means of a DSM, the corner points of buildings are detected, and line parameters are derived from them. The orientation of the buildings is then averaged for a whole district of an urban area.

5.3. Degree of Automation

Topographic maps must be compiled with a high degree of automation, although some manual work may still be necessary. The solutions may differ depending on the demands, the available resources, and the skills of the personnel. Topographic maps and databases of different content and levels of quality should be considered. Computation times may also be a matter of concern; efficient algorithms must be found and applied.

6. A New Methodology to Improve Cartographic Quality

The results of the classification may be cartographically enhanced by means of image processing and image analysis techniques. Two approaches are applied in these investigations. The first approach, named Level 1, is a simple one where the focus is on a high degree of automation, but the quality of the map in raster format is limited. The second approach, called Level 2, derives vectors and yields a higher cartographic quality for buildings, roads, and other man-made objects, but the effort becomes greater and some interaction by a human operator may be required. The proposed solution proceeds in small steps and is demonstrated by the practical tests described in Section 8.

6.1. Generation of Level 1

For the generation of Level 1 quality, a series of image manipulations is carried out. The resulting map is represented in raster format. Each class is processed separately. The extraction of a class generates a binary image, which is then processed further. Figure 1 depicts the individual processing steps as a flowchart. Details for each step are given in the following.

6.1.1. Dilation and Erosion

These morphological operations are carried out by filtering. A structuring element (SE) must be defined beforehand, e.g., a disc- or diamond-shaped figure covering an area of a few pixels. The dilation first increases the set of pixels, and the subsequent erosion reduces it again. The effect is a smoothing of the boundaries and a removal of some noise (small sets of pixels). The shape and size of the SE determine which noise is eliminated and what degree of smoothing is carried out. The selected shape (diamond) and size of the SE (5 × 5 pixels) were found in pre-tests.
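As an illustration, this step can be sketched with the EBImage package [24]; the binary image “bin” of one extracted class is an assumed variable.

# Morphological smoothing: dilation first, then erosion (closing).
library(EBImage)
se  <- makeBrush(5, shape = "diamond")   # 5 x 5 diamond-shaped SE
bin <- erode(dilate(bin, se), se)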

6.1.2. Outlines and Filling

The first manipulation filters the image using a moving rectangular window. Two parameters must be specified: the size of the window (matrix) and the offset from the average intensity within the window. The dimensions of the window determine the width of the outlines; a small window produces thin outlines, which better separate objects from each other. The thresholding offset may be a small value, e.g., 0.01. The second operation fills the whole object with pixels of intensity “1”. The areas of the objects, e.g., buildings, are then homogeneous.
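Both operations may be sketched with EBImage as follows, under the same assumptions as above; thresh() performs the moving-window filtering and fillHull() the filling.

# Outline extraction (5 x 5 moving window, offset 0.01) and filling.
outline <- thresh(bin, w = 2, h = 2, offset = 0.01)  # w, h = half-sizes
filled  <- fillHull(outline)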

6.1.3. Labeling of Objects

The connected sets of pixels with intensity “1” can now be labeled, each with its own integer. The applied algorithm performs a standard connected-component analysis. The number of objects can then be counted.

6.1.4. Computation of Features

Features of objects such as position, area, maximum radius, orientation, etc. are derived for each of the objects in the binary image (B). The formula for the area (A) of an object is:
$$ A = \sum_{i=1}^{n} \sum_{j=1}^{m} B[i,j] \qquad (1) $$
where i, j = image coordinates and n, m = width and height of the binary image of one object.
The coordinates of the center of an object (xc, yc) are calculated by:
$$ x_c = \frac{\sum_{i=1}^{n} \sum_{j=1}^{m} j \, B[i,j]}{A}, \qquad y_c = \frac{\sum_{i=1}^{n} \sum_{j=1}^{m} i \, B[i,j]}{A} \qquad (2) $$
The units are pixels. More details about these formulas can be found in [17].
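With EBImage, the labeling of Section 6.1.3 and the feature computation may be sketched as follows; computeFeatures.shape() and computeFeatures.moment() return, inter alia, the area of Equation (1) and the center coordinates of Equation (2).

# Labeling of the objects and computation of their features.
labels <- bwlabel(filled)                 # one integer label per object
n      <- max(labels)                     # number of objects
shape  <- computeFeatures.shape(labels)   # areas (s.area), radii, etc.
moment <- computeFeatures.moment(labels)  # centers (m.cx, m.cy), orientation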

6.1.5. Removal of Small Objects

Small objects can now be removed using a threshold for the area (A) or the radius of an object. The result is a generalization of the map content.
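A sketch of this generalization step, assuming the ground sampling distance “gsd” (in m) is known:

# Remove all objects whose area is below a threshold (here 25 m2).
min_area  <- 25 / gsd^2                          # threshold in pixels
small_ids <- which(shape[, "s.area"] < min_area)
labels    <- rmObjects(labels, small_ids)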

6.1.6. Display and Output

The result of three classes can quickly be displayed by means of the RGB channels; overlaps between classes can then be discovered. To show all classes in the map, the images should be plotted in colors. The sequence of plotting should follow the rule that hard objects (buildings, roads, and walls) are plotted last; overlaps with the soft classes (vegetation) are then suppressed.
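A sketch of the quick check via the RGB channels, assuming binary class images “build”, “veg”, and “imperv”:

# Three classes combined into one RGB image; overlaps between the
# classes appear as mixed colors.
check <- rgbImage(red = build, green = veg, blue = imperv)
display(check)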

6.2. Generation of Level 2

The Level 2 approach uses the results of Level 1 and improves the lines of man-made objects. The resulting map is represented in vector format. Each object must be processed individually; it is extracted from the connected-component image using its label. The 2D point cloud of each boundary line then has to be separated from the other points (pixels) of the object. The parameters of each line can now be calculated by least squares adjustment. The next step is the generation of orthogonal and parallel lines. Corner points are then calculated by intersecting successive lines. The polygons forming buildings, car ports, walls, etc. must be closed. The suggested approach is depicted by means of a flowchart for the class “building” (cf. Figure 2) and is explained in more detail in the following.

6.2.1. Extracting of Point Clouds

The boundaries of man-made objects consist of several line segments, which are approximated by straight lines. Parallel and orthogonal lines exist at buildings, walls, car ports, roads, etc. The first step is the extraction of the 2D point clouds forming the boundary lines. The separation of the lines is carried out by means of the Hough transform, which uses a voting mechanism. Each point (pixel) of the point cloud votes for several combinations of parameters; the parameter sets that receive the most votes are the winners. The lines are modeled by the Hesse normal form
$$ \rho = x \cos\theta + y \sin\theta \qquad (3) $$
where ρ = distance of the line from the origin (O) and θ = azimuth of the normal vector to the line (Figure 3). In the parameter space H(θ, ρ), the coordinates (x, y) are constants and the parameters (θ, ρ) are variables.
All points of the point cloud of a building boundary are mapped into the parameter space using combinations of θ and ρ. The cells of the parameter space are used as an accumulator, which is incremented by 1 when a point satisfies Equation (3). The highest values in the accumulator array (H) correspond to long boundary lines (cf. Figure 4). More detailed information on the Hough transform algorithm and its applications can be found in [17,18]. A minimal sketch of the voting is given below.
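In the sketch, “pts” is an assumed two-column matrix with the (x, y) pixel coordinates of one boundary, and the steps and ranges are those used in Section 8.2.6.

# Hough voting for the boundary pixels of one object.
dtheta <- 5; drho <- 5                         # steps (degrees, pixels)
theta  <- seq(0, 175, by = dtheta) * pi / 180  # 36 azimuth values
H <- matrix(0L, nrow = length(theta), ncol = 600 %/% drho + 1)
for (p in seq_len(nrow(pts))) {
  rho <- pts[p, 1] * cos(theta) + pts[p, 2] * sin(theta)  # Equation (3)
  ok  <- rho >= 0 & rho <= 600
  idx <- cbind(which(ok), round(rho[ok] / drho) + 1)
  H[idx] <- H[idx] + 1L                        # one vote per theta and point
}
peak <- which(H == max(H), arr.ind = TRUE)[1, ]  # cell of the longest line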
The related parameters (θ, ρ) are analyzed to decide which 2D point clouds must be extracted so that all lines of the building can be modeled.
The analysis of the parameter space is carried out by a subroutine in which additional information is used. Useful information includes the length of the line and the parameters of an ellipse fitted to the pixels belonging to the extracted object (building). These ellipse parameters are the coordinates of the center (xc, yc), the orientation of the major axis (θ + 90°), and the minimum radius (semi-minor axis). The lines forming the outlines of the object can then be identified, and all points close to one line, i.e., a local 2D point cloud, are extracted.

6.2.2. Calculation of Lines

The extracted 2D point clouds of building outlines may be modeled by:
$$ a_i x + y + c_i = 0 \qquad (4) $$
and the coefficients (ai and ci) are determined more accurately using adjustment procedures. A graphical output of the calculated line together with the point cloud may be used for checking the results.
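A sketch of this adjustment for one extracted point cloud (a data frame “cloud” with columns x and y, an assumed name); Equation (4) is rewritten as y = −ax − c and solved by ordinary least squares.

# Line adjustment for one 2D point cloud and graphical check.
fit <- lm(y ~ x, data = cloud)
a_i <- -coef(fit)[2]   # coefficient a_i of Equation (4)
c_i <- -coef(fit)[1]   # coefficient c_i of Equation (4)
plot(cloud$x, cloud$y, asp = 1)
abline(fit, col = "red")

Note that this formulation becomes ill-conditioned for nearly vertical lines; the approach based on the Hesse normal form (Section 6.2.6) avoids this problem.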

6.2.3. Generation of Orthogonal and Parallel Lines

The generated boundary lines are straight but not yet orthogonal and parallel to each other; the generation of orthogonality and parallelism is described in the following. We use a building with four corners as an example (cf. Figure 5).
At first, preliminary coordinates of the corner points (Pk) are obtained by intersection of two consecutive lines (li and li+1).
$$ x_{P_{k+1}} = \frac{c_{i+1} - c_i}{a_i - a_{i+1}} \qquad (5) $$
$$ y_{P_{k+1}} = \frac{c_i a_{i+1} - c_{i+1} a_i}{a_i - a_{i+1}} \qquad (6) $$
The final coordinates of the corner points (Pk) are derived in two steps. First, the slope value (a) is found by a weighted average
$$ a = \frac{\sum_{i=1}^{n} w_i a_i}{\sum_{i=1}^{n} w_i} \qquad (7) $$
where n is the number of lines and wi is a weight. The weight is the number of extracted points of one line, which is proportional to the length of the line. The slope of the lines orthogonal to the main direction of the building is given by
$$ a_{\mathrm{orthogonal}} = -\frac{1}{a} \qquad (8) $$
Such values must be converted before the averaging. A threshold for the tolerated deviation from orthogonality and parallelism is also used.
The second step calculates the ci-values of Equations (5) and (6) by least squares adjustment. These equations are linear and the adjustment is, therefore, very simple. The unknowns (vector x) are then obtained by:
$$ A\,x = b + r \qquad (9) $$
The design matrix (A) and the vectors (x, b, r) have the following designations:
$$ \begin{pmatrix} -k_2 & +k_2 & 0 & 0 \\ -k_1 & -k_3 & 0 & 0 \\ 0 & +k_2 & -k_2 & 0 \\ 0 & -k_3 & -k_1 & 0 \\ 0 & 0 & -k_2 & +k_2 \\ 0 & 0 & -k_1 & -k_3 \\ -k_2 & 0 & 0 & +k_2 \\ -k_1 & 0 & 0 & -k_3 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{pmatrix} = \begin{pmatrix} x_2 \\ y_2 \\ x_3 \\ y_3 \\ x_4 \\ y_4 \\ x_1 \\ y_1 \end{pmatrix} + \begin{pmatrix} r_{x_2} \\ r_{y_2} \\ r_{x_3} \\ r_{y_3} \\ r_{x_4} \\ r_{y_4} \\ r_{x_1} \\ r_{y_1} \end{pmatrix} \qquad (10) $$
The matrix elements k1, k2, and k3 are calculated by:
$$ k_1 = \frac{1}{1+a^2} \qquad (11) $$
$$ k_2 = \frac{a}{1+a^2} \qquad (12) $$
$$ k_3 = \frac{a^2}{1+a^2} \qquad (13) $$
If there are more than four lines in the building, the matrix and the vectors are extended following the same pattern. The unknown ci-values are found by:
$$ x = (A^T A)^{-1} A^T b \qquad (14) $$
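A sketch of this adjustment for the four-corner example, assuming the averaged slope “a” of Equation (7) and a vector “b” of the preliminary corner coordinates (x2, y2, x3, y3, x4, y4, x1, y1) from Equations (5) and (6):

# Adjustment of the c-values (Equations (10)-(14)).
k1 <- 1 / (1 + a^2); k2 <- a / (1 + a^2); k3 <- a^2 / (1 + a^2)
A  <- matrix(c(-k2, +k2,   0,   0,
               -k1, -k3,   0,   0,
                 0, +k2, -k2,   0,
                 0, -k3, -k1,   0,
                 0,   0, -k2, +k2,
                 0,   0, -k1, -k3,
               -k2,   0,   0, +k2,
               -k1,   0,   0, -k3), nrow = 8, byrow = TRUE)
x <- solve(t(A) %*% A, t(A) %*% b)   # Equation (14): adjusted c1...c4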

6.2.4. Calculation of Corner Points

The adjusted coordinates of the corner points (p) are calculated by:
$$ p = A\,x \qquad (15) $$
Equation (14) can be extended by a weight matrix
$$ W = \mathrm{diag}(w_1, w_2, \ldots, w_n) \qquad (16) $$
and the unknowns (ci) are then derived by:
$$ x = (A^T W A)^{-1} A^T W b \qquad (17) $$
The adjustment by a least squares procedure can also derive accuracy values. The estimated residuals are obtained by:
$$ r = p - b \qquad (18) $$
from which the variance factor and the covariance matrix for the corner coordinates (ΣP) are derived by:
$$ \sigma_0^2 = \frac{r^T W r}{n - u} \qquad (19) $$
and
$$ \Sigma_P = \sigma_0^2 \, A (A^T W A)^{-1} A^T \qquad (20) $$
where n = number of coordinates and u = number of unknowns. In the example, n = 8 and u = 4.
The accuracy of the corner coordinates obtained from the covariance matrix is an interior accuracy only. For the assessment of the exterior accuracy, accurate reference values are needed.

6.2.5. Closing of Polygons

The derived polygon should be closed. This is achieved by repeating the first point in the list of points used for plotting.

6.2.6. Generation of Polylines

The procedure described above is relatively simple to realize and is suitable for the outlines of buildings. Other objects of the topographic database, e.g., impervious surfaces (roads and parking lots) and vegetated areas (grass, bushes, and trees), are composed of polylines. The segments may be straight or curved, and the polylines can have an open end or be closed. For these irregular lines, another approach may be applied. The formulas are based on the Hesse normal form of a straight line.
To derive accurate values for the line parameters (θ, ρ) from the cloud of points P(xi, yi), Equation (3) is rewritten as
$$ x_i = \frac{\rho}{\cos\theta} - y_i \tan\theta, \qquad y_i = \frac{\rho}{\sin\theta} - \frac{x_i}{\tan\theta} \qquad (21) $$
and these observation equations are then linearized:
$$ \frac{\partial x_i}{\partial \theta} = \frac{\rho \tan\theta}{\cos\theta} - \frac{y_i}{\cos^2\theta}, \qquad \frac{\partial x_i}{\partial \rho} = \frac{1}{\cos\theta} $$
$$ \frac{\partial y_i}{\partial \theta} = -\frac{\rho}{\sin\theta \tan\theta} + \frac{x_i}{\sin^2\theta}, \qquad \frac{\partial y_i}{\partial \rho} = \frac{1}{\sin\theta} \qquad (22) $$
Approximations for the line parameters (θ°, ρ°) are taken from the Hough matrix (cf. Section 6.2.1). Corrections (dθ, dρ) to the approximate values are obtained by least squares adjustment using Equation (23).
$$ \begin{pmatrix} \partial x_1/\partial\theta & \partial x_1/\partial\rho \\ \partial y_1/\partial\theta & \partial y_1/\partial\rho \\ \vdots & \vdots \end{pmatrix} \begin{pmatrix} d\theta \\ d\rho \end{pmatrix} = \begin{pmatrix} x_1 - x_1^{\circ} \\ y_1 - y_1^{\circ} \\ \vdots \end{pmatrix} + \begin{pmatrix} r_{x_1} \\ r_{y_1} \\ \vdots \end{pmatrix} \qquad (23) $$
Using the designations of the general equation of least squares adjustment (cf. Equation (9)), the matrix on the left side is the design matrix (A); it is multiplied by the vector of the unknowns (x). On the right side of Equation (23) is the sum of two vectors, the reduced observations (b) and the corrections (r). The values $x_i^{\circ}$ and $y_i^{\circ}$ are calculated for each point by means of Equations (21) using the approximations of the line parameters. Improved line parameters are then obtained by Equation (24).
$$ \theta = \theta^{\circ} + d\theta, \qquad \rho = \rho^{\circ} + d\rho \qquad (24) $$
If the approximate values of the line parameters are too coarse, iterations may become necessary. The calculation of the corner points (Pk) is then based on Equation (3), from which Equations (25) and (26) are obtained.
$$ x_{P_{k+1}} = q_1 \rho_i + q_2 \rho_{i+1}, \qquad y_{P_{k+1}} = q_3 \rho_i + q_4 \rho_{i+1} \qquad (25) $$
where
$$ q_1 = \frac{1}{\cos\theta_i} - \frac{\tan\theta_i}{\sin\theta_i - \cos\theta_i \tan\theta_{i+1}}, \qquad q_2 = \frac{\tan\theta_i}{\cos\theta_{i+1}\tan\theta_i - \sin\theta_{i+1}} $$
$$ q_3 = \frac{1}{\sin\theta_i - \cos\theta_i \tan\theta_{i+1}}, \qquad q_4 = -\frac{1}{\cos\theta_{i+1}\tan\theta_i - \sin\theta_{i+1}} \qquad (26) $$
In the example depicted in Figure 6, the closed polygon has five vertices (Pk). The segments of the polygon are straight lines (li) which are defined by the Hesse parameters (θ, ρ).
The line segments may also consist of circular arcs or splines. In this case, the least squares adjustment of the line parameters may be avoided.
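The refinement of the line parameters may be sketched as follows; the point cloud “cloud” and the Hough approximations “theta0” (in radians, not close to 0 or π/2) and “rho0” are assumed names, and one Gauss-Newton step of Equations (21)-(24) is shown.

# One least squares step for the line parameters (theta, rho).
x0 <- rho0 / cos(theta0) - cloud$y * tan(theta0)        # Equations (21)
y0 <- rho0 / sin(theta0) - cloud$x / tan(theta0)
A  <- rbind(
  cbind(rho0 * tan(theta0) / cos(theta0) - cloud$y / cos(theta0)^2,
        1 / cos(theta0)),                               # Equations (22)
  cbind(-rho0 / (sin(theta0) * tan(theta0)) + cloud$x / sin(theta0)^2,
        1 / sin(theta0)))
b  <- c(cloud$x - x0, cloud$y - y0)                     # reduced observations
d  <- solve(t(A) %*% A, t(A) %*% b)                     # Equation (23)
theta <- theta0 + d[1]; rho <- rho0 + d[2]              # Equation (24)

# A vertex of the polyline, equivalent to Equations (25) and (26):
# intersection of the Hesse normal forms of lines i and i+1
# (th_i, rho_i, th_j, rho_j are assumed adjusted parameters).
corner <- solve(matrix(c(cos(th_i), sin(th_i),
                         cos(th_j), sin(th_j)), 2, 2, byrow = TRUE),
                c(rho_i, rho_j))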

7. Assessment of the Thematic and Geometric Accuracy

The assessment of the accuracies is carried out separately for the results of the classification, the enhanced map of Level 1, and the geometric accuracy of the enhanced map of Level 2. The assessment of the thematic accuracy requires accurate and reliable reference values for a sample. The sample should be independent of the training areas. The sample size should be big enough to obtain small confidence intervals for the accuracy measures. It would be optimal to have a reference value for each unit of the map, but this is a costly approach. In practical work, a small sample is used. It must be taken randomly from the compiled map, and true values must be determined at these positions. The number of sample units is calculated from the likelihood ratio test (LRT) confidence interval [19].

7.1. Assessment of the Thematic Accuracy (Classification Results)

The applied accuracy measures are the error matrix, overall accuracy, and user’s and producer’s accuracy. They are based on pixels. The formulas and definitions are given in [20]. To inform about the reliability of the calculated accuracy measures, confidence intervals are also calculated. Other accuracy measures are completeness (recall), correctness (precision), and the F1 score [21]. Completeness corresponds to producer’s accuracy.

7.2. Assessment of the Thematic Accuracy (Enhanced Maps)

The assessment of the thematic accuracy by the mentioned measures can also be carried out for the enhanced map of Level 1. The assessment of the results of Level 1 may also use an accuracy measure that is based on objects. The number of objects in the scene is then compared with the detected and mapped ones.

7.3. Assessment of the Geometric Accuracy

The assessment of the geometric accuracy is carried out by means of well-defined points, e.g., the corner points of buildings. They are measured on the ortho-image. The accuracy measures root mean square error (RMSE), mean (µ), and standard deviation (σ) are calculated for each of the buildings. The average of all mean values is the displacement of the enhanced map with respect to the reference. The comparison of the two data sets requires that an equal number of corner points exists, which may not always be the case. In [22], a metric is proposed that evaluates the differences between reference polygons and derived line segments. This so-called PoLiS metric calculates orthogonal distances of vertices to line segments. We prefer RMSE, µ, and σ as measures for simplicity and because they are standard in topographic mapping.
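A sketch of these measures, assuming vectors “dx” and “dy” of coordinate differences (derived minus reference) at the check points:

# Accuracy measures for the x-coordinate (y analogously).
rmse_x  <- sqrt(mean(dx^2))  # root mean square error
mu_x    <- mean(dx)          # systematic shift
sigma_x <- sd(dx)            # standard deviation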

8. Examples of Cartographic Enhancements

Two examples are used to test the feasibility of the proposed methods. The data sets differ with regard to the type of urban area, the ground sampling distance, and the camera used.

8.1. Example 1

The data of Example 1 are part of the ISPRS “2D semantic labeling contest” [21]. The selected test site is a city area in Germany where high buildings are close to each other. Trees, bushes, and grass areas are situated between the buildings. Many cars are on the roads and parking lots.

8.1.1. Description of Source Data

The original imagery was taken by a large-format photogrammetric camera (Zeiss DMC). The images have four bands (R, G, B, and NIR) and a very high spatial resolution (GSD = 0.09 m). The exposures occurred in sunshine, which resulted in long shadows beside elevated objects. A digital surface model (DSM), a normalized digital surface model (nDSM), a false-color ortho-image (cf. Figure 7), and a reference map were derived from the images by the organizers of the test. The reference map was produced manually by means of the DSM-based ortho-image and consists of five major urban land cover classes (“impervious surface”, “building”, “low vegetation”, “tree”, and “car”). The selected test site covers 4 ha.

8.1.2. Classification

The classification starts with the training of the classifier. The formula chosen for modeling the classes uses five variables (attributes):
ref1 ~ ndsm + ndvi + sdZ_5 + b1 + sdb1_5
where ref1 = reference classes of Example 1, ndsm = normalized digital surface model (dZ-value), ndvi = normalized difference vegetation index, sdZ_5 = standard deviation of the Z-values (elevations), b1 = intensity of the near-infrared channel (band 1) of the “true” ortho-image, and sdb1_5 = standard deviation of the intensities of band 1 (b1). The formula is written in the notation for statistical models [23].
The ndsm-attribute is the height above ground and is calculated by:
$$ \mathrm{ndsm} = \mathrm{DSM} - \mathrm{DTM} \qquad (28) $$
The ndvi is derived from the intensities in the NIR-band and the Red-band by Equation (29):
$$ \mathrm{ndvi} = \frac{I_{NIR} - I_R}{I_{NIR} + I_R} \qquad (29) $$
where $I_{NIR}$ = intensity in the NIR-band and $I_R$ = intensity in the R-band. The units of ndsm are meters (m); $I_{NIR}$ and $I_R$ are digital numbers (DN) in the range 0–255.
The standard deviations of the Z-values (sdZ_5) and of the infrared band (sdb1_5) were calculated in a 5 × 5 pixel neighborhood of the digital elevation model and of spectral band 1, respectively. The decision tree (Figure 8) was trained by means of an adjacent map comprising 2995 × 1783 pixels (or 4.3 ha) which contains all six classes. The class “clutter/background” consists mainly of water (river) in this area.
The variables b1 and sdb1_5 did not contribute to the result of the classification and are, therefore, not contained in the nodes of the derived DT.
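The attribute computation may be sketched as follows; “dsm”, “dtm”, “nir”, and “red” are assumed matrices of equal size, and the focal() function of the raster package is one possible way to obtain the standard deviations in a 5 × 5 neighborhood.

# Attributes for the classification (Equations (28) and (29)).
ndsm <- dsm - dtm                    # height above ground (m)
ndvi <- (nir - red) / (nir + red)    # vegetation index

# Standard deviation of the elevations in a 5 x 5 pixel neighborhood.
library(raster)
sdZ_5 <- as.matrix(focal(raster(dsm), w = matrix(1, 5, 5), fun = sd))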

8.1.3. Results

The generated land cover map is depicted in Figure 9. A visual comparison with the ortho-image reveals that the important classes “building” and “impervious surface” are well detected. Nevertheless, their areas are not homogeneous and their borderlines are not smooth. A cartographic enhancement is necessary to produce topographic map data in raster and vector format. First, we assess the thematic accuracy of the classification.
The assessment of the thematic accuracy of the produced land cover map used a complete map derived by manual work. It contains 4.93 million units (pixels), which are coded by the names of the classes. The objects in this reference map have a defined boundary. The reference is completely independent of the training area. The derived accuracy measures are contained in Table 1. The user’s accuracies of the classes “impervious surface” and “building” are above 80%. The classes “low vegetation” and “tree” are less accurate (59% and 74%, respectively). The class “car” reaches 6% and the class “clutter/background” 0% only; these two classes could not be determined at all.
The overall accuracy is 64.3% (95% CI: 64.3%–64.3%). The calculated confidence interval (CI) is very narrow due to the large number of checkpoints. The CI values for the user’s and producer’s accuracies are therefore not given.
To evaluate the achieved accuracy, a comparison with the results of the training area was performed as well (cf. Table 2). The assessment of the training area was carried out with a thematic map consisting of 4.83 million pixels. The user’s accuracy of the class “car” is also poor (7%); the class “clutter/background”, however, reaches 79%. The overall accuracy is 69.3%. It should be mentioned that the classes “car” and “clutter/background” are not topographic objects and have therefore been neglected in the following cartographic enhancement.

8.1.4. Cartographic Enhancement

The result of the cartographic enhancement (Level 1) is depicted in Figure 10. It was carried out following the procedures described in Section 6.1. For the generation of Level 1 quality, the program package “EBImage” was applied [24]. The morphological operations used a diamond-shaped SE of 5 × 5 pixels. When generating the outlines of buildings, the selected parameters were 2 × 2 units corresponding to a 5 × 5 pixel moving window and an offset of 0.01 from the average intensity.
The minimum area of a building to be mapped was assumed to be 25 m², and for areas of the class “low vegetation” 21 m². The areas of the class “impervious surface” used a radius of 2 m as threshold and the areas of the class “tree” a radius of 4 m. All objects smaller than these thresholds were removed. This generalization produces some areas of no data, which should be filled again. In this way, the cartographic quality can be further improved.

8.1.5. Thematic Accuracy of the Enhanced Map

The assessment of the enhanced map used the same reference as the assessment of the classification result (4.93 million pixels). The thematic accuracy of the topographic objects is contained in Table 3. The user’s accuracy of the class “building” improved by 2%, but the user’s accuracy of the class “impervious surface” deteriorated by 4%. The same user’s accuracies were obtained for the two vegetation classes (“tree”, “low vegetation”). Practically, the accuracies of the four topographic classes remained the same. The overall accuracy of the enhanced map (derived from the four topographic classes) is also nearly the same (62.7% and 64.0%, respectively). The threshold for the class “tree” was set to a radius of 4 m, which removed all trees smaller than this threshold. The producer’s accuracy (or thematic completeness) is therefore low for the class “tree” (45%), but high for the class “building” (82%). The use of other thresholds and/or attributes may improve the results. This enhancement process can be automated.

8.1.6. Geometric Accuracy of the Enhanced Map (Level 2)

The generation of vector data for buildings was carried out as described in Section 6.2. The result of vectorizing the building outlines of a part of the test site is depicted in Figure 11. The buildings consist of a varying number of corners, and the outlines are not all parallel and orthogonal to each other. Building 41 includes a side which is nearly parallel to the vertical axis of the coordinate system; in this case, the procedure described in Section 6.2.6 had to be applied.
The corner points are well defined and can be used for the assessment of the geometric accuracy. The coordinate errors calculated from Equations (19) and (20) are small (1.0 pixel or 0.09 m). This is, however, an interior accuracy only. The root mean square errors derived from reference values are absolute errors (cf. Table 4). The averages of all RMSEx and RMSEy values are 1.2 m and 0.7 m, respectively, when the manually derived map (GT) is used as reference. Reference values were also derived by digitizing the corner points of buildings on top of the DSM-based ortho-image; the results are about the same. Altogether, 31 corner points were checked. The averages of the standard deviations (σx, σy) are about the same as the RMSE values, which indicates that the systematic shifts (µx, µy) of the coordinates with respect to the reference are very small. The results may be improved when a 2D transformation is applied; besides the shifts, a rotation and two scale factors are then corrected.

8.2. Example 2

The source data of Example 2 are part of the demo data of the Leica RCD30 camera. The selected test site is a suburban area in Switzerland where residential houses are well separated from each other. Trees, bushes, and grass areas are situated between the houses. The selected test site covers 1.4 ha. The land cover map to be generated should have the following six classes: “building”, “road and parking lot”, “wall and car port”, “tree”, “hedge and bush”, and “grass”.

8.2.1. Description of Source Data

The original imagery was taken by a medium-format metric camera (RCD30). The images have four bands (R, G, B, and NIR) and a very high spatial resolution (GSD = 0.05 m). The exposures occurred in sunshine, which resulted in shadows beside elevated objects. A digital surface model (DSM), a normalized digital surface model (nDSM), and a false-color ortho-image were derived from the provided images. Manual editing of the DSM and nDSM was carried out in order to use optimal source data. The derived DSM and DTM have a spacing of 0.25 m.

8.2.2. Classification

The classification starts with the training of the classifier. The formula chosen for modeling the classes (in the notation for statistical models) uses two variables only:
ref2 ~ ndsm + ndvi
where ref2 = reference classes of Example 2, ndsm = normalized digital surface model (dZ-value), ndvi = normalized difference vegetation index.
The ndsm attribute is the height above ground and is calculated by Equation (28); the ndvi is derived from the intensities of the NIR-band and the Red-band according to Equation (29). The decision tree (cf. Figure 12) was trained by means of a few areas for each class. These training areas were digitized on top of the false-color ortho-image. Altogether, 17,449 DSM cells together with their two attributes (ndvi, ndsm) were extracted and used for the training of the classifier.
By means of the derived decision tree, a land cover map could be generated; a class is assigned to each point (cell) of the DSM. Figure 13 depicts the generated land cover map.

8.2.3. Results

A visual check reveals that the class “building” is distinctly extracted, while the class “wall and car port” is less visible. Misclassifications can be noticed in the classes of large area (“building”, “road and parking lot”). White areas are gaps in the data. The generated land cover map is a raw result; an enhancement is necessary to achieve cartographic quality.
The independent sample comprises 546 units (DTM cells). They were randomly extracted from the derived land cover map, and their true class was determined at their spatial position (easting, northing, and elevation) by stereo-observation in the oriented pair of false-color images. This approach results in an independent and reliable reference. The derived user’s accuracies are contained in Table 5. The classes “building” and “road and parking lot” (impervious surfaces) reach very high user’s accuracies (99% and 90%, respectively). The classes “grass”, “hedge and bush”, and “tree” are also well determined (81%, 78%, and 78%, respectively). The accuracy of the class “wall and car port” is only 26%. The 95% CI is wider than in Example 1 due to the relatively small number of checkpoints (91 per class).
More details on the generation and assessment of this land cover map are given in [12]. We focus now on the main topic, the cartographic enhancement of the classification result.

8.2.4. Cartographic Enhancement

The processing for Level 1 was carried out as described in Section 6.1. Each of the six classes was treated separately. The enhancement of the class “building” used a diamond-shaped SE of 5 × 5 pixels for the morphological operators. The boundaries of all areas of the class “building” were obtained by adaptive thresholding using a window of 5 × 5 pixels and an offset of 0.01. Objects with an area of less than 70 m² were removed. The other classes were processed in a similar way. The result of the cartographic enhancement (Level 1) is depicted in Figure 14. The areas of the six classes are homogeneous. The outlines of the objects, however, are not smooth or straight. Visible errors occur in areas without nDSM data (white color). The thresholding has removed small objects (trees and bushes).

8.2.5. Geometric Accuracy of Buildings (Level 2)

The enhanced map in vector format (Level 2 map) was produced as described in Section 6.2. The corner points of 14 buildings were determined and plotted with connecting straight lines (cf. Figure 15). This digital 2D map may be plotted at different scales and in different map projections. A high cartographic quality is thereby achieved for the class “building”. The houses are also plotted on top of the ortho-image (cf. Figure 16), which allows a visual check of the geometric accuracy of the derived houses.
To assess the geometric accuracy, an accurate reference is required. For this purpose, the corner points were identified on the ortho-image and digitized by means of a cursor. Sixty-two corners were measured and compared with the reference values. The resulting root mean square errors (RMSE) and standard deviations (σ) are contained in Table 6.
A sub-meter accuracy has been achieved for both coordinates (easting and northing). The results may be improved when a 2D transformation is applied. The obtained accuracy is related to the ortho-image, which in this example is based on a DTM.

8.2.6. Other Topographic Objects

Experience with other topographic objects in vector format (Level 2) is limited. Walls and hedges may also be represented by orthogonal and parallel lines. Objects with irregular outlines (roads, parking lots, and vegetated areas) are processed as described in Section 6.2.6. The map of the class “road and parking lot” is depicted in Figure 17. The segments are relatively small. In general, it is difficult to detect and extract small lines and to define their sequence in the polygon automatically. A simultaneous graphical output and some interaction will support error-free solutions. In the Hough transform, steps of Δθ = 5° and Δρ = 5 pixels and ranges of 0°–175° and 0–600 pixels were applied. These ranges result in a Hough matrix of 36 × 121 = 4356 accumulator cells.

9. Discussion

The method applied in Example 1 used pixels of the derived ortho-image as units. The thematic map used for training contained six classes but was not identical to the area to be classified. The distribution of the areas of the produced land cover map was therefore different from the distribution in the training area. The obtained user’s accuracy is good for the classes “building” (88%) and “impervious surface” (84%). The detection of cars is poor with the selected approach; it was, however, not a goal of this investigation to map non-topographic objects. The quality of the input data and of the reference data is important for good results and should be tested. For example, the elevations (Z-values) are of lower accuracy at the boundaries of buildings. These areas could have been ignored in the assessment; the obtainable accuracies would then certainly be higher. The cartographic enhancement improves the quality of the land cover map. The user’s accuracy is about the same for all topographic classes (“building”, “impervious surface”, “tree”, and “low vegetation”). The producer’s accuracy (completeness) improved by 4% for the class “building”, but decreased by 20% for the class “tree”. The use of other thresholds and/or attributes may improve the results further. The applied classifier (DT) could easily handle the relatively large amount of data. The processing and plotting of all classes in map-like colors required high processing times. The geometric accuracy derived from 31 corners of buildings was RMSEx = 1.2 m and RMSEy = 0.7 m.
In Example 2, the applied method utilized cells of the derived elevation model as units. Two attributes, NDVI and nDSM, were derived from the images and used in the supervised classification. The refinement of the classification result (Level 1) revealed some shortcomings, which are due to the editing of the DSM and the applied thresholding. The test of the geometric accuracy by means of 62 well-defined points (building corners) revealed RMSEx = 0.8 m and RMSEy = 1.0 m. This is again a good result, which would allow 2D large-scale mapping. For example, the standards for digital geospatial data in the USA require a positional accuracy of RMSEx = RMSEy = 1.0 m for map scales in the range 1:2000 to 1:4000 [25].
The enhancements were carried out separately for each class because the form of buildings is different from the form of parking lots. This separate processing may not be required when general solutions are implemented. More tests are necessary to generate Level 2 data automatically for urban areas with numerous classes.
The size of the area also has an influence on the thematic accuracy. Urban areas change their character from downtown to the suburbs or to the industrial quarters; more classes and more training data are then required. In Example 1, the training area was taken from a very different area (patch); the achieved overall accuracy of the test site was then 5% less than the overall accuracy of the training area.

10. Conclusions

The goal of these investigations was to automatically produce enhanced map data of urban areas in raster and vector format. Only imagery of metric aerial cameras was used as input data. Accurate heights and true vegetation indices were derived from the overlapping multispectral images. The applied classification method (DT) used very few attributes. The proposed refinement of the classification results produced raster and vector data. Raster data with a simple object catalog (as in the two examples) can be produced with a high level of automation. The automatic generation of topographic map data in vector format is more difficult to accomplish. The proposed method generates straight lines by means of the Hough transform followed by least squares adjustment; orthogonal and parallel lines are generated by new approaches. Universal algorithms and tests are necessary to obtain 2D vector data of driveways, walls, and other topographic objects fully automatically. A sub-meter accuracy was obtained for well-defined points of buildings. The obtained results in terms of cartographic quality and geometric accuracy are very promising.

Acknowledgments

The author thanks the ISPRS WG III/4 and Leica Geosystems for providing test data. Special thanks are given to the editors and the anonymous reviewers for their valuable suggestions.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Konecny, G.; Breitkopf, U.; Radke, A. The status of the topographic mapping in the world—A UN-GGIM/ISPRS Project 2012–2015. Z. Vermess. 2016, 141, 20–26.
  2. El-Ashmawy, K.L.A. Testing the positional accuracy of OpenStreetMap data for mapping applications. Geod. Cartogr. 2016, 42, 25–30.
  3. Gross, H.; Thoennessen, U. Extraction of lines from laser point clouds. Proc. ISPRS 2006, 36, 86–91.
  4. Sampath, A.; Shan, J. Building boundary tracing and regularization from airborne LiDAR point clouds. Photogramm. Eng. Remote Sens. 2007, 73, 805–812.
  5. Awrangjeb, M.; Ravanbakhsh, M.; Fraser, C. Automatic detection of residential buildings using LIDAR data and multispectral imagery. ISPRS J. Photogramm. Remote Sens. 2010, 65, 457–467.
  6. Höhle, J. From classification results to topographic maps. In Proceedings of the GEOBIA 2016: Solutions and Synergies, Enschede, The Netherlands, 14–16 September 2016.
  7. Dalla Mura, M.; Benediktsson, J.A.; Waske, B.; Bruzzone, L. Morphological attribute profiles for the analysis of very high resolution images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3747–3762.
  8. Li, H.; Zhong, C.; Hu, X.; Xiao, L.; Huang, X. New methodologies for precise building boundary extraction from LiDAR data and high resolution image. Sens. Rev. 2013, 33, 157–165.
  9. Niemeyer, J.; Rottensteiner, F.; Soergel, U. Contextual classification of lidar data and building object detection in urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 87, 152–165.
  10. Breiman, L.; Friedman, J.; Stone, C.J.; Olshen, R.A. Classification and Regression Trees; CRC Press: Boca Raton, FL, USA, 1984.
  11. Friedl, M.A.; Brodley, C.E. Decision tree classification of land cover from remotely sensed data. Remote Sens. Environ. 1997, 61, 399–409.
  12. Höhle, J. Generation of 2D land cover maps for urban areas using decision tree classification. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, II-7, 15–21.
  13. Höhle, J.; Jacob, A. New instrumentation for direct photogrammetric mapping. Photogramm. Eng. Remote Sens. 1981, 47, 761–767.
  14. Schenk, T. Ausgleichung von Rechtwinkelzügen. Bildm. Luft. 1986, 54, 155–165.
  15. Kraus, K. Photogrammetrie, Band 3—Topographische Informationssysteme; Dümmler: Köln, Germany, 2000.
  16. Li, Y.; Zhu, L.; Shimamura, K.; Tachibana, K. A refining method for building object aggregation and footprint modelling using multi-source data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXIX-B3, 41–46.
  17. Jain, R.; Kasturi, R.; Schunck, B.G. Machine Vision; McGraw-Hill: New York, NY, USA, 1995.
  18. Illingworth, J.; Kittler, J. A survey of the Hough transform. Comput. Vis. Graph. Image Process. 1988, 44, 87–116.
  19. Young, G.A.; Smith, R.L. Essentials of Statistical Inference; Cambridge University Press: Cambridge, UK, 2005.
  20. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2008.
  21. ISPRS WG III/4. 2D Semantic Labeling Contest. 2014. Available online: http://www2.isprs.org/commissions/comm3/wg4/semantic-labeling.html (accessed on 14 December 2016).
  22. Avbelj, J.; Müller, R.; Bamler, R. A metric for polygon comparison and building extraction evaluation. IEEE Geosci. Remote Sens. Lett. 2015, 12, 170–174.
  23. Chambers, J.; Hastie, T. Statistical Models in S; CRC Press: Boca Raton, FL, USA, 1991.
  24. Pau, G.; Sklyar, O.; Huber, W. Introduction to EBImage—An Image Processing and Analysis Toolkit for R. 2013. Available online: http://www.bioconductor.org/packages/release/bioc/html/EBImage.html (accessed on 14 December 2016).
  25. ASPRS. ASPRS positional accuracy standards for digital geospatial data. Photogramm. Eng. Remote Sens. 2015, 81, A1–A26.
Figure 1. Steps in cartographic enhancement of land cover maps (Level 1).
Figure 2. Steps in the cartographic enhancement (Level 2), illustrated for the class “building”.
Figure 3. Parameters θ and ρ of a building outline in the xy-system.
Figure 4. Display of the parameter space (H) of the Hough transform. The pixels with the highest intensities correspond to the parameters (θ, ρ) of the straight lines of the unknown polygon.
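The peak extraction illustrated in Figures 3 and 4 can be sketched in code. The following is a minimal sketch only, assuming a binary edge mask of a single building and using Python with scikit-image as a stand-in for the article’s own tool chain; the helper building_line_parameters is hypothetical.

```python
# Minimal sketch: extract the (theta, rho) parameters of the dominant
# straight lines of a building outline from a binary edge mask, cf.
# Figures 3 and 4. scikit-image is an assumption, not the article's tool.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def building_line_parameters(edge_mask, num_lines=5):
    angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
    h, theta, rho = hough_line(edge_mask, theta=angles)
    # The brightest cells of the accumulator H (Figure 4) hold the
    # parameters (theta, rho) of the strongest straight lines.
    _, peak_theta, peak_rho = hough_line_peaks(h, theta, rho,
                                               num_peaks=num_lines)
    return list(zip(peak_theta, peak_rho))
```

Each returned pair (θ, ρ) defines one candidate outline x·cos θ + y·sin θ = ρ; the subsequent least squares adjustment refines these parameters.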
Figure 5. Sketch of building outlines and corner points.
Figure 6. Closed polyline with five vertices (Pk) and straight line segments (li).
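The vertices Pk of the closed polyline in Figure 6 are obtained where adjacent straight line segments li intersect. A minimal sketch, assuming each line is given in the Hesse normal form x·cos θ + y·sin θ = ρ of Figure 3 (the helper intersect is hypothetical):

```python
# Minimal sketch: corner point P_k as the intersection of two adjacent
# lines l_i given as x*cos(theta) + y*sin(theta) = rho (Figure 3).
import numpy as np

def intersect(theta1, rho1, theta2, rho2):
    a = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    # Solving the 2x2 system yields the vertex; for parallel segments
    # (theta1 == theta2) the matrix is singular and no vertex exists.
    return np.linalg.solve(a, np.array([rho1, rho2]))  # -> (x, y)
```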
Figure 7. DSM-based ortho-image (false-color).
Figure 8. Decision tree derived from an existing land cover map. (b = “building”, s = “impervious surface”, c = “car”, m = “clutter/background”, v = “low vegetation”, t = “tree”; ndvi = normalized difference vegetation index (DN), ndsm = normalized digital surface model (m), sdZ5 = standard deviation of elevations (Z-values) in the 5 × 5 pixels’ surroundings (m)).
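A tree such as the one in Figure 8 can be learned from labelled pixels carrying the attributes named in the caption. The following is a sketch under stated assumptions: scikit-learn’s CART-style classifier stands in for the article’s decision tree method [10], and the few attribute rows are invented toy values, not the thresholds of Figure 8.

```python
# Sketch: learn a decision tree from pixel attributes (ndvi, ndsm, sdZ5)
# and class labels, cf. Figure 8. The training rows are toy values only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X_train = np.array([[0.60, 0.1, 0.05],   # v = low vegetation
                    [0.70, 8.0, 2.50],   # t = tree
                    [0.10, 6.0, 0.30],   # b = building
                    [0.05, 0.0, 0.10]])  # s = impervious surface
y_train = np.array(["v", "t", "b", "s"])

clf = DecisionTreeClassifier(criterion="gini")
clf.fit(X_train, y_train)
# Classify a new pixel by its three attributes:
print(clf.predict([[0.65, 7.5, 2.0]]))
```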
Figure 9. Classification result (red = “building”, dark green = “tree”, green = “low vegetation”, grey = “impervious surface”, yellow = “car”, blue = “clutter”).
Figure 10. Enhanced land cover map (Level 1). (red = “building”, dark green = “tree”, green = “low vegetation”, grey = “impervious surface”, white = “no data”; units of the image coordinates are pixels).
Figure 11. Enhancement of class “building” (Level 2). The units of the coordinate axes are pixels.
Figure 12. Decision tree derived from training areas (b = “building”, t = “tree”, g = “grass”, h = “hedge and bush”, r = “road and parking lot”, w = “wall and car port”; ndvi = normalized difference vegetation index; ndsm = normalized digital surface model). Taken from [12].
Figure 13. Generated land cover map (classification result). Six classes are coded by colors (red = “building”, light green = “grass”, dark green = “tree”, green = “hedge and bush”, grey = “road and parking lot”, orange = “wall and car port”). Taken from [12].
Figure 14. Enhanced land cover map of six classes (Level 1). (red = “building”, light green = “grass”, dark green = “tree”, green = “hedge and bush”, grey = “road and parking lot”, orange = “wall and car port”; the numbers below the map represent meters). Adapted from [12].
Figure 15. Enhanced vector map of class “building” (Level 2).
Figure 16. False-color ortho-image together with the enhanced outlines of buildings.
Figure 17. Enhanced vector map of class “road and parking lot” (Level 2).
Table 1. User’s accuracy (uacc) and producer’s accuracy (pacc) of the test site.

Class                 Uacc (%)   Pacc (%)
impervious surface        84         52
building                  88         78
low vegetation            59         57
tree                      74         65
car                       68          4
clutter/background         0          0
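The measures in Tables 1–3 follow the usual definitions [20]: user’s accuracy relates the correctly classified pixels of a class to all pixels mapped to that class, producer’s accuracy to all reference pixels of that class. A minimal sketch, assuming a confusion matrix with map labels in the rows and reference labels in the columns:

```python
# Minimal sketch: user's and producer's accuracy from a confusion matrix
# (rows = classified map labels, columns = reference labels), cf. [20].
import numpy as np

def uacc_pacc(cm):
    correct = np.diag(cm).astype(float)
    uacc = correct / cm.sum(axis=1)   # share of a mapped class that is correct
    pacc = correct / cm.sum(axis=0)   # share of a reference class that is found
    return uacc, pacc
```

For a class that never occurs in the map or in the reference, the respective quotient is undefined and must be handled separately.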
Table 2. User’s accuracy (uacc) and producer’s accuracy (pacc) of the training area.

Class                 Uacc (%)   Pacc (%)
impervious surface        79         57
building                  90         73
low vegetation            64         55
tree                      87         87
car                       76          7
clutter/background        79         73
Table 3. User’s accuracy (uacc) and producer’s accuracy (pacc) of the enhanced land cover map (Level 1).

Class                 Uacc (%)   Pacc (%)
impervious surface        88         52
building                  86         82
low vegetation            57         58
tree                      74         45
Table 4. Geometric accuracy of class “building” (Level 2). (GT = Ground Truth, Ortho = Ortho-image).

Building No.   No. of Corners   GT RMSEx (m)   GT RMSEy (m)   Ortho RMSEx (m)   Ortho RMSEy (m)
38                   8              1.1            0.8             1.0               0.7
39                   4              1.1            0.4             1.2               0.5
40                   4              1.2            0.5             1.1               0.4
41                   5              1.6            0.8             1.6               0.9
43                   4              0.9            0.7             1.0               0.6
47                   6              1.3            0.8             0.9               0.6
average             5.2             1.2            0.7             1.1               0.6
Table 5. User’s accuracy of the derived classes (adapted from [12]).

Class                  Accuracy (%)   95% CI (%)
building                    99          95–100
hedge and bush              78          69–86
grass                       81          72–89
road and parking lot        90          83–95
tree                        78          69–86
wall and car port           26          18–36
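The intervals in Table 5 are consistent with exact binomial (Clopper–Pearson) confidence intervals, as treated in [19,20]. A minimal sketch, assuming (hypothetically) 100 checked pixels per class:

```python
# Minimal sketch: exact (Clopper-Pearson) 95% confidence interval for a
# class accuracy, cf. Table 5. n = 100 checked pixels per class is an
# assumption made for illustration.
from statsmodels.stats.proportion import proportion_confint

low, high = proportion_confint(count=78, nobs=100, alpha=0.05, method="beta")
print(round(100 * low), round(100 * high))  # approx. 69 86, cf. "hedge and bush"
```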
Table 6. Geometric accuracy of class “building” (E = easting, N = northing).

Number of check points    62
RMSE_E                   0.8 m
RMSE_N                   1.0 m
σ_E                      0.8 m
σ_N                      0.9 m
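The figures in Tables 4 and 6 are per-coordinate root mean square differences between the generated building corners and reference coordinates. A minimal sketch (the helper rmse_and_sigma is hypothetical; σ is taken here as the standard deviation of the residuals, i.e., the dispersion remaining after the mean systematic error is removed):

```python
# Minimal sketch: per-coordinate RMSE and standard deviation of the
# check point residuals, cf. Tables 4 and 6.
import numpy as np

def rmse_and_sigma(generated, reference):
    d = np.asarray(generated) - np.asarray(reference)  # [n_points, 2] (E, N)
    rmse = np.sqrt((d ** 2).mean(axis=0))              # RMSE_E, RMSE_N
    sigma = d.std(axis=0, ddof=1)                      # sigma_E, sigma_N
    return rmse, sigma
```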
