Article

Object-based Land Cover Classification and Change Analysis in the Baltimore Metropolitan Area Using Multitemporal High Resolution Remote Sensing Data

Weiqi Zhou 1,*, Austin Troy 1 and Morgan Grove 2
1 Rubenstein School of Environment and Natural Resources, University of Vermont, George D. Aiken Center, 81 Carrigan Drive, Burlington, VT 05405, USA
2 Northeastern Research Station, USDA Forest Service, South Burlington, VT 05403, USA
* Author to whom correspondence should be addressed.
Sensors 2008, 8(3), 1613-1636; https://doi.org/10.3390/s8031613
Submission received: 29 January 2008 / Accepted: 28 February 2008 / Published: 10 March 2008
(This article belongs to the Special Issue Sensors for Urban Environmental Monitoring)

Abstract:
Accurate and timely information about land cover pattern and change in urban areas is crucial for urban land management decision-making, ecosystem monitoring and urban planning. This paper presents the methods and results of an object-based classification and post-classification change detection of multitemporal high-spatial resolution Emerge aerial imagery in the Gwynns Falls watershed from 1999 to 2004. The Gwynns Falls watershed includes portions of Baltimore City and Baltimore County, Maryland, USA. An object-based approach was first applied to implement the land cover classification separately for each of the two years. The overall accuracies of the classification maps of 1999 and 2004 were 92.3% and 93.7%, respectively. Following the classification, we conducted a comparison of two different land cover change detection methods: traditional (i.e., pixel-based) post-classification comparison and object-based post-classification comparison. The results from our analyses indicated that an object-based approach provides a better means for change detection than a pixel-based method because it provides an effective way to incorporate spatial information and expert knowledge into the change detection process. The overall accuracy of the change map produced by the object-based method was 90.0%, with a Kappa statistic of 0.854, whereas the overall accuracy and Kappa statistic of the map produced by the pixel-based method were 81.3% and 0.712, respectively.

1. Introduction

Accurate and timely information about land cover in urban areas is crucial for urban land management decision-making, ecosystem monitoring and urban planning. Although land cover changes can be monitored by traditional inventories and surveys, satellite/aerial remote sensing provides a cost-effective means of land cover change detection, as it can explicitly reveal spatial patterns of land cover change over large geographic areas in a recurrent and consistent way.
Change detection has been defined as a process of “identifying differences in the state of an object or phenomenon by observing it at different times” [1]. Various methods have been employed using remotely sensed data for land cover change detection for many decades in urban environments [1-3]. Those methods may be broadly classified into two categories: pre-classification change detection and post-classification comparison [1, 4].
A variety of change detection techniques have been developed for pre-classification change detection, or simultaneous analysis of multitemporal data [1, 3, 4], including image differencing [5], image regression [5], image ratioing [6], vegetation index differencing [7], principal components analysis [8], change vector analysis [9-10], artificial neural networks [11], and classification trees [12], to name just a few. These techniques generally produce "change" vs. "no-change" maps, but do not specify the type of change [1-2].
Post-classification comparison methods detect land cover change by comparing independently produced classifications of images from different dates [1, 4]. Although the post-classification comparison method requires classifications of images acquired at different times, it can not only locate the changes, but also provide "from-to" change information [13-15]. In addition, post-classification comparison minimizes the problems caused by variation in sensors and atmospheric conditions, as well as vegetation phenology between different dates, since data from different dates are separately classified [1, 4] and hence reflectance data from the two dates need not be adjusted for direct comparability.
Pixel-based post-classification comparison has been widely used for land cover/land use change detection. In particular, this method has been successfully applied for change detection using land cover maps obtained from remotely sensed imagery with coarse or medium spatial resolution [e.g. 14-16]. As the urban environment is extremely complex and heterogeneous, and features are often smaller than the size of a medium-resolution pixel (e.g., buildings and sidewalks), there is an increasing interest in urban land cover mapping and change detection using high-spatial resolution multispectral imagery from satellite and digital aerial sensors (e.g., QuickBird from DigitalGlobe, Inc., IKONOS from GeoEye, Inc., Emerge from Emerge, Inc.). However, relatively few studies have tested how a pixel-based post-classification comparison approach performs when using very high-spatial resolution imagery.
Meanwhile, object-based image analysis is quickly gaining acceptance in the remote sensing community and has demonstrated great potential for classification and change detection of high-spatial resolution multispectral imagery in heterogeneous urban environments [e.g. 17-19]. Rather than dealing with individual pixels, the object-based approach first segments imagery into homogeneous small objects, which then serve as building blocks for subsequent classification and change detection of larger entities. Object characteristics such as shape, spatial relations and reflectance statistics can be used for classification and change detection. Several researchers have demonstrated that an object-based approach based on image segmentation can improve the accuracy and efficiency of change detection [e.g. 17, 20-23]. Although there is an increasing interest in the application of object-based approaches for change detection, relatively few studies have investigated the effectiveness and efficiency of an object-based approach for post-classification comparison change detection, particularly when using very high-spatial resolution data [24-25].
This paper presents the methods and results of an object-based classification and post-classification change detection of multitemporal high-spatial resolution Emerge aerial imagery in the Gwynns Falls watershed in Maryland from 1999 to 2004. The objectives are to: (1) develop an object-based classification and post-classification change detection approach to map and monitor land cover changes in urban areas; (2) compare an object-based approach with a pixel-based method and evaluate their effectiveness for post-classification comparison change detection in an urban setting; and (3) use the resulting information to map land cover and land cover change in the Gwynns Falls watershed from 1999 to 2004.

2. Study area

This research focused on the Gwynns Falls watershed, a study site of the Baltimore Ecosystem Study (BES), a long-term ecological research project (LTER) of the National Science Foundation (www.beslter.org). The Gwynns Falls watershed, with an area of approximately 17,150 hectares, lies in Baltimore City and Baltimore County, Maryland and drains into the Chesapeake Bay (Figure 1). The Gwynns Falls watershed traverses an urban-suburban-rural gradient from the urban core of Baltimore City, through older inner ring suburbs, to rapidly suburbanizing areas in the middle reaches and a rural/suburban fringe in the upper section. Land cover in the Gwynns Falls watershed varies from highly impervious in the lower sections to a broad mix of impervious surface and forest cover in the middle and upper sections. The variety of urban and suburban land cover types, combined with ongoing urbanization along the urban-rural gradient, makes it an ideal site for this study.

3. Methods

3.1 Data collection and preprocessing

High spatial resolution color-infrared digital aerial imagery, Light Detection and Ranging (LIDAR) data, and other ancillary data were used in this study. Digital aerial imagery from Emerge Inc. for two years (October 1999 and August 2004) was collected for the Gwynns Falls watershed. The imagery was 3-band color-infrared, with green (510-600 nm), red (600-700 nm), and near-infrared (800-900 nm) bands. Pixel size for the imagery was 0.6 m. The imagery was orthorectified using a bilinear interpolation resampling method, and meets the National Mapping Accuracy Standards for 1:3,000-scale mapping (3-meter accuracy with 90% confidence). LIDAR data used in this study were acquired in March 2002. Both the first and last vertical returns were recorded for each laser pulse, with an average point spacing of approximately 1.3 m. A surface cover height model with 1-m spatial resolution was derived from the LIDAR data and used to aid in land cover classification.
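The paper does not detail how the height model was computed; a minimal sketch of one common way to derive such a model, assuming a first-return surface and a bare-earth terrain model gridded at 1 m (file names are hypothetical), is:

```python
# Sketch only: derive a surface cover height model by differencing a
# first-return digital surface model (DSM) and a bare-earth digital
# terrain model (DTM). File names are hypothetical placeholders.
import numpy as np
import rasterio

with rasterio.open("lidar_first_return_dsm_1m.tif") as dsm_src, \
     rasterio.open("lidar_bare_earth_dtm_1m.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")
    dtm = dtm_src.read(1).astype("float32")
    profile = dsm_src.profile

profile.update(dtype="float32", count=1)

# Height above ground; small negative values (noise, interpolation
# artifacts) are clipped to zero.
height = np.clip(dsm - dtm, 0, None)

with rasterio.open("surface_cover_height_1m.tif", "w", **profile) as dst:
    dst.write(height, 1)
```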
Property parcel boundaries and building footprints datasets were obtained in digital format from the Baltimore City and Baltimore County municipal governments, and were used both to facilitate object segmentation and to obtain greater classification accuracy. The parcel boundaries had a high degree of spatial accuracy when compared with 1:3,000 scale 0.5 m aerial imagery. A limited assessment was conducted to compare the building footprints to the Emerge image data. The comparison indicated that the building footprints agree spatially with the Emerge imagery, but a small proportion of building footprints had not yet been digitized by municipal data providers in the study area.

3.2 Object-based classification

An object-based approach was used to conduct the classification separately on the data collected for the two years. Five land cover classes were used: 1) buildings, 2) pavement, 3) coarse textured vegetation (trees and shrubs), 4) fine textured vegetation (herbaceous vegetation and grasses), and 5) bare soil [26]. Here, we briefly describe the classification processes. A more detailed account is given in Zhou and Troy [19].
We first segmented the image into objects. The image segmentation algorithm used in this study followed the fractal net evolution approach [27], which is embedded in Definiens Developer (formerly known as eCognition) [28]. The segmentation algorithm is a bottom-up region merging technique, which is initialized with each pixel in the image as a separate segment. In subsequent steps, segments are merged based on their level of similarity. A user-defined scale parameter indirectly controls the size of objects by specifying how much heterogeneity is allowed within each object [29]. The greater the scale parameter, the larger the average size of the objects. User-defined color and shape parameters can also be set to change the relative weighting of reflectance and shape in defining segments. The process stops when there are no more possible merges given the defined scale parameter. The segmentation was conducted at a very fine scale, with a scale parameter of 20. This value was determined by visual interpretation of the image segmentation results, such that object primitives were considered internally homogeneous, i.e., all pixels within an object primitive belonged to one cover class. Both the parcel boundary layer and the building footprints data were used as thematic layers when performing the segmentation. Due to their lower resolution, the LIDAR data were not used in the segmentation process; however, the elevation information derived from the LIDAR was used in the classification [19].
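The fractal net evolution segmenter itself is proprietary to Definiens, but the general region-merging idea can be illustrated with an open-source stand-in; the sketch below uses scikit-image's graph-based Felzenszwalb segmenter, whose scale argument plays a role loosely analogous to (but not numerically interchangeable with) the Definiens scale parameter, and the input file name is hypothetical.

```python
# Illustrative stand-in for the segmentation step, not the algorithm used
# in the study. Larger `scale` tolerates more within-segment heterogeneity
# and therefore produces larger object primitives.
import numpy as np
import rasterio
from skimage.segmentation import felzenszwalb

with rasterio.open("emerge_2004_cir.tif") as src:      # hypothetical file
    img = src.read([1, 2, 3]).astype("float32")        # green, red, NIR

img = np.moveaxis(img, 0, -1)                          # (rows, cols, bands)
img = (img - img.min()) / (img.max() - img.min() + 1e-9)

segments = felzenszwalb(img, scale=20, sigma=0.5, min_size=25,
                        channel_axis=-1)
print("number of object primitives:", int(segments.max()) + 1)
```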
Once the segmentation was done, we used rule-based classification to assign each of the object primitives to one of the five land cover classes. The knowledge base of classification rules developed for the same geographic region in Zhou and Troy [19] was applied for the classification. The knowledge base combines classification rules based on object characteristics such as brightness, height, size, shape, and adjacency. The classified image objects were exported to a thematic raster layer containing all five classes for each of the two years.
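To give a flavor of what such rules look like, the sketch below labels an object primitive from a handful of features; the feature names and thresholds are hypothetical, and the actual knowledge base in Zhou and Troy [19] is considerably richer.

```python
# A minimal, hypothetical sketch of rule-based labeling of object
# primitives; not the rule set actually used in the study.
from dataclasses import dataclass

@dataclass
class ObjectPrimitive:
    mean_nir: float        # mean near-infrared reflectance (0-1)
    mean_red: float        # mean red reflectance (0-1)
    mean_height_m: float   # mean LIDAR-derived surface height (m)
    footprint_frac: float  # fraction covered by a building footprint polygon

def classify(obj: ObjectPrimitive) -> str:
    ndvi = (obj.mean_nir - obj.mean_red) / (obj.mean_nir + obj.mean_red + 1e-9)
    if obj.footprint_frac > 0.5 or (ndvi < 0.1 and obj.mean_height_m > 2.5):
        return "building"                    # in a mapped footprint, or tall and non-vegetated
    if ndvi >= 0.1 and obj.mean_height_m > 2.5:
        return "coarse textured vegetation"  # trees and shrubs
    if ndvi >= 0.1:
        return "fine textured vegetation"    # herbaceous vegetation and grasses
    if obj.mean_nir > 0.35:
        return "bare soil"                   # bright, low, non-vegetated
    return "pavement"

print(classify(ObjectPrimitive(0.45, 0.15, 8.0, 0.0)))
```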

3.3 Classification Accuracy Assessment

An accuracy assessment of the classification results was performed using reference data created from visual interpretation of the Emerge image data. The accuracy assessment was carried out separately for the two years. A stratified random sampling method was used to generate the random points in Erdas Imagine™ (version 9.1) [30]. A total of 350 random points were sampled, with at least 50 random points for each class [31]. Error matrices that describe the patterns of the mapped classes relative to the reference data were generated, from which the overall accuracies, user's and producer's accuracies, and Kappa statistics were derived to assess the accuracies of the classification maps [32].
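A minimal sketch of these accuracy computations, assuming the reference and mapped labels of the sample points are available as arrays (the toy labels below are illustrative only), is:

```python
# Error matrix, overall accuracy, user's/producer's accuracies and Kappa
# from reference vs. mapped labels at the sample points (toy data).
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

classes = ["building", "pavement", "CV", "FV", "bare soil"]
reference = np.array(["building", "pavement", "CV", "CV", "FV", "bare soil"])
mapped    = np.array(["building", "pavement", "CV", "FV", "FV", "bare soil"])

cm = confusion_matrix(reference, mapped, labels=classes)  # rows: reference, cols: mapped
overall   = np.trace(cm) / cm.sum()
producers = np.diag(cm) / cm.sum(axis=1)   # per reference class (omission view)
users     = np.diag(cm) / cm.sum(axis=0)   # per mapped class (commission view)
kappa     = cohen_kappa_score(reference, mapped, labels=classes)
print(overall, users, producers, kappa)
```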

3.4 Post-classification Change Detection

Multi-date post-classification comparison change detection was performed to investigate land cover change in the study area from 1999 to 2004, following the land cover classifications for the two years. We conducted a comparison of two different land cover change detection methods: traditional (i.e., pixel-based) post-classification comparison [15, 16, 33] and object-based post-classification comparison.
Several types of land cover change (e.g., buildings changing to other land cover types) were considered to be highly unlikely in our study area. Consequently, 16 classes, comprising 15 change categories and the class of no change, as listed in Table 1, were used in the subsequent change detection analysis.

3.4.1 Pixel-based post-classification change detection

Pixel-based post-classification comparison is a common approach for land cover change detection [1, 13], and has been successfully used by studies such as Yang [16] and Yuan et al. [15] to monitor land cover and land use changes in urban areas. In this study, the pixel-based change detection was performed by first generating a difference map (i.e., a binary image of change and no-change) between 1999 and 2004. The difference map was created by comparing the land cover types of the two classification maps. A 7×7 pixel low pass filter (i.e., a smoothing algorithm that takes the average over a 7×7 cell moving window) was applied to the difference map to reduce the "salt and pepper" effects and remove the edge errors caused by spatial inaccuracies between data from the two years [34]. The window size of the filter was determined based on the horizontal errors of the Emerge data, which were estimated to be within 3 meters at the 90% confidence level. The final change map with the 15 change categories and the class of no change was created by an overlay analysis of the binary image and the two classification maps.
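A minimal sketch of this workflow, assuming the two classification maps are co-registered integer rasters with class codes 1-5 (the threshold used to binarize the smoothed difference map is an assumption, as the paper does not state one), is:

```python
# Pixel-based post-classification comparison: difference map, 7x7
# moving-average smoothing, then a "from-to" overlay. Placeholder rasters.
import numpy as np
from scipy.ndimage import uniform_filter

lc1999 = np.random.randint(1, 6, size=(500, 500))
lc2004 = np.random.randint(1, 6, size=(500, 500))

change = (lc1999 != lc2004).astype("float32")      # binary change / no-change

# 7x7 low-pass filter; a pixel is kept as "change" only if most of its
# neighbourhood also changed (the 0.5 cutoff is an assumed choice).
smoothed = uniform_filter(change, size=7) > 0.5

# Overlay: encode each "from-to" transition as 10*class1999 + class2004.
from_to = np.where(smoothed, lc1999 * 10 + lc2004, 0)
```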

3.4.2 Object-based post-classification change detection

Similar to object-based classification, the first step in object-based change detection is to perform the image segmentation. However, instead of using multispectral imagery, we segmented the 2004 land cover classification map.
The resultant objects from the segmentation were identical to those of a union overlay operation between the two classified polygon layers (i.e., the 1999 and 2004 classification maps), in which all polygons from both classification layers were split at their intersections and preserved in the resultant object level, as illustrated in Figure 2. Both classification maps for 1999 and 2004 were used as thematic layers when performing the segmentation. When using a thematic layer, the borders separating different thematic classes are restrictive for any further segmentation [28]. In other words, the generated objects were not allowed to cross any of the borders of different land cover classes. We generated the objects based exclusively on the information of thematic layers by setting the weight of the image layer to 0 [28].
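In conventional GIS terms, the same result can be obtained with a union overlay of the two classified polygon layers; a brief sketch (with hypothetical file and attribute names) is:

```python
# Union overlay of the 1999 and 2004 classification polygons: every polygon
# is split at the intersections so that each resulting object carries both
# years' land cover labels. File and column names are hypothetical.
import geopandas as gpd

lc1999 = gpd.read_file("landcover_1999.shp")[["lc_1999", "geometry"]]
lc2004 = gpd.read_file("landcover_2004.shp")[["lc_2004", "geometry"]]

objects = gpd.overlay(lc1999, lc2004, how="union")
# Each row of `objects` now holds the lc_1999 and lc_2004 attributes of one
# homogeneous change-detection object.
```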
Following the segmentation, a knowledge base of change detection rules was created to classify each object into one of the 16 change classes at the most disaggregated spatial level. The knowledge base consists largely of "if-then" rules [19]. Most of the rules were created to reduce the errors that were propagated from the attribute and position errors in the two classification maps. In post-classification change detection, the attribute errors in the two classification maps and errors in spatial registration between the two classification maps lead to a significant overestimation of actual change [3, 35]. Object characteristics including land cover types for the two years, spatial relations (e.g. distance to neighbors and adjacency), and shape features were utilized to create rules for change detection. The relevant features and their threshold values were determined by combining expert knowledge and quantitative analyses [19]. We briefly describe the class hierarchy (see Figure 3) and the associated features and rules used to identify the different types of land cover change below.
We first separated the objects with no change from those with possible changes by comparing the land cover types obtained from the two thematic layers. Objects with the same type of land cover in the two years were identified as having no change (NoChange), whereas those with different types of land cover were considered as possibly having changed (PossibleChange). Objects whose land cover transitions were considered highly unlikely (see Table 1) were also classified as NoChange.
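A compact sketch of this first rule, with the highly unlikely transitions of Table 1 encoded explicitly, is:

```python
# Initial labeling of each object from its 1999 and 2004 land cover types.
# Transitions listed as highly unlikely in Table 1 (any change away from
# building, and pavement to building) are treated as NoChange.
UNLIKELY = {("building", "pavement"), ("building", "bare soil"),
            ("building", "CV"), ("building", "FV"),
            ("pavement", "building")}

def initial_label(lc_1999: str, lc_2004: str) -> str:
    if lc_1999 == lc_2004 or (lc_1999, lc_2004) in UNLIKELY:
        return "NoChange"
    return "PossibleChange"
```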
Before we further classified the changes into different categories, rules were first created to reduce the edge errors and “salt and pepper” effects that were propagated through spatial inaccuracies and classification errors from the two classification maps. To reduce the edge errors caused by spatial misregistration, objects that were classified as PossibleChange were reclassified as NoChange, if their widths were less than 3 meters. The threshold value of 3 meters was determined based on the prior knowledge that the horizontal errors of Emerge data were estimated to be within 3 meters at the 90% confidence level. In addition, objects with areas of less than 10m2 were also reclassified as NoChange to reduce the “salt and pepper” effects.
We then classified the objects of PossibleChange into 5 classes: ToBuilding, ToPavement, ToBareSoil, ToCV, and ToFV, based on the land cover type from the 2004 classification layer. For instance, if the land cover type of an object was building in 2004, then the object was classified as ToBuilding. Rules varying by land cover type were then created to either further classify those objects into sub-categories or eliminate false detection errors.
Each object classified as ToBuilding was identified as being falsely detected and reclassified as NoChange if it satisfied one of two conditions: 1) its area was less than 50 m2 and it was not spatially adjacent to a building; or 2) its area was less than 50 m2 and it was spatially adjacent to a building, but its compactness was larger than 2. These rules were created based on our knowledge that changes to buildings would occur mainly in two ways: 1) development of a totally new building, with a minimum footprint size of 50 m2, which was determined by a statistical analysis of building sizes in the study area; and 2) expansion of an existing building, where the area could be less than 50 m2 but should be adjacent to an existing building. Otherwise, an object was classified as one of three classes, BareSoil-Building, CV-Building, or FV-Building, based on its land cover type in 1999.
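These false-detection rules can be written compactly as below; area_m2, compactness, and adjacent_to_building are assumed to be object features exported from the segmentation.

```python
# Sketch of the ToBuilding refinement rules described above (assumed
# feature names; thresholds as reported in the text).
def refine_to_building(area_m2: float, compactness: float,
                       adjacent_to_building: bool) -> str:
    if area_m2 < 50 and not adjacent_to_building:
        return "NoChange"      # too small to be a new building on its own
    if area_m2 < 50 and adjacent_to_building and compactness > 2:
        return "NoChange"      # implausibly elongated for a building extension
    return "ToBuilding"        # kept, then split by the 1999 land cover class
```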
Objects of ToCV were first classified into three subclasses: Pavement-CV, BareSoil-CV, and FV-CV. Changes to CV mainly came from the growth or expansion of existing tree canopies, occurring mostly at the edges or boundaries of existing trees and forest stands, where errors propagated through position and attribute errors from the classification maps were relatively large. Therefore, rules were created to reduce commission errors for the classes FV-CV and Pavement-CV. Specifically, objects that were classified as FV-CV were reclassified as NoChange if their relative borders to CV were greater than 0.6, or their relative borders to CV were less than 0.3. Similarly, objects of Pavement-CV were reclassified as NoChange if their relative borders to CV were less than 0.3.
Objects of ToFV were also first classified into three subclasses, Pavement-FV, BareSoil-FV, and CV-FV. Rules were then generated to identify falsely detected changes to FV. An object of CV-FV was reclassified as NoChange if it was not spatially adjacent to any of the three types of change, that is, ToBuilding, ToPavement, and ToBaresoil. This rule was created based on our observation that most of the changes from CV to FV occurred simultaneously with at least one of the three land cover conversions. Objects of Pavement-FV were identified as NoChange if they bordered buildings. This rule was created to reduce the commission errors caused by the classification errors of FV in the 2004 classification map, where some of the shaded pavement was misclassified as FV.

3.5 Change Detection Accuracy Assessment

The expected accuracy of change detection can be roughly estimated by simply multiplying the accuracies of the individual classifications [1, 4]. However, to quantitatively evaluate the accuracy of the change maps, we need to generate stratified samples and determine whether they are correctly classified [15, 36]. We applied this approach to evaluate the accuracies of the two post-classification comparison methods, using reference data created from visual interpretation of the bitemporal Emerge image data. As the accuracy assessment required very intensive visual analysis, we aggregated the sub-change categories of each land cover type into one change class when conducting the analysis. For instance, the three subclasses of ToBuilding (BareSoil-Building, CV-Building, and FV-Building) were aggregated into a single ToBuilding class. Consequently, the accuracy assessment was performed on 6 strata, including the class of NoChange and the five change classes.
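For example, with the overall classification accuracies of 92.3% (1999) and 93.7% (2004) reported in Section 4.1.1, this rough estimate of the expected change detection accuracy works out to

$$0.923 \times 0.937 \approx 0.865,$$

i.e., about 86.5%.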
For each of the two change maps, a stratified random sampling method was used to generate the random points in Erdas Imagine™ (version 9.1) [30]. A total of 400 random points were sampled, with 200 random points for the class of NoChange and at least 30 random points for each of the 5 change classes [31]. Error matrices that describe the patterns of the mapped classes relative to the reference data were generated. Overall accuracy, user's and producer's accuracies, and the Kappa statistic obtained from the error matrices were used to assess the change detection accuracy [32].
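A short sketch of such stratified sampling from a change map stored as an integer raster (0 = NoChange, 1-5 = the aggregated change classes; the per-class allocation of 40 points is one assumption consistent with the quotas above) is:

```python
# Stratified random sampling of accuracy-assessment points from a change
# map raster. The raster here is a random placeholder.
import numpy as np

rng = np.random.default_rng(42)
change_map = np.random.randint(0, 6, size=(1000, 1000))
quotas = {0: 200, 1: 40, 2: 40, 3: 40, 4: 40, 5: 40}   # 400 points in total

samples = []
for cls, n in quotas.items():
    rows, cols = np.nonzero(change_map == cls)
    idx = rng.choice(len(rows), size=n, replace=False)
    samples.extend((int(rows[i]), int(cols[i]), cls) for i in idx)
```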

4. Results

4.1 Classification and Change Detection Accuracy

4.1.1 Classification Accuracy

The classification accuracies derived from the error matrices are listed in Table 2. The overall accuracies and the user's and producer's accuracies of individual classes were consistently high for the classification maps of the two years. The overall accuracies for 1999 and 2004 were 92.3% and 93.7%, respectively, and the Kappa statistics equaled 0.899 and 0.921. User's and producer's accuracies of individual classes in 1999 ranged from 83.6% to 100%, whereas those in 2004 varied from 91.4% to 97.7%.

4.1.2 Change Detection Accuracy

Table 3 lists the error matrix of the five change classes and the NoChange class for the change map obtained from the pixel-based post-classification comparison approach. User's and producer's accuracies for each of the classes, the overall accuracy, and the Kappa statistic derived from the error matrix are also summarized in the table. The overall accuracy for the pixel-based post-classification comparison approach was 81.3%, with a Kappa statistic of 0.712. Except for the classes of NoChange and ToBareSoil, the user's accuracies of the other classes were relatively low. In particular, the user's accuracies for the classes ToCV and ToFV were only 43.6% and 48.3%, respectively. The relatively large commission errors for these two classes were mostly caused by areas with no change being falsely identified as changed. For instance, as shown in Table 3, more than half of the detected changes to CV (20 out of 39) were actually NoChange, while 24 out of 58 of the detected changes to FV were NoChange. For all the classes except NoChange, the producer's accuracies were higher than their corresponding user's accuracies, ranging from 73.9% to 96.4%.
The accuracies for the change map resulting from the object-based method are summarized in Table 4. The overall accuracy and Kappa statistic for the object-based approach were 90.0% and 0.854, respectively. Under the object-based approach, in most cases both the user's and producer's accuracies of the individual classes were higher than those obtained with the pixel-based method. The producer's accuracies of all the classes were consistently high, ranging from 81.8% to 96.3%. The user's accuracies for most of the classes were also relatively high, the exception being ToCV, at 61.1%.

4.2 Land cover and its change in the Gwynns Falls watershed from 1999 to 2004

Figures 4 and 5 depict the classification results for the Gwynns Falls watershed in 1999 and 2004, respectively. Table 5 lists the area and proportion of each of the five land cover classes for the watershed in 1999 and 2004, as well as the changes in area and percentage for each class from 1999 to 2004. The table shows that the areas of bare soil and fine textured vegetation were greatly reduced, whereas the areas of building, pavement and coarse textured vegetation increased, with pavement showing the largest increase. From 1999 to 2004, the proportion of impervious surface (building plus pavement) increased by approximately 2.3 percentage points, while fine textured vegetation and bare soil decreased by 1.3 and 1.2 percentage points, respectively. The coverage of coarse textured vegetation increased slightly, by 0.2 percentage points.
Table 6 shows a matrix that was derived from the change detection results obtained from the object-based method. The matrix details the land cover conversions from one land cover type to another between 1999 and 2004. In the matrix, the value in each of the cells was the amount of land that was converted from one land cover type to another. For instance, the value of 24.22 (the fourth row, second column) means that 24.22 hectares of bare soil were converted to buildings from 1999 to 2004. The column total sums the total amount of land that was converted to the land cover type given in the column heading, whereas the row total sums the amount of land that was originally of the type given in the row headings but was changed. The matrix shows that major land cover conversions occurred from fine textured vegetation to pavement and coarse textured vegetation, and from bare soil to fine textured vegetation and pavement, as well as from coarse textured vegetation to fine textured vegetation and pavement. Altogether, 2244.9 hectares, or 13.1 % of the land within the Gwynns Falls watershed experienced land cover conversions in the five-year time period.
Figure 6 depicts the land cover change map for the Gwynns Falls watershed from 1999 to 2004, obtained using the object-based change detection approach. The change map not only shows where land cover changes occurred, but also illustrates the nature of the land cover conversions. This map reveals spatial patterns in land cover changes. For instance, although Table 6 indicates that the area of coarse textured vegetation increased over the five-year period, the change map shows that the loss of forest was generally caused by the conversion of large patches of forestland to developed land (e.g., Figure 7, Panel A), whereas the gain of tree canopy mainly came from the growth or expansion of existing trees or forest stands, in the form of numerous scattered small patches (e.g., Figure 7, Panel B). This type of information may have important implications for urban forestland management [37].

5. Discussion

The object-based approach used in this study proved to be very effective for classification of high-spatial resolution multispectral imagery in urban environments. The accuracies of the classification maps for the two years were consistently high relative to many other methods [38-39]. When high-resolution imagery is used in heterogeneous urban landscapes, conventional pixel-based classification approaches that only utilize spectral information have very limited usefulness. This is because the spectral characteristics among different land cover types (e.g. building and pavement) could be very similar, while spectral variation within the same land cover type or even within the same object might be high [19, 40]. For instance, a single building may have a wide range of reflectance values in its constituent pixels based on differences in materials and shading. Under an object-based approach, the grouping of pixels into objects decreases the variance within the same land cover type by averaging the pixels within the objects. Further, as we are dealing with objects instead of pixels, we are able to employ spatial relations, shape metrics, and expert knowledge to aid in the classification, all of which are crucial in discriminating between different land cover types with similar spectral response characteristics [19].
The results from our analyses show that although the overall accuracy (81.3%) of the pixel-based change detection method was acceptable, the user's and producer's accuracies for certain classes (e.g., user's accuracies of 43.6% for ToCV and 48.3% for ToFV, see Table 3) were too low.
The relatively large commission errors for the classes ToCV and ToFV might be caused by the position and/or attribute errors that were propagated through the two classification maps [41]. As the accuracies of the two classes for the independent classification maps were relatively high (See Table 2), the relatively large commission errors might be mainly caused by position errors [2, 35, 42]. Although the post-change detection refinements (i.e., smoothing window) reduced the errors caused by misregistration to some extent, our accuracy assessment on the change map confirmed the assumption that commission errors were largely caused by spatial inaccuracies, or spatial misregistration between the two classification maps. This was particularly true for those land cover types such as CV where changes to those land cover types mainly occurred at the edges or boundaries of the existing land cover.
Accurate spatial registration is crucial for reliable and accurate assessment of land cover changes [1, 35, 42]. However, precise geometric registration of images is often very difficult to achieve [1], particularly for high-spatial resolution imagery. Although post-change detection refinements can be applied to reduce errors caused by misregistration to some extent [34], our results indicated that the commission errors caused by misregistration still were significant, particularly for tree canopy and fine textured vegetation (See Figure 8 as an example).
The results from our analyses indicated that an object-based approach provides a better means for post-classification change detection than a pixel-based method. Under the object-based approach, both the overall accuracy and the Kappa statistic greatly increased (see Tables 3 and 4). In most cases, both the user's and producer's accuracies of the individual classes were significantly improved. In particular, the commission errors for most of the classes were greatly reduced. For instance, the user's accuracy for ToFV increased from 48.3% to 84.4%.
A post-classification comparison approach based on image segmentation and rule-based change detection provides an effective way to incorporate spatial information and expert knowledge into the change detection process, in turn reducing errors that could propagate from attribute and position errors of the two classification maps. Firstly, an object-based approach provides an effective way to incorporate prior knowledge of the spatial inaccuracies of the imagery (or classification maps) to reduce errors caused by spatial misregistration. Although post-classification comparison might alleviate the problem of obtaining accurate registration of multidate images [1], our results indicated that spatial misregistration can introduce significant errors into change detection when applying a pixel-based post-classification comparison method to high-spatial resolution classification maps, because pixels must be aligned perfectly to allow for pixel-based comparison (see Figure 8). Therefore, change detection techniques that require less precise registration of images are highly desirable [1]. Under an object-based post-classification approach, the minimum mapping units become larger, thus reducing the need for exact correspondence between layers [43]. Further, rules can be created to effectively reduce the effects of spatial inaccuracies based on the known horizontal errors of the Emerge data. Our accuracy assessment suggested that change detection errors, particularly commission errors, were greatly reduced with the object-based approach.
Secondly, we are able to utilize spatial relations, object features, and expert knowledge for change detection. Rules that vary by class can be developed to reduce commission and omission errors based on the characteristics of different classes. For instance, we could create rules for building changes using our knowledge of building areas, as illustrated in this study. Furthermore, prior knowledge of the classification accuracies of individual classes can be effectively integrated into the change detection process to reduce errors, particularly, commission errors that could propagate from the attribute errors of the classification maps. For instance, based on the discovery that the classification error for buildings in 1999 was mainly caused by part of the buildings being misclassified as pavement, and based on the fact that there are almost no actual transitions from pavement to buildings in our study area, we created a rule to disallow conversions of pavement to buildings.
Both the object-based and pixel-based methods had limited success in change detection for the class CV. With the pixel-based method, the user's and producer's accuracies for the class ToCV were only 43.6% and 73.9%, respectively. Under the object-based approach, although the producer's accuracy of ToCV was greatly improved to 91.7%, the user's accuracy was still relatively low, at 61.1%. The relatively large commission errors had two main causes. First, most of the changes to CV occurred at the edges or boundaries of existing trees and forest stands, where registration errors and edge effects can cause large commission errors in the determination of change vs. no change [1, 15, 35]. Second, along the edges and boundaries of existing trees and forest stands, the accuracies of the classification maps were relatively low because of mixed-object effects, shadows and inaccuracies in the ancillary data used in the classification [19]. Therefore, we found it very difficult to create effective rules to distinguish real changes from errors caused by misregistration and misclassification.
We should also note that an object-based approach is more computationally intensive than a pixel-based method. While a pixel-based method is relatively straightforward and easily performed, under an object-based approach the development of the knowledge base can be very complex, and the effectiveness of the rules depends heavily on expert input and the availability of information on the classification maps and change classes. We found that the accuracy analysis of the pixel-based change map could provide valuable insights into detection errors, based on which effective rules could be developed to eliminate those errors in an object-based approach. Therefore, a combination of an object-based approach with a pixel-based method might provide an optimal change detection technique for describing changes in land cover in terms of quantity, location, shape, pattern, and transitions in urban environments.

Acknowledgments

This research was funded by the Northern Research Station, USDA Forest Service and the National Science Foundation LTER program (grant DEB-042376). The authors would also like to thank the two anonymous reviewers for their helpful comments and suggestions.

References

1. Singh, A. Digital change detection techniques using remotely-sensed data. International Journal of Remote Sensing 1989, 10, 989–1003.
2. Lu, D.; Mausel, P.; Brondizio, E.; Moran, E. Change detection techniques. International Journal of Remote Sensing 2004, 25(12), 2365–2407.
3. Rogan, J.; Chen, D.M. Remote sensing technology for mapping and monitoring land-cover and land-use change. Progress in Planning 2004, 61(4), 301–325.
4. Yuan, D.; Elvidge, C. D.; Lunetta, R. S. Survey of multispectral methods for land cover change analysis. In Remote Sensing Change Detection: Environmental Monitoring Methods and Applications; Lunetta, R. S., Elvidge, C. D., Eds.; Ann Arbor Press: Chelsea, MI, 1998; pp. 21–39.
5. Ridd, M. K.; Liu, J. A comparison of four algorithms for change detection in an urban environment. Remote Sensing of Environment 1998, 63, 95–100.
6. Prakash, A.; Gupta, R. P. Land-use mapping and change detection in a coal mining area - a case study in the Jharia coalfield, India. International Journal of Remote Sensing 1998, 19, 391–410.
7. Howarth, P. J.; Boasson, E. Landsat digital enhancements for change detection in urban environments. Remote Sensing of Environment 1983, 13, 149–160.
8. Gong, P. Change detection using principal component analysis and fuzzy set theory. Canadian Journal of Remote Sensing 1993, 19, 22–29.
9. Chen, J.; Gong, P.; He, C.; Pu, R.; Shi, P. Land-use/land-cover change detection using improved change-vector analysis. Photogrammetric Engineering and Remote Sensing 2003, 69, 369–380.
10. Johnson, R. D.; Kasischke, E. S. Change vector analysis: a technique for the multitemporal monitoring of land cover and condition. International Journal of Remote Sensing 1998, 19, 411–426.
11. Dai, X. L.; Khorram, S. Remotely sensed change detection based on artificial neural networks. Photogrammetric Engineering and Remote Sensing 1999, 65, 1187–1194.
12. Rogan, J.; Miller, J.; Stow, D. A.; Franklin, J.; Levien, L.; Fischer, C. Land-Cover Change Monitoring with Classification Trees Using Landsat TM and Ancillary Data. Photogrammetric Engineering and Remote Sensing 2003, 69(7), 793–804.
13. Jensen, J. R. Introductory Digital Image Processing: A Remote Sensing Perspective; Prentice-Hall: New Jersey, 2004; p. 526.
14. Mas, J. F. Monitoring land-cover changes: a comparison of change detection techniques. International Journal of Remote Sensing 1999, 20, 139–152.
15. Yuan, F.; Sawaya, K. E.; Loeffelholz, B.; Bauer, M. E. Land cover classification and change analysis of the Twin Cities (Minnesota) metropolitan area by multitemporal Landsat remote sensing. Remote Sensing of Environment 2005, 98(2), 317–328.
16. Yang, X. Satellite monitoring of urban spatial growth in the Atlanta metropolitan area. Photogrammetric Engineering and Remote Sensing 2002, 68(7), 725–734.
17. Im, J.; Jensen, J. R.; Tullis, J. A. Object-based change detection using correlation image analysis and image segmentation. International Journal of Remote Sensing 2008, 29(2), 399–423.
18. Shackelford, A. K.; Davis, C. H. A combined fuzzy pixel-based and object-based approach for classification of high-resolution multispectral data over urban areas. IEEE Transactions on Geoscience and Remote Sensing 2003, 41(10), 2354–2363.
19. Zhou, W.; Troy, A. An Object-oriented Approach for Analyzing and Characterizing Urban Landscape at the Parcel Level. International Journal of Remote Sensing.
20. Civco, D.; Hurd, J.; Wilson, E.; Song, M.; Zhang, Z. A comparison of land use and land cover change detection methods. In ASPRS-ACSM Annual Conference, 2002.
21. Laliberte, A. S.; Rango, A.; Havstad, K. M.; Paris, J. F.; Beck, R. F.; Mcneely, R.; Gonzalez, A. L. Object-oriented image analysis for mapping shrub encroachment from 1937 to 2003 in southern New Mexico. Remote Sensing of Environment 2004, 93, 198–210.
22. Niemeyer, I.; Nussbaum, S.; Canty, M. J. Automation of change detection procedures for nuclear safeguards-related monitoring purposes. In IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 27 July 2003; DVD-ROM.
23. Walter, V. Object-based classification of remote sensing data for change detection. ISPRS Journal of Photogrammetry & Remote Sensing 2004, 58, 225–238.
24. Blaschke, T. A framework for change detection based on image objects. In Göttinger Geographische Abhandlungen; Erasmi, S., Cyffka, B., Kappas, M., Eds.; 2005; pp. 1–9. Available online: http://www.definiens.com/binary_secure/213_140.pdf?binary_id=213&log_id=10463&session_id=63e0a3f4d2bb30984e35e709eb54e658 (accessed 21 December 2007).
25. Frauman, E.; Wolff, E. Change detection in urban areas using very high spatial resolution satellite images - case study in Brussels: locating main changes in order to update the Urban Information System (UrbIS) database. Remote Sensing for Environmental Monitoring, GIS Applications, and Geology V 2005, 5983, 76–87.
26. Cadenasso, M. L.; Pickett, S. T. A.; Schwarz, K. Spatial heterogeneity in urban ecosystems: reconceptualizing land cover and a framework for classification. Frontiers in Ecology and the Environment 2007, 5(2), 80–88.
27. Baatz, M.; Schape, A. Multiresolution segmentation: An optimization approach for high quality multi-scale image segmentation. In Angewandte Geographische Informationsverarbeitung XII; Strobl, T., Blaschke, T., Griesebner, G., Eds.; Beiträge zum AGIT-Symposium Salzburg: Karlsruhe, 2000; pp. 12–23.
28. Definiens. Definiens Developer, 2007. Software: http://www.definiens.com/.
29. Benz, U. C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS Journal of Photogrammetry & Remote Sensing 2004, 58, 239–258.
30. Congalton, R. G. A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data. Remote Sensing of Environment 1991, 37, 35–46.
31. Goodchild, M. F.; Biging, G. S.; Congalton, R. G.; Langley, P. G.; Chrisman, N. R.; Davis, F. W. Final Report of the Accuracy Assessment Task Force. California Assembly Bill AB1580; University of California, National Center for Geographic Information and Analysis (NCGIA): Santa Barbara, 1994.
32. Foody, G. M. Status of land cover classification accuracy assessment. Remote Sensing of Environment 2002, 80, 185–201.
33. Lunetta, R.; Elvidge, C. Remote Sensing Change Detection; Taylor & Francis, 1999; p. 320.
34. Murakami, H.; Nakagawa, K.; Hasegawa, H.; Shibata, T.; Iwanami, E. Change detection of buildings using an airborne laser scanner. ISPRS Journal of Photogrammetry and Remote Sensing 1999, 54(2), 148–152.
35. Stow, D. A. Reducing the effects of misregistration on pixel-level change detection. International Journal of Remote Sensing 1999, 20(12), 2477–2483.
36. Fuller, R. M.; Smith, G. M.; Devereux, B. J. The characterization and measurement of land cover change through remote sensing: Problems in operational applications. International Journal of Applied Earth Observation and Geoinformation 2003, 4, 243–253.
37. Zhou, W.; Troy, A. Development of an object-oriented framework for classifying and inventorying human-dominated forest ecosystems. International Journal of Remote Sensing, in review.
38. Chen, D.; Stow, D. A.; Gong, P. Examining the effect of spatial resolution and texture window size on classification accuracy: an urban environment case. International Journal of Remote Sensing 2004, 25(11), 2177–2192.
39. Thomas, N.; Hendrix, C.; Congalton, R. G. A comparison of urban mapping methods using high resolution digital imagery. Photogrammetric Engineering and Remote Sensing 2003, 69, 963–972.
40. Cushnie, J. L. The interactive effects of spatial resolution and degree of internal variability within land-cover type on classification accuracies. International Journal of Remote Sensing 1987, 8, 15–22.
41. Congalton, R. G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices; CRC/Lewis Press: Boca Raton, FL, 1999; p. 168.
42. Townshend, J. R. G.; Justice, C. O.; Gurney, C.; McManus, J. The impact of misregistration on change detection. IEEE Transactions on Geoscience and Remote Sensing 1992, 30, 1054–1060.
43. Justice, C. O.; Markham, B. L.; Townshend, J. R. G.; Kennard, R. L. Spatial degradation of satellite data. International Journal of Remote Sensing 1989, 10, 1539–1561.
Figure 1. The Gwynns Falls watershed includes portions of Baltimore City and Baltimore County, MD, USA, and drains into the Chesapeake Bay.
Figure 2. Image objects for change detection (Panel C) represent the intersections between the two classification maps (Panel A: 1999; Panel B: 2004). Panel D shows the change detection results obtained from the object-based approach.
Figure 3. The class hierarchy for object-based post-classification comparison change detection.
Figure 4. The classification result of the 5 land cover classes for the Gwynns Falls watershed in 1999.
Figure 5. The classification result of the 5 land cover classes for the Gwynns Falls watershed in 2004.
Figure 6. The land cover change map for the Gwynns Falls watershed from 1999 to 2004, which was derived from an object-based change detection approach.
Figure 7. Examples of the different patterns of land cover change revealed by the change map. Panel A shows an example in which a large patch of forestland was converted to development, whereas Panel B shows that the gain of tree canopy mainly came from the growth or expansion of existing trees and forest stands, in the form of numerous small patches. Please refer to Figure 6 for legend information.
Figure 8. A comparison of the change detection results of the same landscape using the two different approaches: the pixel-based post-classification comparison (Panel C) and the object-based post-classification comparison (Panel D). Panel A shows the 1999 Emerge image for the landscape, while Panel B shows the one for 2004.
Table 1. Class name and description for each of the 15 change categories and the no-change class.
Class Name | Class Description
NoChange | Land cover with no changes; land cover changes from building to other land cover types, and from pavement to building, were considered as highly unlikely, and thus were classified as no-change.
BareSoil-Building | Land cover type changes from bare soil in 1999 to buildings in 2004
CV-Building | Land cover type changes from CV in 1999 to buildings in 2004
FV-Building | Land cover type changes from FV in 1999 to buildings in 2004
BareSoil-Pavement | Land cover type changes from bare soil in 1999 to pavement in 2004
CV-Pavement | Land cover type changes from CV in 1999 to pavement in 2004
FV-Pavement | Land cover type changes from FV in 1999 to pavement in 2004
Pavement-BareSoil | Land cover type changes from pavement in 1999 to bare soil in 2004
CV-BareSoil | Land cover type changes from CV in 1999 to bare soil in 2004
FV-BareSoil | Land cover type changes from FV in 1999 to bare soil in 2004
Pavement-CV | Land cover type changes from pavement in 1999 to CV in 2004
BareSoil-CV | Land cover type changes from bare soil in 1999 to CV in 2004
FV-CV | Land cover type changes from FV in 1999 to CV in 2004
Pavement-FV | Land cover type changes from pavement in 1999 to FV in 2004
BareSoil-FV | Land cover type changes from bare soil in 1999 to FV in 2004
CV-FV | Land cover type changes from CV in 1999 to FV in 2004
CV: coarse textured vegetation; FV: fine textured vegetation.
Table 2. Summary of the classification accuracies for 1999 and 2004.
Land cover class | 1999 User's Acc. (%) | 1999 Producer's Acc. (%) | 2004 User's Acc. (%) | 2004 Producer's Acc. (%)
Building | 83.6 | 94.4 | 93.4 | 93.4
CV | 97.7 | 94.4 | 97.7 | 93.3
FV | 94.9 | 89.3 | 91.4 | 92.5
Pavement | 91.9 | 88.3 | 91.8 | 94.4
Bare soil | 90.0 | 100 | 95.9 | 94.0
Overall accuracy: 92.3% (1999); 93.7% (2004)
Kappa statistic: 0.899 (1999); 0.921 (2004)
Table 3. Error matrix of the six classes for the change map derived from pixel-based post-classification comparison, with user's and producer's accuracy for each class, overall accuracy and Kappa statistic.
Classified data \ Reference data | NoChange | ToBuilding | ToCV | ToFV | ToPavement | ToBareSoil | Row Total | User Acc. (%)
NoChange | 193 | 0 | 1 | 4 | 1 | 1 | 200 | 96.5
ToBuilding | 4 | 25 | 0 | 0 | 3 | 0 | 32 | 78.1
ToCV | 20 | 0 | 17 | 2 | 0 | 0 | 39 | 43.6
ToFV | 24 | 1 | 5 | 28 | 0 | 0 | 58 | 48.3
ToPavement | 0 | 4 | 0 | 0 | 33 | 0 | 41 | 80.5
ToBareSoil | 2 | 0 | 0 | 0 | 0 | 27 | 30 | 93.3
Column Total | 247 | 30 | 23 | 34 | 37 | 28 | 400 |
Producer Acc. (%) | 78.1 | 83.3 | 73.9 | 82.4 | 89.2 | 96.4 | |
Overall accuracy: 81.3%
Kappa statistic: 0.712
Table 4. Error matrix of the six classes for the change map derived from object-based post-classification comparison, with user's and producer's accuracy for each class, overall accuracy and Kappa statistic.
Classified data \ Reference data | NoChange | ToBuilding | ToCV | ToFV | ToPavement | ToBareSoil | Row Total | User Acc. (%)
NoChange | 192 | 1 | 1 | 2 | 1 | 3 | 200 | 96.0
ToBuilding | 0 | 27 | 0 | 2 | 0 | 1 | 30 | 90.0
ToCV | 11 | 0 | 22 | 3 | 0 | 0 | 36 | 61.1
ToFV | 5 | 0 | 1 | 38 | 1 | 0 | 45 | 84.4
ToPavement | 2 | 5 | 0 | 0 | 52 | 0 | 59 | 88.1
ToBareSoil | 1 | 0 | 0 | 0 | 0 | 29 | 30 | 96.7
Column Total | 211 | 33 | 24 | 45 | 54 | 33 | 400 |
Producer Acc. (%) | 91.0 | 81.8 | 91.7 | 84.4 | 96.3 | 87.9 | |
Overall accuracy: 90.0%
Kappa statistic: 0.854
Table 5. Summary of land cover and its changes in the Gwynns Falls watershed from 1999 to 2004.
Land cover | 1999 Area (ha) | 1999 Proportion (%) | 2004 Area (ha) | 2004 Proportion (%) | Relative Change in Area (ha) | Relative Change in Proportion (%)
Building | 1989.5 | 11.6 | 2055.6 | 12.0 | 66.1 | 0.4
CV | 5876.9 | 34.3 | 5915.8 | 34.5 | 38.9 | 0.2
FV | 4839.0 | 28.2 | 4616.0 | 26.9 | -223.0 | -1.3
Pavement | 4122.8 | 24.0 | 4442.6 | 25.9 | 319.8 | 1.9
Bare soil | 321.0 | 1.9 | 119.2 | 0.7 | -201.8 | -1.2
Table 6. Land cover changes from 1999 to 2004 (ha), derived from the object-based post-classification comparison method.
From (1999) \ To (2004) | Building | Pavement | Bare soil | CV | FV | Row Total
Building | - | - | - | - | - | 0
Pavement | - | - | 10.00 | 102.35 | 0.39 | 112.74
Bare soil | 24.22 | 112.49 | - | 3.88 | 133.07 | 273.66
CV | 30.36 | 93.56 | 25.47 | - | 115.09 | 264.48
FV | 11.55 | 226.45 | 36.40 | 197.18 | - | 471.58
Column Total | 66.13 | 432.50 | 71.87 | 303.41 | 248.55 |
Relative Change | 66.13 | 319.76 | -201.79 | 38.93 | -223.03 |
