Article

Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

1. Division of Polar Ocean Environment, Korea Polar Research Institute, 26 Songdomiraero, Yeonsugu, Incheon 406-840, Korea
2. Department of Marine Geosciences, University of Miami, 4600 Rickenbacker Causeway, Miami, FL 33149, USA
3. Satellite Information Application Center, Korea Aerospace Research Institute, 169-84 Gwahakro, Yuseonggu, Daejeon 305-333, Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2015, 7(7), 8563-8585; https://doi.org/10.3390/rs70708563
Submission received: 8 February 2015 / Revised: 14 June 2015 / Accepted: 25 June 2015 / Published: 7 July 2015
(This article belongs to the Special Issue Towards Remote Long-Term Monitoring of Wetland Landscapes)

Abstract:
The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We processed quad-pol data using the Hong & Wdowinski four-component decomposition, which accounts for double bounce scattering in the cross-polarization signal. The calculated decomposition images consist of four scattering mechanisms (single, co- and cross-pol double, and volume scattering). We applied an object-oriented image analysis approach to classify vegetation types with the decomposition results. We also used a high-resolution multispectral optical RapidEye image to compare statistics and classification results with Synthetic Aperture Radar (SAR) observations. The calculated classification accuracy was higher than 85%, suggesting that the TerraSAR-X quad-pol SAR signal had a high potential for distinguishing different vegetation types. Scattering components from SAR acquisition were particularly advantageous for classifying mangroves along tidal channels. We conclude that the typical scattering behaviors from model-based decomposition are useful for discriminating among different wetland vegetation types.

Graphical Abstract

1. Introduction

Tropical and subtropical wetlands are among the most productive ecosystems on Earth, providing numerous ecosystem services, including critical habitat for a variety of fauna and flora, energy and nutrients for coral reefs, and protection of near-shore areas from natural disasters such as storm surges and tsunamis [1,2]. Tropical and subtropical wetlands include both inland freshwater and coastal saltwater wetland types. The Everglades, which is a World Heritage Site, International Biosphere Reserve, and a Wetland of International Importance, is the largest natural region of subtropical wilderness in the United States. Over the past century, the Everglades wetlands have been threatened by severe environmental stresses induced by climate change, human population growth, urban expansion, and agricultural and other land conversion. In recognition of its global importance, various restoration plans have been authorized to protect the Everglades. Protecting the wetlands requires detailed assessments of their vegetation distribution in terms of vegetation types and vegetation changes over time.
Previous vegetation classifications of the Everglades were mainly conducted using airborne- and space-based images. Conventional visual interpretation techniques with aerial photographs and optical satellite images were adopted to generate detailed vegetation maps [3,4]. A related technique was the use of stereo-plotters with color infrared aerial photography to classify the vegetation in Water Conservation Area 2A (WCA-2A) of the Everglades [5]. Hyperspectral imagery has been regarded as a powerful tool for vegetation mapping due to its fine spectral resolution. Data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) have been used to map vegetation over the Everglades using spectral angle mapper and neural network classifiers [6,7]. Fuller [8] suggested that IKONOS imagery was likely to be useful for detecting large, dense stands of invasive trees in Everglades National Park (ENP). It is important to note that most of these approaches have relied on airborne optical imagery, which is costly, time-consuming, and limited by weather conditions.
Remotely sensed Synthetic Aperture Radar (SAR) observations have been widely used for monitoring tropical and subtropical wetlands because they can collect images through clouds, rain, or fog. In addition, SAR images are sensitive to biomass and flooded vegetated structures [9,10]. Wetland interferometric SAR (InSAR) techniques, which can measure water-level changes with high spatial resolution in aquatic environments with emergent vegetation, have been used to detect surface flow patterns over wetland areas [11,12,13,14]. Interferometric coherence, phase, and amplitude were used by Ramsey et al. [15] in conjunction with a coastal land classification map to study seasonal changes in the intensity of vegetation returns. Several studies have reported successful wetland vegetation mapping of various wetland areas [16,17,18,19,20]. However, because most of these classification studies relied on multi-temporal single-polarization radar observations, they showed a limited capability to discriminate vegetated wetlands compared with results from multi-spectral images. As an alternative approach, data fusion of multi-sensor and multi-temporal optical and SAR data has been reported to improve wetland characterization [21]. As multi-polarimetric SAR observation systems have been developed, more backscattering coefficient information over wetlands has become available for vegetation classification mapping. As indicated in previous studies, the richer backscattering information of multi-polarimetric SAR observations supports more accurate classification of vegetated wetlands [22,23,24,25,26].
Polarimetric SAR (PolSAR) decompositions are useful for discriminating and mapping the Earth’s surfaces according to scattering behaviors [27,28]. Freeman and Durden [29] successfully decomposed quadruple SAR data into three components: Single bounce, double bounce, and volume scattering. A fourth, helix, component was added to Freeman and Durden’s decomposition by Yamaguchi et al. [30,31] to resolve the anomalous power problem in the decomposition results. Similarly, several mathematical approaches were proposed in recent studies to resolve anomalous behavior in decomposition maps [32,33,34]. More recent studies indicate that conventional scattering theories with simple double bounce and volume scattering models are not sufficient for explaining microwave scattering behaviors in wetland environments [35,36].
Our study was aimed at examining the applicability of four-component decompositions for wetland vegetation classification. We based our study on quad-polarimetric data from TerraSAR-X (TSX) (X-band, 3.1 cm), which were collected during the Dual Receive Antenna (DRA) Campaign in 2010. We chose this dataset because the TSX decomposition is more sensitive to vegetation variation than the C-band Radarsat-2 decomposition [37].

2. Study Area

The Everglades are vast and unique subtropical wetlands that cover most of southern Florida. Anthropogenic changes in the past century have severely impacted the drainage pattern of the wetlands and destroyed a significant part of the natural wetland ecosystem. To preserve and restore this fragile wetland environment, the Comprehensive Everglades Restoration Plan was established in 2000. An important part of the restoration plan includes vegetation recovery assessment, which is evaluated using vegetation types. For our study area we focused on Tarpon Bay in the coastal wetland area, which is located in the southwestern section of the ENP in southern Florida (Figure 1a). We chose this area because it lies in the transition zone between salt- and freshwater vegetation ecosystems and has been affected by sea level rise and anthropogenic changes to the Everglades hydrological system.
The vegetation in the study area comprises mainly freshwater swamps and saltwater marshes. Sawgrass (herbaceous vegetation) and hardwood hammock (swamp forest on tree islands) cover the freshwater swamps. The saltwater marshes consist mainly of mangrove forests of variable height. Whereas tall mangrove trees are distributed in the southwestern part of ENP, especially along tidal channels, short mangrove vegetation is found in many places. In the transition zone, the vegetation was classified as prairies, marshes, and scrub [38]. The optical color composite images of Landsat-7 ETM+ [39] and RapidEye clearly show an inhomogeneous vegetation pattern, affected by the location of the tidal channels (Figure 1b).
Figure 1. (a) Location of the study area in the western Everglades, shown with a Landsat ETM+ image rendered as a true-color composite [39]. The frames mark the swath locations of data acquired by the X-band TerraSAR-X SAR (16 April 2010) and RapidEye optical (3 December 2010) satellites. The green frame indicates the location of the main study area. (b) RapidEye multispectral true-color composite image over the study area. (c) Pauli decomposition of the TerraSAR-X SAR image as a color composite: HH-VV (red), HH+VV (blue), and HV (green).

3. Data

Our classification study relied on two data types: space-based Synthetic Aperture Radar (SAR) and multispectral optical observations. TerraSAR-X is an advanced SAR satellite, which was launched on 15 June 2007. In its Dual Receive Antenna (DRA) mode, the satellite transmits a radar signal using the full antenna area and receives signal returns using two independent channels, into which the antenna is divided electrically. We used TerraSAR-X quad-polarimetric data from 16 April 2010, acquired with the DRA StripMap mode. Due to the multi-polarization data acquisition, the swath width was relatively narrow, extending up to 15 km, with a spatial resolution of 1.2 m in range and 6.6 m in azimuth.
RapidEye is a satellite constellation mission, with five satellites traveling along the same orbit. Each satellite has an equally calibrated, identical pushbroom sensor and provides high-resolution multispectral imagery (nadir ground sample distance: 6.5 m; orthorectified resampled pixel size: 5 m) in five optical bands corresponding to the blue visible (440–510 nm), green visible (520–590 nm), red visible (630–685 nm), red-edge (690–730 nm), and near infrared (760–850 nm) portions of the electromagnetic spectrum. The red-edge band, which is sensitive to changes in chlorophyll content [40,41], is very useful for classifying vegetation types [42,43,44]. We used a RapidEye image collected on 3 December 2010. This image provided the best data among the RapidEye images available for this region. The image date was near the beginning of the dry season, which extends from November to May. Although the RapidEye image was acquired several months after the TSX quad-pol data, both datasets were acquired during the dry season under similar environmental conditions and we did not anticipate significant changes in the vegetation between the two acquisition dates. Technical details of the two datasets are described in Table 1.
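The red-edge and NIR bands feed directly into vegetation indices such as NDVI, which is used as a classification feature later in this study. As a minimal sketch (the reflectance values below are illustrative, not taken from the actual RapidEye scene):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # eps guards against division by zero over water or shadow pixels.
    return (nir - red) / (nir + red + eps)

# Illustrative reflectance pairs: dense vegetation vs. open water.
values = ndvi([0.45, 0.05], [0.05, 0.04])
```

Dense vegetation yields NDVI near 0.8 here, while open water falls near zero, which is the contrast exploited when separating vegetated from non-vegetated objects.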
Table 1. Characteristics of the TerraSAR-X (TSX) and RapidEye data used in this study.
TerraSAR-X:
  Acquisition date: 16 April 2010
  Wavelength: 3.1 cm
  Carrier frequency: X-band (9.6 GHz)
  Pulse repetition frequency: 2950 Hz
  ADC sampling rate: 164.8 MHz
  Polarization: Quad-pol
  Flight direction: Ascending
  Incidence angle: 32.6 deg
  Azimuth pixel spacing: 2.40 m
  Range pixel spacing: 0.91 m

RapidEye:
  Acquisition date: 3 December 2010
  Spectral bands: Blue 440–510 nm; Green 520–590 nm; Red 630–685 nm; Red Edge 690–730 nm; Near Infrared (NIR) 760–850 nm
  Incidence angle: 7.11 deg
  Geometric resolution: 6.5 m (resampled to 5 m)
  Dynamic range: 12 bits
Figure 2. Photographs obtained during the helicopter survey. (a) Mixed vegetation with prairie (sawgrass) and forest (mangrove); (b) Mixed vegetation with mostly prairie (sawgrass) and small patches of forest (mangrove); (c) Scrub vegetation with short mangrove and buttonwood; (d) Typical mangrove-dominated forest.
The Florida Coastal Everglades Long Term Ecological Research (FCE-LTER) program was established by the National Science Foundation in May 2000 in southern Florida. The FCE-LTER project provides a wealth of data and data products, including a vegetation map and digital database of South Florida’s national parks [38]. The detailed vegetation database, which is maintained in a geographic information system, was developed by the Center for Remote Sensing and Mapping Science at The University of Georgia and the South Florida Natural Resources Center, Everglades National Park [3,45]. Conventional visual interpretation techniques were applied to optical airborne and spaceborne observations to generate the vegetation map. Although the map is based on remote sensing data acquired two decades ago, it is still widely used as a reference map and remains the only map available for the area.
We used the Vegetation Map and Digital Database of South Florida’s National Park Service Lands as reference maps to evaluate our classification results. However, because the information represented in the map was dated, we examined possible vegetation changes by comparing multi-temporal Landsat-TM images collected from 1994 to 2011. In addition, we conducted a helicopter field survey in 2014 to verify the current vegetation distribution in some representative areas (Figure 2).

4. SAR Decomposition

Polarimetric SAR (PolSAR) decomposition is a common method for characterizing the Earth’s surface. The Pauli decomposition is widely used as a simple method for mapping the surface according to three scattering mechanisms: single bounce, double bounce, and volume scattering [27,46,47]. A three-component scattering decomposition approach proposed by Freeman and Durden has been successfully applied to decompose quadruple polarized SAR data under reflection symmetry conditions [29]. Yamaguchi et al. [30,31] added a fourth helix component to their decomposition to account for non-reflection symmetry conditions. Several other studies have been performed to estimate the volume scattering component under non-reflection symmetry conditions [48,49,50], and an extended volume scattering model was proposed to consider randomly oriented diplane scatterers [51]. To resolve anomalous values generated by the previous three- and four-component decomposition methods, mathematical operations on the decomposed coherency matrix have also been studied [32,33,34].
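The Pauli decomposition mentioned above can be sketched compactly: three power images are formed from the complex scattering-matrix elements, matching the channel assignment used in Figure 1c (HH-VV, HH+VV, HV). A minimal illustration, assuming the single-look complex channels are available as NumPy arrays:

```python
import numpy as np

def pauli_powers(s_hh, s_hv, s_vv):
    """Pauli decomposition powers from complex scattering-matrix elements.

    single ~ |HH + VV|^2 / 2  (odd-bounce / surface scattering)
    double ~ |HH - VV|^2 / 2  (even-bounce / dihedral scattering)
    volume ~ 2 |HV|^2         (cross-pol / volume scattering)
    """
    s_hh, s_hv, s_vv = (np.asarray(a, dtype=complex) for a in (s_hh, s_hv, s_vv))
    single = np.abs(s_hh + s_vv) ** 2 / 2.0
    double = np.abs(s_hh - s_vv) ** 2 / 2.0
    volume = 2.0 * np.abs(s_hv) ** 2
    # The three powers sum to the total span |HH|^2 + |VV|^2 + 2|HV|^2.
    return single, double, volume
```

A trihedral-like target (HH = VV, HV = 0) puts all power into the single-bounce channel, while a dihedral-like target (HH = -VV) puts it into the double-bounce channel.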
We used the Hong and Wdowinski (H&W) [36] scattering component decomposition approach to classify wetland vegetation. This four-component decomposition was developed by two of the authors of the current paper to extract a double bounce component from the cross-pol signal, motivated by new SAR phase observations (interferograms) in tropical and subtropical wetlands [36]. According to common radar scattering theory, wetland InSAR works because of double-bounce scattering components, which reflect inundated conditions beneath the vegetation. However, an almost identical fringe pattern indicating surface water level changes has been reported in both co- and even cross-polarizations in the Everglades wetlands [52], signifying that the cross-polarization signal also samples the water surface beneath the vegetation. To explain these phase observations from the cross-pol, we adopted a rotated dihedral model, which is the simplest scattering mechanism that accounts for scattering in the cross-pol signal. The decomposition extracts the co-polarization double bounce components, calculated with the conventional polarimetric decomposition approach, alongside the cross-polarization double bounce components. The full description and mathematical formulation of the four-component scattering decomposition model are given by Hong and Wdowinski [36]. That study indicated that the decomposition method revealed distinctions in land cover beyond those obtained with the Yamaguchi decomposition [37]. Thus, we chose the H&W decomposition method to evaluate its performance for classifying wetland vegetation types in the Everglades. Because Yamaguchi’s decomposition is widely regarded as the most popular polarimetric SAR decomposition method and its applicability has been demonstrated, we also compared our classification results with those obtained using Yamaguchi’s decomposition based on the rotated coherency matrix [32] to evaluate the utility of our decomposition for wetland vegetation classification.
Figure 3. Hong & Wdowinski decomposition analysis of TerraSAR-X (TSX) quadruple polarimetric data acquired over the study area. (a) Single bounce component; (b) Double bounce component from co-pol; (c) Double bounce component from cross-pol; (d) Double bounce component from both the co- and cross-pol; (e) Volume scattering component; (f) Color composite image using our decomposition: blue = single bounce, red = double bounce (both from the co- and the cross-pol), and green = volume scattering.
We used a 3 × 3 coherency matrix, extracted with PolSARpro software [28], to derive the four scattering components. The coherency matrix was computed using a multi-look process with factors of 1 × 2 in the range and azimuth directions, respectively. To suppress speckle noise in the SAR image, a relatively large 11 × 11 window was applied to estimate the ensemble average, at the cost of reduced spatial resolution. Figure 3 presents the results of the H&W decomposition analysis using the TSX data. The TSX quad-pol decomposition is more sensitive to vegetation variation than our previous results using Radarsat-2 quad-pol C-band observations [36]. The decomposition results from our previous research with Radarsat-2 data over Tarpon Bay showed dominant volume scattering throughout the image [36], with most of the double bounce scattering occurring over sawgrass and some mangroves. Hence, we could only roughly distinguish sawgrass from scrub at relatively low resolution. In contrast, the decomposition of the TSX dataset shows large color variability resulting from the shorter wavelength of the X-band SAR signal, which is more sensitive to vegetation variation at high resolution [37]. These more detailed features of vegetation distribution can be useful for classifying wetland vegetation.
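The 11 × 11 ensemble averaging described above is, in essence, a boxcar (moving-average) filter applied to each complex element of the coherency matrix. A simplified stand-in for the PolSARpro averaging step (the edge handling here is an illustrative choice, not the software's exact behavior):

```python
import numpy as np

def boxcar(image, win):
    """win x win moving-average (boxcar) filter with 'same'-sized output.

    In PolSAR processing this averaging would be applied independently to
    each complex element T_ij of the coherency matrix; replicate padding at
    the borders is a simplification.
    """
    image = np.asarray(image, dtype=complex)
    pad = win // 2
    padded = np.pad(image, pad, mode="edge")
    kernel = np.ones(win) / win
    # Separable mean filter: average along rows, then along columns.
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)
```

Averaging a constant image leaves it unchanged, while speckle-like fluctuations are smoothed toward the local mean, which is exactly the trade of radiometric stability against spatial resolution noted in the text.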

5. Vegetation Changes in Everglades and Selection of Vegetation Types

To assess recent vegetation conditions, we compared various available materials, including Landsat TM time-series imagery, a RapidEye image, a vegetation map, and aerial photographs acquired during a helicopter survey. A visual comparison of Landsat TM time-series images from 1 April 1994, 23 April 2008, 25 December 2010, and 10 November 2011 showed that the general distribution of vegetation had hardly changed in the Tarpon Bay region (Figure 4). Most vegetation typically adapts to the prevailing environmental conditions, except for indicator species, which are very sensitive to environmental changes. We concluded that the distribution of coarse vegetation types in our study area appeared to be constant over the past two decades.
We then compared Landsat TM data from 25 December 2010 with the RapidEye imagery from 3 December 2010 by overlaying the vegetation map on both images. Although Landsat TM images have a limited resolution for identifying detailed vegetation species, the higher-resolution RapidEye image allowed us to visually interpret vegetation conditions in more detail. Even though the vegetation near the Tarpon Bay area appeared unchanged, we could not rely completely on the vegetation map. In addition, some vegetation species could not be distinguished even in the photographs acquired during the helicopter survey. We therefore simplified the vegetation types in our classification to three classes: forest, scrub, and prairie.
Figure 4. A time series of Landsat TM data rendered as false-color composite images overlaid with the vegetation map created in 1999. (a) 1 April 1994; (b) 23 April 2008; (c) 25 December 2010; (d) 10 November 2011. The intense red color along the tidal channel corresponds well with the mangrove forest, and the color ranges are consistent across the time series.
Figure 5. Field sample sites for training (red) and reference (yellow) shown on the RapidEye image (true-color combination using red, green, and blue visible bands).
For these three vegetation types, we set up 145 sample sites based on the RapidEye satellite image, where the characteristic features could be recognized by comparison to the reference vegetation map and photographs from the helicopter survey. These sites were then separated randomly into training (74 sites) and reference samples (71 sites) (Figure 5).
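The random partition of sample sites into training and reference subsets can be sketched as follows (the site IDs and seed here are illustrative; the study's actual split used 145 field-verified sites):

```python
import random

def split_sites(site_ids, n_train, seed=0):
    """Randomly split sample-site IDs into training and reference subsets."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = list(site_ids)
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

# 145 sites -> 74 training and 71 reference, as in the study.
train, reference = split_sites(range(145), 74)
```

Keeping the two subsets disjoint ensures the accuracy assessment in Section 6.3 is not inflated by evaluating on sites the classifier was trained on.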

6. Wetland Vegetation Classification

6.1. Method

Quad-pol-based vegetation classification has previously been conducted with two different data types. One type relied on radar backscatter coefficients [53,54,55,56,57] and the other used decomposition maps, which are based on physical scattering mechanisms [23,27,29,30,31,34,46,47,48]. In this study, we compared classification results derived from PolSAR decomposition with those derived from optical satellite image data. We also compared classification results using Yamaguchi’s decomposition method [32] to evaluate the benefit of our decomposition for wetland vegetation classification.
We applied an object-oriented approach to classify vegetation cover. This approach is based on classifying objects or image segments that are delineated as homogeneous units with similar spectral characteristics (this delineation process is called segmentation), rather than classifying individual pixel values. Segmentation enables the acquisition of a variety of textural and spatial features, such as shape, in addition to spectral values, resulting in improved classification accuracy [58]. SAR images usually have speckle noises or relative roughness compared to optical images. Similar artifacts of the SAR signal, which can be found in the scattering component results from TSX quadruple data, prevent the characterization of the surface type into specific classes using only pixel values. Thus, we assumed that the object-oriented classification would be a better approach than a pixel-based classification. The advantages of object-oriented classification using high-resolution image data have been reported by many studies [59,60,61,62,63]. However, determining appropriate segmentation parameters is a time-consuming process, generally based on trial and error evaluation to derive homogeneous image segments representing similar thematic units such as vegetation types [62,64,65].
We applied a multi-resolution segmentation method with eCognition Developer 8 [66]. First, different parameters for the scale factor, shape, and compactness were evaluated through iterative trial and error (Table 2). For object homogeneity, eCognition Developer adopts three criteria: scale, shape/color, and compactness/smoothness. The shape parameter defines the percentage by which the spectral values of the image layers contribute to the overall homogeneity criterion; because shape and color are complementary, the assigned shape value automatically determines the color criterion. In addition to spectral information, object homogeneity is optimized with respect to object shape, as defined by the compactness parameter. Compactness is useful when the image objects of interest are rather compact and are separated from non-compact objects only by a relatively weak spectral contrast.
Table 2. Parameter settings for image segmentation.
Segmentation Type | Image Source | Image Layers | Scale | Shape | Compactness
Type O | RapidEye | Blue, Green, Red, Red-Edge, NIR, NDVI | 50 | 0.1 | 0.5
Type S | TSX | Single, Double, Double from co-pol, Double from cross-pol, Volume | 50 | 0.1 | 0.5
Type M | TSX & RapidEye | Blue, Green, Red, Red-Edge, NIR, NDVI, Single, Double, Double from co-pol, Double from cross-pol, Volume | 50 | 0.1 | 0.5
We then generated three types of segmentation outputs based on the different optical and SAR datasets. One output relied on optical RapidEye’s spectral characteristics (segmentation type: O). A second output was based on processed SAR decomposition components (segmentation type: S). The third output was calculated with both TSX and RapidEye features (segmentation type: M).
We configured image segmentation settings to build object boundaries around sample sites for the training and reference samples. In this way, sample sites were delineated as image objects for use in training and testing of the supervised classification.
After image segmentation, we conducted a supervised classification with a nearest neighbor classifier, which uses a set of training samples from different classes to assign membership values. We used 74 sample sites for training (24 for forest, 23 for scrub, 18 for prairie, and nine for water). The membership values lie between 0 and 1, depending on the image object’s feature space distance to its nearest neighbor. A membership value of 1 is assigned when the image object is identical to a sample. If the image object differs from the sample, its membership value decreases as a fuzzy function of the feature space distance to the nearest sample of that class. The user can select the features to be considered for the feature space. For an image object to be classified, only the nearest sample is used to evaluate its membership value. The effective membership function at each point in the feature space is a combination of the fuzzy functions over all samples of that class. When the membership function is described as one-dimensional, it relates to a single feature [67].
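The nearest neighbor membership rule described above can be sketched as follows. The Gaussian-like decay used here is an illustrative assumption (eCognition's actual membership function differs in detail), and the feature vectors are hypothetical:

```python
import math

def nn_membership(obj_features, samples_by_class, scale=1.0):
    """Fuzzy class memberships from the distance to the nearest training sample.

    Membership is 1 when the object coincides with a sample and decays with
    feature-space distance via exp(-(d / scale)^2); this decay form is an
    assumption for illustration, not eCognition's exact function.
    """
    memberships = {}
    for cls, samples in samples_by_class.items():
        d_min = min(math.dist(obj_features, s) for s in samples)
        memberships[cls] = math.exp(-((d_min / scale) ** 2))
    return memberships

# Hypothetical 2-D feature space (e.g., mean NDVI and mean volume scattering).
training = {"forest": [(0.8, 0.6)], "prairie": [(0.4, 0.2)], "water": [(0.1, 0.05)]}
```

An object is then assigned to the class with the highest membership value, so the classifier reduces to a 1-nearest-neighbor decision with a soft confidence score attached.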
Table 3. Classification scenarios.
Scenario | Image Layer Features Used for Classification (Mean Object Value) | Segmentation Type
Scenario 1 | RapidEye’s blue, green, red, red-edge, NIR, and NDVI | Type O
Scenario 2 | RapidEye’s blue, green, red, red-edge, NIR, and NDVI, plus TerraSAR-X’s single, double, double from co-pol, double from cross-pol, and volume | Type O
Scenario 3 | TerraSAR-X’s single, double, double from co-pol, double from cross-pol, and volume | Type O
Scenario 4 | TerraSAR-X’s single, double, double from co-pol, double from cross-pol, and volume | Type S
Scenario 5 | TerraSAR-X’s single, double, volume, and helix from the Yamaguchi decomposition | Type O
We applied five classification scenarios, which adopted different nearest neighbor feature spaces as described in Table 3. Scenario 1 used only optical image information with the five multispectral bands of RapidEye (blue, green, and red visible, red-edge, and near-infrared) and its normalized difference vegetation index (NDVI), based on segmentation type O. Scenario 2 used all available features from the TSX and RapidEye image layers, with segmentation type O. Scenario 3 adopted the five TSX decomposition component layers (single, double, double from co-pol, double from cross-pol, and volume), but was based on segmentation type O, in which the image objects were created with RapidEye’s spectral features. Since differentiating vegetation into forest, scrub, and prairie using optical satellite imagery is relatively easy, we assumed that image objects created by segmentation type O would be reliable as homogeneous vegetation units. Hence, we could examine how the SAR polarimetric decomposition components were related to vegetation types by comparing these three combinations of input layers. Scenario 4 used the five TSX decomposition component layers based on segmentation type S, in which the image objects were created with TSX data only. This scenario provided the contrasting case of using only SAR features. Finally, Scenario 5 adopted four TSX decomposition results using Yamaguchi’s method with segmentation type O, which we then compared with results from Scenario 3.
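The five scenarios can be captured as a simple configuration mapping each one to its feature space and segmentation type (the layer names are shorthand labels for this sketch, not eCognition identifiers):

```python
# Feature-space configuration for the five classification scenarios (Table 3).
OPTICAL = ["blue", "green", "red", "red_edge", "nir", "ndvi"]
HW_SAR = ["single", "double", "double_co", "double_cross", "volume"]
YAMAGUCHI = ["single", "double", "volume", "helix"]

SCENARIOS = {
    1: {"features": OPTICAL,          "segmentation": "O"},
    2: {"features": OPTICAL + HW_SAR, "segmentation": "O"},
    3: {"features": HW_SAR,           "segmentation": "O"},
    4: {"features": HW_SAR,           "segmentation": "S"},
    5: {"features": YAMAGUCHI,        "segmentation": "O"},
}
```

Laying the design out this way makes the pairwise contrasts explicit: 1 vs. 3 isolates the sensor (same objects, different features), 3 vs. 4 isolates the segmentation, and 3 vs. 5 isolates the decomposition method.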

6.2. Segmentation Results

We tested three segmentation types: one calculated from the five SAR decomposition components, one from RapidEye’s spectral bands, and one from both TSX and RapidEye features. The mixed use of the five TSX features and six RapidEye spectral bands produced segmentation results very similar to those calculated with just the RapidEye spectral bands. Thus, we continued the analysis with only two types of segmentation output (Type O and Type S) (Table 2).
The segmentation based on the SAR features operated at a coarse resolution level and did not divide the vegetation units in much detail, whereas the segmentation based on RapidEye’s spectral features produced fine units despite using the same parameter settings (Figure 6 and Figure 7). In segmentation type S, the segments were relatively well divided near the Tarpon Bay areas, where mangrove forests appeared along the water and sawgrass occurred on the inward side (Figure 6a,b). However, the image objects in inland areas, where the mangroves and buttonwood scrubs were mixed with sawgrass, were not well segmented (Figure 6c,d). In the case of segmentation type O, the image objects were smaller and quite well divided, representing different vegetation characteristics (Figure 7).
Figure 6. Segmentation results of type S displayed on the TSX image using false-color composite image decomposition layers (red = double bounce scattering, green = volume scattering, and blue = single bounce scattering in [a] and [c]) and on the RapidEye image using false-color composite images (red = near infrared, green = red visible, and blue = green visible bands in [b] and [d]); (a,b) are near tidal canals; (c,d) are near inland areas.
Figure 7. Segmentation results of type O displayed on the TSX image using false-color composite image decomposition layers (red = double bounce scattering, green = volume scattering, and blue = single bounce scattering in [a] and [c]) and on the RapidEye image using false-color composite images (red = near infrared, green = red visible, and blue = green visible bands in [b] and [d]); (a,b) are near tidal canals; (c,d) are near inland areas.

6.3. Classification Results and Accuracy Assessment

Results differed among the five classification scenarios. When only optical multispectral characteristics were used (Scenario 1), the forest class was mostly assigned along water flows, corresponding well with areas of high vegetation vigor; the prairie class appeared mostly in the inland areas (and was particularly well recognized in the area marked B in Figure 8d); and the scrub class covered the remaining areas. When RapidEye and TSX features were used together (Scenario 2), the classification results were very similar to those of Scenario 1. When SAR features were used with segmentation type O (Scenario 3), wider areas near tidal canals were assigned to the forest class. For example, vegetation along the narrow tidal canals was classified as forest in Scenario 3 but as scrub in Scenario 1 (particularly recognizable in the area marked C in Figure 8d). These areas showed very high vigor in the original RapidEye image. Thus, we interpreted these areas as the edges of mangrove forest, which were classified as scrub in Scenario 1 using multispectral features and as forest in Scenario 3 using SAR features. In this case, the SAR classification was advantageous for identifying the continuous mangrove forest. In Scenario 3, more areas were classified as prairie in the northern part of the study area (particularly in area A in Figure 8d). The classification results using SAR features only (Scenario 4) show that the forest class is somewhat exaggerated along the tidal canals due to the coarse segmentation; narrow mangrove forests were not classified as forest, and the scrub and prairie classes appeared scattered across the northern part of the study area. In Scenario 5, using Yamaguchi’s decomposition method, the overall classification pattern was similar to that of Scenario 3 (Figure 8c); however, more areas were classified as water in the northern part of the study area.
In summary, the classification results of the five scenarios showed some differences but shared common vegetation distribution patterns: forests along the tidal canals, prairies behind the forests toward the land, and scrubs distributed widely in the inland area.
Figure 8. Classification results. (a) Scenario 1 (brown = forest, yellow = scrubs, green = prairie, blue = water); (b) Scenario 2; (c) Scenario 3; (d) RapidEye image with false-color combination (red = NIR, green = red visible, and blue = green visible); (e) Scenario 4; and (f) Scenario 5.
We calculated the classification error matrix (Table 4) using 71 field sample sites (25 for forest, 22 for scrub, 21 for prairie, and three for water). The classification by RapidEye’s spectral features (Scenario 1) showed the highest overall classification accuracy (95.8%), followed by the classification using SAR features with segmentation type O (Scenario 3; 93.0%). The classification from mixed optical and radar features (Scenario 2) was somewhat less successful (88.7%). The classification based on SAR features only with segmentation type S (Scenario 4) had the lowest accuracy among the first four scenarios (87.3%), and the prairie class was misclassified more often than the forest and scrub classes. The accuracy difference between Scenarios 2 and 4 is not statistically significant. Scenario 5, based on Yamaguchi’s method, resulted in 84.5% overall accuracy, the lowest among all scenarios.
Table 4. Accuracy of each scenario by class (F = forest, S = scrub, P = prairie, N = Not classified, PA = producer’s accuracy [%], UA = user’s accuracy [%], OA = overall accuracy [%]).
Scenario 1
| Classified \ Reference | F | S | P | W | PA (%) |
|---|---|---|---|---|---|
| F | 22 | | | | 100.0 |
| S | | 25 | | | 100.0 |
| P | | | 21 | | 100.0 |
| W | | | | | 0.0 |
| N | | | | 3 | |
| UA (%) | 100.0 | 100.0 | 100.0 | 0.0 | |

OA = 95.8

Scenario 2
| Classified \ Reference | F | S | P | W | PA (%) |
|---|---|---|---|---|---|
| F | 21 | | 1 | | 95.5 |
| S | 1 | 24 | 2 | | 88.9 |
| P | | | 18 | | 100.0 |
| W | | | | | 0.0 |
| N | | 1 | | 3 | |
| UA (%) | 95.5 | 96.0 | 85.7 | 0.0 | |

OA = 88.7

Scenario 3
| Classified \ Reference | F | S | P | W | PA (%) |
|---|---|---|---|---|---|
| F | 22 | | | 1 | 95.6 |
| S | | 24 | 1 | 1 | 92.3 |
| P | | 1 | 20 | 1 | 90.9 |
| W | | | | | 0.0 |
| N | | | | | |
| UA (%) | 100.0 | 96.0 | 95.2 | 0.0 | |

OA = 93.0

Scenario 4
| Classified \ Reference | F | S | P | W | PA (%) |
|---|---|---|---|---|---|
| F | 21 | 1 | 1 | 1 | 87.5 |
| S | 1 | 22 | 1 | 1 | 88.0 |
| P | | 1 | 19 | 1 | 90.5 |
| W | | 1 | | | 0.0 |
| N | | | | | |
| UA (%) | 95.5 | 88.0 | 90.5 | 0.0 | |

OA = 87.3

Scenario 5
| Classified \ Reference | F | S | P | W | PA (%) |
|---|---|---|---|---|---|
| F | 20 | 3 | 2 | 1 | 76.9 |
| S | 2 | 22 | 1 | 1 | 84.6 |
| P | | | 18 | 1 | 94.7 |
| W | | | | | 0.0 |
| N | | | | | |
| UA (%) | 90.9 | 88.0 | 85.7 | 0.0 | |

OA = 84.5
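The accuracy measures in Table 4 follow directly from each error matrix. A minimal sketch using counts consistent with Scenario 3 (off-diagonal cell placements are inferred from the table, but all row and column sums match):

```python
import numpy as np

# Error matrix: rows = classified classes (F, S, P, W), columns = reference.
m = np.array([
    [22,  0,  0, 1],
    [ 0, 24,  1, 1],
    [ 0,  1, 20, 1],
    [ 0,  0,  0, 0],
], dtype=float)

diag = np.diag(m)
row_sums = m.sum(axis=1)
col_sums = m.sum(axis=0)

# Accuracies as laid out in Table 4 (PA along rows, UA along columns);
# empty classes yield 0 instead of a division error.
pa = 100 * np.divide(diag, row_sums, out=np.zeros(4), where=row_sums > 0)
ua = 100 * np.divide(diag, col_sums, out=np.zeros(4), where=col_sums > 0)
oa = 100 * diag.sum() / m.sum()   # 66 of 71 sites correct -> 93.0%
```

The water class illustrates the degenerate case: with no correct assignments, both its accuracy values are 0.0, as discussed below.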
In all scenarios, none of the water reference sites were classified correctly; consequently, the producer’s and user’s accuracies were both 0.0%. This likely resulted from the very few training and reference samples we included for water, as our main focus was the potential of SAR features for vegetation classification. However, we examined the three small water reference sites to determine whether the SAR features were more advantageous for water detection. Interestingly, the water areas were left unclassified in Scenario 1 (multispectral features only) but were classified as vegetation types (each site differently, as forest, scrub, and prairie) in the scenarios based on SAR features (Scenarios 3 and 4). We suppose that this misclassification could be caused by the short-wavelength X-band SAR signal reflecting from a rough water surface surrounded by vegetation, which may mimic volume scattering.

7. Discussion and Conclusions

In this study, we examined the usefulness of quad-pol X-band TerraSAR-X data for vegetation mapping over the Everglades wetland. We applied the H&W four-component decomposition model to extract scattering behavior for characterizing the wetland vegetation. We hypothesized that each vegetation type is characterized by typical scattering behavior detectable by the decomposition approach, which would be helpful for vegetation classification. We also compared the classification results against those of Yamaguchi’s decomposition to evaluate the performance of our decomposition method.
Overall accuracy for our classification results ranged from 84.5% to 95.8%. The best accuracy was achieved using only RapidEye multispectral layers, which indicates that cloud-free optical data are very good for generating maps of general vegetation types. Good accuracy (93.0%) was also achieved with SAR image feature layers, indicating the high potential of polarimetric SAR decomposition products for detecting wetland vegetation, particularly mangrove forests. The overall patterns of the classification results from our decomposition and Yamaguchi’s method were very similar, although the overall accuracy of our decomposition method was notably higher. Our reference samples were placed in the middle of relatively unchanged vegetation stands to capture their homogeneous characteristics; consequently, the error matrix could not validate classification accuracy over areas transitioning from forest to scrub or over edges between two vegetation types. From visual inspection of the original RapidEye image, we determined that mangrove forests along the tidal canals were underestimated in the classification based on optical data but classified well when SAR features were used. The better performance of the SAR decomposition products for detecting mangrove forest is likely because the radar signal carries the physical scattering characteristics of the target surface. The multispectral image records only reflected solar radiation, whereas the SAR signal includes information about the surface geometry in the form of backscattering effects (e.g., surface, double bounce, and volume scattering). We can use this scattering information to ascribe physical meaning to surface targets. Polarimetric SAR decomposition, with the aid of optical imagery, could therefore be very useful for vegetation classification.
Furthermore, the high accuracy achieved in vegetation classification using SAR decomposition features is particularly beneficial under cloudy weather conditions, in which optical sensors have limited ability to sense the vegetation.
However, some limitations remain. Operational space-based X-band quadruple polarimetric observations are not yet available. The returned values of a SAR image are recorded as power and are therefore always positive; nevertheless, negative power can appear when a model-based decomposition algorithm is applied. Our decomposition approach suffers from a negative power problem similar to those encountered with other model-based decomposition methods [32,33,34]. Once these scattering decomposition results are improved by developing a better model, the accuracy of vegetation classification should also improve. The negative values may also be due to speckle-like noise in the input image, which hampers the segmentation and subsequent classification. Most decomposition methods for estimating the various scattering components rely on quad-pol SAR observations [28,46]. Although dual-pol observations carry more information than single-pol SAR images, their ability to distinguish between different vegetation types is somewhat limited. The only currently operational polarimetric SAR satellite system is the C-band Radarsat-2, but its polarimetric sensitivity over tropical and subtropical wetlands is no better than that of X-band data [37,68].
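The origin of such negative powers can be seen in a minimal sketch of a Freeman–Durden-style volume subtraction [29] (synthetic second-order statistics, not our H&W implementation): the volume estimate is tied to the cross-pol power, and strong cross-pol returns can drive the co-pol remainder below zero.

```python
def volume_removed_copol(shh2, svv2, shv2):
    """Co-pol powers left after removing the Freeman-Durden volume model.

    The volume model contributes fv = 3 * <|Shv|^2> to each co-pol term,
    so the residuals can go negative when cross-pol power is large.
    """
    fv = 3.0 * shv2
    return shh2 - fv, svv2 - fv

# Moderate cross-pol: residual co-pol powers stay positive.
r1 = volume_removed_copol(shh2=1.0, svv2=0.9, shv2=0.1)

# Strong cross-pol (e.g., oriented vegetation): negative "power".
r2 = volume_removed_copol(shh2=1.0, svv2=0.9, shv2=0.4)
```

Any subsequent fit of surface and double bounce components to these negative residuals is then physically meaningless, which is the negative power problem shared by model-based decompositions.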
It is notable that, despite their short wavelength, the X-band TSX decomposition products yielded very good sensitivity for vegetation characterization, particularly in the detail with which the mangroves along the tidal channels were mapped. This high sensitivity indicates that the scattering behaviors around the mangroves in SAR observations can be very helpful in discriminating mangroves from other vegetation types, and suggests that X-band TSX data are well suited to the vegetation cover characteristics of the Everglades wetlands. In upland vegetated areas, longer-wavelength radar observations have proven more useful for classifying vegetation because of their increased canopy penetration depth [69,70,71]. Thus, a mix of polarimetric SAR systems could provide stronger capabilities for mapping a greater variety of vegetation types.
We observed that our decomposition method provided better classification accuracy (93.0%) than Yamaguchi’s decomposition approach (84.5%), which is based on a rotated coherency matrix [32]. Both classification results revealed similar patterns, as shown in Figure 8c,f, but Yamaguchi’s method (Scenario 5) relatively overestimated the water class. We also detected a greater portion of the scrub class in Scenario 5 than in Scenario 3. The higher likelihood of classification as prairie in region A of Figure 8d can be explained by the more dominant surface and volume scattering components of Yamaguchi’s method. The better classification accuracy using the H&W decomposition may result from its having been designed with the wetland environment in mind [36]. The H&W decomposition has a limitation in that it produces a larger negative single-bounce scattering component than other decomposition methods. However, it has the advantage of providing double bounce scattering components from both co- and cross-pol SAR observations in wetland environments. These characteristics might be useful for discriminating tall mangrove forest or scrub from herbaceous vegetation such as prairie. We will further investigate the performance of our decomposition method in other study areas, comparing it against other decomposition methods.
Our classification results showed that a SAR feature-based approach offers good potential for vegetation mapping, even though multispectral and hyperspectral remotely sensed images have been the more widely used means of mapping wetland vegetation [72,73,74,75]. We achieved a mapping accuracy of more than 85% when only SAR features were used, and more than 90% when SAR features were used for vegetation classification after multispectral bands were applied to develop the vegetation object boundaries (image segmentation). Where both multispectral and SAR data are available, they can be combined to improve vegetation mapping; where persistent cloud cover limits the availability of multispectral data, as is often the case in tropical and subtropical wetland environments, the high accuracy we attained with SAR data alone demonstrates the value of SAR systems for mapping these globally important resources.

Acknowledgments

The authors would like to thank the German Aerospace Center for access to the TSX data. This study was supported by KOPRI project of PE15040. This work was also funded through NASA Cooperative Agreement No. NNX08BA43A (WaterSCAPES: Science of Coupled Aquatic Processes in Ecosystems from Space) grants.

Author Contributions

S.H. acquired the TerraSAR-X and RapidEye satellite data and processed the SAR decomposition to set up the classification procedure. H.K. conceived the classification approach and adapted it to the wetland vegetation in the Everglades. S.W. reviewed the results and organized the paper. E.F. contributed the airborne imagery acquired by helicopter survey. All authors contributed to the interpretation of the results and agreed on the conclusions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Center for International Forestry Research (CIFOR). Tropical Wetlands Initiative: For Climate Adaptation and Mitigation; CIFOR: Bogor, Indonesia, 2012. [Google Scholar] [CrossRef]
  2. Barbier, E.B. Valuing environmental functions: Tropical wetlands. Land Econ. 1994. [Google Scholar] [CrossRef]
  3. Madden, M.; Jones, D.; Vilchek, L. Photointerpretation key for the everglades vegetation classification system. Photogramm. Eng. Remote Sens. 1999, 65, 171–177. [Google Scholar]
  4. Rutchey, K.; Vilchek, L. Air photointerpretation and satellite imagery analysis techniques for mapping cattail coverage in a northern everglades impoundment. Photogramm. Eng. Remote Sens. 1999, 65, 185–191. [Google Scholar]
  5. Rutchey, K.; Schall, T.; Sklar, F. Development of vegetation maps for assessing everglades restoration progress. Wetlands 2008, 28, 806–816. [Google Scholar] [CrossRef]
  6. Hirano, A.; Madden, M.; Welch, R. Hyperspectral image data for mapping wetland vegetation. Wetlands 2003, 23, 436–448. [Google Scholar] [CrossRef]
  7. Zhang, C.; Xie, Z. Combining object-based texture measures with a neural network for vegetation mapping in the everglades from hyperspectral imagery. Remote Sens. Environ. 2012, 124, 310–320. [Google Scholar] [CrossRef]
  8. Fuller, D. Remote detection of invasive melaleuca trees (Melaleuca quinquenervia) in South Florida with multispectral IKONOS imagery. Int. J. Remote Sens. 2005, 26, 1057–1063. [Google Scholar] [CrossRef]
  9. Hess, L.L.; Melack, J.M.; Simonett, D.S. Radar detection of flooding beneath the forest canopy: A review. Int. J. Remote Sens. 1990, 11, 1313–1325. [Google Scholar] [CrossRef]
  10. Ramsey, E.; Rangoonwala, A.; Bannister, T. Coastal flood inundation monitoring with satellite C-band and L-band Synthetic Aperture Radar data. J. Am. Water Resour. Assoc. 2013, 49, 1239–1260. [Google Scholar] [CrossRef]
  11. Alsdorf, D.E.; Melack, J.M.; Dunne, T.; Mertes, L.A.K.; Hess, L.L.; Smith, L.C. Interferometric radar measurements of water level changes on the Amazon flood plain. Nature 2000, 404, 174–177. [Google Scholar] [CrossRef] [PubMed]
  12. Wdowinski, S.; Amelung, F.; Miralles-Wilhelm, F.; Dixon, T.H.; Carande, R. Space-based measurements of sheet-flow characteristics in the everglades wetland, Florida. Geophys. Res. Lett. 2004, 31. [Google Scholar] [CrossRef]
  13. Wdowinski, S.; Kim, S.W.; Amelung, F.; Dixon, T.H.; Miralles-Wilhelm, F.; Sonenshein, R. Space-based detection of wetlands’ surface water level changes from L-band SAR interferometry. Remote Sens. Environ. 2008, 112, 681–696. [Google Scholar] [CrossRef]
  14. Hong, S.-H.; Wdowinski, S.; Kim, S.-W. Evaluation of TERRASAR-X observations for wetland InSAR application. IEEE Trans. Geosci. Remote Sens. 2010, 48, 864–873. [Google Scholar] [CrossRef]
  15. Ramsey, E., III; Lu, Z.; Rangoonwala, A.; Rykhus, R. Multiple baseline radar interferometry applied to coastal land cover classification and change analyses. GISci. Remote Sens. 2006, 43, 283–309. [Google Scholar] [CrossRef]
  16. Yamagata, Y.; Yasuoka, Y. Classification of wetland vegetation by texture analysis methods using ERS-1 and JERS-1 images. In Proceedings of the IEEE International Conference on Geoscience and Remote Sensing Symposium, Tokyo, Japan, 18–21 August 1993; pp. 1614–1616.
  17. Kasischke, E.S.; Bourgeau-Chavez, L.L. Monitoring South Florida wetlands using ERS-1 SAR imagery. Photogramm. Eng. Remote Sens. 1997, 63, 281–291. [Google Scholar]
  18. Hess, L.L.; Melack, J.M.; Novo, E.M.; Barbosa, C.C.; Gastil, M. Dual-season mapping of wetland inundation and vegetation for the central Amazon Basin. Remote Sens. Environ. 2003, 87, 404–428. [Google Scholar] [CrossRef]
  19. Martinez, J.-M.; Le Toan, T. Mapping of flood dynamics and spatial distribution of vegetation in the amazon floodplain using multitemporal SAR data. Remote Sens. Environ. 2007, 108, 209–223. [Google Scholar] [CrossRef]
  20. Evans, T.L.; Costa, M. Landcover classification of the lower nhecolândia subregion of the Brazilian pantanal wetlands using ALOS/PALSAR, RADARSAT-2 and ENVISAT/ASAR imagery. Remote Sens. Environ. 2013, 128, 118–137. [Google Scholar] [CrossRef]
  21. Bourgeau-Chavez, L.L.; Riordan, K.; Powell, R.B.; Miller, N.; Nowels, M. Improving wetland characterization with multi-sensor, multi-temporal SAR and optical/infrared data fusion. In Advances in Geoscience and Remote Sensing; InTech: Rijeka, Croatia, 2009. [Google Scholar]
  22. Baghdadi, N.; Bernier, M.; Gauthier, R.; Neeson, I. Evaluation of C-band SAR data for wetlands mapping. Int. J. Remote Sens. 2001, 22, 71–88. [Google Scholar] [CrossRef]
  23. Touzi, R. Wetland characterization using polarimetric RADARSAT-2 capability. In Proceedings of the IEEE International Conference on Geoscience and Remote Sensing Symposium, Denver, CO, USA, 31 July–4 August 2006; pp. 1639–1642.
  24. Touzi, R.; Deschamps, A.; Rother, G. Phase of target scattering for wetland characterization using polarimetric C-band sar. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3241–3261. [Google Scholar] [CrossRef]
  25. Brisco, B.; Kapfer, M.; Hirose, T.; Tedford, B.; Liu, J. Evaluation of C-band polarization diversity and polarimetry for wetland mapping. Can. J. Remote Sens. 2011, 37, 82–92. [Google Scholar] [CrossRef]
  26. Brisco, B.; Schmitt, A.; Murnaghan, K.; Kaya, S.; Roth, A. SAR polarimetric change detection for flooded vegetation. Int. J. Digit. Earth 2013, 6, 103–114. [Google Scholar] [CrossRef]
  27. Cloude, S.R.; Pottier, E. A review of target decomposition theorems in radar polarimetry. IEEE Trans. Geosci. Remote Sens. 1996, 34, 498–518. [Google Scholar] [CrossRef]
  28. ESA. PolSARpro. Available online: http://earth.esa.int/polsarpro/ (accessed on 10 June 2015).
  29. Freeman, A.; Durden, S.L. A three-component scattering model for polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 963–973. [Google Scholar] [CrossRef]
  30. Yamaguchi, Y.; Moriyama, T.; Ishido, M.; Yamada, H. Four-component scattering model for polarimetric SAR image decomposition. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1699–1706. [Google Scholar] [CrossRef]
  31. Yamaguchi, Y.; Yajima, Y.; Yamada, H. A four-component decomposition of polsar images based on the coherency matrix. IEEE Geosci. Remote Sens. Lett. 2006, 3, 292–296. [Google Scholar] [CrossRef]
  32. Yamaguchi, Y.; Sato, A.; Boerner, W.M.; Sato, R.; Yamada, H. Four-component scattering power decomposition with rotation of coherency matrix. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2251–2258. [Google Scholar] [CrossRef]
  33. Lee, J.S.; Ainsworth, T.L. The effect of orientation angle compensation on coherency matrix and polarimetric target decompositions. IEEE Trans. Geosci. Remote Sens. 2011, 49, 53–64. [Google Scholar] [CrossRef]
  34. van Zyl, J.J.; Arii, M.; Kim, Y. Model-based decomposition of polarimetric SAR covariance matrices constrained for nonnegative eigenvalues. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3452–3459. [Google Scholar]
  35. Atwood, D.; Leinss, S.; Matthiss, B.; Jenkins, L.; Wdowinski, S.; Hong, S.-H. Wave propagation model for coherent scattering from a randomly distributed target. In Proceedings of POLinSAR 2013 Workshop, Frascati, Italy, 28 January–1 February 2013.
  36. Hong, S.H.; Wdowinski, S. Double-bounce component in cross-polarimetric SAR from a new scattering target decomposition. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3039–3051. [Google Scholar] [CrossRef]
  37. Wdowinski, S.; Hong, S.-H. Tropical wetland characterization with polarimetry SAR. In Proceedings of 9th Advanced SAR Workshop (ASAR), Longueuil, QC, Canada, 15–18 October 2013.
  38. Florida Coastal Everglades Long Term Ecological Research. Available online: http://fcelter.fiu.edu (accessed on 9 June 2015).
  39. Global Land Cover Facility. Available online: http://www.landcover.org/data/landsat (accessed on 9 June 2015).
  40. Cibula, W.; Carter, G. Identification of a FAR-red reflectance response to ectomycorrhizae in slash pine. Int. J. Remote Sens. 1992, 13, 925–932. [Google Scholar] [CrossRef]
  41. Filella, I.; Penuelas, J. The red edge position and shape as indicators of plant chlorophyll content, biomass and hydric status. Int. J. Remote Sens. 1994, 15, 1459–1470. [Google Scholar] [CrossRef]
  42. Kim, H.-O.; Yeom, J.-M. Effect of red-edge and texture features for object-based paddy rice crop classification using RapidEye multispectral satellite image data. Int. J. Remote Sens. 2014, 35, 7046–7068. [Google Scholar]
  43. Schuster, C.; Förster, M.; Kleinschmit, B. Testing the red edge channel for improving land-use classifications based on high-resolution multi-spectral satellite data. Int. J. Remote Sens. 2012, 33, 5583–5599. [Google Scholar] [CrossRef]
  44. Tigges, J.; Lakes, T.; Hostert, P. Urban vegetation classification: Benefits of multitemporal rapideye satellite data. Remote Sens. Environ. 2013, 136, 66–75. [Google Scholar] [CrossRef]
  45. Welch, R.; Madden, M.; Doren, R.F. Mapping the everglades. Photogramm. Eng. Remote Sens. 1999, 65, 163–170. [Google Scholar]
  46. Cloude, S. Polarisation: Applications in Remote Sensing; Oxford University Press: New York, NY, USA, 2010. [Google Scholar]
  47. Lee, J.S.; Pottier, E. Polarimetric Radar Imaging: From Basics to Applications; CRC: Boca Raton, FL, USA, 2009; Volume 142. [Google Scholar]
  48. Neumann, M.; Ferro-Famil, L.; Pottier, E. A general model-based polarimetric decomposition scheme for vegetated areas. In Proceedings of POLinSAR 2009 Workshop, Frascati, Italy, 26–30 January 2009.
  49. Arii, M.; van Zyl, J.J.; Kim, Y. A general characterization for polarimetric scattering from vegetation canopies. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3349–3357. [Google Scholar] [CrossRef]
  50. Arii, M.; van Zyl, J.J.; Kim, Y. Adaptive model-based decomposition of polarimetric SAR covariance matrices. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1104–1113. [Google Scholar] [CrossRef]
  51. Sato, A.; Yamaguchi, Y.; Singh, G.; Park, S.E. Four-component scattering power decomposition with extended volume scattering model. IEEE Geosci. Remote Sens. Lett. 2012, 9, 166–170. [Google Scholar] [CrossRef]
  52. Hong, S.-H.; Wdowinski, S. Evaluation of the quad-polarimetric RADARSAT-2 observations for the wetland InSAR application. Can. J. Remote Sens. 2012, 37, 484–492. [Google Scholar] [CrossRef]
  53. Rignot, E.; Chellappa, R.; Dubois, P. Unsupervised segmentation of polarimetric SAR data using the covariance matrix. IEEE Trans. Geosci. Remote Sens. 1992, 30, 697–705. [Google Scholar] [CrossRef]
  54. Chen, K.; Huang, W.; Tsay, D.; Amar, F. Classification of multifrequency polarimetric SAR imagery using a dynamic learning neural network. IEEE Trans. Geosci. Remote Sens. 1996, 34, 814–820. [Google Scholar] [CrossRef]
  55. Chen, C.-T.; Chen, K.-S.; Lee, J.-S. The use of fully polarimetric information for the fuzzy neural classification of SAR images. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2089–2100. [Google Scholar] [CrossRef]
  56. Dong, Y.; Milne, A.K. Segmentation and classification of vegetated areas using polarimetric SAR image data. IEEE Trans. Geosci. Remote Sens. 2001, 39, 321–329. [Google Scholar] [CrossRef]
  57. Lombardo, P.; Sciotti, M.; Pellizzeri, T.M.; Meloni, M. Optimum model-based segmentation techniques for multifrequency polarimetric SAR images of urban areas. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1959–1975. [Google Scholar] [CrossRef]
  58. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258. [Google Scholar] [CrossRef]
  59. Luscier, J.D.; Thompson, W.L.; Wilson, J.M.; Gorham, B.E.; Dragut, L.D. Using digital photographs and object-based image analysis to estimate percent ground cover in vegetation plots. Front. Ecol. Environ. 2006, 4, 408–413. [Google Scholar] [CrossRef]
  60. Mathieu, R.; Freeman, C.; Aryal, J. Mapping private gardens in urban areas using object-oriented techniques and very high-resolution satellite imagery. Landsc. Urban Plan. 2007, 81, 179–192. [Google Scholar] [CrossRef]
  61. Neubert, M.; Herold, H.; Meinel, G. Assessing image segmentation quality—Concepts, methods and application. In Object-Based Image Analysis; Springer: Berlin, Germany, 2008; pp. 769–784. [Google Scholar]
  62. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  63. Qi, Z.; Yeh, A.G.-O.; Li, X.; Lin, Z. A novel algorithm for land use and land cover classification using RADARSAT-2 polarimetric SAR data. Remote Sens. Environ. 2012, 118, 21–39. [Google Scholar] [CrossRef]
  64. Ehlers, M.; Gaehler, M.; Janowsky, R. Automated techniques for environmental monitoring and change analyses for ultra high resolution remote sensing data. Photogramm. Eng. Remote Sens. 2006, 72, 835–844. [Google Scholar] [CrossRef]
  65. Kim, H.-O.; Kleinschmit, B.; Kenneweg, H. High resolution satellite imagery for the analysis of sealing in the metropolitan area seoul. Remote Sens. GIS Environ. Stud.: Appl. Geogr. 2005, 113, 281. [Google Scholar]
  66. Ecognition. Available online: http://www.ecognition.com (accessed on 10 June 2015).
  67. Trimble. About classification. In eCognition Developer 8.64.0: User Guide; Trimble Germany GmbH: Munich, Germany, 2010; pp. 106–123. [Google Scholar]
  68. Brisco, B.; Ahern, F.; Hong, S.-H.; Wdowinski, S.; Murnaghan, K.; White, L.; Atwood, D.K. Polarimetric decompositions of temperate wetlands at C-band. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015. [Google Scholar] [CrossRef]
  69. Lucas, R.M.; Mitchell, A.L.; Rosenqvist, A.; Proisy, C.; Melius, A.; Ticehurst, C. The potential of L-band SAR for quantifying mangrove characteristics and change: Case studies from the tropics. Aquat. Conserv.: Mar. Freshw. Ecosyst. 2007, 17, 245–264. [Google Scholar] [CrossRef]
  70. Mougin, E.; Proisy, C.; Marty, G.; Fromard, F.; Puig, H.; Betoulle, J.; Rudant, J.-P. Multifrequency and multipolarization radar backscattering from mangrove forests. IEEE Trans. Geosci. Remote Sens. 1999, 37, 94–102. [Google Scholar] [CrossRef]
  71. Trisasongko, B.H. Tropical mangrove mapping using fully-polarimetric radar data. J. Math. Funda. Sci. 2009, 41, 98–109. [Google Scholar] [CrossRef]
  72. Adam, E.; Mutanga, O.; Rugege, D. Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: A review. Wetl. Ecol. Manag. 2010, 18, 281–296. [Google Scholar] [CrossRef]
  73. Kamal, M.; Phinn, S.; Johansen, K. Object-based approach for multi-scale mangrove composition mapping using multi-resolution image datasets. Remote Sens. 2015, 7, 4753–4783. [Google Scholar] [CrossRef]
  74. Zhang, C.; Kovacs, J.M.; Liu, Y.; Flores-Verdugo, F.; Flores-de-Santiago, F. Separating mangrove species and conditions using laboratory hyperspectral data: A case study of a degraded mangrove forest of the Mexican Pacific. Remote Sens. 2014, 6, 11673–11688. [Google Scholar] [CrossRef]
  75. Heenkenda, M.K.; Joyce, K.E.; Maier, S.W.; Bartolo, R. Mangrove species identification: Comparing Worldview-2 with aerial photographs. Remote Sens. 2014, 6, 6064–6088. [Google Scholar] [CrossRef]

Share and Cite

MDPI and ACS Style

Hong, S.-H.; Kim, H.-O.; Wdowinski, S.; Feliciano, E. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types. Remote Sens. 2015, 7, 8563-8585. https://doi.org/10.3390/rs70708563


Chicago/Turabian Style

Hong, Sang-Hoon, Hyun-Ok Kim, Shimon Wdowinski, and Emanuelle Feliciano. 2015. "Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types" Remote Sensing 7, no. 7: 8563-8585. https://doi.org/10.3390/rs70708563
