Article

Mangrove Species Identification: Comparing WorldView-2 with Aerial Photographs

1 Research Institute for the Environment and Livelihoods, Charles Darwin University, Ellengowan Drive, Casuarina, NT 0909, Australia
2 Environmental Research Institute of the Supervising Scientist, Department of the Environment, G.P.O. Box 461, Darwin, NT 0801, Australia
* Author to whom correspondence should be addressed.
Remote Sens. 2014, 6(7), 6064-6088; https://doi.org/10.3390/rs6076064
Submission received: 25 March 2014 / Revised: 20 June 2014 / Accepted: 23 June 2014 / Published: 27 June 2014

Abstract

Remote sensing plays a critical role in mapping and monitoring mangroves. Historically, aerial photographs and visual image interpretation have been the most common approach for mapping mangroves and discriminating species. However, with the availability of satellite imagery of increased spectral resolution, and advances in digital image classification algorithms, there is now potential to digitally classify mangroves to the species level. This study compares the accuracy of mangrove species maps derived from two different layer combinations of WorldView-2 images with those generated using high resolution aerial photographs captured by an UltraCamD camera over the Rapid Creek coastal mangrove forest, Darwin, Australia. Mangrove and non-mangrove areas were discriminated using object-based image classification. Mangrove areas were then further classified into species using a support vector machine algorithm with best-fit parameters. The overall classification accuracy for the WorldView-2 data within the visible range was 89%, and the Kappa statistic indicated strong agreement between the classification and validation data. In contrast to this accuracy, the error matrix for the automated classification of aerial photographs indicated less promising results. In summary, it can be concluded that mangrove species mapping using a support vector machine algorithm is more successful with WorldView-2 data than with aerial photographs.


1. Introduction

Mangroves are salt-tolerant evergreen forests that create land-ocean interface ecosystems. They are found in the intertidal zones of marine, coastal or estuarine ecosystems of 124 tropical and sub-tropical countries and areas [1]. Mangroves are a significant habitat for sustaining biodiversity and also provide direct and indirect benefits to human activities.
Despite the increased recognition of their socio-economic benefits to coastal communities, mangroves are identified as among the most threatened habitats in the world [2]. Degradation and clearing of mangrove habitats is occurring on a global scale due to urbanization, population growth, water diversion, aquaculture, and salt-pond construction [3].
In recent years, numerous studies have been undertaken to further understand the economic and ecological values of mangrove ecosystems and to provide a means for effective management of these resources [1,2,4–6]. Mangrove forests are often very difficult to access for the purposes of extensive field sampling; therefore, remotely sensed data have been widely used in mapping, assessing, and monitoring mangroves [4,7–10].
According to Adam et al. [11], when using remote sensing techniques for mapping wetland vegetation, there are two major challenges to be overcome. Firstly, the accurate demarcation of vegetation community boundaries is difficult, due to the high spectral and spatial variability of the communities. Secondly, spectral reflectance values of wetland vegetation are often mixed with those of underlying wet soil and water. That is, underlying wet soil and water attenuate the signal in the near-infrared to mid-infrared bands. As a result, confusion among mangroves, other vegetation, urban areas and mudflats will decrease map classification accuracy [8,12–14]. Consequently, remote sensing data and methods that have been successfully used for classifying terrestrial vegetation communities cannot be applied to mangrove studies with the same success.
Using remote sensing to map mangroves to a species level within a study area presents further challenges. For instance, Green et al. [8] reviewed different traditional approaches for satellite remote sensing of mangrove forests. After testing these approaches on different data sources, the study confirmed that the type of data can influence the final outcome. Heumann [10] further demonstrated the limitations of mapping mangrove species composition using high resolution remotely sensed data. The potential for using hyperspectral remote sensing data for wetland vegetation has been discussed in numerous studies; however, the results are still inconclusive when considering mangrove species discrimination [10,11,13,15]. Therefore, the selection of data sources for mangrove mapping should include consideration of the ideal spectral and spatial resolution for the species.
Some laboratory studies using field spectrometers have suggested the ideal spectral range for mangrove species discrimination [8,16–18]. In one such study, Vaiphasa et al. [16] investigated 16 mangrove species and concluded that they are mostly separable at only a few spectral locations within the 350–2500 nm region. The study did not specifically explore the use of airborne or spaceborne hyperspectral sensors for mangrove species mapping (such as the best spectral range, the number of bands within that range, and the optimal spatial resolution). It has, however, encouraged further full-scale investigations of mangrove species discrimination, which could involve extensive field and laboratory work, necessitating high financial and time investments. A viable alternative may be to use satellite data with narrow spectral bands lying within the ideal spectral range for species composition identification.
The WorldView-2 (WV2) satellite imaging sensor provides such data, and therefore may have increased potential for accurately mapping the distribution of mangrove species. Its combination of narrow spectral bands and high spatial resolution provides benefits over other freely or commercially available satellite remote sensing systems, albeit at a cost. The combination of WV2 data with advanced image processing techniques should therefore add value to wetland remote sensing.
After determining ideal or optimal image data requirements, the selection of an appropriate method of processing those data for mapping with maximum achievable accuracy is critical. Kuenzer et al. [19] provided a detailed review of mangrove ecosystem remote sensing over the last 20 years, and emphasized the need to exploit new sensor systems and advanced image processing approaches for mangrove mapping. The most promising results for mangrove mapping can be found in the study by Heumann [20], who demonstrated high accuracy when discriminating mangroves from other vegetation using a combination of WV2 and QuickBird satellite images. However, the accuracy was poor for mangrove mapping at the species level. Although the advanced technological nature of remotely sensed imagery demands solutions for different image-based applications, it can be concluded that little of this progress has been adapted to mangrove environments compared to other terrestrial ecosystems.
An approach that may prove fruitful for mapping mangrove environments is the Support Vector Machine (SVM) algorithm, a useful tool that minimizes structural risk or classification errors [20]. SVM is a supervised, non-linear, machine learning algorithm that produces good classification outcomes for complex and noisy data with relatively few training samples. SVM can also be used with high-dimensional data [21]. Though this is a relatively new technique for mangrove mapping, it has been widely applied in other remote sensing application domains with different sensors [22]. For example, Huang et al. [23] compared SVM with three different traditional image classifiers, and obtained significantly increased land cover classification accuracy with an SVM classifier.
This study aims to investigate the potential of using high spatial resolution remote sensing data for discriminating mangroves at a species level. In order to achieve this aim, three objectives were identified: (a) to identify and extract mangrove coverage from other vegetation; (b) to apply the SVM algorithm to distinguish individual mangrove species; and (c) to compare the accuracies of these techniques when using WV2 and aerial photographs in order to identify the most accurate combination of data input and image processing technique.

2. Data and Methods

2.1. Study Area

This study focused on a mangrove forest within a small coastal creek system: Rapid Creek in urban Darwin, Australia (Figure 1). It is situated on the north-western coastline of the Northern Territory, centered at latitude 12°22′S and longitude 130°51′E. This area represents relatively diverse, spatially complex, and common mangrove communities of the Northern Territory, Australia, and is one of the more accessible areas in the region for field survey. The areal extent of the study area shown in Figure 1 is approximately 400 hectares.
According to the mangrove classification of Brocklehurst and Edmeades [24], Avicennia marina (Gray mangroves) and Ceriops tagal (Yellow mangroves) are the most dominant species for this area, though that study was completed nearly 20 years ago. Based on the more recent study of Ferwerda et al. [25], there are some other species such as Bruguiera exaristata (Orange mangroves), Lumnitzera racemosa (Black mangroves), Rhizophora stylosa (Stilt mangroves), and Aegialitis annulata (Club mangroves) in the Rapid Creek mangrove forest. However, clear boundaries of individual mangrove species in this area have not been demarcated by any previous study.

2.2. Field Survey

Field data were collected between January and April 2013 in the Rapid Creek mangrove forest. Unfortunately, this did not coincide with the overpass of either sensor utilized (June 2010). However, the field data can still be considered valid for calibration and validation purposes, as it is unlikely that any large shifts in species composition of the site would have occurred since that time [25]. Further, there was no record of any natural or human disturbance that could have had a devastating effect on the mangroves during this period. In any case, field sample sites were located away from edges and transition zones to avoid classification errors due to growth, regeneration, or vegetation decline. After observing the mangrove zonation patterns visible in the WV2 image, a random sampling pattern within zones was adopted to identify species.
A sample was defined as a homogeneous area of at least 4 m2, which is at least 16 pixels in the WV2 image. Coordinates of these sample polygons and the species present were recorded using a non-differentially corrected Global Positioning System (GPS). Special attention was given to orienting the sample plots in north-south and east-west directions in order to make them easier to locate in the images. To overcome GPS inaccuracies, the plots were located with respect to natural features on the ground as far as possible; for instance, distances to water features, roads or edges of mudflats were recorded. Further, 10 GPS readings were averaged for the final location. Mangrove species in the field were identified using the Mangrove Plant Identification Tool Kit (published by Greening Australia Northern Territory) and The Authoritative Guide to Australia’s Mangroves (published by the University of Queensland) [26,27].

2.3. Remote Sensing Data and Pre-Processing

A WV2 satellite image was selected as the main image data source for this study. As the sensor produces images with high spectral and spatial resolution, WV2 is an ideal solution for vegetation and plant studies [28]. The image was acquired on 5 June 2010, with 8 multispectral bands at 2.0 m spatial resolution and a panchromatic band with 0.5 m spatial resolution. To compare this work with higher spatial resolution remote sensing data (Table 1), true color aerial photographs were used, which were acquired on 7 June 2010 using an UltraCamD large format digital camera [29].
WV2 images were radiometrically corrected according to the sensor specifications published by DigitalGlobe® [31]. Digital numbers were converted to at-sensor radiance values, and then to top-of-atmosphere reflectance values. The additive path radiance was removed using the dark pixel subtraction technique in ENVI 5.0 software. Images were geo-referenced using rational polynomial coefficients provided with the images, and ground control points extracted from digital topographic maps of Darwin, Australia.
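The conversion from digital numbers to top-of-atmosphere reflectance and the dark pixel subtraction can be sketched in Python as follows. This is a minimal illustration rather than the ENVI workflow used in the study; the per-band calibration values (absolute calibration factor, effective bandwidth, solar irradiance, Earth-Sun distance, solar zenith angle) would be read from the image metadata, and the numbers in the usage comment are placeholders, not the values used in the study.

```python
# Minimal sketch of the radiometric pre-processing chain described above.
import numpy as np

def dn_to_toa_reflectance(dn, abs_cal_factor, effective_bandwidth,
                          esun, earth_sun_dist, solar_zenith_deg):
    """Convert one band of digital numbers to top-of-atmosphere reflectance."""
    # At-sensor spectral radiance (W m^-2 sr^-1 um^-1)
    radiance = dn.astype(np.float64) * abs_cal_factor / effective_bandwidth
    # TOA reflectance following the DigitalGlobe radiometric use notes [31]
    cos_theta = np.cos(np.deg2rad(solar_zenith_deg))
    return (radiance * np.pi * earth_sun_dist ** 2) / (esun * cos_theta)

def dark_pixel_subtraction(reflectance_stack):
    """Remove additive path radiance by subtracting the darkest pixel per band.

    reflectance_stack: array of shape (bands, rows, cols).
    """
    dark = reflectance_stack.reshape(reflectance_stack.shape[0], -1).min(axis=1)
    return reflectance_stack - dark[:, None, None]

# Placeholder usage for a single band (illustrative values only):
# rho = dn_to_toa_reflectance(dn_band, abs_cal_factor=9.3e-3,
#                             effective_bandwidth=0.0543, esun=1830.18,
#                             earth_sun_dist=1.0157, solar_zenith_deg=55.2)
```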
In order to utilize both the high spatial and spectral resolution provided by the WV2 panchromatic and multispectral layers, pan-sharpening options were investigated. Pan-sharpening is defined as a pixel fusion method that increases the spatial resolution of multispectral images [32]. Although several pan-sharpening methods are available, the high pass filter method was selected because it is known to produce a fused image without distorting the spectral balance of the original image [33,34]. Once applied, the statistical values of the spectral information of the input and output multispectral products are similar. By analyzing the performance of several image fusion methods, Palubinskas [35] also proposed the high pass filter method as a fast, simple and effective pan-sharpening method for WV2 images. In addition, this method is known to be one of the best choices when the pixel resolution ratio of higher to lower is greater than 6:1 [36]. However, special attention must be given to selecting the filter kernel size, as it should reflect the radiometric normalization between the two images. Chavez et al. [33] stated that twice the pixel resolution ratio is an ideal solution for the kernel size; this means that a kernel size of 15 × 15 is optimal for WV2 data. All radiometric, atmospheric and geometric corrections must be done prior to pan-sharpening in order to minimize geometric and radiometric errors.
Accordingly, the WV2 multispectral image was then pan-sharpened to 0.5 m spatial resolution to incorporate the edge information from the high spatial resolution panchromatic band into the lower spatial resolution multispectral bands using the high pass filter pan-sharpening method. The Coastal band and the NIR2 band were not used for further processing because of the limited spectral range of the panchromatic band (Table 1).
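As an illustration of the high pass filter fusion step, the following sketch injects the high-frequency detail of the panchromatic band into each upsampled multispectral band using the 15 × 15 kernel discussed above. The standard-deviation weighting is one common variant of the Chavez et al. scheme [33], not necessarily the exact implementation used in this study, and the shapes are assumed to align exactly (0.5 m panchromatic grid equal to four times the 2 m multispectral grid).

```python
# Simplified sketch of high-pass-filter (HPF) pan-sharpening.
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hpf_pansharpen(ms, pan, kernel_size=15, weight=0.5):
    """ms: (bands, rows, cols) multispectral; pan: (4*rows, 4*cols) panchromatic."""
    # High-frequency (edge) component of the panchromatic band
    hpf = pan - uniform_filter(pan.astype(np.float64), size=kernel_size)
    scale = pan.shape[0] / ms.shape[1]
    fused = np.empty((ms.shape[0],) + pan.shape, dtype=np.float64)
    for b in range(ms.shape[0]):
        up = zoom(ms[b].astype(np.float64), scale, order=3)  # resample to 0.5 m
        # Inject edge detail, scaled so band statistics stay comparable
        fused[b] = up + hpf * (up.std() / hpf.std()) * weight
    return fused
```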
The aerial photographs were oriented to ground coordinates following digital photogrammetric image orientation steps in the Leica Photogrammetric Suite (LPS) software using image orientation parameters extracted from the camera calibration report. The coordinates of ground control points were extracted from Australian geographic reference stations near Darwin, Australia and digital topographic maps of Darwin [37]. The ortho-photograph of the area was then generated to achieve a geometrically and topographically corrected image with a resultant resolution of 14 cm for further studies. Radiometric calibration information was not available.
To allow direct comparison with the WV2 imagery, another ortho-photo was created with a pixel size of 0.5 m. To reduce the artifacts that can arise from resampling, a low pass filter was first applied to the raw aerial photographs. Ortho-rectification was then completed using cubic convolution interpolation.
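A minimal sketch of this resampling step is shown below, assuming the photograph band is held as a NumPy array; the 3 × 3 averaging kernel is an illustrative assumption, and scipy's cubic spline interpolation stands in for the cubic convolution resampling used in the study.

```python
# Low-pass filtering followed by downsampling to a 0.5 m pixel size.
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def resample_to_half_metre(band, src_pixel_size=0.14, dst_pixel_size=0.5):
    smoothed = uniform_filter(band.astype(np.float64), size=3)  # suppress resampling artifacts
    factor = src_pixel_size / dst_pixel_size                    # < 1: downsampling
    return zoom(smoothed, factor, order=3)
```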

2.4. Image Classification

Image classification of the WV2 and aerial data was undertaken in two steps. Firstly, mangroves and non-mangroves were separated using eCognition Developer 8.7 software. Then the classification was refined to discriminate mangroves at the species level using ENVI 5.0 software. The process was applied to two sets of WV2 images: one with a spatial resolution of 2 m (without pan-sharpening) and one with a spatial resolution of 0.5 m (with pan-sharpening), and to two sets of ortho-photographs: one with a spatial resolution of 0.14 m (AP0.14M) and one with a spatial resolution of 0.5 m (AP0.5M). This was done in order to investigate the influence of spatial resolution and of pan-sharpening on classification accuracy. Figure 2 shows the overall workflow diagram for this study, which is described in greater detail below.

2.4.1. Separating Mangroves and Non-Mangroves

To separate mangroves from other features, object-based image analysis (OBIA) was used. OBIA is based on segmentation, which partitions the image into meaningful, spatially continuous and spectrally homogeneous objects or pixel groups [15,20]. The major challenge is in determining appropriate similarity measures which discriminate objects from each other. Therefore, the spectral profiles of identifiable features in the satellite image were analyzed.
Class-specific rules were developed incorporating contextual information from the WV2 image and relationships between image objects at different hierarchical levels, to separate mangroves and non-mangroves. The segmentation at level 1 identified objects that could be grouped into coarse classification structures. All spectral bands from Blue to NIR1 were used, with a weight of 1 assigned to each band for the segmentation at each level. The segmentation parameters were selected based on the pixel size and the expected compactness of the resulting objects. Buildings, soil, roads and mudflats were classified as “Buildings-Roads-Mudflats”; because the Yellow to NIR1 reflectance ratio of this class was less than the mean Yellow to NIR1 ratio of all objects, this ratio was introduced as a threshold value to extract the “Buildings-Roads-Mudflats” class. The brightness calculated from reflectance values of the Blue to NIR1 bands was analyzed, and low values (less than the mean brightness of objects) were used to extract water features. The remaining objects at this level were classified as “Candidate-Mangrove-1”, which served as the parent objects for the next level.
At level 2, specific details (home gardens and other vegetation) within the parent objects were identified (Figure 3). Home gardens and other vegetation were first removed by considering objects enclosed by the “Buildings-Roads-Mudflats” class. The remaining home gardens and other vegetation were then identified using a red edge normalized difference index (reNDVI) obtained from the NIR1 and Red-Edge bands:
reNDVI = (R_NIR1 − R_Red-Edge) / (R_NIR1 + R_Red-Edge)
where R_NIR1 and R_Red-Edge are the reflectance in the NIR1 and Red-Edge bands, respectively [38,39]. The index was used on the assumption that it provides a good measure of the biophysical properties of plants (chlorophyll content and water-filled cellular structures), allowing these classes to be separated from others due to the rapid change in vegetation reflectance in the Red-Edge region [39].
As the reNDVI of the home gardens and other vegetation classes was higher than the mean reNDVI of objects, this value was introduced as a threshold for the classification. The remaining objects from this level were classified as “Candidate-Mangrove-2”.
At the final level, the normalized difference vegetation index (NDVI) obtained from NIR1 and Red bands:
NDVI = (R_NIR1 − R_Red) / (R_NIR1 + R_Red)
where R_NIR1 and R_Red are the reflectance in the NIR1 and Red bands respectively, was used to separate “Final-Mangroves”. This vegetation index was also introduced to the classification as a threshold value, since the “Final-Mangroves” class has higher NDVI values than the mean NDVI of the objects. The classified objects were closely analyzed for final refinements, which were made by incorporating object geometry and neighborhood information into the process; for example, the relation to neighboring borders was analyzed. The transferability of the rule set was maintained by using variables instead of specific values for class separation. Finally, the outline of the mangrove area was extracted for further analysis. This method was tested on both sets of WV2 images (see Figure S1 for the overall workflow diagram).
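For reference, the two indices and the mean-based thresholding used in the rule set can be expressed compactly as follows; band arrays are assumed to be reflectance values on a common grid, and the per-pixel mean threshold is a simplified stand-in for the object-level means used in eCognition.

```python
# Sketch of the reNDVI and NDVI indices and the above-mean threshold rule.
import numpy as np

def normalized_difference(b1, b2):
    """Generic normalized difference: (b1 - b2) / (b1 + b2)."""
    return (b1 - b2) / (b1 + b2 + 1e-10)  # epsilon avoids division by zero

def rendvi(nir1, red_edge):
    return normalized_difference(nir1, red_edge)

def ndvi(nir1, red):
    return normalized_difference(nir1, red)

def above_mean_mask(index):
    """Flag pixels exceeding the scene mean, analogous to the threshold rules."""
    return index > np.nanmean(index)
```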
The same process was then applied to the two sets of aerial photographs (AP0.14M and AP0.5M). However, when segmenting AP0.14M, different segmentation parameters were used at each level due to its higher spatial resolution. When applying OBIA to aerial photographs, the possibility of developing a successful rule set for the segmentation is limited by the broad bandwidth and limited number of spectral bands of the aerial camera (see Figure S2 for the overall workflow diagram). Further, limitations associated with the radiometric resolution of aerial photographs could affect the accuracies. As a result, the final mangrove area was manually edited to remove objects of known home gardens, other vegetation, and grasslands.

2.4.2. Mangrove Species Classification

With the increase in mangrove studies and mapping using remote sensing comes a growing implementation of advanced processing techniques. Although traditional remote sensing can provide important information for monitoring the ecosystem, its changes, and the extent of mangroves, more contextual and probabilistic methods can be utilized to improve classification accuracy and to discriminate individual species.
Traditional land cover hard classification is based on the assumption that each pixel corresponds to a single class [40]. This is not always true. When the instantaneous field of view of the sensor covers more than one class of land cover or objects, the pixel may have reflectance values from more than one class, and is defined as a mixed pixel [41]. Mangroves are closed forests which can become very dense due to their limited habitat range. Mixed pixels therefore have to be expected in the image. Hence, traditional pixel based image analysis techniques do not fully exploit the contextual variations in species distribution [10,11,20]. A better alternative is soft classification, which predicts the proportion of each land cover class within each pixel, resulting in more informative representation of land cover [40,42,43].
One such soft classification algorithm is the SVM, which locates an optimal non-linear boundary in a higher-dimensional feature space. It works with pixels that are in the vicinity of class boundaries, while minimizing over-fitting errors of the training data [21]. Hence, a small training set is sufficient to achieve accurate classifications. The mathematical formulation and a detailed description of the SVM architecture can be found in Tso and Mather [21] and Mountrakis et al. [22].
When designing the SVM architecture, careful selection of a kernel function is important to increase the accuracy of the classification. The position of the decision boundary always varies with the kernel function and its parameters [21,22,44]. Descriptive information about the kernel function and its parameter selection can be found in Tso and Mather [21].
The extracted mangrove areas were classified using the SVM algorithm. Field data were collected to represent the five main mangrove species occurring in the study area: Avicennia marina, Ceriops tagal, Bruguiera exaristata, Lumnitzera racemosa, and Rhizophora stylosa. Sonneratia alba, Excoecaria ovalis, and Aegialitis annulata were not used for the classification, mainly due to their low coverage. In addition, Sonneratia alba does not exhibit distinctive spectral variation compared to the other species (Section 3.2). Field data were then divided into two groups, for training (69 samples) and for validation (47 samples) of the classifiers. The multiclass SVM classifier developed by Canty [45] was modified and implemented as ENVI extensions in the IDL environment in order to determine the case-specific parameters through an iterative process. This helped to determine the best fitting parameters for this study. The Radial Basis Function (RBF) was found to be the best kernel function, with a gamma value of 0.09 and a penalty parameter of 10.
The SVM algorithm was applied to two sets of band combinations of the WV2 image. One set used the visible spectral range, i.e., the blue, green, yellow, red and red-edge bands (WV2-VIS), in order to directly compare it with the aerial photographs. The other set consisted of the red, red-edge and NIR1 bands (WV2-R/NIR1). These bands were selected based on the research by Wang and Sousa [18], which indicated that the majority of mangrove species occurring in the Rapid Creek area can be discriminated using the influential wavelengths 630, 780, 790, 800, 1480, 1530 and 1550 nm (which correspond to the red and NIR bands of WV2). The pan-sharpened sets of images were named PS-WV2-VIS and PS-WV2-R/NIR1 for easy reference. The same process was applied to AP0.14M and AP0.5M, and the same training data were used for all four datasets to maintain consistency between methods.
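As a hedged illustration of this classification step, the sketch below uses scikit-learn's SVC as a stand-in for the modified IDL/ENVI implementation, with the best-fit parameters reported above (RBF kernel, gamma = 0.09, penalty parameter C = 10); X_train and y_train are placeholders for the training spectra and species labels.

```python
# Sketch of species classification with an RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC

def train_species_svm(X_train, y_train):
    """X_train: (n_samples, n_bands) pixel spectra; y_train: species labels."""
    clf = SVC(kernel="rbf", gamma=0.09, C=10.0)
    clf.fit(X_train, y_train)
    return clf

def classify_image(clf, image):
    """image: (bands, rows, cols) array restricted to the extracted mangrove area."""
    bands, rows, cols = image.shape
    pixels = image.reshape(bands, -1).T        # one row per pixel
    labels = clf.predict(pixels)
    return labels.reshape(rows, cols)
```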

2.5. Accuracy Assessment

The accuracy of the mangrove species classification was assessed at the pixel level using descriptive and analytical statistical techniques. A total of 560 random validation points were generated inside the field sample plots (2 m × 2 m), providing a large number of validation points for each species (Section 3.2). The generated maps were visually inspected against field observations, satellite images and aerial photographs according to Congalton [46]. A confusion matrix was generated, and users’ and producers’ accuracies, together with Kappa statistics, were investigated for each identified mangrove species.
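The error matrix statistics reported later (Section 3.4) can be reproduced from validation labels along these lines; this is a generic sketch, with y_true and y_pred standing in for the reference and classified species labels at the 560 validation points.

```python
# Sketch of overall accuracy, producer's/user's accuracies, and Kappa.
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def assess(y_true, y_pred, labels):
    cm = confusion_matrix(y_true, y_pred, labels=labels)  # rows: reference, cols: map
    overall = np.trace(cm) / cm.sum()
    producers = np.diag(cm) / cm.sum(axis=1)              # correct / reference total
    users = np.diag(cm) / cm.sum(axis=0)                  # correct / classified total
    kappa = cohen_kappa_score(y_true, y_pred, labels=labels)
    return cm, overall, producers, users, kappa
```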

3. Results

3.1. Field Survey

Five mangrove species are most abundant and easily identified along Rapid Creek: Avicennia marina, Ceriops tagal, Bruguiera exaristata, Lumnitzera racemosa, and Rhizophora stylosa (Stilt mangroves). Avicennia marina and Ceriops tagal are the most widely spread species in this area, while Lumnitzera racemosa covers the majority of the hinterland area. Sonneratia alba (Apple mangroves) is found at two locations within the site, covering approximately 20 square meters. Excoecaria ovalis (Milky mangroves) and Aegialitis annulata (Club mangroves) were also recognized during the field investigation, though they do not represent significant coverage within the forest.

3.2. Separating Mangroves and Non-Mangroves

The analysis of spectral profiles within the Rapid Creek coastal area was the key to introducing class specific rules for OBIA (Figure 4).
Buildings, soil, mudflats and water showed highly distinctive spectral profiles compared to other features. The mangrove species are most readily separable between 478.3 nm and 832.9 nm, with the exception of Sonneratia alba, while the spectral profile of Avicennia marina is generally more distinct from those of the other species across a broader range of the spectrum.
The locations of the field samples are shown in Figure 5. The mangrove outline was successfully extracted from the WV2 image using OBIA. The “Final-Mangroves” class extracted from the WV2 image (without pan-sharpening) and from the aerial images did, however, still include, alongside mangroves, adjacent home gardens and other vegetation that had to be edited out manually. The WV2 image was more useful than the aerial image for extracting the overall mangrove coverage, as less manual editing was required.
Table 2 shows the number of sample points that were used for validation for each species together with the number of samples used for training the classification. Sample points were generated from 47 validation samples shown in Figure 5.

3.3. Mangrove Species Classification

Figure 6 shows the derived mangrove species maps. Figure 6a was produced from the PS-WV2-VIS image, and its visual appearance corresponds more closely to the dominant species of the area than the other five maps (Figure 6b–f). Avicennia marina (AM) and Ceriops tagal (CT) dominate 69% of the total mangrove area, while Bruguiera exaristata (BE) accounts for only 5%. The majority of the hinterland margin is occupied by Lumnitzera racemosa (LR) or mixed Lumnitzera racemosa, Bruguiera exaristata and Ceriops tagal. Rhizophora stylosa (RS) dominates only along the creek and its branches (Figure 6a). However, the visual assessment confirmed that some of the Ceriops tagal has been misclassified as Bruguiera exaristata (especially in the west of the study area; Figure 6a).
When testing the same method with the PS-WV2-R/NIR1 image, there was no significant difference in the visual appearance of the classification except for the Rhizophora stylosa and Bruguiera exaristata classes (Figure 6a,b). The hinterland margin and the areas along the water features were successfully classified with their dominant species. Furthermore, the visual appearance of the Rhizophora stylosa and Ceriops tagal classes is better than in the PS-WV2-VIS classification.
Figure 7 shows an example of positive detection of Lumnitzera racemosa (orange color) in the hinterland from the pan-sharpened WV2 image classifications. However, in the classification results of the WV2 image with 2 m spatial resolution, the detection of Lumnitzera racemosa and Bruguiera exaristata was relatively poor (Figure 7h,i).
Figure 6c,d show the classifications of the WV2 image without pan-sharpening. In both instances, most of the Avicennia marina was classified correctly, especially in the WV2-R/NIR1 classification. In addition, there was no significant detection of the Bruguiera exaristata and Ceriops tagal classes in either classification. The WV2-R/NIR1 classification shows misclassification of Ceriops tagal and Lumnitzera racemosa as Rhizophora stylosa. Further, neither result captured the mangrove zonation pattern observed in the field.
The accuracy assessment of the PS-WV2-R/NIR1 classification revealed approximately the same overall accuracy as the PS-WV2-VIS classification. Although the total extent of Rhizophora stylosa and Ceriops tagal was approximately equivalent to that of the PS-WV2-VIS image classification, the area covered by Ceriops tagal was smaller than in the PS-WV2-VIS image classification (Figure 8a,b). Some of the Ceriops tagal areas may have been misclassified as Rhizophora stylosa and Bruguiera exaristata, because the percentage extents of the other species did not change significantly (Figure 8). The extent of Lumnitzera racemosa was almost the same in both instances, while the extent of Avicennia marina was reduced by approximately 2.5 hectares compared to the PS-WV2-VIS classification.
The WV2 image with a spatial resolution of 2 m demonstrated rather different results from the pan-sharpened image classifications. In both instances, the classified extents of Bruguiera exaristata were less than 1% of the total mangrove area. However, the extent of Rhizophora stylosa obtained from the WV2-R/NIR1 classification was similar to that of the pan-sharpened image classification, though the visual appearance indicates some misclassifications around the mudflats and at the edges of the mangrove coverage (Figure 6d). The WV2-VIS classification did not show the Ceriops tagal class, whereas the WV2-R/NIR1 classification assigned 2% of the total area to it (Figure 8e,f). The extent of the Avicennia marina class was not considerably different from the other pan-sharpened WV2 and aerial image classifications.
The classification results using the AP0.14M input showed several differences compared to the WV2 classifications (Figure 6). The classes derived from the aerial photographs were patchier, and the classification did not capture well the mangrove zonation pattern described in previous studies of this area. Figure 7 shows one example of how the different classification approaches captured mangrove zonation patterns. The Lumnitzera racemosa class obtained from the classification of AP0.5M was patchier than the others. The percentage extent of Avicennia marina is almost the same in all classifications. Further, the extent of Rhizophora stylosa from the classification of the aerial images is the same as that of the PS-WV2-VIS classification. The extent of Lumnitzera racemosa was significantly larger using the aerial photographs, while the extent of Ceriops tagal was half that of the pan-sharpened WV2 classification (Figure 8c,d).
When classifying the AP0.5M aerial photograph, there were no significantly large differences in the extents of any class compared to the AP0.14M classification (Figure 8c,d). A slight increase in the Rhizophora stylosa and Avicennia marina extents and a decrease in the Bruguiera exaristata extent were noted in the AP0.5M classification compared to the AP0.14M classification. These classes exhibited less contiguous patterns and deviated considerably from reality, thus producing some misclassification.

3.4. Accuracy Assessment

The map produced from PS-WV2-VIS best visually represents the zonation pattern of the different species. This classification also has the highest values for both overall accuracy (89%) and the Kappa statistic (0.86, Table 3). Although the overall accuracy of the PS-WV2-VIS classification is 2% higher than that of the PS-WV2-R/NIR1 classification, the Kappa analysis shows that these two classifications are not significantly different, and the visual appearance of some classes in the PS-WV2-R/NIR1 classification is better than that of the corresponding classes obtained from PS-WV2-VIS.
Table 3 shows the lowest accuracy assessment figures for the maps generated from the non-pan-sharpened WV2 images. The Kappa statistics were not strong enough to indicate good agreement with the validation samples. This is also supported by the visual appearance of these maps (Figure 6).
Visual inspection of the maps produced from the aerial photographs (AP0.14M and AP0.5M) indicated low quality classification results, especially for AP0.5M. They were not able to capture most of the variations visible in the photographs. Further, it can be seen that the Ceriops tagal and Avicennia marina species have mostly been misclassified as Rhizophora stylosa and Bruguiera exaristata (Figure 8c). The descriptive analysis of the AP0.14M classification showed a relatively low accuracy of 68%, with a Kappa statistic of 0.60 (Table 3). The accuracy of the map produced from AP0.5M was slightly lower (Kappa of 0.58), with an overall accuracy of 68% (Table 3).
For individual species classifications, Table 4 shows low user’s accuracy for Bruguiera exaristata in the PS-WV2-VIS image classification compared to the other species. In contrast, Ceriops tagal has a user’s accuracy of 84% with 55% producer’s accuracy. Lumnitzera racemosa, for example, has a producer’s accuracy of 100% while the user’s accuracy is 87%. This means that there were no omissions from this class, but there were inaccurate inclusions, resulting in an over-estimation of its coverage. In the PS-WV2-R/NIR1 classification, the extent of Rhizophora stylosa was also over-estimated.
The individual classification accuracies of Bruguiera exaristata could not be calculated for the WV2 images, as the classification did not identify sufficient coverage of this class. However, both the producer’s and user’s accuracies of Ceriops tagal were 2% for WV2-R/NIR1 (Table 4). Lumnitzera racemosa was highly over-estimated in the WV2-VIS classification, whereas Rhizophora stylosa was over-estimated in the WV2-R/NIR1 classification. These inaccurate inclusions, which produce over-estimations, are clearly evident in the corresponding maps (Figure 6). The only successfully classified class was Avicennia marina.

4. Discussion

4.1. Separating Mangroves and Non-Mangroves

One of the advantages of using WV2 data is the comparatively large number of spectral bands available within a limited spectral range. This enables more flexibility in applying a wide range of rules in OBIA. For example, although the visual appearance of home gardens and other vegetation is similar to that of mangroves, there is a detectable spectral difference for mangroves within the NIR1 and red-edge regions (Figure 4a,b). In a recent study of mangrove mapping using SPOT-5 satellite images, mudflats within mangrove habitat required manual removal [47]. In this study, the yellow spectral band was very useful for extracting buildings, soil and mudflats automatically. The normalized differences calculated from the NIR1 and Red-Edge bands successfully isolated home gardens and other vegetation from mangroves. However, the possibility of repeating this with aerial photographs was restricted by their spectral band limitations.
Most of the healthy green grassy areas near the mangrove boundary had spectral profiles similar to those of the mangrove species in the hinterland (Figure 4). Therefore, the main challenge was to separate mangroves from healthy green grass. To achieve this, contextual information, geometry and neighborhood characteristics of objects were used at different hierarchical levels. For example, the relation to the border of the Candidate-Mangrove-1 and Candidate-Mangrove-2 classes, and normalized difference indices extracted from the spectral information of objects relative to their parent objects, were used successfully. This increased the accuracy of the areal extent of mangroves, and the extraction procedure was fully automated for the pan-sharpened WV2 images.
Overall, this approach effectively discriminated the different land cover classes surrounding a mangrove ecosystem using WV2 images. When using pan-sharpened images, the whole process was automated and can be repeated in a robust manner; no manual editing was involved. However, when applying OBIA to the WV2 images with a spatial resolution of 2 m, some manual editing was involved. The next stage will be to test the transferability of the derived rule set to a different location. Since the rule sets consist of image variables rather than fixed numerical values for class discrimination, they can readily be tested on other areas.
Aerial photographs were visually appealing, and mangroves, as well as gaps between them, were easy to identify by eye. When applying OBIA to the aerial photographs, the high spatial resolution helped to create small, compact objects in the OBIA environment and then to discriminate vegetation and non-vegetation features successfully. However, isolating mangroves from home gardens and other vegetation types was difficult. The OBIA rule set was amended to account for the limited spectral and radiometric resolution of the data. Even though the aerial image dataset required manual editing, the problem of having a heterogeneous mixture of vegetation, mangroves, soil and water can be overcome by isolating the mangrove coverage before further classification.
When comparing the above results, regardless of spatial resolution, a relatively large number of spectral bands within the limited spectral range of the visible and NIR would be an ideal solution for mangrove coverage identification. This is supported by the exploratory spectral separability analysis of WV2 images by Heumann [20], whose study demonstrated increased accuracy when discriminating mangroves from other vegetation using WV2 data and a decision tree classification algorithm. However, a quantitative analysis of the radiometric resolution differences between these datasets must be considered, as radiometric resolution may play a significant role in classification accuracy.

4.2. Comparison of Mangrove Species Classifications

As described in Section 3.2, the differences in the classification of species composition reflect the reliability of using various data sources and advanced algorithms for classification. The overall accuracy of the PS-WV2-VIS image classification is very good. The mangrove zonation pattern described by Brocklehurst and Edmeades [24] for this area has successfully been identified (Figure 6a). Further, these results are supported by the field surveys of Ferwerda et al. [25], who identified the presence of Rhizophora stylosa closer to creek banks or tidal flats, and Lumnitzera racemosa and Ceriops tagal located in the higher tidal range. The classifications in this study detected similar patterns (Figure 6).
The Brocklehurst and Edmeades [24] study, undertaken almost 20 years ago, is the only published mangrove mapping study in Darwin, Australia, and it does not demarcate individual species boundaries. Field investigation of the study area identified many changes that have occurred over this time period; for instance, Rhizophora stylosa and Lumnitzera racemosa were not documented at that time. However, because their study relied on manual field survey methods and visual interpretation of remote sensing images, updating it by repeating their data capture methods would require significant time and financial investments. By contrast, the method introduced by this study is repeatable and could be performed at reasonable time intervals in order to keep mangrove coverage maps up to date.
When using the SVM classification algorithm, attention must be given to the parameter selection of the SVM architecture. For example, since the performance of the SVM depends on the kernel function used and its parameters, the penalty parameter, which governs the selection of an optimal boundary for the training data, has to be considered carefully [21]. In this study, ten was selected as the ideal penalty parameter to locate training samples on the correct side of the decision boundary. A larger penalty parameter leads to over-fitting of the training data, thus reducing classification accuracy.
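The iterative parameter search described above can be approximated with a standard grid search, sketched below using scikit-learn rather than the ENVI/IDL routine used in the study; the candidate values for C and gamma are illustrative assumptions.

```python
# Sketch of a cross-validated grid search over the SVM penalty and kernel width.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "C": [1, 5, 10, 50, 100],             # penalty parameter candidates
    "gamma": [0.01, 0.05, 0.09, 0.5, 1],  # RBF kernel width candidates
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
# search.fit(X_train, y_train)            # training spectra and labels as above
# search.best_params_                     # e.g. {'C': 10, 'gamma': 0.09}
```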
Another consideration is the benefit of image fusion. Even in this study, when classifying mangroves without pan-sharpening, individual species accuracies were low. Over the years, with the development of advanced image and signal processing techniques, image fusion has become a tool that improves the spatial resolution of images while preserving spectral information. Many recent studies have indicated that these algorithms are now sophisticated enough to support improved information extraction rather than solely visualization. For example, Zhang [48] reviewed recent studies that extracted information from pan-sharpened data and concluded that a well pan-sharpened image can improve information extraction. Although an appropriately pan-sharpened image can provide more information for feature extraction, there is room for further development of pan-sharpening techniques [35,48].
When comparing these results with global mangrove studies, special attention must be given to the geographic region. Mangrove ecosystem characteristics differ from region to region due to soil salinity, ocean currents, tidal inundation, and various geomorphic, edaphic, climatic and biotic factors [6,10,49]. Given these considerations, it will be interesting to see whether this technique is a viable alternative for tropical arid or sub-arid mangrove environments, which exhibit greater structural complexity than this study area. Kamal and Phinn [15] compared pixel-based and object-based image analysis techniques using hyperspectral data for mangrove identification, and were not able to achieve reasonably high accuracies for many species using WV2 images. Since their study area lies in Queensland, Australia, and the mangrove ecosystems of that area are known to be similar to those of the Northern Territory, Australia [4,24], their results can be compared directly with this study.
Although the dominant factors driving spectral reflectance variation are the biochemical and biophysical parameters of the plants, the reflectance spectra of mangroves are usually mixed with those of underlying soil, water and atmospheric vapour. Therefore, a degradation of classification can be expected, especially in regions where water absorption is stronger [11]. For example, although the band selection of PS-WV2-R/NIR1 lies within the ideal spectral range identified by Wang and Sousa [18] for the classified species, the results for some classes were lower than those of the PS-WV2-VIS classification due to the broad NIR band, which includes a region highly sensitive to water absorption. The classification of WV2-R/NIR1 indicated the lowest accuracy, demonstrating the importance of high spectral resolution in achieving high accuracies.
Most of the traditional approaches to mangrove remote sensing are based on the interpretation of aerial photographs. Heumann [10] summarized 11 mangrove studies using aerial photographs. Among them, Dahdouh-Guebas et al. [50] successfully mapped individual species using image attributes extracted from aerial photographs; in that sense, they used visual interpretation techniques rather than computational classification. In this application, however, aerial photographs with broad spectral bands could not delineate the features present in mixed pixels due to spectral resolution limitations, resulting in high omission errors. This is also evident from the spectral profile analysis of this study (Figure 4): most of the species could not be discriminated from each other. Therefore, although the same classifier was used, the classification from the higher spatial resolution aerial photography is of lower accuracy.
When comparing the outcomes of these six datasets, radiometric resolution must be taken into account in addition to spatial and spectral resolution. A sensor with high radiometric resolution can capture smaller differences in reflectance values. Further, the electromagnetic characteristics and signal-to-noise ratio of sensors can influence classification accuracy. In this study, it was not possible to take these radiometric effects into account when resampling the aerial photographs to simulate the WV2 image.

4.3. Accuracy Assessment

The visual appearance and the statistical values of the PS-WV2-VIS classification showed the strongest agreement between the generated maps and the reference data, according to Congalton [46]. By contrast, the WV2-VIS and WV2-R/NIR1 classifications have the lowest level of agreement between the species maps and the validation data. The maps generated from AP0.14M and AP0.50M have a moderate level of agreement with the reference data, having Kappa statistics between 0.60 and 0.64 [46]. The results of the error matrix analysis of the species classification were lower than those of the pan-sharpened WV2 image classifications (Table 4). Despite the visual clarity and higher spatial resolution of the aerial photographs, the resultant classifications did not generate better accuracy results than the pan-sharpened WV2 images undergoing the same treatment and process. It can be clearly seen that the WV2 image with a spatial resolution of 2 m was not a successful alternative in this context.
The error matrix was examined to make more analytical observations about individual species. This is a very effective way to describe both errors of inclusion and exclusion of each species represented in the classification [46,51]. As explained by Congalton [51], the user’s accuracy is an indication of whether the pixels classified on the map actually represent the same species on the ground. The probability of the reference pixel being correctly classified is the producer’s accuracy.
Scrutiny of the error matrix reveals that there is confusion in discriminating Ceriops tagal from Bruguiera exaristata and Rhizophora stylosa (Table 4). This is because of their limited spectral separability within that specific range and their patchy distribution pattern. For example, Ceriops tagal is mostly mixed with Bruguiera exaristata and Avicennia marina in this study area [24,25]. The WV2 images with low spatial resolution were not able to spectrally discriminate the species or to resolve this complex structural situation.
The error matrix also demonstrated the successful detection of Avicennia marina from the WV2 images with low spatial resolution. This is because of the visually distinctive, smooth and consistent spatial pattern of Avicennia marina in the images (and its distinctive spectral profile). Although most of the reference pixels of Lumnitzera racemosa and Rhizophora stylosa were correctly classified into their respective classes, their actual representation on the maps was poor. This is evident from the lower user’s accuracies (82% for Lumnitzera racemosa and 70% for Rhizophora stylosa) compared with the producer’s accuracies (13% for Lumnitzera racemosa and Rhizophora stylosa).
Overall, when investigating the classifications of both pan-sharpened WV2 images, Lumnitzera racemosa and Avicennia marina have the highest values for producer’s accuracy, indicating that the probability of these species being classified as another is low. Their smooth and consistent spatial patterns help the classification techniques to detect them accurately.
This is evidence that the combination of high spatial resolution remote sensing data with a relatively large number of spectral bands within the visible and NIR region, and the SVM non-linear machine learning classification technique, is a powerful tool for mixed environments such as mangroves. However, spatial autocorrelation will reduce the accuracy to a certain extent. For instance, noise from non-leaf surfaces such as tree branches and background can degrade the spectral separability of mangroves and thus reduce classification accuracy at the species level.
In both instances, the aerial photographs showed lower classification accuracies than the pan-sharpened WV2 images. For example, in both cases it was not possible to successfully differentiate Bruguiera exaristata and Lumnitzera racemosa from the other species, both having low producers’ and users’ accuracies. The user’s accuracies of Bruguiera exaristata, Ceriops tagal and Rhizophora stylosa indicate high errors of commission, because other species were frequently misclassified as these classes.

5. Conclusions and Recommendations

This study compared high spatial resolution aerial photographs with satellite remote sensing data for the purpose of mangrove species discrimination in two steps. First, mangroves and non-mangroves were separated using an object-based image analysis method. Then, the mangrove coverage alone was classified to the species level. The study demonstrated that a large number of spectral bands at higher spatial resolution (the pan-sharpened WV2 image) discriminated mangroves from other features in an image more accurately than broad spectral bands within the blue, green, and red regions. In addition, our findings show that using a calibrated, high radiometric resolution sensor such as WV2 allows greater classification automation with reduced manual editing.
When further classifying down to the species level, the highest accuracy (overall accuracy of 89%) was obtained from the pan-sharpened WV2 image using five spectral bands within the visible range. The pan-sharpened visible image covered the same spectral range, at the same spatial resolution, as the aerial photographs. Therefore, its higher accuracy compared to the overall accuracy of 68% from the resampled aerial photographs is attributed to the increased number of narrow bands available for analysis, rather than to the total wavelength range. There was, however, no considerable difference between this map and the mangrove species map obtained from the pan-sharpened Red, Red-Edge and NIR1 bands of the WV2 image. Further, this study demonstrated a significant increase in classification accuracy when using pan-sharpened imagery, on the condition that spectral and radiometric integrity is maintained using an appropriate algorithm such as the high pass filter pan-sharpening method.
This study also demonstrated a unique application of the support vector machine algorithm for mangrove species mapping. While this advanced image processing technique has previously been used in other environments, it is particularly beneficial for mangroves because it efficiently deals with the dense, heterogeneous nature of mangrove forests.
Although the method used in this study was tested on a mangrove forest with a small number of species, the results obtained were very impressive and provide a valuable contribution to mangrove species mapping methodologies. We would recommend repeating this process on larger study areas with greater species diversity in order to determine the efficiency and accuracy of the proposed data and SVM methods. The method could also be tested with the blue to NIR1 wavelength bands of pan-sharpened WV2 on a computationally powerful system. The transferability of the rule set developed for OBIA could also be tested on a different dataset. Further, to obtain a higher degree of species classification accuracy, a quantitative analysis of the effects of differences in radiometric resolution should be undertaken.

Supplementary Information

remotesensing-06-06064-s001.pdf

Acknowledgments

The support of the Northern Territory Government, Australia in providing aerial photographs for this study is gratefully acknowledged. Authors also appreciate Sandra Grant and three anonymous reviewers for constructive advice and editorial comments.

Author Contributions

Muditha K. Heenkenda designed the research, processed the remote sensing data, and drafted the manuscript with co-authors providing supervision and mentorship throughout the process.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Food and Agriculture Organization, The World’s Mangroves 1980–2005; FAO Forestry Paper 153; Food and Agriculture Organization of the United Nations: Rome, Italy, 2007.
  2. Suratman, M.N. Carbon Sequestration Potential of Mangroves in Southeast Asia. In Managing Forest Ecosystems: The Challenge of Climate Change; Bravo, F., Jandl, R., LeMay, V., Gadow, K., Eds.; Springer: New York, NY, USA, 2008; pp. 297–315. [Google Scholar]
  3. Bouillon, S.; Rivera-Monroy, V.H.; Twilley, R.R.; Kairo, J.G. Mangroves. In The Management of Natural Coastal Carbon Sinks; Laffoley, D., Grimsditch, G., Eds.; IUCN: Gland, Switzerland, 2009; pp. 13–22. [Google Scholar]
  4. Komiyama, A.; Ong, J.E.; Poungparn, S. Allometry, biomass, and productivity of mangrove forests: A review. Aquatic Bot 2008, 89, 128–137. [Google Scholar]
  5. Metcalfe, K. The Biological Diversity, Recovery from Disturbance and Rehabilitation of Mangroves in Darwin Harbour, Northern Territory. In Faculty of Education, Health & Science; Charles Darwin University: Darwin, NT, Australia, 2007; pp. 1–17. [Google Scholar]
  6. Alongi, D.M. Present state and future of the world’s mangrove forests. Environ. Conserv 2002, 29, 331–349. [Google Scholar]
  7. Blasco, F.; Gauquelin, T.; Rasolofoharinoro, M.; Denis, J.; Aizpuru, M.; Caldairou, V. Recent advances in mangrove studies using remote sensing data. Mar. Freshw. Res. 1998, 49, 287–296. [Google Scholar]
  8. Green, E.P.; Clark, C.D.; Mumby, P.J.; Edwards, A.J.; Ellis, A.C. Remote sensing techniques for mangrove mapping. Int. J. Remote Sens 1998, 19, 935–956. [Google Scholar]
  9. Lucas, R.M.; Ellison, J.C.; Mitchell, A.; Donnelly, B.; Finlayson, M.; Milne, A.K. Use of stereo aerial photography for quantifying changes in the extent and height of mangroves in tropical Australia. Wetl. Ecol. Manag 2002, 10, 161–175. [Google Scholar]
  10. Heumann, B.W. Satellite Remote sensing of mangrove forests: Recent advances and future opportunities. Progr. Phys. Geogr 2011, 35, 87–108. [Google Scholar]
  11. Adam, E.; Mutanga, O.; Rugege, D. Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: A review. Wetl. Ecol. Manag 2010, 18, 281–296. [Google Scholar]
  12. Gao, J. A hybrid method toward accurate mapping of mangroves in a marginal habitat from SPOT multispectral data. Int. J. Remote Sens 1998, 19, 1887–1899. [Google Scholar]
  13. Held, A.; Ticehurst, C.; Lymburner, L.; Williams, N. High resolution mapping of tropical mangrove ecosystems using hyperspectral and radar remote sensing. Int. J. Remote Sens 2003, 24, 2739–2759. [Google Scholar]
  14. Nandy, S.; Kushwaha, S.P.S. Study on the utility of IRS LISS-III data and the classification techniques for mapping of Sunderban mangroves. J. Coast. Conserv 2010, 15, 123–137. [Google Scholar]
  15. Kamal, M.; Phinn, S. Hyperspectral data for mangrove species mapping: A comparison of pixel-based and object-based approach. Remote Sens 2011, 3, 2222–2242. [Google Scholar]
  16. Vaiphasa, C.; Ongsomwang, S.; Vaiphasa, T.; Skidmore, A.K. Tropical mangrove species discrimination using hyperspectral data: A laboratory study. Estuar. Coast. Shelf Sci 2005, 65, 371–379. [Google Scholar]
  17. Vaiphasa, C.; Skidmore, A.K.; Boer, W.F.D.; Vaiphasa, T. A hyperspectral band selector for plant species discrimination. ISPRS J. Photogramm. Remote Sens 2007, 62, 225–235. [Google Scholar]
  18. Wang, L.; Sousa, W.P. Distinguishing mangrove species with laboratory measurements of hyperspectral leaf reflectance. Int. J. Remote Sens 2009, 30, 1267–1281. [Google Scholar]
  19. Kuenzer, C.; Bluemel, A.; Gebhardt, S.; Quoc, T.V.; Dech, S. Remote sensing of mangrove ecosystems: A review. Remote Sens 2011, 3, 878–928. [Google Scholar]
  20. Heumann, B.W. An object-based classification of mangroves using a hybrid decision tree—support vector machine approach. Remote Sens 2011, 3, 2440–2460. [Google Scholar]
  21. Tso, B.; Mather, P.M. Support Vector Machines. In Classification Methods for Remotely Sensed Data, 2nd ed.; CRC Press: New York, NY, USA, 2009; pp. 125–153. [Google Scholar]
  22. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens 2011, 66, 247–259. [Google Scholar]
  23. Huang, C.; Davis, L.S.; Townshend, J.R.G. An assessment of support vector machines for land cover classification. Int. J. Remote Sens. 2002, 23, 725–749. [Google Scholar]
  24. Brocklehurst, P.; Edmeades, B. The Mangrove Communities of Darwin Harbour: Northern Territory; Department of Lands, Planning and Environment, Northern Territory Government: Darwin, NT, Australia, 1996. [Google Scholar]
  25. Ferwerda, J.G.; Ketner, P.; McGuinness, K.A. Differences in regeneration between hurricane damaged and clear-cut mangrove stands 25 years after clearing. Hydrobiologia 2007, 591, 35–45. [Google Scholar]
  26. Wightman, G. Mangrove Plant Identikit for North Australia’s Top End; Greening Australia: Darwin, NT, Australia, 2006; p. 64. [Google Scholar]
  27. Duke, N.C. Australia’s Mangroves: The Authoritative Guide to Australia’s Mangrove Plants; University of Queensland: Brisbane, QLD, Australia, 2006; p. 200. [Google Scholar]
  28. DigitalGlobe, The Benefits of the 8 Spectral Bands of WorldView-2; DigitalGlobe: Longmont, CO, USA, 2009.
  29. Northern Territory Government, Northern Territory Digital Data and Information; Department of Lands, Planning and the Environment: Darwin, NT, Australia, 2010.
  30. DigitalGlobe, DigitalGlobe Core Imagery Products Guide; DigitalGlobe: Longmont, CO, USA, 2011.
  31. Updike, T.; Comp, C. Radiometric Use of WorldView-2 Imagery. In Technical Note; DigitalGlobe: Longmont, CO, USA, 2010; pp. 1–17. [Google Scholar]
  32. Amro, I.; Mateos, J.; Vega, M.; Molina, R.; Katsaggelos, A.K. A survey of classical methods and new trends in pansharpening of multispectral images. EURASIP J. Adv. Signal Process 2011. [Google Scholar] [CrossRef]
  33. Chavez, J.P.S.; Sides, S.C.; Anderson, J.A. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogramm. Eng. Remote Sens 1991, 57, 295–303. [Google Scholar]
  34. Chavez, J.P.S.; Bowell, J.A. Comparison of the spectral information content of Landsat Thematic Mapper and SPOT for three different sites in the Phoenix, Arizona, region. Photogramm. Eng. Remote Sens 1988, 54, 1699–1708. [Google Scholar]
  35. Palubinskas, G. Fast, simple, and good pan-sharpening method. J. Appl. Remote Sens 2013. [Google Scholar] [CrossRef]
  36. Intergraph Corporation, IMAGINE Workspace: HPF Resolution Merge Manual; Intergraph: Huntsville, AL, USA, 2013.
  37. Australian Government. 2013. Available online: http://www.ga.gov.au/ (accessed on 25 January 2013).
  38. Henrich, V.; Krauss, G.; Götze, C.; Sandow, C. Index Database. 2012. Available online: http://www.indexdatabase.de/db/i-single.php?id=126 (accessed on 29 May 2014).
  39. Ahamed, T.; Tian, L.; Zhang, Y.; Ting, K.C. A review of remote sensing methods for biomass feedstock production. Biomass Bioenergy 2011, 35, 2455–2469. [Google Scholar]
  40. Mertens, K.C.; Verbeke, L.P.C.; Ducheyne, E.I.; De Wulf, R.R. Using genetic algorithms in sub-pixel mapping. Int. J. Remote Sens 2003, 24, 4241–4247. [Google Scholar]
  41. Tatem, A.J.; Lewis, H.G.; Atkinson, P.M.; Nixon, M.S. Super-resolution land cover pattern prediction using a Hopfield neural network. Remote Sens. Environ 2002, 79, 1–14. [Google Scholar]
  42. Nguyen, M.Q.; Atkinson, P.M.; Lewis, H.G. Super-resolution mapping using Hopfield neural network with panchromatic image. Int. J. Remote Sens 2011, 32, 6149–6176. [Google Scholar]
  43. Tatem, A.J.; Lewis, H.G.; Atkinson, P.M.; Nixon, M.S. Multiple-class land-cover mapping at the sub-pixel scale using a Hopfield neural network. Int. J. Appl. Earth Obs. Geoinf 2001, 3, 184–190. [Google Scholar]
  44. Chapelle, O.; Haffner, P.; Vapnik, V.N. Support vector machines for histogram-based image classification. IEEE Trans. Neural Netw 1999, 10, 1055–1064. [Google Scholar]
  45. Canty, M.J. Supervised Classification: Part 1. In Image Analysis, Classification, and Change Detection in Remote Sensing with Algorithms for ENVI/IDL, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2009; pp. 1–441. [Google Scholar]
  46. Congalton, R.G. Accuracy assessment and validation of remotely sensed and other spatial information. Int. J. Wildl. Fire 2001, 10, 321–328. [Google Scholar]
  47. Quoc, T.V.; Oppelt, N.; Leinenkugel, P.; Kuenzer, C. Remote sensing in mapping mangrove ecosystem—An object-based approach. Remote Sens 2013, 5, 183–201. [Google Scholar]
  48. Zhang, Y. Pan-Sharpening for Improved Information Extraction. In Advances in Photogrammetry, Remote Sensing and Spatial Information Sciences; Taylor & Francis: London, UK, 2008; pp. 185–202. [Google Scholar]
  49. Alongi, D.M. Zonation and seasonality of benthic primary production and community respiration in tropical mangrove forests. Oecologia 1994, 98, 320–327. [Google Scholar]
  50. Dahdouh-Guebas, F.; Verheyden, A.; Kairo, J.G.; Jayatissa, L.P.; Koedam, N. Capacity building in tropical coastal resource monitoring in developing countries: A re-appreciation of the oldest remote sensing method. Int. J. Sustain. Dev. World Ecol 2006, 13, 62–76. [Google Scholar]
  51. Congalton, R.G. A review of assessing the accuracy of classification of remotely sensed data. Remote Sens. Environ 1991, 37, 35–46. [Google Scholar]
Figure 1. The study area, located in the coastal mangrove forest of Rapid Creek in Darwin, Northern Territory, Australia; WV2 data © DigitalGlobe. (Coordinate system: Universal Transverse Mercator Zone 52 L, WGS84).
Figure 2. Mangrove species mapping using remotely sensed data. Image segmentation and initial classification were completed in eCognition Developer 8.7 software. The support vector machine algorithm was implemented in the ENVI-IDL environment for species classification.
Figure 3. Schematic explanation of image objects at different hierarchical levels: the first level was the pixel level; vegetation and non-vegetation areas were identified at the second level; and mangrove areas were extracted at the final level.
Figure 4. Spectral profiles of: (a) all features except mangroves, extracted from the WV2 image; (b) mangrove species only, extracted from the WV2 image; (c) mangrove species extracted from aerial photographs with a spatial resolution of 0.14 m; and (d) mangrove species extracted from aerial photographs with a spatial resolution of 0.5 m. (AM-Avicennia marina, CT-Ceriops tagal, BE-Bruguiera exaristata, LR-Lumnitzera racemosa, RS-Rhizophora stylosa, SA-Sonneratia alba).
Figure 5. The locations of field samples available for calibration and validation purposes, and mangrove coverage extracted from the WV2 image.
Figure 6. (a) Classification of the pan-sharpened WV2 image using blue to red-edge bands; (b) Classification of the pan-sharpened WV2 image using red to NIR1 bands; (c) Classification of the WV2 image using blue to red-edge bands; (d) Classification of the WV2 image using red to NIR1 bands; (e) Classification of aerial photographs with 0.14 m spatial resolution; and (f) Classification of aerial photographs with 0.5 m spatial resolution.
Figure 7. The comparison of the WorldView-2 image and the different classification approaches: (a) Pan-sharpened WV2 image; (b) Classification results of PS-WV2-VIS; (c) Classification results of PS-WV2-R/NIR1; (d) Aerial photograph; (e) Classification results of AP0.14M; (f) Classification results of AP0.5M; (g) WV2 image; (h) Classification results of WV2-VIS; (i) Classification results of WV2-R/NIR1.
Figure 8. Extents (%) of mangrove species obtained using six different input datasets: (a) pan-sharpened WV2 image with bands blue to red-edge; (b) pan-sharpened WV2 image with bands red to NIR1; (c) aerial photographs with 0.14 m spatial resolution; (d) aerial photographs with 0.5 m spatial resolution; (e) WV2 image with bands blue to red-edge; and (f) WV2 image with bands red to NIR1.
Table 1. Spectral band information for the WV2 image and the aerial photographs obtained from an UltraCamD camera [29,30].
Band | Spectral Range (nm) | Spatial Resolution (m)
WorldView-2
Panchromatic | 447–808 | 0.5
Coastal | 396–458 | 2.0
Blue | 442–515 | 2.0
Green | 506–586 | 2.0
Yellow | 584–632 | 2.0
Red | 624–694 | 2.0
Red-Edge | 699–749 | 2.0
NIR1 | 765–901 | 2.0
NIR2 | 856–1043 | 2.0
Aerial photographs
Blue | 380–600 | 0.14
Green | 480–700 | 0.14
Red | 580–720 | 0.14
Table 2. Number of samples (4 m² each) used for training and number of sample points generated for validation for each mangrove species.
Species | No. of Samples for Training (4 m² or 16 Pixels Each) | No. of Points for Validating the Classification
Avicennia marina (AM) | 22 | 216
Bruguiera exaristata (BE) | 14 | 106
Ceriops tagal (CT) | 10 | 80
Lumnitzera racemosa (LR) | 12 | 78
Rhizophora stylosa (RS) | 11 | 96
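The plot dimensions in Table 2 follow directly from the pixel size: at a 0.5 m pixel size, such as the panchromatic band listed in Table 1, each pixel covers 0.5 m × 0.5 m = 0.25 m² of ground, so a 4 m² training sample corresponds to 4 / 0.25 = 16 pixels.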
Table 3. Overall accuracy and Kappa statistics obtained for each classification.
 | PS-WV2-VIS | PS-WV2-R/NIR1 | WV2-VIS | WV2-R/NIR1 | AP0.14M | AP0.5M
Overall accuracy | 89% | 87% | 58% | 42% | 68% | 68%
Kappa | 0.86 | 0.84 | 0.46 | 0.25 | 0.60 | 0.58
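For reference, the overall accuracy and Kappa values in Table 3 follow the standard error-matrix definitions used in the accuracy assessment literature cited here [46,51]; the notation below is ours rather than the authors'. With n_ii the number of correctly classified validation points for class i, n_i+ and n_+i the row (map) and column (reference) totals, and N the total number of validation points,

\[
\mathrm{OA} = \frac{1}{N}\sum_{i} n_{ii}, \qquad
\kappa = \frac{N \sum_{i} n_{ii} - \sum_{i} n_{i+}\, n_{+i}}{N^{2} - \sum_{i} n_{i+}\, n_{+i}}.
\]

Under these definitions, the Kappa of 0.86 for PS-WV2-VIS indicates agreement well beyond chance, whereas the 0.25 for WV2-R/NIR1 indicates only fair agreement.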
Table 4. Comparison of producer’s accuracy and user’s accuracy for each species, obtained from different remote sensing data sources. (AM—Avicennia marina, CT—Ceriops tagal, BE—Bruguiera exaristata, LR—Lumnitzera racemosa, RS—Rhizophora stylosa).
Image | Accuracy (%) | AM | BE | CT | LR | RS
PS-WV2-VIS | Producer's acc. | 98 | 73 | 55 | 100 | 95
PS-WV2-VIS | User's acc. | 98 | 72 | 84 | 87 | 89
PS-WV2-R/NIR1 | Producer's acc. | 95 | 54 | 70 | 100 | 100
PS-WV2-R/NIR1 | User's acc. | 100 | 83 | 68 | 72 | 81
WV2-VIS | Producer's acc. | 98 | ** | ** | 82 | 28
WV2-VIS | User's acc. | 99 | ** | ** | 13 | 19
WV2-R/NIR1Producer’s acc.94**24470
User’s acc.98**21213

AP0.14M | Producer's acc. | 83 | 27 | 45 | 73 | 77
AP0.14M | User's acc. | 94 | 46 | 35 | 60 | 59
AP0.5M | Producer's acc. | 91 | 20 | 65 | 79 | 44
AP0.5M | User's acc. | 77 | 25 | 70 | 72 | 65
** Did not calculate individual accuracies.
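Using the same notation as above (again ours, not the authors'), the per-species measures reported in Table 4 are the standard ones:

\[
\mathrm{PA}_{j} = \frac{n_{jj}}{n_{+j}}, \qquad
\mathrm{UA}_{i} = \frac{n_{ii}}{n_{i+}},
\]

i.e., producer's accuracy is the fraction of reference points of a species that the map labels correctly (the complement of omission error), and user's accuracy is the fraction of points mapped as that species that agree with the reference data (the complement of commission error).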
