Article

An Advanced Data Fusion Method to Improve Wetland Classification Using Multi-Source Remotely Sensed Data

Department of Earth and Space Science and Engineering, York University, Toronto, ON M3J 1P3, Canada
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(22), 8942; https://doi.org/10.3390/s22228942
Submission received: 14 October 2022 / Revised: 9 November 2022 / Accepted: 14 November 2022 / Published: 18 November 2022
(This article belongs to the Special Issue Feature Papers in the Remote Sensors Section 2022)

Abstract

The goal of this research was to improve wetland classification by fully exploiting multi-source remotely sensed data. Three distinct classifiers were designed to distinguish individual or compound wetland categories using random forest (RF) classification. They were designed, in part, to best use the available remotely sensed features and to maximize classification accuracy. The results from these classifiers were integrated according to Dempster–Shafer theory (D–S theory). The developed method was tested on data collected from a study area in Northern Alberta, Canada. The data utilized were Landsat-8 and Sentinel-2 (multi-spectral), Sentinel-1 (synthetic aperture radar—SAR), and a digital elevation model (DEM). Classification of fen, bog, marsh, swamp, and upland resulted in an overall accuracy of 0.93 using the proposed methodology, an improvement of 5% when compared to a traditional classification method based on the aggregated features from these data sources. It was noted that, with the traditional method, some pixels were misclassified with a high level of confidence (>85%). Such misclassification was significantly reduced (by ~10%) by the proposed method. Results also showed that some features important in separating compound wetland classes were not considered important by the traditional method based on the RF feature selection mechanism. When used in the proposed method, these features increased the classification accuracy, which demonstrated that the proposed method provides an effective means to fully employ available data to improve wetland classification.

1. Introduction

Wetlands are critically linked to major issues such as climate change, wildlife habitat health, biodiversity, and groundwater. More specifically, wetlands play important roles in flood mitigation, water quality protection, and global carbon and methane cycles, acting as buffers against droughts, protecting coastlines from rising tides and storms, and retaining sediment [1,2,3,4,5]. North American and global wetland losses are estimated to be on the order of 50% since the early 1700s, and nearly 35% of global wetlands have been lost since 1970 [2,3,4,5]. Wetland conservation is well established as a matter of national and international public policy. Accurate maps of wetland boundaries and their changes are essential for effective monitoring, and remotely sensed imagery provides researchers with a means to achieve those goals [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21].
Remotely sensed imagery has been used to generate wetland maps with various levels of success [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22]. High-spatial-resolution remotely sensed imagery has produced some of the most accurate wetland maps, with the disadvantages of limited coverage and large time and resource demands; turnaround times for these products can be years [4]. Wetland classification using medium-spatial-resolution satellite imagery such as the Landsat or ASTER series of sensors is common and considered a standard approach [17], with the best results found when one class dominates the classification area (>30 m²) [17]. However, studies have shown that, when mixtures of wetland types occur at the same scale as the sensor resolution [17], class separation becomes more difficult. The addition of ancillary data such as elevation maps and field samples to the classification of medium-resolution imagery has been found to increase classification accuracies, with resulting accuracies ranging from 30% to 82%, depending on the techniques used [12,17,23,24,25].
In most classification publications, only accuracy-related measures were reported. However, it is also important to examine the uncertainty of the classification results and the nature of the misclassification [26]. It is common that the uncertainty is high for some of the misclassified pixels. However, for some misclassified pixels, the uncertainty might be low, which is more problematic in the utilization of the classification results. The same is true for correctly classified pixels with high uncertainty. In our previous studies [27,28], where a random forest (RF) classifier was executed on a concatenated set of features derived from multi-source remotely sensed data, up to 30% of misclassified pixels were classified with a confidence level of greater than 85%. The results also showed that, for about 40% of the misclassified pixels, the correct class (ground truth) was found to be the second choice, and, in some cases, the difference in the posterior probability between the top two categories was only ~0.05. The method used in our previous studies [27,28] is commonly used for cover type classification using multi-source remotely sensed data, as discussed in more detail below. An apparent conclusion is that the aggregated features may not be able to reliably separate the land-cover types of interest. However, this does not mean that these cover types cannot be separated by those features. With the aggregation, some features important to separate certain cover types might be masked by others. Classification may be improved if features are utilized differently. In that vein, one motivation for this study was to develop advanced methods to reduce these errors by maximizing the usage of available datasets.
With the wide availability of multi-source remotely sensed data, data fusion techniques have also been utilized to improve the classification of remotely sensed imagery [29,30,31,32,33,34,35,36]. Generally speaking, data fusion can be performed at the pixel, feature, and decision levels. With pixel-level fusion, an improved fused image is generated by combining two data sources [37]. The most notable example is the pan-sharpened image created by combining low-spatial-resolution multi-spectral imagery and high-spatial-resolution panchromatic imagery. In the context of image classification, individual data sources are not explicitly analyzed and are potentially not fully utilized with pixel-level fusion methods. Accordingly, data fusion at the feature level and decision level is the focus of this study. Feature-level fusion involves concatenating sets of features before the classification process. Decision-level fusion involves merging the decisions from multiple classifiers, either with different features or using different classification methods. Feature-level fusion is the most commonly used, due to its simplicity and demonstrated success [31,38]. However, the high dimensionality of the feature space that results from feature-level fusion, even after feature reduction efforts, is likely to be a concern for applications where the size of training samples is small [39]. In addition, features derived from different data sources are usually treated equally by most classifiers (such as RF methods), even though some of the data sources may be more reliable than others [26]. On the contrary, each data source is analyzed separately in decision-level fusion, and the uncertainty and imprecision associated with each data source can be measured and considered in the fusion process. The challenge with decision-level fusion lies in the selection of propositions for each data source and effective ways to combine the decisions. In this study, we developed a classifier based on both feature-level and decision-level fusion to improve wetland classification, henceforth referred to as the ensemble classifier. Below, we describe the motivation for the method development and its uniqueness compared with existing fusion methods for remote sensing classification.
A previous study of ours showed that broad class separations are an effective way of classifying data in a hierarchical fashion [28]. It showed that different image features and/or datasets could be tailored, through analysis, for use at different stages of a classification hierarchy, producing superior or more consistent results compared to previous studies that relied strictly on the resolution or characteristics of those inputs to drive splits in the hierarchy [40,41,42,43,44,45,46,47,48,49,50,51]. In the current study, we leveraged these broad class separations to create two additional classifiers, alongside a traditional classifier focused on separating individual classes, forming an ensemble classifier that best utilizes the available datasets for the study area. This was intended not only to increase classification accuracy but also to reduce the number of high-confidence misclassified pixels through additional observation and analysis (as mentioned earlier). We combined these classifiers using the results from the three classifiers under the Dempster–Shafer (D–S) theory of evidence combination, due to its capability of handling uncertainty [32].
This study is unique in two aspects considering the proposed method and analysis. For most existing methods related to ensemble classifiers and/or decision-level fusion, multiple classifiers are employed to deal with identical sets of classes [30,34,52,53]. Our work fully leveraged the diversified features derived from the available multi-source remotely sensed data in individually designed classifiers with unique class propositions. Furthermore, prior knowledge of the wetland cover types and the remotely sensed data was also utilized in the selection of features for each classifier, in addition to the data-driven machine learning approaches that are commonly and exclusively used in most classification methods. One might argue that hierarchical classification methods effectively utilize features to separate different categories of cover types. However, the uncertainty associated with the classifiers in the hierarchy was not addressed [28] and is difficult to account for in the lower parts of the hierarchy. The D–S theory used in this study provides an effective means to consider the uncertainty in each classifier. Similarly to the argument on rule-based post-processing applied to classification maps, the advantage of this method lies in its handling of uncertainty and its avoidance of the threshold selection required by rule-based methods. In this study, a detailed analysis of the nature of misclassification was also carried out, which has been lacking in the literature [29,30,34,52,53].
It is worth mentioning that deep learning is attracting substantial attention in cover type classification using multi-source remotely sensed data, including but not limited to wetland classification [40,41,42,43,44,45,46]. These deep learning methods can also be categorized as performing pixel-level, feature-level, or decision-level fusion, with most implementing feature-level fusion. The issues discussed earlier on pixel-level and feature-level fusion for classification also apply to those based on deep learning. Nevertheless, results have shown classification accuracies which are not dramatically different from those of other classification techniques. In addition, deep learning techniques generally require very large datasets in terms of available features and training data, as well as substantial computational resources. With that said, decision-level fusion methods, including the one proposed in this study, can be used together with deep learning networks.
The remainder of the paper is structured as follows: in Section 2, the study area and images used are described; the methodology including data processing, feature extraction and selection, and the developed ensemble classifier is documented in Section 3; results and a discussion are presented in Section 4 and Section 5, respectively; in Section 6, conclusions and future work are provided.

2. Study Area and Images Used

The study area was selected from a location in Northern Alberta due to the availability of the Alberta Biodiversity Monitoring Institute (ABMI) wetland inventory [54]. Figure 1 illustrates the approximate area of interest. This wetland inventory comprises five different land cover classes (fens, bogs, marshes, swamps, and upland), identified and mapped using photo-interpreted data [54]. The ABMI wetland inventory is parsed out in individual study areas throughout Northern Alberta, Canada, each approximately 21 km² in size. For this study, 10 sites, as shown in Figure 1, were selected because they are dominated by wetland cover types. Collection and analysis of the photo data were completed in 2016 [54].
The land-cover classes identified in the ABMI wetland inventory (bogs, fens, marshes, swamps, and upland) and their detailed descriptions can be found in [47]. Below, the characteristics of these cover types most relevant to remote sensing data interpretation are summarized. Bogs are hydrologically isolated peatlands, receiving water from precipitation only. They are stagnant, with low nutrient availability, and support low biological diversity. Bogs typically have a low water table, appearing dry at the surface.
Fens are also peatlands, but hydrologically connected. Fens can be nutrient-poor or -rich, depending on nutrient input from water sources. Fens often have high water tables and connect wetland systems over great distances. Marshes are mineral wetlands. They exhibit a water table that can vary throughout the season. Marshes receive water from a combination of ground water, runoff, and precipitation, as well as through connecting streams. They are periodically dry, with nutrient-rich soil, promoting the growth of a diverse range of emergent, grass-like vegetation. Swamps are considered mineral wetlands, although they may also exist as peatlands in some cases, with woody plant cover that comprises more than 25% of the total area. Swamps receive water from a combination of ground water, runoff, and precipitation. Water movement ranges from stagnant to dynamic. Swamps typically represent transition zones between other wetlands and non-wetland areas, known as uplands, and support high biological diversity.
The upland class is a broad non-wetland class created by the ABMI in order to encompass non-wetland land covers such as grassy areas, cleared areas, and sparse and dense forests of various species. This includes upland deciduous, mixed-wood, and coniferous stands (age classes combined), grassy areas, and shrub areas [54].
For the individual study areas shown in Figure 1, ground-truth data were provided in the ABMI dataset. The number of labeled pixels for each study area was determined from the size of land-cover plots identified by aerial imagery and ground survey data as per the ABMI wetland maps. As shown by the example of areas identified as swamp in Figure 2, the labeled pixels were clustered in areas. In the selection of training and validation data, the groupings of pixels were maintained. On average, approximately 64% of the identified pixels were used for training, with the remainder used for validation. These pixel and land-cover assignments are summarized in Table 1.
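To make the group-preserving split concrete, the sketch below shows one way it could be implemented, using scikit-learn's GroupShuffleSplit as a stand-in for the manual selection described above; all variable names and sizes are illustrative rather than taken from the ABMI data.

```python
# A minimal sketch, assuming hypothetical arrays: clustered ground-truth pixels
# are split into training and validation sets while keeping each labeled plot
# (group) intact, so no plot contributes pixels to both sides.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_pixels = 1000
features = rng.normal(size=(n_pixels, 184))    # 184 candidate features per pixel
labels = rng.integers(0, 5, size=n_pixels)     # fen, bog, marsh, swamp, upland
plot_ids = rng.integers(0, 40, size=n_pixels)  # land-cover plot each pixel belongs to

# Roughly 64% of the data for training; the split operates on whole plots,
# so the achieved pixel fraction is approximate.
splitter = GroupShuffleSplit(n_splits=1, train_size=0.64, random_state=0)
train_idx, val_idx = next(splitter.split(features, labels, groups=plot_ids))

assert set(plot_ids[train_idx]).isdisjoint(plot_ids[val_idx])  # no plot is split
```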
Landsat-8, Sentinel-2, and Sentinel-1 imagery represented the primary image sources used in this study. Attempts were made to acquire imagery close to 2016–2017 to match the collection dates of the aerial imagery used to create the ABMI dataset. However, additional images from other dates were also collected in order to create a more robust dataset. It should also be noted that Sentinel-1 imagery coverage of the study area was not available until 2017. The Landsat-8 series of sensors collect multispectral optical imagery with a spatial resolution of 30 m by 30 m across all spectral bands, including bands centered on the thermal spectrum [55]. All Landsat imagery used was Level 1G, which is both radiometrically and geometrically corrected.
The Sentinel-2 imagery used was the Level 2A bottom-of-atmosphere reflectance in cartographic geometry product. These images have a spatial resolution of 10 m by 10 m; the four bands at this resolution, centered on 492.4 nm, 559.8 nm, 664.6 nm, and 832.8 nm (blue, green, red, and near-infrared (NIR), respectively), were used [56]. Sentinel-2 imagery was chosen due to its availability, higher resolution compared to Landsat-8, and spectral bands which are useful in characterizing both vegetation and water.
Sentinel-1 imagery (C-band) had a resolution of 5 m by 20 m [57] and two channels in VV and VH. Sentinel-1 imagery was resampled to 10 m by 10 m in order to facilitate ease of analysis with the other imagery products.
Lastly, a digital elevation model (DEM) of the study area, taken from the Canadian Digital Surface Model [58] at a spatial resolution of 30 m by 30 m, was used, together with a DEM-derived slope.
In total, three Landsat-8 images, seven Sentinel-2 images, and four Sentinel-1 images were collected. Table 2 summarizes the dates and types of imagery that were collected for this study.

3. Methodology

In this study, an ensemble classifier using a feature- and decision-level fusion framework was developed. Leveraging prior knowledge and all available data in the study area, three classifiers were first designed to reliably distinguish individual or compound classes among the five cover types (fen, swamp, marsh, bog, and upland), executed in parallel with one another using an RF classifier, and the results from these classifiers were then combined according to the D–S theory. The base of this ensemble classifier was the commonly used (also referred to as the traditional method) feature-level fusion method (Classifier #1) for the classification of the five individual classes (fen, swamp, marsh, bog, and upland). As discussed later, with the traditional method, some features known to have high discriminant power in separating broad classes (such as wetland and upland) are often not selected using automatic feature selection methods. This may lead to confusion between wetland and upland classes due to the absence of these features. To overcome this problem, two additional classifiers (Classifiers #2 and #3) were designed to classify compound cover types. For Classifier #2, two broad cover types were classified: wetland (fen, swamp, marsh, bog) vs. dry land covers (upland). For Classifier #3, the focus was on separating more structured land covers (swamp and upland) from less structured land covers (fen, bog, and marsh). Due to the uncertainty expected from any classification method, the D–S theory was employed to combine the results from these classifiers.
Figure 3 outlines the overall workflow for our approach, and the details are described below.

3.1. Features and Their Derivation

In total, 184 candidate features were derived and are summarized in Table 3. The calculations and analyses performed to produce these features are described below.
For this study, 11 different types of remotely sensed features were used. They were vegetation indices, surface albedo, and textural measures derived from multi-spectral imagery; surface temperature from the thermal bands of multi-spectral imagery; backscatter coefficients and derived features from SAR imagery; and digital elevation models (DEMs) and features derived from them. These features were selected in order to characterize vegetative activity, water content, radiometric absorption, horizontal structure and roughness, water content of surface objects, and topography. It is worth mentioning that textural features derived from Sentinel-2 imagery were selected due to their success in the classification of land covers in the popular literature [40,59,60,61] and from our own observations. This is further expanded upon in Section 4 and Section 5. Surface temperature, from our past study [27], was shown to be useful in classifying wetland types.
Specifically, the vegetation indices used included the normalized difference vegetation index (NDVI), the enhanced vegetation index (EVI), and the near-infrared reflectance vegetation index (NIRv). NDVI is a popular and standard vegetation index sensitive to leaf area index, coverage, pigment content of vegetation canopies, and vegetative photoactivity [62,63]. EVI is defined as
$$\mathrm{EVI} = 2.5 \times \frac{R_{B8} - R_{B4}}{R_{B8} + 6R_{B4} - 7.5R_{B2} + 1},\qquad(1)$$
where $R_{B2}$, $R_{B4}$, and $R_{B8}$ are the reflectance at spectral bands 2, 4, and 8 of Sentinel-2 imagery, respectively. EVI was not calculated using Landsat-8 imagery due to early tests which found that EVI using Sentinel-2 imagery was of much greater significance during classification. EVI has been shown to be effective in characterizing vegetation features such as leaf area index and temporal changes in vegetative activity, and in resolving vegetation differences in areas with complex background surface reflectance [64,65,66]. NIRv, a near-infrared reflectance vegetation index, is defined as
$$\mathrm{NIRv} = \frac{R_{B8} - R_{B4}}{R_{B8} + R_{B4}} \times R_{B8}.\qquad(2)$$
Its success in characterizing vegetation in a mixed pixel environment and low leaf areas has been reported in the literature [67]. Again, NIRv was calculated for Sentinel-2 imagery only due to its larger significance when compared to NIRv calculated with Landsat-8 imagery. NDWI works on a similar principle to NDVI but is designed to be sensitive to water content rather than to photosynthetic activity. NDWI is defined as
$$\mathrm{NDWI} = \frac{R_{B5} - R_{B6}}{R_{B5} + R_{B6}},\qquad(3)$$
where $R_{B5}$ and $R_{B6}$ are the reflectance in the green and mid-infrared (MIR) bands, respectively, of Landsat-8 imagery. NDWI was only calculated for Landsat-8 imagery because early tests showed that, for Sentinel-2 imagery in our study area, NDVI was a much more significant feature than NDWI. The authors of [68] asserted that NDWI is more sensitive than NDVI to changes in the liquid water content of vegetation canopies. They [68] also argued that atmospheric aerosol scattering effects in the MIR region are weak; thus, NDWI is less sensitive to atmospheric optical depth than NDVI. Due in part to its success in the popular scientific literature, NDWI is a standard layer product for the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor [69].
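For concreteness, a minimal numpy sketch of the index computations in Equations (1)–(3), plus NDVI, is given below; the band arrays and their units (surface reflectance) are assumptions, not extracted from the processing chain used in this study.

```python
# A minimal numpy sketch of Equations (1)-(3) plus NDVI; the band arrays are
# assumed to hold surface reflectance, and the function names are illustrative.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def evi(b8, b4, b2):
    # Equation (1), Sentinel-2 bands: B8 (NIR), B4 (red), B2 (blue)
    return 2.5 * (b8 - b4) / (b8 + 6.0 * b4 - 7.5 * b2 + 1.0)

def nirv(b8, b4):
    # Equation (2): NDVI scaled by NIR reflectance
    return ndvi(b8, b4) * b8

def ndwi(green, mir):
    # Equation (3), Landsat-8: green and mid-infrared reflectance
    return (green - mir) / (green + mir)
```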
Surface albedo is a measure of reflectivity from a surface, ranging from 0 (full absorption) to 1 (complete reflectance). A standard approach in determining the surface albedo using Landsat imagery is through a numerically determined relationship described by Liang et al. [70,71]. Liang described albedo α using Landsat-5 TM imagery through the following equation:
$$\alpha = 0.356\alpha_1 + 0.130\alpha_3 + 0.373\alpha_4 + 0.085\alpha_5 + 0.072\alpha_7 - 0.0018,\qquad(4)$$
where the subscript on each $\alpha$ represents a band number in a Landsat-5 TM image. For Landsat-8 imagery, bands $\alpha_2, \alpha_4, \alpha_5, \alpha_6, \alpha_7$ were used in place of $\alpha_1, \alpha_3, \alpha_4, \alpha_5, \alpha_7$ for Landsat-5.
Surface temperature was calculated for individual pixels from Landsat-8 imagery using the standard methodology from the Landsat-8 (L8) Data Users Handbook [55].
The textural features were derived from Sentinel-2 imagery, due to its relatively higher spatial resolution in comparison with that of Landsat 8. The three texture features (mean, variance, and entropy) were calculated within a window size of 4 × 4 pixels for the four Sentinel-2 imagery bands, using the standard software suites in ENVI 5.6 [65], and they are defined in Equations (5)–(7). This window size was determined empirically.
$$\mathrm{Mean} = M = \sum_{i=0}^{N_g-1} i\,P(i),\qquad(5)$$
$$\mathrm{Variance} = \sum_{i=0}^{N_g-1} (i - M)^2\,P(i),\qquad(6)$$
$$\mathrm{Entropy} = -\sum_{i=0}^{N_g-1} P(i)\,\ln P(i).\qquad(7)$$
In these equations, $N_g$ is the number of distinct gray levels in the quantized image, $P(i)$ is the probability of occurrence of gray level $i$, and $M$ is the mean as defined in Equation (5) [72]. In this study, $N_g$ was determined automatically by ENVI from the available quantization range of the imagery.
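A sketch of these first-order texture measures for a single quantized window follows; ENVI applies them over a sliding 4 × 4 window, whereas this illustration computes one window only, with the gray-level count as a free parameter.

```python
# A sketch of the texture measures in Equations (5)-(7) for one quantized
# window; the gray-level count n_gray is a placeholder.
import numpy as np

def texture_measures(window, n_gray=64):
    """window: 2D array of gray levels quantized to 0..n_gray-1."""
    counts = np.bincount(window.ravel(), minlength=n_gray)
    p = counts / counts.sum()                  # P(i), occurrence probability
    i = np.arange(n_gray)
    mean = np.sum(i * p)                       # Equation (5)
    variance = np.sum((i - mean) ** 2 * p)     # Equation (6)
    nz = p > 0                                 # avoid log(0)
    entropy = -np.sum(p[nz] * np.log(p[nz]))   # Equation (7)
    return mean, variance, entropy
```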
The backscatter coefficients in VV and VH, denoted as $\sigma_{VV}$ and $\sigma_{VH}$, respectively, were obtained from the calibrated Level 1 Single Look Complex (SLC) product of Sentinel-1 [57]. In order to reduce noise, the enhanced Frost speckle filter from PCI Geomatica with a 5 × 5 pixel window was used to filter all Sentinel-1 imagery. The window size was chosen on the basis of empirical analysis. After processing, the Sentinel-1 imagery was georeferenced to the Sentinel-2 imagery.
From the Sentinel-1 imagery, we also produced an adaptation of the quad-polarization SAR vegetation index (RVI) proposed by Periasamy [73], i.e., the dual-polarization SAR vegetation index (DPSVI), defined as
$$\mathrm{DPSVI} = \frac{\sigma_{VV} + \sigma_{VH}}{\sigma_{VV}}.\qquad(8)$$
This index has been found to be a significant feature in separating different types of crops and in separating land covers with high vegetation water content from those better characterized by dry biomass.
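The sketch below illustrates Equation (8); it assumes the backscatter coefficients arrive in dB and converts them to linear power first, a step that depends on the calibration output actually used.

```python
# A sketch of Equation (8) under the assumption of dB-scaled inputs.
import numpy as np

def dpsvi(sigma_vv_db, sigma_vh_db):
    vv = 10.0 ** (np.asarray(sigma_vv_db) / 10.0)  # dB -> linear power
    vh = 10.0 ** (np.asarray(sigma_vh_db) / 10.0)
    return (vv + vh) / vv
```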
Additionally, the DEM, DEM-derived slope, and valley bottom flatness (VBF) were used. Slope was calculated from the DEM using the ENVI 5.6 topographic modeling function with a 3 × 3 window. The DEM and DEM-derived slope were selected to determine the role geographic features play in distinguishing wetland classes. For instance, it is known that some species of fens prefer to grow on slopes. VBF was calculated using the open-source GIS software suite System for Automatic Geoscientific Analysis (SAGA) from the processed DEM data described previously. VBF measures the degree of valley bottom flatness at multiple scales [74]. Large flat valleys are typical of wetland landscapes, once open water has been masked from the data. Experiments varying the slope threshold found that a threshold of 17 produced the most significant VBF feature. VBF has been found to be a very significant feature in the classification of wetlands from the ABMI dataset, as reported by the Alberta Biodiversity Monitoring Institute [54].
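As a simple illustration of the slope derivation, the sketch below computes slope from a DEM grid; numpy.gradient over the 30 m cell size is a stand-in here for ENVI's 3 × 3 topographic modeling, not a reproduction of it.

```python
# An illustrative slope computation from a DEM grid (cell size assumed 30 m).
import numpy as np

def slope_degrees(dem, cell_size=30.0):
    dz_dy, dz_dx = np.gradient(dem, cell_size)  # finite-difference gradients
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
```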
As a final note regarding the imagery and features used in this study, in order to preserve the information from the higher-resolution Sentinel-2 imagery, all images were resampled to 10 m by 10 m resolution when layers were stacked together.

3.2. Feature Selection

As mentioned earlier, three classifiers were designed in this study, and two feature selection methods were employed. For Classifier #1 (see details in the next section), where all cover types were identified, the built-in feature selection mechanism of the RF classification was used. This was to fully utilize the abovementioned extracted features and maximize their discriminant power in the classification. For Classifiers #2 and #3, where broad cover types (compounds of cover types) were considered, the feature selection was conducted on the basis of prior knowledge and experimentation. This was done to maintain the independence of the features used by the three classifiers and to avoid bias in the fusion process. Furthermore, it was also noted in a previous study [27] that a subset of an analyzed and ranked set of features could be outperformed, in a classification setting, by a set of features selected through a holistic approach. This is further expanded upon in Section 5.
The RF importance value is determined through an iterative exploration of the dataset [75]. It is computed by summing the changes in the percentage increase in mean squared error (MSE) due to splits on every predictor and dividing the sum by the number of branch nodes for that tree, averaged over all trees. These calculations are performed on all input features, with larger values implying that a feature is more significant. Additionally, it was observed in our previous study [27] that there was a plateau in classification accuracy once a specific number of image features was reached. With the increase in the number of features in a given classification, it was likely that the redundancy among those features increased, implying that there is a ceiling to the classification accuracy for a given dataset. Furthermore, noise and conflict among a large number of features might negatively affect the classification accuracy. Keeping the aforementioned in mind, when selecting sets of features, we were cognizant of identifying the appropriate number of features in order to avoid redundancy and noise. This is further expanded upon in Section 5.
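A data-driven ranking of this kind can be sketched as follows; scikit-learn's impurity-based importances are a stand-in for the MSE-based measure described above, and the data and cut-off are placeholders.

```python
# A sketch of feature ranking and subset selection for Classifier #1.
# Impurity-based importances substitute for the MSE-change measure; the
# synthetic data and the cut-off of 30 features are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 184))   # 184 candidate features
y = rng.integers(0, 5, size=1000)  # five cover-type labels

rf = RandomForestClassifier(n_estimators=150, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]  # most significant first
top_k = ranking[:30]               # keep a plateau-sized subset
X_selected = X[:, top_k]           # inputs passed on to Classifier #1
```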

3.3. The Ensemble Classification Method Based on the D–S Theory

Our previous investigation [28] showed that, in the context of wetland classification within a hierarchical framework, certain features can separate and classify a group of wetland types more effectively and reliably than they can distinguish individual types. One disadvantage of a hierarchical framework lies in the fact that misclassification at the higher levels of the hierarchy is propagated to the subsequent levels of classification. To address this issue, three classifiers with different propositions were designed and carried out first, and their results were then combined according to the D–S theory. In this way, the uncertainty associated with each classifier was considered.
In Classifier #1, individual wetland cover types were considered. The classification propositions were fen, bog, marsh, swamp, and upland. This classifier type is commonly used; thus, it was taken as the baseline method for comparison. For Classifier #2, two broad cover types were classified: wetland vs. dry land covers. This classifier utilized features which excel at identifying moisture and flat structure in pixels, such as water indices, SAR backscatter coefficients, and the DEM and its derivatives. For Classifier #3, the focus was on separating more structured land covers (swamp and upland) from less structured land covers (fen, bog, and marsh); the use of SAR features and textural features was leveraged, given their performance advantages in those areas.
As mentioned in the previous section, for Classifier #1, a suitable set of features were selected using the RF feature selection method. For Classifiers #2 and #3, feature selection was conducted on the basis of prior knowledge in the separation of the two broad classes.
The RF classifier is an ensemble, supervised, machine learning algorithm. It operates by constructing a multitude of decision trees, with the ultimate class of a given input determined by the majority vote of those trees [75,76,77]. With RF, diversification of the decision trees is accomplished by developing those trees from various subsets created through bagging (bootstrap aggregating) of the training data [76]. RF lends itself well to parallelization and computational streamlining for investigating the nuances of large datasets. This has led RF to become one of the most successful and widely implemented machine learning algorithms to date [75,76]. RF generally requires two main input parameters: the number of trees to grow and the depth or complexity of those trees (p-value). More trees generally result in higher classification accuracies but at greater computational cost; however, at some point, increasing the number of trees no longer increases classification accuracy. Similarly, choosing a tree depth that is too shallow tends to produce trees that underfit, whereas trees that are too deep will overfit the data. A total of 150 trees were used, as determined through experimentation. A p-value of 0.05 was determined using the curvature test, which is utilized with the RF classifier to determine when to terminate a split in a decision tree. The aforementioned techniques used to determine the RF input parameters are considered a standard approach [75,76,77,78].
For the results generated from RF classification, not only was the class assignment for each pixel generated, but also the posterior probability, which was treated as the mass function within the framework of D–S theory.
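A minimal sketch of one such classifier is shown below; the 150 trees follow the text, while the placeholder data and scikit-learn's default split criterion (rather than the curvature test) are assumptions.

```python
# A minimal sketch of one of the three RF classifiers; vote fractions from
# predict_proba serve as the per-pixel posterior probabilities (mass functions).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(640, 30))
y_train = rng.integers(0, 5, size=640)  # five class propositions (Classifier #1)
X_val = rng.normal(size=(360, 30))

clf = RandomForestClassifier(n_estimators=150, random_state=0).fit(X_train, y_train)
posterior = clf.predict_proba(X_val)    # per-pixel vote fractions, one column per class
```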
The D–S theory is a general framework for reasoning with uncertainty. It allows the user to combine evidence from different sources and arrive at a degree of belief (a mass function) that takes into account all of the available independent sources of evidence. Given that the RF classifier works on a majority voting principle, one product it provides is a confidence value for each of the possible outcomes based on the percentage of votes. We utilized this confidence value as a measure of belief in that outcome within the D–S framework. When executing computations with the D–S rule, each of our classifiers was treated as an independent piece of evidence. The D–S rule states that
$$m(A) = \frac{\displaystyle\sum_{B_1 \cap \cdots \cap B_n = A} m_1(B_1) \cdots m_n(B_n)}{1 - K},\qquad K = \sum_{B_1 \cap \cdots \cap B_n = \varnothing}\ \prod_{i=1}^{n} m_i(B_i),\qquad(9)$$
where $m(A)$ is the mass function of a proposition $A$ after considering $n$ pieces of evidence (in our case, the different classifiers), $m_i(B_i)$ is the mass assigned to proposition $B_i$ by the $i$-th piece of evidence, and $K$ is known as the total conflict factor [79]. As shown in Figure 3, the three classifiers mentioned earlier were first computed using the RF classification method; their results were then combined using the Dempster rule of combination under the D–S framework. The final classification was produced by assigning a given pixel to the class with the maximum mass function. As part of the analysis of the final classification result, comparisons were made to examine changes in land-cover assignment and to see how the number of high-confidence misclassified pixels changed from the standard classifier (Classifier #1).
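To make the combination step concrete, the sketch below applies Dempster's rule to one pixel's mass functions, with focal elements grouped as in Classifiers #1–#3; the numeric masses are invented for illustration.

```python
# An illustrative implementation of Equation (9) for a single pixel. Focal
# elements are sets of cover types grouped as in Classifiers #1-#3.
from itertools import product

def combine(m1, m2):
    """Dempster's rule for two mass functions over frozenset focal elements."""
    fused, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # accumulates K, the total conflict factor
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

F, B, M, S, U = "fen", "bog", "marsh", "swamp", "upland"
m1 = {frozenset([F]): 0.5, frozenset([B]): 0.2, frozenset([M]): 0.1,
      frozenset([S]): 0.1, frozenset([U]): 0.1}            # Classifier #1: singletons
m2 = {frozenset([F, B, M, S]): 0.8, frozenset([U]): 0.2}   # Classifier #2: wetland vs. dry
m3 = {frozenset([F, B, M]): 0.7, frozenset([S, U]): 0.3}   # Classifier #3: structure

fused = combine(combine(m1, m2), m3)
assignment = max(fused, key=fused.get)  # pixel takes the class with maximal mass
```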

4. Results

Table 4 summarizes the features selected for Classifier #1 resulting from the feature selection method described in Section 3.2, as well as those for Classifiers #2 and #3. On the basis of these features, the classification accuracies were 87.5%, 88.3%, and 89.5% for Classifiers #1, #2, and #3, respectively.
When examining Table 4, we can note that, for Classifier #1, there was a broad mix of features from different sources and image types, whereas, for Classifiers #2 and #3 (broad class separations), a more limited and specific set of features was used. It can be further noticed that the features employed in Classifiers #2 and #3 were mostly excluded from the feature sets automatically selected for Classifier #1. This demonstrates that important features that could be used to distinguish compound cover types would not be employed at all using the traditional feature-level fusion method. Additionally, having different groups of features utilized in these classifiers indicated independence in their classification results, which is important within the framework of the D–S theory.
Classification maps generated by the proposed ensemble method for two selected test areas dominated by upland and wetland are shown in Figure 4 and Figure 5, respectively, together with the true-color composite of Sentinel-2 imagery and ground-truth maps. The pixels misclassified by the traditional method and corrected by the proposed ensemble method are highlighted in Figure 4D and Figure 5D. From Figure 4A,B and Figure 5A,B, it can be observed that the classification results generated using the ensemble method were consistent with the ground-truth maps and visual observations. It can be further noted that the misclassification using the traditional method (Classifier #1) was clustered in the upland area (Figure 4C) and fen area (Figure 5C), both locations with high spatial variation, and that the majority of the misclassified pixels were corrected by the addition of Classifiers #2 and #3 (the ensemble method).
In addition to the visual assessment of the classification results, quantitative analysis was carried out and the confusion matrices for the traditional method and the Ensemble Classifier are shown in Table 5 and Table 6, respectively. In this section, we will focus on observations from these results, while detailed discussion will be provided in the discussion section.
In general, the ensemble classifier incorporating the three classifiers together based on the D–S theory resulted in an increase in the classification accuracy from 87.5% (the traditional method, Classifier #1) to 93.5%. Upon closer examination of the results using the traditional method (Table 5), it can be noted that the producer accuracy was high and fairly uniform across all land covers. However, the user accuracy was lower; in particular, it was lowest for swamp, at 14%. When examining the results using the proposed method (Table 6), it can be noted that the user accuracy for swamp was increased by ~0.18, from 0.14 to 0.32. In addition, it can be observed that the upland land cover was misclassified the most, both in terms of the number of raw pixels and as a percentage of pixels misclassified. This might be due to the broad nature of the upland class.
Checking the misclassified pixels, it can be noted that, for some, the support (mass function) for the "wrong" cover type was very strong (over 0.85), indicating a high confidence for the class assignment. However, it was observed that there was a reduction in the number of high-confidence misclassified pixels from 26,222 to 23,588, a reduction of ~10%, using the proposed method (Table 7). These results show that the addition of two classifiers with compound classes through the ensemble classifier provided value in increasing the accuracy and decreasing the number of incorrectly classified pixels with high confidence.
To further examine the improvement in individual land-cover classification provided by the ensemble classifier, tables to show changes in the pixel assignments for each cover type were generated (Table 8 and Table 9).
It can be noted from these tables that the majority of misclassified pixels, across all classes, which were reclassified by the ensemble classifier, were moved to the upland class. Of additional note, a large number of pixels originally assigned to swamp were moved to other classes, including the upland class. This movement in the assignment of pixels would also explain the large increase in user accuracy for swamp by the proposed ensemble classifier. Among these misclassified pixels with changed assignments, some were classified correctly by the proposed method, while some were still misclassified, with the correct class having the second strongest support from the evidence. However, for some in the latter group, the classification uncertainty was high. That is, for these pixels, the largest mass function was not significantly different from the second largest (a difference of 0.05–0.10), leading to high uncertainty in the class assignment. These pixels are also summarized in Table 9.

5. Discussion

5.1. On Feature Selection and Selected Features for Classification

In this study, the selection of features for Classifier #1 followed a standard data-driven machine learning methodology, which is commonly used. The features for Classifiers #2 and #3 were manually selected, following a holistic approach similar to that presented in a previous paper of ours [27]. From a holistic standpoint, we selected families of features which, by design, were best suited for the class separation sought by each classifier, while ensuring the independence of these classifiers required by the D–S theory. The design of Classifiers #2 and #3 in terms of class propositions was intended to fully utilize the available datasets. It was observed that, for Classifier #1, most features selected were from optical imagery. For instance, backscatter coefficients and related indices from SAR imagery and water indices from optical imagery were known to be discriminative and were, thus, identified for Classifier #2 (separating wetlands from upland covers), while backscatter coefficients from SAR imagery and textural features from optical imagery were identified for Classifier #3 (separating structured from less structured land covers). Through feature analysis and experimentation, we were able to determine a set of features which maximized the classification accuracy for those classifiers. As an interesting note, in the previous study [27], we reported many instances where a set of holistically determined features produced more accurate classification results than sets of features selected through quantitative analysis. In this study, we observed the same phenomenon when determining feature inputs for Classifiers #2 and #3. These results may call for an integrated knowledge-based and data-driven feature selection method. They also confirmed our belief (briefly mentioned in Section 1) that simple feature-level fusion for classification using multi-source remotely sensed data might underutilize some features.
From an imagery standpoint, it was noted that there was no clear correlation between the collection dates of the imagery and their significance. Intuitively, imagery closer to 2016 (the collection date of the aerial imagery used to create the ABMI plots) should be of greater significance, but this was generally not the case. Landsat imagery from 2015 and 2016 was more significant than the collection from 2020, while, for both Sentinel-1 and Sentinel-2, there was no clear correlation. This might indicate that the features of these cover types exhibited in the Sentinel-1 and Sentinel-2 imagery were not highly dynamic. It was also suspected that factors such as atmospheric attenuation and inter-year variations in water levels played a part in driving these differences.
By exploring the images, it was also noted that inputs drawn strictly from Landsat-8 images produced higher classification accuracies than experiments where inputs were drawn strictly from Sentinel-2 images. This was counter-intuitive: it would be expected that inputs with higher resolution would result in higher classification accuracies. However, upon further analysis, it was noted that the land covers considered in this study were broader than those of other land cover maps with narrower class definitions [80]. In previous studies, by virtue of data availability, land covers such as fens and bogs were parsed further into treed and non-treed versions. For those datasets, the higher-resolution imagery might have provided the expected accuracy increases; however, with this ABMI dataset with broader classes, it is suspected that the high spatial resolution of the Sentinel-2 images might have introduced more variability within cover types, which made the classification more difficult. Lastly, during these experiments, it was noted that the classification accuracy when using only individual datasets was some 5–8% lower than the classification accuracies from multi-source remotely sensed data, which is consistent with the literature.

5.2. On Misclassified Pixels

The core of this study was the development of the ensemble classifier in an effort to increase classification accuracies while also reducing the number of incorrectly classified pixels with high confidence. The prevalence of misclassified pixels of high confidence (>85% certainty in assignment) and of misclassified pixels which had the correct land cover class as the second highest ranked land cover was noted in this study with Classifier #1 (the traditional method). As shown by the results, these issues were overcome to a certain extent by adding two classifiers in the proposed ensemble classifier. Examining the misclassified pixels using the proposed method, it was noticed that the three classifiers were not always in agreement with one another, as shown in Table 10. It was further noticed that the misclassified pixels with high confidence were located at the transition zones between cover types, as shown in Figure 6. This intuitively and physically makes sense, since the transition from one wetland cover type to another is fuzzy in nature [81].
In addition to the transition zones where these classifiers tended to conflict with each other, pixels with disagreement among classifiers also fell within areas of high variability according to a visual examination, as shown in Figure 7. This would drive variations in features, which in turn could contribute to the variability in the outcomes of the different classification propositions. Additional information may be needed to further resolve this conflict.
The misclassification involving upland may also be due to the fact that the upland class was very broad and encompassed a great deal of different land-cover types, which led to large variations in the selected features for it. As an example, Classifier #3 was used to classify structured and nonstructured cover types. Upland was included in the structured class, considering the domination of trees and shrubs in this class. However, there were also nonstructured cover types in this class. To mitigate this, we attempted to split the upland class into two subclasses during the decision-level fusion process according to the D–S theory. However, there was no real improvement in the results (not shown). The best strategy would be to separate upland into different categories, which was not attempted due to the lack of training samples for detailed upland cover types.

5.3. On the Proposed Ensemble Classifier

The overall classification accuracy of the proposed method was 0.93. When compared to other studies, it was noted that this accuracy was greater than or comparable to those obtained for land-cover classification using multi-source remotely sensed data [34,82,83,84]. In addition, the proposed method was less complex than some of these studies. It should be stressed that we could not find classification studies of our study area which used a decision- or feature-level fusion framework for direct comparison in the literature; furthermore, all of the comparable studies we found used different datasets or combinations thereof, for both the land-cover maps and the remotely sensed imagery used. However, it can be noted that the ABMI conducted its own classification studies of its own dataset using Landsat-5 and Landsat-8 imagery with an RF classifier; the classification accuracies were around 0.8–0.85 [54].
A direct comparison was carried out in this study with the traditional classification method based on feature-level fusion using multi-source remotely sensed data (Classifier #1). Results were presented and discussed in the previous sections. The improvement of the proposed method over the traditional method relied on its effective utilization of the available datasets and features. As previously mentioned in Section 1, features that can be used to separate certain cover types might be excluded when considering all cover types together, such as the features derived from SAR imagery and the DEM. The inclusion of these otherwise excluded features in Classifiers #2 and #3 led to an increase in the user accuracy of the swamp class by ~18%. It was also noted that, in Classifier #1, the impact of the SAR imagery was lower than when it was utilized in a classifier focused on broader class separations. When combined in the ensemble classifier, the value of this imagery was better utilized.
In total, the proposed ensemble classifier provided a framework to effectively utilize the best available data in order to support wetland classification. In this study, while an RF classifier was employed, other classifiers could be utilized. We experimented with support vector machine (SVM) and naïve Bayes classifiers, where we found that the overall accuracies were generally lower (by ~5–8%), but the computation times were greatly reduced, compared to similar RF tests, in some cases by over 80%. The additional classification accuracy gained by using RF was obtained at a considerable computational cost.
The proposed ensemble classifier combined the strengths of various types of remotely sensed data in the differentiation of wetland cover types. This same principle could be applied to the classification of other cover types. The three classifiers were designed in parallel and independently, even though the same classifier (RF) was used. The idea of designing two classifiers to classify broader cover types was inspired by hierarchical classification methods, including our own work. As mentioned earlier, with hierarchical classification methods, the errors/uncertainties in the higher hierarchies are often not considered in the lower ones; thus, error propagation is the biggest problem. With the method proposed in this study, the uncertainty associated with these classifiers was explicitly considered under the framework of the D–S theory, thus solving the error propagation problem of hierarchical classification. In the same vein, this study expands the literature on the utilization of the D–S theory. Even though the D–S theory is powerful conceptually, its application is not trivial, especially in the determination of the mass functions, including the selection of propositions with non-zero mass functions. As mentioned in Section 1, in most studies based on the D–S theory, identical sets of classes were employed for different classifiers [30,34,52,53]. Different propositions were considered for the three classifiers in this study, selected according to prior knowledge of the wetland cover types and remotely sensed data. Not only did this result in higher classification accuracies, but it also provides a framework for future work where we can more easily explore subclasses, class overlaps, and unknown classes.
The use of prior knowledge in the design of Classifiers #2 and #3 was also one of the disadvantages of the proposed approach. In this study, the categories and features were selected manually. This may not be practical for studies dealing with a large number of cover types. Ideally, a knowledge-based automatic approach would be preferable. This will be pursued in future work.
Lastly, the ensemble classifier in its current form has not been successful in dealing with cover types of great diversity, such as the upland class. Even though it would be ideal to separate such cover types into several different classes during the training process, this might not be realistic due to the difficulty in selecting training samples. Our initial experiments, in which we tried to separate two subclasses during the decision-level fusion process, did not show accuracy increases or effective or consistent class separation. Future versions of this classifier will have to address this.

6. Conclusions

An ensemble classification methodology combining three classifiers based on the D–S theory was developed and tested on a study area in Northern Alberta. Classifier #1 was a traditional feature-level fusion method for classification using multi-source remotely sensed data, where all land-cover classes were classified together. The other two classifiers were focused on compound cover types. With Classifier #2, wetland cover types (fen, bog, marsh, and swamp) and dry land covers (upland) were considered, whereas, with Classifier #3, the focus was on the separation of less structured land covers (fen, bog, and marsh) from more structured ones (swamp and upland). Features used for classification were determined using the analysis of RF feature significance for Classifier #1 and through a more holistic approach for Classifiers #2 and #3. Use of a holistic approach for feature selection was not traditional; however, on the basis of prior knowledge and experimentation, we were able to select sets of features for Classifiers #2 and #3 which produced high accuracy when compared to a strict feature significance analysis approach. This also mimicked the results observed in past studies [27]. Once each classifier was computed, the results were combined using Dempster's rule of combination. Results showed that the proposed ensemble classifier increased the classification accuracy from 0.88 to 0.93, compared with the traditional classification method (Classifier #1). Additionally, it was noted that there was a reduction of ~10% in the number of misclassified pixels with high confidence, which provides additional assurance in the quality of the classification results, something which is generally not explored in this style of research.
The proposed approach provided a framework to intelligently utilize available remotely sensed data for wetland classification, which could be employed for the classification of other cover types. Incorporating data-driven machine learning and knowledge-based holistic methods, different propositions were designed, and thus different features were selected, for these three classifiers to maximize their discriminant powers in the classification of the wetland cover types (individually or in combination). As detailed in the discussion, this made the framework unique compared with most studies based on the D–S theory reported in the literature. In addition, compared with hierarchical classification methods, the proposed ensemble classifier retained their advantage of selecting different features to classify different classes, while addressing their weakness by explicitly taking into account the uncertainties of the different classifiers.
Even though the holistic knowledge-based method was successful in the design of Classifiers #2 and #3, prior knowledge could be utilized in a more explicit and automatic fashion, enabling the proposed method to be employed as a general framework in wider applications. This will be pursued moving forward. Within the current approach, advanced features derived from the available datasets will be further explored, and more classifiers will be added. Additional testing will also be carried out by expanding the study area to the remaining parts of the ABMI wetland inventory. Other data sources, such as RADARSAT-2 and LiDAR images, will be considered.

Author Contributions

Conceptualization, all; methodology, all; implementation, A.J.; investigation, all; data curation, all; writing—original draft preparation, A.J.; writing—review and editing, all; supervision, B.H.; funding acquisition, B.H. All authors have read and agreed to the published version of the manuscript.

Funding

Funding was provided by NSERC under grant # RGPIN-2021-03624. European Space Agency Sentinel-1 imagery and the PolSARPro software were used. Natural Resources Canada and the Government of Canada provided the Canadian Digital Elevation Model.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author (A.J.) upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Blaustein, A.R.; Wake, D.B.; Sousa, W.P. Amphibian declines—Judging stability, persistence, and susceptibility of populations to local and global extinctions. Conserv. Biol. 1994, 8, 60–71.
2. Dahl, T.E. Status and Trends of Wetlands in the Conterminous United States 1986 to 1997; US Department of the Interior, Fish and Wildlife Service: Washington, DC, USA, 2000.
3. Mahdavi, S.; Salehi, B.; Granger, J.; Amani, M.; Brisco, B.; Huang, W. Remote sensing for wetland classification: A comprehensive review. GISci. Remote Sens. 2018, 55, 623–658.
4. U.S. Fish and Wildlife Service. National Wetlands Inventory: A Strategy for the 21st Century; US Department of the Interior, Fish and Wildlife Service: Washington, DC, USA, 2002.
5. Finlayson, C.M.; Davidson, N.C. Global Review of Wetland Resources and Priorities for Wetland Inventory: Summary Report; Supervising Scientist: Canberra, Australia, 1999.
6. Amani, M.; Mahdavi, S.; Afshar, M.; Brisco, B.; Huang, W.; Mirzadeh, S.M.J.; White, L.; Banks, S.; Montgomery, J.; Hopkinson, C. Canadian Wetland Inventory using Google Earth Engine: The First Map and Preliminary Results. Remote Sens. 2019, 11, 842.
7. Bourgeau-Chavez, L.; Endres, S.; Battaglia, M.; Miller, M.E.; Banda, E.; Laubach, Z.; Higman, P.; Chow-Fraser, P.; Marcaccio, J. Development of a Bi-National Great Lakes Coastal Wetland and Land Use Map Using Three-Season PALSAR and Landsat Imagery. Remote Sens. 2015, 7, 8655–8682.
8. Bwangoy, J.-R.B.; Hansen, M.C.; Roy, D.P.; De Grandi, G.; Justice, C.O. Wetland mapping in the Congo Basin using optical and radar remotely sensed data and derived topographical indices. Remote Sens. Environ. 2010, 114, 73–86.
9. Ceron, C.N.; Melesse, A.M.; Price, R.; Dessu, S.B.; Kandel, H.P. Operational actual wetland evapotranspiration estimation for South Florida using MODIS imagery. Remote Sens. 2015, 7, 3613–3632.
10. Davranche, A.; Lefebvre, G.; Poulin, B. Wetland monitoring using classification trees and SPOT-5 seasonal time series. Remote Sens. Environ. 2010, 114, 552–562.
11. Eisavi, V.; Homayouni, S.; Yazdi, A.M.; Alimohammadi, A. Land cover mapping based on random forest classification of multitemporal spectral and thermal images. Environ. Monit. Assess. 2015, 187, 14.
12. Gallant, J.; Dowling, T.I. A multiresolution index of valley bottom flatness for mapping depositional areas. Water Resour. Res. 2003, 39, 1347.
13. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Motagh, M. Random forest wetland classification using ALOS-2 L-band, RADARSAT-2 C-band, and TerraSAR-X imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 13–31.
14. Masoumi, F.; Eslamkish, T.; Abkar, A.A.; Honarmand, M.; Harris, J.R. Integration of spectral, thermal, and textural features of ASTER data using Random Forests classification for lithological mapping. J. Afr. Earth Sci. 2017, 129, 445–457.
15. Miyamoto, M.; Kushida, K.; Yoshino, K.; Nagano, T.; Sato, Y. Evaluation of multispatial scale measurements for monitoring wetland vegetation, Kushiro wetland, Japan: Application of SPOT images, CASI data, airborne CNIR video images and balloon aerial photography. In Proceedings of the IGARSS 2003: IEEE International Geoscience and Remote Sensing Symposium, Learning from Earth's Shapes and Sizes, Toulouse, France, 21–25 July 2003; Volume I–VII, pp. 3275–3277.
16. Mwita, E.; Menz, G.; Misana, S.; Becker, M.; Kisanga, D.; Boehme, B. Mapping small wetlands of Kenya and Tanzania using remote sensing techniques. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 173–183.
17. Ozesmi, S.L.; Bauer, M.E. Satellite remote sensing of wetlands. Wetl. Ecol. Manag. 2002, 10, 381–402.
18. Ramsey, E.W.; Laine, S.C. Comparison of Landsat Thematic Mapper and high resolution photography to identify change in complex coastal wetlands. J. Coast. Res. 1997, 13, 281–292.
19. Rundquist, D.C.; Narumalani, S.; Narayanan, R.M. A review of wetlands remote sensing and defining new considerations. Remote Sens. Rev. 2001, 20, 207–226.
20. Tian, S.; Zhang, X.; Tian, J.; Sun, Q. Random Forest Classification of Wetland Landcovers from Multi-Sensor Data in the Arid Region of Xinjiang, China. Remote Sens. 2016, 8, 954.
21. Wang, Y.; Knight, J.; Rampi, L.P.; Cao, R. Mapping wetland change of prairie pothole region in Bigstone county from 1938 year to 2011 year. In Proceedings of the 2014 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Quebec City, QC, Canada, 13–18 July 2014.
22. Frohn, R.C.; Autrey, B.C.; Lane, C.R.; Reif, M. Segmentation and object-oriented classification of wetlands in a karst Florida landscape using multi-season Landsat-7 ETM+ imagery. Int. J. Remote Sens. 2011, 32, 1471–1489.
23. Kushwaha, S.P.S.; Dwivedi, R.S.; Rao, B.R.M. Evaluation of various digital image processing techniques for detection of coastal wetlands using ERS-1 SAR data. Int. J. Remote Sens. 2000, 21, 565–579.
24. Millard, K.; Richardson, M. Wetland mapping with LiDAR derivatives, SAR polarimetric decompositions, and LiDAR-SAR fusion using a random forest classifier. Can. J. Remote Sens. 2013, 39, 290–307.
25. Wright, C.; Gallant, A. Improved wetland remote sensing in Yellowstone National Park using classification trees to combine TM imagery and ancillary environmental data. Remote Sens. Environ. 2007, 107, 582–605.
26. Hu, B.; Li, Q.; Hall, G.B. A decision-level fusion approach to tree species classification from multi-source remotely sensed data. ISPRS Open J. Photogramm. Remote Sens. 2021, 1, 100002.
27. Judah, A.; Hu, B. The Integration of Multi-source Remotely-Sensed Data in Support of the Classification of Wetlands. Remote Sens. 2019, 11, 1537.
28. Judah, A.; Hu, B. The Integration of Multi-Source Remotely Sensed Data with Hierarchically Based Classification Approaches in Support of the Classification of Wetlands. Can. J. Remote Sens. 2021, 48, 158–181.
29. Bo, C.; Lu, H.; Wang, D. Hyperspectral Image Classification via JCR and SVM Models With Decision Fusion. IEEE Geosci. Remote Sens. Lett. 2016, 13, 177–181.
30. Bui, D.H.; Mucsi, L. Comparison of Layer-stacking and Dempster-Shafer Theory-based Methods Using Sentinel-1 and Sentinel-2 Data Fusion in Urban Land Cover Mapping. Geo-Spat. Inf. Sci. 2022, 25, 425–438.
31. Chen, W.-S.; Dai, X.; Pan, B.; Huang, T. A novel discriminant criterion based on feature fusion strategy for face recognition. Neurocomputing 2015, 159, 67–77.
32. Hu, Y.; Zhang, J.; Ma, Y.; An, J.; Ren, G.; Li, X. Hyperspectral Coastal Wetland Classification Based on a Multiobject Convolutional Neural Network Model and Decision Fusion. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1110–1114.
33. Imani, M.; Ghassemian, H. An overview on spectral and spatial information fusion for hyperspectral image classification: Current trends and challenges. Inf. Fusion 2020, 59, 59–83.
34. Jia, S.; Zhan, Z.; Zhang, M.; Xu, M.; Huang, Q.; Zhou, J.; Jia, X. Multiple Feature-Based Superpixel-Level Decision Fusion for Hyperspectral and LiDAR Data Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 1437–1452.
35. Rasti, B.; Ghamisi, P.; Gloaguen, R. Hyperspectral and LiDAR Fusion Using Extinction Profiles and Total Variation Component Analysis. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3997–4007.
36. Zhong, Z.; Fan, B.; Ding, K.; Li, H.; Xiang, S.; Pan, C. Efficient Multiple Feature Fusion with Hashing for Hyperspectral Imagery Classification: A Comparative Study. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4461–4478.
37. Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion: A survey of the state of the art. Inf. Fusion 2017, 33, 100–112.
38. Liao, W.; Pizurica, A.; Bellens, R.; Gautama, S.; Philips, W. Generalized Graph-Based Fusion of Hyperspectral and LiDAR Data Using Morphological Features. IEEE Geosci. Remote Sens. Lett. 2015, 12, 552–556.
39. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817.
40. Yuan, Y.; Meng, X.; Sun, W.; Yang, G.; Wang, L.; Peng, J.; Wang, Y. Multi-Resolution Collaborative Fusion of SAR, Multispectral and Hyperspectral Images for Coastal Wetlands Mapping. Remote Sens. 2022, 14, 3492.
41. Sun, W.; Ren, K.; Meng, X.; Yang, G.; Xiao, C.; Peng, J.; Huang, J. MLR-DBPFN: A Multi-Scale Low Rank Deep Back Projection Fusion Network for Anti-Noise Hyperspectral and Multispectral Image Fusion. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5522914.
42. Wang, C.; Xu, M.; Jiang, Y.; Zhang, G.; Cui, H.; Li, L.; Li, D. Translution-SNet: A Semisupervised Hyperspectral Image Stripe Noise Removal Based on Transformer and CNN. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
43. Hosseiny, B.; Mahdianpari, M.; Brisco, B.; Mohammadimanesh, F.; Salehi, B. WetNet: A Spatial-Temporal Ensemble Deep Learning Model for Wetland Classification Using Sentinel-1 and Sentinel-2. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14.
44. Hu, Y.; Zhang, J.; Ma, Y.; Li, X.; Sun, Q.; An, J. Deep learning classification of coastal wetland hyperspectral image combined spectra and texture features: A case study of Huanghe (Yellow) River Estuary wetland. Acta Oceanol. Sin. 2019, 38, 142–150.
45. O'Neil, G.L.; Goodall, J.L.; Behl, M.; Saby, L. Deep learning Using Physically-Informed Input Data for Wetland Identification. Environ. Model. Softw. 2020, 126, 104665.
46. Jiao, L.; Sun, W.; Yang, G.; Ren, G.; Liu, Y. A Hierarchical Classification Framework of Satellite Multispectral/Hyperspectral Images for Mapping Coastal Wetlands. Remote Sens. 2019, 11, 2238.
47. Koma, Z.; Seijmonsbergen, A.C.; Kissling, W.D. Classifying wetland-related land cover types and habitats using fine-scale lidar metrics derived from country-wide Airborne Laser Scanning. Remote Sens. Ecol. Conserv. 2021, 7, 80–96.
48. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Homayouni, S.; Gill, E. The First Wetland Inventory Map of Newfoundland at a Spatial Resolution of 10 m Using Sentinel-1 and Sentinel-2 Data on the Google Earth Engine Cloud Computing Platform. Remote Sens. 2019, 11, 43.
49. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Motagh, M.; Brisco, B. An efficient feature optimization for wetland mapping by synergistic use of SAR intensity, interferometry, and polarimetry data. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 450–462.
50. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Motagh, M. A New Hierarchical Object-Based Classification Algorithm for Wetland Mapping in Newfoundland, Canada. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 9233–9236.
51. Muñoz, D.F.; Cissell, J.R.; Moftakhari, H. Adjusting Emergent Herbaceous Wetland Elevation with Object-Based Image Analysis, Random Forest and the 2016 NLCD. Remote Sens. 2019, 11, 2346.
52. Guo, B. Entropy-Mediated decision fusion for remotely sensed image classification. Remote Sens. 2019, 11, 352.
53. Feng, T.; Ma, H.; Cheng, X. Land-cover classification of high-resolution remote sensing image based on multi-classifier fusion and the improved Dempster–Shafer evidence theory. J. Appl. Remote Sens. 2021, 15, 014506.
54. Alberta Biodiversity Monitoring Institute. ABMI Wetland Inventory—Technical Documentation; ABMI Geospatial Center, University of Alberta: Edmonton, AB, Canada, March 2021.
55. Ihlen, V.; Zanter, K. Landsat 8 (L8) Data Users Handbook, Version 5.0; Department of the Interior, U.S. Geological Survey: Sioux Falls, SD, USA, 2019.
56. European Space Agency. Sentinel-2 Products Specification Document; European Space Agency: Paris, France, 2021.
57. European Space Agency. Sentinel-1 Observation Scenario—Planned Acquisitions. 2018. Available online: https://sentinel.esa.int/web/sentinel/missions/sentinel-1/observation-scenario (accessed on 5 February 2018).
58. Natural Resources Canada, Map Information Branch. Canadian Digital Elevation Model Product Specifications; Government of Canada, 2016; last modified 1 April 2013. Available online: http://ftp.geogratis.gc.ca/pub/nrcan_rncan/elevation/cdem_mnec/doc/CDEM_product_specs.pdf (accessed on 9 September 2020).
59. Liu, H.; Jiang, Q.; Ma, Y.; Yang, Q.; Shi, P.; Zhang, S.; Tan, Y.; Xi, J.; Zhang, Y.; Liu, B.; et al. Object-Based Multigrained Cascade Forest Method for Wetland Classification Using Sentinel-2 and Radarsat-2 Imagery. Water 2022, 14, 82.
60. Munizaga, J.; García, M.; Ureta, F.; Novoa, V.; Rojas, O.; Rojas, C. Mapping Coastal Wetlands Using Satellite Imagery and Machine Learning in a Highly Urbanized Landscape. Sustainability 2022, 14, 5700.
61. Wu, Z.; Zhang, J.; Deng, F.; Zhang, S.; Zhang, D.; Xun, L.; Ji, M.; Feng, Q. Superpixel-Based Regional-Scale Grassland Community Classification Using Genetic Programming with Sentinel-1 SAR and Sentinel-2 Multispectral Images. Remote Sens. 2021, 13, 4067.
62. Sellers, P.J. Canopy reflectance, photosynthesis and transpiration. Int. J. Remote Sens. 1985, 6, 1335–1372.
63. Richards, J.A.; Jia, X. Remote Sensing Digital Image Analysis, 3rd ed.; Springer: Berlin, Germany, 1999.
64. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sens. Environ. 2002, 83, 195–213.
65. Lin, S.; Li, J.; Liu, Q.; Li, L.; Zhao, J.; Yu, W. Evaluating the Effectiveness of Using Vegetation Indices Based on Red-Edge Reflectance from Sentinel-2 to Estimate Gross Primary Productivity. Remote Sens. 2019, 11, 1303.
66. Rocha, A.V.; Shaver, G.R. Advantages of a two band EVI calculated from solar and photosynthetically active radiation fluxes. Agric. For. Meteorol. 2009, 149, 1560–1563.
67. Badgley, G.; Field, C.B.; Berry, J.A. Canopy near-infrared reflectance and terrestrial photosynthesis. Sci. Adv. 2017, 3, e1602244.
68. Gao, B.-C. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266.
69. Hubanks, P.A.; King, M.D.; Platnick, S.A.; Pincus, R.A. MODIS Atmospheric L3 Gridded Product Algorithm Theoretical Basis Document; ATBD-MOD-30; Goddard Space Flight Center: Greenbelt, MD, USA, 2008.
70. Liang, S. Narrowband to broadband conversions of land surface albedo I: Algorithms. Remote Sens. Environ. 2001, 76, 213–238.
71. Liang, S.; Strahler, A.H.; Walthall, C. Retrieval of Land Surface Albedo from Satellite Observations: A Simulation Study. J. Appl. Meteorol. 1999, 38, 712–725.
72. Hall-Beyer, M. The GLCM Tutorial Home Page. Available online: http://www.fp.ucalgary.ca/mhallbey/tutorial.htm (accessed on 9 September 2020).
73. Periasamy, S. Significance of dual polarimetric synthetic aperture radar in biomass retrieval: An attempt on Sentinel-1. Remote Sens. Environ. 2018, 217, 537–549.
74. Gallant, A.L. The Challenges of Remote Monitoring of Wetlands. Remote Sens. 2015, 7, 10938–10950.
75. Biau, G.; Scornet, E. A random forest guided tour. Test 2016, 25, 197–227.
76. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
77. Kecman, V.; Huang, T.-M.; Vogt, M. Iterative Single Data Algorithm for Training Kernel Machines from Huge Data Sets: Theory and Performance. In Support Vector Machines: Theory and Applications; Lipo, W., Ed.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 255–274.
78. Loh, W.Y. Regression trees with unbiased variable selection and interaction detection. Stat. Sin. 2002, 12, 361–386.
79. Dempster, A.P. Upper and Lower Probabilities Induced by a Multivalued Mapping; Springer: Berlin/Heidelberg, Germany, 1967; pp. 57–72.
80. National Wetlands Working Group. The Canadian Wetland Classification System; University of Waterloo: Waterloo, ON, Canada, 1997.
81. Dronova, I. Object-Based Image Analysis in Wetland Research: A Review. Remote Sens. 2015, 7, 6380–6413.
82. Ghosh, A.; Sharma, R.; Joshi, P. Random forest classification of urban landscape using Landsat archive and ancillary data: Combining seasonal maps with decision level fusion. Appl. Geogr. 2014, 48, 31–41.
83. Liu, S.; Gao, M. Decision Fusion Using Similarity-weighted JCR and Mid-level Features based ELM for Hyperspectral Image Classification with Limited Training Samples. Int. J. Remote Sens. 2022, 43, 873–893.
84. Useya, J.; Chen, S. Comparative Performance Evaluation of Pixel-Level and Decision-Level Data Fusion of Landsat 8 OLI, Landsat 7 ETM+ and Sentinel-2 MSI for Crop Ensemble Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4441–4451.
Figure 1. The study area from a geographic perspective together with a Landsat-8 True-Color image (RGB Bands 4, 3, 2) and aerial imagery. Individual study areas are highlighted as red polygons, drawn from the ABMI wetland inventory dataset on the Landsat-8 image.
Figure 2. Example of a study area highlighting the individual evaluation and training sets for swamps (A) and its corresponding cover type map (ground truth) (B).
Figure 3. Workflow of the proposed ensemble classifier combining the results of three different classifiers based on D–S theory.
Figure 4. Classification result of a test area dominated by upland: (A) true-color composite of a Sentinel-2 image; (B) ground-truth-based classification map; (C) classification map using the proposed method, with misclassified pixels highlighted in red; (D) classification map using Classifier #1, where the misclassified pixels that were corrected by the proposed method are highlighted in green.
Figure 5. Classification result of a test area dominated by wetland: (A) true-color composite of a Sentinel-2 image; (B) ground-truth-based classification map; (C) classification map using Classifier #1, where misclassified pixels are highlighted in red; (D) classification map using Classifier #1, where the misclassified pixels that were corrected by the proposed method are highlighted in green.
Figure 6. A test area containing the misclassified pixels with high confidence: (A) true-color Sentinel-2 image; (B) classification map using the proposed method, with the misclassified pixels highlighted in red.
Figure 7. (A) True-color image of a test area from Sentinel-2 imagery; (B) classification map showing pixels where two classifiers disagreed (highlighted in red) or all classifiers disagreed (highlighted in green).
Table 1. Land-cover class assignment and the number of pixels contained in the training and validation sets.

Class | Number Assigned to Class | Number of Pixels in Training Set | Number of Pixels in Validation Set
Fen | 1 | 288,343 | 156,102
Bog | 2 | 36,637 | 14,479
Marsh | 3 | 25,309 | 23,416
Swamp | 4 | 109,490 | 91,510
Upland | 5 | 2,314,364 | 636,441
Table 2. Summary of satellite imagery collected for this study.

Image ID | Imagery | Season | Date | Level of Processing | Accessed From
#1 | Landsat-8 | Summer | 27 July 2015 | Level 1G | United States Geological Survey (USGS)
#2 | Landsat-8 | Fall | 15 September 2016 | Level 1G | USGS
#3 | Landsat-8 | Fall | 10 September 2020 | Level 1G | USGS
#4 | Sentinel-2 | Fall | 17 September 2017 | Level 2A | European Space Agency (ESA)—Sentinel
#5 | Sentinel-2 | Summer | 11 August 2017 | Level 2A | ESA—Sentinel
#6 | Sentinel-2 | Summer | 2 September 2018 | Level 2A | ESA—Sentinel
#7 | Sentinel-2 | Fall | 29 September 2020 | Level 2A | ESA—Sentinel
#8 | Sentinel-2 | Summer | 28 June 2021 | Level 2A | ESA—Sentinel
#9 | Sentinel-2 | Summer | 1 July 2021 | Level 2A | ESA—Sentinel
#10 | Sentinel-2 | Summer | 28 July 2021 | Level 2A | ESA—Sentinel
#11 | Sentinel-1 | Summer | 12 August 2018 | Level 1—SLC | ESA—Sentinel
#12 | Sentinel-1 | Summer | 27 July 2019 | Level 1—SLC | ESA—Sentinel
#13 | Sentinel-1 | Fall | 19 September 2020 | Level 1—SLC | ESA—Sentinel
#14 | Sentinel-1 | Summer | 9 August 2021 | Level 1—SLC | ESA—Sentinel
Table 3. Features used during this study and their associated variable index. Reflection is shortened to "Reflect." and Sentinel is shortened to "Senti." M, V, and E correspond to the mean, variance, and entropy texture, respectively. The number at the end of each feature name refers to the image ID in Table 2.

Index | Name | Index | Name | Index | Name | Index | Name
1 | B1 Reflect. #1 | 47 | B3 Senti. 2 #6 | 93 | B4-M-Senti. 2 #6 | 139 | B2-E-Senti. 2 #4
2 | B2 Reflect. #1 | 48 | B4 Senti. 2 #6 | 94 | B1-M-Senti. 2 #7 | 140 | B3-E-Senti. 2 #4
3 | B3 Reflect. #1 | 49 | B1 Senti. 2 #7 | 95 | B2-M-Senti. 2 #7 | 141 | B4-E-Senti. 2 #4
4 | B4 Reflect. #1 | 50 | B2 Senti. 2 #7 | 96 | B3-M-Senti. 2 #7 | 142 | B1-E-Senti. 2 #5
5 | B5 Reflect. #1 | 51 | B3 Senti. 2 #7 | 97 | B4-M-Senti. 2 #7 | 143 | B2-E-Senti. 2 #5
6 | B6 Reflect. #1 | 52 | B4 Senti. 2 #7 | 98 | B1-M-Senti. 2 #8 | 144 | B3-E-Senti. 2 #5
7 | B7 Reflect. #1 | 53 | B1 Senti. 2 #8 | 99 | B2-M-Senti. 2 #8 | 145 | B4-E-Senti. 2 #5
8 | NDVI #1 | 54 | B2 Senti. 2 #8 | 100 | B3-M-Senti. 2 #8 | 146 | B1-E-Senti. 2 #6
9 | NDWI #1 | 55 | B3 Senti. 2 #8 | 101 | B4-M-Senti. 2 #8 | 147 | B2-E-Senti. 2 #6
10 | Albedo #1 | 56 | B4 Senti. 2 #8 | 102 | B1-M-Senti. 2 #9 | 148 | B3-E-Senti. 2 #6
11 | Temp1 #1 | 57 | B1 Senti. 2 #9 | 103 | B2-M-Senti. 2 #9 | 149 | B4-E-Senti. 2 #6
12 | Temp2 #1 | 58 | B2 Senti. 2 #9 | 104 | B3-M-Senti. 2 #9 | 150 | B1-E-Senti. 2 #7
13 | B1 Reflect. #2 | 59 | B3 Senti. 2 #9 | 105 | B4-M-Senti. 2 #9 | 151 | B2-E-Senti. 2 #7
14 | B2 Reflect. #2 | 60 | B4 Senti. 2 #9 | 106 | B1-M-Senti. 2 #10 | 152 | B3-E-Senti. 2 #7
15 | B3 Reflect. #2 | 61 | B1 Senti. 2 #10 | 107 | B2-M-Senti. 2 #10 | 153 | B4-E-Senti. 2 #7
16 | B4 Reflect. #2 | 62 | B2 Senti. 2 #10 | 108 | B3-M-Senti. 2 #10 | 154 | B1-E-Senti. 2 #8
17 | B5 Reflect. #2 | 63 | B3 Senti. 2 #10 | 109 | B4-M-Senti. 2 #10 | 155 | B2-E-Senti. 2 #8
18 | B6 Reflect. #2 | 64 | B4 Senti. 2 #10 | 110 | B1-V-Senti. 2 #4 | 156 | B3-E-Senti. 2 #8
19 | B7 Reflect. #2 | 65 | Senti. VV-#11 | 111 | B2-V-Senti. 2 #4 | 157 | B4-E-Senti. 2 #8
20 | NDVI #2 | 66 | Senti. VH-#11 | 112 | B3-V-Senti. 2 #4 | 158 | B1-E-Senti. 2 #9
21 | NDWI #2 | 67 | Senti. VV-#12 | 113 | B4-V-Senti. 2 #4 | 159 | B2-E-Senti. 2 #9
22 | Albedo #2 | 68 | Senti. VH-#12 | 114 | B1-V-Senti. 2 #5 | 160 | B3-E-Senti. 2 #9
23 | Temp1 #2 | 69 | Senti. VV-#13 | 115 | B2-V-Senti. 2 #5 | 161 | B4-E-Senti. 2 #9
24 | Temp2 #2 | 70 | Senti. VH-#13 | 116 | B3-V-Senti. 2 #5 | 162 | B1-E-Senti. 2 #10
25 | B1 Reflect. #3 | 71 | Senti. VV-#14 | 117 | B4-V-Senti. 2 #5 | 163 | B2-E-Senti. 2 #10
26 | B2 Reflect. #3 | 72 | Senti. VH-#14 | 118 | B1-V-Senti. 2 #6 | 164 | B3-E-Senti. 2 #10
27 | B3 Reflect. #3 | 73 | DEM | 119 | B2-V-Senti. 2 #6 | 165 | B4-E-Senti. 2 #10
28 | B4 Reflect. #3 | 74 | Slope | 120 | B3-V-Senti. 2 #6 | 166 | EVI Senti. 2 #4
29 | B5 Reflect. #3 | 75 | NDVI Senti. 2 #4 | 121 | B4-V-Senti. 2 #6 | 167 | NIRv Senti. 2 #4
30 | B6 Reflect. #3 | 76 | NDVI Senti. 2 #5 | 122 | B1-V-Senti. 2 #7 | 168 | EVI Senti. 2 #5
31 | B7 Reflect. #3 | 77 | NDVI Senti. 2 #6 | 123 | B2-V-Senti. 2 #7 | 169 | NIRv Senti. 2 #5
32 | NDVI #3 | 78 | NDVI Senti. 2 #7 | 124 | B3-V-Senti. 2 #7 | 170 | EVI Senti. 2 #6
33 | NDWI #3 | 79 | NDVI Senti. 2 #8 | 125 | B4-V-Senti. 2 #7 | 171 | NIRv Senti. 2 #6
34 | Albedo #3 | 80 | NDVI Senti. 2 #9 | 126 | B1-V-Senti. 2 #8 | 172 | EVI Senti. 2 #7
35 | Temp1 #3 | 81 | NDVI Senti. 2 #10 | 127 | B2-V-Senti. 2 #8 | 173 | NIRv Senti. 2 #7
36 | Temp2 #3 | 82 | B1-M-Senti. 2 #4 | 128 | B3-V-Senti. 2 #8 | 174 | EVI Senti. 2 #8
37 | B1 Senti. 2 #4 | 83 | B2-M-Senti. 2 #4 | 129 | B4-V-Senti. 2 #8 | 175 | NIRv Senti. 2 #8
38 | B2 Senti. 2 #4 | 84 | B3-M-Senti. 2 #4 | 130 | B1-V-Senti. 2 #9 | 176 | EVI Senti. 2 #9
39 | B3 Senti. 2 #4 | 85 | B4-M-Senti. 2 #4 | 131 | B2-V-Senti. 2 #9 | 177 | NIRv Senti. 2 #9
40 | B4 Senti. 2 #4 | 86 | B1-M-Senti. 2 #5 | 132 | B3-V-Senti. 2 #9 | 178 | EVI Senti. 2 #10
41 | B1 Senti. 2 #5 | 87 | B2-M-Senti. 2 #5 | 133 | B4-V-Senti. 2 #9 | 179 | NIRv Senti. 2 #10
42 | B2 Senti. 2 #5 | 88 | B3-M-Senti. 2 #5 | 134 | B1-V-Senti. 2 #10 | 180 | Senti. DPSVI-#11
43 | B3 Senti. 2 #5 | 89 | B4-M-Senti. 2 #5 | 135 | B2-V-Senti. 2 #10 | 181 | Senti. DPSVI-#12
44 | B4 Senti. 2 #5 | 90 | B1-M-Senti. 2 #6 | 136 | B3-V-Senti. 2 #10 | 182 | Senti. DPSVI-#13
45 | B1 Senti. 2 #6 | 91 | B2-M-Senti. 2 #6 | 137 | B4-V-Senti. 2 #10 | 183 | Senti. DPSVI-#14
46 | B2 Senti. 2 #6 | 92 | B3-M-Senti. 2 #6 | 138 | B1-E-Senti. 2 #4 | 184 | VBF-10
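For reference, the optical indices listed in Table 3 follow standard formulas from the cited literature (NDVI; Gao's NDWI [68]; EVI [64]; NIRv [67]). The sketch below states them in Python; the band-to-argument mapping (e.g., Sentinel-2 B2, B4, B8, and B11 as blue, red, NIR, and SWIR) is an assumption for illustration, not a prescription from this paper.

```python
def spectral_indices(blue, red, nir, swir):
    """Standard formulas for the optical indices in Table 3.

    Inputs are surface reflectances scaled to [0, 1] (floats or
    NumPy arrays work equally well).
    """
    ndvi = (nir - red) / (nir + red)                                 # NDVI
    ndwi = (nir - swir) / (nir + swir)                               # NDWI, Gao [68]
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)   # EVI [64]
    nirv = ndvi * nir                                                # NIRv [67]
    return ndvi, ndwi, evi, nirv

# Illustrative reflectances for a single vegetated pixel.
print(spectral_indices(0.03, 0.05, 0.40, 0.20))
```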
Table 4. Features used in each classifier, as determined through our analysis in order to maximize classification accuracy. Index refers to the feature index in Table 3.

Classifier #1 | | Classifier #2 | | Classifier #3 |
Index | Name | Index | Name | Index | Name
1 | B1 Reflect. #1 | 180 | Senti. DPSVI-#11 | 65 | Senti. VV-#11
2 | B2 Reflect. #1 | 181 | Senti. DPSVI-#12 | 66 | Senti. VH-#11
3 | B3 Reflect. #1 | 182 | Senti. DPSVI-#13 | 67 | Senti. VV-#12
4 | B4 Reflect. #1 | 183 | Senti. DPSVI-#14 | 68 | Senti. VH-#12
7 | B7 Reflect. #1 | 184 | VBF-10 | 69 | Senti. VV-#13
15 | B3 Reflect. #2 | 21 | NDWI #2 | 70 | Senti. VH-#13
16 | B4 Reflect. #2 | 33 | NDWI #3 | 71 | Senti. VV-#14
19 | B7 Reflect. #2 | — | — | 72 | Senti. VH-#14
20 | NDVI #2 | — | — | 73 | DEM
184 | VBF-10 | — | — | 92 | B3-M-Senti. 2 #6
23 | Temp1 #2 | — | — | 108 | B3-M-Senti. 2 #10
127 | B2-V-Senti. 2 #8 | — | — | 115 | B2-V-Senti. 2 #5
— | — | — | — | 123 | B2-V-Senti. 2 #7
Table 5. Confusion matrix of the traditional method (Classifier #1). Rows represent the reference data, while columns represent the classification.

(Reference ↓ / Classified →) | Fen | Bog | Marsh | Swamp | Upland | Producer Accuracy | User Accuracy
Fen | 134,845 | 2212 | 1753 | 13,691 | 3601 | 0.8638 | 0.7354
Bog | 948 | 13,186 | 0 | 277 | 68 | 0.9106 | 0.7505
Marsh | 1502 | 1 | 21,182 | 468 | 263 | 0.9045 | 0.7595
Swamp | 721 | 40 | 117 | 8100 | 173 | 0.8851 | 0.1422
Upland | 45,327 | 2130 | 4836 | 34,388 | 549,760 | 0.8638 | 0.9925
Overall accuracy: 0.875
Table 6. Confusion matrix of the proposed ensemble classifier. Rows represent the reference data, while columns represent the classification.

(Reference ↓ / Classified →) | Fen | Bog | Marsh | Swamp | Upland | Producer Accuracy | User Accuracy
Fen | 145,390 | 2132 | 1674 | 5508 | 1398 | 0.9315 | 0.8267
Bog | 668 | 13,628 | 0 | 175 | 8 | 0.9412 | 0.8234
Marsh | 822 | 0 | 22,200 | 290 | 104 | 0.9481 | 0.8142
Swamp | 534 | 24 | 83 | 8462 | 48 | 0.9247 | 0.3203
Upland | 28,445 | 766 | 3310 | 11,986 | 591,934 | 0.9301 | 0.9974
Overall accuracy: 0.935
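For readers who wish to verify the accuracy columns, the sketch below recomputes the per-class producer and user accuracies from the Table 6 matrix as transcribed above (rows = reference, columns = classification); the results agree with the table to within last-digit rounding.

```python
import numpy as np

# Confusion matrix from Table 6; class order: fen, bog, marsh, swamp, upland.
cm = np.array([
    [145390,  2132,  1674,  5508,   1398],
    [   668, 13628,     0,   175,      8],
    [   822,     0, 22200,   290,    104],
    [   534,    24,    83,  8462,     48],
    [ 28445,   766,  3310, 11986, 591934],
])
correct = np.diag(cm)
producer_acc = correct / cm.sum(axis=1)  # correct / reference totals per class
user_acc = correct / cm.sum(axis=0)      # correct / classified totals per class
print("Producer:", np.round(producer_acc, 4))
print("User:    ", np.round(user_acc, 4))
```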
Table 7. Number of the misclassified pixels with high confidence and their land-cover assignments for the traditional and proposed methods.

Method | Fen | Bog | Marsh | Swamp | Upland
High-conf. misclassified pixels—Classifier #1 | 9714 | 604 | 321 | 414 | 20,167
High-conf. misclassified pixels—the proposed method | 6608 | 505 | 265 | 348 | 15,370
Table 8. Matrix showing the assignments using the proposed ensemble classifier in comparison with Classifier #1 (the traditional method) for all pixels with changes in class assignment. Rows are the land covers that a misclassified pixel was first assigned to by Classifier #1; columns correspond to the land cover that the pixel was assigned to by the ensemble classifier.
Final Land Cover
FenBogMarshSwampUpland
Initial Land coverFen0911604918
Bog0001204
Marsh0002336
Swamp15,394868614018,180
Upland120669134430
Table 9. Matrix showing pixel assignments by the proposed ensemble classifier in comparison with Classifier #1 (the traditional method) for a subset of the pixels shown in Table 7. For the pixels shown here, the correct class had the second-largest support from the evidence, but the largest and second-largest mass functions were similar. Rows are the land covers that a misclassified pixel was first assigned to by Classifier #1; columns correspond to the land cover that the pixel was assigned to by the ensemble classifier.
Final Land Cover
FenBogMarshSwampUpland
Initial Land coverFen012284903
Bog0000204
Marsh0002333
Swamp7125114182017,482
Upland7082234100
Table 10. Land-cover breakdown of the misclassified pixels where two classifiers disagreed or all disagreed.
FenBogMarshSwampUpland
Two Disagree681715023552327,402
All Disagree75287617330
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
