Article

An Uncertainty Descriptor for Quantitative Measurement of the Uncertainty of Remote Sensing Images

School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(13), 1560; https://doi.org/10.3390/rs11131560
Submission received: 9 May 2019 / Revised: 13 June 2019 / Accepted: 26 June 2019 / Published: 1 July 2019
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Reliable image classification results are crucial for the application of remote sensing images, yet the reliability of image classification has received relatively little attention. In particular, the inherent uncertainty of remote sensing images has been disregarded. The uncertainty of a remote sensing image accumulates and propagates continuously through the classification process and ultimately affects the reliability of the classification results. Therefore, quantitative description and investigation of the inherent uncertainty of remote sensing images are crucial for achieving reliable remote sensing image classification. In this study, we analyze the sources of uncertainty of remote sensing images in detail and propose a quantitative descriptor for measuring image uncertainty comprehensively and effectively. In addition, we design two verification schemes to verify the validity of the proposed uncertainty descriptor. Finally, the validity of the proposed descriptor is confirmed by experimental results on three real remote sensing images. Our study on the uncertainty of remote sensing images may support the development of uncertainty control methods and reliable classification schemes for remote sensing images.


1. Introduction

With the continuous development of space-to-Earth observation technology, remote sensing images with different spatial, spectral, and temporal resolutions have become increasingly available in recent years. Different types of remote sensing imagery can record surface information in detail from different aspects. For example, high spatial resolution remote sensing images can effectively record the detailed spatial information of various ground objects, and hyperspectral imagery with hundreds of contiguous narrow spectral bands can record the spectral information of different ground objects in detail [1]. Moreover, multi-temporal images can record surface change information in a timely manner. Remote sensing images have therefore been widely and effectively utilized in many fields, such as urban monitoring, environment assessment, and decision making [2,3]. In these applications, accurate and reliable land use/land cover classification is a primary and fundamental task. Remote sensing image classification is thus crucial; it has received considerable attention in recent decades and remains a research hotspot [4].
To obtain effective surface coverage information from remote sensing imagery, scholars have developed numerous classification techniques for different types of remote sensing imagery; examples include the Support Vector Machine (SVM) [5,6], Maximum Likelihood Classification [7], K-Nearest Neighbor [8], and a series of improved variants of these methods [9,10,11]. Moreover, many new classification methods have emerged. For instance, object-oriented methods [12,13] have been proposed and studied for the classification of high-resolution imagery. Sparse representation [14,15] and ensemble learning methods (e.g., random forest [16] and rotation forest [17]) have been developed for the classification of hyperspectral imagery. Furthermore, as deep learning continues to penetrate the field of visual recognition, it provides new research directions for remote sensing image classification [18].
Although many approaches have been proposed and important advancements have been achieved in obtaining high-precision surface classification results, minimal attention has been devoted to the reliability of remote sensing image classification. Obviously, the reliability of land use classification results is crucial for their application. In recent years, some scholars have gradually begun to study reliability in image classification. For example, Carmel [19] proposed a data uncertainty control approach for remotely sensed data through aggregation. Choi et al. [20] proposed a weighted SVM with classification uncertainty and reported that this method has higher classification accuracy than conventional SVM. Li and Zhang [21] proposed a Markov chain geostatistical framework for land cover classification with uncertainty assessment, which can assess the uncertainty associated with classified data. Giacco et al. [22] analyzed the uncertainty in the classification of multispectral satellite imagery using SVMs and Self-Organizing Maps and showed how the uncertainty measure can be useful in the fusion of several classifiers. Feizizadeh [23] proposed an approach that integrates fuzzy synthetic evaluation and Dempster–Shafer theory (FSE-DST) for spatial uncertainty analysis and accuracy assessment in object-based image classification. Shi et al. [24] proposed a validation scheme to evaluate the reliability of land cover products, including result and process reliability evaluation. Wilson and Granlund [25] argued that uncertainty is a fundamental issue in signal processing and suggested considering its impact throughout the entire image processing pipeline. Gillmann et al. [26] presented an uncertainty-aware image pre-processing paradigm that considers the input image’s uncertainty and propagates it through the entire pipeline. Gillmann et al. [27] also considered the uncertainty of image segmentation and proposed a flexible multi-class segmentation method that fuses a fuzzy and a hierarchical segmentation approach, which yields more effective segmentation results along with a visualization of the segmentation uncertainty.
However, most of these studies on the reliability of image classification focused on (i) controlling the uncertainty in the classification process to improve the reliability of the classification results and (ii) evaluating the uncertainty or reliability of the classification results. They disregarded the uncertainty of remote sensing images themselves (e.g., image uncertainty caused by noise and mixed pixels). Although Gillmann et al. [26] considered the inherent uncertainty of image data, their description and quantification of the inherent uncertainty of the image itself, especially remote sensing imagery, remained limited. In fact, remote sensing images naturally carry varying degrees of uncertainty owing to the complexity of the remote sensing imaging process. According to uncertainty propagation theory [28], the uncertainty of a remote sensing image accumulates and propagates continuously in the classification process and ultimately affects the reliability of the classification results. Therefore, quantitative description and investigation of the inherent uncertainty of remote sensing imagery are crucial in achieving reliable remote sensing image classification.
This study aims to analyze the sources of uncertainty of remote sensing imagery in detail and to propose a quantitative description model for measuring image uncertainty comprehensively and effectively. Note that the uncertainty here mainly refers to the classification uncertainty caused by the data characteristics of the image itself when a remote sensing image is given for classification. Uncertainties caused by other image processing steps (such as radiometric correction), by the representativeness of the selected samples, or by the robustness of the classification algorithm are not investigated in this article. This work is expected to provide a basis for studying the accumulation and propagation mechanisms of uncertainty in the classification of remote sensing imagery and guidance for the study of remote sensing image uncertainty control methods, which will improve the accuracy and reliability of remote sensing image classification.
The remaining part of this paper is organized as follows. A source analysis for remote sensing image uncertainty (including spatial distribution uncertainty and semantic uncertainty) is presented in Section 2. Section 3 presents quantitative measurement methods for the uncertainties proposed in Section 2 and the image uncertainty descriptor. Section 4 introduces two validity verification schemes for the proposed uncertainty descriptor. After that, Section 5 describes the experimental results and analysis on three test images. Finally, Section 6 draws the conclusions of this paper.

2. Source Analysis for Uncertainty

Different types of remote sensing imagery can record ground object information at different levels of detail. However, owing to the complexity of the surface, calibration errors, the limited radiometric resolution of sensors, and the complexity of the imaging process, acquired remote sensing images contain varying degrees of uncertainty. This uncertainty causes varying degrees of confusion between pixels of different categories in the image, leading to erroneous results when the image is classified. In pixel-based classification, the classification uncertainty of a remote sensing image mainly refers to the classification uncertainty of its pixels, which originates from two aspects.

2.1. Spatial Distribution Uncertainty

The uncertainty caused by differences in the spatial distributions of different pixels in an image is defined herein as Spatial Distribution Uncertainty (SDU). This spatial distribution mainly refers to the distance from a pixel to the object boundary, which is used to quantify the influence of adjacency effects in the remote sensing image. Obviously, mixed pixels are seriously affected by SDU. Mixed pixels with different mixing levels are often formed at the boundaries between different objects in the image; these pixels are easily misclassified in the classification process, so their degree of classification uncertainty is high. The formation of remote sensing imagery is a complicated process: during image acquisition, a single pixel is inevitably affected by the spectral reflection of pixels in adjacent regions. Therefore, in addition to mixed pixels, other pixels close to an object boundary are also affected by varying degrees of SDU because of adjacency effects; the closer they are to the boundary, the greater the influence and the higher the SDU level. As indicated in [29], all objects can be divided into the Interior Part (IP) and Boundary Part (BP) in the image’s fuzzy topological space. Mixed pixels and pixels within a certain range on both sides of the object boundary are the most important components of the BP of the object in the image’s fuzzy topological space (e.g., pixels in part B in Figure 1) and are often easily misclassified in the classification task. By contrast, the center pixels of the object, which are far from the object boundary and make up the most important components of the IP of the object (e.g., pixels in part A in Figure 1), are more likely to be correctly classified (they have a low probability of being misclassified), and their uncertainty is low.

2.2. Semantic Uncertainty

In remote sensing imagery, the confusion of pixel categories caused by intra-class variability in the feature space is defined as Semantic Uncertainty (SU) in this paper. In the ideal case, similar pixels in an image would have the same spectral and spatial features, and vice versa. In fact, however, owing to the influence of image noise and the phenomenon wherein the same object exhibits different spectra, the features of similar pixels in the image also exhibit certain differences. Additionally, as the spatial resolution of remote sensing imagery continues to increase, more surface details are presented, and this intra-class variability becomes increasingly apparent, causing greater difficulty for image classification. Generally, within an object, pixels that differ considerably from the overall feature level of the object are more likely to be misclassified; and among objects with different internal heterogeneity, those with higher heterogeneity are more likely to be misclassified. These differences cause different degrees of SU between pixels in the image.

3. Quantification of Image Uncertainty

According to the analysis of the source of uncertainty above, for a given remote sensing image for classification, its classification uncertainty, which is caused by the data characteristics of the image itself, mainly comes from the influences of object boundaries and high intra-class differences, that is, Spatial Distribution Uncertainty (SDU) and Semantic Uncertainty (SU). To perform a quantitative description of these uncertainties in the image, object-oriented segmentation is first performed on an entire image to obtain many relatively homogeneous objects. Second, classification uncertainty of the image is quantitatively measured based on the segmentation results. The specific quantitative measurement method is presented below.

3.1. Spatial Distribution Uncertainty

As mentioned above, the difference in pixels’ Spatial Distribution Uncertainty (SDU) varies with the distance of the pixels to the object boundary in the image. The smaller the distance to the boundary is, the greater the SDU of the pixel is and vice versa. The SDU of pixels in remote sensing imagery is quantified in this study according to this principle.
First, the boundary vectors of the objects extracted by image segmentation are rasterized to obtain a boundary raster map I. The resolution and size of raster map I are consistent with those of the original image. In the raster map, boundary pixels have value 1 and non-boundary pixels have value 0. Then, a k × k distance-weighted convolution template is established to calculate the SDU of each pixel. The established template W is shown in Figure 2 (k = 3). The weight $w_{ij}$ of each unit in the template is determined by its distance to the central unit P; the larger the distance is, the smaller the weight is:
Let the distance between central unit P and itself be 1 (to avoid a self-distance of 0, which would hinder the subsequent calculation of weights). The distance $D_{ij}$ of the unit in the i-th row and j-th column to the central unit $P\left(\frac{k+1}{2}, \frac{k+1}{2}\right)$ of the template can then be calculated according to Equation (1):

$$D_{ij} = \sqrt{\left(i - \frac{k+1}{2}\right)^2 + \left(j - \frac{k+1}{2}\right)^2 + 1}, \quad i, j \in \{1, 2, 3, \ldots, k\} \tag{1}$$
The weight of each template unit can be calculated by Equation (2).
$$w_{ij} = \frac{1/D_{ij}}{\sum_{i=1}^{k} \sum_{j=1}^{k} 1/D_{ij}}, \quad i, j \in \{1, 2, 3, \ldots, k\} \tag{2}$$
Finally, with Equation (3), the convolution operation is performed pixel by pixel on the entire boundary raster map I to obtain the SDU of the entire image [30].
$$SDU = I \otimes W \tag{3}$$

where $\otimes$ represents the convolution operation, I is the boundary raster map, and W is the distance-weighted convolution template. Specifically, over the entire image, the n-th pixel’s spatial distribution uncertainty is $SDU_n = \sum_{i=1}^{k} \sum_{j=1}^{k} w_{i,j} \times I_{i,j}^{n}$, where $w_{i,j}$ is the weight of the unit in the i-th row and j-th column of the k × k convolution template W, and $I_{i,j}^{n}$ is the value of the pixel in the i-th row and j-th column of the k × k neighborhood of the n-th pixel of raster map I.
When the convolution operation is performed on boundary raster map I by using template W to calculate SDU, the current pixel coincides with the central unit of the template. At this time, the farther the boundary pixel is from the current pixel (central unit), the smaller the weight is and vice versa. This condition is consistent with the influence of the boundary on the uncertainty of its surrounding pixels, that is, the farther a pixel is from the boundary, the smaller its SDU is and vice versa. In addition, one of the advantages of using convolutional templates to calculate the effect of boundaries on pixel’s uncertainty is that the effects of multiple boundaries around the pixel on the pixel’s SDU can be calculated simultaneously.
The following should be noted. First, the range of influence of the adjacency effect on pixels is limited, and the affected pixels are mainly concentrated near object boundaries; the influence on internal pixels far from the boundary is almost negligible. Therefore, the value of k used to calculate SDU should not be too large; a value such as 3 or 5 is sufficient. Second, when the convolution operation is performed on the outermost pixels of the image, the missing portion of the neighborhood is filled with 0.
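To make the computation concrete, the following minimal Python sketch implements Equations (1)–(3); it is an illustration under the stated assumptions rather than code from the paper, and all names (distance_weighted_template, spatial_distribution_uncertainty, boundary) are illustrative. The boundary raster is assumed to be the binary map described above (1 = boundary pixel, 0 = non-boundary).

import numpy as np
from scipy.ndimage import convolve

def distance_weighted_template(k=3):
    # Build the k x k distance-weighted template W of Equations (1)-(2).
    c = (k + 1) / 2                               # 1-based coordinates of the central unit P
    i, j = np.mgrid[1:k + 1, 1:k + 1]             # 1-based row/column indices
    d = np.sqrt((i - c) ** 2 + (j - c) ** 2 + 1)  # Eq. (1): self-distance of P equals 1
    return (1.0 / d) / np.sum(1.0 / d)            # Eq. (2): normalized inverse-distance weights

def spatial_distribution_uncertainty(boundary, k=3):
    # Eq. (3): convolve the binary boundary raster I with template W.
    # Zero padding at the image border, as noted in the text above.
    return convolve(boundary.astype(float), distance_weighted_template(k),
                    mode='constant', cval=0.0)

# Toy example: a 5 x 5 raster with a vertical boundary in the middle column.
boundary = np.zeros((5, 5))
boundary[:, 2] = 1
sdu = spatial_distribution_uncertainty(boundary, k=3)  # SDU peaks on and beside the boundary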

3.2. Semantic Uncertainty

In order to accurately quantify the semantic uncertainty (SU) of the pixels in an image, SU is summarized into the following aspects: (1) uncertainty caused by pixels’ feature differences within the object and (2) uncertainty caused by differences in internal variations of different objects.
Figure 3a,b show two objects in the image representing object A with small internal variation and object B with large internal variation, respectively. For pixels inside object A (or inside object B), varying degrees of difference exist between features of different pixels, and the degree of their deviation from the average level of the object’s feature also varies, resulting in a difference in the degree of uncertainty between them. As mentioned in Section 2, the greater the degree of deviation from the average level of the object’s feature is, the higher the uncertainty is and vice versa. Obviously, this kind of uncertainty is mainly caused by pixels’ feature differences within the object.
Furthermore, the comparison of objects A and B indicates that the difference between pixels inside object B (Figure 3b) is significantly larger than that inside object A (Figure 3a). Thus, object B (or pixels in the object B) is more easily misclassified than object A in the classification process. Obviously, the overall uncertainty level of object B is higher than that of object A, that is, the uncertainty level of pixels in object B is higher than that in object A. This kind of uncertainty is mainly caused by differences in internal pixels’ variations of different objects.
The quantitative measurement methods for these two uncertainties are described in the following text.

3.2.1. Uncertainty Caused by Pixels’ Feature Differences within the Object

Even within a nominally homogeneous object, differences still exist between its pixels. The uncertainty caused by these differences affects the accuracy and reliability of the classification results to some extent. From the perspective of the feature space of such a nominally homogeneous object, the farther a feature point is from the feature center, the greater the uncertainty is, and vice versa.
With this principle, the uncertainty caused by pixels’ feature differences inside the object can be defined according to the distance from the pixel to the feature center. In this study, the distance from the pixel to the feature center is calculated by using Euclidean distance, as shown in Equation (4).
$$d_i = \sqrt{\sum_{j=1}^{n} \left| f_{ij} - \bar{f}_j \right|^2}, \quad \bar{f}_j = \frac{1}{M} \sum_{i=1}^{M} f_{ij} \tag{4}$$

where $d_i$ is the distance from the i-th pixel to the feature center in the feature space composed of all pixels inside a nominally homogeneous object, i = 1, 2, …, M, and M is the total number of pixels in the object; $f_{ij}$ is the j-th dimension feature of the i-th pixel, j = 1, 2, …, n, and n is the total dimension of the image’s features; and $\bar{f}_j$ is the j-th dimension feature of the feature center.
On the basis of the relationship between the uncertainty of a pixel and its distance to the feature center, the uncertainty $U^{in}$ caused by the pixels’ differences inside the object can be calculated using Equation (5) [31]:

$$U_i^{in} = \frac{d_i}{\sum_{i=1}^{M} d_i} \tag{5}$$

where $U_i^{in}$ is the uncertainty of the i-th pixel in the object, which is caused by the pixels’ differences inside the object.
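As an illustration, Equations (4) and (5) for a single object can be sketched in Python as follows; the names are illustrative, and feats is assumed to hold the n-dimensional feature vectors of the M pixels of one nominally homogeneous object.

import numpy as np

def within_object_uncertainty(feats):
    # feats: (M, n) array of feature vectors of the M pixels in one object.
    center = feats.mean(axis=0)                       # Eq. (4): feature center f_bar_j
    d = np.sqrt(((feats - center) ** 2).sum(axis=1))  # Eq. (4): Euclidean distances d_i
    u_in = d / d.sum()                                # Eq. (5): U_i^in, sums to 1 over the object
    # (A perfectly homogeneous object, d.sum() == 0, would need a guard in practice.)
    return d, u_in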

3.2.2. Uncertainty Caused by Differences in Internal Variations of Different Objects

The variation of internal pixels’ features differs between homogeneous objects. Objects with small internal differences between pixels have a low degree of uncertainty (the overall uncertainty level of all pixels in the object is relatively low) and vice versa.
To quantitatively measure the uncertainty caused by differences in the internal pixels’ variation of different objects, this study uses the coefficient of variation to measure the degree of dispersion between the features of pixels in each homogeneous object. In probability theory and statistics, the coefficient of variation, also known as the discrete coefficient, is a normalized measure of the degree of dispersion of the probability distribution, which is defined as the ratio of the standard deviation to the mean [32].
In Section 3.2.1, we calculated the distance $d_i$ between each pixel and the feature center inside each object. The differences in $d_i$ reflect the distribution differences between the features of different pixels within the object. Therefore, the coefficient of variation of each object can be calculated using Equation (6):
$$CV_j = \frac{\delta_j}{\bar{d}_j}, \quad \bar{d}_j = \frac{1}{M} \sum_{i=1}^{M} d_i, \quad \delta_j = \sqrt{\frac{\sum_{i=1}^{M} \left(d_i - \bar{d}_j\right)^2}{M - 1}} \tag{6}$$

where $CV_j$ is the coefficient of variation of the j-th object in the image, $d_i$ is the distance from the i-th pixel inside the object to the feature center of the object, $\bar{d}_j$ and $\delta_j$ are the mean and sample standard deviation of these distances, and M is the total number of pixels in the object.
According to the statistical significance of the coefficient of variation, the coefficient of variation of each object reflects the degree of difference between the pixels within the object. The larger the coefficient of variation is, the greater the difference is, and thus the greater the degree of uncertainty of the object (that is, the overall uncertainty level of all pixels within the object). Therefore, the coefficient of variation can be used to measure the uncertainty caused by the differences in the internal pixels’ variation of different objects. Let the set of coefficients of variation of all N objects in the image be $\mathbf{CV} = \{CV_1, CV_2, \ldots, CV_N\}$. Then, the coefficients of variation of all objects are normalized by Equation (7) to obtain the uncertainty $U^{obj}$ caused by differences in the internal variations of different objects [33]:
$$U_j^{obj} = \frac{CV_j - \min(\mathbf{CV})}{\max(\mathbf{CV}) - \min(\mathbf{CV})} \tag{7}$$

where $U_j^{obj}$ and $CV_j$ are the uncertainty and coefficient of variation of the j-th object, respectively, and $\max(\mathbf{CV})$ and $\min(\mathbf{CV})$ are the maximum and minimum values of the coefficients of variation over all objects.
After calculating the uncertainty $U_i^{in}$ caused by pixels’ differences within objects and the uncertainty $U_j^{obj}$ caused by differences in the internal variations of different objects, the Semantic Uncertainty (SU) of each pixel in each object is obtained according to Equation (8):

$$SU_{ji} = U_j^{obj} \times U_i^{in} \tag{8}$$

where $SU_{ji}$ is the SU of the i-th pixel in the j-th object of the image.
By traversing all the pixels in the entire image, the SU of the entire image can be obtained.
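Continuing the sketch above, Equations (6)–(8) can be computed over all N objects as follows; objects is an illustrative list of per-object feature matrices, e.g. obtained by grouping pixels by their segment label, and each object is assumed to contain at least two non-identical pixels.

import numpy as np

def semantic_uncertainty(objects):
    # objects: list of (M_j, n) feature arrays, one per segmented object.
    cvs, u_ins = [], []
    for feats in objects:
        d, u_in = within_object_uncertainty(feats)
        cvs.append(d.std(ddof=1) / d.mean())  # Eq. (6): CV_j = delta_j / d_bar_j (ddof=1 gives the M-1 denominator)
        u_ins.append(u_in)
    cvs = np.asarray(cvs)
    u_obj = (cvs - cvs.min()) / (cvs.max() - cvs.min())         # Eq. (7): min-max normalization
    return [u_obj[j] * u_ins[j] for j in range(len(objects))]   # Eq. (8): per-pixel SU_ji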

3.3. Comprehensive Uncertainty: The Proposed Uncertainty Descriptor

As mentioned above, the uncertainty of pixels in an image originates from two aspects, namely, Spatial Distribution Uncertainty (SDU) and Semantic Uncertainty (SU). These two types of uncertainty have been quantified separately in Section 3.1 and Section 3.2, respectively. Then, Equation (9) can be used to integrate these two uncertainties and calculate the Comprehensive Uncertainty (CU) of all pixels in the image, which is the image uncertainty descriptor to be constructed in this work [25].
$$CU = \frac{SDU + SU}{2} \tag{9}$$
where CU, SDU, and SU represent comprehensive uncertainty, spatial distribution uncertainty, and semantic uncertainty of the image, respectively.
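A minimal sketch of this combination follows; it assumes the SDU and SU maps have already been brought to a comparable numeric range (a scaling step Equation (9) presupposes but the text does not spell out).

import numpy as np

def comprehensive_uncertainty(sdu, su):
    # Eq. (9): per-pixel average of the two uncertainty maps (same shape assumed).
    return (np.asarray(sdu) + np.asarray(su)) / 2.0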

4. Validity Verification Schemes for the Proposed Uncertainty Descriptor

The validity of the uncertainty descriptor proposed in this work is verified from the following aspects: (i) statistical analysis, which is the analysis of the correlation between uncertainty and error rate in classification results, and (ii) analysis of the proposed uncertainty descriptor’s effect on image classification (the proposed descriptor is used in the image classification process to improve the reliability and accuracy of image classification results). The flow chart of the two verification schemes is shown in Figure 4. The two schemes are described in detail in the following text.

4.1. Scheme I: Statistical Analysis

According to uncertainty theory [28], the higher the uncertainty of a pixel in an image is, the more likely it is to be misclassified in image classification tasks. Therefore, after an appropriate classification method is applied to the image, pixels with higher uncertainty should show a greater probability of classification error in the result. To describe this relationship quantitatively, this study uses the Pearson correlation coefficient to measure the correlation between uncertainty and classification error. The specific process is as follows:
  • An appropriate classification method is first selected to classify the remote sensing images. In this study, the Support Vector Machine (SVM) classification algorithm with a Radial Basis Function (RBF) kernel is used. SVM is a machine learning algorithm based on statistical learning theory [34] that has evolved into a well-developed and widely used classical method in image classification. The classification results are used to analyze the relationship between classification error and uncertainty.
    Notably, to facilitate the verification of the second scheme below, the SVM soft classification method is used to classify the images, and then the soft classification results are hardened based on the principle of maximum membership to obtain the final classification results. That is, each pixel is classified into the category corresponding to its maximum membership. The soft classification results and hardened classification map are also used in the second scheme.
  • According to the degree of uncertainty, all pixels of the entire image are divided into N levels with equal intervals of uncertainty; the range of uncertainty corresponding to the i-th level is given by Equation (10):

    $$\left[ U_{min} + (i-1)\frac{U_{max} - U_{min}}{N},\ U_{min} + i\,\frac{U_{max} - U_{min}}{N} \right] \tag{10}$$

    where $U_{min}$ and $U_{max}$ are the minimum and maximum uncertainty values over all pixels in the entire image, i = 1, 2, …, N, and N is the total number of uncertainty levels.
  • The number $num_i$ of misclassified pixels in each level is counted, and the overall classification error rate $\delta_i$ of each level is calculated according to Equation (11):

    $$\delta_i = \frac{num_i}{NUM_i} \tag{11}$$

    where $\delta_i$ is the overall classification error rate of the i-th level, $NUM_i$ is the total number of pixels within the i-th level, and $num_i$ is the total number of misclassified pixels within the i-th level.
  • The correlation coefficient R between the classification error rates and the uncertainty levels (the higher the level, the greater the degree of uncertainty) is calculated according to the definition of the Pearson correlation coefficient [35], as shown in Equation (12); a code sketch of this procedure is given at the end of this subsection:

    $$R = \frac{\sum_{i=1}^{N} \left(\delta_i - \bar{\delta}\right)\left(i - \bar{I}\right)}{\sqrt{\sum_{i=1}^{N} \left(\delta_i - \bar{\delta}\right)^2} \sqrt{\sum_{i=1}^{N} \left(i - \bar{I}\right)^2}} \tag{12}$$

    where $\bar{\delta}$ and $\bar{I}$ are the average values of $\delta_i$ and i, respectively, i = 1, 2, …, N, and N is the total number of uncertainty levels.
According to the statistical meaning of the Pearson correlation coefficient [35], the larger the absolute value of the Pearson coefficient is, the stronger the correlation between variables is. When the Pearson coefficient’s value is positive, the variables are positively correlated and vice versa.
Obviously, the larger the absolute value of the Pearson coefficient between uncertainty levels and classification error rates is, the stronger the correlation between them is. The stronger the correlation is, the more effective the proposed uncertainty descriptor is.
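The whole Scheme I procedure (Equations (10)–(12)) can be sketched as follows; cu is the per-pixel uncertainty map and err a boolean map marking misclassified pixels in the hardened classification result (both names are illustrative).

import numpy as np

def level_error_rates(cu, err, n_levels=10):
    # Eq. (10): divide [U_min, U_max] into N equal-width uncertainty levels.
    cu, err = cu.ravel(), err.ravel()
    edges = np.linspace(cu.min(), cu.max(), n_levels + 1)
    levels = np.digitize(cu, edges[1:-1])  # level index 0 .. n_levels-1 per pixel
    # Eq. (11): per-level error rate = misclassified pixels / total pixels in the level.
    return np.array([err[levels == i].mean() for i in range(n_levels)])

def pearson_r(rates):
    # Eq. (12): Pearson correlation between error rates and level indices 1..N.
    return np.corrcoef(rates, np.arange(1, len(rates) + 1))[0, 1]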

4.2. Scheme II: Analysis of the Proposed Uncertainty Descriptor’s Effect on Image Classification

In addition to demonstrating the validity of the proposed uncertainty descriptor from the perspective of statistical analysis (which proves the descriptor’s capability to indicate classification errors), this study also verifies its effectiveness by analyzing its effect on image classification. Specifically, the proposed uncertainty descriptor is applied in the image classification process. Theoretically, using the descriptor to control or constrain the image’s uncertainty during classification can improve the reliability of the classification results, and according to uncertainty propagation theory [28], improved reliability naturally translates into improved accuracy. Consequently, if the accuracy of the classification result improves when the proposed uncertainty descriptor is used, the descriptor demonstrably plays a role, which proves that it is effective.
Notably, how to best control or suppress uncertainty in the image classification process to improve the reliability and accuracy of the classification results is not the focus of this work (it is a complex issue that requires further research). Thus, this study only adopts a simple and direct but effective uncertainty control method to verify the effectiveness of the proposed uncertainty descriptor.
The post-processing of image classification is also an important part of the image classification task. Reasonable post-classification methods can effectively improve the preliminary classification results of many classification methods and can thus enhance the performance and reliability of classification [36,37]. From this point of view, this study applies the proposed uncertainty descriptor to the post-processing of image classification to improve the reliability and accuracy of image classification results. The specific details of this process are as follows.
  • First, an appropriate soft classification method is selected to initially classify the images, and the preliminary soft classification results are obtained. Then, based on the principle of maximum membership, the preliminary soft classification results are hardened to obtain the original classification map (OCM). As mentioned previously, in this section, the classification results (including the soft classification results and their hardened results) from the statistical analysis scheme (Scheme I) are directly reused for verification.
  • Second, the uncertainty control method is utilized to refine the preliminary soft classification results. Then, similar to what is performed in Scheme I, the new refined soft classification results are hardened based on the principle of maximum membership to obtain the final reliable classification map (RCM). The details of the uncertainty control method are described in detail below.
According to the first law of geography, things or attributes that are close in geographic space tend to be similar [38,39]. In remote sensing images, this law implies that adjacent pixels tend to have high similarity and are more likely to belong to the same class than pixels that are far apart, a principle that has been widely recognized and applied in the field of remote sensing. The uncertainty control method used in this study is designed according to this law.
Consider a pixel P with high uncertainty. According to information theory [40], such a pixel is highly likely to be misclassified, which is also confirmed by the results of the statistical analysis in Section 5.
In order to control or reduce the influence of uncertainty on the classification results, we use neighborhood reliability weighted spatial filtering (NRWSF) as the uncertainty control method to refine the results of the preliminary soft classification and obtain new soft classification results with increased reliability. The specific process is as follows.
For the current pixel P, whose local eight-neighborhood $O_p$ includes the central pixel P itself, NRWSF is first applied to the preliminary soft classification results of pixel P using Equation (13) to reduce the influence of uncertainty on the soft classification. The refined soft classification result of the entire image is obtained after traversing all of its pixels.
$$\rho_{P,i} = \sum_{n=1}^{9} w_n\, \rho_{n,i} \tag{13}$$

where $\rho_{P,i}$ is the probability that pixel P belongs to the i-th class in the refined soft classification results, i = 1, 2, …, I, and I is the total number of classes predefined before image classification; $w_n$ and $\rho_{n,i}$ are, respectively, the weight of the n-th pixel in pixel P’s local neighborhood $O_p$ (including the central pixel) and the probability that it belongs to the i-th class in the preliminary soft classification results.
The weight $w_n$ is determined by the uncertainty of each pixel in the neighborhood: the greater the uncertainty of a pixel is, the lower the reliability of its soft classification result is, and the smaller its weight is. Therefore, the weight $w_n$ of each pixel is calculated using Equation (14):

$$w_n = \frac{1/CU_n}{\sum_{m=1}^{9} 1/CU_m} \tag{14}$$

where $w_n$ and $CU_n$ are the weight and comprehensive uncertainty of the n-th pixel in P’s neighborhood $O_p$, respectively, and $CU_m$ is the comprehensive uncertainty of the m-th pixel in this neighborhood.
By using NRWSF to refine the preliminary soft classification results of low reliability, a new soft classification result with increased reliability is obtained. Then, a final reliable classification map (RCM) is generated by hardening the new soft classification results based on the principle of maximum membership (a code sketch of this refinement step is given at the end of this section). In theory, the reliability and accuracy of the RCM should improve to some extent over those of the original classification map (OCM), because the uncertainty in the RCM has been controlled and, to some extent, even eliminated.
  • Third, the overall accuracy (OA) and Kappa coefficient (KC), the most commonly used indicators in the accuracy assessment of image classification [41], are used for the accuracy assessment and comparative analysis of the original classification map (OCM) from the first step and the final reliable classification map (RCM) obtained after uncertainty control in the second step.
Obviously, if the accuracy of the final reliable classification map (RCM) is higher than that of the original classification map (OCM), then the uncertainty control method based on the proposed uncertainty descriptor is effective and plays a role, which in turn proves that the uncertainty descriptor proposed in this study is effective and valuable.
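A minimal sketch of the NRWSF refinement (Equations (13) and (14)) follows; probs is the preliminary soft classification (an H × W × I array of class-membership probabilities), cu the comprehensive uncertainty map, and the edge padding and the small eps guard against zero uncertainty are implementation assumptions not specified in the text.

import numpy as np

def nrwsf(probs, cu, eps=1e-12):
    # Neighborhood reliability weighted spatial filtering of soft classification results.
    h, w, n_cls = probs.shape
    pad_p = np.pad(probs, ((1, 1), (1, 1), (0, 0)), mode='edge')  # border handling: edge replication
    pad_c = np.pad(cu, 1, mode='edge')
    refined = np.empty_like(probs)
    for r in range(h):
        for c in range(w):
            win_p = pad_p[r:r + 3, c:c + 3, :].reshape(9, n_cls)   # 3 x 3 window incl. P
            win_w = 1.0 / (pad_c[r:r + 3, c:c + 3].ravel() + eps)  # Eq. (14): 1 / CU_n
            win_w /= win_w.sum()                                   # normalize the weights
            refined[r, c] = win_w @ win_p                          # Eq. (13): weighted sum
    return refined

# Hardening by maximum membership then yields the reliable classification map:
# rcm = nrwsf(probs, cu).argmax(axis=2)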

5. Experimental Results and Discussion

To prove the validity and robustness of the proposed uncertainty descriptor, we perform validation experiments on three real remote sensing image datasets according to the two designed verification schemes (statistical analysis and classification). The data used in the experiments are real remote sensing image data, and the corresponding ground truth reference images are derived from visual interpretation. Obviously, there are some inevitable errors in the determination of the boundary of the object during visual interpretation. Therefore, to minimize the influence of the ground truths’ errors caused by the inaccurate object boundary during visual interpretation on the final accuracy assessment, the data used in the experiments are upscaled images (whose spatial resolution is slightly reduced) obtained by pixel aggregation of the original image. However, in the accuracy evaluation, the original ground truth reference images are still used because the classification results are downscaled accordingly, that is, the classification accuracy is evaluated at the sub-pixel scale.

5.1. Datasets

Three real high spatial resolution remote sensing images are used in the experiments. The detailed information of the images is presented below.
First image: Vaihingen image. The first image is from the Vaihingen dataset of the remote sensing image semantic segmentation dataset published by the International Society for Photogrammetry and Remote Sensing (ISPRS) [42]. The Vaihingen image with a resolution of 9 cm consists of three bands, namely, near infrared, red, and green. As mentioned previously, the original image is resampled from 805 × 620 to 161 × 124 in the actual experiment, and the resampled image has a resolution of 45 cm. In addition, there are four main categories in the image scene: impervious surface, buildings, low vegetation, and trees. The original Vaihingen image and its ground truth reference image are shown in Figure 5.
Second image: Beijing2 image. The second image is from the Beijing-2 satellite imagery data, which cover a partial area of Wuhan, China. The Beijing2 image with a resolution of 3.2 m includes four bands: red, green, blue, and near infrared. The original image size is 375 × 309 pixels and resampled to 125 × 103 pixels during the experiment. The resampled image’s resolution is 9.6 m. The image scene includes five categories, namely, trees, grassland, bare land, water body, and buildings. The original Beijing2 image and its ground truth reference image are shown in Figure 6.
Third image: Potsdam image. The third image is from the Potsdam dataset, another remote sensing image semantic segmentation dataset published by ISPRS [42]. The Potsdam image with a resolution of 5 cm consists of three bands, namely, red, green, and blue. The original image size is 770 × 1500 pixels and resampled to 154 × 300 pixels during the experiment. The resampled image’s resolution is 25 cm. In addition, the image scene includes three categories: impervious surface, buildings, and low vegetation. The original Potsdam image and its ground truth reference image are shown in Figure 7.

5.2. Experimental Settings

Performing object-oriented segmentation on the image to obtain a series of relatively homogeneous objects is the first step in calculating the uncertainty of the image. In all experiments, the segmentation operations are performed in the professional remote sensing software ENVI 5.3 using the edge-based segmentation algorithm and the full lambda schedule-based merging algorithm. The parameters for the three sets of experimental data (segment scale and merge level) are shown in Table 1.
In the subsequent verification process, the image features used in image classification are similar to those used in image segmentation and in the calculation of semantic uncertainty (SU). These features include spectral and texture features. The spectral features comprise all spectral bands of the original image. The texture features, such as mean, variance, and entropy, are extracted from each band of the original image using the Grey Level Co-occurrence Matrix (GLCM) with a 3 × 3 kernel.
A distance-weighted convolution template W of size k × k must be established when calculating SDU. As mentioned previously, the value of k should not be too large. After extensive experiments and analysis, we selected 3 as the best value of k; therefore, k = 3 is used when calculating SDU in all three sets of experiments.

5.3. Calculation Results of Image Uncertainty

The comprehensive uncertainty (CU) of each remote sensing image is calculated according to the method described in Section 3. The results are shown in Figure 8. In Figure 8, the larger the value of CU is, the higher the degree of uncertainty is, and vice versa.

5.4. Quantitative Assessment and Analysis I - Statistical Analysis

As described in verification Scheme I, we first classify the three images using the SVM method (soft classification is applied, and the soft classification results are hardened to obtain the classification maps). The original classification maps (OCMs) are shown in Figure 9a, Figure 10a, and Figure 11a. Then, according to the method described in verification Scheme I, the uncertainty of each image is divided into several different levels, and the classification error rate of each level is calculated. Finally, a correlation analysis is performed between the classification error rates and uncertainty levels. The scatter plots and fit curves are shown in Figure 12. The equations of the fitted curves and correlation coefficients are shown in Table 2.
As can be seen from Figure 12 and Table 2, the degree of fitting of the curves in the scatter plots is very high, and the levels of uncertainty and classification error rates show a significant positive correlation, that is, the higher the level of uncertainty is, the higher the classification error rate is. Moreover, their positive correlation is very strong; specifically, the correlation coefficient R is greater than 0.97 (Vaihingen: 0.9705, Beijing2: 0.9852, and Potsdam: 0.9877). Therefore, from a statistical point of view, the uncertainty descriptor proposed in this study is obviously effective, and its capability to indicate classification errors is very strong.

5.5. Quantitative Assessment and Analysis II - Classification Verification

In order to further verify the validity of the proposed uncertainty descriptor, according to verification Scheme II, the proposed uncertainty descriptor is used to control or constrain the uncertainties of the soft classification results of three images in the classification process. The refined soft classification results are then hardened to obtain the final classification maps, and the final reliable classification maps (RCM_CUs) are shown in Figure 9d, Figure 10d, and Figure 11d. Finally, the accuracy of the classification is evaluated, and the accuracy difference between the final reliable classification maps (RCM_CUs) and the original classification maps (OCMs) is compared and analyzed. The accuracy comparison is shown in Table 3.
Table 3 shows that after the proposed comprehensive uncertainty descriptor is used to control the uncertainty in the classification process, the accuracies of the final classification results (RCM_CU) are significantly improved over those of the original classification results (OCM) in terms of both OA and KC. This indicates that the proposed uncertainty descriptor is also effective from the perspective of classification, because it plays a clear role in the uncertainty control process.
The comprehensive uncertainty (CU) consists of two parts, namely, spatial distribution uncertainty (SDU) and semantic uncertainty (SU). To further analyze their effects and differences, SDU and SU are separately introduced into the classification process to control or remove image uncertainty, and the corresponding classification results (RCM_SUs and RCM_SDUs) are obtained (as shown in Figure 9b,c, Figure 10b,c, and Figure 11b,c). Next, the accuracy differences between these classification maps (RCM_SUs and RCM_SDUs), the original classification maps (OCMs), and the final reliable classification maps (RCM_CUs) with CU control are analyzed and compared. Table 3 shows the classification accuracies.
It can be seen from Table 3 that introducing SDU alone also improves the overall accuracy of the classification results to some extent, although less than introducing CU does. However, introducing SU alone does not guarantee improved overall accuracy; on the contrary, it may reduce it (Table 3 shows that when only SU is introduced, the classification accuracy of the Vaihingen and Beijing2 images decreases, and only that of the Potsdam image improves). This shows that the uncertainty of images can be described and measured comprehensively and effectively only when SDU and SU are considered simultaneously.

6. Conclusions

In this paper, the sources of uncertainty in remote sensing imagery are analyzed and elaborated, and the inherent uncertainty of the imagery itself is summarized as spatial distribution uncertainty (SDU) and semantic uncertainty (SU). Then, on the basis of the detailed analysis of the formation mechanism of SDU and SU in remote sensing imagery, corresponding description models are constructed to quantify these two types of uncertainty, which are subsequently combined to construct a descriptor for measuring the comprehensive uncertainty (CU) of the imagery. Finally, the effectiveness of the proposed uncertainty descriptor is validated by implementing two designed verification schemes (statistical analysis and analysis of the effect on image classification) on three real remote sensing images. The experimental results confirm the validity of the uncertainty descriptor proposed in this work.
In this study, the spatial resolution of the three remote sensing images used in the experiments is high. In future research, we will verify and test the proposed uncertainty descriptor on images with high, medium, and low spatial resolutions and analyze its applicability and capability to quantify the uncertainty of different types of remote sensing imagery, with the aim of further testing its robustness and effectiveness. In particular, when the resolution of the image is very low, some assumptions underlying the calculation of SDU and SU may not be fully satisfied; it is therefore necessary to further test and improve the proposed model in the future.
The main limitation of the proposed uncertainty descriptor is that it is calculated from the results of image segmentation, so inaccurate segmentation may affect the uncertainty quantification. To minimize this impact, we sought the most accurate segmentation results through repeated trial-and-error experiments and visual inspection. In future research, we will continue to improve our uncertainty measurement model, for example, by using fuzzy segmentation to better mitigate the impact of inaccurate segmentation. In addition, the proposed uncertainty quantification method is mainly based on linear models; nonlinear models may quantify the uncertainty of imagery more effectively, and we will also explore them to optimize the proposed method.
The uncertainty of a remote sensing image accumulates and propagates continuously in the classification process and ultimately affects the reliability of the classification results. Therefore, quantitative description and investigation of the inherent uncertainty of remote sensing imagery are crucial for achieving reliable remote sensing image classification. The proposed uncertainty descriptor effectively quantifies the inherent uncertainty of images and offers guidance for research on uncertainty control methods in image classification tasks. Controlling or eliminating uncertainty during classification is likewise important for improving the reliability and accuracy of classification results. To verify the effectiveness of the proposed descriptor, this study adopts a simple but effective uncertainty control method, whose performance can nevertheless be improved further. In the future, we will continue to study and explore other effective uncertainty control methods or strategies based on the proposed descriptor to improve the reliability and accuracy of image classification.

Author Contributions

Q.Z. and P.Z. were responsible for the overall design of the study. Q.Z. performed all the experiments and drafted the manuscript. All authors read and approved the final manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (2018YFF0215006) and by the Geomatics Technology and Application Key Laboratory of Qinghai Province (Grant No. QHDX-2018-09).

Acknowledgments

The authors would like to thank the International Society for Photogrammetry and Remote Sensing (ISPRS) for providing the wonderful datasets. The authors are also grateful to the anonymous referees for their constructive criticism.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chang, C.-I. Hyperspectral Imaging: Techniques for Spectral Detection and Classification; Plenum Publishing Co.: New York, NY, USA, 2003. [Google Scholar]
  2. Moser, G.; Serpico, S.B.; Benediktsson, J.A. Land-Cover Mapping by Markov Modeling of Spatial–Contextual Information in Very-High-Resolution Remote Sensing Images. Proc. IEEE 2013, 101, 631–651. [Google Scholar] [CrossRef]
  3. Li, J.; Zhang, H.; Zhang, L. Supervised Segmentation of Very High Resolution Images by the Use of Extended Morphological Attribute Profiles and a Sparse Transform. IEEE Geosci. Remote Sens. 2014, 11, 1409–1413. [Google Scholar] [CrossRef]
  4. Zhao, J.; Zhong, Y.; Jia, T.; Wang, X.; Xu, Y.; Shu, H.; Zhang, L. Spectral-spatial classification of hyperspectral imagery with cooperative game. ISPRS J. Photogramm. 2018, 135, 31–42. [Google Scholar] [CrossRef]
  5. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef] [Green Version]
  6. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  7. Murthy, C.S.; Raju, P.V.; Badrinath, K.V.S. Classification of wheat crop with multi-temporal images: Performance of maximum likelihood and artificial neural networks. Int. J. Remote Sens. 2003, 24, 4871–4890. [Google Scholar] [CrossRef]
  8. Blanzieri, E.; Melgani, F. Nearest Neighbor Classification of Remote Sensing Images With the Maximal Margin Principle. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1804–1811. [Google Scholar] [CrossRef]
  9. Liu, K.; Shi, W.; Zhang, H. A fuzzy topology-based maximum likelihood classification. ISPRS J. Photogramm. 2011, 66, 103–114. [Google Scholar] [CrossRef]
  10. Ying, L.; Bo, C. An improved k-nearest neighbor algorithm and its application to high resolution remote sensing image classification. In Proceedings of the 2009 17th International Conference on Geoinformatics, Fairfax, VA, USA, 12–14 August 2009; pp. 1–4. [Google Scholar]
  11. Hosseini, R.S.; Homayouni, S.; Safari, R. Modified algorithm based on support vector machines for classification of hyperspectral images in a similarity space. J. Appl. Rem. Sens. 2012, 6, 1–21. [Google Scholar] [CrossRef]
  12. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. 2010, 65, 2–16. [Google Scholar] [CrossRef] [Green Version]
  13. Zhang, P.; Lv, Z.; Shi, W. Object-Based Spatial Feature for Classification of Very High Resolution Remote Sensing Images. IEEE Geosci. Remote Sens. 2013, 10, 1572–1576. [Google Scholar] [CrossRef]
  14. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral Image Classification Using Dictionary-Based Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3973–3985. [Google Scholar] [CrossRef]
  15. Xue, Z.; Du, P.; Li, J.; Su, H. Simultaneous Sparse Graph Embedding for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6114–6133. [Google Scholar] [CrossRef]
  16. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  17. Xia, J.; Du, P.; He, X.; Chanussot, J. Hyperspectral Remote Sensing Image Classification Based on Rotation Forest. IEEE Geosci. Remote Sens. 2014, 11, 239–243. [Google Scholar] [CrossRef]
  18. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36. [Google Scholar] [CrossRef] [Green Version]
  19. Carmel, Y. Controlling data uncertainty via aggregation in remotely sensed data. IEEE Geosci. Remote Sens. 2004, 1, 39–41. [Google Scholar] [CrossRef]
  20. Choi, M.; Lee, H.; Lee, S. Weighted SVM with classification uncertainty for small training samples. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 4438–4442. [Google Scholar]
  21. Li, W.; Zhang, C. A Markov Chain Geostatistical Framework for Land-Cover Classification With Uncertainty Assessment Based on Expert-Interpreted Pixels From Remotely Sensed Imagery. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2983–2992. [Google Scholar] [CrossRef]
  22. Giacco, F.; Thiel, C.; Pugliese, L.; Scarpetta, S.; Marinaro, M. Uncertainty Analysis for the Classification of Multispectral Satellite Images Using SVMs and SOMs. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3769–3779. [Google Scholar] [CrossRef] [Green Version]
  23. Feizizadeh, B. A Novel Approach of Fuzzy Dempster–Shafer Theory for Spatial Uncertainty Analysis and Accuracy Assessment of Object-Based Image Classification. IEEE Geosci. Remote Sens. 2018, 15, 18–22. [Google Scholar] [CrossRef]
  24. Shi, W.; Zhang, X.; Hao, M.; Shao, P.; Cai, L.; Lyu, X. Validation of Land Cover Products Using Reliability Evaluation Methods. Remote Sens. 2015, 7. [Google Scholar] [CrossRef]
  25. Wilson, R.; Granlund, G.H. The Uncertainty Principle in Image Processing. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 488–498, 758–767. [Google Scholar] [CrossRef]
  26. Gillmann, C.; Arbelaez, P.; Hernandez, T.J.; Hagen, H.; Wischgoll, T. An Uncertainty-Aware Visual System for Image Pre-Processing. J. Imaging 2018, 4. [Google Scholar] [CrossRef]
  27. Gillmann, C.; Post, T.; Wischgoll, T.; Hagen, H.; Maciejewski, R. Hierarchical Image Semantics using Probabilistic Path Propagations for Biomedical Research. IEEE Comput. Graph. Appl. 2019, 1. [Google Scholar] [CrossRef] [PubMed]
  28. Shi, W.Z. Principles of Modelling Uncertainties in Spatial Data and Spatial Analyses; CRC Press: Boca Raton, FL, USA, 2008. [Google Scholar]
  29. Shi, W.; Liu, K.; Huang, C. A Fuzzy-Topology-Based Area Object Extraction Method. IEEE Trans. Geosci. Remote Sens. 2010, 48, 147–154. [Google Scholar] [CrossRef]
  30. Dumoulin, V.; Visin, F. A guide to convolution arithmetic for deep learning. arXiv. 2016. arXiv:1603.07285. Available online: https://arxiv.org/abs/1603.07285 (accessed on 6 May 2019).
  31. Yigit, H. ABC-based distance-weighted kNN algorithm. J. Exp. Theor. Artif. Intell. 2015, 27, 189–198. [Google Scholar] [CrossRef]
  32. Wilson, C.A.; Payton, M.E. Modelling the coefficient of variation in factorial experiments. Commun. Stat.-Theory Methods 2002, 31, 463–476. [Google Scholar] [CrossRef]
  33. Jain, S.; Shukla, S.; Wadhvani, R. Dynamic selection of normalization techniques using data complexity measures. Expert Syst. Appl. 2018, 106, 252–262. [Google Scholar] [CrossRef]
  34. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  35. Stigler, S.M. Francis Galton's Account of the Invention of Correlation. Statist. Sci. 1989, 4, 73–79. [Google Scholar] [CrossRef]
  36. Huang, X.; Lu, Q.; Zhang, L.; Plaza, A. New Postprocessing Methods for Remote Sensing Image Classification: A Systematic Study. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7140–7159. [Google Scholar] [CrossRef]
  37. Cui, G.; Lv, Z.; Li, G.; Atli Benediktsson, J.; Lu, Y. Refining Land Cover Classification Maps Based on Dual-Adaptive Majority Voting Strategy for Very High Resolution Remote Sensing Images. Remote Sens. 2018, 10. [Google Scholar] [CrossRef]
  38. Tobler, W.R. A Computer Movie Simulating Urban Growth in the Detroit Region. Economic Geography 1970, 46, 234–240. [Google Scholar] [CrossRef]
  39. Lv, Z.; Zhang, P.; Atli Benediktsson, J. Automatic Object-Oriented, Spectral-Spatial Feature Extraction Driven by Tobler’s First Law of Geography for Very High Resolution Aerial Imagery Classification. Remote Sens. 2017, 9. [Google Scholar] [CrossRef]
  40. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 623–656. [Google Scholar] [CrossRef]
  41. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
  42. Rottensteiner, F.; Sohn, G.; Jung, J.; Gerke, M.; Baillard, C.; Benitez, S.; Breitkopf, U. The ISPRS benchmark on urban object classification and 3D building reconstruction. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. 2012, I–3, 293–298. [Google Scholar] [CrossRef]
Figure 1. Distribution of different types of pixels in the ground object (A: pixels far from the boundary of the object, B: pixels near the boundary of the object, and C: mixed pixels at the boundary). The difference in pixel color represents the difference between the features of the pixels.
Figure 2. k × k distance-weighted convolution template W (k = 3).
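For readers who want to experiment with such a template, the following is a minimal sketch. It assumes weights that decay with the inverse of the Euclidean distance from the template center and are normalized to sum to one; the function name, weighting function, and normalization are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def distance_weighted_template(k=3):
    """Build a k x k template whose weights fall off with Euclidean
    distance from the center (assumed inverse-distance weighting)."""
    assert k % 2 == 1, "template size must be odd"
    c = k // 2
    w = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            d = np.hypot(i - c, j - c)
            # center cell gets weight 1; a neighbor at distance d gets 1/d
            w[i, j] = 1.0 if d == 0 else 1.0 / d
    return w / w.sum()  # normalize so the weights sum to 1

print(distance_weighted_template(3))
```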
Figure 3. Two different objects in the image: (a) object A with small internal variation and (b) object B with large internal variation (the difference in pixel color represents the difference between the features of the pixels).
Figure 4. Flow chart of the two verification schemes (LayerN represents the probability map of the N-th class in the soft classification results, and "Hardening" represents the conversion of the soft classification results into hard labels based on the principle of maximum membership).
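The "Hardening" step in Figure 4 reduces the N class-probability layers to a single label map by the principle of maximum membership, i.e., a per-pixel argmax across layers. A minimal sketch (the array layout and tie-breaking rule are assumptions):

```python
import numpy as np

def harden(prob_layers):
    """Harden soft classification results by maximum membership.

    prob_layers: array of shape (N, H, W), where prob_layers[n] is the
    probability map of the n-th class (Layer1 ... LayerN in Figure 4).
    Returns an (H, W) map of class indices (ties broken by lowest index).
    """
    return np.argmax(prob_layers, axis=0)

# usage: three class-probability maps for a 2 x 2 image
layers = np.random.dirichlet(np.ones(3), size=(2, 2)).transpose(2, 0, 1)
print(harden(layers))
```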
Figure 5. Vaihingen image: (a) false color original Vaihingen image and (b) ground truth reference image.
Figure 6. Beijing2 image: (a) true color original Beijing2 image and (b) ground truth reference image.
Figure 7. Potsdam image: (a) true color original Potsdam image and (b) ground truth reference image.
Figure 8. Calculation results of image uncertainty: (a) Vaihingen image, (b) Beijing2 image, and (c) Potsdam image.
Figure 9. Classification maps of Vaihingen image: (a) OCM, (b) RCM_SU, (c) RCM_SDU, and (d) RCM_CU.
Figure 10. Classification maps of Beijing2 image: (a) OCM, (b) RCM_SU, (c) RCM_SDU, and (d) RCM_CU.
Figure 11. Classification maps of Potsdam image: (a) OCM, (b) RCM_SU, (c) RCM_SDU, and (d) RCM_CU.
Figure 12. Scatter plots and fitted curves of the levels of uncertainty and classification error rates: (a) Vaihingen image, (b) Beijing2 image, and (c) Potsdam image (x-axis: levels of uncertainty; y-axis: classification error rates).
Table 1. Related parameters in image segmentation.

Experimental Data | Segment Scale Level | Merge Level
Vaihingen | 35 | 60
Beijing2 | 35 | 65
Potsdam | 40 | 80
Table 2. Fitted curve equations and correlation coefficients.

Dataset | Equation of the Fitted Curve | Correlation Coefficient R
Vaihingen | y = 0.0243x + 0.1454 | 0.9705
Beijing2 | y = 0.0471x + 0.0811 | 0.9852
Potsdam | y = 0.0311x − 0.0157 | 0.9877
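Each row of Table 2 is a least-squares line fitted to the scatter of uncertainty level against classification error rate (Figure 12), with R the Pearson correlation coefficient [35]. The sketch below shows how such a fit and R can be computed; the sample data are illustrative placeholders generated near the Vaihingen fit, not the paper's measurements.

```python
import numpy as np

# illustrative placeholders: uncertainty level vs. classification error rate
x = np.arange(1.0, 9.0)
y = 0.0243 * x + 0.1454 + np.random.normal(0, 0.005, x.size)

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares line y = ax + b
r = np.corrcoef(x, y)[0, 1]                 # Pearson correlation coefficient R
print(f"y = {slope:.4f}x + {intercept:.4f}, R = {r:.4f}")
```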
Table 3. Accuracy evaluation results: OCM represents the original classification map; RCM_SU, RCM_SDU, and RCM_CU represent the final reliable classification maps (RCMs) obtained by using SU, SDU, and CU for uncertainty control, respectively. OA: overall accuracy; KC: kappa coefficient.

Classification Results | OA (Vaihingen) | OA (Beijing2) | OA (Potsdam) | KC (Vaihingen) | KC (Beijing2) | KC (Potsdam)
OCM | 79.0422% | 78.4224% | 95.0046% | 0.7144 | 0.7128 | 0.9193
RCM_SU | 78.3463% | 76.4047% | 95.2963% | 0.7047 | 0.6858 | 0.9238
RCM_SDU | 79.6425% | 78.8342% | 95.4075% | 0.7224 | 0.7167 | 0.9256
RCM_CU | 79.6563% | 78.9359% | 95.6694% | 0.7225 | 0.7178 | 0.9298
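OA and KC in Table 3 are the standard accuracy measures derived from the confusion matrix of each classification map against the ground truth reference [41]. A minimal sketch of both measures:

```python
import numpy as np

def oa_and_kappa(cm):
    """Overall accuracy and kappa coefficient from a confusion matrix cm,
    where cm[i, j] counts pixels of true class i labeled as class j."""
    n = cm.sum()
    po = np.trace(cm) / n                # observed agreement (OA)
    pe = (cm.sum(0) @ cm.sum(1)) / n**2  # agreement expected by chance
    return po, (po - pe) / (1 - pe)      # OA, kappa coefficient (KC)

# usage with a small illustrative 3-class confusion matrix
cm = np.array([[50, 2, 3], [4, 40, 1], [2, 1, 45]])
oa, kc = oa_and_kappa(cm)
print(f"OA = {oa:.4%}, KC = {kc:.4f}")
```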
