Article

Integrating Semi-Supervised Learning with an Expert System for Vegetation Cover Classification Using Sentinel-2 and RapidEye Data

by Nasir Farsad Layegh 1,*, Roshanak Darvishzadeh 1, Andrew K. Skidmore 1, Claudio Persello 1 and Nina Krüger 2

1 Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, 7500 AE Enschede, The Netherlands
2 M.O.S.S. Computer Grafik Systeme GmbH, Hohenbrunner Weg 13, 82024 Munich, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(15), 3605; https://doi.org/10.3390/rs14153605
Submission received: 12 June 2022 / Revised: 13 July 2022 / Accepted: 21 July 2022 / Published: 27 July 2022
(This article belongs to the Special Issue Remote Sensing of Vegetation Function and Traits)

Abstract:
In complex classification tasks, such as the classification of heterogeneous vegetation covers, the high similarity between classes can confuse the classification algorithm when assigning the correct class labels to unlabelled samples. To overcome this problem, this study aimed to develop a classification method by integrating graph-based semi-supervised learning (SSL) and an expert system (ES). The proposed method was applied to vegetation cover classification in a wetland in the Netherlands using Sentinel-2 and RapidEye imagery. Our method consisted of three main steps: object-based image analysis (OBIA), integration of SSL and an ES (SSLES), and finally, random forest classification. The generated image objects and the related features were used to construct the graph in SSL. Then, an independently developed and trained ES was used in the labelling stage of SSL to reduce the uncertainty of the process, before the final classification. Different spectral band combinations of Sentinel-2 were then considered to improve the vegetation classification. Our results show that integrating SSL and an ES can result in significantly higher classification accuracy (83.6%) compared to a supervised classifier (64.9%), SSL alone (71.8%), and ES alone (69.5%). Moreover, utilisation of all Sentinel-2 red-edge spectral band combinations yielded the highest classification accuracy (overall accuracy of 83.6% with SSLES) compared to the inclusion of other band combinations. The results of this study indicate that the utilisation of an ES in the labelling process of SSL improves the reliability of the process and provides robust performance for the classification of vegetation cover.


1. Introduction

Accurate mapping of vegetation cover in an ecosystem can help to initiate protection and restoration programmes efficiently [1]. It is, therefore, necessary to acquire accurate and up-to-date information about the status of an ecosystem's vegetation cover through regular monitoring. Traditional field-inventory methods for monitoring vegetation cover are usually expensive and time-consuming [2]. A viable alternative is the use of satellite remote sensing data, which offer advantages such as large area coverage, ongoing data collection, and cost-effectiveness for monitoring and mapping purposes [3,4]. The advent of remote sensing technologies, with high-resolution multispectral satellite sensors, has provided new opportunities to monitor vegetation cover at different spatial and temporal scales. For instance, Sentinel-2 MSI, covering a wide spectral range (400–2400 nm) including three red-edge bands, may allow more efficient discrimination of different vegetation types.
A common approach to extracting information from satellite data, to distinguish different vegetation types, is using image classification algorithms [5,6]. Usually, there are three common challenges involved in image classification that could affect its accuracy: (1) collecting a sufficient number of training samples, (2) creating a balanced training and test set, and (3) fine-tuning the algorithm parameters to obtain the optimum performance [7,8,9]. A relevant solution to these challenges is the introduction of the semi-supervised learning (SSL) technique [10], which uses relatively few labelled samples and a large number of unlabelled data to train a model [11].
The existing paradigms of SSL for the classification of remote sensing data can be divided into four major categories: (1) generative mixture models, such as expectation–maximisation algorithms [12,13], (2) low-density separation algorithms, such as transductive support vector machines (TSVMs) [14,15], (3) self-learning methods [16,17,18,19], and (4) graph-based methods [20,21,22,23]. Among the SSL methods, graph-based approaches have recently received significant attention due to their ability to provide relatively high classification accuracy while retaining computational simplicity [24,25,26,27]. While graph-based algorithms can improve classification performance by using the distribution of unlabelled samples, they have some limitations [9]. One of these limitations occurs in complex classification tasks, such as identifying different classes in heterogeneous vegetation covers. In such a case, samples from the same vegetation class may show low similarity (i.e., high intra-class variability), while two samples from two different vegetation classes may show high similarity (i.e., low inter-class variability). This "similarity" problem can confuse the graph-based algorithm, and the semi-labelled samples may therefore receive incorrect labels. In this case, unlabelled samples can be detrimental to the graph-based algorithm, as they may degrade accuracy by misguiding the classifier [11]. A common approach to tackling the similarity problem is to use non-parametric classifiers, since they make no underlying assumptions about the distribution of the data [7]. However, these classifiers require a representative amount of training data to estimate the mapping function, and they are also subject to overfitting. Consequently, in studies such as vegetation cover classification, where there might be an imbalanced distribution of features and training samples, these classifiers would underperform [28].
To help solve this problem of SSL, this study aimed to use expert knowledge, within an independently developed and trained expert system (ES), in the labelling process of a graph-based algorithm. An ES can help by refining the semi-labelled samples. In this context, expert knowledge is defined as the experience and existing knowledge of the expert in the specific domains of study, technical practices, and prior information on the study area [29,30]. The developed ES should be able to classify the unlabelled samples using expert knowledge, independently of SSL. This ability allows the ES to help SSL assign the most certain class label to the unlabelled samples by filtering out samples with less certain labels.
Motivated by the above insights, in this study, a novel classification approach was proposed for the classification of satellite images by integrating graph-based SSL and an ES (SSLES). The main idea of the proposed approach was to construct a graph in SSL, based on image features, and use an ES in the labelling process of SSL to assign the most probable class labels to the selected unlabelled samples and then perform the classification using a standard supervised classifier. The study specifically aimed to address two objectives, as follows: (1) to investigate the performance of SSLES for vegetation cover classification and (2) to investigate the potential of Sentinel-2 spectral data for vegetation cover classification.

2. Study Area and Materials

2.1. Study Area

The study area was Schiermonnikoog Island in the Netherlands, located between 53°27′20″N–53°30′40″N latitude and 06°06′35″E–06°20′56″E longitude, with an area of 199.1 km2 (Figure 1). The vegetation cover on the south and south-east shore of the island has adapted to the regular inundation of seawater and has formed a salt marsh [31].
In this study, vegetation species of the island were categorised into 10 functional groups for classification, based on the reference vegetation map [32], namely: high matted grass, low matted grass, agriculture, forest, green beach, tussock grass, high shrub, herbs, low salix shrub, and low hippophae shrub. The natural vegetation cover has large spatial and temporal variability due to the dynamic influences of the tide, wind, and grazing [33,34]. This area was chosen to test the proposed classification methodology as it is representative of a diverse and mixed vegetation cover.

2.2. Materials

2.2.1. Sentinel-2 Data

The main satellite imagery used in this study was the standard Sentinel-2 Level-1C product, which is in UTM/WGS84 projection with per-pixel radiometric measurements provided as top-of-atmosphere (TOA) reflectance [35]. The Sentinel-2 image of the study area was acquired on 17 July 2016 (relative orbit R008) and downloaded from the ESA Sentinel-2 Pre-operation Hub (https://scihub.copernicus.eu/, accessed on 30 July 2016). A July image was chosen to obtain a cloud-free scene with vigorously growing vegetation. The atmospheric correction of the image was performed using Sen2Cor software [35], and the top-of-canopy (TOC) reflectance was calculated for further analysis.
Sentinel-2 carries a multispectral sensor with 13 bands spanning 443 to 2190 nm at three different spatial resolutions, as follows:
  • 10 m resolution bands: blue (490 nm), green (560 nm), red (665 nm), and near-infrared (842 nm).
  • 20 m resolution bands: four red-edge/NIR bands with central wavelengths at 705 nm, 740 nm, 783 nm, and 865 nm, and shortwave infrared-1 and -2 (1610 nm and 2190 nm).
  • 60 m resolution bands: coastal aerosol (443 nm), water vapour (945 nm), and cirrus (1375 nm).
In this study, all the Sentinel-2 bands were resampled to 5 m resolution so that they matched the spatial resolution of the RapidEye image.
To achieve a higher classification accuracy and assess the capability of Sentinel-2 data in classifying the vegetation types, its spectral bands were combined into different groups in order to find the most informative band combination for vegetation classification. Based on previous studies, the most important regions of the spectrum for studying vegetation cover are the red-edge, shortwave infrared, and red–infrared regions [4,36,37,38,39,40]. Consequently, six groups of band combinations were considered to classify vegetation cover, as follows:
  • Group 1: All spectral bands;
  • Group 2: Red and infrared bands;
  • Group 3: All shortwave infrared bands;
  • Group 4: All red-edge bands;
  • Group 5: Red, infrared, and red-edge bands;
  • Group 6: Red-edge and shortwave infrared bands.

2.2.2. RapidEye Data

In this study, a RapidEye image of Schiermonnikoog Island, acquired on 18 July 2015, was also used. The pre-processed data were obtained at level 3A, meaning radiometric and geometric corrections, as well as geo-referencing, had been applied. The image covers 25 km × 25 km with an orthorectified pixel size of 5 m × 5 m. Due to clear weather conditions during the image acquisition, no further atmospheric correction was applied. The Sentinel-2 and RapidEye images were chosen from a similar time of year so that the vegetation cover in both would be as alike as possible. The RapidEye image was used for segmentation only; the features were extracted from the Sentinel-2 data.

2.2.3. Reference Data and Sampling

The reference data used in this study included field observations of the dominant vegetation species for 30 vegetation plots (30 m × 30 m), collected in July 2015, and a vegetation map from 2010 [32]. This map was produced from experts' visual interpretation of aerial photographs (1:10,000) combined with extensive field inventory, and it included the same vegetation classes as this study.
To select the training and test samples, stratified random sampling was implemented, where each vegetation class was considered a stratum [41]. The resulting training set comprised 650 samples for the 10 vegetation classes, extracted from the vegetation plots. Table 1 reports the number of samples per stratum. In addition, 434 further samples were identified as an independent test set (2/3 of the number of training samples). In the context of this study, a sample refers to an image object (resulting from image segmentation) representing a vegetation patch on the ground.

2.2.4. Knowledge Sources

In this study, three main sources of knowledge were identified to be used as input to build the ES knowledge base. These sources were different and separate from the reference maps used for the sampling process:
  • A reference vegetation map of the study area, generated in 2010 [32]. As this map was generated with experts’ visual interpretation of aerial photographs and extensive fieldwork, it contained some level of experts’ knowledge.
  • Ancillary data, including field records of dominant vegetation types for 30 vegetation plots, NDVI data from the Sentinel-2 image, and a digital elevation model (DEM) of the island (produced from laser altimetry by the Dutch Ministry of Public Works, Rijkswaterstaat) used to generate height, slope, aspect, etc.
  • Published resources about vegetation cover in Schiermonnikoog and its ecology [31,42,43,44].

3. Methods

The architecture of the proposed classification approach comprised three parts: object-based image analysis (OBIA), integrated semi-supervised learning and expert system (SSLES), and classification. Using OBIA, the satellite image was segmented to generate image objects and features. Using SSLES, the number of training samples was increased by labelling a set of the most informative unlabelled samples. In the final step, classification was performed on the datasets using a standard supervised classifier.

3.1. Object-Based Image Analysis (OBIA)

The mean shift segmentation method was used to generate image objects [45]. This algorithm requires two parameters to be tuned to obtain an optimal segmentation result:
  • Spatial radius hs (the kernel bandwidth in the spatial domain);
  • Range radius hr (the kernel bandwidth in the spectral domain).
The segmentation was performed on the 5 m × 5 m RapidEye image, due to its finer spatial resolution compared to Sentinel-2, which could result in a higher spatial accuracy of the image objects [46]. For a quantitative evaluation of the segmentation results, the method proposed in [47,48] was used, which measures both topological and geometric similarity between the segmented and the reference objects. The method relies on the ratio of the intersected area of the segments and the reference objects; depending on the size of the overlap for each segment and object, over- and under-segmentation indices are calculated.
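To make the evaluation concrete, the following is a minimal sketch of one common formulation of these indices, in the spirit of [47]; the polygon inputs, function name, and the exact index definitions are illustrative assumptions, not the authors' implementation.

```python
# A hedged sketch of over-/under-segmentation indices based on the ratio of
# the intersected area of a segment and a reference object (cf. [47,48]).
from shapely.geometry import Polygon

def segmentation_indices(segment: Polygon, reference: Polygon):
    """Return (over_segmentation, under_segmentation) for one segment/reference pair."""
    inter = segment.intersection(reference).area
    over = 1.0 - inter / reference.area   # segment covers only part of the reference
    under = 1.0 - inter / segment.area    # segment spills beyond the reference
    return over, under

# Example: a segment covering half of a unit-square reference object.
ref = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
seg = Polygon([(0, 0), (0.5, 0), (0.5, 1), (0, 1)])
print(segmentation_indices(seg, ref))  # (0.5, 0.0): over-segmented, not under-segmented
```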
Using the segmented map and the reference vegetation map, labelled and unlabelled objects were generated. For this, the segmented map was overlaid with the reference samples’ layer (obtained from the sampling process that contained training and test samples). Any image object that contained the centroid of a reference sample and had more than 50% overlap with it was treated as a labelled object. The rest were considered unlabelled objects.
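As an illustration, the labelling rule just described could be expressed as below; the shapely-based geometry handling is an assumption, as is the reading of the 50% overlap as being measured relative to the reference sample.

```python
# A sketch of the rule for generating labelled objects: an image object is
# treated as labelled when it contains the centroid of a reference sample
# and overlaps it by more than 50% (overlap here relative to the reference).
from shapely.geometry import Polygon

def assign_label(image_object: Polygon, reference: Polygon, ref_label: str):
    """Return the reference label if the object qualifies, else None (unlabelled)."""
    contains_centroid = image_object.contains(reference.centroid)
    overlap_ratio = image_object.intersection(reference).area / reference.area
    return ref_label if contains_centroid and overlap_ratio > 0.5 else None
```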
Sentinel-2 data were resampled to 5 m resolution to match the spatial resolution of the RapidEye data, so that the features could be extracted using the segmentation results. For mapping vegetation, three categories of features were considered, which have been recognised as important in previous studies [37,49,50,51,52,53]: (a) a set of spectral features consisting of the mean, standard deviation, median, and minimum and maximum values of pixels within an image object, (b) a set of textural features including GLCM (grey-level co-occurrence matrix) and GLDV (grey-level difference vector), and (c) a set of geometrical features representing the area and perimeter of the image objects.
The final results of OBIA are image objects with a corresponding informative set of image features.
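The sketch below illustrates the per-object feature extraction, covering the spectral statistics and a few GLCM texture properties (GLDV and the geometric features are omitted for brevity); it assumes scikit-image >= 0.19 and an 8-bit single-band pixel patch per object, and is not the authors' MATLAB implementation.

```python
# A minimal sketch of per-object spectral and GLCM texture features.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def object_features(patch: np.ndarray) -> dict:
    """patch: 2D uint8 array of pixels belonging to one image object."""
    feats = {
        "mean": patch.mean(), "std": patch.std(),
        "median": np.median(patch), "min": patch.min(), "max": patch.max(),
    }
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        feats[f"glcm_{prop}"] = graycoprops(glcm, prop)[0, 0]
    return feats

patch = (np.random.rand(12, 12) * 255).astype(np.uint8)  # stand-in for one object
print(object_features(patch))
```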

3.2. Semi-Supervised Learning

The graph-based semi-supervised learning (SSL) method was used to increase the number of training samples before classification. This method constructs a graph G = (V, E) connecting similar samples, where V consists of N = L + U samples, with L and U the numbers of labelled and unlabelled samples, respectively. The edges E are typically represented by a symmetric similarity weight matrix $W \in \mathbb{R}^{N \times N}$ [26]. The k-nearest neighbour (KNN) approach was used to construct the weight matrix, denoted $W^W$:
$$
W_{i,j}^{W} =
\begin{cases}
\exp\!\left(-\dfrac{\lVert x_i - x_j \rVert^2}{2\sigma^2}\right) & \text{if } x_j \in NB_K^W(x_i)\\
0 & \text{otherwise}
\end{cases}
\qquad i,j \in \{1, 2, \ldots, l+u\}
\tag{1}
$$
where $\sigma$ is the Gaussian kernel bandwidth, $\lVert x_i - x_j \rVert$ is the distance on which the similarity between two samples is measured, and $NB_K^W(x_i)$ is the set of $K$ nearest neighbours of sample $x_i$. In this study, the similarity measure between two samples was based on the image features obtained from OBIA.
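For illustration, Equation (1) can be implemented directly on the OBIA feature matrix; the sketch below uses the tuned values reported later in Section 4.5.1 (k = 16, σ = 0.2) as defaults, and its function and variable names are illustrative.

```python
# A direct implementation of Equation (1): Gaussian weights restricted to
# each sample's K nearest neighbours, symmetrised afterwards.
import numpy as np

def knn_gaussian_weights(X: np.ndarray, k: int = 16, sigma: float = 0.2):
    """X: (n_samples, n_features) matrix of OBIA image features."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # squared distances
    W = np.zeros_like(d2)
    for i in range(X.shape[0]):
        nbrs = np.argsort(d2[i])[1:k + 1]        # K nearest neighbours (skip self)
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    return np.maximum(W, W.T)                    # enforce a symmetric weight matrix
```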
To construct the graph and assign class labels to the unlabelled samples, the energy function proposed in [54] was minimised:
$$
\min_{f} \sum_{i \in \{1,\ldots,l\}} (f_i - y_i)^2 + \frac{1}{2} \sum_{i,j \in \{1,\ldots,l+u\}} W_{i,j}^{W} (f_i - f_j)^2 = (f_L - y_L)^T (f_L - y_L) + \frac{1}{2} f^T \Delta f
\tag{2}
$$
where $f = (f_L, f_U)^T$ is composed of $f_L$ and $f_U$, the predicted class labels of the labelled and unlabelled samples, respectively; $y_i$ is the class label vector of sample $i$; and $\Delta$ is the graph Laplacian matrix, $\Delta = D - W$, with $D$ the diagonal degree matrix given by $D_{ii} = \sum_j W_{ij}$.
The label propagation technique was employed at the end to propagate the information through the graph to the unlabelled samples [11,55]. For this purpose, the weights of the edges in the graph were computed according to Equation (1), and the probability matrix was then estimated as $P = D^{-1} W^W$. The edge with the highest probability determines the label of the unlabelled sample.
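A minimal sketch of this propagation step follows, assuming the weight matrix from the previous sketch; the iterative clamped update is one standard realisation of label propagation [11,55], not necessarily the authors' exact implementation.

```python
# Label propagation sketch: apply the row-stochastic matrix P = D^{-1} W^W
# repeatedly, clamping labelled rows to their known classes.
import numpy as np

def propagate_labels(W, labelled, n_classes, n_iter=200):
    """W: (n, n) symmetric weight matrix; labelled: dict {index: class_index}."""
    P = W / (W.sum(axis=1, keepdims=True) + 1e-12)  # P = D^{-1} W
    F = np.zeros((W.shape[0], n_classes))
    for i, c in labelled.items():
        F[i, c] = 1.0
    for _ in range(n_iter):
        F = P @ F
        for i, c in labelled.items():               # clamp the labelled samples
            F[i] = 0.0
            F[i, c] = 1.0
    return F.argmax(axis=1)                         # highest-probability class
```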

3.3. Expert System

The expert system (ES) approach used in this study was described in detail in [56]. The ES was developed here to answer the question "Which vegetation type is likely to occur in a given image object?" for the samples that obtained labels from SSL.
Bayes' theorem was used in the ES to compute the probability that a hypothesis (Ha) holds for an image object given a piece of evidence (Eb), i.e.,
$$
P(H_a \mid E_b) = \frac{P(E_b \mid H_a)\, P(H_a)}{P(E_b)}
\tag{3}
$$
where $P(E_b \mid H_a)$ is the a priori conditional probability of observing evidence $E_b$ (e.g., a mean slope of less than 0.1°) given the hypothesis $H_a$ that class $C_i$ (e.g., "high shrub") occurs in a specific image object; $P(H_a)$ is the probability of the hypothesis $H_a$ that class $C_i$ occurs in an object; and $P(E_b)$ is the probability that an object exhibits the item of evidence $E_b$. The following steps were taken to compute the probability rules (a minimal sketch follows the list):
  • Generate a histogram population of each feature layer in the knowledge base;
  • Divide each histogram into 10 quantiles, representing the frequency of the occurrence of each class at each percentile of the feature layer;
  • Normalise the frequency values by fitting a normal distribution.
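As a rough illustration of these steps for a single feature layer, the sketch below bins feature values into 10 quantiles and tabulates per-class frequencies. Note that the paper normalises by fitting a normal distribution, whereas simple relative frequencies stand in here; all names are illustrative.

```python
# A hedged sketch of the rule-building steps: quantile histogram of one
# feature layer, with per-class frequencies approximating P(E_b | H_a).
import numpy as np

def quantile_rules(values: np.ndarray, labels: np.ndarray, classes: list):
    """Return a (n_classes, 10) table approximating P(quantile bin | class)."""
    edges = np.quantile(values, np.linspace(0.0, 1.0, 11))
    bins = np.digitize(values, edges[1:-1])           # bin indices 0..9
    table = np.zeros((len(classes), 10))
    for ci, c in enumerate(classes):
        counts = np.bincount(bins[labels == c], minlength=10)
        table[ci] = counts / max(counts.sum(), 1)     # normalise per class
    return table
```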

3.4. SSLES Algorithm

Algorithm 1 lists the inputs, output, and steps of SSLES. In this algorithm, which was fully implemented in MATLAB R2015b, the inputs are image objects with their respective sets of features. Steps 1–4 belong to SSL, which generates a graph based on image features and propagates class labels to the selected unlabelled samples. In step 5, the developed ES performs an independent class label prediction on the samples semi-labelled by SSL in step 4. Finally, if both the ES and the SSL agree on the same class label, the sample is added to the training set; otherwise, it is returned to the unlabelled set. The flowchart of the algorithm is presented in Figure 2.
Algorithm 1: SSLES
Inputs:
    • A set of labelled objects OL
    • A set of unlabelled objects OU
    • Extracted features fi for each object
Outputs:
    • New labelled samples to be added to the original training set
Steps:
For every oi ∈ OL, i = 1:L
    • Measure the similarity of all labelled and unlabelled samples in the image feature space
    • Sort the similarity values and choose the first K unlabelled samples with the highest similarity values
    • Construct the graph with the KNN method, then calculate the matrix P
    • Predict the class label of the unlabelled samples using the label propagation framework
    • Predict the class label of the unlabelled samples using the developed ES
    • If both approaches have predicted the same class label for a sample,
        then add the object to the training set,
        otherwise, let it remain with the unlabelled object set
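The agreement rule at the heart of Algorithm 1 (step 6) is simple enough to state directly in code; the sketch below mirrors it, with dictionaries of predictions as an assumed data structure.

```python
# A sketch of the SSLES agreement filter: a semi-labelled sample is promoted
# to the training set only when graph-based SSL and the ES predict the same
# class; otherwise it returns to the unlabelled pool.
def agreement_filter(ssl_labels: dict, es_labels: dict):
    """Both inputs map object_id -> predicted class; return accepted / rejected ids."""
    accepted = {o: c for o, c in ssl_labels.items() if es_labels.get(o) == c}
    rejected = [o for o in ssl_labels if o not in accepted]
    return accepted, rejected

# Example: only object 2 is accepted, since the two predictions agree there.
ssl = {1: "Forest", 2: "Herbs", 3: "High shrub"}
es = {1: "Herbs", 2: "Herbs", 3: "Green beach"}
print(agreement_filter(ssl, es))  # ({2: 'Herbs'}, [1, 3])
```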

3.5. Classification and Evaluation

The final step is classification, where a standard supervised classifier is implemented. This step is performed using the random forest (RF) classifier [57,58]. This classifier has two parameters that needed to be set:
  • The number of classification trees, i.e., the number of bootstrap iterations (ntree);
  • The number of input variables used at each node (mtry).
Several studies have demonstrated that the default value for mtry can provide satisfactory results [59,60,61]. Therefore, this parameter is set to the default value, i.e., the square root of the total number of input features.
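The study's classifier was run in MATLAB; as a point of reference, an equivalent scikit-learn setup would look as follows, with ntree = 80 (the value tuned in Section 4.5.1) and mtry left at its default square-root setting. The array names are placeholders for the SSLES outputs.

```python
# A hedged scikit-learn stand-in for the RF configuration described above.
from sklearn.ensemble import RandomForestClassifier

# n_estimators = ntree; max_features="sqrt" = default mtry (sqrt of n features).
rf = RandomForestClassifier(n_estimators=80, max_features="sqrt", random_state=0)
# rf.fit(X_train, y_train)      # X_train, y_train: extended SSLES training set
# y_pred = rf.predict(X_test)   # evaluated against the independent test set
```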
In this study, the RF classifier was used in three different scenarios. In the first scenario, the training set generated from SSLES was used to train the classifier; in the second scenario, the training set generated from SSL was used to train the classifier; and in the third scenario, the classifier was trained with the original training set.
For the validation of the classification results, the accuracy assessment based on the error matrix was conducted [62]. The evaluation was performed using the same test samples extracted from the reference data, and the results were evaluated in terms of overall accuracy (OA) and Cohen’s kappa coefficient. To assess the statistical significance of the difference between the obtained accuracies, McNemar’s test was used with a 95% confidence level and 1 degree of freedom.
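A sketch of the significance test on two classifiers' per-sample correctness is given below; statsmodels' `mcnemar` with the chi-square approximation gives the 1-degree-of-freedom test used here. The contingency-table construction is the standard one, not taken from the paper.

```python
# McNemar's test sketch (chi-square approximation, 1 degree of freedom);
# assumes numpy arrays of equal length for labels and predictions.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_compare(y_true, pred_a, pred_b):
    a_ok, b_ok = (pred_a == y_true), (pred_b == y_true)
    table = [[int(np.sum(a_ok & b_ok)), int(np.sum(a_ok & ~b_ok))],
             [int(np.sum(~a_ok & b_ok)), int(np.sum(~a_ok & ~b_ok))]]
    res = mcnemar(table, exact=False, correction=True)
    return res.statistic, res.pvalue  # significant at 95% level if p < 0.05
```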

4. Results

4.1. Object-Based Image Analysis

To start the image segmentation process, the parameters were tuned and evaluated: hs and hr were iterated over the range [1, 10]. Using sensitivity analysis, hs = 5 and hr = 7 were chosen as the optimum values. As a result of image segmentation, a total of 5230 image objects were delineated, with a mean size of 30 pixels. Figure 3 illustrates the final segmented objects on a false colour composite of the Sentinel-2 image of the study area.
Next, the image objects were divided into three datasets: training, test (labelled), and unlabelled objects. The number of image objects in each dataset is listed in Table 2.

4.2. Semi-Supervised Learning

To illustrate the idea of label propagation, Figure 4 shows a graph representing a set of labelled and unlabelled samples. The labels are propagated from the labelled to the unlabelled samples based on the edge probabilities. In this example, although two "Herbs" samples are connected to the unlabelled sample in the centre, it is labelled "Forest", since the probability of the "Forest" edge is higher than those of the "High Shrub" and the two "Herbs" edges.
The final output of SSL is semi-labelled samples that might carry incorrect labels. Following their generation, the ES was used to classify the semi-labelled samples again, in parallel with SSL. Samples for which the ES predicted the same class label as SSL were merged with the original training set to generate a new, extended training set; the others were moved back to the unlabelled pool.

4.3. Expert System a Priori Probabilities

The reference vegetation map was used to estimate the a priori probabilities for the vegetation classes and the initial conditional probabilities for all the feature layers (Table 3).
The ancillary data were used to extract training samples, whose feature values were then statistically analysed using the methodology explained in Section 3.3 to define the probability of occurrence of a vegetation type within an image object. The result is six sets of rules over the feature layers, containing the probabilities of occurrence of evidence (i.e., a set of features) given a hypothesis (i.e., a specific vegetation type); these rules correspond to the mean and standard deviation values of the feature layers (Figure 5).
Figure 5 illustrates how rules were derived for the mean values of the ancillary data. These probability rules were applied to each input sample to compute its probability values; the black bars indicate the probability of the sample belonging to each class. Based on three facts derived from the published resources, three more rules were generated. The facts were (1) the dependency of some classes, e.g., herbs, on water availability (streams), (2) the presence/absence of some classes around the residential areas, e.g., green beach, and (3) a forestry programme adjacent to the village [63]. Using the method described above, the distances of the vegetation classes from the water streams and the residential area were analysed and divided into three quantiles, and the probability of occurrence of each vegetation class at each distance quantile was computed (Table 4).
According to the extracted probability rules in Table 4, each sample's feature values were examined and, depending on the quantile in which they fell, a probability value was assigned. By iterating through the nine rule sets, each sample thus gained nine sets of class probabilities. These probabilities were then merged into a final combined probability, and the class with the highest probability value was assigned to the sample.
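The paper does not spell out how the nine probability sets are merged; a naive-Bayes-style product with the priors of Table 3 is one reading consistent with the Bayes framing of Section 3.3, sketched below. Names and shapes are illustrative assumptions.

```python
# A hedged sketch of the final ES combination step: merge per-class
# probabilities from the nine rule sets and assign the argmax class.
import numpy as np

def combine_rules(rule_probs: np.ndarray, priors: np.ndarray, classes: list):
    """rule_probs: (9, n_classes) evidence probabilities for one sample."""
    posterior = priors * np.prod(rule_probs, axis=0)   # P(H_a) * prod_b P(E_b | H_a)
    posterior = posterior / (posterior.sum() + 1e-12)  # normalise over classes
    return classes[int(np.argmax(posterior))], posterior
```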

4.4. SSLES Results

After running SSLES, 1513 new samples were labelled. These newly labelled samples were combined with the original training set to generate the new training set. Table 5 summarises the number of training samples for each class.
Unlike most SSL implementations, which exploit the unlabelled samples iteratively until all samples have a label, in this study graph-based SSL was followed by a basic supervised classifier. As shown in [64], this implementation has the advantage of reducing the computational complexity of the algorithm and can also classify more new samples.

4.5. Classification Results and Evaluation

4.5.1. Parameter Tuning

To evaluate the performance of SSLES, three comparison scenarios were conducted with the SSL-only, ES-only, and RF methods. Before conducting the experiments, three parameters needed to be tuned to obtain optimal results. For this, the parameter values were varied over a plausible range, and the values yielding the highest OA were selected as the final parameter values. The parameters were the number of nearest neighbours k and the kernel bandwidth σ of the SSL, and ntree of the RF classifier, tuned over the ranges k ∈ {1, 2, …, 20}, σ ∈ {0.1, 0.2, …, 2}, and ntree ∈ {10, 20, …, 800}, respectively, based on previous studies. The graph-construction parameters were set to k = 16 and σ = 0.2, and the RF classifier's parameter to ntree = 80, which led to the highest OA.

4.5.2. Classification Evaluation

After tuning the parameters, the experiments were carried out. Table 6 reports the obtained classification scores for the six groups of band combinations using four methods.
As can be observed in Table 6, SSLES produced higher accuracy on all six datasets compared to SSL alone, ES alone, and RF. The highest accuracy was achieved using only the red-edge bands. Figure 6 illustrates the final classified map of the study area for the Group 4 band combination, using SSLES for classification.
Classification with Groups 1 and 6 also yielded a noticeably high OA, comparable to Group 4. Examining the results of these three groups with McNemar's test revealed no statistically significant differences between the accuracies obtained from Groups 1, 4, and 6. Figure 7 shows an example of an area (in the east of the island) where the four classification methods provided different results.
The confusion matrix of SSLES is shown in Figure 8 to report the producer's accuracies of the vegetation classes. The results are presented as a 2D plot where colour represents the accuracy of each class: a highly accurate classification appears light/white on the main diagonal and dark/black in the other cells, indicating no misclassification.

5. Discussion

The results in Table 6 show that SSLES yields higher classification accuracy than SSL. Furthermore, using the red-edge bands provided the highest accuracy, confirming the findings of [3], where the importance of the Sentinel-2 red-edge bands for vegetation classification was highlighted. As shown in Figure 8, a considerable number of "Herbs" samples were classified as "High matted grass", and the classifier confused "Low matted grass" and "High matted grass" with other classes. This can probably be explained by (i) the high diversity of these two classes in the study area and (ii) the low diversity of training samples belonging to these classes. To gain better insight into SSLES's performance and the advantage of integrating an ES with SSL, the confusion matrix obtained from SSLES was subtracted from that of SSL, and the results are presented in a new matrix. In the resulting matrix, if the value of a cell increased, it is labelled "positive change"; if the value decreased, "negative change"; otherwise, "no change" (Figure 9).
Comparing the obtained matrices reveals that the classification accuracies of all vegetation classes improved, except for five off-diagonal cells. Ideally, all positive changes would occur in the main diagonal elements and none in the off-diagonal elements, meaning a decrease in misclassification. This result implies that the contribution of the ES to the labelling process of SSL removes the less reliable semi-labelled samples and increases the overall quality of the training samples obtained by SSL.
The performance of SSLES can be discussed by considering two perspectives. The first lies in constructing the graph in SSL, which is based on image features rather than spatial neighbourhood and spectral similarity. This could help to obtain a better estimation of the underlying class label of the potential unlabelled image objects. The second perspective is related to the role of the ES in increasing the certainty of labelling in SSL. The ES handled this through the use of probability rules that aimed to link environmental parameters and the location of vegetation types, where it is most likely that a vegetation type may occur.
To provide a benchmark for evaluating the performance of SSLES, it was assumed that the number of reference training samples was representative and sufficient to train a standard supervised classifier. Since SSLES increased the number of samples in the training set, there was a risk of overfitting the classifier. Therefore, to test the robustness of SSLES with respect to the number of initial training samples, a new test was conducted using the Group 4 dataset, in which only 50% of the original reference training samples were used for SSLES; the result was evaluated with the same test set used previously. This yielded an OA of 80%, and compared to the case of using all the training samples, no statistically significant difference was observed.
The classification accuracies could have been negatively affected by various factors, including the uncertainties associated with the samples obtained from OBIA. If the segmentation is of low quality, it may not separate two different classes properly in the image; hence, the features extracted for an image object will mix the features of two different classes [65,66]. To avoid this problem in the current study, the segmentation results were compared to the reference sample polygons. Nevertheless, any uncertainty in the reference vegetation map may propagate into the segmentation. The segmentation was performed on the 5 m RapidEye image, while the features were extracted from the Sentinel-2 image. Although there was a one-year gap between the two images, they were acquired at the same time of year, and over such a relatively short period no changes are expected in the vegetation structure of the study area except for the agriculture class. This was further confirmed with high-resolution Google Earth imagery.
The limitations of SSLES follow from the restrictions of SSL and the ES. In terms of the ES, assigning quantitative values for the a priori probabilities carries some uncertainty, since it is based mainly on the reference vegetation map. Using the knowledge and experience of specialised experts such as ecologists could yield a stronger knowledge base, but this was not possible due to time constraints. However, the assumption of this study was to use any available source of (prior) knowledge and expertise, whether a human expert or other resources such as scientific research and published works. Regarding SSL, the KNN method was used for graph construction. A recent study [67] showed that KNN may result in irregular graph construction, where each node is connected to more than K neighbours; in this case, the algorithm may assign incorrect class labels to the connected nodes. This problem might be more pronounced in the present study because of the high similarity between the vegetation classes.

6. Conclusions

In this study, a new approach for the semi-supervised classification of satellite images was proposed and applied to vegetation cover classification. The algorithm constructs a graph based on image features from OBIA and uses the Euclidean distance to compute the similarity between samples, from which the K nearest neighbours are selected for labelling. The key point of this study was the use of an ES to supervise the labelling process of the graph-based SSL algorithm.
The capabilities of OBIA, SSL, and ESs for classification, particularly in remote sensing, have already been investigated in the literature. The novel contribution of this study is their integration into a single algorithm. The results demonstrate the effectiveness of the proposed algorithm for the challenging problem of vegetation cover classification, where some vegetation classes show similar characteristics. The capability of Sentinel-2 spectral data for vegetation classification was also assessed, and the results show that the red-edge band combination yields the highest overall accuracy for vegetation cover classification.
In a future study, linking the vegetation classification levels to the concept of Anderson levels for land cover mapping could be considered, to adjust the vegetation classes so that the highest level has the highest accuracy [68]. From an algorithmic perspective, the applicability of the SSLES method to different land covers and biomes using different remote sensing data could be analysed. Concerning SSL, different strategies need to be investigated for selecting the unlabelled samples in a more informative and reliable way; using alternative approaches for graph construction instead of KNN, as well as a different similarity measure such as the Jeffries–Matusita (JM) distance, should be a high priority. Finally, the applicability of the presented method to the classification of large-scale or big data in remote sensing may be investigated.

Author Contributions

Conceptualization, N.F.L. and R.D.; Formal analysis, N.F.L.; Methodology, N.F.L. and R.D.; Project administration, R.D. and A.K.S.; Resources, R.D.; Software, N.F.L.; Supervision, R.D. and A.K.S.; Validation, N.F.L.; Visualization, N.F.L.; Writing—original draft, N.F.L.; Writing—review & editing, N.F.L., R.D., A.K.S., C.P. and N.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Erasmus+ programme of Education, Audio-visual and Culture Executive Agency (EACEA) of the European Union.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Egbert, S.L.; Park, S.; Price, K.P.; Lee, R.-Y.; Wu, J.; Duane Nellis, M. Using conservation reserve program maps derived from satellite imagery to characterize landscape structure. Comput. Electron. Agric. 2002, 37, 141–156.
  2. Xie, Y.; Sha, Z.; Yu, M. Remote sensing imagery in vegetation mapping: A review. J. Plant Ecol. 2008, 1, 9–23.
  3. Immitzer, M.; Vuolo, F.; Atzberger, C. First experience with Sentinel-2 data for crop and tree species classifications in central Europe. Remote Sens. 2016, 8, 166.
  4. Mui, A.; He, Y.; Weng, Q. An object-based approach to delineate wetlands across landscapes of varied disturbance with high spatial resolution satellite imagery. ISPRS J. Photogramm. Remote Sens. 2015, 109, 30–46.
  5. Richards, J.A. Remote Sensing Digital Image Analysis; Springer: Berlin/Heidelberg, Germany, 2013; ISBN 978-3-642-30061-5.
  6. Bhatnagar, S.; Gill, L.; Regan, S.; Naughton, O.; Johnston, P.; Waldren, S.; Ghosh, B. Mapping vegetation communities inside wetlands using Sentinel-2 imagery in Ireland. Int. J. Appl. Earth Obs. Geoinf. 2020, 88, 102083.
  7. Skidmore, A.K.; Forbes, G.W.; Carpenter, D.J. Technical note: Non-parametric test of overlap in multispectral classification. Int. J. Remote Sens. 1988, 9, 777–785.
  8. Mellor, A.; Boukir, S.; Haywood, A.; Jones, S. Exploring issues of training data imbalance and mislabelling on random forest performance for large area land cover classification using the ensemble margin. ISPRS J. Photogramm. Remote Sens. 2015, 105, 155–168.
  9. Persello, C.; Bruzzone, L. Active and semisupervised learning for the classification of remote sensing images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6937–6956.
  10. Board, R.; Pitt, L. Semi-supervised learning. Mach. Learn. 1989, 4, 41–65.
  11. Chapelle, O.; Schölkopf, B.; Zien, A. Semi-Supervised Learning; The MIT Press: London, UK, 2010; ISBN 9780262255899.
  12. Jackson, Q.; Landgrebe, D.A. An adaptive classifier design for high-dimensional data analysis with a limited training data set. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2664–2679.
  13. Prabukumar, M.; Shrutika, S. Band clustering using expectation–maximization algorithm and weighted average fusion-based feature extraction for hyperspectral image classification. J. Appl. Remote Sens. 2018, 12, 046015.
  14. Dalponte, M.; Ene, L.T.; Marconcini, M.; Gobakken, T.; Næsset, E. Semi-supervised SVM for individual tree crown species classification. ISPRS J. Photogramm. Remote Sens. 2015, 110, 77–87.
  15. Bruzzone, L.; Chi, M.; Marconcini, M. A novel transductive SVM for semisupervised classification of remote-sensing images. IEEE Trans. Geosci. Remote Sens. 2006, 44, 3363–3373.
  16. Maulik, U.; Chakraborty, D. A self-trained ensemble with semisupervised SVM: An application to pixel classification of remote sensing imagery. Pattern Recognit. 2011, 44, 615–623.
  17. Dopido, I.; Li, J.; Marpu, P.R.; Plaza, A.; Bioucas Dias, J.M.; Benediktsson, J.A. Semisupervised self-learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4032–4044.
  18. Geiß, C.; Aravena Pelizari, P.; Blickensdörfer, L.; Taubenböck, H. Virtual support vector machines with self-learning strategy for classification of multispectral remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2019, 151, 42–58.
  19. Lu, X.; Zhang, J.; Li, T.; Zhang, Y. Incorporating diversity into self-learning for synergetic classification of hyperspectral and panchromatic images. Remote Sens. 2016, 8, 804.
  20. Camps-Valls, G.; Bandos Marsheva, T.V.; Zhou, D. Semi-supervised graph-based hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3044–3054.
  21. Gu, Y.; Feng, K. L1-graph semisupervised learning for hyperspectral image classification. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 1401–1404.
  22. Ma, L.; Crawford, M.M.; Yang, X.; Guo, Y. Local-manifold-learning-based graph construction for semisupervised hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2832–2844.
  23. Zhao, Y.; Su, F.; Yan, F. Novel semi-supervised hyperspectral image classification based on a superpixel graph and discrete potential method. Remote Sens. 2020, 12, 1528.
  24. Zhu, X.; Ghahramani, Z.; Lafferty, J. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning, Washington, DC, USA, 21 August 2003; pp. 912–919.
  25. Kim, K.-H.; Choi, S. Label propagation through minimax paths for scalable semi-supervised learning. Pattern Recognit. Lett. 2014, 45, 17–25.
  26. Ma, L.; Ma, A.; Ju, C.; Li, X. Graph-based semi-supervised learning for spectral-spatial hyperspectral image classification. Pattern Recognit. Lett. 2016, 83, 133–142.
  27. Chong, Y.; Ding, Y.; Yan, Q.; Pan, S. Graph-based semi-supervised learning: A review. Neurocomputing 2020, 408, 216–230.
  28. Skidmore, A.K.; Turner, B.J. Forest mapping accuracies are improved using a supervised nonparametric classifier with SPOT data. Photogramm. Eng. Remote Sens. 1988, 54, 1415–1421.
  29. Hayes-Roth, F.; Waterman, D.; Lenat, D. Building Expert Systems; Addison-Wesley: Reading, MA, USA, 1983; ISBN 0-201-10686-8.
  30. Booker, J.M.; McNamara, L.A. Solving black box computation problems using expert knowledge theory and methods. Reliab. Eng. Syst. Saf. 2004, 85, 331–340.
  31. Schmidt, K.S.; Skidmore, A.K. Spectral discrimination of vegetation types in a coastal wetland. Remote Sens. Environ. 2003, 85, 92–108.
  32. Pranger, D.P.; Tolman, M.E. Toelichting Bij De Vegetatiekartering Schiermonnikoog Op Basis Van False Colour-Luchtfoto's 1:10.000 [Explanation of the Vegetation Mapping of Schiermonnikoog 2010 on the Basis of False Colour Aerial Photographs 1:10,000; in Dutch]; Rijkswaterstaat: Delft, The Netherlands, 2012.
  33. Vrieling, A.; Skidmore, A.K.; Wang, T.; Meroni, M.; Ens, B.J.; Oosterbeek, K.; O'Connor, B.; Darvishzadeh, R.; Heurich, M.; Shepherd, A.; et al. Spatially detailed retrievals of spring phenology from single-season high-resolution image time series. Int. J. Appl. Earth Obs. Geoinf. 2017, 59, 19–30.
  34. Darvishzadeh, R.; Wang, T.; Skidmore, A.; Vrieling, A.; O'Connor, B.; Gara, T.W.; Ens, B.J.; Paganini, M. Analysis of Sentinel-2 and RapidEye for retrieval of leaf area index in a saltmarsh using a radiative transfer model. Remote Sens. 2019, 11, 671.
  35. ESA. SENTINEL-2 User Handbook. Available online: https://sentinels.copernicus.eu/web/sentinel/user-guides/document-library/-/asset_publisher/xlslt4309D5h/content/sentinel-2-user-handbook (accessed on 24 July 2015).
  36. Atzberger, C.; Darvishzadeh, R.; Schlerf, M.; le Maire, G. Suitability and adaptation of PROSAIL radiative transfer model for hyperspectral grassland studies. Remote Sens. Lett. 2013, 4, 55–64.
  37. Tigges, J.; Lakes, T.; Hostert, P. Urban vegetation classification: Benefits of multitemporal RapidEye satellite data. Remote Sens. Environ. 2013, 136, 66–75.
  38. Darvishzadeh, R.; Atzberger, C.; Skidmore, A.K.; Abkar, A.A. Leaf area index derivation from hyperspectral vegetation indices and the red edge position. Int. J. Remote Sens. 2009, 30, 6199–6218.
  39. Gilmore, M.S.; Wilson, E.H.; Barrett, N.; Civco, D.L.; Prisloe, S.; Hurd, J.D.; Chadwick, C. Integrating multi-temporal spectral and structural information to map wetland vegetation in a lower Connecticut River tidal marsh. Remote Sens. Environ. 2008, 112, 4048–4060.
  40. Macintyre, P.; van Niekerk, A.; Mucina, L. Efficacy of multi-season Sentinel-2 imagery for compositional vegetation classification. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 101980.
  41. Cochran, W.G. Sampling Techniques; John Wiley and Sons: New York, NY, USA, 1977; p. 428.
  42. Bird, E. Coastal Geomorphology: An Introduction; John Wiley and Sons Ltd.: Chichester, UK, 2008; ISBN 9780874216561.
  43. Rundquist, D.C.; Narumalani, S.; Narayanan, R.M. A review of wetlands remote sensing and defining new considerations. Remote Sens. Rev. 2001, 20, 207–226.
  44. Schmidt, K.S.; Skidmore, A.K.; Kloosterman, E.H.; van Oosten, H.; Kumar, L.; Janssen, J.M. Mapping coastal vegetation using an expert system and hyperspectral imagery. Photogramm. Eng. Remote Sens. 2004, 70, 703–715.
  45. Comaniciu, D.; Meer, P. Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619.
  46. Laurent, V.C.E.; Schaepman, M.E.; Verhoef, W.; Weyermann, J.; Chavez, R.O. Bayesian object-based estimation of LAI and chlorophyll from a simulated Sentinel-2 top-of-atmosphere radiance image. Remote Sens. Environ. 2014, 140, 318–329.
  47. Clinton, N. An accuracy assessment measure for object based image segmentation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 1189–1194.
  48. Möller, M.; Lymburner, L.; Volk, M. The comparison index: A tool for assessing the accuracy of image segmentation. Int. J. Appl. Earth Obs. Geoinf. 2007, 9, 311–321.
  49. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621.
  50. Yu, Q.; Gong, P.; Clinton, N.; Biging, G.; Kelly, M.; Schirokauer, D. Object based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogramm. Eng. Remote Sens. 2006, 72, 799–811.
  51. Mathieu, R.; Aryal, J.; Chong, A.K. Object-based classification of IKONOS imagery for mapping large-scale vegetation communities in urban areas. Sensors 2007, 7, 2860–2880.
  52. Pham, L.T.H.; Brabyn, L.; Ashraf, S. Combining QuickBird, LiDAR, and GIS topography indices to identify a single native tree species in a complex landscape using an object-based classification approach. Int. J. Appl. Earth Obs. Geoinf. 2016, 50, 187–197.
  53. Fu, B.; Xie, S.; He, H.; Zuo, P.; Sun, J.; Liu, L.; Huang, L.; Fan, D.; Gao, E. Synergy of multi-temporal polarimetric SAR and optical image satellite for mapping of marsh vegetation using object-based random forest algorithm. Ecol. Indic. 2021, 131, 108173.
  54. Rohban, M.H.; Rabiee, H.R. Supervised neighborhood graph construction for semi-supervised classification. Pattern Recognit. 2012, 45, 1363–1372.
  55. Szummer, M.; Jaakkola, T. Partially labelled classification with Markov random walks. Adv. Neural Inf. Process. Syst. 2001, 14, 945–952.
  56. Skidmore, A. An expert system classifies eucalypt forest types using Thematic Mapper data and a digital terrain model. Photogramm. Eng. Remote Sens. 1989, 55, 1449–1464.
  57. Breiman, L. Random Forests—Random Features; Technical Report 567; Statistics Department, University of California: Berkeley, CA, USA, 1999.
  58. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
  59. Immitzer, M.; Atzberger, C.; Koukal, T. Tree species classification with random forest using very high spatial resolution 8-band WorldView-2 satellite data. Remote Sens. 2012, 4, 2661–2693.
  60. Liaw, A.; Wiener, M. Classification and regression by randomForest. R News 2002, 2, 18–22.
  61. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300.
  62. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2008; ISBN 9781420055122.
  63. Beukeboom, T.J. The Hydrology of the Frisian Islands; Rodopi bv Editions: Amsterdam, The Netherlands, 1976; ISBN 9062034195.
  64. Wang, L.; Hao, S.; Wang, Q.; Wang, Y. Semi-supervised classification for hyperspectral imagery based on spatial-spectral label propagation. ISPRS J. Photogramm. Remote Sens. 2014, 97, 123–137.
  65. Song, M.; Civco, D.L.; Hurd, J.D. A competitive pixel-object approach for land cover classification. Int. J. Remote Sens. 2005, 26, 4981–4997.
  66. Liu, D.; Xia, F. Assessing object-based classification: Advantages and limitations. Remote Sens. Lett. 2010, 1, 187–194.
  67. Jebara, T.; Wang, J.; Chang, S.-F. Graph construction and b-matching for semi-supervised learning. Int. Conf. Mach. Learn. ICML 2009, 1–8.
  68. Anderson, J.; Hardy, E.; Roach, J.; Witmer, R. A Land Use and Land Cover Classification System for Use with Remote Sensor Data; Geological Survey Professional Paper 964; US Government Printing Office: Washington, DC, USA, 1976.
Figure 1. Location of Schiermonnikoog national park.
Figure 2. SSLES flowchart.
Figure 3. Segmentation result of the RapidEye image with hs = 5 and hr = 7, shown on a false colour composite of Sentinel-2 (B08, B04, B03).
Figure 4. Illustration of the label propagation procedure. Coloured circles represent four different labelled samples, and white circles represent unlabelled samples. Values indicate the probability of edges. The propagation direction is shown by the direction of the arrows, i.e., the arrow is always from a labelled sample to an unlabelled sample.
Figure 5. Example of expert rule weights for ancillary data. The Y-axis shows the initial conditional probability, and the X-axis shows the 10 quantiles.
Figure 6. Classified map of the study area using the Group 4 band combination, i.e., the red-edge spectral bands of Sentinel-2 (as this group achieved the highest accuracy), and the SSLES method.
Figure 7. Example of classification result using (a) SSLES, (b) SSL only, (c) ES only, and (d) RF.
Figure 8. Producer's accuracy obtained with SSLES. The colours illustrate the accuracy, where white represents 100% accuracy and black represents 0% accuracy.
Figure 9. A detailed comparison of the two classification methods; the matrix is the result of subtracting the confusion matrix of SSLES from that of SSL. White represents an increase in a cell's value, black a decrease, and cyan an unchanged value.
Table 1. The vegetation classes in Schiermonnikoog Island and the number of their collected samples.

| Class Name | Number of Training Samples | Number of Test Samples |
|---|---|---|
| High matted grass | 160 | 107 |
| Low matted grass | 142 | 95 |
| Agriculture | 71 | 47 |
| Forest | 58 | 39 |
| Green beach | 58 | 39 |
| Tussock grass | 45 | 30 |
| High shrub | 45 | 30 |
| Herbs | 35 | 23 |
| Low salix shrub | 25 | 17 |
| Low hippophae shrub | 11 | 7 |
| Sum | 650 | 434 |
Table 2. The number of image objects in the three different subsets.

| Training Objects | Test Objects | Unlabelled Objects | Total Number of Objects |
|---|---|---|---|
| 650 | 434 | 4146 | 5230 |
Table 3. A priori probability estimation of the vegetation classes, i.e., the probability of occurrence of each vegetation type in the study area. HMG: High Matted Grass, LMG: Low Matted Grass, Ag: Agriculture, Fr: Forest, GB: Green Beach, TG: Tussock Grass, HS: High Shrub, Hr: Herbs, LSS: Low Salix Shrub, and LHS: Low Hippophae Shrub.

| Class | HMG | LMG | Ag | Fr | GB | TG | HS | Hr | LSS | LHS |
|---|---|---|---|---|---|---|---|---|---|---|
| Probability | 0.28 | 0.24 | 0.10 | 0.08 | 0.08 | 0.06 | 0.06 | 0.05 | 0.03 | 0.01 |
Table 4. Expert rules based on the distance of the vegetation classes to streams and residential areas. Values give the probability of occurrence of each vegetation class at each distance quantile.

| Vegetation Class | Streams: Quantile 1 | Streams: Quantile 2 | Streams: Quantile 3 | Residential: Quantile 1 | Residential: Quantile 2 | Residential: Quantile 3 |
|---|---|---|---|---|---|---|
| Ag | 0.11 | 0.31 | 0.58 | 0.90 | 0.10 | 0.00 |
| Fr | 0.06 | 0.22 | 0.72 | 0.41 | 0.34 | 0.25 |
| HMG | 0.40 | 0.45 | 0.15 | 0.00 | 0.15 | 0.85 |
| LMG | 0.22 | 0.31 | 0.47 | 0.23 | 0.46 | 0.31 |
| TG | 0.09 | 0.21 | 0.70 | 0.27 | 0.55 | 0.18 |
| HS | 0.19 | 0.25 | 0.56 | 0.13 | 0.45 | 0.42 |
| LHS | 0.00 | 0.29 | 0.71 | 0.00 | 0.04 | 0.96 |
| LSS | 0.29 | 0.10 | 0.61 | 0.24 | 0.76 | 0.00 |
| GB | 0.00 | 0.51 | 0.49 | 0.00 | 0.01 | 0.99 |
| Hr | 0.61 | 0.38 | 0.01 | 0.01 | 0.00 | 0.99 |
Table 5. The number of original training samples, newly labelled samples, and new training samples.

| Class Name | Original Training Samples | Newly Labelled Samples | New Training Samples |
|---|---|---|---|
| HMG | 160 | 494 | 654 |
| LMG | 142 | 290 | 432 |
| Ag | 71 | 167 | 238 |
| Fr | 58 | 121 | 179 |
| GB | 58 | 106 | 164 |
| TG | 45 | 99 | 144 |
| HS | 45 | 89 | 134 |
| Hr | 35 | 67 | 102 |
| LSS | 25 | 48 | 73 |
| LHS | 11 | 32 | 43 |
| Sum | 650 | 1513 | 2163 |
Table 6. Classification results for the six groups of band combinations of Sentinel-2 data, in terms of OA and kappa coefficient. Group 1: all spectral bands; Group 2: red and infrared bands; Group 3: all shortwave infrared bands; Group 4: all red-edge bands; Group 5: red, infrared, and red-edge bands; Group 6: red-edge and shortwave infrared bands.

| Dataset | SSLES OA (%) | SSLES Kappa | RF OA (%) | RF Kappa | SSL Only OA (%) | SSL Only Kappa | ES Only OA (%) | ES Only Kappa |
|---|---|---|---|---|---|---|---|---|
| Group 1 | 81.1 | 0.67 | 64.6 | 0.52 | 70.9 | 0.60 | 68.1 | 0.55 |
| Group 2 | 73.5 | 0.57 | 58.9 | 0.44 | 62.3 | 0.48 | 60.9 | 0.46 |
| Group 3 | 74.6 | 0.59 | 60.1 | 0.47 | 63.8 | 0.49 | 63.1 | 0.48 |
| Group 4 | 83.6 | 0.70 | 64.9 | 0.56 | 71.8 | 0.61 | 69.5 | 0.57 |
| Group 5 | 67.8 | 0.49 | 47.2 | 0.33 | 55.3 | 0.40 | 53.8 | 0.38 |
| Group 6 | 79.9 | 0.67 | 64.3 | 0.55 | 58.2 | 0.59 | 68.7 | 0.57 |