Article

Automated Mapping of Land Cover Type within International Heterogenous Landscapes Using Sentinel-2 Imagery with Ancillary Geospatial Data

Geospatial Research Laboratory, Engineer Research and Development Center, 7701 Telegraph Road, Bldg 2592, Alexandria, VA 22315, USA
* Author to whom correspondence should be addressed.
Sensors 2024, 24(5), 1587; https://doi.org/10.3390/s24051587
Submission received: 31 December 2023 / Revised: 1 February 2024 / Accepted: 27 February 2024 / Published: 29 February 2024
(This article belongs to the Special Issue Feature Papers in Remote Sensors 2023)

Abstract

A near-global framework for automated training data generation and land cover classification using shallow machine learning with low-density time series imagery does not exist. This study presents a methodology to map nine-class, six-class, and five-class land cover using two dates (winter and non-winter) of a Sentinel-2 granule across seven international sites. The approach uses a series of spectral, textural, and distance decision functions combined with modified ancillary layers (such as global impervious surface and global tree cover) to create binary masks from which to generate a balanced set of training data applied to a random forest classifier. For the land cover masks, stepwise threshold adjustments were applied to reflectance, spectral index values, and Euclidean distance layers, with 62 combinations evaluated. Global (all seven scenes) and regional (arid, tropics, and temperate) adaptive thresholds were computed. An annual 95th and 5th percentile NDVI composite was used to provide temporal corrections to the decision functions, and these corrections were compared against the original model. The accuracy assessment found that the regional adaptive thresholds for both the two-date land cover and the temporally corrected land cover could accurately map land cover type within nine-class (68.4% vs. 73.1%), six-class (79.8% vs. 82.8%), and five-class (80.1% vs. 85.1%) schemes. Lastly, the five-class and six-class models were compared with a manually labeled deep learning model (Esri), where they performed with similar accuracies (five classes: Esri 80.0 ± 3.4%, region corrected 85.1 ± 2.9%). The results highlight not only performance in line with an intensive deep learning approach, but also that reasonably accurate models can be created without a full annual time series of imagery.

1. Introduction

In a changing world, the mapping and monitoring of land cover from remotely sensed imagery is a key component of global change studies. Land cover plays a central role in many environmental studies. At a broader scale, land cover is a key driver of climate change, with impacts on biogeochemical cycles, aerosols, and greenhouse gas emissions [1,2,3]. At regional and local scales, land cover and its associated changes have many impacts. Land cover change has been linked with alterations to water pollution and water quality [4], including impacts leading to harmful algal blooms [5,6]. Forest cover loss has been linked to adverse effects on regional hydrometeorology [7]. Land cover change has been linked to land surface temperature changes, which are attributed to surface albedo effects [8,9,10,11]. Effects on soil erosion and sedimentation of water bodies have also been related to land cover change [12,13]. Drought, fires, natural disasters, and the positive feedback from climate change have also played a role in driving land cover change [14,15,16]. The interconnection of land cover and land cover change demonstrates the importance of accurate methods to map land cover around the world.
Broad-scale land cover changes have been documented in various regions of the world. Logging, deforestation, reforestation, cropland expansion, and urban expansion have played a significant role in Southeast Asia [17], some of which have been driven by fires [18,19]. Therefore, monitoring and mapping of land cover is important for a variety of disciplinary studies in this region and beyond.
Efforts to map land cover at a global scale have been undertaken using a variety of datasets and methods. Early studies used 1 km or coarser resolution imagery with classification tree methods, a predecessor to the random forest. One such study yielded approximately 82% overall accuracy near-globally with 13 classes and used a variety of biophysical metrics as input features, such as NDVI amplitude and temperature at maximum NDVI [20]. A later study performed a similar assessment but leveraged decision trees [21]. By the mid-2000s, studies began producing global land cover maps using moderate–coarse-resolution MODIS 500 m or ENVISAT 300 m imagery. For example, the GLOBCOVER 2009 dataset was produced at a 300 m scale [22] with an accuracy of 73% [23]. Annual global MODIS-based 500 m land cover products were generated using supervised decision tree methods with manual training data across five prevalent classification schemes, including the IGBP global vegetation classification scheme and the University of Maryland classification scheme, achieving a 75% overall accuracy [24]. These moderate–coarse-resolution datasets suffered from mixed pixel issues and variations in classification schemes, which made intercomparison difficult [25]. By the 2010s, with improved data storage and distributed processing, moderate-resolution 30 m Landsat-based global products were released. For example, one study developed a global Landsat land cover map with eight land cover classes at an overall accuracy of 65%, using support vector machines (SVMs) and manually labeled training data [26]. The increased availability of cloud computing, using databases such as Google Earth Engine (GEE), led to more advancements. Recently, Sentinel-2 10 m and 20 m imagery, combined with deep learning and manual training data, has resulted in a global layer with about 85% accuracy for nine general land cover classes [27]. A recent study over Europe used time series Sentinel-2 imagery with automated data labels to produce a 13-class map with 86% accuracy [28]. The CORINE land cover mapping program has used computer-assisted photointerpretation to produce a series of 44-class land cover maps over Europe, with the most recent being the 100 m product based on Sentinel-2 imagery for the nominal period of 2018 [29,30]. The most recent product had an accuracy of about 92% [31]. Most of the mentioned studies leverage dense annual time series stacks of cloud-free satellite imagery.
Land cover datasets produced using multi-sensor fusion have emerged due to the increased capabilities of cloud computing and sensor data availability. One study produced annual datasets over a 30-year period using Landsat, MODIS, and AVHRR imagery with 80% accuracy [32]. The coarse imagery provided useful information at a near-daily scale about phenology, which led to an accurate outcome. Until recently, studies had not produced near-real-time land cover datasets, instead typically producing datasets annually, which made timely monitoring difficult. The Dynamic World dataset achieved near-real-time capability with an accuracy ranging from about 77% to 88%, depending on the validation method [33]. SAR, multispectral, elevation, and/or lidar imagery fusion combined with advanced deep neural network methods have also been applied in select regions, with overall accuracies ranging from 75% to 95% [34,35,36,37,38,39,40,41,42,43], including studies with automatically generated training data for self-supervised classification on select study sites [44,45]. The studies leveraging deep learning methods generally exceed the accuracy of shallow machine learning methods due to deep learning’s ability to better represent texture, morphology, and objects [46,47,48], but the accuracy can be poor when training data are of low quality or not geographically transferable [49,50,51]. Some studies, however, have had small-scale success in automating training data using region-growing and conditional random fields or other methods [52]. Lately, studies have combined pixel-based image segmentation, such as multi-layer perceptron, with patch-based convolutional neural networks to leverage the advantages of both methods [53].
Despite these advancements in deep learning methods, traditional shallow machine learning methods should not be overlooked, given their scalability and efficiency. A variety of studies have had success with shallow machine learning methods at country or region scales. One recent study evaluated the use of different temporal composites, which performed better than complete annual time series, and found that performance can be very high: overall accuracy was 89.8% [54]; similarly, temporal time series smoothing can yield about a 4% increase in accuracy [55]. Another study within the Brazilian tropical savannah achieved 73% and 74% accuracy with random forest and support vector machines, respectively [56]. In several European countries, a random forest and support vector machine classifier resulted in Sentinel-2 scene accuracies ranging from 70% to 86% [57]. Other studies leveraging shallow machine learning with manual training labels have typically achieved accuracies between 70% and 90% [58,59,60,61] but with occasional higher accuracy exceptions [62,63]. The inclusion of texture and other spatial metrics has led to accuracies above 80% [64] or 90% [65], with automated training labels yielding 85% in one case [66]. It is important to note that such intercomparison of studies can be misleading, as accuracy metrics are typically impacted by the total number of classes, definition of classes, image resolution, local geography, and climate. Studies have evaluated deep-learning- and shallow-machine-learning-based classifiers and found the latter to have similar or slightly lower accuracy, typically with less than a 5% difference [52,67,68,69,70,71,72,73] but occasionally higher than 10% [74,75].
At higher resolutions (finer than 5 m), deep learning methods that take advantage of context, morphology, and texture are typically key for achieving strong accuracy [76,77,78,79,80]; however, shallow machine learning approaches operating with pixel-based methods [81] or object-based segmentation in conjunction with texture and ancillary data have achieved accuracies ranging from 70% to above 90% within local–regional study sites [82,83]. UAS imagery has also been classified using SVM or random forest combined with texture, shape properties, and other feature sets to produce accuracies above 85% [84,85,86]. Some of the previously mentioned studies are geographically limited, and the models may require further training and tedious manual data collection for broader application. To alleviate this issue of a lack of training data, models can be pretrained on a large ancillary dataset and then fine-tuned at the local scale with limited labels; furthermore, reference data can be generated based on center points derived from CenterNet and guided by transfer learning [87,88].
Currently, a variety of land cover datasets are available to users at moderate resolutions: Esri land cover [27], CORINE land cover [28,29], ESA World Cover [89], Dynamic World [33], Global Land Use Extent [90], and several others. Coarse-resolution datasets include Globcover [22], MODIS land cover [24], and several others. While these products are accurate, most are generated for fixed time periods using dense time series sets of images, which can be computationally demanding. If a user needs a current year map, it may not be available from these datasets in a timely manner. The Dynamic World dataset produces land cover based only on a single date, thereby lacking seasonal land cover.
While previous studies have yielded accurate land cover classification schemes, many of these studies relied on dense time series stacks, manual or semi-supervised training labels, or computational deep learning methods. In this work, we propose a methodology that uses only two dates of imagery to capture some seasonal spectral variation, with automatically selected training data based on a combination of ancillary data, spectral, textural, and Euclidean distance characteristics. A random forest classifier was then fit to these training data for efficient processing on a local machine. The algorithm was optimized and evaluated on seven international Sentinel-2 granule-sized test sites of varied geographies and climates spanning five continents. After an intermediate set of land cover maps was generated along with more than 2800 manually labeled stratified random ground truth points, thresholds for automated training data labeling (spectral, textural, and distance) for each land cover type were evaluated across each scene using stepwise iteration to determine the optimal values per site, climate region (arid, temperate, and tropics), and across all sites combined (global). Several novel texture metrics were also included in the feature set, e.g., water texture. The resulting land cover model, based on two dates of imagery, was also evaluated against a temporally corrected model, which used an annual composite of NDVI 5th percentile and 95th percentile values to refine the model after classification. Different classification schemes were also computed and compared (nine-class, six-class, and five-class) against a deep-learning-based model hosted by Esri. Confusion matrixes were generated, and outputs were also compared by area and at the pixel level. Ultimately, this study offers novelty as it generates a near-globally compatible land cover algorithm that requires less than a full year of an imagery time series, can automatically generate training labels (no need for intensive manual labeling), and is efficient enough to be run on a local machine of average processing capability.

2. Study Area and Datasets

2.1. Study Area

The study sites were carefully selected to include areas representative of most of the geographies and landscapes across the world, with the exception of arctic, high latitude, and extreme mountainous regions. To best represent a land cover algorithm that would function at a near-global scale, we selected seven study sites that each span one Sentinel-2 granule size. These sites include arid, urban, tropical, temperate, rural, agricultural, and forested sites of varied terrain. Two arid sites, two tropical sites, and three temperate sites were selected. The study sites are described in Table 1.

2.2. Sentinel-2 Imagery

Sentinel-2 Multispectral Instrument (MSI) imagery is provided by the European Space Agency [91]. We acquired the Bottom of Atmosphere (BoA) reflectance, Level 2A (L2A), from the Copernicus Open Access Hub. The L2A product contains geometrically and radiometrically corrected imagery, along with a quality assurance layer called the Scene Classification Layer (SCL) that contains information on clouds, cloud shadows, snow, water, and ice, which was used to mask the input imagery and obtain a snow mask. Sentinel-2 MSI imagery includes 13 spectral bands at three spatial resolutions (10 m, 20 m, and 60 m). We leveraged the 20 m dataset to take advantage of the two shortwave-infrared (SWIR) bands and red-edge bands, along with the RGB and NIR bands. For each site, winter-season and non-winter-season images were acquired to capture spectral variation and seasonality. The processing algorithm for Sentinel-2 L2A imagery underwent changes for imagery acquired in late 2021 and thereafter. The most notable difference is that post-2021 L2A surface reflectance values have an offset value of 1000 added to them. Please note that all spectral band values reported in the methods have had this offset removed (where applicable), and reported values correspond to pre-late 2021 ranges. Table 1 shows a list of input imagery used for each site.
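As a sketch of this harmonization step (assuming scenes are read as integer digital-number arrays; the function and variable names are illustrative), the offset can be removed so that all scenes share the pre-late-2021 value range:

```python
import numpy as np

def harmonize_l2a(dn: np.ndarray, post_2021: bool) -> np.ndarray:
    """Remove the +1000 radiometric offset added to L2A products processed
    with the late-2021 baseline so all scenes share the pre-2022 range."""
    dn = dn.astype(np.int32)
    if post_2021:
        dn = np.clip(dn - 1000, 0, None)  # undo the BOA offset, floor at 0
    return dn
```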
To improve classification accuracy around land–water interfaces, a water texture layer is computed using the SCL mask from both dates of Sentinel-2 imagery. For each date, a 5 × 5 pixel kernel is convolved over the entire image, taking the sum of pixels labeled as “water” by the SCL layer at each convolution step. This texture layer is then included as a training attribute when fitting the random forest classifier. This layer helps the classifier to delineate water boundaries and highlight variation in shorelines over time.
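A minimal sketch of this water texture computation, assuming the SCL raster is loaded as a numpy array (the SCL water class code is 6; the helper name is illustrative):

```python
import numpy as np
from scipy import ndimage

SCL_WATER = 6  # "water" class code in the Sentinel-2 Scene Classification Layer

def water_texture(scl: np.ndarray) -> np.ndarray:
    """Count of SCL water pixels within a 5 x 5 window centered on each pixel."""
    water = (scl == SCL_WATER).astype(np.float32)
    kernel = np.ones((5, 5), dtype=np.float32)
    return ndimage.convolve(water, kernel, mode="constant", cval=0.0)

# One texture layer per acquisition date, both stacked as classifier features:
# tex_winter = water_texture(scl_winter); tex_nonwinter = water_texture(scl_nonwinter)
```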

2.3. Landsat Global Manmade Impervious Surface (GMIS) Layer

The GMIS layer, provided for the year 2010 from 30 m Landsat imagery from the Global Land Survey, was downloaded from the Socioeconomic Data and Applications Center of NASA [92]. The distributed layer is composed of values ranging from 0 to 100, indicating the percentage of impervious surface for a given pixel. Using the Geospatial Data Abstraction Library (GDAL), we downscaled the GMIS layer to 20 m resolution using nearest neighbor resampling and ensured pixels were properly aligned between the Sentinel-2 imagery and this product [93]. Conversion from coarse to finer resolution introduces some inherent positional error, but this is unavoidable and primarily affects less common edge pixels. We included the GMIS layer in the logical decision functions for training data selection to increase the accuracy of our model, as described in the Methodology section.
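The resampling and alignment step might look like the following GDAL sketch, assuming illustrative file names and a reference 20 m Sentinel-2 raster:

```python
from osgeo import gdal

# Reference grid taken from the 20 m Sentinel-2 image (file names are placeholders).
ref = gdal.Open("sentinel2_20m.tif")
gt = ref.GetGeoTransform()
xmin, ymax = gt[0], gt[3]
xmax = xmin + gt[1] * ref.RasterXSize
ymin = ymax + gt[5] * ref.RasterYSize

gdal.Warp(
    "gmis_20m.tif",
    "gmis_30m.tif",
    dstSRS=ref.GetProjection(),
    xRes=20, yRes=20,
    outputBounds=(xmin, ymin, xmax, ymax),  # snap to the Sentinel-2 grid
    resampleAlg="near",                     # nearest neighbor, as in the text
)
```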

2.4. Global Impervious Surface Area (GISA) Layer

The GISA layer was provided for the year 2016 using a full time series of imagery from Sentinel-2 at 10 m and Sentinel-1 SAR at 10 m resolution [94]. The dataset is a binary layer produced at a global scale with an overall accuracy of about 86%. It was generated from a previous 30 m Landsat-based layer [95] as well as training data from OpenStreetMap and other sources combined with a complex set of spectral, spatial, temporal, and geometric rules.

2.5. Global 2010 Tree Cover and Global Forest Loss Year

The 30 m Landsat-derived global 2010 tree cover layer contains a percentage of canopy closure for trees greater than 5 m in height [96]. We extracted all pixels greater than 10% canopy closure as tree cover pixels. Secondly, we used the global forest loss year product version 1.10 provided by the Global Land Analysis and Dynamics (GLAD) laboratory to refine the tree cover layer by removing any tree cover pixels with a forest cover loss year between 2010 and 2022. This 30 m layer was subsequently resampled, aligned, and reprojected to match the 20 m Sentinel-2 imagery for each of our study sites. The datasets are available here: https://glad.umd.edu/dataset (accessed on 13 October 2022).
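A minimal sketch of this refinement, assuming the GLAD convention that the loss-year raster encodes year minus 2000 (0 = no loss) and illustrative variable names:

```python
import numpy as np

def refined_tree_mask(treecover2010: np.ndarray, lossyear: np.ndarray) -> np.ndarray:
    """Binary tree mask: >10% canopy closure in 2010, minus pixels lost 2010-2022.
    `lossyear` encodes the loss year as (year - 2000); 0 means no loss."""
    trees = treecover2010 > 10
    lost = (lossyear >= 10) & (lossyear <= 22)
    return trees & ~lost
```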

2.6. Global Land Use/Land Cover (Esri)

The Sentinel-2 10 m land use/land cover product was provided on an annual basis for each year of 2017–2022. It was developed using a UNet convolutional neural network (CNN) architecture trained on manually labeled points across the world, resulting in nine land cover classes [27]. Because of discrepancies between land cover classes in this dataset and our study, we merged land cover classes to enable direct comparison (as described in the Methods section below). The dataset was resampled to 20 m pixels (majority resampling) to correspond with our Sentinel-2 20 m imagery. For each study area, we acquired the product year corresponding to the date of our Sentinel-2 imagery. In this study, we refer to this dataset as Esri land cover because it is freely available for download on their website (https://livingatlas.arcgis.com/landcover/, accessed on 20 October 2023); however, it was produced by Impact Observatory.

2.7. Sentinel-2 Annual NDVI Percentile (5th and 95th) Composites

The harmonized Sentinel-2 surface reflectance product hosted on Google Earth Engine was used to evaluate how a series of post-classification corrections impacts land cover map accuracy. This harmonized dataset ensures compatibility between pre-2022 and post-2022 imagery, between which the pre-processing methods of the Sen2Cor processing system changed. Sentinel-2 surface reflectance scenes were collected with less than 20% cloud coverage per scene, and the Scene Classification Layer (SCL) was used to further mask out cloud, thin cirrus, and shadow pixels. For each study site, one year of imagery was compiled, corresponding to the year of the original two-date images. This image set was used to compute the 5th percentile and the 95th percentile of NDVI. The 5th percentile is representative of local minima and is used instead of the minimum NDVI because occasional undetected cloud or shadow pixels would otherwise persist into the final output; the 95th percentile layer is likewise used for the temporal corrections.
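A sketch of how such composites can be produced with the Earth Engine Python API (the area of interest, date range, and site-specific details shown here are placeholders):

```python
import ee
ee.Initialize()

def mask_scl(img):
    """Keep clear pixels: drop cloud shadow (3), clouds (8, 9), and cirrus (10)."""
    scl = img.select("SCL")
    clear = scl.neq(3).And(scl.neq(8)).And(scl.neq(9)).And(scl.neq(10))
    return img.updateMask(clear)

def add_ndvi(img):
    return img.addBands(img.normalizedDifference(["B8", "B4"]).rename("NDVI"))

aoi = ee.Geometry.Point([2.35, 48.85])  # placeholder AOI; one per study site
col = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
       .filterBounds(aoi)
       .filterDate("2022-01-01", "2023-01-01")             # year of the two-date pair
       .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
       .map(mask_scl)
       .map(add_ndvi))

ndvi_pct = col.select("NDVI").reduce(ee.Reducer.percentile([5, 95]))
# Output bands: NDVI_p5 and NDVI_p95
```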

3. Methods

The objective of this study is to devise a method to automatically generate training data and subsequently map land cover types at 20 m Sentinel-2 resolution. We tested across seven study sites representative of most global landscapes (except for arctic areas and extremely mountainous regions). The method should not require manually generated training data labels. Furthermore, this study endeavors to create a method that maintains similar accuracy levels to deep-learning-based approaches that use large volumes of manually labeled training data. The third objective is to evaluate how accuracy varies when mapping land cover using two dates of Sentinel-2 imagery versus the same procedure but with the addition of an annual NDVI 5th and 95th percentile composite and a series of post-classification corrections based on these layers. The penultimate objective is to evaluate spatial variation and accuracy differences when generating nine-class land cover, six-class land cover, and five-class land cover products. Lastly, for comparison purposes, this study seeks to evaluate how the land cover models from this study compare with those produced by a deep-learning-based model using a full annual Sentinel-2 time series and manually labeled training points (Esri, Redlands, CA, USA; Impact Observatory, Washington, DC, USA).
The general approach of this study is visualized in Figure 1. For each of the seven study sites, a Sentinel-2 surface reflectance image is acquired at 20 m spatial resolution during both winter and non-winter seasons, respective to each study site. Next, the ancillary datasets are processed, reprojected, and aligned to the Sentinel-2 datasets for each study area. The global tree cover extent layer is updated to 2022 and the global AIS dataset is resampled to 20 m, and both are converted to binary bitmasks (forest/not-forest and AIS/not-AIS, respectively). The logical decisions used in this study are based on early decision tree approaches where entire images were classified using related functions [20,21]. In contrast, our approach uses these logical decision functions in conjunction with ancillary layers to sample only a portion of the image, which is then classified using a random forest. The image classification approach is loosely based on prior research from single land cover types (water, built-up) and expanded to a general nine-class land cover model [97,98]. Our logical decision function threshold values for water and AIS were taken from the two previously mentioned studies. We also introduce additional novelties in this study, such as temporal corrections with the 95th and 5th percentile annual NDVI layer and associated spatial and spectral decision functions, as well as comparison among six-class, five-class, and nine-class land cover schemes, which has not been thoroughly computed in the literature. The following sections describe the specific steps used in this study.
A texture layer is generated based on the standard deviation of the difference of NDVI between the winter and non-winter imagery (Equation (1)) within a 3 × 3 pixel window. This texture layer is used during the creation of the binary AIS masks. The texture layer highlights areas of spatial variation in spectral signatures that are common in urban areas (e.g., different building materials, shadows, and mixed pixels) and is typically less common in areas confused with AIS, such as bare ground, where the spectral signature tends to be more consistent in neighboring pixels, and through time.
$$\mathrm{texture}_{NDVI} = \mathrm{SD}_{kernel=3}\left(\left|\,NDVI_{non\text{-}winter} - NDVI_{winter}\,\right|\right) \tag{1}$$
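A minimal implementation sketch of Equation (1), computing the moving-window standard deviation with two mean filters (variable names are illustrative):

```python
import numpy as np
from scipy import ndimage

def ndvi_difference_texture(ndvi_nonwinter: np.ndarray, ndvi_winter: np.ndarray) -> np.ndarray:
    """Equation (1): standard deviation of |NDVI_non-winter - NDVI_winter|
    within a 3 x 3 moving window."""
    diff = np.abs(ndvi_nonwinter - ndvi_winter)
    # std = sqrt(E[x^2] - E[x]^2), computed exactly with two mean filters
    mean = ndimage.uniform_filter(diff, size=3)
    mean_sq = ndimage.uniform_filter(diff * diff, size=3)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
```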

3.1. Training Dataset Creation and Intermediate Output Land Cover Maps

The next steps focus on creating binary masks for each of the land cover types of interest. In this case, we initially generate a nine-class land cover product consisting of the classes shown in Table 2.
The binary masks for each land cover type are intended to detect optimal cases (i.e., central examples) of each land cover class. Euclidean distance layers are generated for the GISA AIS layer, global forest cover extent layer, and the SCL water mask provided with the Sentinel-2 imagery (using both the winter and non-winter layers). The binary masks for each of the nine land covers are then generated based on logical decision functions using these data layers and spectral values. The logical decision functions were set with thresholds developed through experimentation over each study site and via findings from previous studies [97,98,99,100,101]. They were used to generate intermediate land cover layers for the creation of sample points, which are then used for threshold optimization and the creation of the final land cover maps. The proceeding intermediate land cover masks are created for the purpose of automatic training data selection within each site and are described in Table 3:
  • deciduous trees: 0.60 ≥ winter NDVI ≥ 0.25 AND non-winter NDVI min ≥ 0.6 AND B11 ≤ 2000 AND AIS distance ≥ 200 m (to avoid mixed pixels) AND Global Tree Layer Percent ≥ 10
  • evergreen trees: (winter NDVI min ≥ 0.65 AND B11 ≤ 1600 AND AIS distance ≥ 200 m) AND (non-winter NDVI min ≥ 0.60 AND AIS distance ≥ 200 m), AND Global Tree Layer Percent ≥ 10
    • evergreen trees: if latitude is between −10 and 10 degrees then deciduous trees are changed to evergreen.
  • vegetation (low NDVI): ((0.60 ≥ winter NDVI ≥ 0.30 AND B11 ≥ 1200) AND (0.60 ≥ non-winter NDVI ≥ 0.4 AND B11 ≥ 1200)) AND treeCover distance ≥ 30 m AND AIS distance ≥ 200 m,
  • vegetation (high NDVI): ((non-winter NDVI ≥ 0.60) AND (Winter NDVI ≥ 0.20)) AND AIS distance ≥ 200 m AND treeCover distance > 100 m,
  • bare ground: (−0.1 ≤ NDVI AND winter NDVI ≤ 0.38 AND non-winter NDVI ≤ 0.39) AND AIS distance > 600 m AND B11 > 600,
    • bare ground (beach): an additional mask for bare ground adjacent to water. It uses the same decision logic as bare ground, but the Euclidean distance must be less than 50 m from SCL water. This mask gets combined with bare ground after training data generation.
  • water: ((AWEI ≥ −0.03 AND NDWI ≥ −0.03 AND MNDWI ≥ −0.03) AND B11 < 1000) for both winter and non-winter imagery [98]; if only one image date meets these criteria, the pixel is labeled ephemeral water.
  • wetland: (0.1 ≤ winter NDVI ≤ 0.65) AND (0.1 ≤ non-winter NDVI ≤ 0.60) AND (−0.20 ≤ MNDWI both dates ≤ 0.60) AND AIS distance ≥ 200 m AND distance from SCL water ≤ 100 m.
  • AIS (high density): (−0.5 ≤ Winter NDVI ≤ 0.35) AND (−0.5 ≤ Non-winter NDVI ≤ 0.60) AND (NDVI difference texture ≥ 3.5) AND GMIS AIS Percent Pixel Cover > 50%,
  • AIS (low density): (−0.5 ≤ Winter NDVI ≤ 0.5) AND (−0.5 ≤ Non-winter NDVI ≤ 0.75) AND (NDVI difference texture ≥ 3.5) AND (10% ≤ GMIS AIS Percent Pixel Cover < 50%).
To account for the effects of seasonal snow cover, the Sentinel-2 SCL snow mask is used. If a pixel is flagged as snow in the winter image, then only the non-winter image criteria need to be met from the logical decision functions to create the land cover masks [61]. There were no scenes with snow cover in both winter and non-winter dates.
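As an illustration, the bare ground decision function listed above can be expressed as a vectorized boolean mask (a sketch with illustrative argument names; thresholds are copied from the list, and the −0.1 lower bound is assumed to apply to both dates):

```python
import numpy as np

def bare_ground_mask(ndvi_w: np.ndarray, ndvi_nw: np.ndarray,
                     b11: np.ndarray, ais_dist_m: np.ndarray) -> np.ndarray:
    """Sketch of the bare ground decision function: low NDVI on both dates,
    far from AIS, and Band 11 above a minimum reflectance."""
    return ((ndvi_w >= -0.1) & (ndvi_w <= 0.38) &
            (ndvi_nw >= -0.1) & (ndvi_nw <= 0.39) &
            (ais_dist_m > 600) & (b11 > 600))
```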
After the nine binary land cover masks are created, they are subsequently used to generate training sample points for the image classifier. Specifically, the non-water masks are each stratified into three strata (using Jenks natural breaks) based on the NIR band value [102], while the water mask is stratified based on the NDWI. These strata are then used to generate equalized random sampling points to enable spectrally balanced training data. After stratification, 800 training points are generated for each land cover type. The beach bare ground is combined with the bare ground mask, and ephemeral water is combined with the wetland mask. Two training datasets are created for each mask: one “no-snow” and one “snow”. For the “no snow” training data, both the non-winter and winter criteria must be met, whereas for the “snow” set, the non-winter criteria must be met, and the winter scene must be labeled as snow by the SCL layer. Previous work has shown that classifier accuracy is not substantially affected by additional training points after a minimum training point count is achieved [103].
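A sketch of this stratified, equalized sampling for one class mask, using the jenkspy package as one option for Jenks natural breaks (names and the fixed random seed are illustrative):

```python
import numpy as np
import jenkspy  # one option for computing Jenks natural breaks

def stratified_points(mask: np.ndarray, nir: np.ndarray, n_points: int = 800, rng=None):
    """Draw ~n_points pixel coordinates from a binary class mask, balanced
    across three Jenks strata of the NIR band (sketch; names illustrative)."""
    rng = rng or np.random.default_rng(0)
    rows, cols = np.nonzero(mask)
    values = nir[rows, cols].astype(float)
    # For full scenes, consider subsampling `values` first: Jenks is O(n^2)-ish.
    breaks = jenkspy.jenks_breaks(values.tolist(), 3)  # [min, b1, b2, max]
    strata = np.digitize(values, breaks[1:-1])         # stratum id 0, 1, or 2
    picks = []
    for s in range(3):
        idx = np.nonzero(strata == s)[0]
        take = min(n_points // 3, idx.size)            # equalized sampling per stratum
        picks.append(rng.choice(idx, size=take, replace=False))
    sel = np.concatenate(picks)
    return rows[sel], cols[sel]
```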
The training data samples, two dates of imagery, NDVI texture, water texture, and spectral indexes (NDVI, NDWI, AWEI, and MNDWI) are used to fit a random forest classifier with the following parameters: 500 trees, max_depth = 30, and max_samples = 500, as implemented in scikit-learn [104,105].
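Translated to scikit-learn, the fitting step might look like the following sketch, where X_train, y_train, and feature_cube are assumed to hold the sampled features, integer labels, and full-scene feature stack, respectively:

```python
from sklearn.ensemble import RandomForestClassifier

# Parameters as reported in the text; bootstrap sampling (the default) is
# required for max_samples to take effect.
rf = RandomForestClassifier(n_estimators=500, max_depth=30, max_samples=500, n_jobs=-1)
rf.fit(X_train, y_train)  # X_train: (n_samples, n_features), y_train: class labels

# Predict the full scene by flattening the (rows, cols, n_features) cube.
labels = rf.predict(feature_cube.reshape(-1, feature_cube.shape[-1]))
land_cover = labels.reshape(feature_cube.shape[:2])
```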

3.2. Accuracy Assessment and Stepwise Threshold Adjustment

After the intermediate set of land cover maps was created, an accuracy assessment was conducted across each of the seven study sites. Pixels from the nine classes are selected via stratified random sampling for a total of 2858 points across the seven scenes (408 average points per scene) with a minimum of 30 points (when possible) per class to prevent under-sampling [100]. However, in one or two sites, we were unable to obtain 30 bare ground points more than 300 m apart from each other. We estimated standard errors for each class, which typically resulted in class-specific standard errors of 0.05 to 0.15 depending on the scene; this information was then used with binomial probability sampling to select the total number of points per scene [106,107]. A minimum distance of 300 m between points was specified to reduce spatial bias [108]. The ground truth for each point was interpreted using a combination of high-resolution Worldview imagery, the original Sentinel-2 imagery, and spectral indexes. The overall accuracies, user’s accuracies, producer’s accuracies, and F1 scores are reported, as well as confusion matrixes for comparison purposes [109]. Confidence intervals of 95% uncertainty for the overall accuracies are computed for comparison purposes following best practice recommendations in accuracy assessment [106].
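The overall accuracy confidence intervals can be approximated with a simple binomial (normal-approximation) estimator, sketched below; note that the paper follows the best-practice recommendations of [106], which may additionally weight by strata:

```python
import numpy as np

def overall_accuracy_ci(y_true, y_pred, z: float = 1.96):
    """Overall accuracy with a 95% normal-approximation (binomial) half-width,
    matching the "accuracy +/- x%" convention used in the comparisons."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = y_true.size
    p = float(np.mean(y_true == y_pred))
    half_width = z * np.sqrt(p * (1 - p) / n)
    return p, half_width

# e.g. p, hw = overall_accuracy_ci(truth, mapped)  ->  report as "p ± hw"
```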
After the ground truth interpretation, the points are then used for the stepwise determination of optimal thresholds for each binary land cover mask. This is done because the initial thresholds used in the intermediate maps were based on exploratory analysis only and could be improved upon. The following variables were evaluated:
  • bare ground: AIS minimum distance (0 m, 300 m, 600 m) and Band 11 minimum reflectance (0, 600, 1200);
  • vegetation (low NDVI): NDVI minimum (0.3, 0.4), NDVI maximum (0.5, 0.6), and Band 11 minimum (600, 1200);
  • vegetation (high NDVI): AIS minimum distance (0 m, 100 m, 200 m) and NDVI minimum (0.1, 0.2, 0.3);
  • water: minimum spectral index (AWEI, NDWI, MNDWI) value (−0.1, −0.05, −0.03, 0) and maximum Band 11 reflectance (none, 1000, 1500);
  • wetland: AIS minimum distance (0 m, 200 m), MNDWI minimum (−0.25, −0.20, −0.15), and NDVI minimum (−0.05, 0.0, 0.1).
An additional evaluation was performed where natural breaks (Jenks) separation was used as an adaptive threshold to differentiate several class pairs: deciduous and evergreen trees, AIS (low density) and AIS (high density), and vegetation (low NDVI) and vegetation (high NDVI). Note that for AIS (low density) and AIS (high density), prior research found GISA to be slightly more accurate than GMIS [97], and we used those findings as the final thresholds in our study: AIS (high density): winter NDVI maximum (0.15), non-winter NDVI maximum (0.25); AIS (low density): winter NDVI minimum (0.15), non-winter NDVI minimum (0.25), winter NDVI maximum (0.65), non-winter NDVI maximum (0.45) [97].
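Conceptually, the stepwise evaluation is a grid search over candidate thresholds; the sketch below shows the pattern for two of the variables, with build_land_cover and overall_accuracy as placeholders for the full mask-to-classification pipeline:

```python
import itertools

# Candidate values copied from the list above for two of the evaluated
# variables (keys and helper functions are illustrative placeholders).
grid = {
    "bare_ais_min_dist": [0, 300, 600],   # meters
    "bare_b11_min": [0, 600, 1200],       # reflectance
}

best = None
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    lc_map = build_land_cover(params)          # masks -> training -> random forest
    acc = overall_accuracy(lc_map, truth_points)
    if best is None or acc > best[0]:
        best = (acc, params)
print("optimal thresholds:", best)
```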

3.3. Model Correction Using Annual NDVI Time Series Statistics

The annual Sentinel-2 NDVI composites were generated to include the 95th and 5th percentile values per pixel within all imagery with less than 15% cloud cover for the year corresponding to the original two-date Sentinel-2 imagery for each study site. Further cloud masking was conducted using the SCL cloud, thin cirrus, and shadow masks. The 95th and 5th percentile values were used in lieu of a minimum and maximum to avoid false values due to clouds and shadows that are not detected by the SCL mask. These layers can be used to detect drastic spectral changes to a pixel that are missed by the two-date classification output. This model correction is built upon early logical decision tree-based land cover studies [20,21].
Using the 95th and 5th percentile NDVI layers together with the GISA AIS layer, the following modifications are made to the land cover model and are then compared against the unmodified model and the ground truth points:
  • The first correction is the conversion of water pixels from the two-date Sentinel-2 classification into wetland/ephemeral water pixels if the NDVI_95th_percentile value > 0.60.
  • The second correction converts vegetation (low NDVI or high NDVI) or bare ground pixels into wetland/ephemeral water pixels if the NDVI_05th_percentile value < −0.15.
  • The third correction converts bare ground pixels into vegetation (low NDVI) if the NDVI_95th_percentile value is between 0.5 and 0.7 or vegetation (high NDVI) if the NDVI_95th_percentile value is greater than 0.7.
  • The fourth modification converts likely false positive AIS pixels into vegetation (low NDVI or high NDVI) or bare ground based on the 2nd or 3rd correction if the AIS pixel is greater than 6 km from an AIS pixel in the GISA AIS layer.
  • Probable false wetland pixels are modified to vegetation (low NDVI or high NDVI) or bare ground based on the 2nd or 3rd modification function if the pixel is greater than 5 km from a water pixel. Probable false positive wetland pixels occurring in urban areas are converted to AIS pixels if they coincide directly with a GISA AIS pixel and the NDVI_95th_percentile value is less than 0.7.
  • Another modification changes evergreen tree pixels to deciduous tree pixels if the NDVI_5th_percentile value is less than 0.4. If a deciduous or evergreen tree pixel NDVI_5th_percentile value is less than 0.2, then it is converted to vegetation (high NDVI or low NDVI based on the original decision function).
  • Lastly, a decision function is created to modify probable false positive AIS (low density and high density) pixels to bare ground based on how large clusters of AIS pixels in urban areas tend to have heterogeneous spectral signatures and often include a mix of vegetation, trees, or other mixed pixels that increase the NDVI above a nominal level. Comparatively, large swaths of bare ground found in arid regions will tend to have consistently low NDVI values through time. This modification changes an AIS (low density or high density) pixel to bare ground if the maximum NDVI_95th_percentile value within a 55 × 55 pixel window (~1.1 km × 1.1 km) does not exceed 0.3.
These modifications are evaluated in the accuracy assessment and compared against the unmodified models.
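The first three corrections can be sketched as vectorized raster operations (class codes and names are illustrative; ndvi_p95 and ndvi_p05 denote the annual composites):

```python
import numpy as np

# Illustrative class codes for the sketch below.
WATER, WETLAND, BARE, VEG_LO, VEG_HI = 1, 2, 3, 4, 5

def apply_corrections(lc: np.ndarray, ndvi_p95: np.ndarray, ndvi_p05: np.ndarray) -> np.ndarray:
    """First three post-classification corrections from the list above (sketch)."""
    lc = lc.copy()
    # 1. vegetated "water" pixels become wetland / ephemeral water
    lc[(lc == WATER) & (ndvi_p95 > 0.60)] = WETLAND
    # 2. vegetation or bare pixels that flood during the year become wetland
    lc[np.isin(lc, [VEG_LO, VEG_HI, BARE]) & (ndvi_p05 < -0.15)] = WETLAND
    # 3. bare pixels that green up during the year become vegetation
    lc[(lc == BARE) & (ndvi_p95 >= 0.5) & (ndvi_p95 <= 0.7)] = VEG_LO
    lc[(lc == BARE) & (ndvi_p95 > 0.7)] = VEG_HI
    return lc
```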

3.4. Adaptive Regional Thresholds and Intercomparison

After threshold evaluation was completed, optimal thresholds were selected for each of the three regional threshold groups: temperate region (U.K., U.S., and FR imagery), tropics region (IN and BR imagery), and arid region (EG and QA imagery). The thresholds were also evaluated and compared globally (optimization across all seven scenes).
The nine-class land cover dataset was condensed into six-class and five-class datasets for evaluation and comparison purposes. The specific classes are shown in Table 2, where the six-class and five-class models combine similar classes for the primary purpose of comparing against the Esri/Impact Observatory land cover dataset due to differences in land cover class definitions. The eight classes (and snow cover) produced in the Esri model do not align with the nine classes from our model, making direct comparison impossible without merging. Thus, we combine the Esri classes of “rangeland” and “crops” into “vegetation”. The remaining classes are generally suitable for comparison with the six-class condensed model from this study. It is important to point out that the Esri land cover classes are defined slightly differently than in our study, and these differences will be analyzed and explored in the Results and Discussion sections.
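The condensation can be implemented as a lookup-table remap, sketched below with illustrative class codes and assuming the two tree classes also merge, consistent with the single “trees” category reported in the Results:

```python
import numpy as np

# Illustrative nine-class codes (ordering assumed):
# 1 deciduous trees, 2 evergreen trees, 3 vegetation (low NDVI),
# 4 vegetation (high NDVI), 5 bare ground, 6 water, 7 wetland,
# 8 AIS (high density), 9 AIS (low density)
lut = np.arange(10)
lut[2] = 1            # evergreen -> trees
lut[4] = 3            # high NDVI -> vegetation
lut[9] = 8            # AIS (low density) -> AIS
six_class = lut[nine_class_map]     # vectorized remap of the label raster

lut5 = lut.copy()
lut5[7] = 6           # wetland merged into water for the five-class output
five_class = lut5[nine_class_map]
```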

4. Results

After the intermediate land cover maps were generated from the initial set of thresholds, ground truth points were created and interpreted on each of the seven sites and used to evaluate the stepwise threshold changes described in the Methods section.

4.1. Stepwise Thresholds of Land Cover Masks

To measure changes in model accuracy, stepwise threshold adjustments were used to evaluate 15 different decision functions for the binary land cover masks. Figure 2 illustrates how the accuracy metrics vary when the thresholds are changed for each binary land cover mask. Several notable variations were observed. In the IN scene, the F1 score varied substantially for the bare ground class as well as for both vegetation classes. The QA scene had notably high variation in water class F1 score and overall accuracy resulting from the different water thresholds, with overall accuracy ranging from about 0.6 to 0.78. The highest observed variation in accuracy for all scenes was for the vegetation (high NDVI) class, where most scenes had substantial variation in the class F1 score, overall accuracy, and merged (low NDVI and high NDVI vegetation classes combined) F1 score. A major contributor to this effect was probably the fact that vegetation (high NDVI) occupied a substantial proportion of land cover in many of the scenes. Evaluation of the natural breaks (Jenks) threshold set to “true” or “false” for separating vegetation (low NDVI and high NDVI), trees (deciduous and evergreen), and AIS (low density and high density) resulted in notable variation in accuracy metrics, especially with the AIS threshold in the QA scene. The main finding from this analysis is that threshold variation for vegetation and the Jenks method generally yielded substantial changes in accuracy for most sites. Meanwhile, for the other classes (bare ground, wetlands, water), threshold variation was less pronounced, and substantial accuracy variation was limited to select sites (i.e., QA for water, IN for bare ground).
Figure 3 shows histogram plots of thresholds from the top 50% of the most accurate combinations evaluated. This plot is separated into the respective regions (arid, temperate, and tropics), and it highlights several observed trends. For the bare ground mask, a 0 m minimum distance from the GISA AIS layer threshold was the most common result for the temperate scenes, whereas arid scenes performed best with the 300 m and 600 m distance thresholds. This is likely due to the combination of spectral similarity and high bare ground presence common among the two arid scenes. For the AIS natural breaks (Jenks) separation, there was no apparent impact for the arid and tropics regions, whereas enacting the natural breaks separation was common for the most accurate threshold combinations in the temperate scenes. For the wetland mask, the temperate region appeared to benefit the most from the 200 m minimum GISA AIS layer distance from wetland training pixels, whereas the trend was not clear in the other regions. This is likely because the wetland areas are very close to urban areas in the temperate scenes but less so in the other regions.
Based on evaluation against the 2858 accuracy assessment points (spread across the seven sites), the optimal threshold values were determined for each binary land cover mask (the training masks used for generating the random sample points for the random forest classifier). These thresholds were optimized by region (arid, temperate, and tropics) and globally (all seven sites combined) and are shown in Table 4. Use of the natural breaks (Jenks) method to separate AIS (low density) and AIS (high density) resulted in higher accuracy (temperate region) and minimal difference (arid, tropics), and separation of vegetation (high NDVI) and vegetation (low NDVI) using natural breaks (Jenks) resulted in minimal difference (arid, temperate) and reduced accuracy (tropics). The use of natural breaks (Jenks) to separate deciduous trees and evergreen trees resulted in minimal discernable differences across all regions.

4.2. Land Cover Maps with Optimized Thresholds and Temporal Corrections

The nine-class land cover model was generated for the seven study sites based on the regional models and global models for each site, as shown in Figure 4. The initial and corrected land cover outputs are shown for comparison and are zoomed over areas with variation for each study site. The BR site illustrates areas detected as bare ground in the regional and global models, which are modified to vegetation (high NDVI) in the corrected model due to the NDVI_95th_percentile layer, indicating the presence of vegetation in those pixels. A noticeable difference in evergreen tree presence between the global and regional models is also observed at this site, with the global model appearing to incorrectly label some pixels as vegetation (high NDVI). Within the EG site, the maps show prominent AIS commission errors along the edges of water bodies, as well as in hilly terrain on the east side of the image for both the global and regional models. Within the FR site, two notable patterns are observed: (1) groups of wetland pixels are detected over the urban area in both global models due to clouds in the winter image; however, this pattern does not appear in the regional model, likely due to the improved thresholds. (2) The bare ground detected over croplands in the initial models was detected as vegetation in the corrected models because vegetation presence was observed in the NDVI_95th_percentile composite. A similar pattern can be seen in the U.K. site. The IN and QA sites show examples where the global model appears to outperform the regional models by better mapping AIS areas; this is likely due in part to natural variation in the training data selected due to the random sampling and errors in the GISA AIS extent layer along wetlands. Figure 5 illustrates the variation and pixel agreement between the four models across the entire study area at a broader scale.

4.3. Nine-, Six-, and Five-Class Model Accuracies and Comparison with Esri Model

The overall accuracies are shown in Figure 6 for the nine-class, six-class, and five-class land cover models. The six-class output was created by merging the two AIS classes and the two vegetation classes from the nine-class output; the wetland and water classes were additionally merged to create the five-class output to enable comparison with the Esri land cover model. Figure 6 illustrates that the nine-class model’s overall accuracies ranged from 0.66 to 0.79 by scene. The corrected models show about a 5% average increase in accuracy, with very high increases in several scenes, such as the FR regional model (0.69 to 0.76), the QA regional model (0.64 to 0.72), and the U.K. regional model (0.69 to 0.77). Accuracy was slightly reduced when correcting the IN site (0.67 vs. 0.66). For the six-class models, the overall accuracies ranged from 0.76 to 0.88 (region-corrected model) and 0.73 to 0.87 (global-corrected model), with the global model yielding higher accuracy in some scenes; the uncorrected regional model for QA yielded very low accuracy (0.64) but much higher accuracy once corrected (0.72). The overall accuracy for the five-class models was 79.9% region, 85.1% region corrected, 81.0% global, and 84.6% global corrected. The accuracies for the six-class models were 76.85% region, 82.7% region corrected, 77.9% global, and 81.7% global corrected. The accuracies for the nine-class models were 68.4% region, 73.1% region corrected, 70.2% global, and 73.4% global corrected.
The overall accuracy of the six-class and five-class models was directly compared to the Esri deep learning model, as shown in Figure 6. Across the six-class land cover schemes, the Esri model performed less accurately than the global and region-corrected models for all sites, with overall accuracies ranging from 0.58 to 0.82 by scene. The lowest accuracy was seen in the IN site and the highest in the FR site. The performance was closer under the five-class scheme, where the Esri model performed more accurately than the uncorrected region model in 4/7 sites (U.K., QA, FR, BR). The five-class Esri model had drastically higher overall accuracies in all seven scenes when compared to the six-class Esri model. This is attributed largely to errors between the water and wetlands classes, which are elided when those cover types are merged in the five-class model.

4.4. Class-Specific Accuracies

The class-specific accuracies for the nine-class models are shown in Figure 7. Relatively higher changes in class-specific accuracy between the corrected and uncorrected models were seen in the bare ground, vegetation (low NDVI), and vegetation (high NDVI) classes. The region-corrected model generally performed more accurately than the global-corrected model in the two vegetation classes, which made up most of the land area in many of the scenes. However, some of the class-specific accuracies, such as AIS (low density and high density), were lower in the regional model than in the global model; these classes often made up a relatively small proportion of most scenes. The QA and EG sites had class-specific accuracies of 0 for evergreen trees, deciduous trees, or vegetation (low NDVI), as these classes were not detected by the models but made up only a small fraction of the landscape. Within the IN site, minimal vegetation (low NDVI) pixels were present, and they were not detected by the models.
The class-specific accuracies for the six-class models are shown in Figure 8. The Esri model appears to have markedly low accuracies in the wetlands and bare ground categories while typically exhibiting relatively higher accuracies in the water and AIS categories when compared to our study’s models. The Esri model exhibited class-specific accuracies above 0.95 for water in 6/7 scenes, with perfect accuracy in three scenes (U.K., QA, and FR). The global-corrected model generally yielded higher accuracies in the AIS category than the region-corrected model in most scenes, whereas the region-corrected model generally performed better in the vegetation category.
The class-specific accuracies for the five-class models are shown in Figure 9. The combined wetland/water class resulted in the Esri model exhibiting nearly comparable accuracy to the results of this study for that class, except for the IN site, where the Esri model struggled. Among the different classes, bare ground/sparsely vegetated typically had the lowest accuracy for both Esri and the models from this study. The vegetation class accuracy for the global and region-corrected models from this study exhibited the most drastic increases in accuracy when compared to the uncorrected models.

4.5. Confusion Matrix Comparison with Esri Model

The confusion matrixes for the five- and six-class models (regional models from this study and the Esri model) are shown in Table 5. The six-class region-corrected model had an overall accuracy of 82.8% (±3.1%) vs. the corresponding Esri model with 72.8% (±3.5%). For the five-class models, the region-corrected model was slightly more accurate than the six-class model at 85.1% (±2.9%), whereas the five-class Esri model had a drastic increase in accuracy over the corresponding six-class model, at 80.0% (±3.4%). Investigation of the confusion matrixes reveals several interesting patterns. For both six-class models, the wetlands category had the most omission errors (lowest producer’s accuracy). While the wetland omissions were balanced between trees, AIS, vegetation, and water for the region-corrected model, the Esri model had nearly all wetlands omission errors incorrectly mapped as water, with a sizable minority mapped as vegetation. This trend explains why the Esri model experiences a drastic increase in overall accuracy when converted into the five-class output. The second highest rate of omission errors was found in the bare ground category for both six-class models (producer’s accuracy of 62.4% and 81.9% for Esri and region-corrected model, respectively). Omitted bare ground pixels in the region-corrected model were mostly mapped as AIS pixels, likely due to the very similar spectral signatures. The Esri model, meanwhile, had relatively few bare ground pixels wrongly mapped as AIS, likely due to the contextual nature of the convolutional neural network. The majority of omitted bare ground pixels were incorrectly mapped as vegetation in the Esri model. Omission errors of trees in the Esri model were largely incorrectly mapped as AIS. Overall, for our study, wetland and bare ground had the highest omission errors, and AIS and wetland had the highest commission errors. Future improvements on these errors could yield drastic accuracy increases.
The confusion matrixes for the corrected nine-class global and region-corrected models are shown in Table 6. The region-corrected model yielded an overall accuracy of 73.2% (±1.9%) and the global-corrected model 73.4% (±1.8%); their 95% confidence intervals overlap, indicating that they are not significantly different. This overall difference is primarily due to poor AIS accuracy in the QA scene for the region model, as shown in Section 4.4. For both models, we can observe the most confusion between the two AIS land covers, between AIS and bare ground, and between the high and low NDVI vegetation classes. Interestingly, the region-corrected model had much lower producer’s accuracy for both AIS categories than the global-corrected model. The region-corrected model had substantially higher producer’s accuracy for the low NDVI vegetation category. The user’s accuracy was similar between the models, with more pronounced differences between evergreen trees, deciduous trees, and low NDVI vegetation.

4.6. Spatial Intercomparison of the Region-Corrected and Esri Models

Some of the previously mentioned omission or commission errors can be observed in Figure 10, where the region-corrected model and Esri model are shown zoomed to areas of notable variation within each study area. Within all scenes except QA, the AIS area appears to be overestimated in the Esri model. Many of the pixels surrounding true AIS areas are incorrectly classified as AIS. If this were a land use classification, then the Esri model would likely have very high AIS accuracy; however, for land cover, it is less accurate than our model in these locations. The Esri model is likely to have fewer AIS commission errors in agricultural areas (e.g., as seen in the U.K.) but more AIS omission errors from roadways and infrastructure outside of towns (e.g., as seen clearly in BR and EG scenes). Another interesting trend is observed in EG, where the Esri model maps swaths of pixels as vegetation, whereas the region-corrected model maps them as bare ground. Most of these pixels have very low NDVI values, typically lower than 0.20, but some do appear to be sparsely vegetated. This variation is likely due to a difference in definitions: bare ground/sparsely vegetated are grouped in one category for our region-corrected model, whereas the Esri model appears to assign sparsely vegetated pixels to the vegetation category. Another interesting model disagreement is seen in the IN scene. An area of seasonally inundated rice fields within the North 24 Parganas district is mapped by the Esri model as vegetation and by the region-corrected model as wetlands (depending on definitions, either could be considered correct). Another large area of seasonally inundated aquaculture is mapped by the Esri model as water and by the region-corrected model as wetlands.
While the visualizations of spatial variation between the Esri model and the region-corrected model provide illustrative examples, they do not represent the entire study area. Figure 11 summarizes the variation between the two models and includes the top 10 most common variations in land cover between them. Overall, the three locations with the most pixel disagreement were IN at 32.5%, EG at 28.1%, and U.S. at 23.3%. Across all seven sites, the three most common land cover class pixel differences for the six-class land cover output were (1) vegetation (Esri), trees (region corrected), (2) vegetation (Esri), bare (region corrected), and (3) AIS (Esri), trees (region corrected). Vegetation-bare made up more than 80% of the pixel disagreement in EG and 40% in QA while occupying a drastically smaller proportion of disagreement in non-arid regions. Vegetation-AIS had the highest proportion of pixel disagreement for the U.K. scene at 25%; some of this disagreement is attributed to AIS commission errors in the region-corrected model, along with AIS rural road pixels omitted by the Esri model. Vegetation-trees and trees-vegetation together occupied nearly 80% of pixel disagreement in the BR scene, which contained many mixed pixels, young forest, and agricultural areas that posed a challenge to both models.

4.7. Areal Variation between the Different Land Cover Models

The corrections made to the regional and global land cover models resulted in improvements to the overall accuracy as previously described. The most impactful correction across all scenes occurred in EG, where the 55 × 55 pixel window returning the maximum NDVI_95th_percentile corrected AIS commission errors over bare ground wherever that maximum did not exceed 0.3. The changes for EG were drastic (as seen earlier in Figure 4). For the global six-class model, the AIS area dropped from 9.8% to 5.1% with this correction, whereas in the regional six-class model, the impact was smaller, with a decrease from 4.2% to 2.8% due to the removal of large swaths of false positive AIS area occurring on bare ground.
The areal variation in land cover classes for the different models is shown in Figure 12 for the nine-class models and Figure 13 for the six-class models. For the six-class model, we can see that corrections result in changes to many of the land cover classes, with the highest changes typically in the vegetation and bare ground classes, but also the AIS, water, and trees categories to a lesser extent. Two key differences are as follows: (1) U.K. bare ground decreases from 5.1% to 0.4% in the global and regional models (likely due to fallow agricultural fields that eventually become vegetated), and (2) vegetation varies in the EG scene between Esri and the global and regional models (25.5% vs. less than 1.5%). Much of this vegetated area predicted by Esri is detected as bare ground in the global and regional models from this study.

5. Discussion

The land cover maps from this study generally resulted in similar accuracies to those produced by other studies that leveraged more intensive time series of data or deep learning approaches. Considering this, our study’s novelty is that it offers a unique, less data-intensive way to generate accurate land cover maps, combining automated data labeling, a novel NDVI texture metric, region- and global-based models, and a temporal correction, a combination not found in prior studies. In addition to the framework, a main contribution of this study is the demonstration not only that two dates of imagery can be used to create an accurate annual land cover product but also that temporal corrections using NDVI 5th and 95th percentile composites can yield about 5% higher accuracy. The most drastic improvement from the temporal corrections was the conversion of false positive AIS to bare ground in EG, where the NDVI 95th percentile combined with a pixel neighborhood reduced these errors by about 33% in the region-corrected model.
This framework enables the on-demand generation of land cover types across international study sites. The temporally corrected region-based models generally, but not always, yielded higher accuracy than the global model. The evaluation was carried out across seven geographically diverse sites using a robust and balanced accuracy assessment scheme encompassing the entirety of each Sentinel-2 granule (except the no-data areas in the BR scene and, in the QA and U.K. scenes, easy-to-classify ocean areas more than 1 km from the shoreline, which were excluded). In comparison to the Esri model, our study's region-corrected model outperformed it in the six-class scheme (82.8% (±3.1%) vs. 72.8% (±3.5%)). However, under the five-class scheme, we could not definitively say our model was more accurate at a 95% confidence interval (85.1% (±2.9%) region-corrected vs. 80.0% (±3.4%) Esri). It should be noted that the Esri model had its own accuracy assessment, which reported about 85% accuracy for their nine-class model, exceeding the accuracy of our nine-class model (73.1%); however, direct comparison was not possible due to class definition differences. The 10 m Dynamic World land cover dataset, produced globally with nine land cover classes, had F1 scores ranging from 77% to 88% depending on the assessment scheme, more accurate than our nine-class model (77-88% vs. 73.1%) [33]. However, that approach used fully convolutional neural networks and cloud computing with a large volume of time series imagery, compared to our less data- and compute-intensive approach. One study that used shallow machine learning methods and semi-automated training data labels to produce a thirteen-class land cover map yielded 86% overall accuracy; however, it was geographically limited to Europe [28]. Further comparison against a global 2019 twelve-class Landsat-based classifier found similar, but slightly higher, accuracy levels than our study (78.3% vs. 73.1%) [90].
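The overall accuracies compared above follow directly from the confusion matrices in Table 5 and Table 6. A simple sketch of the computation is below; note that the normal-approximation interval shown assumes simple random sampling and is narrower than the intervals reported in this paper, which reflect the stratified design.

```python
import numpy as np

def overall_accuracy_ci(confusion, z=1.96):
    """Overall accuracy and the half-width of a normal-approximation 95% CI
    from a square confusion matrix (rows = mapped class, cols = reference)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()                    # total accuracy assessment points
    oa = np.trace(cm) / n           # proportion of correctly labeled points
    return oa, z * np.sqrt(oa * (1.0 - oa) / n)
```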
There are several important limitations to mention in this study. While we used robust accuracy assessments and sampling as guided by the literature [106], we were only able to evaluate seven international sites; although diverse, this evaluation set did not include high-latitude locations or extremely mountainous terrain. For the ancillary data layers, several alternative AIS or forest cover products could likely be interchanged, but this study focused on developing a framework for automated classification, and evaluating different ancillary layers was beyond its scope. Furthermore, the ancillary layers (forest cover, AIS, and SCL water) used in modified form as training data may contain their own errors (e.g., false positive AIS), which can carry over into the output produced from this study (as was very evident in the QA and EG scenes). The GISA and forest cover datasets were both produced from 30 m Landsat imagery, which has different pixel alignment and size than the 20 m Sentinel-2 data used in this study. While effort was taken to resample and align these layers, the discrepancy could have introduced some errors, which are nonetheless reflected in our accuracy assessment results. For the stepwise threshold evaluation on the binary training masks, we did not test all possible combinations due to computational and time constraints, but we demonstrated that accuracy typically did not vary drastically. It is also worth mentioning that the GISA layer dates from 2016 and is now outdated; however, considering that AIS areas almost never decrease, it remains acceptable for the selection of training data. These topics are beyond the scope of this article but would be of interest for future research.
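A hedged sketch of the resampling step described above, using rasterio's warp utilities, is shown below; the function name and arguments are illustrative, and nearest-neighbor resampling is chosen so the binary labels of the 30 m ancillary layers survive the move to the 20 m Sentinel-2 grid.

```python
import numpy as np
from rasterio.warp import reproject, Resampling

def align_to_sentinel2(src, src_transform, src_crs,
                       dst_shape, dst_transform, dst_crs):
    """Reproject a 30 m ancillary raster onto the 20 m Sentinel-2 grid.
    Transforms, CRS objects, and the output shape are supplied by the caller."""
    dst = np.zeros(dst_shape, dtype=src.dtype)
    reproject(source=src, destination=dst,
              src_transform=src_transform, src_crs=src_crs,
              dst_transform=dst_transform, dst_crs=dst_crs,
              resampling=Resampling.nearest)  # preserves categorical labels
    return dst
```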
The analysis indicated that the Esri land cover layer may define some land cover classes differently than our study. For example, their model appears to include sparsely vegetated pixels in the vegetation categories, whereas this study included them with bare ground. Furthermore, the Esri model appeared to label any pixel containing water at any point during the year as water, whereas we considered such a pixel to be wetland/ephemeral water. Neither definition is necessarily more correct than the other, but the differences are not easily rectifiable in this study. The accuracy levels we report for the Esri model may therefore appear lower than in their original studies because we evaluated the Esri model against our ground truth and our own definitions of each land cover type. The purpose of the five-class and six-class model comparisons was to close the gap in land cover definitions between the two studies.
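Closing that gap amounts to relabeling both products into the coarser shared schemes of Table 2; a minimal sketch with hypothetical integer class codes is shown below.

```python
import numpy as np

# Hypothetical codes 1-9 for this study's nine classes, in Table 2 order.
NINE_TO_SIX = {1: 1, 2: 1,   # deciduous/evergreen trees      -> trees
               3: 2, 4: 2,   # low/high NDVI vegetation       -> vegetation
               5: 3,         # bare ground/sparse vegetation  -> same
               6: 4, 7: 4,   # high/low density AIS           -> AIS
               8: 5, 9: 6}   # wetland/ephemeral water; water
SIX_TO_FIVE = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 5}  # merge water with wetland

def remap(labels, table):
    """Vectorized relabeling of a classified raster via a lookup table."""
    lut = np.arange(max(table) + 1, dtype=np.uint8)  # identity by default
    for src, dst in table.items():
        lut[src] = dst
    return lut[labels]
```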
While our study evaluated the framework across seven international heterogeneous sites, future expansion to a continental or global scale would yield significant insights; however, it would require integration with cloud computing frameworks due to the high data storage and computational requirements, as well as more extensive accuracy assessment. We expect that the workflow would need further refinements in extremely mountainous and arctic regions, where shadows and terrain drastically impact spectral signatures [110]. Some efforts have been made to overcome these issues in land cover mapping applications using advanced segmentation and convolutional neural networks [111]. Even with these shadow correction advancements, it remains impossible to recover spectral values when a pixel is completely occluded (i.e., pixel value = 0), although adjustments can be made when values are nonzero [112].

6. Conclusions

This study developed an approach to accurately map land cover types using two dates of Sentinel-2 20 m imagery and ancillary layers, with automatically generated training data for subsequent classification using a random forest. The framework used logical decision functions based on Euclidean distances, texture, and spectral index values to create binary land cover masks representative of central/ideal pixel conditions, which were then used to sample spectrally balanced training data for random forest classification. A total of 2858 stratified random accuracy points spread across the seven geographically diverse study sites were generated on this intermediate set of land cover maps and then used for stepwise threshold optimization of the binary land cover training masks. The thresholds were optimized globally (across all seven scenes) and regionally (arid, temperate, and tropics). Temporal corrections, including a novel texture filter to reduce confusion between AIS and bare ground, were applied post-classification using annual composites of the 95th and 5th percentiles of NDVI; these corrections yielded an average accuracy increase of 5% for the region models and 3% for the global models (e.g., 68.4% vs. 73.1% for the nine-class region model). The region model was slightly less accurate than the global model; however, the region-corrected models were more accurate than the global-corrected models (except for the nine-class scheme). The six-class and five-class region-corrected models achieved accuracies of 82.8% (±3.1%) and 85.1% (±2.9%), respectively. A comparison of these two models against a deep-learning-based model from Esri found that the Esri models performed with slightly lower accuracy, at 72.8% (±3.5%) and 80.0% (±3.4%) for the six-class and five-class schemes, respectively. Ultimately, this study devised a methodology to automatically and accurately map land cover type with two dates of Sentinel-2 imagery and shallow machine learning models, without a compute-intensive deep learning or dense time series approach. Future work could explore the optimization of this workflow in challenging mountainous terrain and refinements as new ancillary layers are released. As compute capabilities increase, conversion of the approach toward a deep learning classifier could also yield improvement.
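As a compact, end-to-end illustration of the workflow summarized above, the sketch below builds three of the binary training masks from the tropics thresholds in Table 4, draws a balanced sample, and fits a random forest. All array names, the sample size, and the three-mask subset are illustrative simplifications of the full nine-mask procedure, and the multi-index water rule is interpreted here as requiring each index to exceed the threshold.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_masks(ndvi, band11, mndwi, ndwi, awei, gisa_distance_m):
    """Three binary training masks using the tropics column of Table 4;
    band11 is Sentinel-2 SWIR in scaled reflectance units, distances in m."""
    veg_high = (ndvi >= 0.1) & (gisa_distance_m >= 100)
    bare = (gisa_distance_m >= 300) & (band11 >= 1200)
    water = ((mndwi >= -0.05) & (ndwi >= -0.05) & (awei >= -0.05)
             & (band11 <= 1500))
    return [veg_high, bare, water]

def train_random_forest(masks, feature_stack, n_per_class=2000, seed=0):
    """Draw an equal number of pixels per mask and fit the classifier;
    feature_stack has shape (bands, rows, cols)."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for label, mask in enumerate(masks):
        rows, cols = np.nonzero(mask)
        pick = rng.choice(rows.size, size=min(n_per_class, rows.size),
                          replace=False)
        X.append(feature_stack[:, rows[pick], cols[pick]].T)
        y.append(np.full(pick.size, label))
    return RandomForestClassifier(n_estimators=200).fit(np.vstack(X),
                                                        np.concatenate(y))
```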

Author Contributions

Conceptualization, K.L.; Methodology, K.L. and F.D.O.; Software, F.D.O.; Validation, K.L., F.D.O. and E.S.; Formal Analysis, K.L. and F.D.O.; Data Curation, K.L., F.D.O. and E.S.; Writing—Original Draft Preparation, K.L.; Writing—Review and Editing, K.L., F.D.O. and E.S.; Visualization, K.L.; Supervision, K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the U.S. Army Corps of Engineers, Engineer Research and Development Center (ERDC), Geospatial Research business area. No external funding was received for this study. Permission to publish was granted by the ERDC Public Affairs Office. Any opinions expressed in this paper are those of the authors and are not to be construed as official positions of the funding agency.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data used in this study were accessed from publicly available repositories. The Sentinel-2 imagery was provided by the European Space Agency (https://scihub.copernicus.eu/ accessed on various dates, 2022 and 2023). The global forest cover extent and loss layers were obtained from the University of Maryland, Global Land Analysis & Discovery (GLAD) Group (https://www.glad.umd.edu/dataset (accessed on 13 October 2022)). The GISA AIS layer was downloaded from https://zenodo.org/records/5791855 (accessed on 15 October 2022).

Acknowledgments

The authors thank Sean P. Griffin for discussion and support, Sadiq I. Khan for review and suggestions, and Jean Nelson and Nicole Wayant for project management.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Pielke, R.A., Sr. Land use and climate change. Science 2005, 310, 1625–1626. [Google Scholar] [CrossRef]
  2. Pielke, R.A., Sr.; Pitman, A.; Niyogi, D.; Mahmood, R.; McAlpine, C.; Hossain, F.; Goldewijk, K.K.; Nair, U.; Betts, R.; Fall, S.; et al. Land use/land cover changes and climate: Modeling analysis and observational evidence. Wiley Interdiscip. Rev. Clim. Chang. 2011, 2, 828–850. [Google Scholar] [CrossRef]
  3. Hong, C.; Burney, J.A.; Pongratz, J.; Nabel, J.E.; Mueller, N.D.; Jackson, R.B.; Davis, S.J. Global and regional drivers of land-use emissions in 1961–2017. Nature 2021, 589, 554–561. [Google Scholar] [CrossRef] [PubMed]
  4. Teixeira, Z.; Teixeira, H.; Marques, J.C. Systematic processes of land use/land cover change to identify relevant driving forces: Implications on water quality. Sci. Total Environ. 2014, 470, 1320–1335. [Google Scholar] [CrossRef]
  5. Marion, J.W.; Zhang, F.; Cutting, D.; Lee, J. Associations between county-level land cover classes and cyanobacteria blooms in the United States. Ecol. Eng. 2017, 108, 556–563. [Google Scholar] [CrossRef]
  6. Ma, J.; Jin, S.; Li, J.; He, Y.; Shang, W. Spatio-temporal variations and driving forces of harmful algal blooms in Chaohu Lake: A multi-source remote sensing approach. Remote Sens. 2021, 13, 427. [Google Scholar] [CrossRef]
  7. Baidya Roy, S.; Avissar, R. Impact of land use/land cover change on regional hydrometeorology in Amazonia. J. Geophys. Res. Atmos. 2002, 107, LBA-4. [Google Scholar] [CrossRef]
  8. Jiang, J.; Tian, G. Analysis of the impact of land use/land cover change on land surface temperature with remote sensing. Procedia Environ. Sci. 2010, 2, 571–575. [Google Scholar] [CrossRef]
  9. Ahmed, B.; Kamruzzaman, M.D.; Zhu, X.; Rahman, M.S.; Choi, K. Simulating land cover changes and their impacts on land surface temperature in Dhaka, Bangladesh. Remote Sens. 2013, 5, 5969–5998. [Google Scholar] [CrossRef]
  10. Tran, D.X.; Pla, F.; Latorre-Carmona, P.; Myint, S.W.; Caetano, M.; Kieu, H.V. Characterizing the relationship between land use land cover change and land surface temperature. ISPRS J. Photogramm. Remote Sens. 2017, 124, 119–132. [Google Scholar] [CrossRef]
  11. Ding, H.; Shi, W. Land-use/land-cover change and its influence on surface temperature: A case study in Beijing City. Int. J. Remote Sens. 2013, 34, 5503–5517. [Google Scholar] [CrossRef]
  12. Borrelli, P.; Robinson, D.A.; Fleischer, L.R.; Lugato, E.; Ballabio, C.; Alewell, C.; Meusburger, K.; Modugno, S.; Schütt, B.; Ferro, V.; et al. An assessment of the global impact of 21st century land use change on soil erosion. Nat. Commun. 2017, 8, 2013. [Google Scholar] [CrossRef]
  13. Uddin, K.; Abdul Matin, M.; Maharjan, S. Assessment of land cover change and its impact on changes in soil erosion risk in Nepal. Sustainability 2018, 10, 4715. [Google Scholar] [CrossRef]
  14. Justice, C.; Gutman, G.; Vadrevu, K.P. NASA land cover and land use change (LCLUC): An interdisciplinary research program. J. Environ. Manag. 2015, 148, 4–9. [Google Scholar] [CrossRef] [PubMed]
  15. Tyukavina, A.; Potapov, P.; Hansen, M.C.; Pickens, A.H.; Stehman, S.V.; Turubanova, S.; Parker, D.; Zalles, V.; Lima, A.; Kommareddy, I.; et al. Global trends of forest loss due to fire from 2001 to 2019. Front. Remote Sens. 2022, 3, 825190. [Google Scholar] [CrossRef]
  16. Mataveli, G.; Pereira, G.; Sanchez, A.; de Oliveira, G.; Jones, M.W.; Freitas, S.R.; Aragão, L.E. Updated land use and land cover information improves biomass burning emission estimates. Fire 2023, 6, 426. [Google Scholar] [CrossRef]
  17. Vadrevu, K.; Heinimann, A.; Gutman, G.; Justice, C. Remote sensing of land use/cover changes in South and Southeast Asian Countries. Int. J. Digit. Earth 2019, 12, 1099–1102. [Google Scholar] [CrossRef]
  18. Miettinen, J.; Liew, S.C. Connection between fire and land cover change in Southeast Asia: A remote sensing case study in Riau, Sumatra. Int. J. Remote Sens. 2005, 26, 1109–1126. [Google Scholar] [CrossRef]
  19. Vetrita, Y.; Cochrane, M.A. Fire frequency and related land-use and land-cover changes in Indonesia’s peatlands. Remote Sens. 2019, 12, 5. [Google Scholar] [CrossRef]
  20. Hansen, M.; Dubayah, R.; DeFries, R. Classification trees: An alternative to traditional land cover classifiers. Int. J. Remote Sens. 1996, 17, 1075–1081. [Google Scholar] [CrossRef]
  21. De Fries, R.S.; Hansen, M.; Townshend, J.R.G.; Sohlberg, R. Global land cover classifications at 8 km spatial resolution: The use of training data derived from Landsat imagery in decision tree classifiers. Int. J. Remote Sens. 1998, 19, 3141–3168. [Google Scholar] [CrossRef]
  22. Arino, O.; Gross, D.; Ranera, F.; Leroy, M.; Bicheron, P.; Brockman, C.; Defourny, P.; Vancutsem, C.; Achard, F.; Durieux, L.; et al. GlobCover: ESA Service for Global Land Cover from MERIS. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; IEEE: Piscataway, NJ, USA; pp. 2412–2415. [Google Scholar]
  23. Defourny, P.; Schouten, L.; Bartalev, S.; Bontemps, S.; Cacetta, P.; De Wit, A.J.W.; Di Bella, C.; Gérard, B.; Giri, C.; Gond, V.; et al. Accuracy assessment of a 300 m global land cover map: The GlobCover experience. In Proceedings of the 33rd International Symposium on Remote Sensing of Environment, Sustaining the Millennium Development Goals, Tucson, AZ, USA, 4–9 May 2009; ISBN 978-0-932913-13-5. [Google Scholar]
  24. Friedl, M.A.; Sulla-Menashe, D.; Tan, B.; Schneider, A.; Ramankutty, N.; Sibley, A.; Huang, X. MODIS Collection 5 global land cover: Algorithm refinements and characterization of new datasets. Remote Sens. Environ. 2010, 114, 168–182. [Google Scholar] [CrossRef]
  25. Congalton, R.G.; Gu, J.; Yadav, K.; Thenkabail, P.; Ozdogan, M. Global land cover mapping: A review and uncertainty analysis. Remote Sens. 2014, 6, 12070–12093. [Google Scholar] [CrossRef]
  26. Gong, P.; Wang, J.; Yu, L.; Zhao, Y.; Zhao, Y.; Liang, L.; Niu, Z.; Huang, X.; Fu, H.; Liu, S.; et al. Finer resolution observation and monitoring of global land cover: First mapping results with Landsat TM and ETM+ data. Int. J. Remote Sens. 2013, 34, 2607–2654. [Google Scholar] [CrossRef]
  27. Karra, K.; Kontgis, C.; Statman-Weil, Z.; Mazzariello, J.C.; Mathis, M.; Brumby, S.P. Global Land Use/Land Cover with Sentinel 2 and Deep Learning. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; IEEE: Piscataway, NJ, USA; pp. 4704–4707. [Google Scholar]
  28. Malinowski, R.; Lewiński, S.; Rybicki, M.; Gromny, E.; Jenerowicz, M.; Krupiński, M.; Nowakowski, A.; Wojtkowski, C.; Krupiński, M.; Krätzschmar, E.; et al. Automated production of a land cover/use map of Europe based on Sentinel-2 imagery. Remote Sens. 2020, 12, 3523. [Google Scholar] [CrossRef]
  29. CLC. Technical Guidelines, Copernicus Land Monitoring Service Report. 2018. Available online: https://land.copernicus.eu/en/technical-library/clc-2018-technical-guidelines/@@download/file (accessed on 20 January 2024).
  30. Copernicus Land Monitoring Service CORINE Land Cover User Manual 2017, Copernicus Land Monitoring Service. Available online: https://land.copernicus.eu/en/technical-library/clc-product-user-manual/@@download/file (accessed on 20 January 2024).
  31. GMES Initial Operations/Copernicus Land Monitoring Services—Validation of Products: CLC2018/CLCC1218 Validation Report. 2021. Available online: https://land.copernicus.eu/en/technical-library/clc-2018-and-clc-change-2012-2018-validation-report/@@download/file (accessed on 20 January 2024).
  32. Liu, H.; Gong, P.; Wang, J.; Wang, X.; Ning, G.; Xu, B. Production of global daily seamless data cubes and quantification of global land cover change from 1985 to 2020-iMap World 1.0. Remote Sens. Environ. 2021, 258, 112364. [Google Scholar] [CrossRef]
  33. Brown, C.F.; Brumby, S.P.; Guzder-Williams, B.; Birch, T.; Hyde, S.B.; Mazzariello, J.; Czerwinski, W.; Pasquarella, V.J.; Haertel, R.; Ilyushchenko, S.; et al. Dynamic World, Near real-time global 10 m land use land cover mapping. Sci. Data 2022, 9, 251. [Google Scholar] [CrossRef]
  34. Feng, Q.; Yang, J.; Zhu, D.; Liu, J.; Guo, H.; Bayartungalag, B.; Li, B. Integrating multitemporal Sentinel-1/2 data for coastal land cover classification using a multibranch convolutional neural network: A case of the Yellow River Delta. Remote Sens. 2019, 11, 1006. [Google Scholar] [CrossRef]
  35. Shendryk, Y.; Rist, Y.; Ticehurst, C.; Thorburn, P. Deep learning for multi-modal classification of cloud, shadow and land cover scenes in PlanetScope and Sentinel-2 imagery. ISPRS J. Photogramm. Remote Sens. 2019, 157, 124–136. [Google Scholar] [CrossRef]
  36. Chen, Y.; Bruzzone, L. Self-supervised sar-optical data fusion of sentinel-1/-2 images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–11. [Google Scholar] [CrossRef]
  37. Quan, Y.; Tong, Y.; Feng, W.; Dauphin, G.; Huang, W.; Xing, M. A novel image fusion method of multi-spectral and sar images for land cover classification. Remote Sens. 2020, 12, 3801. [Google Scholar] [CrossRef]
  38. Ma, W.; Karakuş, O.; Rosin, P.L. AMM-FuseNet: Attention-based multi-modal image fusion network for land cover mapping. Remote Sens. 2022, 14, 4458. [Google Scholar] [CrossRef]
  39. Jin, H.; Mountrakis, G. Fusion of optical, radar and waveform LiDAR observations for land cover classification. ISPRS J. Photogramm. Remote Sens. 2022, 187, 171–190. [Google Scholar] [CrossRef]
  40. Cherif, E.; Hell, M.; Brandmeier, M. DeepForest: Novel deep learning models for land use and land cover classification using multi-temporal and-modal sentinel data of the amazon basin. Remote Sens. 2022, 14, 5000. [Google Scholar] [CrossRef]
  41. Garg, R.; Kumar, A.; Prateek, M.; Pandey, K.; Kumar, S. Land cover classification of spaceborne multifrequency SAR and optical multispectral data using machine learning. Adv. Space Res. 2022, 69, 1726–1742. [Google Scholar] [CrossRef]
  42. Singh, M.K.; Mohan, S.; Kumar, B. Fusion of hyperspectral and LiDAR data using sparse stacked autoencoder for land cover classification with 3D-2D convolutional neural network. J. Appl. Remote Sens. 2022, 16, 034523. [Google Scholar] [CrossRef]
  43. Montanaro, A.; Valsesia, D.; Fracastoro, G.; Magli, E. Semi-supervised learning for joint SAR and multispectral land cover classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  44. Xue, Z.; Yu, X.; Yu, A.; Liu, B.; Zhang, P.; Wu, S. Self-supervised feature learning for multimodal remote sensing image land cover classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  45. Wu, Y.; Mu, G.; Qin, C.; Miao, Q.; Ma, W.; Zhang, X. Semi-supervised hyperspectral image classification via spatial-regulated self-training. Remote Sens. 2020, 12, 159. [Google Scholar] [CrossRef]
  46. Hu, Y.; Zhang, Q.; Zhang, Y.; Yan, H. A deep convolution neural network method for land cover mapping: A case study of Qinhuangdao, China. Remote Sens. 2018, 10, 2053. [Google Scholar] [CrossRef]
  47. Chen, S.; Lei, F.; Dong, S.; Zang, Z.; Zhang, M. Land use/land cover mapping using deep neural network and sentinel image dataset based on google earth engine in a heavily urbanized area, China. Geocarto Int. 2022, 37, 16951–16972. [Google Scholar] [CrossRef]
  48. Zhao, X.; Hong, D.; Gao, L.; Zhang, B.; Chanussot, J. Transferable deep learning from time series of Landsat data for national land-cover mapping with noisy labels: A case study of China. Remote Sens. 2021, 13, 4194. [Google Scholar] [CrossRef]
  49. Pelletier, C.; Valero, S.; Inglada, J.; Champion, N.; Marais Sicre, C.; Dedieu, G. Effect of training class label noise on classification performances for land cover mapping with satellite image time series. Remote Sens. 2017, 9, 173. [Google Scholar] [CrossRef]
  50. Talukdar, S.; Singha, P.; Mahato, S.; Pal, S.; Liou, Y.A.; Rahman, A. Land-use land-cover classification by machine learning classifiers for satellite observations—A review. Remote Sens. 2020, 12, 1135. [Google Scholar] [CrossRef]
  51. Kuras, A.; Brell, M.; Rizzi, J.; Burud, I. Hyperspectral and lidar data applied to the urban land cover machine learning and neural-network-based classification: A review. Remote Sens. 2021, 13, 3393. [Google Scholar] [CrossRef]
  52. Zhang, W.; Tang, P.; Corpetti, T.; Zhao, L. WTS: A Weakly towards strongly supervised learning framework for remote sensing land cover classification using segmentation models. Remote Sens. 2021, 13, 394. [Google Scholar] [CrossRef]
  53. Zhang, C.; Sargent, I.; Pan, X.; Li, H.; Gardiner, A.; Hare, J.; Atkinson, P.M. Joint Deep Learning for land cover and land use classification. Remote Sens. Environ. 2019, 221, 173–187. [Google Scholar] [CrossRef]
  54. Phan, T.N.; Kuch, V.; Lehnert, L.W. Land cover classification using Google Earth Engine and random forest classifier—The role of image composition. Remote Sens. 2020, 12, 2411. [Google Scholar] [CrossRef]
  55. Khanal, N.; Matin, M.A.; Uddin, K.; Poortinga, A.; Chishtie, F.; Tenneson, K.; Saah, D. A comparison of three temporal smoothing algorithms to improve land cover classification: A case study from NEPAL. Remote Sens. 2020, 12, 2888. [Google Scholar] [CrossRef]
  56. Camargo, F.F.; Sano, E.E.; Almeida, C.M.; Mura, J.C.; Almeida, T. A comparative assessment of machine-learning techniques for land use and land cover classification of the Brazilian tropical savanna using ALOS-2/PALSAR-2 polarimetric images. Remote Sens. 2019, 11, 1600. [Google Scholar] [CrossRef]
  57. Dabija, A.; Kluczek, M.; Zagajewski, B.; Raczko, E.; Kycko, M.; Al-Sulttani, A.H.; Tardà, A.; Pineda, L.; Corbera, J. Comparison of support vector machines and random forests for corine land cover mapping. Remote Sens. 2021, 13, 777. [Google Scholar] [CrossRef]
  58. Adugna, T.; Xu, W.; Fan, J. Comparison of random forest and support vector machine classifiers for regional land cover mapping using coarse resolution FY-3C images. Remote Sens. 2022, 14, 574. [Google Scholar] [CrossRef]
  59. Nguyen, H.T.T.; Doan, T.M.; Tomppo, E.; McRoberts, R.E. Land Use/land cover mapping using multitemporal Sentinel-2 imagery and four classification methods—A case study from Dak Nong, Vietnam. Remote Sens. 2020, 12, 1367. [Google Scholar] [CrossRef]
  60. Zhang, T.; Su, J.; Xu, Z.; Luo, Y.; Li, J. Sentinel-2 satellite imagery for urban land cover classification by optimized random forest classifier. Appl. Sci. 2021, 11, 543. [Google Scholar] [CrossRef]
  61. O’Neill, F.D.; Lasko, K.D.; Sava, E. Snow-Covered Region Improvements to a Support Vector Machine-Based Semi-Automated Land Cover Mapping Decision Support Tool; ERDC/GRL TR-22-3; Engineer Research and Development Center Library: Vicksburg, MS, USA, 2022. [Google Scholar] [CrossRef]
  62. McCarty, D.A.; Kim, H.W.; Lee, H.K. Evaluation of light gradient boosted machine learning technique in large scale land use and land cover classification. Environments 2020, 7, 84. [Google Scholar] [CrossRef]
  63. Thanh Noi, P.; Kappas, M. Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using Sentinel-2 imagery. Sensors 2017, 18, 18. [Google Scholar] [CrossRef]
  64. Xu, S.; Xiao, W.; Ruan, L.; Chen, W.; Du, J. Assessment of ensemble learning for object-based land cover mapping using multi-temporal Sentinel-1/2 images. Geocarto Int. 2023, 38, 2195832. [Google Scholar] [CrossRef]
  65. Chatziantoniou, A.; Petropoulos, G.P.; Psomiadis, E. Co-Orbital Sentinel 1 and 2 for LULC mapping with emphasis on wetlands in a mediterranean setting based on machine learning. Remote Sens. 2017, 9, 1259. [Google Scholar] [CrossRef]
  66. Pan, X.; Wang, Z.; Gao, Y.; Dang, X.; Han, Y. Detailed and automated classification of land use/land cover using machine learning algorithms in Google Earth Engine. Geocarto Int. 2022, 37, 5415–5432. [Google Scholar] [CrossRef]
  67. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep learning classification of land cover and crop types using remote sensing data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782. [Google Scholar] [CrossRef]
  68. Jozdani, S.E.; Johnson, B.A.; Chen, D. Comparing deep neural networks, ensemble classifiers, and support vector machine algorithms for object-based urban land use/land cover classification. Remote Sens. 2019, 11, 1713. [Google Scholar] [CrossRef]
  69. Abdi, A.M. Land cover and land use classification performance of machine learning algorithms in a boreal landscape using Sentinel-2 data. GISci. Remote Sens. 2020, 57, 1–20. [Google Scholar] [CrossRef]
  70. Feizizadeh, B.; Mohammadzade Alajujeh, K.; Lakes, T.; Blaschke, T.; Omarzadeh, D. A comparison of the integrated fuzzy object-based deep learning approach and three machine learning techniques for land use/cover change monitoring and environmental impacts assessment. GISci. Remote Sens. 2021, 58, 1543–1570. [Google Scholar] [CrossRef]
  71. Rousset, G.; Despinoy, M.; Schindler, K.; Mangeas, M. Assessment of deep learning techniques for land use land cover classification in southern new Caledonia. Remote Sens. 2021, 13, 2257. [Google Scholar] [CrossRef]
  72. Boston, T.; Van Dijk, A.; Larraondo, P.R.; Thackway, R. Comparing CNNs and random forests for Landsat image segmentation trained on a large proxy land cover dataset. Remote Sens. 2022, 14, 3396. [Google Scholar] [CrossRef]
  73. Magalhães, I.A.L.; de Carvalho Júnior, O.A.; de Carvalho, O.L.F.; de Albuquerque, A.O.; Hermuche, P.M.; Merino, É.R.; Gomes, R.A.T.; Guimarães, R.F. Comparing machine and deep learning methods for the phenology-based classification of land cover types in the Amazon biome using Sentinel-1 time series. Remote Sens. 2022, 14, 4858. [Google Scholar] [CrossRef]
  74. Xie, G.; Niculescu, S. Mapping and monitoring of land cover/land use (LCLU) changes in the crozon peninsula (Brittany, France) from 2007 to 2018 by machine learning algorithms (support vector machine, random forest, and convolutional neural network) and by post-classification comparison (PCC). Remote Sens. 2021, 13, 3899. [Google Scholar]
  75. Tong, X.Y.; Xia, G.S.; Lu, Q.; Shen, H.; Li, S.; You, S.; Zhang, L. Land-cover classification with high-resolution remote sensing images using transferable deep models. Remote Sens. Environ. 2020, 237, 111322. [Google Scholar] [CrossRef]
  76. Mahdianpari, M.; Salehi, B.; Rezaee, M.; Mohammadimanesh, F.; Zhang, Y. Very deep convolutional neural networks for complex land cover mapping using multispectral remote sensing imagery. Remote Sens. 2018, 10, 1119. [Google Scholar] [CrossRef]
  77. Ling, F.; Foody, G.M. Super-resolution land cover mapping by deep learning. Remote Sens. Lett. 2019, 10, 598–606. [Google Scholar] [CrossRef]
  78. Pelletier, C.; Webb, G.I.; Petitjean, F. Temporal convolutional neural network for the classification of satellite image time series. Remote Sens. 2019, 11, 523. [Google Scholar] [CrossRef]
  79. Zhang, X.; Han, L.; Han, L.; Zhu, L. How well do deep learning-based methods for land cover classification and object detection perform on high resolution remote sensing imagery? Remote Sens. 2020, 12, 417. [Google Scholar] [CrossRef]
  80. Al-Najjar, H.A.; Kalantar, B.; Pradhan, B.; Saeidi, V.; Halin, A.A.; Ueda, N.; Mansor, S. Land cover classification from fused DSM and UAV images using convolutional neural networks. Remote Sens. 2019, 11, 1461. [Google Scholar] [CrossRef]
  81. Adam, E.; Mutanga, O.; Odindi, J.; Abdel-Rahman, E.M. Land-use/cover classification in a heterogeneous coastal landscape using RapidEye imagery: Evaluating the performance of random forest and support vector machines classifiers. Int. J. Remote Sens. 2014, 35, 3440–3458. [Google Scholar] [CrossRef]
  82. Maxwell, A.E.; Strager, M.P.; Warner, T.A.; Ramezan, C.A.; Morgan, A.N.; Pauley, C.E. Large-area, high spatial resolution land cover mapping using random forests, GEOBIA, and NAIP orthophotography: Findings and recommendations. Remote Sens. 2019, 11, 1409. [Google Scholar] [CrossRef]
  83. Petropoulos, G.P.; Vadrevu, K.P.; Kalaitzidis, C. Spectral angle mapper and object-based classification combined with hyperspectral remote sensing imagery for obtaining land use/cover mapping in a Mediterranean region. Geocarto Int. 2013, 28, 114–129. [Google Scholar] [CrossRef]
  84. Ma, L.; Fu, T.; Blaschke, T.; Li, M.; Tiede, D.; Zhou, Z.; Ma, X.; Chen, D. Evaluation of feature selection methods for object-based land cover mapping of unmanned aerial vehicle imagery using random forest and support vector machine classifiers. ISPRS Int. J. Geo-Inf. 2017, 6, 51. [Google Scholar] [CrossRef]
  85. Feng, Q.; Liu, J.; Gong, J. UAV remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sens. 2015, 7, 1074–1094. [Google Scholar] [CrossRef]
  86. Kwak, G.H.; Park, N.W. Impact of texture information on crop classification with machine learning and UAV images. Appl. Sci. 2019, 9, 643. [Google Scholar] [CrossRef]
  87. Liu, W.; Quijano, K.; Crawford, M.M. YOLOv5-Tassel: Detecting tassels in RGB UAV imagery with improved YOLOv5 based on transfer learning. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2022, 15, 8085–8094. [Google Scholar] [CrossRef]
  88. Gao, T.; Niu, Q.; Zhang, J.; Chen, T.; Mei, S.; Jubair, A. Global to local: A scale-aware network for remote sensing object detection. IEEE Trans. Geosci. Remote Sens. 2023, 61. [Google Scholar] [CrossRef]
  89. Zanaga, D.; Van De Kerchove, R.; Daems, D.; De Keersmaecker, W.; Brockmann, C.; Kirches, G.; Wevers, J.; Cartus, O.; Santoro, M.; Fritz, S.; et al. ESA WorldCover 10 m 2021 v200. Zenodo 2022. [Google Scholar] [CrossRef]
  90. Hansen, M.C.; Potapov, P.V.; Pickens, A.H.; Tyukavina, A.; Hernandez-Serna, A.; Zalles, V.; Turubanova, S.; Kommareddy, I.; Stehman, S.V.; Song, X.P.; et al. Global land use extent and dispersion within natural land cover using Landsat data. Environ. Res. Lett. 2022, 17, 034050. [Google Scholar] [CrossRef]
  91. Gascon, F.; Cadau, E.; Colin, O.; Hoersch, B.; Isola, C.; Fernández, B.L.; Martimort, P. Copernicus Sentinel-2 Mission: Products, Algorithms and Cal/Val. In Earth Observing Systems XIX; SPIE: Bellingham, WA, USA, 2014; Volume 9218, pp. 455–463. [Google Scholar] [CrossRef]
  92. De Colstoun, E.C.B.; Huang, C.; Wang, P.; Tilton, J.C.; Tan, B.; Phillips, J.; Niemczura, S.; Ling, P.Y.; Wolfe, R. Documentation for the Global Man-Made Impervious Surface (GMIS) Dataset from Landsat; NASA Socioeconomic Data and Applications Center (SEDAC): Palisades, NY, USA, 2017. [Google Scholar]
  93. Astola, H.; Häme, T.; Sirro, L.; Molinier, M.; Kilpi, J. Comparison of Sentinel-2 and Landsat 8 imagery for forest variable prediction in boreal region. Remote Sens. Environ. 2019, 223, 257–273. [Google Scholar] [CrossRef]
  94. Huang, X.; Yang, J.; Wang, W.; Liu, Z. Mapping 10 m global impervious surface area (GISA-10m) using multi-source geospatial data. Earth Syst. Sci. Data 2022, 14, 3649–3672. [Google Scholar] [CrossRef]
  95. Huang, X.; Song, Y.; Yang, J.; Wang, W.; Ren, H.; Dong, M.; Feng, Y.; Yin, H.; Li, J. Toward accurate mapping of 30-m time-series global impervious surface area (GISA). Int. J. Appl. Earth Observ. Geoinf. 2022, 109, 102787. [Google Scholar] [CrossRef]
  96. Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.; Tyukavina, A.; Thau, D.; Stehman, S.V.; Goetz, S.J.; Loveland, T.R.; et al. High-resolution global maps of 21st-century forest cover change. Science 2013, 342, 850–853. [Google Scholar] [CrossRef] [PubMed]
  97. Lasko, K.; O’Neill, F.D. Automated method for artificial impervious surface area mapping in temperate, tropical, and arid environments using hyperlocal training data with Sentinel-2 imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 17, 298–314. [Google Scholar] [CrossRef]
  98. Lasko, K.; Maloney, M.C.; Becker, S.J.; Griffin, A.W.; Lyon, S.L.; Griffin, S.P. Automated training data generation from spectral indexes for mapping surface water extent with sentinel-2 satellite imagery at 10 m and 20 m resolutions. Remote Sens. 2021, 13, 4531. [Google Scholar] [CrossRef]
  99. Al-Doski, J.; Mansor, S.B.; Shafri, H.Z.M. NDVI differencing and post-classification to detect vegetation changes in Halabja City, Iraq. IOSR J. Appl. Geol. Geophys. 2013, 1, 1–10. [Google Scholar] [CrossRef]
  100. Žížala, D.; Minařík, R.; Zádorová, T. Soil organic carbon mapping using multispectral remote sensing data: Prediction ability of data with different spatial and spectral resolutions. Remote Sens. 2019, 11, 2947. [Google Scholar] [CrossRef]
  101. Mzid, N.; Pignatti, S.; Huang, W.; Casa, R. An analysis of bare soil occurrence in arable croplands for remote sensing topsoil applications. Remote Sens. 2021, 13, 474. [Google Scholar] [CrossRef]
  102. Jenks, G.F.; Caspall, F.C. Error on choroplethic maps: Definition, measurement, reduction. Ann. Assoc. Am. Geogr. 1971, 61, 217–244. [Google Scholar] [CrossRef]
  103. Lasko, K. Gap Filling Cloudy Sentinel-2 NDVI and NDWI Pixels with Multi-Frequency Denoised C-Band and L-Band Synthetic Aperture Radar (SAR), Texture, and Shallow Learning Techniques. Remote Sens. 2022, 14, 4221. [Google Scholar] [CrossRef]
  104. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  105. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  106. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57. [Google Scholar] [CrossRef]
  107. Olofsson, P. Updates to Good Practices for Estimating Area and Assessing Accuracy of Land Cover and Land Cover Change Products. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1982–1985. [Google Scholar]
  108. Torbick, N.; Salas, W.A.; Hagen, S.; Xiao, X. Monitoring rice agriculture in the Sacramento Valley, USA with multitemporal PALSAR and MODIS imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2010, 4, 451–457. [Google Scholar] [CrossRef]
  109. Tselka, I.; Detsikas, S.E.; Petropoulos, G.P.; Demertzi, I.I. Google Earth Engine and machine learning classifiers for obtaining burnt area cartography: A case study from a Mediterranean setting. In Geoinformatics for Geosciences; Elsevier: Oxford, UK, 2023; pp. 131–148. [Google Scholar]
  110. Wu, S.T.; Hsieh, Y.T.; Chen, C.T.; Chen, J.C. A comparison of 4 shadow compensation techniques for land cover classification of shaded areas from high radiometric resolution aerial images. Can. J. Remote Sens. 2014, 40, 315–326. [Google Scholar] [CrossRef]
  111. Gao, L.; Luo, J.; Xia, L.; Wu, T.; Sun, Y.; Liu, H. Topographic constrained land cover classification in mountain areas using fully convolutional network. Int. J. Remote Sens. 2019, 40, 7127–7152. [Google Scholar] [CrossRef]
  112. Richter, R. Correction of satellite imagery over mountainous terrain. Appl. Opt. 1998, 37, 4004–4015. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart of the general approach used in this study.
Figure 2. Visualization of how the overall accuracy and F1 scores varied during stepwise threshold evaluation, illustrated by each study site and each binary land cover mask that is subsequently used for training data selection. The right-side legend applies only to the “Jenks Separation” sub-figure; the left-side legend applies to the remaining five sub-figures.
Figure 3. Bin plots of the top 50% most accurate stepwise threshold combinations for the 15 different variables evaluated. Bins are grouped by region to highlight trends.
Figure 4. Sentinel-2 imagery with the region, region-corrected, global, and global-corrected models shown for selected areas of variation within each study site. A high-resolution version of this figure is also available.
Figure 5. Pixel agreement maps highlighting pixel agreement of land cover type for the region, region-corrected, global, and global-corrected nine-class models.
Figure 6. Overall accuracies for the nine-class, six-class, and five-class land cover schemes shown for the global and region models from this study and the Esri/Impact Observatory model for comparison.
Figure 7. Class-specific accuracies for the four models from this study for the nine-class land cover scheme. Black, unannotated spaces indicate the absence of that land cover type in the scene.
Figure 8. Class-specific accuracies for the four models from this study for the six-class land cover scheme. Black, unannotated spaces indicate the absence of that land cover type in the scene.
Figure 9. Class-specific accuracies for the four models from this study for the five-class land cover scheme. Black, unannotated spaces indicate the absence of that land cover type in the scene.
Figure 10. (a) Spatial variation between the Esri model and the region-corrected model for the BR, EG, and FR sites. (b) Spatial variation between the Esri model and the region-corrected model for the IN, US, and UK sites.
Figure 11. Pixel disagreement between the region-corrected model and Esri model for the entirety of each scene and across all seven scenes (global) for the six-class land cover scheme. The Esri model is the original condition, and the region-corrected model is the second condition (e.g., “AIS to Trees” indicates AIS in the Esri model, but Trees in the region-corrected model).
Figure 12. Percentage of areal variation per scene for each land cover type for the nine-class scheme.
Figure 13. Percentage of areal variation per scene for each land cover type for the six-class scheme.
Table 1. Study area and imagery details for the seven sites.

| Site Name | Sentinel-2 Granule ID | Imagery Date (Winter) | Imagery Date (Summer) | Region | Geographical Features | Prevalent Land Covers | Largest City Population |
|---|---|---|---|---|---|---|---|
| Maranhão, Brazil (BR) | T23MQR | 2019/DEC/21 | 2020/AUG/07 | Tropics | Tropical forest, riverine | Forest, agriculture | Coelho Neto (~50K) |
| Aswan, Egypt (EG) | T36QVM | 2021/DEC/26 | 2021/AUG/18 | Arid | Lake, hilly terrain | Bare ground, water | Aswan (~380K) |
| Paris, France (FR) | T31UDQ | 2020/NOV/26 | 2021/JUN/14 | Temperate | Mosaic landscape | Built-up, agriculture | Paris (~11 million) |
| Kolkata, India (IN) | T45QXE | 2018/FEB/04 | 2018/OCT/22 | Tropics | Mega urban, aquaculture, rice, mangrove | Built-up, agriculture, wetlands | Kolkata Metro Area (~15 million) |
| Doha, Qatar (QA) | T39RWJ | 2022/MAR/10 | 2022/SEP/11 | Arid | Coastal, bare ground | Bare ground, built-up | Doha (~2.3 million) |
| Portsmouth, United Kingdom (UK) | T30UXB | 2019/NOV/18 | 2020/JUL/30 | Temperate | Mosaic landscape | Agriculture, built-up | Southampton (~900K) |
| Washington, DC, USA (US) | T18SUJ | 2021/DEC/26 | 2021/SEP/02 | Temperate | Suburban sprawl | Built-up, forest | DC (~700K) |
Table 2. Names of the land cover classes used in this study.

| Nine-Class Land Cover | Six-Class Land Cover | Five-Class Land Cover |
|---|---|---|
| Deciduous Trees | Trees | Trees |
| Evergreen Trees | Vegetation | Vegetation |
| Vegetation (Low NDVI) | Bare Ground/Sparse Vegetation | Bare Ground/Sparse Vegetation |
| Vegetation (High NDVI) | AIS | AIS |
| Bare Ground/Sparse Vegetation | Wetland/Ephemeral Water | Wetland/Ephemeral Water/Water |
| Artificial Impervious Surface (AIS) High Density | Water | |
| AIS Low Density | | |
| Wetland/Ephemeral Water | | |
| Water | | |
Table 3. Data layers used to create the binary land cover masks used for training data sampling.

Candidate input layers: Global Tree Cover layer; Global AIS layer; mid-infrared (MIR) reflectance; NDVI; NDVI texture; NDWI; MNDWI; AWEI; and distance to SCL water.

Binary training masks: AIS (Dense); AIS (Light); Water (with distance); Temporary Water/Wetland (with distance); Deciduous Trees (with distance); Evergreen Trees (with distance); Vegetation (Low NDVI) (with distance); Vegetation (High NDVI) (with distance); Bare Ground/Sparse Vegetation (with distance); Snow (internal Sentinel-2 snow mask from SCL).
Table 4. The most accurate threshold values for each of the binary land cover training masks as determined by the stepwise threshold analysis in comparison to the ground truth points.

| Land Cover Training Mask | Variable | Most Accurate Value (Arid) | Most Accurate Value (Temperate) | Most Accurate Value (Tropics) | Most Accurate Value (Global) |
|---|---|---|---|---|---|
| Vegetation (High NDVI) | Minimum NDVI | 0.5 | 0.1 | 0.1 | 0.1 |
| Vegetation (High NDVI) | Minimum GISA AIS distance | 200 m | 200 m | 100 m | 100 m |
| Bare Ground/Sparse Vegetation | Minimum GISA AIS distance | 300 m, 600 m (tie) | 300 m | 300 m | 300 m |
| Bare Ground/Sparse Vegetation | Minimum Band 11 reflectance | 0 | 600 | 1200 | 1200 |
| Wetland/Ephemeral Water | Minimum NDVI | 0.0 | 0.1 | 0.1 | 0.1 |
| Wetland/Ephemeral Water | Minimum MNDWI | −0.25 | −0.15, −0.20 (tie) | −0.25 | −0.20 |
| Wetland/Ephemeral Water | Minimum GISA AIS distance | 200 m | None | None | None |
| Water | Minimum spectral index values (MNDWI, NDWI, AWEI) | −0.05 | −0.03 | −0.05 | −0.03 |
| Water | Maximum Band 11 reflectance | 1500 | None | 1500 | 1500 |
| Vegetation (Low NDVI) | Minimum NDVI | 0.3 | 0.4 | 0.3 | 0.3 |
| Vegetation (Low NDVI) | Maximum NDVI | 0.5 | 0.5 | 0.6 | 0.5 |
| Vegetation (Low NDVI) | Minimum Band 11 reflectance | 600 | 600 | 1200 | 1200 |
| AIS (Low Density and High Density) | Natural Breaks (Jenks) | Tie | Natural Breaks = True | Tie | |
| Deciduous and Evergreen Trees | Natural Breaks (Jenks) | Tie | Tie | Tie | |
| Vegetation (Low NDVI and High NDVI) | Natural Breaks (Jenks) | Tie | Tie | Natural Breaks = False | |
Table 5. Confusion matrices of the six-class and five-class land cover schemes for the region-corrected model (bottom) and the Esri model (top). F1 score, overall, producer's, and user's accuracies are shown. Rows are the mapped classes; columns are the reference classes.

Six-class, Esri:

| Class | Trees | Vegetation | Bare Ground | AIS | Wetland | Water | Total | User's Acc. |
|---|---|---|---|---|---|---|---|---|
| Trees | 321 | 72 | 1 | 12 | 26 | 1 | 433 | 74.1% |
| Vegetation | 32 | 624 | 133 | 33 | 61 | 3 | 886 | 70.4% |
| Bare Ground | 0 | 5 | 266 | 15 | 7 | 0 | 293 | 90.8% |
| AIS | 59 | 54 | 16 | 494 | 9 | 4 | 636 | 77.7% |
| Wetland | 2 | 5 | 0 | 0 | 20 | 3 | 30 | 66.7% |
| Water | 1 | 6 | 10 | 5 | 202 | 356 | 580 | 61.4% |
| Total | 415 | 766 | 426 | 559 | 325 | 367 | 2858 | |
| Producer's Acc. | 77.3% | 81.5% | 62.4% | 88.4% | 6.2% | 97.0% | | |

Overall Acc. 72.8% (±3.5%); Producer's Acc. 68.8%; User's Acc. 73.5%; F1 0.711.

Five-class, Esri:

| Class | Trees | Vegetation | Bare Ground | AIS | Water/Wetland | Total | User's Acc. |
|---|---|---|---|---|---|---|---|
| Trees | 321 | 72 | 1 | 12 | 27 | 433 | 74.1% |
| Vegetation | 32 | 624 | 133 | 33 | 64 | 886 | 70.4% |
| Bare Ground | 0 | 5 | 266 | 15 | 7 | 293 | 90.8% |
| AIS | 59 | 54 | 16 | 494 | 13 | 636 | 77.7% |
| Water/Wetland | 3 | 11 | 10 | 5 | 581 | 610 | 95.2% |
| Total | 415 | 766 | 426 | 559 | 692 | 2858 | |
| Producer's Acc. | 77.3% | 81.5% | 62.4% | 88.4% | 84.0% | | |

Overall Acc. 80.0% (±3.4%); Producer's Acc. 78.7%; User's Acc. 81.7%; F1 0.802.

Six-class, region-corrected:

| Class | Trees | Vegetation | Bare Ground | AIS | Wetland | Water | Total | User's Acc. |
|---|---|---|---|---|---|---|---|---|
| Trees | 363 | 57 | 0 | 2 | 23 | 0 | 445 | 81.6% |
| Vegetation | 47 | 632 | 14 | 39 | 24 | 3 | 759 | 83.3% |
| Bare Ground | 0 | 1 | 353 | 33 | 2 | 0 | 389 | 90.7% |
| AIS | 5 | 69 | 50 | 469 | 22 | 2 | 617 | 76.0% |
| Wetland | 0 | 2 | 14 | 12 | 221 | 34 | 283 | 78.1% |
| Water | 0 | 0 | 0 | 4 | 33 | 328 | 365 | 89.9% |
| Total | 415 | 761 | 431 | 559 | 325 | 367 | 2858 | |
| Producer's Acc. | 87.5% | 83.0% | 81.9% | 83.9% | 68.0% | 89.4% | | |

Overall Acc. 82.8% (±3.1%); Producer's Acc. 82.3%; User's Acc. 83.3%; F1 0.828.

Five-class, region-corrected:

| Class | Trees | Vegetation | Bare Ground | AIS | Water/Wetland | Total | User's Acc. |
|---|---|---|---|---|---|---|---|
| Trees | 363 | 57 | 0 | 2 | 23 | 445 | 81.6% |
| Vegetation | 47 | 632 | 14 | 39 | 27 | 759 | 83.3% |
| Bare Ground | 0 | 1 | 353 | 33 | 2 | 389 | 90.7% |
| AIS | 5 | 69 | 50 | 469 | 24 | 617 | 76.0% |
| Water/Wetland | 0 | 2 | 14 | 16 | 616 | 648 | 95.1% |
| Total | 415 | 761 | 431 | 559 | 692 | 2858 | |
| Producer's Acc. | 87.5% | 83.0% | 81.9% | 83.9% | 89.0% | | |

Overall Acc. 85.1% (±2.9%); Producer's Acc. 85.1%; User's Acc. 85.3%; F1 0.852.
Table 6. Confusion matrices of the nine-class global- and region-corrected models produced from this study. Rows are the mapped classes; columns are the reference classes.

Nine-class, global-corrected:

| Class | Deciduous Trees | Evergreen Trees | Veg (Low) | Bare Ground | AIS (High) | AIS (Low) | Veg (High) | Wetland | Water | Total | User's Acc. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Deciduous Trees | 128 | 19 | 3 | 0 | 1 | 3 | 41 | 13 | 0 | 208 | 61.54% |
| Evergreen Trees | 48 | 167 | 0 | 0 | 0 | 0 | 17 | 16 | 0 | 248 | 67.34% |
| Veg (Low) | 0 | 0 | 65 | 7 | 4 | 8 | 12 | 4 | 1 | 101 | 64.36% |
| Bare Ground | 0 | 0 | 0 | 342 | 18 | 2 | 0 | 2 | 0 | 364 | 93.96% |
| AIS (High) | 0 | 0 | 6 | 56 | 251 | 46 | 1 | 14 | 0 | 374 | 67.11% |
| AIS (Low) | 8 | 2 | 68 | 9 | 61 | 152 | 37 | 17 | 3 | 357 | 42.58% |
| Veg (High) | 15 | 28 | 57 | 14 | 2 | 4 | 453 | 16 | 1 | 590 | 76.78% |
| Wetland | 0 | 0 | 0 | 3 | 4 | 2 | 1 | 217 | 39 | 266 | 81.58% |
| Water | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 26 | 323 | 350 | 92.29% |
| Total | 199 | 216 | 199 | 431 | 342 | 217 | 562 | 325 | 367 | 2858 | |
| Producer's Acc. | 64.32% | 77.31% | 32.66% | 79.35% | 73.39% | 70.05% | 80.60% | 66.77% | 88.01% | | |

Overall Acc. 73.4% (±1.8%); Producer's Acc. 70.3%; User's Acc. 71.9%; F1 0.688.

Nine-class, region-corrected:

| Class | Deciduous Trees | Evergreen Trees | Veg (Low) | Bare Ground | AIS (High) | AIS (Low) | Veg (High) | Wetland | Water | Total | User's Acc. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Deciduous Trees | 124 | 28 | 0 | 0 | 1 | 1 | 14 | 11 | 0 | 179 | 69.27% |
| Evergreen Trees | 50 | 161 | 1 | 0 | 0 | 0 | 42 | 12 | 0 | 266 | 60.53% |
| Veg (Low) | 4 | 4 | 127 | 8 | 5 | 24 | 47 | 5 | 2 | 226 | 56.19% |
| Bare Ground | 0 | 0 | 1 | 353 | 24 | 9 | 0 | 2 | 0 | 389 | 90.75% |
| AIS (High) | 0 | 0 | 6 | 37 | 231 | 49 | 1 | 9 | 0 | 333 | 69.37% |
| AIS (Low) | 3 | 2 | 31 | 13 | 68 | 121 | 31 | 13 | 2 | 284 | 42.61% |
| Veg (High) | 18 | 21 | 32 | 6 | 3 | 7 | 426 | 19 | 1 | 533 | 79.92% |
| Wetland | 0 | 0 | 1 | 14 | 6 | 6 | 1 | 221 | 34 | 283 | 78.09% |
| Water | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 33 | 328 | 365 | 89.86% |
| Total | 199 | 216 | 199 | 431 | 342 | 217 | 562 | 325 | 367 | 2858 | |
| Producer's Acc. | 62.31% | 74.54% | 63.82% | 81.90% | 67.54% | 55.76% | 75.80% | 68.00% | 89.37% | | |

Overall Acc. 73.2% (±1.9%); Producer's Acc. 71.0%; User's Acc. 70.7%; F1 0.683.