Article

Sentinel-2’s Potential for Sub-Pixel Landscape Feature Detection

Earth and Life Institute—Environment, Université catholique de Louvain, Croix du Sud 2, Louvain-la-Neuve 1348, Belgium
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2016, 8(6), 488; https://doi.org/10.3390/rs8060488
Submission received: 17 March 2016 / Revised: 28 May 2016 / Accepted: 2 June 2016 / Published: 9 June 2016

Abstract

Land cover and land use maps derived from satellite remote sensing imagery are critical to support biodiversity and conservation, especially over large areas. With its 10 m to 20 m spatial resolution, Sentinel-2 is a promising sensor for the detection of a variety of landscape features of ecological relevance. However, many components of the ecological network are still smaller than the 10 m pixel, i.e., they are sub-pixel targets that stretch the sensor’s resolution to its limit. This paper proposes a framework to empirically estimate the minimum object size for an accurate detection of a set of structuring landscape foreground/background pairs. The developed method combines a spectral separability analysis and an empirical point spread function estimation for Sentinel-2. The same approach was also applied to Landsat-8 and SPOT-5 (Take 5), which can be considered similar in terms of spectral definition and spatial resolution, respectively. Results show that Sentinel-2 performs consistently on both aspects. A large number of indices were tested along with the individual spectral bands, and target discrimination was possible in all but one case. Overall, results for Sentinel-2 highlight the critical importance of a good compromise between spatial and spectral resolution. For instance, the road detection limit of Sentinel-2 was 3 m, and small water bodies are separable from a diameter of 11 m upwards. In addition, the analysis of spectral mixtures draws attention to the uneven sensitivity of a variety of spectral indices. The proposed framework could be implemented to assess the fitness for purpose of future sensors within a large range of applications.


1. Introduction

Land cover and land use maps derived from satellite remote sensing imagery are critical for landscape ecology and natural resource planning [1,2]. Environmental features such as small forest remnants, invasive species, linear vegetation patches and water bodies have a substantial ecological value regardless of their areal extent. The presence of such features affects landscape ecology and thus habitat connectivity which, in turn, modifies landscape processes [3] such as species migration or landscape dynamics [4,5] and shapes the distribution of birds and mammals [6]. Structural rural features like isolated trees and hedgerows have biological and ecological functions (windbreaks, field boundaries, erosion control) as well as a large biodiversity value [7]. Small and linear vegetation patches indeed function as corridors, which usually have a positive effect on biodiversity and species persistence [8] but could also contribute to the dispersion of invasive species [9]. Accurate mapping of those corridors is essential to reliably describe their physiognomic attributes (width and length) that affect their functionality for wildlife [2,10]. Indeed, an appropriate spatial resolution is required to robustly relate landscape elements and their patterns to ecological processes [11].
Satellite remote sensing has entered a new era with short revisit period operational satellites that acquire free, open, global and systematic high resolution visible and infrared imagery. Landsat-8 acquisitions have been available since March 2013; Sentinel-2A acquisitions have been available since the end of November 2015. Thanks to its state-of-the-art specifications, Sentinel-2 [12,13] was designed for a variety of land monitoring applications such as water detection [14], mapping built-up areas [15] and crop type and tree species identification [16]. In addition to its spatial resolution, its payload offers thirteen spectral bands from Blue to SWIR, including Red-edge bands which have already proved to be useful for forest stress monitoring [17], land use and land cover mapping [18,19] and biophysical variable retrieval [20,21,22,23]. In fragmented landscapes, the components of the ecological networks are generally small, i.e., sub-pixel targets undetected by conventional multi-spectral classification methods [24,25]. For end-users interested in these small or linear patches, knowing their discrimination threshold is of paramount importance [26]. Research has been carried out on mapping or extracting information at a finer resolution than the effective pixel size of a sensor. For instance, spectral unmixing methods were developed to determine the fractions of land cover classes within a coarse pixel, and downscaling methods, also known as super-resolution or sub-pixel mapping, aim at turning these proportions into a fine resolution map of class labels [27]. Extensive work has also been carried out to evaluate the minimum spatial resolution needed to detect small features [26,28,29,30]. For example, SPOT-5 imagery at 10 m was found suitable to map small ponds in Senegal [31]. Similarly, the SPOT-5 resolution allowed the monitoring of reed ecosystems in Southern France and in turn provided a potential distribution map of species that are relevant to ecosystem functioning. Townsend et al. [32] found that the Landsat data resolution is adequate for characterizing landscape patterns, although higher resolution data or multiple sensors may be necessary for specific applications. Nonetheless, operational methods for mapping landscape elements remain scarce [7].
Pre-launch and on-orbit characterizations of spaceborne sensors are the first step towards verifying that the technical specifications are met, with a focus on calibrating the sensor and assessing the continuity with previous missions [33,34]. On-orbit characterization covers the spatial and the radiometric performances of the sensor. However, most on-orbit radiometric characterizations use calibration data sets, as opposed to natural ground targets [35], and spatial characterization uses Earth targets as well as star and moon shots [33,36]. On the other hand, empirical analyses of new sensors based on natural surface targets provide fitness-for-use results for future applications. However, most studies with surface targets focus on developing methods and algorithms to extract the most precise and accurate information from a given sensor without looking specifically at the sensor performance. An intermediate analysis level, between rigorous calibration/validation analyses and applications optimized for the input data, could therefore bring more useful information.
Understanding the detection processes of sub-pixel landscape elements is the key to deriving information for natural resources management. In addition, systematic analyses of the sub-pixel discrimination capacity for different landscape features are useful for the scientific community to assess a sensor's ability to detect specific thematic features. Hence, the aim of this paper is to measure the fitness for purpose of Sentinel-2 for the detection of small spatial objects that play a role in ecological networks. The minimum pixel size of 10 m and the narrow spectral bands selected for their specific response to key land surface properties are indeed very promising. The working hypothesis is that spatial and spectral resolution analyses can be combined to characterize the Sentinel-2 performances for specific object detection independently of the landscape composition. This paper presents a framework that simulates the process of detecting sub-pixel feature pairs by assessing the separability of synthetic mixtures. The assessment of the empirical resolution of Sentinel-2 includes a comparison with SPOT-5 and Landsat-8 because Sentinel-2 aims at providing enhanced continuity of SPOT and at complementing the Landsat archive [13].

2. Methodology

This study assesses Sentinel-2’s ability to detect structuring landscape objects from diverse homogeneous backgrounds. In order to estimate the minimum area at which a spatial object can be discerned, the potential separability of mixed spectral signatures is combined with the actual spatial resolution (Figure 1). The spatial and the spectral resolutions of the sensor were measured independently. First, the spectral signatures of class pairs were synthesized for different mixture proportions based on pure land surface reflectances. The separability of the two classes was then evaluated for a selected set of spectral bands and 35 indices (Section 2.1). Henceforth, the separability is defined as the ability to discriminate a foreground class from a background class with less than 5% classification error when the classes have the same a priori probability. Second, the actual spatial resolution of Sentinel-2 is characterized by estimating its point spread function (Section 2.2). The combination of the spectral and the spatial resolutions assesses the potential detection of sub-pixel spatial objects surrounded by a specific background whose spectral responses have been mixed within the pixel (Section 2.3).
Sentinel-2 MSI data were compared with two other sensors with similar spectral (Landsat-8 OLI) and spatial (SPOT-5 HRG) resolutions, respectively (Table 1). All SPOT-5 spectral bands have their counterpart in Sentinel-2 at the same spatial resolution, except the panchromatic band that could be used for pan-sharpening at 5 m (or 2.5 m). With regard to Landsat-8, the spatial resolution of Sentinel-2 is finer except for the bands used for the atmospheric corrections (aerosol, water vapor and cirrus). These bands have been discarded from the analysis because they are not intended for land applications [37]. The main difference with respect to the available bands is the presence of Red-edge bands in Sentinel-2 and of Thermal bands in Landsat-8. Like SPOT-5, Landsat-8 has a panchromatic band with a higher spatial resolution. This study focuses only on the spectral bands that have a counterpart on Sentinel-2.
Table 1 also highlights the differences in bandwidths for corresponding spectral bands of each satellite. In order to achieve a given Signal-To-Noise Ratio (SNR), the spectral band width is expected to decrease when the pixel size increases. The land surface spectral response is therefore strongly linked to the spatial resolution of the sensor [38]. In particular, Sentinel-2 acquires two different NIR bands: one at 10 m resolution and one at 20 m, hereafter referred to as NIR wide and NIR narrow. The bandwidth of NIR narrow is markedly smaller than that of NIR wide. Its bandwidth is slightly smaller than that of the NIR band of Landsat-8 despite a better spatial resolution, while the 10 m NIR wide has a slightly larger bandwidth than the SPOT-5 NIR band. Both Sentinel-2 NIR bands are considered, depending on the spatial resolution of the other bands with which they interact or the sensor used for the comparison.

2.1. Spectral Resolution for Spatial Object Detection

The spectral resolution refers to the ability to separate two land cover classes based on their spectral signature. Spectral resolution therefore sets the upper limit of the performance of a classifier using spectral information only. It depends on the number and types of spectra measured, the signal-to-noise ratio and the radiometric precision (pixel depth) of the sensors.
The spectral resolution is quantified based on the separability of an object belonging to a class of interest, the foreground, from a specific background in the spectral domain. In this context, the separability is defined as the minimum contribution of a foreground object to a simulated sub-pixel mixture that preserves its accurate detection. The foreground classes have been chosen within a set of key structuring landscape objects described in Section 2.1.1. The separability analysis, described in Section 2.1.2, has been performed for the spectral bands, for common spectral indices used to enhance the discrimination as well as for new indices based on the less common spectral bands available from Sentinel-2 (Table 2 [39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58]). For this purpose, pure spectral signatures of objects have been extracted from the images at well-known locations (see Section 3).

2.1.1. Class Pairs of Interest

Numerous landscape features of interest for biodiversity monitoring are potentially discernible at the Sentinel-2 spatial resolution. For the sake of conciseness, only four types of landscape features and the most common foreground/background pairs in the study area are considered for the analysis.
  • Water bodies: Besides their obvious importance to the hydrological cycle, a large number of species rely on the presence of small water bodies for their life cycle (fishes, batrachians, dragonflies...). The creation of water bodies in rural areas is furthermore encouraged by institutions through various policies such as the European Common Agricultural Policy (CAP). Small ponds are dynamic in time and could also be quickly filled or invaded by vegetation. As ponds are generally found in grassland areas, water bodies were paired with grassy background.
  • Roads: Roads are most of the time obstacles for animal movement, but are sometimes sought for food foraging (hunting areas). The focus was on small consolidated roads (bitumen or concrete) across crop fields and grassland. Different pairs were therefore considered: Road/Maize, Road/Bare soil, Road/Sugar beet and Road/Grassland.
  • Grass strips: Leaving grass strips provides corridors that improve landscape connectivity for bird and insect populations in crop-dominated landscapes as well as refuge zones for auxiliary crop species acting in biological pest control. They also play an important role in mitigating soil erosion. Those grass strips are most of the time located along crop fields, so that Grassland/Maize, Grassland/Sugar beet and Grassland/Bare soil pairs were considered.
  • Small woody patches: Hedges and isolated trees contribute to woody habitat connectivity and provide food and shelter to a large range of species. Broadleaved trees were paired with grass, maize and sugar beet because their contribution to the ecological network is of major importance in agricultural landscapes.
In addition to the detection of small spatial objects, the spectral separability of some similar spectral pairs has also been tested. First, the spectral signatures of the most frequent tree species in the study area, namely spruce (Picea abies, representing needleleaved species) and oak (Quercus sp., representing broadleaved species), have been compared. Second, the two main crop types in vegetative stage at the dates of acquisition (maize and sugar beet) were paired together. Finally, the potential to discriminate grassland use types, i.e., mowed or grazed (later referred to as grassland and pasture, respectively), was investigated. Figure 2 illustrates the spectral signatures of all nine considered classes.

2.1.2. Spectral Separability Analysis

The spectral separability analysis aims at identifying the minimum foreground spectral contribution to a given spectral or index value that allows distinguishing the foreground from the background. It runs in four steps: (i) the definition of Regions Of Interest (ROI); (ii) the simulation of the pure spectral signatures for the classes of interest; (iii) the mixing of these signatures; and (iv) the estimation of the separability metric.
First, ROIs were defined for each class on areas where the SPOT-5, Landsat-8 and Sentinel-2 images overlap (Section 3). They were manually delineated based on very high spatial resolution orthophotos [59] and field observations (Table 3). At least three different ROIs were sampled for each land cover with the following rules: (i) ROIs had to be homogeneous on the orthophotos; (ii) the same land cover had to be present on each image; and (iii) ROIs were selected at the center of land cover features in order to avoid boundary effects. Because of the small size of the ecological features of interest, samples were taken from larger land cover areas when necessary. Hence, pond, hedge and grass strip spectral signatures were inferred from lakes, forests and mowed grasslands, respectively. The road samples were extracted from large car parks, which were the spectrally closest land cover to the tarred roads.
Second, samples were randomly drawn from the spectral distributions estimated from a single population made of several ROIs for a given land cover class. Since some class distributions did not follow normal distributions according to preliminary Shapiro tests and visual inspection of the histograms, a non-parametric approach was implemented. Indeed, kernel density estimates of the spectral distribution provide more flexibility to handle complex distributions [60]. Traditionally, non-parametric sampling consists of three components: (i) sampling with replacement of random observations from the data; (ii) sampling of adjustment factors from a Gaussian kernel; and (iii) combining the two samples [61]. Following this scheme, for each band, a set of 10,000 points was randomly drawn with repetition from each pure class distribution (Equation (1)):
$$X_{\mathrm{sample}}^{\mathrm{class}} =
\begin{pmatrix}
\rho_{(1,1)} & \rho_{(1,2)} & \cdots & \rho_{(1,n)} \\
\rho_{(2,1)} & \rho_{(2,2)} & \cdots & \rho_{(2,n)} \\
\rho_{(3,1)} & \rho_{(3,2)} & \cdots & \rho_{(3,n)} \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{(k,1)} & \rho_{(k,2)} & \cdots & \rho_{(k,n)}
\end{pmatrix}_{\mathrm{class}}
\qquad \text{(one column per band, } band_1, band_2, \ldots, band_n\text{)}
\tag{1}$$
where ρ(i,j) is the i-th randomly sampled reflectance value from the pure class distribution for band j, n is the number of bands required for the computation of a given index (up to 5 bands) and k is set to 10,000. The adjustment factors were drawn from a multivariate Gaussian kernel with a null mean and an optimized bandwidth (Equation (2)). The bandwidth was estimated using the Sheather-Jones method, which is generic and close to optimal [62].
$$C_{\mathrm{Kernel}}^{\mathrm{class}} \sim \mathcal{N}(\mu, \Sigma) \tag{2}$$
with C_Kernel^class a k × n matrix randomly drawn from a multivariate normal distribution, μ a vector of zeros (n elements) and Σ the estimated matrix of kernel bandwidths (n × n elements). Finally, the sampled spectral values (X_sample^class) were linearly adjusted using the adjustment factors (C_Kernel^class) (Equation (3)). Out-of-range values were truncated to their closest valid point in order to keep all reflectance values between 0 and 1.
$$X_{\mathrm{simulated}}^{\mathrm{class}} = X_{\mathrm{sample}}^{\mathrm{class}} + C_{\mathrm{Kernel}}^{\mathrm{class}} \tag{3}$$
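The sampling scheme of Equations (1)–(3) amounts to a smoothed bootstrap. The following Python sketch is not the authors' code; it illustrates the three components under two simplifying assumptions: a Silverman rule-of-thumb bandwidth stands in for the Sheather-Jones estimator, and the Gaussian kernel is applied per band (diagonal Σ) rather than as a full multivariate kernel.

```python
# Smoothed-bootstrap sketch of the pure-class simulation (Equations (1)-(3)).
# Assumptions: Silverman bandwidth instead of Sheather-Jones, diagonal kernel.
import numpy as np

def simulate_pure_class(reflectances, k=10_000, seed=None):
    """reflectances: (n_pixels, n_bands) array of pure-class surface reflectances.
    Returns a (k, n_bands) array of simulated spectra."""
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = reflectances.shape

    # (i) sample with replacement from the observed pure-class pixels (Equation (1))
    x_sample = reflectances[rng.integers(0, n_pixels, size=k)]

    # (ii) draw adjustment factors from a zero-mean Gaussian kernel (Equation (2));
    # the per-band bandwidth below uses Silverman's rule as a stand-in estimator
    sigma = reflectances.std(axis=0, ddof=1)
    bandwidth = 1.06 * sigma * n_pixels ** (-1 / 5)
    c_kernel = rng.normal(0.0, bandwidth, size=(k, n_bands))

    # (iii) combine both samples and truncate to the valid reflectance range (Equation (3))
    return np.clip(x_sample + c_kernel, 0.0, 1.0)
```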
Third, synthetic spectral signatures (M), hereafter referred to as mixtures, were derived by weighted linear mixing of random samples from a pure foreground land cover class (X_simulated^foreground) with random samples from a background land cover class (X_simulated^background) in varying proportions (φ ∈ [0, 1]):
$$M = \phi \times X_{\mathrm{simulated}}^{\mathrm{background}} + (1 - \phi) \times X_{\mathrm{simulated}}^{\mathrm{foreground}} \tag{4}$$
Synthetic spectral indices were then computed based on the spectral mixture M .
Fourth, the overlap between the mixture and the pure background was computed for increasing background proportions in the mixture. The area of overlap between the pure background kernel density estimate (KDE) and that of the mixture (Figure 3) is estimated using Equation (5). It is directly related to the minimum error obtained when the Bayes optimal decision rule is used to classify two equiprobable classes.
$$S = \int_{x=\min(\mathrm{index})}^{\max(\mathrm{index})} \min\!\left(\mathrm{KDE}_{M}(x),\ \mathrm{KDE}_{X_{\mathrm{simulated}}^{\mathrm{background}}}(x)\right) dx \tag{5}$$
where S stands for the area of overlap, and KDE_M and KDE_{X_simulated^background} are the kernel density estimates of the mixture and of the pure background at a given spectral band or index value x, respectively. The area of overlap is equal to zero when the two distributions are completely separable and equal to one when they are identical. Under the assumption that both classes are equiprobable and that the distributions are accurately estimated, S corresponds to twice the classification error, i.e., if the distributions are identical (S = 1), the probability of misclassifying a pixel is 50% (= S/2). A conservative classification error of five percent was chosen as a reasonable limit below which a mixture is considered separable from the background. The separability criterion to consider a foreground proportion separable from a background was thus S ≤ 0.10.
The last step of the separability analysis consists in finding the smallest foreground proportion at which a mixture is separable from a pure background. Figure 4 illustrates the process of determining the minimum proportion of a foreground spatial object needed to detect it against a homogeneous background when the pure classes are separable. It shows a case where the mean value of the index is larger for the background class than for the foreground class. In this case, increasing the background fraction increases the mean and also the variance of the mixture. S remains below 0.1 until the background fraction exceeds 86% of the mixture, which means that the foreground can be accurately detected even if it contributes only a small proportion of the spectral signature. Below approximately 15% of foreground, the error probability climbs steeply and the sub-pixel object can no longer be accurately detected.
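To make the overlap criterion concrete, the sketch below (again an illustration, not the authors' implementation) mixes simulated foreground and background spectra according to Equation (4), computes the overlap S of Equation (5) with Gaussian KDEs, and searches for the smallest foreground proportion that keeps S ≤ 0.10. The NDVI-like index and the band order in the final comment are hypothetical placeholders.

```python
# Sketch of the separability search (Equations (4)-(5) and the S <= 0.10 criterion).
import numpy as np
from scipy.stats import gaussian_kde

def overlap_area(values_a, values_b, n_grid=512):
    """Area of overlap S between the KDEs of two 1-D samples of an index (Equation (5))."""
    lo = min(values_a.min(), values_b.min())
    hi = max(values_a.max(), values_b.max())
    grid = np.linspace(lo, hi, n_grid)
    dx = grid[1] - grid[0]
    overlap = np.minimum(gaussian_kde(values_a)(grid), gaussian_kde(values_b)(grid))
    return float(overlap.sum() * dx)           # simple Riemann approximation of the integral

def minimum_foreground_proportion(fg, bg, index_fn, s_max=0.10, step=0.01):
    """fg, bg: (k, n_bands) simulated pure spectra; index_fn maps spectra to a 1-D index.
    Returns the smallest foreground proportion still separable from the pure background
    (pure-class separability should be checked beforehand)."""
    pure_bg_index = index_fn(bg)
    for fg_proportion in np.arange(1.0, 0.0, -step):
        phi = 1.0 - fg_proportion                      # background proportion
        mixture = phi * bg + (1.0 - phi) * fg          # Equation (4)
        if overlap_area(index_fn(mixture), pure_bg_index) > s_max:
            return min(1.0, fg_proportion + step)      # last proportion that was separable
    return step                                        # separable down to the smallest tested proportion

# Hypothetical usage with an NDVI-like index, assuming columns (blue, green, red, nir):
# ndvi = lambda x: (x[:, 3] - x[:, 2]) / (x[:, 3] + x[:, 2])
# minimum_foreground_proportion(fg_simulated, bg_simulated, ndvi)
```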

2.2. Effective Spatial Resolution

The most common measure of sensor spatial resolution is the ground sample distance, also known as the ground-projected instantaneous field of view, which can sometimes also refer to the distance between two adjacent pixel centroids as measured on the ground [63]. However, the spatial quality of an optical remote sensing instrument involves more aspects of the imaging system than the sole characteristics of the pixel resolution [64]. Schowengerdt [63] provides extensive definitions and examples of aspects of spatial resolution, representing the cumulative result of optical properties of the sensing system. Generally, spatial resolution involves the interaction between the ground sample distance and the point spread function (PSF), which models the blurring effect due to all elements of the imaging system.
A plethora of methods has been developed to assess the PSF or the directional line spread functions (LSF) pre-launch or on orbit from remote sensing imagery (see Pagnutti et al. [65] for a review). Post-launch methods rely on the presence of highly contrasted targets on the ground, which can be knife edges (edge methods) [66,67,68,69], a straight and narrow long object (pulse methods) [66] or a point source (impulse methods) [70]. Given the landscape focus of this study, natural targets were preferred over man-made ones as the latter are not so relevant for landscape ecology. To identify potential targets, several suitability characteristics should be met: maximum contrast, uniformity, sharp transition and proper orientation with respect to the satellite orbit. Natural targets matching these requirements are scarce in the natural landscape [69]. Following these recommendations, transitions between bare fields and green crop fields were considered. Since Sentinel-2 is in a polar orbit at an inclination of 98°, the near-optimal angle for the field boundary orientation is 98° ± 8° for along-scan estimations and 8° ± 8° for along-track estimations [69]. Landsat-8 and SPOT-5 are also on polar orbits with a 98° inclination.
The sensor PSF is a two-dimensional function usually assumed to be separable into LSFs in the along-scan (LSF s) and along-track (LSF t) directions, so that PSF = LSF t × LSF s. Similarly, the PSF is also separable as PSF = LSF row × LSF col, where LSF row and LSF col are respectively along the rows and the columns of the gridded product. In this study, the LSF was estimated using the three-step approach proposed by Wenny et al. [69].
In order to accurately measure the LSF, the exact sub-pixel location of the edge is first estimated by fitting a modified Fermi function (Equation (6)) to each row profile.
$$y(x) = d + \frac{b - d}{1 + \exp(-s(x - e))} + g\,x \tag{6}$$
where d is the mean reflectance value on the dark side of the edge, b represents the mean reflectance value on the bright side, s is the slope of the edge, e corresponds to the actual edge position and g is the linear coefficient of the modified Fermi function. The parameters of the Fermi function were determined by nonlinear least squares fitting. As the parameter e represents the sub-pixel location of the edge, row profiles can be aligned to obtain the edge spread function (ESF). The linear portion of the modified Fermi model was subtracted from the ESF profiles, which were then scaled from zero to one by dividing by the height of the edge.
After this preliminary step, the ESF profiles were smoothed using a spline function which subsequently served to provide regular spacing. The spline filter was implemented to provide smoothed ESF profiles with a step size of 0.05 pixels, i.e., 20 data points per pixel.
Finally, the ESF profiles were differentiated to obtain the LSF, a one-dimensional approximation of the PSF. Again, the LSF was normalized between zero and one. The empirical resolution is commonly defined as the width within which the PSF drops to half of its maximal value, called the Full Width at Half Maximum (FWHM). To provide confidence in the PSF estimates, the edge signal-to-noise ratio was computed as the ratio of the difference in mean digital number between the dark and bright sides of the edge to the mean of the standard deviations of the dark and bright sides of the image edge.
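A compact illustration of the edge-method chain described above is given below. It is a sketch under stated assumptions rather than the authors' implementation: the initial parameter guesses, the spline smoothing factor and the single-profile handling are arbitrary choices introduced for the example.

```python
# Sketch of the edge method: fit the modified Fermi model (Equation (6)) to a row
# profile, then smooth/differentiate the ESF to obtain the LSF and its FWHM.
import numpy as np
from scipy.optimize import curve_fit
from scipy.interpolate import UnivariateSpline

def fermi(x, d, b, s, e, g):
    """Modified Fermi edge model: dark level d, bright level b, slope s,
    sub-pixel edge position e, linear trend g (Equation (6))."""
    return d + (b - d) / (1.0 + np.exp(-s * (x - e))) + g * x

def fit_edge(profile):
    """Fit Equation (6) to one reflectance profile across the edge.
    Returns (d, b, s, e, g); e is the sub-pixel edge location used to align profiles."""
    x = np.arange(profile.size, dtype=float)
    p0 = [profile.min(), profile.max(), 1.0, profile.size / 2.0, 0.0]  # rough initial guess
    popt, _ = curve_fit(fermi, x, profile, p0=p0, maxfev=10_000)
    return popt

def lsf_fwhm(esf_x, esf_y, step=0.05):
    """Smooth an aligned, normalized ESF with a spline, differentiate it to the LSF,
    and return the full width at half maximum (in pixel units)."""
    order = np.argsort(esf_x)                                      # spline needs increasing x
    spline = UnivariateSpline(esf_x[order], esf_y[order], s=1e-4 * len(esf_x))  # smoothing factor is arbitrary
    grid = np.arange(esf_x.min(), esf_x.max(), step)               # 20 samples per pixel
    lsf = np.abs(spline.derivative()(grid))
    lsf /= lsf.max()                                               # normalize to [0, 1]
    half_max = grid[lsf >= 0.5]
    return half_max.max() - half_max.min()
```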
The empirical FWHM has been derived for each sensor by selecting one band for the main pixel footprint sizes. While it can be reasonably assumed that the FWHM is similar for each band, there could be small differences, especially when the detector type is different. Indeed, for Landsat-8, bands 1 to 4 and 8 of OLI use Silicon PIN detectors while bands 6, 7 and 9 use Mercury–Cadmium–Telluride detectors [71]. For Sentinel-2, ten monolithic CMOS detectors are on a first focal plane for the VNIR bands and three Mercury–Cadmium–Telluride detectors hybridized on a CMOS read-out circuit are located on another focal plane for the SWIR bands [72]. Based on the similarity of the PSF from the pre-launch characterization of Landsat-8 and the less precise measurements on Earth targets, the PSF of the band offering the best contrast with the selected targets is used in this study, namely the Red band for all sensors, plus the SWIR and the first Red-edge band for Sentinel-2.

2.3. Potential for Object Detection

A spatial object can be detected from a homogeneous background if it remains spectrally separable despite the blurring effect of the PSF. The combination of the spectral and the spatial resolution therefore allows assessing the sensor’s potential to detect small spatial objects in the image. The aim is to establish the theoretical minimum size of a spatial object, belonging to a specific class and surrounded by a homogeneous background, that can be detected in a Sentinel-2, Landsat-8 or SPOT-5 pixel taking into account the PSF. The PSF of the most limiting band regarding spatial resolution has been used for each index, under the assumption that bands with equal spatial resolution have similar PSFs. The theoretical proportion of the object shape in the pixel is adjusted with the PSF of each sensor, which allows the actual size of the object observed in the pixel to be approximated.
Three object shapes have been studied: continuous linear objects (typical for roads, rivers, grass strips and hedges) crossing the pixel at its (i) center (LC) or (ii) border (LB), and (iii) compact objects (like small water bodies or isolated trees) centered in the pixel and modeled as squares (CO) (Figure 5). These three simplified shapes represent the extreme cases of the most common shapes observable for the classes of interest. It means, for example, that a road or a river that curves across the pixel could have a minimum detectable size intermediate between the LC and the LB cases.
Given the estimated FWHM, the PSF was modeled as a bi-dimensional Gaussian function with a mean of zero and a standard deviation of FWHM/2.355 [73], assuming that the PSF is homogeneous in the along-track and across-track directions (Figure 6a). Pixels farther from the center than the 95% confidence interval were assumed not to contribute to the PSF, i.e., their relative contribution is considered null. To identify the actual minimum detectable area for the LB, LC and CO cases, the width of the foreground object (Figure 6b) was incrementally increased until the integral of the convolution of the PSF over the object reached the minimum foreground mixture proportion computed in Section 2.1.2 (Figure 6c). In other words, the iteration stops when the contribution of the foreground to the measured pixel reflectance is large enough to detect the presence of this foreground object with less than 5% error.
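The following sketch shows one possible numerical implementation of this procedure (not the authors' code): the PSF is discretized as a 2-D Gaussian on a fine grid, the object is a binary mask (centered line for LC, centered square for CO), and the object width grows until its PSF-weighted share of the central pixel signal reaches the minimum separable foreground proportion. The example values in the last comment (a 22.06 m FWHM and a 0.19 proportion) are individually quoted in the Results section but combined here purely for illustration.

```python
# Sketch of the minimum-detectable-width search of Section 2.3.
import numpy as np

def psf_weights(fwhm, pixel_size, oversample=100, extent=3):
    """2-D Gaussian PSF (sigma = FWHM / 2.355) sampled on a fine grid centered on one
    pixel; contributions beyond 'extent' pixels are truncated, echoing the 95% cut-off."""
    sigma = fwhm / 2.355
    half = extent * pixel_size
    coords = np.linspace(-half, half, 2 * extent * oversample)
    xx, yy = np.meshgrid(coords, coords)
    w = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return xx, yy, w / w.sum()

def min_detectable_width(fwhm, pixel_size, min_fg_proportion, shape="LC", step=0.1):
    """Smallest object width (m) whose PSF-weighted contribution to the central pixel
    reaches min_fg_proportion; shape 'LC' = line through the center, 'CO' = centered square."""
    xx, yy, w = psf_weights(fwhm, pixel_size)
    width = step
    while width < 10 * pixel_size:
        if shape == "LC":
            mask = np.abs(xx) <= width / 2.0                                   # continuous linear object
        else:
            mask = (np.abs(xx) <= width / 2.0) & (np.abs(yy) <= width / 2.0)   # compact square object
        if w[mask].sum() >= min_fg_proportion:
            return width
        width += step
    return float("inf")

# Illustrative call only (combination of quoted values, not a reproduced result):
# min_detectable_width(fwhm=22.06, pixel_size=10, min_fg_proportion=0.19, shape="CO")
```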

3. Study Area and Data

Two study areas were selected to fulfill the needs of both the spectral analyses and the spatial resolution (Table 4).
The spectral analysis was performed based on images of Southern Belgium (Figure 7). This area embraces a large diversity of fragmented landscapes and is therefore well suited for analyzing mixed pixels of different types. The forest type location relied on the 2007 Land Cover Map of Wallonia (Carte d’Occupation des Sols de Wallonie (COSW)) [74], a high resolution database (1/10,000 scale) with a large thematic precision. Crop types were verified in the field in early October, soon after the Sentinel-2 and Landsat-8 image acquisitions on 1 and 4 October 2015, respectively. The analysis was also performed on the closest cloud-free SPOT-5 (Take 5) acquisition covering Southern Belgium, on 4 July 2015. Extreme caution was thus exercised in the comparison among sensors as the state, and thus the spectral response, of the vegetation may have significantly changed during this time interval. Figure 7 illustrates the spatial resolution differences of the three sensors of interest when imaging a 30 m wide pond.
As the PSF is location-independent, acquisitions over Sacramento (CA, USA) in September 2015 were selected: the average field size in Belgium was indeed too small for a proper PSF estimation, and data over Sacramento were available for each sensor.
For the three sensors, Level-1C products, i.e., top-of-atmosphere reflectances in cartographic geometry, were available for the two selected sites. The Sentinel-2 image was obtained during the commissioning phase of the satellite. The geolocation of the provided Sentinel-2 image was evaluated visually and achieved a geometric quality RMSE of approximately 20 m. The images were therefore manually co-registered based on a 25 cm orthophoto of the Walloon region with a first-order polynomial model and resampled with the nearest neighbor method in order to preserve the radiometry of the pixels. Eventually, the RMSE of the co-registration model was 4.9 m. Since atmospherically corrected images are essential for a reliable assessment of spectral indices and for the comparison of products, Level-1C data have been processed to Level-2A (top-of-canopy), taking into account the effects of aerosols and water vapor on reflectances. These corrections were realized using the Sen2Cor tool [75] for the Sentinel-2 images, the L8SR algorithm for the Landsat-8 images [76] and MACCS for the SPOT-5 images [77]. The Sen2Cor algorithm uses Sentinel-2 bands for atmospheric corrections whereas the MACCS tool relies on US National Centers for Environmental Prediction water vapor, air pressure and air temperature data and on Total Ozone Mapping Spectrometer ozone data. The L8SR algorithm calculates the surface pressure internally based on the elevation and uses the Moderate Resolution Imaging Spectroradiometer (MODIS) Climate Modeling Grid-Aerosol (CMA) product for water vapor, air temperature and aerosol optical thickness estimates, and the MODIS CMG coarse resolution ozone product for ozone data. Due to processor constraints, the manual co-registration of the Sentinel-2 images has been applied after atmospheric correction.

4. Results

4.1. On the Spectral Resolution

4.1.1. Separability of Foreground/Background Pairs with Sentinel-2

The discrimination power of each pair of classes for each index is presented in Figure 8. Each pair of classes is ordered as foreground/background, and the limit proportion is the background proportion that still allows a classification error of 5% as defined by the separability metric. The higher the limit proportion of the background, the smaller the contribution of the foreground needed to classify a given pixel as a foreground feature. Green shades illustrate cases of mixed pixels where the foreground class is actually discriminated even if the background contribution to the mixed pixel signal is higher than the foreground one. The reciprocal is also true, i.e., a limit proportion of 10% means that a small feature of the background class will still be accurately separated from the foreground even though the foreground proportion inside the pixel is 90%. In the case of red boxes (limit proportion close to 0%), only pure pixels of foreground can be detected, indicating a very high index similarity between the background and the foreground. White indicates that the class accounting for the majority of the pixel reflectance will be assigned. Finally, black boxes indicate that the background and the foreground are not accurately separable based on the selected index with the imagery at hand. The pairs of classes are ordered following the sum of the discrimination power over all indices, i.e., top rows are on average easier to discriminate.
The upper half of Figure 8 shows that most indices allow discriminating vegetation from non-vegetation (water, road and bare soil), essentially thanks to vegetation indices. Pairs in the lower part of Figure 8 are hardly separable: Grassland/Pasture, Needleleaved/Broadleaved, Broadleaved/Sugar beet and Maize, as well as Roads/Bare soil. For these pairs, only a few indices are efficient. Nevertheless, all the foreground classes among the selected pairs can be distinguished from their background with at least one index or band, except mowed grassland from intensively grazed pasture (a pair that is not separable with Landsat-8 or SPOT-5 either).
As expected, water is well detected using spectral bands in the NIR, the SWIR and the Red-edge: a mixture of water and grassland is classified preferentially as water if approximately 20% of its signal comes from water. The best band for water detection however depends on the type of grassland being considered. On average, the most reliable band for detecting water in our study area is Red-edge 2, followed by NIR and Red-edge 1. On the other hand, ratio indices commonly used for water detection only become accurate when the sub-pixel proportion of water reaches around 75%. NDWI 1, NDWI 2 and NHI show similarly weak results, which could be partly due to the fact that some of those indices were designed for semi-arid regions where discrimination against various bare soils is the primary issue. For the purpose of detecting small ponds in grasslands, where they play a major ecological role especially in temperate ecosystems, vegetation indices such as the PVI appear more sensitive.
Most indices provide a high level of discrimination for roads surrounded by a green cropland background (maize or sugar beet). In this case, NDTI and STI are the top indices because a road proportion of less than 10% is enough for an accurate detection against both crop types. As expected, separability with grassland is also very good but it is a couple of percent lower than the separability against crops. The best index is then SR Blue/Red-edge, which is matched by the individual Blue band. The BAI and SR NIR narrow/Blue display the best performances for detecting roads surrounded by bare soil (separable for up to 56% of background in the mixture). Those indices both use NIR together with Blue, which is individually not discriminant enough for mixture separation in this case. Overall, our results confirm that BAI is the best index for road detection, with NDWI 2 a close second.
The SWIR bands play a major role in the detection of grassy headlands, which are otherwise poorly detected against a green crop background. SWIR 1 is the only spectral band that could be used to separate grass strips mixed with more than 50% of sugar beet or maize at the date of acquisition. The best indices to separate grassland from sugar beet or maize are RedSWIR 1 and NDWI 1, respectively. These indices use SWIR 1 in combination with another band. Other spectral bands are also interesting in this context: the Blue, Green, Red-edge 1 and Red-edge 2 bands bring additional discrimination against maize, and SWIR 2 is very close to SWIR 1 with sugar beet.
Several spectral bands can be used to discriminate broadleaved trees or hedges from a grassy or crop background, but mixing, even with a small background proportion, dramatically reduces the discrimination power. The most efficient spectral band for the comparison with grassland is Green, followed by Red-edge 1, and both are better than all the indices tested in this study. Green is also the best feature with a maize background and second best to SWIR 2 with a pasture background. It is however not useful when the background is sugar beet: in this case, Red-edge 2 and Red-edge 3 are the only individual spectral bands that could be used for tree detection, and several indices perform better (with MSI on top).
As expected, the SWIR bands are found useful to discriminate the different forest types: SWIR 2 is the best feature with up to 48% of background, and SWIR 1 is the third one with 37%. At the scale of forest patches, these bands could therefore be the most interesting. It is also worth noting that the NIR narrow band contributes to the discrimination between broadleaved and needleleaved forest while this is not the case for the NIR wide band. This difference seems to be the only significant one between those two bands for the selected classes.

4.1.2. Comparison with SPOT-5 and Landsat-8

The spectral discrimination power of Sentinel-2 was compared with those of SPOT-5 (Figure 9) and Landsat-8 (Figure 10). Blue tones represent cases for which the other sensor outperforms Sentinel-2 and red tones the opposite. Boxes overlaid with crosses indicate that only one of the two sensors was able to separate the two classes of interest: a cross on red shades marks cases for which only Sentinel-2 succeeded in discriminating the foreground class and conversely for the blue shades. The value represented is therefore the limit proportion observed for the discriminating sensor only.
When looking at equivalent indices, results highlight that Sentinel-2 generally outperforms SPOT-5 for discriminating mixed pixels from a background. However, SPOT-5 performed slightly better for pairs including a bare soil background in all spectral bands. This is particularly true for the Road/Bare soil pair, where SPOT-5 is systematically better than Sentinel-2. Other noticeable differences include the performance of SPOT-5 for the Broadleaved/Sugar beet pair and the very good relative results of the SWIR band of Sentinel-2 for the discrimination of the vegetation types. These differences could however be due to the different dates of acquisition.
Compared with Landsat-8 (Figure 10), small proportion differences (most of the time less than 0.2) prevail. The Green band of Landsat-8 is better than that of Sentinel-2 for pairs including a bare soil background, while Sentinel-2’s Blue and SWIR bands are most of the time better than those of Landsat-8. The main difference between the Landsat-8 and Sentinel-2 spectral discrimination power is observed for the Road/Bare soil pair. This could be due to the contamination of the Landsat-8 road samples caused by its larger pixel size.
Figure 9 and Figure 10 also illustrate the good complementarity of the spectral band selection with Sentinel-2. On average, the discrimination power increased relatively more with Sentinel-2 indices than with Landsat-8 or SPOT-5. Indeed, while individual spectral bands performed better with the latter when the background is a bare soil, the indices derived from Sentinel-2 showed in turn the largest discrimination power. The main exception to this overall improvement is the NHI.

4.2. On the Spatial Resolution

Table 5 summarizes the five spectral bands for which the PSF was empirically estimated. The PSF estimations range from 20.11 m for the Red band of SPOT-5 to 51.05 m for the Red band of Landsat-8. Considering channels with the same spatial resolution of 10 m, SPOT-5 Red band (20.11 m) is slightly finer than Sentinel-2 Red band (22.06 m) but not significantly different. Relatively to the pixel footprints, the Red-edge band of Sentinel-2 and the Red band of Landsat-8 outperform the other tested bands.
In addition to the standard deviation (SD), the signal-to-noise ratio (SNR) was calculated for each of the edge spread functions that were generated [78]. Guidelines suggest that SNR ≥ 100 is optimal for accurate PSF estimation but that consistent estimates can be obtained from an SNR of 50 or greater [69]. Values for Landsat-8 and Sentinel-2’s SWIR should thus be handled with care.

4.3. On the Potential for Object Detection

Results from the spatial and spectral resolution assessments were combined to investigate Sentinel-2’s potential for detecting landscape features of environmental relevance. Table 6 shows the minimum width that is required to accurately detect specific foreground objects from their background using the best index or individual band for each sensor. The detectable sizes of LC and LB objects are similar and show sub-decametric potential for Sentinel-2. A closer look at the less separable pairs, e.g., Broadleaved/Maize, shows that the minimum width is consistently smaller for LC than for LB. This was expected as a higher proportion of the foreground contributes to the total signal in the LC case. By contrast, the effect of the shape is substantial for CO objects. Unlike for linear patterns, only a few sub-pixel CO objects could be detected using the Sentinel-2 10 m bands. This is explained by the PSF: continuous linear objects influence the reflectance of the pixel even when they do not fall into it, whereas compact objects sometimes need to be larger than the pixel size to be detectable.
As mentioned above, Landsat-8 performs similarly to Sentinel-2 (10 m) when considering the spectral resolution component only: they are both able to separate all pairs. However, when including the spatial resolution component, the object size detectable by Sentinel-2 is almost always smaller, even when the spectral separability difference is high, e.g., Broadleaved/Maize. With respect to small object detection, Landsat-8 only outperforms Sentinel-2 in one case, Grassland/Maize, while the average minimum detection width is clearly in favour of Sentinel-2 (15.7 m for Landsat-8 vs. 8.1 m for Sentinel-2).
Against SPOT-5, the main difference is spectral. One fifth of the pairs could indeed not be separated with SPOT-5 due to its lower spectral resolution. With its similar spatial resolution, SPOT-5 was better than Sentinel-2 in three out of 15 cases, namely Broadleaved/Sugar beet, Water/Pasture and Road/Bare soil. On average, the minimum detection size of Sentinel-2 is also better than that of SPOT-5 (7 m for Sentinel-2 vs. 10.3 m for SPOT-5, not accounting for the pairs that are not separated by SPOT-5).
Only large isolated broadleaved trees planted in a grassland (>19 m) or pasture (>15 m) are likely to be detected. Most hedges should therefore remain undetected, but large isolated or aligned trees would have a crown size above this threshold. However, Red-edge-based indices allow detecting smaller hedges along maize fields. With pasture in the background, SPOT-5 shows performances similar to Sentinel-2.
For grass strips, the Red-edge bands offer a good alternative to the 10 m bands for the discrimination against bare soil. This clearly highlights the interest of the Red-edge bands even at a coarser resolution. Otherwise, discrimination between grass strips and green crop fields cannot be achieved at the sub-pixel level (<10 m) at the date of acquisition, even for linear features.
The minimum road width in rural areas in Belgium can be as low as 3.5 m, which means that such roads could be detected using Sentinel-2 data when crossing grassland, maize and sugar beet fields. Roads crossing bare soils are, on the other hand, not detectable. These results are markedly better than those obtained with the Landsat-8 multispectral bands and slightly better than with SPOT-5.
Water ponds (CO objects) surrounded by a grassy background are difficult to separate from their background using the 10 m Sentinel-2 bands. Taking into account the PSF and despite a high separability result (limit proportion of 0.81), Sentinel-2 (10 m) can only detect water bodies larger than 11 m, thus larger than its 10 m pixels. SPOT-5 (10 m) needs even wider water bodies, ranging from 12 m for water in pasture to 15 m for water in grassland. Sentinel-2 20 m bands (Red-edge and SWIR) and Landsat-8 could detect water bodies below their resolution. When looking at the center or border linear crossing objects, which have similar results, Sentinel-2 reveals a good potential to detect narrow continuous linear water courses (from 5 m wide for a centered linear water object against a grassland background). Such results show the ability of Sentinel-2 to detect rivers and streams at a finer scale than the 10 m pixel and are promising.

5. Discussion

This research assessed the potential performances of Sentinel-2’s resolution for small landscape feature detection and compared its performances with similar sensors in terms of spatial and spectral resolutions. A methodological framework was proposed to assess the ability of three different sensors (Sentinel-2, Landsat-8 and SPOT-5) to detect sub-pixel landscape features contributing to the ecological network. This methodological framework is generic and could be applied to other landscape features or other sensors to characterize the potential of passive optical remote sensing data in a wide range of applications. The spatial and spectral resolutions were assessed separately at first and were then combined to evaluate the theoretical performance of the sensors. The spectral bands shared by the three sensors provided similar discrimination power. However, the analysis showed that Sentinel-2 outperformed the other sensors for most of its indices. By combining high spatial and spectral resolutions, the overall performance of Sentinel-2 for small landscape feature detection placed it above SPOT-5 and Landsat-8. The effect of pansharpening on the three images should however be further investigated and is likely to bring SPOT-5 on top for more pairs.
Based on this preliminary study, the Green band of Sentinel-2 seems to be its weak point relative to the other sensors, while the two SWIR bands show excellent relative and absolute performances. The major spectral difference between Sentinel-2 and Landsat-8 is the presence of Red-edge bands. Those bands, located at the transition between the Red and standard NIR, were added for their potential to discriminate vegetation status and types. In this study, those bands did not markedly improve the separability between classes despite testing a large array of band combinations. However, they do bring some nuances that could be used for specific applications (such as biophysical variable estimation) and it was shown that they could separate some foreground/background pairs better than the 10 m bands. In this context, the second Red-edge band was the most promising for the discrimination of the crop types versus grassland at the date of acquisition. These results correspond however to the first radiometric resolution set for the pre-operational Sentinel-2 images, which was later enhanced but was not available for the tested image.
The spectral separability analysis focused on mono-temporal images. Yet, class separability is a function of the time of observation and evolves along the year, especially when comparing crop types [79], but also in the case of other vegetated areas. Both the use of temporal metrics and the selection of key phenological states are likely to improve the discrimination. As the scope of this paper was to assess the resolution of the sensor, an analysis with the first available images provides a fair comparison between the sensors. However, because of the persistent cloud coverage over Belgium, the closest cloud-free available SPOT-5 in the Take 5 time-series was in early July while the Landsat-8 and Sentinel-2 data were acquired at the beginning of October. This time lag and the resulting differences in phenological stages with SPOT-5 could introduce some inconsistencies when comparing SPOT-5 results to the other sensors. Even if the classes were carefully selected to minimize a potential phenological effect on the class discrimination, it is difficult to assert too firmly the spectral differences between Sentinel-2 and SPOT-5. Yet, one can conclude that the spectral bands provided by Sentinel-2 are very competitive against two reference sensors for classification and object detection.
Among the broad range of selected indices, a relative homogeneity of response was apparent despite their distinct respective purposes. As an example, indices as diverse as the Built-up Area Index, the RTVICore and the NDWI showed results similar to vegetation indices for separating class pairs such as Sugar beet/Bare soil, Roads/Maize or Grassland/Bare soil. Results also showed that spectral bands are sometimes more efficient than the dedicated indices derived from them, but operational applications should also consider the noise of the signal (e.g., topographic effects), which was not taken into account in this study. Finally, using multispectral bands together is likely to improve the detection threshold in most cases, while this work focused on each band individually. The relatively higher performances of Sentinel-2 indices compared with Landsat-8 indices suggest that Sentinel-2 should perform very well in multiband analysis.
The study highlighted the important fact that some indices, and also some spectral bands, are able to detect land cover classes that contribute only slightly to the pixel reflectance. While this sensitivity is sought for feature detection, the bias in the detection threshold should be taken into account for land cover classification. Indeed, using discrete classification methods that do not incorporate sub-pixel variations would affect downstream analyses such as the quantification of the class areal extent [80] and the qualification of landscape patterns with metrics [81,82]. It is indeed well documented that crop area cannot be accurately estimated by counting the pixels belonging to each crop type class [83]. The results shown here nicely illustrate the reason for this, as spectral separability was found not to be linearly related to the sub-pixel proportions. This implies that the commission and omission errors are not counterbalanced: the area of the class favored by the index will be systematically overestimated and conversely. In a fragmented landscape, where edge pixels represent a large share of a field's pixels, this bias should not be neglected. For instance, the area of a sugar beet field of 4 ha (mean field size in Belgium) surrounded by a road (detected at 4 m) will potentially be underestimated by 12%. This bias becomes even more drastic at coarser spatial resolutions, and the underestimation may reach 48% with Landsat-8.
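One way to arrive at a figure of this magnitude, under the assumption (not stated in the text) of a square 4 ha field of 200 m × 200 m whose 10 m border pixels are assigned to the road class as soon as they contain 4 m of road, so that up to 6 m of field is lost along each side, is:

$$\frac{(200 - 2 \times 6)^2}{200^2} = \frac{188^2}{200^2} \approx 0.88,$$

i.e., an underestimation of roughly 12% of the field area; the same reasoning with 30 m pixels removes up to 26 m per side and yields an underestimation of the same order as the 48% quoted for Landsat-8.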
A conservative misclassification probability of 5% with equiprobable classes was arbitrarily selected because the detection of rare foreground objects is very demanding in terms of accuracy. For land cover classification, larger errors are usually tolerated and pure pixels are the majority. To put the results in a land cover classification perspective, pairwise error rates for pure foregrounds and backgrounds are given in Figure 11. Good (80%) pairwise separability results are achievable with pure Sentinel-2 pixels even for the most difficult classes tested in this study. This corroborates the good classification results from other studies using bands and indices derived from Sentinel-2 data. Due to the high similarity between their common spectral bands, Figure 11 could also be used to assess the classification potential of Landsat-8.
For water bodies’ detection, Du et al. [14] used NDWI 2 and MNDWI, which is equivalent to the NHI in terms of information content as it is its mathematical opposite. Our results show that MNDWI performs equally to NDWI 2 for water bodies mapping, although Du et al. [14] indicate a better discrimination of water using MNDWI. Additionally, several individual bands such as Green, NIR wide, SWIR, NIR narrow and the three Red-edge bands perform equally well, with less than 5% classification error probability (Figure 11). However, NDWI 1 does not achieve the same success rate for water detection, as it has an error probability ranging from 0.15 for water in grassland to 0.05 for water in pasture.
For crop and tree type identification, Immitzer et al. [16] underlined the importance of Red-Edge 1, Blue and SWIR bands. Considering maize and sugar beet alone only confirmed the SWIR band while Red-Edge 2 and 3 outperformed Red-Edge 1. Differences might be due to differences in the crop types of interest; crop classification results from Immitzer et al. [16] were mainly driven by winter wheat and maize, which suggests that different spectral bands enhance different pairwise comparisons.
For forest types, only the SWIR bands emerged as discriminant channels when they are considered alone. Considering Red-edge bands for the discrimination of pure pixels, errors spread around 5 % which is acceptable for thematic classifications (see Figure 11).
Large isolated trees or shelterwood strips are likely to be detected with Sentinel-2. The detection of hedges and isolated trees could further depend on the sun-object-sensor geometry. Considering the large swath width and the overpass time (10:30 A.M.), the parallax shift and the shadows contribute to a biased observation of vertical objects, which is a function of the sun position, the orientation of the observed object and the satellite viewing angle [84]. Although Sentinel-2 captures near-nadir views, the viewing angle ranges between ±11.5° because of its 290 km swath width. The maximum parallax shift on a flat surface is therefore up to about 1/5 of the object height. The shadows can be even larger, especially at high latitudes during winter. For example, at 50° North, shadows in winter would be 4 times the height of vertical objects (0.6 times in summer). As a consequence, the contribution of shadows to the detection of trees and hedgerows should be further investigated.
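A rough geometric sketch of these orders of magnitude, using approximate solar elevations for a 10:30 overpass at 50° N (about 14° at the winter solstice and 58° at the summer solstice; these elevations are assumptions introduced here rather than values from the paper), is:

$$\text{parallax shift} \approx h \tan(11.5^{\circ}) \approx 0.2\,h, \qquad \text{shadow length} = \frac{h}{\tan(\theta_{\mathrm{sun}})} \approx 4\,h \ \text{(winter)} \ \text{or} \ 0.6\,h \ \text{(summer)},$$

with h the height of the vertical object.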
These results are of special interest in the context of the greening of the European CAP, which supports agro-environmental elements such as ponds, hedges, isolated trees and headlands. For that reason, these landscape elements are evolving rapidly and positively impact the landscape patterns. To be considered eligible for the CAP, such features must comply with a set of criteria regarding, among others, their shape (length and width): a stretch of standing water must be larger than 25 m², grassy strips must have a width that exceeds 12 m at any point, and hedges and shelterwood strips must measure at least 20 m long and at most 10 m wide [85]. With Sentinel-2, only ponds larger than 100 m² could be detected, which is four times the minimum area. On the contrary, the requirements for headlands are met, as grassy strips 6–7 m wide could be identified if the surrounding background is bare soil, i.e., early or late in the season. Yet, when those grassy strips are surrounded by crops (maize or sugar beet), the minimum detectable width increases to 14 to 16 m.

6. Conclusions

This study analysed pre-operational Sentinel-2 images to rigorously evaluate their potential for detecting sub-decametric landscape features of environmental interest. Through an innovative framework that evaluates the spatial and spectral performances separately, minimum detectable object sizes were identified for various foreground-background pairs (grassland, crops, water, roads, etc.) and specific spatial arrangements. The analysis was extended to the closest matching available satellite systems: Landsat-8 and SPOT-5. Results confirm that Sentinel-2 data effectively combine the spectral resolution of Landsat-8 with the spatial resolution of SPOT-5. In other words, some small landscape features that are highly separable in the spectral domain of Landsat-8 but remain undetectable due to its spatial resolution are now accurately detected with Sentinel-2. Conversely, thanks to the larger spectral resolution of Sentinel-2, it is possible to detect small landscape objects formerly undetectable in the spectral domain of SPOT-5. In addition, the spectral analysis highlighted the value of specific spectral bands for class discrimination, e.g., the Red-edge bands for separating maize from grassland and the SWIR bands for separating different forest types.
Together with Sentinel-2’s 10-day revisit period (5 days once Sentinel-2B is launched in 2017), these results are therefore promising for the use of Sentinel-2 data to monitor natural resources. Sentinel-2 offers new perspectives for the study of landscape ecology processes. For example, mapping the grassy strips supported by the European Common Agricultural Policy becomes feasible. Similarly, Sentinel-2 should also detect narrow continuous water courses 5 m wide.
The methodological framework proposed in this study could be used to compare other spaceborne instruments, independently of the choice of a classifier or the landscape structure. Nevertheless, multivariate and multi-temporal separability analysis would give more information about the overall potential of the sensors. In the future, the proposed experimental framework could also support the assessment of the fitness for purpose of a given sensor prior to its launch.

Acknowledgments

The Sentinel-2 data were obtained thanks to the European Space Agency, as a champion user of the sensor. The Landsat data were obtained through the online Data Pool at the NASA Land Processes Distributed Active Archive Center (LP DAAC), USGS/Earth Resources Observation and Science (EROS) Center, Sioux Falls, South Dakota. The SPOT-5 imagery was obtained under the SPOT-5/Take 5 programme. Imagery is copyrighted to CNES under the mention: “CNES 2016, all rights reserved. Commercial use of the product prohibited”. The orthophoto of 2015 was obtained from the Walloon Region (LIC 160210-0951, all rights reserved to SPW). Support (i) from the Belgian National Fund for Scientific Research (through FRIA and FNRS grants) and (ii) from the Wallonia-Brussels Federation (through the Lifewatch-WB project) is acknowledged. We greatly acknowledge ESA’s support for this study through the access to Sentinel-2 data during the commissioning phase.

Author Contributions

All authors contributed to the design of the experiment and to its undertaking. The paper was collaboratively written on the Overleaf platform.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Antrop, M. Reflecting upon 25 years of landscape ecology. Landsc. Ecol. 2007, 22, 1441–1443. [Google Scholar] [CrossRef]
  2. Hilty, J.A.; Lidicker, W.Z., Jr.; Merenlender, A. Corridor Ecology: The Science and Practice of Linking Landscapes for Biodiversity Conservation; Island Press: Washington, DC, USA, 2006. [Google Scholar]
  3. Metzger, J.P. Landscape ecology: Perspectives based on the 2007 IALE world congress. Landsc. Ecol. 2008, 23, 501–504. [Google Scholar] [CrossRef]
  4. LaRue, M.A.; Nielsen, C.K. Modelling potential dispersal corridors for cougars in midwestern North America using least-cost path methods. Ecol. Model. 2008, 212, 372–381. [Google Scholar] [CrossRef]
  5. Nagendra, H.; Pareeth, S.; Ghate, R. People within parks—Forest villages, land-cover change and landscape fragmentation in the Tadoba Andhari Tiger Reserve, India. Appl. Geogr. 2006, 26, 96–112. [Google Scholar] [CrossRef]
  6. Forman, R.; Godron, M. Landscape Ecology; John Wiley & Sons: New York, NY, USA, 1986; p. 619. [Google Scholar]
  7. Thornton, M.; Atkinson, P.; Holland, D. Sub-pixel mapping of rural land cover objects from fine spatial resolution satellite sensor imagery using super-resolution pixel-swapping. Int. J. Remote Sens. 2006, 27, 473–491. [Google Scholar] [CrossRef]
  8. Suter, W.; Bollmann, K.; Holderegger, R. Landscape permeability: From individual dispersal to population persistence. In A Changing World; Springer: Berlin/Heidelberg, Germany, 2007; pp. 157–174. [Google Scholar]
  9. Frazier, A.; Wang, L. Characterizing spatial patterns of invasive species using sub-pixel classifications. Remote Sens. Environ. 2011, 115, 1997–2007. [Google Scholar] [CrossRef]
  10. Lindenmayer, D.B.; Fischer, J. Habitat Fragmentation and Landscape Change: An Ecological and Conservation Synthesis; Island Press: Washington, DC, USA, 2006. [Google Scholar]
  11. Shao, G.; Wu, J. On the accuracy of landscape pattern analysis using remote sensing data. Landsc. Ecol. 2008, 23, 505–511. [Google Scholar] [CrossRef]
  12. Malenovsky, Z.; Rott, H.; Cihlar, J.; Schaepman, M.E.; Garcia-Santos, G.; Fernandes, R.; Berger, M. Sentinels for science: Potential of Sentinel-1, -2, and -3 missions for scientific observations of ocean, cryosphere, and land. Remote Sens. Environ. 2012, 120, 91–101. [Google Scholar] [CrossRef]
  13. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, A.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  14. Du, Y.; Zhang, Y.; Ling, F.; Wang, Q.; Li, W.; Li, X. Water bodies’ mapping from sentinel-2 imagery with modified normalized difference water index at 10-m spatial resolution produced by sharpening the SWIR band. Remote Sens. 2016, 8, 354. [Google Scholar] [CrossRef] [Green Version]
  15. Pesaresi, M.; Corbane, C.; Julea, A.; Florczyk, A.J.; Syrris, V.; Soille, P. Assessment of the added-value of sentinel-2 for detecting built-up areas. Remote Sens. 2016, 8, 299. [Google Scholar] [CrossRef] [Green Version]
  16. Immitzer, M.; Vuolo, F.; Atzberger, C. First experience with sentinel-2 data for crop and tree species classifications in Central Europe. Remote Sens. 2016, 8, 166. [Google Scholar] [CrossRef]
  17. Eitel, J.U.; Vierling, L.A.; Litvak, M.E.; Long, D.S.; Schulthess, U.; Ager, A.A.; Krofcheck, D.J.; Stoscheck, L. Broadband, red-edge information from satellites improves early stress detection in a New Mexico conifer woodland. Remote Sens. Environ. 2011, 115, 3640–3646. [Google Scholar] [CrossRef]
  18. Schuster, C.; Förster, M.; Kleinschmit, B. Testing the red edge channel for improving land-use classifications based on high-resolution multi-spectral satellite data. Int. J. Remote Sens. 2012, 33, 5583–5599. [Google Scholar] [CrossRef]
  19. Zarco-Tejada, P.J.; Miller, J.R. Land cover mapping at BOREAS using red edge spectral parameters from CASI imagery. J. Geophys. Res. Atmos. 1999, 104, 27921–27933. [Google Scholar] [CrossRef]
  20. Verrelst, J.; Muñoz, J.; Alonso, L.; Delegido, J.; Rivera, J.P.; Camps-Valls, G.; Moreno, J. Machine learning regression algorithms for biophysical parameter retrieval: Opportunities for Sentinel-2 and-3. Remote Sens. Environ. 2012, 118, 127–139. [Google Scholar] [CrossRef]
  21. Delegido, J.; Verrelst, J.; Alonso, L.; Moreno, J. Evaluation of sentinel-2 red-edge bands for empirical estimation of green LAI and chlorophyll content. Sensors 2011, 11, 7063–7081. [Google Scholar] [CrossRef] [PubMed]
  22. Clevers, J.G.; Gitelson, A.A. Remote estimation of crop and grass chlorophyll and nitrogen content using red-edge bands on Sentinel-2 and-3. Int. J. Appl. Earth Obs. Geoinform. 2013, 23, 344–351. [Google Scholar] [CrossRef]
  23. Sibanda, M.; Mutanga, O.; Rouget, M. Examining the potential of Sentinel-2 MSI spectral resolution in quantifying above ground biomass across different fertilizer treatments. ISPRS J. Photogramm. Remote Sens. 2015, 110, 55–65. [Google Scholar] [CrossRef]
  24. Foschi, P.G.; Smith, D.K. Detecting subpixel woody vegetation in digital imagery using two artificial intelligence approaches. Photogramm. Eng. Remote Sens. 1997, 63, 493–499. [Google Scholar]
  25. Oki, K.; Oguma, H.; Sugita, M. Subpixel classification of alder trees using multitemporal Landsat Thematic Mapper imagery. Photogramm. Eng. Remote Sens. 2002, 68, 77–82. [Google Scholar]
  26. Lechner, A.M.; Stein, A.; Jones, S.D.; Ferwerda, J.G. Remote sensing of small and linear features: Quantifying the effects of patch size and length, grid position and detectability on land cover mapping. Remote Sens. Environ. 2009, 113, 2194–2204. [Google Scholar] [CrossRef]
  27. Boucher, A.; Boucher, A. Downscaling of Satellite Remote Sensing Data: Application to Land Cover Mapping; Stanford University: Stanford, CA, USA, 2007; p. 143. [Google Scholar]
  28. Congalton, R.G.; Birch, K.; Jones, R.; Schriever, J. Evaluating remotely sensed techniques for mapping riparian vegetation. Comput. Electron. Agric. 2002, 37, 113–126. [Google Scholar] [CrossRef]
  29. Lausch, A.; Herzog, F. Applicability of landscape metrics for the monitoring of landscape change: Issues of scale, resolution and interpretability. Ecol. Indic. 2002, 2, 3–15. [Google Scholar] [CrossRef]
  30. Jensen, J.R.; Cowen, D.C. Remote sensing of urban/suburban infrastructure and socio-economic attributes. Photogramm. Eng. Remote Sens. 1999, 65, 611–622. [Google Scholar]
  31. Lacaux, J.; Tourre, Y.; Vignolles, C.; Ndione, J.; Lafaye, M. Classification of ponds from high-spatial resolution remote sensing: Application to Rift Valley Fever epidemics in Senegal. Remote Sens. Environ. 2007, 106, 66–74. [Google Scholar] [CrossRef]
  32. Townsend, P.A.; Lookingbill, T.R.; Kingdon, C.C.; Gardner, R.H. Spatial pattern analysis for monitoring protected areas. Remote Sens. Environ. 2009, 113, 1410–1420. [Google Scholar] [CrossRef]
  33. Knight, E.J.; Kvaran, G. Landsat-8 operational land imager design, characterization and performance. Remote Sens. 2014, 6, 10286–10305. [Google Scholar]
  34. James, S.; Michael, C.; Kenton, L. Landsat 8 operational land imager on-orbit geometric calibration and performance. Remote Sens. 2014, 6, 11127–11152. [Google Scholar]
  35. Ron, M.; Julia, B.; Raviv, L.; Brian, M.; Esad, M.; Lawrence, O.; Pat, S.; Kelly, V. Landsat-8 operational land imager (OLI) radiometric performance on-orbit. Remote Sens. 2015, 7, 2208–2237. [Google Scholar]
  36. De Lussy, F.; Greslou, D.; Dechoz, C.; Amberg, V.; Delvit, J.M.; Lebegue, L.; Blanchet, G.; Fourest, S. Pleiades HR in flight geometrical calibration: Location and mapping of the focal plane. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 519–523. [Google Scholar] [CrossRef]
  37. ESA Communications. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. 2012. Available online: https://sentinels.copernicus.eu/documents/247904/349490/S2_SP-1322_2.pdf (accessed on 22 May 2016).
  38. Price, J.C. Spectral band selection for visible-near infrared remote sensing: Spectral-spatial resolution tradeoffs. IEEE Trans. Geosci. Remote Sens. 1997, 35, 1277–1285. [Google Scholar]
  39. Datt, B. Visible/near infrared reflectance and chlorophyll content in Eucalyptus leaves. Int. J. Remote Sens. 1999, 20, 2741–2759. [Google Scholar]
  40. Pinty, B.; Verstraete, M.M. GEMI: A non-linear index to monitor global vegetation from satellites. Vegetatio 1992, 101, 15–20. [Google Scholar]
  41. le Maire, G.; François, C.; Dufrêne, E. Towards universal broad leaf chlorophyll indices using PROSPECT simulated database and hyperspectral reflectance measurements. Remote Sens. Environ. 2004, 89, 1–28. [Google Scholar]
  42. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  43. Qi, J.; Chehbouni, A.; Huete, A.; Kerr, Y.; Sorooshian, S. A modified soil adjusted vegetation index. Remote Sens. Environ. 1994, 48, 119–126. [Google Scholar] [CrossRef]
  44. Rock, B.N.; Williams, D.L.; Vogelmann, J.E. Field and airborne spectral characterization of suspected acid deposition damage in red spruce (picea rubens) from vermont. In Proceedings of the 11th International Symposium—Machine Processing of Remotely Sensed Data, West Lafayette, IN, USA, 25–27 June 1985; pp. 71–81.
  45. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar]
  46. Richardson, A.J.; Wiegand, C.L. Distinguishing vegetation from soil background information. Photogramm. Eng. Remote Sens. 1977, 43, 1541–1552. [Google Scholar]
  47. Chen, P.F.; Tremblay, N.; Wang, J.H.; Vigneault, P.; Huang, W.J.; Li, B.G. New index for crop canopy fresh biomass estimation. Guang Pu Xue Yu Guang Pu Fen Xi/Spectrosc. Spectr. Anal. 2010, 30, 512–517. [Google Scholar]
  48. Huete, A. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  49. Blackburn, G.A. Quantifying chlorophylls and caroteniods at leaf and canopy scales. Remote Sens. Environ. 1998, 66, 273–285. [Google Scholar]
  50. Baret, F.; Guyot, G.; Major, D. TSAVI: A vegetation index which minimizes soil brightness effects on LAI and APAR estimation. In Proceedings of the 12th Canadian Symposium on Remote Sensing Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 10–14 July 1989; Volume 3, pp. 1355–1358.
  51. Clevers, J. The derivation of a simplified reflectance model for the estimation of leaf area index. Remote Sens. Environ. 1988, 25, 53–69. [Google Scholar] [CrossRef]
  52. Gao, B.C. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266. [Google Scholar] [CrossRef]
  53. Mcfeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  54. Wulf, H.; Stuhler, S. Sentinel-2: Land Cover, Preliminary User Feedback on Sentinel-2A Data. In Proceedings of the Sentinel-2A Expert Users Technical Meeting, Frascati, Italy, 29–30 September 2015.
  55. Van Deventer, A.P.; Ward, A.D.; Gowda, P.M.; Lyon, J.G. Using thematic mapper data to identify contrasting soil plains and tillage practices. Photogramm. Eng. Remote Sens. 1997, 63, 87–93. [Google Scholar]
  56. Jacques, D.C.; Kergoat, L.; Hiernaux, P.; Mougin, E.; Defourny, P. Monitoring dry vegetation masses in semi-arid areas with MODIS SWIR bands. Remote Sens. Environ. 2014, 153, 40–49. [Google Scholar] [CrossRef]
  57. Lichtenthaler, H.; Lang, M.; Sowinska, M.; Heisel, F.; Miehé, J. Detection of Vegetation Stress Via a New High Resolution Fluorescence Imaging System. J. Plant Physiol. 1996, 148, 599–612. [Google Scholar] [CrossRef]
  58. Shahi, K.; Shafri, H.Z.; Taherzadeh, E.; Mansor, S.; Muniandy, R. A novel spectral index to automatically extract road networks from WorldView-2 satellite imagery. Egypt. J. Remote Sens. Space Sci. 2015, 18, 27–33. [Google Scholar] [CrossRef]
  59. Institut Géographique National. Topomapviewer; Institut Géographique National: Brussels, Belgium, 2016. [Google Scholar]
  60. Radoux, J.; Defourny, P. Automated image-to-map discrepancy detection using iterative trimming. Photogramm. Eng. Remote Sens. 2010, 76, 173–181. [Google Scholar] [CrossRef]
  61. Worton, B. Using Monte Carlo simulation to evaluate kernel-based home range estimators. J. Wildl. Manag. 1995, 59, 794–800. [Google Scholar] [CrossRef]
  62. Jones, M.C.; Marron, J.S.; Sheather, S.J. A brief survey of bandwidth selection for density estimation. J. Am. Stat. Assoc. 1996, 91, 401–407. [Google Scholar] [CrossRef]
  63. Schowengerdt, A.R. Remote Sensing: Models and Methods for Image Processing; Elsevier Inc.: Amsterdam, The Netherlands, 2007; pp. 300–304. [Google Scholar]
  64. Joseph, G. How well do we understand Earth observation electro-optical sensor parameters? ISPRS J. Photogramm. Remote Sens. 2000, 55, 9–12. [Google Scholar] [CrossRef]
  65. Pagnutti, M.; Blonski, S.; Cramer, M.; Helder, D.; Holekamp, K.; Honkavaara, E.; Ryan, R. Targets, methods, and sites for assessing the in-flight spatial resolution of electro-optical data products. Can. J. Remote Sens. 2010, 36, 583–601. [Google Scholar] [CrossRef]
  66. Schowengerdt, R.A.; Archwamety, C.; Wrigley, R. Landsat thematic mapper image-derived MTF. Photogramm. Eng. Remote Sens. 1985, 51, 1395–1406. [Google Scholar]
  67. Ruiz, C.P.; Lopez, F.A. Restoring SPOT images using PSF-derived deconvolution filters. Int. J. Remote Sens. 2002, 23, 2379–2391. [Google Scholar] [CrossRef]
  68. Campagnolo, M.L.; Montano, E.L. Estimation of effective resolution for daily MODIS gridded surface reflectance products. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5622–5632. [Google Scholar] [CrossRef]
  69. Wenny, B.N.; Helder, D.; Hong, J.; Leigh, L.; Thome, K.J.; Reuter, D. Pre-and post-launch spatial quality of the Landsat 8 Thermal Infrared Sensor. Remote Sens. 2015, 7, 1962–1980. [Google Scholar] [CrossRef]
  70. Helder, D.; Choi, T.; Rangaswamy, M. In-flight characterization of spatial quality using point spread functions. In Proceedings of the International Workshop on Radiometric and Geometric Calibration, Gulfport, MS, USA, 2–5 December 2003; pp. 151–170.
  71. James, R.I.; John, L.D.; Julia, A.B. The next Landsat satellite: The Landsat Data Continuity Mission. Remote Sens. Environ. 2012, 122, 11–21. [Google Scholar]
  72. Fletcher, K. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services; Technical Report; European Space Agency: Noordwijk, The Netherlands, 2012. [Google Scholar]
  73. Fedorov, A.A.; Barbosa, S.P.; Berberan-Santos, M.N. Radiation propagation time broadening of the instrument response function in time-resolved fluorescence spectroscopy. Chem. Phys. Lett. 2006, 421, 157–160. [Google Scholar] [CrossRef]
  74. Cartosig. La Carte d’Occupation du Sol de Wallonie (COSW)—Version 2_07; Direction Générale opérationnelle de l’agriculture, des ressources naturelles et de l’environnment: Namur, Belgium, 2010. [Google Scholar]
  75. Telespazio VEGA Deutschland GmbH. Sentinel-2 MSI—Level-2A Prototype Processor Installation and User Manual. 2015. Available online: http://s2tbx.telespazio-vega.de/sen2cor/sen2cor-sum-2.0.pdf (accessed on 6 June 2016).
  76. Vermote, E.; Justice, C.; Claverie, M.; Franch, B. Preliminary analysis of the performance of the Landsat 8/OLI land surface reflectance product. Remote Sens. Environ. 2016, in press. [Google Scholar] [CrossRef]
  77. Hagolle, O.; Huc, M.; Villa Pascual, D.; Dedieu, G. A multi-temporal and multi-spectral method to estimate aerosol optical thickness over land, for the atmospheric correction of FormoSat-2, LandSat, VENμS and sentinel-2 images. Remote Sens. 2015, 7, 2668–2691. [Google Scholar] [CrossRef]
  78. Helder, D.; Choi, J.; Anderson, C. On-orbit modulation transfer function (MTF) measurements for IKONOS and QuickBird. In Proceedings of the JACIE 2006 Civil Commercial Imagery Evaluation Workshop, Brookings, SD, USA, 13 March 2006; pp. 14–16.
  79. Wardlow, B.D.; Egbert, S.L. Large-area crop mapping using time-series MODIS 250 m NDVI data: An assessment for the US Central Great Plains. Remote Sens. Environ. 2008, 112, 1096–1116. [Google Scholar] [CrossRef]
  80. Foody, G. Fuzzy modelling of vegetation from remotely sensed imagery. Ecol. Model. 1996, 85, 3–12. [Google Scholar] [CrossRef]
  81. Wickham, J.D.; O’neill, R.V.; Riitters, K.H.; Wade, T.G.; Jones, K.B. Sensitivity of selected landscape pattern metrics to land-cover misclassification and differences in land-cover composition. Photogramm. Eng. Remote Sens. 1997, 63, 397–401. [Google Scholar]
  82. Arnot, C.; Fisher, P.F.; Wadsworth, R.; Wellens, J. Landscape metrics with ecotones: Pattern under uncertainty. Landsc. Ecol. 2004, 19, 181–195. [Google Scholar] [CrossRef]
  83. Gallego, F.J. Remote sensing and land cover area estimation. Int. J. Remote Sens. 2004, 25, 3019–3047. [Google Scholar] [CrossRef]
  84. Radoux, J.; Defourny, P. A quantitative assessment of boundaries in automated forest stand delineation using very high resolution imagery. Remote Sens. Environ. 2007, 110, 468–475. [Google Scholar] [CrossRef]
  85. Service Public de Wallonie Direction Générale Opérationnelle de l’Agriculture, des Ressources Naturelles et de l’Environnement; Département des Aides; Direction des Surfaces Agricoles. Les Subventions Agro-Environnementales, Vade-Mecum. 2012. Available online: http://agriculture.wallonie.be/apps/spip_wolwin/IMG/pdf/Vademecum_MAE_2012_version_13_02_2012.pdf (accessed on 6 June 2016).
Figure 1. Flowchart of the three-step methodology to assess the potential of a sensor for sub-decametric feature detection.
Figure 2. Average surface reflectance spectral signatures of the nine considered classes derived from the Sentinel-2 image acquired on 1 October 2015 over Belgium (see Section 3).
Figure 3. The overlap area (S) between two normal probability density functions (in blue).
Figure 4. Example of separability analysis. The error ribbons represent the 90% confidence interval around the distribution means of a background/foreground mixture. The blue zone corresponds to the mixtures that are separable from the pure background distribution, represented by the red dotted lines. With large background fractions, the overlap S (black line on the bottom graph) between the mixture and the pure background exceeds 10% and the distributions are considered non-separable (red zone), corresponding to a classification error larger than 5%. The black dashed lines indicate where the 10% separability limit with the background class is reached.
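To make the procedure of Figure 4 concrete, the sketch below sweeps the background fraction of a linearly mixed pixel and reports the point at which the overlap with the pure background distribution exceeds 10%. The class means, standard deviations and the way the two variabilities are combined are illustrative assumptions, not the statistics actually derived in this study.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch of the separability analysis of Figure 4 (illustrative values only).
def overlap(mu1, sd1, mu2, sd2, grid=np.linspace(-1.0, 1.0, 20001)):
    """Overlap area between two normal densities, integrated numerically."""
    p1, p2 = norm.pdf(grid, mu1, sd1), norm.pdf(grid, mu2, sd2)
    return float(np.sum(np.minimum(p1, p2)) * (grid[1] - grid[0]))

mu_fg, sd_fg = 0.20, 0.04          # hypothetical pure-foreground index distribution
mu_bg, sd_bg = 0.70, 0.05          # hypothetical pure-background index distribution

for f_bg in np.linspace(0.0, 1.0, 101):                     # background fraction of the mixed pixel
    mu_mix = f_bg * mu_bg + (1.0 - f_bg) * mu_fg             # linear spectral mixture
    sd_mix = np.hypot(f_bg * sd_bg, (1.0 - f_bg) * sd_fg)    # one simple way to combine variability
    if overlap(mu_mix, sd_mix, mu_bg, sd_bg) > 0.10:         # >10% overlap, i.e. >5% error
        print("mixtures stop being separable at a background fraction of ~", round(f_bg, 2))
        break
```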
Figure 5. Shapes of interest: (a) continuous linear objects crossing the pixel at its center (LC); (b) continuous linear objects centered on the pixel border (LB); (c) compact object centered in the pixel (CO).
Figure 6. (a) Sentinel-2 10 m PSF; (b) example of simulated 8 m wide LC object; (c) bi-convolution of the PSF on the simulated LC object. Orange area shows the pixel footprint. Blue areas represent the footprints of the 8 adjacent pixels.
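The sketch below reproduces the idea of Figure 6 under simplifying assumptions: the PSF is taken as an isotropic Gaussian (the FWHM of roughly 22 m matches the red-band estimate in Table 5), an 8 m wide linear object is rasterised at 0.1 m, and the PSF-weighted foreground fraction of the central 10 m pixel is read out. This is an illustration, not the exact bi-convolution code used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative sketch of PSF weighting for a sub-pixel linear object (see Figure 6).
step = 0.1                                     # sub-cell size [m]
size = 30.0                                    # simulated 3 x 3 block of 10 m pixels [m]
n = int(size / step)
x = (np.arange(n) + 0.5) * step                # sub-cell centre coordinates [m]

fwhm_m = 22.0                                  # assumed Gaussian PSF FWHM [m] (~Table 5, red band)
sigma_cells = fwhm_m / 2.355 / step            # FWHM = 2.355 * sigma for a Gaussian

scene = np.zeros((n, n))
scene[:, np.abs(x - size / 2.0) <= 4.0] = 1.0  # 8 m wide linear object through the centre (LC case)

blurred = gaussian_filter(scene, sigma=sigma_cells)      # convolve the scene with the PSF
centre = slice(int(10.0 / step), int(20.0 / step))       # footprint of the central 10 m pixel
print(round(float(blurred[centre, centre].mean()), 2))   # PSF-weighted foreground fraction
```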
Figure 7. Snapshots of an orthophoto (25 cm), Sentinel-2A (10 m), SPOT-5 (Take 5) (10 m) and Landsat-8 (30 m) images of a small pond (30 m × 30 m) surrounded by permanent grassland in the South of Belgium. The four images are presented using the same standard false color band combination (R: NIRwide, G: Red, B: Green).
Figure 8. Discriminating power of the spectral bands and indices for the foreground classes, expressed as the limit proportion of background. Each pair of classes is ordered as foreground - background, and the limit proportion of background is the largest one that still allows a misclassification error of 5% between the foreground and background classes. All results are based on Sentinel-2 data. The left matrix includes all the bands and indices common to the three sensors, the central matrix includes bands and indices that are not available on SPOT-5, and the right matrix is based on bands and indices that are specific to Sentinel-2.
Figure 9. Discriminating power difference between SPOT-5 and Sentinel-2 (S2) for each index and foreground/background pair. Blue tones indicate a better discrimination with SPOT-5 and red ones show the better performance of Sentinel-2. Crosses indicate that the discrimination threshold was not achieved by one of the sensors.
Figure 10. Discriminating power difference between Landsat-8 (L8) and Sentinel-2 (S2) for each index and foreground/background pair. Blue tones indicate a better discrimination with Landsat-8 and red ones show the better performance of Sentinel-2. Crosses indicate that the discrimination threshold was not achieved by one of the sensors.
Figure 11. Probability of classification error for a pure foreground pixel with Sentinel-2’s indices and bands when classes are equiprobable two-by-two. The left matrix includes all the bands and indices which are common to the 3 sensors, the central matrix includes bands and indices that are not present in SPOT-5 and the right matrix is based on bands and indices that are specific to Sentinel-2.
Table 1. Name, spectral range (nm) and spatial resolution (m) of the corresponding Sentinel-2 MSI, Landsat-8 OLI and SPOT-5 HRG bands. The asterisk indicates bands that are not assessed in this study.
Sentinel-2 MSI | Landsat-8 OLI | SPOT-5 HRG | Name
Band (res. [m]), Range [nm] | Band (res. [m]), Range [nm] | Band (res. [m]), Range [nm] |
B1 * (60), 443 ± 10 | B1 (30), 440 ± 10 | — | Aerosol
B2 (10), 490 ± 32.5 | B2 (30), 480 ± 30 | — | Blue
B3 (10), 560 ± 17.5 | B3 (30), 560 ± 30 | B1 (10), 545 ± 45 | Green
B4 (10), 665 ± 15 | B4 (30), 650 ± 20 | B2 (10), 645 ± 35 | Red
— | B8 * (15), 590 ± 45 | PAN (5), 595 ± 115 | PAN
B5 (20), 705 ± 7.5 | — | — | Red-edge 1
B6 (20), 740 ± 7.5 | — | — | Red-edge 2
B7 (20), 783 ± 10 | — | — | Red-edge 3
B8 (10), 842 ± 57.5 | — | B3 (10), 835 ± 55 | NIR wide
B8A (20), 865 ± 10 | B5 (30), 865 ± 15 | — | NIR narrow
B9 * (60), 945 ± 10 | — | — | Water Vapor
B10 * (60), 1375 ± 15 | B9 (30), 1370 ± 10 | — | Cirrus
B11 (20), 1610 ± 45 | B6 (30), 1610 ± 40 | B4 (20), 1665 ± 85 | SWIR 1
B12 (20), 2190 ± 90 | B7 (30), 2200 ± 90 | — | SWIR 2
— | B10 * (100), 10,895 ± 295 | — | Thermal 1
— | B11 * (100), 12,005 ± 505 | — | Thermal 2
Table 2. Indices considered in the separability analysis.
Indices | Name | Formula | Reference
Vegetation discrimination
Chlogreen | Chlorophyll Green index | ρNIRnarrow / (ρGreen + ρRededge1) | [39]
GEMI | Global Environment Monitoring Vegetation Index | η (1 − 0.25 η) − (ρRed − 0.125) / (1 − ρRed), with η = [2 (ρNIRnarrow² − ρRed²) + 1.5 ρNIRnarrow + 0.5 ρRed] / (ρNIRnarrow + ρRed + 0.5) | [40]
GI | Greenness Index | ρGreen / ρRed | [41]
gNDVI | Green normalized difference vegetation index | (ρNIRnarrow − ρGreen) / (ρNIRnarrow + ρGreen) | [42]
MSAVI | Modified soil adjusted vegetation index | 1 − 2 a × NDVI × WDVI | [43]
MSI | Moisture stress index | ρSWIR1 / ρNIRnarrow | [44]
ND RededgeSWIR | Normalized difference of Red-edge 2 and SWIR 2 | (ρRededge2 − ρSWIR2) / (ρRededge2 + ρSWIR2) | —
NDVI | Normalized difference vegetation index | (ρNIRnarrow − ρRed) / (ρNIRnarrow + ρRed) | [45]
NDVIre | Red-edge normalized difference vegetation index | (ρNIRnarrow − ρRededge1) / (ρNIRnarrow + ρRededge1) | [42]
PVI | Perpendicular vegetation index | (ρNIRnarrow − a ρRed − b) / √(a² + 1) | [46]
RededgePeakArea | Red-edge peak area | ρRed + ρRededge1 + ρRededge2 + ρRededge3 + ρNIRnarrow | —
RTVIcore | Red-edge Triangular Vegetation Index | 100 (ρNIRnarrow − ρRededge1) − 10 (ρNIRnarrow − ρGreen) | [47]
SAVI | Soil Adjusted Vegetation Index | (ρNIRnarrow − ρRed) / (ρNIRnarrow + ρRed + L) × (1 + L), with L = 0.5 | [48]
SR NIRnarrowBlue | Simple ratio NIR narrow and Blue | ρNIRnarrow / ρBlue | [49]
SR NIRnarrowGreen | Simple ratio NIR narrow and Green | ρNIRnarrow / ρGreen | [41]
SR NIRnarrowRed | Simple ratio NIR narrow and Red | ρNIRnarrow / ρRed | [49]
TSAVI | Transformed Soil Adjusted Vegetation Index | a (ρNIRnarrow − a ρRed − b) / (ρNIRnarrow + ρRed − a b + 0.08 (1 + a²)) | [50]
WDVI | Weighted Difference Vegetation Index | ρNIRnarrow − a ρRed | [51]
Water detection
NDWI1 | Normalized Difference Water Index 1 | (ρNIRnarrow − ρSWIR1) / (ρNIRnarrow + ρSWIR1) | [52]
NDWI2 | Normalized Difference Water Index 2 | (ρGreen − ρNIRnarrow) / (ρGreen + ρNIRnarrow) | [53]
NHI | Normalized Humidity Index | (ρSWIR1 − ρGreen) / (ρSWIR1 + ρGreen) | [31]
Canopy properties
LAnthoC | Leaf Anthocyanin Content | ρRededge3 / (ρGreen − ρRededge1) | [54]
LCaroC | Leaf Carotenoid Content | ρRededge3 / (ρBlue − ρRededge1) | [54]
LChloC | Leaf Chlorophyll Content | ρRededge3 / ρRededge1 | [54]
Dry vegetation
NDTI | Normalized Difference Tillage Index | (ρSWIR1 − ρSWIR2) / (ρSWIR1 + ρSWIR2) | [55]
RedSWIR1 | Band difference | ρRed − ρSWIR1 | [56]
STI | Soil Tillage Index | ρSWIR1 / ρSWIR2 | [55]
Vegetation (with Red-edge)
SR BlueRededge1 | Simple Blue and Red-edge 1 ratio | ρBlue / ρRededge1 | [41]
SR BlueRededge2 | Simple Blue and Red-edge 2 ratio | ρBlue / ρRededge2 | [57]
SR BlueRededge3 | Simple Blue and Red-edge 3 ratio | ρBlue / ρRededge3 | derived from [41,57]
SR NIRnarrowRededge1 | Simple NIR and Red-edge 1 ratio | ρNIRnarrow / ρRededge1 | [39]
SR NIRnarrowRededge2 | Simple NIR and Red-edge 2 ratio | ρNIRnarrow / ρRededge2 | derived from [39]
SR NIRnarrowRededge3 | Simple NIR and Red-edge 3 ratio | ρNIRnarrow / ρRededge3 | derived from [39]
Artificial areas
BAI | Built-up Area Index | (ρBlue − ρNIRnarrow) / (ρBlue + ρNIRnarrow) | [58]
Table 3. Number of pixels selected based on the regions of interest for each land cover class and for each sensor. The number of selected pixels varies due to different pixel sizes and sub-pixel shifts.
SatelliteBare SoilBroad-LeavedGrasslandMaizeNeedle-LeavedPastureRoadsSugar BeetWater
Sentinel-21318356140655813691131491079632
SPOT-51390356541221013841124501081630
Landsat-832836433741131261412653
Table 4. Location, sensor, acquisition date and atmospheric correction algorithm for the input images of this study.
Location | Centroid Latitude | Centroid Longitude | Sensor | Acquisition Date | Atmospheric Correction
Belgium | 50°28′21.01″N | 5°53′13.12″E | Sentinel-2 | 1 October 2015 | Sen2Cor
Belgium | 50°16′28.32″N | 6°1′50.84″E | Landsat-8 | 29 September 2015 | L8SR
Belgium | 50°36′8.86″N | 4°59′7.62″E | SPOT-5 | 23 August 2015 | MACCS
Sacramento, USA | 38°47′55.29″N | 121°47′32.87″W | Sentinel-2 | 18 September 2015 | Sen2Cor
Sacramento, USA | 38°54′0.62″N | 120°7′1.90″W | Landsat-8 | 6 September 2015 | L8SR
Sacramento, USA | 38°4′19.21″N | 121°47′8.93″W | SPOT-5 | 8 September 2015 | MACCS
Table 5. Channel properties with the corresponding full width at half maximum (FWHM) values and signal-to-noise ratio (SNR). The standard deviation (SD) provides information on the precision of the estimated FWHM.
Satellite | Band | Channel | Resolution [m] | FWHM [pixel] | SD [pixel] | FWHM [m] | SD [m] | SNR
Landsat-8 | Band 3 | Red | 30 | 1.70 | 0.18 | 51.05 | 5.48 | 21
SPOT-5 | Band 2 | Red | 10 | 2.01 | 0.20 | 20.11 | 1.99 | 57
Sentinel-2 | Band 4 | Red | 10 | 2.21 | 0.17 | 22.06 | 1.78 | 8
Sentinel-2 | Band 5 | Red-edge | 20 | 1.67 | 0.17 | 33.48 | 3.44 | 50
Sentinel-2 | Band 11 | SWIR 1 | 20 | 1.96 | 0.13 | 39.10 | 2.21 | 31
Table 6. Minimum width (m) of continuous linear objects crossing the pixel in its center (LC) or its border (LB) and square object centered in the pixel (CO) detectable by each sensor and the corresponding limit proportion (Prop). A “/” indicates that none of the tested indices using the corresponding bands were able to discriminate the pair. The smallest detection size for each pair is highlighted in bold.
Couple | Landsat-8 (30 m): Prop, LC, LB, CO | Sentinel-2 (10 m): Prop, LC, LB, CO | SPOT-5 (10 m): Prop, LC, LB, CO | Sentinel-2 Red-Edge (20 m): Prop, LC, LB, CO | Sentinel-2 SWIR (20 m): Prop, LC, LB, CO
Broadleaved - Grassland | 0.39, 37.5, 46.5, 53.5 | 0.52, 12.5, 14, 19.5 | 0.27, 19, 22.5, 25 | 0.42, 23, 29, 33.5 | 0.34, 32, 38, 44
Broadleaved - Maize | 0.60, 23, 29, 39 | 0.43, 15, 17, 22 | 0.04, 35, 40.5, 40 | / | 0.30, 34.5, 41, 46.5
Broadleaved - Pasture | 0.59, 23.5, 29.5, 40 | 0.67, 8, 9.5, 15 | 0.59, 9.5, 11, 16 | 0.64, 13.5, 17, 24 | 0.71, 12.5, 15, 24.5
Broadleaved - Sugar beet | 0.63, 21, 26.5, 37.5 | / | 0.45, 13, 15.5, 19.5 | 0.23, 34, 42.5, 44 | 0.63, 16.5, 19.5, 28.5
Grassland - Bare soil | 0.83, 9.5, 12, 23.5 | 0.88, 3, 3, 8.5 | 0.84, 3.5, 4, 8.5 | 0.93, 2.5, 3.5, 10 | 0.90, 4.5, 5.5, 14
Grassland - Maize | 0.74, 14.5, 18.5, 30 | 0.40, 16, 18.5, 23 | / | 0.51, 19, 24, 29.5 | 0.64, 16, 19, 28
Grassland - Sugar beet | 0.79, 12, 15, 26.5 | 0.43, 15, 17, 22 | / | 0.50, 19.5, 24.5, 30 | 0.78, 9.5, 11.5, 21
Pasture - Bare soil | 0.81, 10.5, 13.5, 25 | 0.89, 3, 3, 8.5 | 0.87, 3, 3.5, 8 | 0.93, 2.5, 3.5, 10 | 0.90, 4.5, 5.5, 14
Pasture - Sugar beet | 0.75, 14, 17.5, 29.5 | 0.09, 32, 36, 37.5 | / | 0.14, 42, 51.5, 51 | 0.77, 10, 12, 21.5
Roads - Bare soil | 0.52, 28, 35.5, 44.5 | 0.56, 11, 13, 18.5 | 0.64, 8, 9.5, 14.5 | 0.47, 20.5, 26, 31.5 | 0.52, 21.5, 25.5, 34
Roads - Grassland | 0.89, 6, 8, 19 | 0.89, 3, 3, 8.5 | 0.64, 8, 9.5, 14.5 | 0.89, 4, 5.5, 12.5 | 0.76, 10.5, 12.5, 22
Roads - Maize | 0.90, 5.5, 7, 18 | 0.90, 2.5, 3, 8 | 0.75, 5.5, 6.5, 11.5 | 0.88, 4.5, 5.5, 13 | 0.91, 4, 5, 13
Roads - Sugar beet | 0.91, 5, 6.5, 17 | 0.92, 2, 2.5, 7 | 0.71, 6.5, 8, 13 | 0.88, 4.5, 5.5, 13 | 0.92, 3.5, 4, 12.5
Water - Grassland | 0.74, 14.5, 18.5, 30 | 0.81, 5, 5.5, 11 | 0.62, 8.5, 10.5, 15 | 0.82, 6.5, 8.5, 16 | 0.79, 9, 11, 20.5
Water - Pasture | 0.80, 11, 14, 26 | 0.72, 7, 8, 14 | 0.74, 6, 7, 12 | 0.79, 8, 10, 17.5 | 0.83, 7.5, 9, 18.5
