Article

Targeted Grassland Monitoring at Parcel Level Using Sentinels, Street-Level Images and Field Observations

Food Security Unit, Joint Research Centre (JRC), European Commission, 21027 Ispra, Italy
*
Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(8), 1300; https://doi.org/10.3390/rs10081300
Submission received: 6 July 2018 / Revised: 31 July 2018 / Accepted: 2 August 2018 / Published: 17 August 2018
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Abstract
The introduction of high-resolution Sentinels combined with the use of high-quality digital agricultural parcel registration systems is driving the move towards at-parcel agricultural monitoring. The European Union’s Common Agricultural Policy (CAP) has introduced the concept of CAP monitoring to help simplify the management and control of farmers’ parcel declarations for area support measures. This study proposes a proof of concept of this monitoring approach, introducing and applying the concept of ‘markers’. Using Sentinel-1- and -2-derived (S1 and S2) markers, we evaluate parcels declared as grassland in the Gelderse Vallei in the Netherlands, covering more than 15,000 parcels. The satellite markers, based respectively on crop-type deep learning classification using S1 backscattering and coherence data and on detecting bare soil with S2 during the growing season, aim to identify grassland-declared parcels which (1) the marker suggests are another crop type or (2) appear to have been ploughed during the year. Subsequently, a field survey was carried out in October 2017 to target the identified parcels and to build a relevant ground-truth sample of the area. For the latter purpose, we used a high-definition camera mounted on the roof of a car to continuously sample geo-tagged digital imagery, as well as an app-based approach to identify the targeted fields. Depending on which satellite-based marker or combination of markers is used, the number of parcels identified ranged from 2.57% (marked by both the S1 and S2 markers) to 17.12% of the total of 11,773 parcels declared as grassland. After confirming with the ground-truth, parcels flagged by the combined S1 and S2 marker were robustly detected as non-grassland parcels (F-score = 0.9). In addition, the study demonstrated that street-level imagery collection could improve collection efficiency by a factor of seven compared to field visits (1411 parcels/day vs. 217 parcels/day) while keeping an overall accuracy of about 90% compared to the ground-truth. This proposed way of collecting in situ data is suitable for training and validating high-resolution remote sensing approaches for agricultural monitoring. Timely country-wide wall-to-wall parcel-level monitoring and targeted in-season parcel surveying will increase the efficiency and effectiveness of monitoring and implementing agricultural policies.

Graphical Abstract

1. Introduction

Grasslands are not only an intrinsic part of European agriculture, but also essential for global bio-geochemical cycling, biodiversity and climate change mitigation (e.g., soil carbon stocks), providing a range of economic (e.g., feed) and environmental services [1]. Therefore, following the 2013 reform of the European Union’s (EU) Common Agricultural Policy (CAP) to make the direct payment system more environmentally friendly with the so-called ‘greening’ rules, farmers can receive direct payments for maintaining permanent grasslands. For the administration and control of these farmer support requests, compliance monitoring of grassland management is a mandatory task of the EU Member States’ (MS) CAP paying agencies.
New opportunities for grassland monitoring are provided by the EU Copernicus program and the observations by the Sentinel-1 and -2 satellites. The Synthetic Aperture Radar (SAR) Sentinel-1 and multi-spectral Sentinel-2 sensors are acquiring high resolution (10 m) observations with high revisit frequencies. The combination of the Sentinels’ observations, which are freely accessible, and the increasingly accessible crop-type data at the parcel level enables new possibilities for EU crop monitoring. In the remainder of this Introduction, we summarize the importance of grasslands, discuss the state-of-the-art of high-resolution remote sensing (specifically with respect to the Copernicus Sentinels and grasslands) and the availability of open-access parcel-level crop type declarations in the EU, highlight the continued need for in situ data and field observations, and discuss recent opportunities provided by street-level imagery.

1.1. The Importance of Grasslands

Grasslands are key biotopes in climate change mitigation as they store approximately 34% of the global stock of carbon in terrestrial ecosystems [2]. Indeed, enhancing soil carbon sequestration is seen as having the highest greenhouse gas mitigation potential in the agriculture sector [3], and the Intergovernmental Panel on Climate Change (IPCC) provides methods for greenhouse gas inventories for grassland [4]. Unlike forests, where vegetation is the primary source of carbon storage, most of the grassland carbon stock is in the soil. Cultivation and the effect of urbanization on grasslands, as well as other modifications such as desertification and livestock grazing, can result in significant carbon emissions (e.g., [5]). Grasslands provide habitats for plants and animals, soil microfauna and large mammals alike, and support large numbers of wild herbivores that depend on the biotope for breeding, migratory and wintering habitats, sharing the land with domesticated herds. Grasslands also support large numbers of domesticated animals such as cattle, sheep and goat herds, horses and water buffalo, which are sources of meat, dairy products, wool and leather products. Grasslands in Europe have been declining since the 1960s. Given their importance, policies were developed to counteract this decline. The permanent pasture ratio was introduced as part of cross-compliance in 2004, requiring MS to keep the decline of the ratio of permanent grassland area to total agricultural area below 10%. The maintenance of permanent grassland, as part of the greening, strengthened this requirement to 5% and required identifying the most environmentally-sensitive grasslands and protecting them from ploughing [6].

1.2. High-Resolution Remote Sensing Grasslands Monitoring

Grasslands vary greatly in their degree and intensity of management, from extensively-managed rangelands to intensively-managed pasture and hay land that are fertilized and irrigated. The huge variety in grassland typology poses a challenge for any remote-sensing-based classification scheme.
Fortunately, the Copernicus program has reached operational status, providing full, free and open access to Sentinel satellite products for any user. In constellation, Sentinel-1 (S1) and Sentinel-2 (S2) have revisit capacities of six days and five days, respectively, when combining the identical A and B platforms. Data are provided at spatial resolutions as fine as 10 m, which is ideal for monitoring fields at the parcel level.
Crop type mapping with Sentinels has already been demonstrated in various recent studies, such as Belgiu and Csillik [7], Immitzer et al. [8] and Defourny et al. [9], using vegetation indices or reflectance. To make sense of the large Sentinel data volumes, machine learning algorithms can improve the accuracy of crop mapping. In this context, deep learning has been demonstrated to be useful for many remote sensing applications (see [10] for a review). Kussul et al. [11] demonstrated the potential of deep learning for crop type classification with S1 time series in Ukraine.
In addition to grassland mapping per se, several studies demonstrate the potential for detailed grassland characterization with Sentinel observations, especially in the context of activity monitoring. For example, recent work has demonstrated the use of coherence between two S1 SAR scenes for detecting mowing events in Estonia [12], as well as for grassland cutting dates in Germany [13]. High coherence values due to backscattering from the ground have been linked to ploughed bare fields and low vegetation height [14,15]. S2 has also recently demonstrated its potential to monitor grassland mowing in Germany [16]. Veloso et al. [17] analyzed the temporal trajectory of remote sensing data for a variety of winter and summer crops and highlighted the value of S1 data, particularly the VH/VV ratio, for crop monitoring.
To conclude, Sentinels allow the timely monitoring of grasslands using high-resolution remote sensing, but methods still have to be developed and adapted for this purpose.

1.3. CAP Policy Context

In the context of the EU’s CAP direct payment management and control measures, discussions are ongoing on simplification, in particular the substitution of the On The Spot Checks (OTSC) by a system of monitoring. OTSC (for OTSC guidelines, see [18]) are based on a limited sample (5%) of farmer declarations that are checked with either a pre-defined selection of several high-resolution images or via field inspection and measurement. Monitoring refers to ‘a procedure based on regular and systematic observation, tracking and assessment of the fulfillment of eligibility conditions and agricultural activities over a period of time, which involves, where and when necessary, appropriate follow-up action’ [19]. To this end, an implementing regulation (Regulation (EU) No. 2018/746 [20]) was recently adopted (May 2018), allowing MS to switch from OTSC to monitoring for their controls starting from 2018. Several MS are considering switching from OTSC to monitoring, but no operational official monitoring has yet been carried out. Therefore, providing scientific and technical guidance for the implementation of the monitoring and marker approach is essential.
Monitoring applies to the full population of farmer declarations and relies primarily on continuous S1 and S2 time series. Basically, the monitoring approach determines whether the time series characteristics (i.e., behavior) of an agricultural parcel are in line with the crop and activities declared under a particular scheme for which aid is requested. For example, basic area payments will require evidence of an agricultural activity; crop diversification requires determination of the crop types; and greening payments may require detection of crop management activities (e.g., mowing). These concepts are captured in the definition of a marker, which is ‘a unique collection (combination) of property values of a data signal that evidence the presence of a particular continuous state or a change of state of the land phenomenon’ [19], where land phenomena reflect the characteristic phenology of a crop or evidence a mechanical practice (e.g., ploughing, mowing, harvesting). Where appropriate, the absence of a marker, or the assignment of a marker relating to another crop or activity than the declared one, will trigger follow-up action by the management and control authorities.

1.4. Open Access Parcel-Level Crop Type Declarations

In the EU, the Member States’ Land Parcel Identification Systems (LPIS), which provide detailed digital geometries of agricultural reference parcels to aid the management of the CAP, are being released as open access data in an increasing number of regions. LPIS contain up to several million parcel boundary vectors, depending on the Member State, delineating agricultural land eligible for CAP support. Farmers use the LPIS to declare their cropping practices, including specific environmental measures where relevant, in an annual aid application. Member State administrations maintain LPIS and carry out management and control activities on the basis of the LPIS, often with the use of remote sensing [21]. The level of detail available in the various open access LPIS implementations varies across Member States. For instance, in the Netherlands [22], Austria [23] and Denmark [24], the actual parcel declarations are made public annually, whereas, for example, France and the Czech Republic publish only the LPIS reference vectors. In some Member States, LPIS are managed at a regional level (e.g., Spain, Germany, Italy). The trend to release LPIS as open access data follows the realization that they have significant potential in supporting a range of agronomic and environmental use cases in scientific, public and private contexts.

1.5. In Situ Data Needs

The combined use of detailed reference parcel databases with deep Sentinel data stacks facilitates near-real-time information gathering of markers for large sets of agricultural parcels. A subsequent challenge is to collect in situ data to train and validate the information extraction process. Declared parcel data may not be available until late in the growing season and may not, a priori, be considered as the ground-truth. However, traditional in situ ground-truth collection lacks the scale and possibility for automated integration into big data analyses and is prone to sampling errors [25]. Moreover, it requires a huge organizational effort, making it difficult to achieve periodic resampling to assess changes in dynamic agricultural phenomena over time.

1.6. Street-Level Imagery

Citizens without professional expertise in remote sensing have become actively involved in the creation and analysis of large datasets, which is known as crowdsourcing. The rise of geospatial user-created information has been of great benefit to the collection of large quantities of reference data for land cover classification. Crowdsourcing applications such as Geo-Wiki [26] have demonstrated the potential to collect massive and high-quality in situ data.
Recently, applications based on (crowdsourced) geo-tagged street view imagery have been developed in the domains of, e.g., political preferences [27], income brackets’ prediction [28], crime rate prediction [29], electric network [30], transport [31], urban risk assessment [32], urban greenery and urban trees [33,34,35], and illustrate the enormous potential of such datasets.
Different platforms and business models coexist to host these growing archives. The best-known repository is the street-level imagery collected by Google Street View (GSV) through image-acquiring devices mounted onboard its dedicated cars, bikes and other transportation means. Its coverage is extensive, and in many cases, each location has been captured multiple times over the years. Despite the global coverage of GSV, limitations in terms of access, contribution and date filtering through the API make it difficult to use this vast dataset for dynamic agricultural monitoring. Other platforms such as Open Street Cam, Tencent, Baidu, Mapillary, Here, Bing Streetside and Apple provide the same type of repositories, each with their specificities. Mapillary [36], a European platform, is the first to provide open access and free-of-charge detailed street photos based on crowdsourcing [37]. In addition, it is possible to filter the data via queries through the site’s API, allowing the harvesting of images for a defined time window and geographical area.
Geo-tagged street-level imagery has already been tested for collecting agricultural statistics. Wu and Li [38], for example, developed their own device combining a GPS, a video camera and a GIS analysis system to collect crop type information in China, which was also used by [39] to validate paddy rice maps. GSV was used along with remote sensing in the Global Croplands project, where the Street View Application [40] permitted the collection of information on agriculture that was then used to train and validate a 30-m Africa cropland map [41]. Thus, recent improvements in these platforms offer great potential for exploiting this type of data for agricultural monitoring.

1.7. Objectives

Given the importance of grasslands and their policy relevance, along with the developments sketched above, the overall aim of this study is to evaluate the feasibility of integrating the in-season availability of parcel-level crop type information with S1 and S2 markers. These markers are designed to identify, across a large area, grassland parcels with deviating spectral characteristics that are then targeted for inspection during a one-day field survey.
The specific objectives are to evaluate: (1) whether efficient markers for grassland monitoring can be defined from a combination of Sentinel-1 SAR and Sentinel-2 multi-spectral observations; (2) the usefulness of combining cloud processing using Google Earth Engine (GEE) and deep learning (Tensorflow) to perform large-scale parcel-level marker evaluation tasks; (3) whether the targeted monitoring approach is appropriate and able to efficiently scale across the whole area covering 15,000 parcels along with its implications for CAP monitoring; and finally, (4) whether street-level imagery can contribute to the efficiency and effectiveness of a field survey designed for the evaluation of our markers.
Computationally intensive data-driven science such as that reported here needs to be completely transparent and reproducible. Therefore, a final objective of this manuscript is to make all scripts and code used across the various software packages needed to reproduce the results publicly available on the GitHub repository https://github.com/rdandrimont/AGREE (AGRicultural Enhanced Evidence), summarized in Figure A1, Appendix A, and available in an online document as Supplementary Material.

2. Materials and Methods

2.1. Study Area

For this study, an area in the Netherlands was chosen, firstly, because a large amount of high quality geo-information is available there under an open access license, including very detailed annual agricultural parcel sets. Secondly, grassland and maize are the two most important crops in terms of cultivated area in the Netherlands (with 52.0% and 13.3%, respectively). However, crop occurrence has a distinct regional pattern which is primarily linked to varying soil types. Most grassland and (silage) maize are cultivated on sandy soils, peat and poorly-drained heavy clay soils, while most other arable crops (primarily potatoes, winter wheat, sugar beet, vegetables and onions) are cultivated on well-drained clay-rich alluvial and loamy soils. The study site is the Gelderse Vallei, which is part of the province of Gelderland, and includes 7 municipalities: Ede, Wageningen, Renkum, Barneveld, Arnhem, Putten and Nijkerk (Figure 1). The region’s area is 220,171 ha with 17,215 parcels totaling 28,116 ha, where the 4 main crops for 2017 are permanent grassland (62%), maize (16.2%), temporary grassland (10%) and natural grassland (1.4%). This range of grassland types represents a wide variation in management intensities and pressures resulting from other land use conversions such as urbanization and new road infrastructure, as well as nature conservation.

2.2. Data

2.2.1. Agricultural Parcel Database

The Netherlands’ open geo-data infrastructure exposes a large number of very high quality reference datasets to the public, using various protocols (e.g., WMS, WFS, atom downloads). For this study, the “Basisregistratie Gewaspercelen” (BRP) datasets are the most relevant [22]. BRP is derived from the Land Parcel Identification System (LPIS) [21]. The LPIS itself is maintained and updated on the basis of the digital topographic map at scale 1:10,000 and current aerial orthophotos. Applicants use the LPIS parcels to indicate the location and total area under cultivation of a particular crop and, where relevant, specific environmental or management practices eligible under support schemes. The applications are then digitized by the relevant authorities for subsequent administration and control of support payments. At the end of the season, a consolidated set is released into the public domain. BRP datasets comprise around 770,000 parcels with a total area of approximately 1.5 million hectares and have been available since 2009. In this study, we work with the BRP concept version for 2017, released on 15 July 2017 (BRP2017).

2.2.2. Sentinel-1

The S1 constellation consists of 2 identical C-band active microwave Synthetic Aperture Radar (SAR) low-Earth orbiting platforms operating at 5.405 GHz (5.5 cm wavelength, [42]), with an effective revisit of 6 days. For the global land surface, the dual polarization (VV, VH) Interferometric Wide swath (IW) mode is the default operating configuration. Level 1 products are produced both as Single Look Complex (SLC) and Ground Range-Detected (GRD) outputs and are downloadable, usually within a few hours after sensing, from the Copernicus Open Hub data access points [43]. GRD products need to be processed to calibrated, geo-coded backscattering coefficients (σ0), which can be done with the European Space Agency’s SNAP Sentinel-1 toolbox, released as open source software [44]. Each scene is subjected to the following processing steps: thermal noise removal, radiometric calibration and terrain correction. The Google Earth Engine [45] platform already collects all GRD data, runs them through the toolbox and ingests the geo-coded output, log-scaled and quantized to 16-bit unsigned integers, into the COPERNICUS/S1 catalog. In the study area, and for the considered period from 1 January 2017 to 1 August 2017, 36 descending and 34 ascending orbit acquisitions were available.

2.2.3. Sentinel-2

The Sentinel-2 MultiSpectral Instrument (MSI) Level-1C data are loaded and converted to top-of-atmosphere reflectance for the period of interest, 1 January 2017 to 1 August 2017. Over this period and the study area, which is fully contained in the Sentinel-2 reference grid tile 31UFT, 44 S2 image acquisitions are available, including cloudy observations. For the study area, S2-A was mostly used, because S2-B data only became available from 30 June 2017.

2.3. Methods

A key objective of this study is to use satellite markers to identify parcels declared as grassland that are potentially not grasslands. Thanks to the ability of S1 to retrieve data independently of atmospheric conditions, a consistent, gap-free time series is available at the parcel level, making the data particularly suitable for deep learning classification. For S2, an approach based on a normalized index for bare soil event detection provides discrete evidence of grassland conversion. Hereafter, the methodology applied to the Copernicus data is described. We have implemented our methods in code that can be deployed in Google Earth Engine (GEE), run in Python or executed as recipes in the open source Sentinel-1 toolbox. All relevant code is openly available to the public on a GitHub repository (see Appendix A for details). The processing steps for S1 backscattering are described in Section 2.3.1 (Figure 2a), the methodology applied for S1 coherence in Section 2.3.2 (Figure 2b) and the methodology applied for S2 in Section 2.3.4 (Figure 2c).

2.3.1. S1 Backscattering

The processing for S1 backscattering is summarized in Figure 2a and starts by loading the 10-m resolution S1 VV and VH GRD time series. After edge masking and conversion from dB to natural units, 7-day average σ0 temporal signatures are extracted for each parcel, internally buffered by 10 m to avoid boundary pixels. In this way, we generate a consistently-timed, equally-sized set of temporal features to use as input for the subsequent machine learning runs, independent of the number of actual S1 coverages of each individual parcel. We extract a CSV formatted table from GEE with the weekly averages for both the VV and VH polarized bands for each parcel.
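The per-parcel feature extraction described above (dB-to-natural conversion followed by fixed 7-day binning) can be sketched outside GEE as follows; the function name and the simple day-of-year binning are our own illustrative assumptions, not the exact GEE implementation:

```python
import numpy as np

def weekly_sigma0_means(dates_doy, sigma0_db, n_weeks=17):
    """Average backscatter per fixed 7-day bin, in natural (linear) units.

    dates_doy : day-of-year of each S1 observation of a parcel
    sigma0_db : corresponding parcel-mean backscatter coefficients in dB
    Returns a fixed-length vector of n_weeks weekly means (NaN where no data),
    giving an equally-sized feature set regardless of the number of coverages.
    """
    sigma0_nat = 10.0 ** (np.asarray(sigma0_db, dtype=float) / 10.0)  # dB -> natural
    weeks = (np.asarray(dates_doy) - 1) // 7                          # 7-day bin index
    out = np.full(n_weeks, np.nan)
    for w in range(n_weeks):
        vals = sigma0_nat[weeks == w]
        if vals.size:
            out[w] = vals.mean()
    return out

# Example: two observations fall in week 0, one in week 1, week 2 has no data
feat = weekly_sigma0_means([2, 5, 9], [-12.0, -14.0, -10.0], n_weeks=3)
```

Doing the averaging in natural units rather than in dB matches the order of operations stated above (convert first, then average).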

2.3.2. S1 Coherence

For our study, we also consider S1 coherence as a potential marker, and the required processing is shown in Figure 2b. Since the operational status of Sentinel-1B in late September 2016, it has been possible to generate coherence with a 6-day temporal baseline over Europe. Coherence generation requires Level 1 SLC as input and can be derived using the SNAP Sentinel-1 toolbox with the Graph Processing Tool batch procedure. The processing recipe (available as Supplementary Materials) includes debursting (i.e., merging of the azimuth bursts) of individual SLC IW subswaths (3 per scene), followed by co-registration of each pair of subswaths from successive scenes and finally the calculation of coherence by averaging 4 × 1 range and azimuth samples. The three subswaths are merged and terrain corrected to a 20-m pixel output product. Downloading SLC data and post-processing to geocoded coherence is a scripted process, which can be scheduled to run synchronized to S1 acquisitions. We generate coherence for the descending relative orbit 37 and ascending orbit 88 only. These orbits cover our study area completely and are approximately 3 days apart (scene acquisition time for Orbit 37 at 05:58 UTC on T0 and for Orbit 88 at 17:18 UTC on T0 + 3 days).
We upload the geocoded SAR coherence as assets to the GEE user account so that they can be shared with a range of use cases (e.g., [46]). Our marker is based on the occurrence of high coherence events, i.e., detecting the presence of bare soil or (very) low vegetation cover. In a similar manner as for σ 0 , temporal profiles are generated for each parcel by generating the maximum for a fixed period, in this case 15 days, i.e., with 4–5 coherence observations in each period. Extracts per parcel are also exported to a CSV formatted table, for both VV and VH polarizations.
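The fixed 15-day maximum aggregation for coherence, analogous to the weekly σ0 averaging, can be sketched as follows (the function name and day-of-year binning are illustrative assumptions):

```python
import numpy as np

def fortnight_max_coherence(dates_doy, coherence, n_periods=9):
    """Maximum parcel-mean coherence per fixed 15-day window.

    With 4-5 coherence observations per window (two orbits, 6-day baseline),
    taking the maximum captures high-coherence events such as bare soil.
    Returns NaN for windows without observations.
    """
    periods = (np.asarray(dates_doy) - 1) // 15   # 15-day bin index
    coherence = np.asarray(coherence, dtype=float)
    out = np.full(n_periods, np.nan)
    for p in range(n_periods):
        vals = coherence[periods == p]
        if vals.size:
            out[p] = vals.max()
    return out
```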

2.3.3. Deep Learning S1 Backscatter and Coherence Classification

For both S1 backscattering and S1 coherence, the same deep neural network machine learning routines are used, but applied separately (Figure 2). The choice of TensorFlow [47] for deep learning is practical and based on its growing reputation as a versatile open-source toolkit for a wide range of machine learning problems. However, the results reported in this study are likely to be reproducible with other (Python-based) open source machine learning libraries (Theano, scikit-learn, etc.).
The CSV records exported from GEE are first filtered for fields smaller than 0.1 ha, to avoid noisy time series at the training stage. Secondly, the crops are aggregated to 5 main classes: Grassland (GRA), Maize (MAI), Cereal (CER), Potatoes (POT) and Other (OTH), which represent >90% of the crop area in the study site. The specific BRP2017 crop categories aggregated into the 5 classes are described in Table A1 in the Appendix B. We also remove ancillary parcel attributes (e.g., area, perimeter, crop labels) from the records that do not play a role in the machine learning steps.
For the training step, the labeled records (N_total) are split into a training set (N_training) and a testing set (N_test). The choice of N_training can be a fixed number of records or a percentage of all records. In this study, we found that using between 100 and 300 randomly selected parcels per class for the training set performed well.
The TFLearn module provides a model wrapper for a deep neural network, which in our case consists of 2 fully-connected layers with 32 nodes and a softmax activation function. The shape of our input is specific to the S1 backscatter record set, which has 34 features (17 weekly averages for VV and VH polarization), and to the S1 coherence record set, which has 18 features (9 two-weekly averages for VV and VH). The model wrapper automatically performs classifier tasks such as training, prediction and production of accuracy metrics. We train the model for 80 epochs. Training accuracy levels off well before 80 epochs and does not increase significantly with a higher number of epochs.
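The layer dimensions described above can be illustrated with a plain NumPy forward pass. This is purely a shape sketch with untrained random weights: the hidden-layer tanh activation is our own assumption (the text specifies softmax only for the output), and the actual study used the TFLearn wrapper:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shapes as in the text: 34 S1-backscatter features -> two fully-connected
# layers of 32 nodes -> 5 crop classes (GRA, MAI, CER, POT, OTH).
W1, b1 = rng.normal(size=(34, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 32)), np.zeros(32)
W3, b3 = rng.normal(size=(32, 5)), np.zeros(5)

def predict_proba(x):
    h = np.tanh(x @ W1 + b1)   # hidden activation is an illustrative choice
    h = np.tanh(h @ W2 + b2)
    return softmax(h @ W3 + b3)

probs = predict_proba(rng.normal(size=(1, 34)))  # one parcel's feature vector
```

For the coherence record set, the only change would be the input dimension (18 features instead of 34).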
The testing set provides the “marked parcels”: for these, the trained model predicts the most probable class given the time series of each parcel. To improve the prediction reliability, we run the training and prediction twice with different random training samples, and we keep only results for which the class probability is higher than 70%. The choice of 70% ensures that the probability is higher than the sum of the probabilities of the other classes (unlike, for instance, a 50% threshold), but is not so high as to eliminate potential candidate outliers. After the 2 runs, we select the identified class with the highest probability in both cases. If this class differs from the original label, the parcel is marked as an outlier. Due to the non-uniform distribution of the crop classes, with a dominance of grassland and maize, the results do not improve significantly with more iterations. The validation of the markers is done using independent ground-truth data, as described in Section 2.4.3.
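One possible reading of this two-run decision rule can be sketched as a small helper; the function and its exact tie-breaking are our own assumptions:

```python
def mark_outlier(declared, runs, threshold=0.7):
    """Mark a parcel as an outlier from two independent prediction runs.

    declared : the class label declared by the farmer (e.g., "GRA")
    runs     : list of (predicted_class, probability) pairs, one per run
    A run is kept only if its top-class probability exceeds the threshold;
    among kept runs, the class of the most confident run is compared with
    the declared label, and a mismatch marks the parcel.
    """
    kept = [(c, p) for c, p in runs if p > threshold]
    if not kept:
        return False                       # no reliable prediction: not marked
    best_class, _ = max(kept, key=lambda cp: cp[1])
    return best_class != declared          # marked if prediction contradicts label
```

For example, a parcel declared as grassland but predicted as maize with high confidence in both runs would be marked; a parcel with no prediction above 70% would not.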

2.3.4. S2 Bare Soil Index

Similarly to S1, S2 is processed as described in Figure 2c. Clouds and cirrus are removed using the ‘QA60’ flag provided in the metadata by the European Space Agency (ESA). Cloud shadows are then removed using solar geometry and height estimation. Although atmospherically-corrected S2 images are better for reliable spatial and temporal comparison, Level-2A products were not yet available in GEE. However, atmospheric correction is not always a prerequisite for classification and change detection [48], especially when working with normalized indices. Therefore, as a trade-off between quality and availability, the Level-1C data were preferred for this study.
In order to detect ploughing or other grassland conversion measures that lead to a temporary loss of the vegetation cover, we adapt the Bare Soil Index (BSI), proposed by [49] for Landsat TM, to Sentinel-2 (Equation (1)). The SWIR1 (Short-Wave InfraRed) and red bands quantify the soil mineral composition, while the NIR (Near-InfraRed) and blue bands are sensitive to the presence of vegetation. The blue (B2, range 490 ± 32.5 nm), red (B4, range 665 ± 15 nm) and NIR (B8, range 842 ± 57.5 nm) bands all have a 10-m resolution, while the SWIR1 band (B11, range 1610 ± 45 nm) has a 20-m resolution and is resampled to 10 m.
$$BSI = \frac{(\varrho_{SWIR1} + \varrho_{Red}) - (\varrho_{NIR} + \varrho_{Blue})}{(\varrho_{SWIR1} + \varrho_{Red}) + (\varrho_{NIR} + \varrho_{Blue})} \quad (1)$$
For each parcel, the BSI is calculated for each available cloud-free Sentinel-2 acquisition. The number of times when BSI is positive, i.e., the parcel is potentially bare, is calculated using Equation (2). This provides an indication that the parcel was ploughed and possibly converted.
$$\sum_{t} \left( BSI_{t} > 0 \right) \quad (2)$$
Parcels are marked with the S2-BSI when the soil was detected as bare more than 1 time in the period 1 April to 1 August (i.e., Equation (2) > 1 ).
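Equations (1) and (2) and the marking threshold can be sketched as follows, with parcel-mean band reflectances as inputs (the function names are illustrative):

```python
import numpy as np

def bsi(blue, red, nir, swir1):
    """Bare Soil Index of Equation (1) from band reflectances.

    Positive values indicate that the soil signal (SWIR1 + red) dominates
    the vegetation signal (NIR + blue), i.e., the parcel is potentially bare.
    """
    num = (swir1 + red) - (nir + blue)
    den = (swir1 + red) + (nir + blue)
    return num / den

def s2_bsi_marker(bsi_series, min_events=2):
    """Equation (2) plus the marking rule: the parcel is marked (1) when
    BSI > 0 on more than one cloud-free acquisition in the 1 April to
    1 August window, else 0."""
    return int(np.sum(np.asarray(bsi_series) > 0) >= min_events)
```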

2.3.5. Combining Satellite Markers

After processing S1 and S2, we obtain a binary marker from each of the three workflows (Figure 2), indicating whether the parcel was marked as an outlier: S1 backscatter, S1 coherence and S2 BSI. In addition, we also combine the three markers with a logical AND and with a logical OR (Table 1). S1 AND S2 indicates that the parcel is marked by S1 (either S1 backscatter or S1 coherence) and S2 at the same time. S1 OR S2 indicates that the parcel is marked by at least one of the three markers, i.e., S1 backscatter, S1 coherence or S2 BSI.
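The marker combinations of Table 1 amount to simple Boolean logic per parcel; a minimal sketch (the dictionary keys are our own labels):

```python
def combine_markers(s1_backscatter, s1_coherence, s2_bsi):
    """Combine the three binary markers per parcel, as in Table 1.

    s1_backscatter, s1_coherence, s2_bsi : booleans from the three workflows
    Returns the AND and OR combinations described in the text.
    """
    s1 = s1_backscatter or s1_coherence            # marked by either S1 marker
    return {
        "S1_AND_S2": s1 and s2_bsi,                # marked by S1 and S2 together
        "S1_OR_S2": s1_backscatter or s1_coherence or s2_bsi,  # any marker
    }
```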

2.4. Ground-Truth Collection and Accuracy Assessments

To assess the performance of these markers and their combinations, ground-truth collection was carried out on 11 October 2017 using 2 approaches: (i) a field visit (described in Section 2.4.1) and (ii) simultaneous collection of street-level geo-tagged pictures using a rooftop camera (described in Section 2.4.2). The accuracy assessment methodology is then described in Section 2.4.3.

2.4.1. Ground-Truth from Field Visits

During the field visits, crop type was recorded as point observations with a mobile map app (Avenza Maps® [50]). This app permits downloading maps for offline use on a smartphone or tablet and uses the device's built-in GPS to track location. Two background layers were generated from very high resolution hybrid layers (Google Maps Aerial and Google Street Map), with the marked parcel boundaries colored differently according to the S1 and S2 markers.
The app allows the user to drop a pin on a field and specify attributes, including the position (longitude, latitude), the time and the crop class, aggregated into 5 groups: cereals (CER), grassland (GRA), maize (MAI), potatoes (POT) and other (OTH) (see Table A1 in Appendix B). In addition, status information was collected, such as: bare soil, catch crop, meadow, renewed and cut, renewed recently (for GRA), standing or stubble (for MAI). Geo-tagged still camera photos were also taken. A total of 241 fields were surveyed by field visits. Among these, 5 fields were absent from BRP2017 and 5 surveyed positions were wrongly located outside the parcel boundaries; these were therefore excluded.

2.4.2. Geo-Tagged Street-Level Images

Street-level digital photography represents a valuable source of in situ data with an important potential as it provides advantages in terms of logistics and objectivity. This section describes how the pictures were collected and then visually interpreted to provide another source of ground-truth.
An action camera (SONY HDR-AS300R) with a forward-looking Zeiss® Tessar wide-angle lens was mounted on the roof of the car with a suction cup. The choice of device and mount was advised by Mapillary [51]. In particular, we followed the set-up described by [52]. The camera was connected to an external battery to collect data continuously for up to 8 h on a 128-GB SD card and was handled via a remote control screen inside the car. As suggested by [51], the camera was programmed to record at 1-s intervals. Each image was acquired at 4K resolution, i.e., 3840 × 2160 pixels. The exposure was adjusted for each shot, smoothly following changes in brightness along the route.
A script available on GitHub and developed by Mapillary [53] for preprocessing and uploading images to their platform applies the following steps: skip images that are potential duplicates (the minimum distance between consecutive acquisitions was set to 0.5 m); group images into sequences based on GPS and time (with a cutoff time of 25 s and a cutoff distance of 100 m); interpolate compass angles for each sequence; add Mapillary tags to the images; and, finally, upload the images.
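The deduplication and sequence-grouping logic can be sketched independently of Mapillary's actual script (whose implementation may differ); the helpers below are our own illustration of the stated cutoffs (0.5-m duplicate distance, 25-s and 100-m sequence cuts).

```python
import math

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon, ...) tuples."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p[:2], *q[:2]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def group_sequences(shots, min_step=0.5, cut_time=25.0, cut_dist=100.0):
    """shots: list of (lat, lon, unix_time) tuples, sorted by time.
    Drops near-duplicates (closer than min_step meters) and starts a new
    sequence when consecutive shots are more than cut_time seconds or
    cut_dist meters apart."""
    sequences, current = [], []
    for shot in shots:
        if not current:
            current.append(shot)
            continue
        prev = current[-1]
        d = haversine_m(prev, shot)
        if d < min_step:  # potential duplicate: skip
            continue
        if shot[2] - prev[2] > cut_time or d > cut_dist:
            sequences.append(current)  # close the current sequence
            current = [shot]
        else:
            current.append(shot)
    if current:
        sequences.append(current)
    return sequences
```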
After uploading the images to Mapillary, the image locations, the sequence of the driven path and the images themselves could be accessed through an API (see [54] for a technical description). To select one image per field, we chose the street-level picture located closest to the centroid of the intersection between the considered field and a 35-m buffer around the driven road. The street-level pictures were visually photo-interpreted with the help of an aerial overview (Figure 3a) and the image itself (Figure 3b). In addition to this image, the photo-interpreter was provided with a link to the field on the Mapillary platform, where another picture of the same field could be selected if the provided picture was not suitable. For each parcel, the photo-interpreter had to choose a tag (i.e., CER, GRA, MAI, POT or OTH) and to indicate whether the preselected street-level picture was sufficient for interpretation or an additional picture from Mapillary was needed (see the results in Section 3.3.1).
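The per-field image selection can be sketched as follows, assuming the centroid of the field/road-buffer intersection has already been computed in a GIS (in practice, e.g., with Shapely or PostGIS); the function and field names are illustrative, not the study's code.

```python
def select_field_image(pictures, target_point):
    """Return the geo-tagged picture closest to a target point.

    pictures: list of dicts with an 'id' and planar 'x', 'y' coordinates
    (e.g., meters in a projected CRS).
    target_point: (x, y) of the centroid of the intersection between the
    field polygon and the 35-m driven-road buffer, computed beforehand."""
    def dist2(p):
        # Squared planar distance is enough for an argmin comparison.
        return (p["x"] - target_point[0]) ** 2 + (p["y"] - target_point[1]) ** 2
    return min(pictures, key=dist2)

pics = [{"id": "a", "x": 0.0, "y": 0.0}, {"id": "b", "x": 5.0, "y": 5.0}]
select_field_image(pics, (4.0, 4.0))  # -> picture 'b'
```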

2.4.3. Accuracy Assessment

In this study, four accuracy assessments were carried out: (1) the BRP2017 assessment using the ground-truth; (2) the assessment of the street-level pictures’ photo-interpretation method; (3) the S1 and S2 markers’ assessment using the ground-truth; and (4) the S1 and S2 markers’ assessment using street-level pictures. To carry out these assessments, we used the following metrics described below, first for assessing multiple classes (>2) and then for binary cases.
The most common way to express classification accuracy for multiple classes is the preparation of a so-called error matrix, also known as a confusion matrix or contingency matrix (Table 2). Such matrices show the cross tabulation of the classified land cover and the actual land cover revealed by field observation results. The confusion matrices compare each class of the map ( n i ) to the reference sample classes ( n j ) with k classes.
Each confusion matrix reports the Overall Accuracy (OA), the User’s Accuracy (UA), the Producer’s Accuracy (PA) [55] and the F-score, which are used for map assessment. OA represents the proportion of all cases correctly classified (Equation (3)), with n being the total number of samples and q the total number of classes. UA, also known as precision, corresponds to the probability that a randomly-selected sample from the map is correctly classified in the reference sample. PA, also known as recall, corresponds to the probability that a reference sample is correctly classified in the map. Therefore, UA is related to the commission error, while PA indicates the omission error. They are calculated following Equations (4) and (5) and weighted using the same method as for the global overall accuracy. The F-score (Equation (6)) represents, for a class k, the harmonic mean of the user’s and producer’s accuracies and ranges between 0 and 1.
$$OA = \frac{\sum_{k=1}^{q} n_{kk}}{\sum_{s} n_{s}} \tag{3}$$
$$UA_i = \frac{n_{ii}}{n_{i+}} \tag{4}$$
$$PA_j = \frac{n_{jj}}{n_{+j}} \tag{5}$$
$$F\text{-}score_k = \frac{2 \times UA_k \times PA_k}{UA_k + PA_k} \tag{6}$$
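Equations (3)–(6) can be computed directly from a confusion matrix, as in this illustrative numpy sketch (assuming no empty map or reference classes, so no divisions by zero; the function name is ours):

```python
import numpy as np

def accuracy_metrics(cm):
    """cm[i, j]: count of parcels mapped as class i with reference class j.
    Returns OA, per-class UA, PA and F-score (Equations (3)-(6))."""
    cm = np.asarray(cm, dtype=float)
    diag = np.diag(cm)
    oa = diag.sum() / cm.sum()
    ua = diag / cm.sum(axis=1)   # row totals: n_{i+}
    pa = diag / cm.sum(axis=0)   # column totals: n_{+j}
    f = 2 * ua * pa / (ua + pa)  # harmonic mean per class
    return oa, ua, pa, f

# Two-class toy example (not data from this study)
oa, ua, pa, f = accuracy_metrics([[90, 10],
                                  [5, 95]])
```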
For binary cases such as for assessing the markers, the True Positive (TP), False Positive (FP), False Negative (FN) and True Negative (TN) proportions are calculated as described in Table 3. Several useful metrics describe the marker performance: sensitivity (Equation (7)), specificity (Equation (8)), precision (Equation (9)), accuracy (Equation (10)) and F-score (Equation (11)).
$$\mathrm{Sensitivity} = \frac{TP}{TP + FN} \tag{7}$$
$$\mathrm{Specificity} = \frac{TN}{TN + FP} \tag{8}$$
$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{9}$$
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{10}$$
$$F\text{-}score = \frac{2TP}{2TP + FP + FN} \tag{11}$$
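The binary metrics of Equations (7)–(11) reduce to a few lines of Python (the function name is ours; counts are a made-up example):

```python
def marker_metrics(tp, fp, fn, tn):
    """Binary marker performance from the confusion counts
    (Equations (7)-(11))."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "f_score": 2 * tp / (2 * tp + fp + fn),
    }

marker_metrics(8, 0, 2, 10)
# e.g., sensitivity 0.8, specificity 1.0, precision 1.0, accuracy 0.9
```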

3. Results

The first part of the results section (Section 3.1) presents the satellite-based marker results, first at the parcel level and then at the study area level. The second section presents the street-level imagery acquisition (Section 3.2). Finally, the four assessments are presented, demonstrating how the different proposed approaches performed (Section 3.3).

3.1. Satellite-Based Markers

First, the temporal series of a marked parcel is shown to illustrate the results in light of the signal measured by the satellites. This parcel was declared as permanent grassland in BRP2017, but was marked by all three markers. Figure 4 shows the S1 backscatter intensity time series along with the coherence for both the VV and VH polarizations. For the parcel location and the period 1 January to 1 August, 36 descending and 34 ascending orbits were acquired. From the time series, we observe a drop in the backscattering for both the VV and VH polarizations at the end of March, indicating a decrease in vegetation. The coherence for both the VV and VH polarizations is low when ploughing occurs, but in the late April to early May period, high coherence values indicate a stable bare surface state. Figure 5 shows the S2 BSI and NDVI. In the study period, 44 Sentinel-2 observations were available (OBSERVATION in Figure 5) for this parcel. However, only 10 of these were cloud- and shadow-free (BSI and NDVI in Figure 5). At the end of April, the BSI was above zero, indicating that the soil was bare. The fact that two consecutive bare soil states were detected explains why the parcel was marked by the S2-BSI marker. Figure 6 provides the BSI sum for the parcel at the pixel level, along with true-color S2 overviews of selected dates. The BSI sum along the growing season is averaged for the parcel, which is marked when the average is greater than zero. In this example, ploughing can be observed during the second half of March on the RGB composite (Figure 6), which corresponds exactly to the BSI time series (Figure 5). Interestingly, half of the parcel was observed to be bare on 14 March, and the remaining part was bare on 24 March. Vegetation regrowth was then observed on the 26 May acquisition (Figure 6), for which the BSI drops back below zero, as indicated in Figure 5.
The study site contained 15,395 parcels, for which area size and class distribution are described in Table 4, together with the composition of the training sample used for the S1 markers. In total, less than five percent (4.6%) of the parcels were used for training on three classes (GRA, MAI, CER). Potatoes (POT) were not used for training, as they represent fewer than 150 parcels (116). The other classes (OTH) were not used either, as they are by definition too heterogeneous for training.
The machine learning classification of the S1 weekly backscattering signatures has led to a total of 1002 (8.51%) marked outliers, while the S1 coherence marked 345 (2.93%) outliers (Table 5). The S2 BSI method led to 1064 (9.03%) marked outliers. When combining markers as described in Table 1, we obtained five different marker combinations. Both the logical AND and OR of the S1 backscatter and S2 BSI markers show that both marker approaches were highly complementary, with only 303 (2.57%) parcels marked as outliers by both. In Figure 7c, the spatial pattern of marked outliers for the S1 AND S2 combination is shown for the study area.

3.2. Geo-Tagged Street Level Imagery

Figure 8 illustrates the geo-tagged street-level acquisition for a given parcel. One image was acquired every second (represented as black dots in Figure 8). Figure 8a–c show examples of pictures of the same parcel acquired at three different locations.
For the current study, about 35,000 street-level images were collected (see Figure 1 for a map of the route covered). After pre-processing, the pictures were uploaded to the Mapillary platform. In the study area, 1411 fields were observed with street-level imagery, i.e., within a 35-m buffer of the driven road (Table 6). These fields were mainly declared as grassland (82%) and maize (13.61%). After image collection, the subsequent step was the visual photo-interpretation tagging, which will be described in the following section.

3.3. Accuracy Assessment

3.3.1. BRP2017 Assessment Using Ground-Truth

Table 7 presents the confusion matrix between the declared parcels’ BRP2017 and the ground-truth collected. Of the surveyed parcels, 92% were found to be of the same class as declared in the BRP2017 (OA = 0.92). For grassland, which is our main target, UA was 0.99 and PA was 0.90 (Table 7). Indeed, nine parcels declared as grassland in the BRP2017 were in fact maize parcels, and five were Others (OTH).

3.3.2. Assessment of Street-Level Pictures Tagging with the Ground-Truth

In the second step, we evaluate the fitness for purpose of the proposed labeling method for the street-level pictures with automated picture selection. Three different photo-interpreters have tagged the pictures for each of the 214 parcels for which we collected field observations. For this study, the photo-interpretation rate was about 100 parcels per hour. The parcel labeling by the interpreters was fast and straightforward on the automatically preselected pictures for 83 to 97% of the parcels depending on the interpreter (Table 8). For the remaining parcels, the interpreter had to search for a more suitable image first, which required more time. Only 2 to 7% of the parcels were not visible from any of the collected pictures and, thus, were not labeled by the interpreters (Table 8).
The confusion matrix comparing the interpreted photo tags to the field survey results shows an OA that varies between 88.79% and 92.06% according to the interpreter (Table 9). For the three photo-interpreters and the two main classes of interest (i.e., grassland and maize), the UA, PA and F-score were always above 0.9 (Table 10).

3.3.3. Markers’ Assessment Using the Ground-Truth

We now proceed with the performance assessment of the proposed satellite markers, in particular those that flag declared grassland parcels as non-conforming. The field survey contained 144 parcels declared as grasslands. The marked parcels are compared with the ground-truth in Table 11, and the results are discussed in Section 4.1.2.

3.3.4. Assessment of the Markers with Geo-Tagged Street-Level Acquisitions

The interpreted and tagged street-level pictures provide a significantly larger sample of parcels for which the satellite markers can be compared. For the study area, 1411 parcels were observed by street-level imagery (Table 6). For the marked parcels (Table 11), between 34 and 219 parcels, depending on the marker, were observed with street-level pictures. For each of the five markers, we sampled the same number of fields amongst the non-marked fields to derive more robust metrics. This results in a number of parcels ranging from 64 to 386 with corresponding tagged street-level imagery used for the subsequent assessment (see Table 12 for the detailed samples’ distributions). As an example, the S1 backscatter marker selects 1002 parcels for which 104 parcels were observed by street-level pictures. Among those 104 parcels, 92 were identifiable on the street-level pictures and thus tagged. In addition, 92 non-marked parcels were randomly selected and tagged, resulting in a total of 184 samples used to assess the S1 backscatter marker.
Table 13 lists the performance metrics derived from Table 12. All the markers except S1 coherence have an FP count of zero, indicating that the markers do not miss any converted grasslands. This is well described by the specificity metric, ranging from 0.71 to 1.00, which relates to the markers’ ability to correctly identify declared grassland parcels that are in fact not grasslands. The FN count, corresponding to parcels marked by the satellite but identified as non-converted grassland on street-level imagery, is substantial and results in the highest accuracy for the combined S1 AND S2 marker.

4. Discussion

The overall objective of this paper, to evaluate the feasibility of integrating the in-season availability of parcel-level crop type information with S1 and S2 markers, was achieved. The markers aim to identify, across a large area, grassland parcels with deviating temporal-radiometric characteristics, which are then targeted for inspection during a one-day field survey. Nevertheless, following the five specific objectives described in Section 1.7, we now discuss issues that clarify specific but important details of the methodology and that will need further investigation and improvement in future studies.

4.1. Efficiency of the S1 and S2 Markers for Grassland Monitoring

The first specific objective of this study was to evaluate whether efficient markers for grassland monitoring can be defined from a combination of Sentinel-1 SAR and Sentinel-2 multi-spectral observations. Indeed, we demonstrate that when combining both sensors, we were able to detect parcels wrongly declared as grasslands with a precision of 98%.

4.1.1. Marker Concept

Nevertheless, the marker concept as described in Section 1.3 is challenging to translate to a precise remote sensing definition as the interpretation is not only linked to a remotely-sensed signal, but also to agricultural practices. The markers tested are a temporal crop type classification using S1 and the detection of bare soil with S2. The latter marker suggests ploughing of the parcel, and this indicates possible grassland conversion. In the Netherlands, grassland can be declared permanent if it is maintained continuously for five years. However, ploughing the parcel is allowed (if not in Natura 2000), as long as it is directly renewed to grassland thereafter. Therefore, while the S2 BSI marker correctly identified some parcels as being bare during the growing season, renewed grassland was observed on these parcels during the field survey in October. These parcels are included in the FN category. Similarly, the field survey also revealed that some parcels in this category flagged with the S1 backscatter and S2 BSI markers actually were heavily grazed meadows, e.g., by horses, leading to bare soil appearance, especially during wet periods in the spring.

4.1.2. Evaluating Markers with Targeted Field Visits and Street-Level Images

The field observations were collected according to a routing designed to maximize the inspection rate of the marked parcels, but without taking into account a proper distribution with respect to non-marked fields. This limitation was partly overcome by using the street-level images to build a more complete ground-truth sample. Following this approach, information about the efficiency of the methodology can be deduced. Using the satellite marker methodology, a high level of TN and a low level of FP were obtained as indicated by the evaluation with the field observations and the street-level imagery (respectively Table 11 and Table 13). This is a requirement for methodologies focusing on parcel monitoring. Regarding the results obtained with street-level pictures (Table 13), compared to the assessment of the markers using the targeted field visits (Table 11), it is interesting to see that the FP number is low, ranging from zero to two for street-level pictures and ranging from one to seven for the targeted field visits.
Generally, the accuracies obtained with street-level pictures (Table 13) are lower than the ones obtained with the ground-truth (Table 11) since the proportions of FN are higher. However, the accuracy for S1 OR S2 in Table 11 at 0.23 is much lower than the corresponding value in Table 13. This is because the number of TP is proportionally much lower in Table 11 since these parcels (grassland detected as grassland), were not targeted for field observations.

4.1.3. Combination of S1 and S2

The evaluation of the performance of different single and combined markers shows that parcels marked by both sensors yield the highest sensitivity (0.84), precision (0.98), accuracy (0.84) and F-score (0.9). In this case, we observe two FP and 12 TN when compared to the field visits. In fact, only one of the FP is detected by S1 backscatter, but not by S2 BSI (hence, S1 OR S2 has one FP). In conclusion, only one parcel is not flagged by any of the marker combinations, which is a good operational result in an inspection context.
In this study, S1 backscatter and S2 BSI demonstrate their ability to detect outlier parcels. S1 coherence was less efficient with more false positives than the other markers. This can be linked to the time window (two weeks) selected to create the maximum composite as this may not be optimal to grasp short-term change events within the parcel. Furthermore, S1 coherence is generated at 20-m resolution, which may lead to a loss of quality in extracting parcel means, given the relatively small average parcel size of 1.75 ha. S1 coherence is probably more useful as a discrete type of signal (like S2 BSI), rather than in a machine learning approach as used in this study. More work is needed to explore how coherence should be exploited for such applications.

4.2. Processing Methodology

The second objective is to assess the usefulness of combining cloud processing using Google Earth Engine (GEE) and deep learning (Tensorflow) to perform large-scale marker evaluation tasks at the parcel level.

4.2.1. Tools, Coding Languages and Platforms

A core characteristic of this study is the ability to analyze diverse data streams, including deep time series stacks from Sentinel-1 and -2 against large parcel datasets, and to validate results with field visit observations and tags derived from interpreted geo-tagged street-level images. For remote sensing processing, GEE was selected because it already hosts S1 and S2 in “analysis ready” formats and provides the sophisticated processing functionality and capacity to rapidly generate arbitrarily large signature compositions and extracts. One limitation of GEE is that it does not host S1 coherence outputs yet. Thus, coherence processing needs to be done outside GEE, starting from S1 SLC images and with the use of the SNAP Sentinel-1 toolbox. A recent study by Zebker [56] proposes a new method to resample S1 SLC data to map projected SLC products that will then allow coherence (and interferometric phase) generation on cloud platforms such as GEE. This would likely lead to further popularizing the use of such outputs in crop monitoring and other applications.
Another GEE platform limitation is the lack of Sentinel-2 atmospherically-corrected surface reflectance, even though this was not an essential requirement in this study. The reason these datasets are not yet ingested into the GEE catalog is primarily a lack of consensus in the scientific community on the preferred correction algorithm [57].
GEE integration with TensorFlow is a work in progress, which required us to run the latter on a stand-alone workstation (an eight-core Intel Xeon E3-1505M v6 @3.00 GHz, with 64 GB RAM and Quadro M2200 GPU) using the Python implementation. The Mapillary API was used to support the tagging of the geo-tagged street-level images. Overall, Python provides the “glue” to interlink the various inputs and outputs to processing modules, for format conversion (typically using GDAL [58]) and occasional spatial data handling (in PostgreSQL/Postgis).
The tools developed and published along with this study can be scaled to work for much larger areas than our study area. We have already applied equivalent GEE extractions and TensorFlow classifications at the country scale using the full BRP2017 (770,000 parcels). TensorFlow run times for the analysis proposed in this study are on the order of minutes, i.e., uptake of the methodology is in no way limited by a need for complicated cloud solutions or dedicated hardware such as GPUs.

4.2.2. Need for Open-Access Parcel Identification

A prerequisite of this study is the availability of open-access digital agricultural parcel features with their actual crop type. Because of technical, administrative and political reasons, such datasets are not always available (on time). Segmentation methods such as Simple Non-Iterative Clustering (SNIC) [59] could now efficiently be applied on the cloud [60] and thus bridge the gap for countries or regions where vector parcel data are not available. However, this remains challenging as the segmentation performs differently according to the landscape structure, input data and parameter settings. Existing large-scale (e.g., 1:10,000) topographic vector data and high quality land use/land cover maps would facilitate enhancing segmentation results. An alternative would be to use timely high-resolution imagery to generate the parcel boundary data on demand, in a similar manner as how MS LPIS are created and maintained. Systematic collection of open spectro-temporal crop type libraries, much in the fashion of the USGS Spectral Library [61], but dedicated to agriculture, would be an essential contribution to crop recognition tasks.

4.2.3. TensorFlow Training Improvement

One of the limitations of the S1 machine learning approach is that the study area contains about 15,000 parcels with unevenly-distributed crop types. This limits the design of a properly, statistically-distributed training sampling strategy. Enlarging the sample with parcels stratified by class from neighboring areas would probably improve the method’s robustness and should be investigated in the future. Such an enlarged set would facilitate multiple training set selections, allowing majority voting with more runs and stricter class probability thresholds per run to improve the robustness of the results. So far, the structure of the neural network itself was not systematically assessed, as the results obtained with two hidden layers of 32 nodes each were satisfactory, both in achievable model accuracy and computational time. However, further investigation and hyper-parameter tuning (number of layers, number of nodes, number of epochs) could lead to additional improvements. Other interesting perspectives include the use of transfer learning methods such as [62] with models trained on different years or geographical locations.
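For illustration, the referenced architecture (two hidden layers of 32 nodes) can be sketched as a plain numpy forward pass; the input width, ReLU activations and random initialization here are our assumptions, not the study's exact TensorFlow configuration, and the weights are untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    # Numerically stable row-wise softmax
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shapes only: ~30 weekly S1 features in, 3 classes (GRA, MAI, CER) out,
# two hidden layers of 32 nodes as described in the study.
n_features, n_hidden, n_classes = 30, 32, 3
W1 = rng.normal(size=(n_features, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_hidden)) * 0.1
b2 = np.zeros(n_hidden)
W3 = rng.normal(size=(n_hidden, n_classes)) * 0.1
b3 = np.zeros(n_classes)

def predict(x):
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return softmax(h2 @ W3 + b3)

# Class probabilities for a batch of five random parcels
probs = predict(rng.normal(size=(5, n_features)))
```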

4.3. Implications for CAP Monitoring

The third objective is to evaluate whether the targeted monitoring approach is appropriate and able to efficiently scale across the whole area covering 15,000 parcels. The question for this study was to evaluate if the proposed markers could detect parcels for which the predicted crop class was not the same as the declared one, in particular for declared grassland.
Table 7 comparing the BRP2017 with the targeted field visit shows that 18 parcels are not compliant, i.e., their declared label is not what was observed in the field. Among the 18 diverging parcels, nine parcels were declared as grasslands, but categorized as maize. Of these, six were confirmed as maize in both the field visit and the tagged street-level pictures (the other three parcels were too far from the road and, therefore, not observed). As mentioned in Section 2.2.1, this study used the early available BRP2017 (available in July). After conducting this study, a definitive version of the BRP2017 was made available. We therefore cross-checked the 18 diverging parcels in the early version of BRP2017 and in the definitive one. Among the six parcels declared as grasslands, but found as maize, one was changed to maize in the definitive BRP version.
The total of nine non-compliant grasslands in the total of the 1411 parcels (0.64%) that were observed in the street-level images suggests that the overall problem of undeclared grassland conversion in the study area is very low. Since we have targeted our field visit and street-level image collection based on the S1 and S2 markers, we may assume that the actual grassland conversion for the whole area is even below the rate derived from the street-level sample. However, permanent grassland conversion risks being a gradual process, and corrective action is required to ensure that non-compliant parcels are resown as grassland. Our method can be an important contribution to systematically follow this process and understand its temporal and geographical dynamics.

4.4. Street-Level Imagery

The fourth objective is to evaluate whether street-level imagery can contribute to the efficiency and effectiveness of a field survey designed for the evaluation of our markers.
In less than three hours, the interpreters were able to tag more than 200 parcels with an overall accuracy close to 90% (Table 9). The performance in distinguishing grasslands from maize was good, as UA and PA were always above 90% for the three interpreters (Table 10). These results are encouraging and demonstrate that street-level photo-interpretation is a valuable approach. However, the study focused on five crop groups that are rather distinct, even in sub-optimal quality photos. A full evaluation with a wider range of crop types and crop stages would be necessary to appraise street-level imagery in more generalized crop recognition contexts. The three interpreters in our study were researchers with some expertise in agronomy, which may have biased the interpretation results towards higher accuracies. However, as discussed by [63], proper training and relevant feedback can improve recognition skills over time.
This study demonstrates the potential to improve in situ data collection efficiency. In one day, 214 parcels were visited to collect crop type, while at the same time, 1411 parcels were observed with street-level imagery. Collection efficiency thus improved by a factor of seven, while still reaching an interpretation accuracy of about 90% (Table 9). This efficiency factor would probably be somewhat higher if the survey deployed only street-level imagery collection (no stops needed for field observations) and would double if side-looking cameras were mounted on both sides of the car.
The usefulness of collecting ground-truth data with such an approach depends on landscape structure including field size and landscape elements obscuring the field of view. Clearly, when the parcel is not adjacent to the road or hidden by anthropic or natural elements such as trees or hedges, it limits the usefulness of the approach. Automating the image tagging with the use of computer vision methods opens a new avenue for further raising the efficiency of street-level image use. The potential of these methods has already been demonstrated in other user domains [27,28,29,30,31,32,33,34,35]. In order to use the full potential of deep learning approaches for crop type recognition, abundant and representative training data are a pre-requisite. Generating dedicated street-level crop-type hierarchical datasets following the approach of ImageNet [64] will be a worthwhile effort for the scientific community.
In this study, street-level pictures were collected specifically to assess the markers, and these were then uploaded on an open-access platform. Mapillary currently (as of 6 July 2018) provides more than 315 million crowd-sourced images. A huge future, and so far unexplored, opportunity exists in the use of such images to collect and generate useful in situ data. It is important to note that collecting data with street-level cameras should comply with privacy regulations such as the European General Data Protection Regulation (GDPR) [65].

4.5. Code Sharing

Following the general reporting requirements of scientific research, and in line with the final objective of our manuscript, all scripts and code used across a variety of software packages needed to reproduce the results presented here are made available on the GitHub repository https://github.com/rdandrimont/AGREE (AGRicultural Enhanced Evidence), as summarized in Figure A1, Appendix A, and online as Supplementary Material. In this way, the computationally-intensive, data-driven science carried out in this study, and the results obtained, can be reproduced transparently and completely.

5. Conclusions

Taking advantage of four socio-technological developments: (1) frequent and high-resolution observations made by the Copernicus Sentinels; (2) in-season availability of parcel-level crop-type declarations; (3) cloud computation and deep learning; and (4) the availability of street-level imagery, a case study to monitor grasslands in the Netherlands was carried out in 2017. Using Sentinel satellite observations during the growing season allowed us to flag grassland parcels with deviating temporal-radiometric characteristics. The best marker was obtained by combining S1 and S2. A subsequent one-day field survey to inspect the parcels and build a relevant ground-truth sample confirmed the usefulness and scalability of the methodology, which detected parcels wrongly declared as grasslands with a precision of 98% when combining both sensors. Additionally, this study demonstrated that collecting street-level imagery permits efficient in situ collection of crop type data with an accuracy of about 90%. This opens avenues to improve high-accuracy crop monitoring at the parcel level.

Supplementary Materials

In addition to the GitHub repository (https://github.com/rdandrimont/AGREE), a supplementary document providing the source code is available online at https://www.mdpi.com/2072-4292/10/8/1300/s1.

Author Contributions

Raphaël d’Andrimont, Guido Lemoine and Marijn van der Velde conceived, designed and realized the study and wrote the manuscript.

Funding

This manuscript has received funding from the European Union’s Horizon 2020 research and innovation program under Grant Agreement No. 689812 (LandSense, https://landsense.eu/).

Acknowledgments

The authors would like to thank the three anonymous reviewers for their constructive comments. They would also like to thank their colleagues Wim Devos and Vincenzo Angileri for their insights in the CAP along with Momtchil Iordanov for his comments on an earlier version of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Code

All the code written for this study is publicly available on the GitHub repository https://github.com/rdandrimont/AGREE (AGRicultural Enhanced Evidence) and in a document in the Supplementary Material. For this study, five different scripts are available (Figure A1).
Figure A1. Code developed in this study is open and available to the public. It is divided into five chunks using different formats and libraries, including JavaScript for GEE, Python, SNAP and TensorFlow.

Appendix B. Aggregation of Crop Class Label for the Ground-Truth

Table A1. Aggregation of the crops for the ground-truth collection.

Label | Label (Dutch Name) | Gewas Code
Grasslands (GRA) | Permanent grassland (Grasland, blijvend) | 265
 | Temporary grassland (Grasland, tijdelijk) | 266
 | Natural grassland. Main function: agriculture. (Grasland, natuurlijk. Hoofdfunctie landbouw.) | 331
 | Natural grassland. Main function: nature. (Grasland, natuurlijk. Hoofdfunctie natuur.) | 332
 | Natural grassland (Grasland, natuurlijk. Areaal met een natuurbeheertype) | 336
Maize (MAI) | Green maize (Maïs, snij-) | 259
 | Grain maize (Maïs, korrel-) | 316
Potatoes (POT) | Potatoes, starch (Aardappelen, zetmeel) | 2017
Cereals (CER) | Summer wheat (Tarwe, zomer-) | 234
 | Winter barley (Gerst, winter-) | 235
 | Summer barley (Gerst, zomer-) | 236
 | Rye (Rogge (geen snijrogge)) | 237
Other (OTH) | Alley trees/park trees, older/heavier trees (Laanbomen/parkbomen, opzetters, open grond) | 1071
 | Peas, green/yellow (Erwten, groene/gele (groen te oogsten)) | 244
 | Spinach (Spinazie, zaden en opkweekmateriaal) | 2774

Figure 1. The study site is the Gelderse Vallei including 7 municipalities: Ede, Wageningen, Renkum, Barneveld, Arnhem, Putten and Nijkerk. The ground-truth points were collected during a field visit; geo-tagged street-level imagery was acquired simultaneously at 1 frame per second along the survey path.
Figure 2. Main processing steps for S1 backscattering (a), S1 coherence (b) and S2 BSI (c).
Figure 3. The photo-interpreters were given a high-resolution aerial image with the parcel and the point where the image was taken (a) along with the image to photo-interpret (b). This example shows a maize parcel on the left side of the road, which is not yet harvested.
Figure 4. S1 backscattering and coherence signal for a parcel declared as permanent grassland and marked with S1.
Figure 5. S2 parcel temporal profile of a declared permanent grassland marked as ploughed. The BSI reaches zero at the end of March, indicating a ploughing event confirmed visually in Figure 6.
Figure 6. Parcel declared as permanent grassland and detected as ploughed with S2. The BSI reaches zero at the end of March, indicating a ploughing event, here confirmed visually.

[Figure 6 panels, t(BSI_t > 0): S2 acquisitions of 15-02-2017, 14-03-2017, 24-03-2017, 06-05-2017 and 26-05-2017.]
Figure 7. Maps showing the distribution of the non-marked (grey) and marked (red) declared grasslands with the different marker combinations: (a) S1 backscatter, (b) S1 coherence, (c) S2 BSI, (d) S1 AND S2 and (e) S1 OR S2.
Figure 8. Examples of geo-tagged street-level acquisitions taken every second driving from south to north. Thanks to the camera’s wide angle, the parcel is observed on several geo-tagged images (e.g., (a–c)) from different points of view, thus permitting one to deduce the crop type.
Table 1. Satellite markers and their combination. S1 AND S2 corresponds to the logical AND, where the parcel should be marked by S1 (either S1 backscatter or S1 coherence) and S2 at the same time. S1 OR S2 corresponds to the logical OR, where the parcel should be marked by at least one of the three markers, i.e., S1 backscatter or S1 coherence or S2 BSI.

Marker | S1 Backscatter | S1 Coherence | S2 BSI
S1 backscatter | x | |
S1 coherence | | x |
S2 BSI | | | x
S1 AND S2 | x | x | x
 | | x | x
 | x | | x
S1 OR S2 | x | |
 | | x |
 | | | x
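The marker combinations in Table 1 reduce to simple per-parcel Boolean logic. A minimal sketch (function and key names are illustrative, not from the paper's code):

```python
def combine_markers(s1_backscatter, s1_coherence, s2_bsi):
    """Combine the three per-parcel markers as defined in Table 1.
    S1 AND S2: at least one S1 marker together with the S2 BSI marker.
    S1 OR S2: any of the three markers."""
    s1 = s1_backscatter or s1_coherence
    return {
        "S1_AND_S2": s1 and s2_bsi,
        "S1_OR_S2": s1 or s2_bsi,
    }

# A parcel marked by S1 coherence only is flagged by the OR combination
# but not by the AND combination:
flags = combine_markers(False, True, False)
```

Applied over all declared grassland parcels, this logic yields the marked counts reported in Table 5.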
Table 2. Mathematical formulation of the error matrix.

 | j = Columns (Reference) | Row Total
i = Rows (Classification) | 1 | 2 | … | k | n_{i+}
1 | n_{11} | n_{12} | … | n_{1k} | n_{1+}
2 | n_{21} | n_{22} | … | n_{2k} | n_{2+}
⋮ | | | | |
k | n_{k1} | n_{k2} | … | n_{kk} | n_{k+}
Column Total (n_{+j}) | n_{+1} | n_{+2} | … | n_{+k} | n
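From the error matrix of Table 2, per-class accuracies follow as the diagonal element divided by the row total (n_ii/n_{i+}) and by the column total (n_ii/n_{+i}), with the per-class F-score as their harmonic mean. A sketch, assuming the matrix is held as plain nested lists, and checked below against the Table 7 values:

```python
def per_class_scores(matrix):
    """Return, for each class of a square error matrix (rows =
    classification, columns = reference, as in Table 2), the two
    per-class accuracies n_ii/n_{i+} and n_ii/n_{+i}, plus their
    harmonic mean (F-score)."""
    k = len(matrix)
    scores = []
    for i in range(k):
        row_total = sum(matrix[i])
        col_total = sum(matrix[r][i] for r in range(k))
        diag = matrix[i][i]
        a = diag / row_total if row_total else 0.0
        b = diag / col_total if col_total else 0.0
        f = 2 * a * b / (a + b) if (a + b) else 0.0
        scores.append((a, b, f))
    return scores

# Error matrix of Table 7 (class order: CER, GRA, MAI, OTH, POT):
m = [[1, 0, 1, 2, 0],
     [0, 130, 9, 5, 0],
     [0, 0, 78, 0, 0],
     [0, 1, 0, 2, 0],
     [0, 0, 0, 0, 2]]
scores = per_class_scores(m)
# Rounded F-scores: CER 0.40, GRA 0.95, MAI 0.94, matching Table 7.
```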
Table 3. Markers’ assessment through True Positive (TP), False Positive (FP), False Negative (FN) and True Negative (TN) proportions.

Markers \ Ground or Street-Level | Grassland | Not Grassland
grassland | TP | FP
not grassland | FN | TN
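The metrics reported in Tables 11 and 13 derive from the TP/FP/FN/TN counts of Table 3 through the standard definitions. A minimal sketch, verified below against the S1 AND S2 row of Table 11 (TP = 109, FP = 2, FN = 21, TN = 12):

```python
def marker_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, precision, accuracy and F-score
    computed from the four cells of Table 3."""
    sensitivity = tp / (tp + fn)          # recall of the positive class
    specificity = tn / (tn + fp)          # recall of the negative class
    precision = tp / (tp + fp)            # reliability of a positive flag
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, accuracy, f_score

# S1 AND S2 row of Table 11:
sens, spec, prec, acc, f = marker_metrics(109, 2, 21, 12)
# Rounded: 0.84, 0.86, 0.98, 0.84, 0.90
```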
Table 4. Parcel area, distribution per class for the study area and sample used for classification training. GRA, Grassland; MAI, Maize; CER, Cereals; POT, Potatoes; OTH, Other.

Class | Area (Ha) | Area (%) | Parcel Count (N) | Parcel Count (%) | Training Sample (N per Class) | Training Sample (% per Class)
GRA | 18,629 | 68.90 | 11,773 | 76.47 | 300 | 2.55
MAI | 5071 | 18.75 | 2637 | 17.13 | 300 | 11.38
CER | 451 | 1.67 | 198 | 1.29 | 109 | 55.05
POT | 371 | 1.37 | 116 | 0.75 | 0 | 0.00
OTH | 2517 | 9.31 | 671 | 4.36 | 0 | 0.00
TOTAL | 27,039 | 100.00 | 15,395 | 100.00 | 709 | 4.6
Table 5. Number and distribution of declared grassland parcels marked with S1 AND S2, S1 OR S2.

Markers | Marked N (Percent)
S1 backscatter | 1002 (8.51%)
S1 coherence | 345 (2.93%)
S2 BSI | 1064 (9.03%)
S1 AND S2 | 303 (2.57%)
S1 OR S2 | 2015 (17.12%)
Table 6. Fields adjacent to the survey route using a 35-m buffer and summarized by class.

Class | Street-Level Observation (N Fields) | Percentage of the Total (%)
CER | 11 | 0.78
GRA | 1157 | 82.00
MAI | 192 | 13.61
OTH | 48 | 3.40
POT | 3 | 0.21
TOTAL | 1411 | 100
Table 7. Confusion matrix of the BRP2017 parcels with the ground-truth collected during the field survey along with the per-class user accuracy, producer accuracy and F-score.

 | Ground-Truth | | | | | | Metrics | |
 | CER | GRA | MAI | OTH | POT | TOTAL | UA | PA | F-Score
CER | 1 | 0 | 1 | 2 | 0 | 4 | 1.00 | 0.25 | 0.40
GRA | 0 | 130 | 9 | 5 | 0 | 144 | 0.99 | 0.90 | 0.95
MAI | 0 | 0 | 78 | 0 | 0 | 78 | 0.89 | 1.00 | 0.94
OTH | 0 | 1 | 0 | 2 | 0 | 3 | 0.22 | 0.67 | 0.33
POT | 0 | 0 | 0 | 0 | 2 | 2 | 1.00 | 1.00 | 1.00
TOTAL | 1 | 131 | 88 | 9 | 2 | 231 | - | - | -
Table 8. Suitability of the collected street-level imagery for the photo-interpretation for each of the interpreters. Either the picture automatically selected is suitable for photo-interpretation, or the interpreter needs to select another picture of the same parcel, or the parcel is not visible from any of the collected pictures.

Interpreter | Picture Suitable (N) | (%) | Need Other Picture (N) | (%) | Not Visible (N) | (%)
1 | 178 | 83 | 32 | 15 | 4 | 2
2 | 207 | 97 | 2 | 1 | 5 | 2
3 | 149 | 70 | 50 | 23 | 15 | 7
Table 9. Photo-interpretation of the street-level imagery compared to the ground-truth for the three photo-interpreters.

Interpreter | 1 | 2 | 3
Overall accuracy (%) | 92.06 | 92.06 | 88.79
Kappa (%) | 84.94 | 84.85 | 79.59
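The overall accuracy and Cohen's kappa reported in Table 9 are the standard agreement statistics computed from each interpreter's confusion matrix. A minimal sketch with a hypothetical 2-class matrix (the actual per-interpreter matrices are not reproduced here):

```python
def overall_accuracy_and_kappa(matrix):
    """Overall accuracy (proportion of agreement, po) and Cohen's kappa,
    which discounts the chance agreement pe expected from the row and
    column marginals of a square confusion matrix."""
    k = len(matrix)
    n = sum(sum(row) for row in matrix)
    po = sum(matrix[i][i] for i in range(k)) / n
    pe = sum(sum(matrix[i]) * sum(matrix[r][i] for r in range(k))
             for i in range(k)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return po, kappa

# Hypothetical 2-class example:
po, kappa = overall_accuracy_and_kappa([[40, 10], [5, 45]])
# po = 0.85, kappa = 0.70
```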
Table 10. Photo-interpretation performance compared to the ground-truth for the 2 main classes.

 | Interpreter 1 | | | Interpreter 2 | | | Interpreter 3 | |
Class | UA | PA | F-Score | UA | PA | F-Score | UA | PA | F-Score
GRA | 0.97 | 0.94 | 0.95 | 0.97 | 0.94 | 0.96 | 0.94 | 0.93 | 0.94
MAI | 0.94 | 0.96 | 0.95 | 0.94 | 0.94 | 0.94 | 0.90 | 0.97 | 0.93
Table 11. Marked declared grassland fields with S1 AND S2, S1 OR S2 among the surveyed parcels (TP: True Positive, FP: False Positive, FN: False Negative, TN: True Negative).

Markers | TP | FP | FN | TN | Sensitivity | Specificity | Precision | Accuracy | F-Score
S1 backscatter | 90 | 2 | 40 | 12 | 0.69 | 0.86 | 0.98 | 0.71 | 0.81
S1 coherence | 91 | 7 | 39 | 7 | 0.70 | 0.50 | 0.93 | 0.68 | 0.80
S2 BSI | 72 | 2 | 58 | 12 | 0.55 | 0.86 | 0.97 | 0.58 | 0.71
S1 AND S2 | 109 | 2 | 21 | 12 | 0.84 | 0.86 | 0.98 | 0.84 | 0.90
S1 OR S2 | 20 | 1 | 110 | 13 | 0.15 | 0.93 | 0.95 | 0.23 | 0.26
Table 12. Distribution of the street-level imagery samples tagged to assess the markers. The same number of parcels was sampled in the marked and non-marked categories.

Marker | Marked | Marked with Street-Level | Marked with Street-Level Where Tagging Was Possible | Tagged Marked and Non-Marked Street-Level Samples
S1 backscatter | 1002 | 104 | 92 | 184
S1 coherence | 345 | 51 | 49 | 98
S2 BSI | 1064 | 109 | 95 | 190
S1 AND S2 | 303 | 34 | 32 | 64
S1 OR S2 | 2015 | 219 | 193 | 386
Table 13. Markers’ assessment through street-level pictures and the corresponding metrics: TP, FP, FN, TN, sensitivity, specificity, precision, accuracy and F-score (TP: True Positive, FP: False Positive, FN: False Negative, TN: True Negative).

Markers | TP | FP | FN | TN | Sensitivity | Specificity | Precision | Accuracy | F-Score
S1 backscatter | 92 | 0 | 82 | 10 | 0.53 | 1.00 | 1.00 | 0.55 | 0.69
S1 coherence | 47 | 2 | 44 | 5 | 0.52 | 0.71 | 0.96 | 0.53 | 0.67
S2 BSI | 95 | 0 | 84 | 11 | 0.53 | 1.00 | 1.00 | 0.56 | 0.69
S1 AND S2 | 32 | 0 | 23 | 9 | 0.58 | 1.00 | 1.00 | 0.64 | 0.74
S1 OR S2 | 193 | 0 | 180 | 13 | 0.52 | 1.00 | 1.00 | 0.53 | 0.68

Share and Cite

MDPI and ACS Style

D’Andrimont, R.; Lemoine, G.; Van der Velde, M. Targeted Grassland Monitoring at Parcel Level Using Sentinels, Street-Level Images and Field Observations. Remote Sens. 2018, 10, 1300. https://doi.org/10.3390/rs10081300
