Article
Peer-Review Record

Use of Sentinel-2 Derived Vegetation Indices for Estimating fPAR in Olive Groves

Agronomy 2022, 12(7), 1540; https://doi.org/10.3390/agronomy12071540
by Luisa Leolini 1, Marco Moriondo 2,*, Riccardo Rossi 1, Edoardo Bellini 1, Lorenzo Brilli 2, Álvaro López-Bernal 3, Joao A. Santos 4, Helder Fraga 4, Marco Bindi 1, Camilla Dibari 1 and Sergi Costafreda-Aumedes 1,2
Reviewer 1: Anonymous
Submission received: 24 May 2022 / Revised: 17 June 2022 / Accepted: 23 June 2022 / Published: 27 June 2022
(This article belongs to the Special Issue Use of Satellite Imagery in Agriculture)

Round 1

Reviewer 1 Report

"Sentinel-2 images (2A product-surface reflectance) at a pixel resolution of 10 m were collected for each site during the whole period of the field campaign. These images were downloaded from the official Copernicus Open Access Hub (https://scihub.copernicus.eu). The cloud-free Sentinel-2 images (Red Green Blue, RGB, and Near InfraRed, NIR) were acquired with a minimum discard of 0 days to a maximum of 9 days between field data collection and image acquisition. The images were then processed in the R software environment (version 4.1.2) for VIs extraction and fPAR simulation"

---------

The satellite data collection and analysis part needs work: the field sampling period is 2018-2021, so how were the Sentinel-2 datasets aggregated to fit this time frame? And how did the cloud-removal algorithm work?

 

 

Author Response

Responses to Reviewers

“Use of Sentinel-2 derived vegetation indices for estimating fPAR in olive groves”

agronomy-1762529

All Reviewer comments have been taken into account and answered in this document (in blue).

The manuscript has now been revised by a native English speaker.

Reviewer #1:

"Sentinel-2 images (2A product-surface reflectance) at a pixel resolution of 10 m were collected for each site during the whole period of the field campaign. These images were downloaded from the official Copernicus Open Access Hub (https://scihub.copernicus.eu). The cloud-free Sentinel-2 images (Red Green Blue, RGB, and Near InfraRed, NIR) were acquired with a minimum discard of 0 days to a maximum of 9 days between field data collection and image acquisition. The images were then processed in the R software environment (version 4.1.2) for VIs extraction and fPAR simulation"

 

The satellite data collection and analysis part needs work: the field sampling period is 2018-2021, so how were the Sentinel-2 datasets aggregated to fit this time frame? And how did the cloud-removal algorithm work?

 

We thank the reviewer for the comment. The satellite images were acquired over the period 2018-2021 to cover the fPAR sampling periods in the olive groves. During this period, all cloud-free Sentinel-2 images were used to analyze the seasonal VI trends at each study site. However, only the cloud-free images acquired within 0 to 9 days of field data collection were used to compare VI-derived fPAR against fPAR ground observations. We have now clarified this aspect in the text (see new lines 148-156).
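The pairing rule described above (each field sampling date is matched to the nearest cloud-free acquisition, and the pair is discarded if the gap exceeds 9 days) can be sketched as follows; the dates and the helper name `nearest_acquisition` are illustrative, not taken from the study, and the actual processing was done in R:

```python
from datetime import date

def nearest_acquisition(field_date, acquisition_dates, max_gap=9):
    """Return the cloud-free acquisition closest to a field sampling
    date, or None if the gap exceeds max_gap days."""
    best = min(acquisition_dates, key=lambda d: abs((d - field_date).days))
    return best if abs((best - field_date).days) <= max_gap else None

# Illustrative dates (not from the study)
acquisitions = [date(2019, 6, 2), date(2019, 6, 12), date(2019, 7, 2)]
print(nearest_acquisition(date(2019, 6, 8), acquisitions))  # 4-day gap: kept
print(nearest_acquisition(date(2019, 8, 1), acquisitions))  # 30-day gap: None
```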

Regarding the removal of clouds and shadows from the Bottom of Atmosphere (BOA) reflectance product, we used the Scene Classification (SC) algorithm included in the Sentinel-2 Toolbox, which classifies the Sentinel-2 scene into classes covering clouds (four types), shadows (six types), saturated pixels, bare ground/deserts, vegetation, water and snow. The algorithm proceeds in several steps: it first detects clouds and snow, then cirrus clouds and shadows, and finally classifies the remaining land-cover macro-types of the scene (https://sentinels.copernicus.eu/web/sentinel/technical-guides/sentinel-2-msi/level-2a/algorithm).

In our case, we selected classes 4, 5, 6 and 7 and created a mask (1 = present, NULL/NA = not present) that was subsequently used to clip the BOA images.
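A minimal numpy sketch of this masking step, assuming the Scene Classification (SCL) band and one BOA band are already loaded as aligned arrays (the study's actual processing was done in R on the Sentinel-2 Toolbox output, so this is only an illustration of the logic):

```python
import numpy as np

def scl_mask(scl, keep_classes=(4, 5, 6, 7)):
    """Build a mask from the Scene Classification band: 1 where the
    pixel belongs to one of the kept classes, NaN elsewhere."""
    return np.where(np.isin(scl, keep_classes), 1.0, np.nan)

# Toy 2x3 example; 9 and 3 stand for cloud and shadow classes
scl = np.array([[4, 9, 5],
                [3, 6, 7]])
boa = np.array([[0.30, 0.25, 0.28],
                [0.10, 0.05, 0.33]])

# Multiplying by the mask keeps valid pixels and turns the rest to NaN
masked_boa = boa * scl_mask(scl)
```

Multiplying by a 1/NaN mask, rather than indexing, preserves the raster shape so the masked band still overlays the original scene.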

 

Reviewer #2:

  • The abstract could be made a little more informative by including results

We thank the reviewer for the suggestion. We have now revised this part of the abstract to include more details on the results (see new lines 37-40).

  • In the introduction, the authors should clearly state their targets

We thank the reviewer for the comment. We have now specified in more detail that the main objective of the manuscript is to propose a simple methodology for disentangling the vegetation index contributions of olive trees and grass cover within a Sentinel-2 pixel (10 m), in order to improve fPAR estimation and monitoring at the sub-field scale. We also indicated the potential contribution of this approach to precision agriculture and crop modelling (see new lines 105-109).

  • References must be checked again against the required format

The DOIs and metadata have been checked and updated in the reference manager. Additionally, we have corrected the ISTAT reference.

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Dear Editorial Manager

1- The abstract could be made a little more informative by including results

2- In the introduction, the authors should clearly state their targets

3- References must be checked again against the required format

Regards

 

Author Response

Please see the response document above, which addresses the comments of both Reviewers.
Author Response File: Author Response.pdf
