Article

Atmospheric Correction Inter-Comparison Exercise

1. SERCO SpA c/o European Space Agency ESA-ESRIN, Largo Galileo Galilei, 00044 Frascati, Italy
2. NASA/GSFC Code 619, Greenbelt, MD 20771, USA
3. Department of Geographical Sciences, University of Maryland, College Park, MD 20742, USA
4. European Space Agency ESA-ESRIN, Largo Galileo Galilei, 00044 Frascati, Italy
5. VITO, Boeretang 200, 2400 Mol, Belgium
6. Environmental Remote Sensing and Geoinformatics, Faculty of Regional and Environmental Sciences, Trier University, 54286 Trier, Germany
7. Centre d’études Spatiales de la Biosphère, CESBIO Unite mixte Université de Toulouse-CNES-CNRS-IRD, 18 Avenue E. Belin, 31401 Toulouse CEDEX 9, France
8. Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, Section Remote Sensing, Telegrafenberg, 14473 Potsdam, Germany
9. Brockmann Consult GmbH, Max-Planck-Straße 2, 21502 Geesthacht, Germany
10. National Earth and Marine Observation Branch, Geoscience Australia, GPO Box 378, Canberra, ACT 2601, Australia
11. Telespazio France, SSA Business Unit (Satellite Systems & Applications), 31023 Toulouse CEDEX 1, France
12. ACRI-ST, 260 Route du Pin Montard, BP 234, 06904 Sophia-Antipolis CEDEX, France
13. Science Systems and Applications, Inc., 10210 Greenbelt Road, Suite 600, Lanham, MD 20706, USA
14. German Aerospace Center (DLR), Remote Sensing Technology Institute, Photogrammetry and Image Analysis, Rutherfordstraße 2, 12489 Berlin-Adlershof, Germany
15. Royal Belgian Institute for Natural Sciences (RBINS), Operational Directorate Natural Environment, 100 Gulledelle, 1200 Brussels, Belgium
* Authors to whom correspondence should be addressed.
Present address: Geomatics Lab, Geography Department, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany.
Remote Sens. 2018, 10(2), 352; https://doi.org/10.3390/rs10020352
Submission received: 24 January 2018 / Revised: 16 February 2018 / Accepted: 20 February 2018 / Published: 24 February 2018
(This article belongs to the Special Issue Atmospheric Correction of Remote Sensing Data)

Abstract

The Atmospheric Correction Inter-comparison eXercise (ACIX) is an international initiative that aims to analyse the Surface Reflectance (SR) products of various state-of-the-art atmospheric correction (AC) processors. Aerosol Optical Thickness (AOT) and Water Vapour (WV) are also examined in ACIX as additional outputs of AC processing. In this paper, the general ACIX framework is discussed; special mention is made of the motivation for initiating the experiment, the inter-comparison protocol, and the principal results. ACIX is free and open, and every developer was welcome to participate. Eventually, 12 participants applied their approaches to various Landsat-8 and Sentinel-2 image datasets acquired over sites around the world. The results vary with sensor, product, and site, revealing the strengths and weaknesses of the processors. Indeed, this first implementation of processor inter-comparison proved to be a valuable lesson, helping the developers to learn the advantages and limitations of their approaches. Various algorithm improvements are expected, if not already implemented, and the enhanced performances are yet to be assessed in future ACIX experiments.

Graphical Abstract

1. Introduction

Today, free and open data policies allow access to a large amount of remote sensing data, which, together with advanced cloud computing services, significantly facilitates the analysis of long time series. As the correction of atmospheric impacts on optical observations is a fundamental pre-analysis step for any quantitative analysis [1,2,3], operational processing chains towards accurate and consistent Surface Reflectance (SR) products have become essential. To this end, several entities have already started to generate, or plan to generate in the short term, SR products at a global scale for the Landsat-8 (L-8) and Sentinel-2 (S-2) missions.
A number of L-8 and S-2 atmospheric correction (AC) methodologies are already available and widely used in various applications [3,4,5,6,7,8,9]. In certain cases, users validate the performance of different AC processors in order to select the most suitable one for their area of interest [10]. Moreover, some studies have already been conducted on the validation of SR products derived from specific processors at larger scales [11,12,13]. So far, though, there has been no complete inter-comparison analysis of the current advanced approaches. Therefore, in the international framework of the Committee on Earth Observation Satellites (CEOS), the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA) initiated the Atmospheric Correction Inter-comparison Exercise (ACIX) to explore the different aspects of every AC processor and the quality of the SR products.
ACIX is an international collaborative initiative to inter-compare a set of AC processors for L-8 and S-2 imagery over a selected sample of sites. The exercise aimed to contribute to a better understanding of the different uncertainty components and to the improvement of the AC processors’ performance. In order to obtain an accurate SR product, ready to use for land or water applications, two main steps are required: first, the detection of cloud and cloud shadow and then, the correction for atmospheric effects. Although both parts of the process have equal importance, ACIX only concentrated on the atmospheric correction in this first experiment.
This paper describes in detail the protocol defined for the implementation of the exercise, the results for both L-8 and S-2 datasets, and the experience gained through this study. In particular, details are given for the input data, sites, and metrics involved in the inter-comparison analysis and its outcomes are presented per sensor and product. For brevity, the analysis performed per test site is not presented in this paper, but all the results can be found on the ACIX web site hosted on the CEOS Cal/Val portal (http://calvalportal.ceos.org/projects/acix).

2. ACIX Protocol

The ACIX protocol was designed to include some typical experimental cases over diverse land cover types and atmospheric conditions, which were considered suitable to fulfil the purposes of the exercise. In particular, the ACIX sites were selected based on the locations of the international Aerosol Robotic Network (AERONET). The network provides a reliable, globally representative and consistent dataset of atmospheric variables that allows for the validation of the performance of an AC processor using common metrics [14,15]. Since there were no other global networks mature enough or with similar global representation, the AERONET in-situ measurements were considered to be the ground truth in ACIX. The inter-comparison analyses were conducted separately for Aerosol Optical Thickness (AOT), Water Vapour (WV), and Surface Reflectance (SR) products.
The organizers, together with the participants, prepared the protocol after having discussed and agreed on all the major points, i.e., sites, input data, results’ specifications, etc. The protocol was drafted at the 1st ACIX workshop (21–22 June 2016, USA), taking into account most of the recommendations that were feasible in this first implementation. ACIX was conceived as a free and open exercise in which any developer team of an AC algorithm could participate. The list of processors and the corresponding participants’ names and affiliations are presented in Table 1. For various reasons, e.g., time constraints, processor tuning, processor limitations, etc., some participants applied their AC algorithms only to certain L-8 and/or S-2 imagery of the available dataset (Table 1). iCOR, CorA, and GFZ-AC in Table 1 are the current acronyms for OPERA, Brockmann, and SCAPE-M, respectively. The new names were defined after the end of the exercise, when the plots had already been created, so both current and former names may appear in this manuscript. The MACCS processor has recently been renamed MAJA, but it will appear here under its former name, as MAJA corresponds to a newer version.

2.1. ACIX Sites and Datasets

The inter-comparison analysis was made over 19 AERONET sites around the world, as agreed unanimously by the ACIX organizers and participants (Table 2). The sites were used for L-8 and S-2 datasets and covered various climatological zones and land cover types. Although ACIX was only initiated to inter-compare the performance of AC processors over land, five coastal and inland water sites were included in the analysis, in order to examine the performance over diverse sites. The availability of AERONET measurements for the study time period was a critical parameter during the selection phase.

Input Data and Processing Specifications

Time series over a period of one year were available for L-8 OLI, while for S-2 MSI, the time series covered a seven-month period, from the start of Level-1 data provision (December 2015) to the beginning of ACIX (June 2016). Thus, the S-2 MSI time series only covered the winter half-year in the Northern Hemisphere. In addition, Level-1C products were not provided at the nominal S-2 revisit time (10 days at the equator with one satellite) until the end of March, when S-2A became steadily operational. Therefore, the imagery was not provided at regular time intervals, sometimes with only one or two available observations per month. This was a hindrance for processors based on a multi-temporal method, i.e., MACCS, which therefore could not operate in their optimal configuration. In total, around 120 L-8 and 90 S-2 scenes with coincident AERONET measurements were available.
The L-8 data were in GeoTIFF data format, as provided by USGS, including Bands 1-7 and 9 of OLI and the two thermal bands of TIRS. The metadata file *MTL.txt was also available. The S-2 data were in JPEG2000 data format, as provided by ESA, including the 13 bands of S-2 data in all the corresponding spatial resolutions (10 m, 20 m, 60 m). However, after ACIX processor runs were completed, a new version of the S-2A spectral response functions was released by ESA in December 2017 with a particular impact on the responses of bands B01, B02, and B08. The greatest central wavelength difference was 4 nm for B02, and this difference could be translated into a change of about 4% in the atmospheric molecular scattering reflectance. The corresponding S-2 metadata file ‘scenename.xml’ was also available.
Considering the diversity of the corrections involved in the ACIX approaches, a twofold implementation was proposed, in order to obtain more stable and consistent AC and inter-comparison results amongst all the processors. The first implementation was mandatory and included the correction of Rayleigh scattering effects, aerosol scattering, and atmospheric gases. The correction of adjacency effects was only involved if it could not be omitted from the processing chain. The second implementation was optional, allowing the participants to apply the full processing chain of their processors. In this case, the approach could involve any corrections considered necessary by the participant/developer (adjacency effects, bidirectional reflectance distribution function, terrain correction, etc.). For all the experimental scenarios, the participants were encouraged to additionally submit quality flags at the pixel level, indicating the quality-assured pixels to be included in the analysis.

2.2. Inter-Comparison Analysis

The inter-comparison analysis was performed separately for all the products, i.e., AOT, WV, and SR, and on image subsets of 9 km × 9 km centered on the AERONET sunphotometer station of every site. The subset size was selected so as to cover a whole number of pixels at the L-8 and S-2 spatial resolutions, i.e., 30 m, 60 m, 20 m, and 10 m, accordingly. However, this subset size did not allow any significant difference related to adjacency effect correction or terrain correction to be perceived. The quality masks submitted by the participants were combined either all together or in subsets, and only the common pixels flagged as ‘good quality pixels’ were considered in the analysis. The pixel categories excluded from the inter-comparison of each product are described in the respective section.
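The intersection of the participants’ quality masks can be sketched as follows; the array shapes and mask semantics here are illustrative assumptions, not the actual ACIX file formats:

```python
import numpy as np

def common_good_pixels(masks):
    """Intersect boolean 'good quality' masks from several processors:
    a pixel enters the inter-comparison only if every mask flags it good."""
    combined = np.ones_like(masks[0], dtype=bool)
    for m in masks:
        combined &= m
    return combined

# Three hypothetical 2 x 2 processor masks over the same subset
m1 = np.array([[True, True], [False, True]])
m2 = np.array([[True, False], [True, True]])
m3 = np.array([[True, True], [True, True]])
combined = common_good_pixels([m1, m2, m3])
print(int(combined.sum()))  # number of commonly 'good' pixels
```

The same logical AND generalizes to combining the masks "in subsets", by passing only the masks of the processors being compared.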

2.2.1. Aerosol Optical Thickness (AOT) and Water Vapour (WV)

The AOT values estimated by the ACIX processors were compared to Level 1.5 (cloud-screened) AERONET observations of spectral aerosol optical depth. In this case, the common quality pixels approved by all the participants were combined into a single quality mask. The analysis was performed at λ = 550 nm, since for both L-8 and S-2, the AOT is estimated and reported in the products at this wavelength. The AERONET AOT values were interpolated correspondingly using the Angstrom exponent. Due to large AOT variations in time, only the AERONET measurements within a ±15 min time difference from the satellite overpass (L-8/S-2) were considered, including all ranges of AOT. The inter-comparison analysis was implemented per date, site, and method, and also included a time series analysis of the submitted AOT values against AERONET measurements.
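The spectral interpolation of AERONET AOT to 550 nm follows the Angstrom power law; a minimal sketch, where the 500 nm reference channel and the numeric values are illustrative assumptions:

```python
def aot_at_550(aot_ref, wl_ref_nm, angstrom):
    """Interpolate AOT to 550 nm with the Angstrom power law:
    tau(lambda) = tau(lambda_ref) * (lambda / lambda_ref) ** (-alpha)."""
    return aot_ref * (550.0 / wl_ref_nm) ** (-angstrom)

# Example: AOT of 0.20 measured at 500 nm with an Angstrom exponent of 1.3
aot550 = aot_at_550(0.20, 500.0, 1.3)
```

A larger Angstrom exponent (finer aerosol) makes the AOT fall off faster with wavelength, so the interpolated value at 550 nm is below the 500 nm measurement.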
The WV values could only be estimated from S-2 observations. The S-2 MSI instrument has spectral band B09 located in the WV absorption region (central wavelength: 945 nm) and is thus suitable for WV retrieval, whereas L-8 OLI lacks this feature. The inter-comparison approach for WV was similar to the one implemented for the AOT analysis.

2.2.2. Surface Reflectance (SR)

Inter-Comparison of the Retrieved SRs

The inter-comparison of the SRs was initially achieved by plotting the averaged values over the subset test area per date, band, and AC approach. The time series plots provided an indication of similarities and differences among the various approaches and atmospheric conditions of different dates and test sites.
An N × N distance matrix was also created, where N is the number of AC processors. The row and column headings refer to the names of the participating models. The elements of the matrix are the normalized distances between the resulting averaged SR values of the 9 km × 9 km subsets, considering only the pixels commonly classified as of “good quality” and averaging them over the available dates. The values on the main diagonal are all zero, and the off-diagonal values indicate the difference between the two compared AC processors. The distance matrix is symmetric, with d_ij = d_ji for every pair of processors i, j, and was calculated per test site (Table 3).
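The construction of such a matrix can be sketched as below; normalizing by the larger of the two mean values is an assumption for illustration, as the exact normalization used in ACIX is not specified here:

```python
import numpy as np

def sr_distance_matrix(mean_sr):
    """Build the symmetric N x N matrix of normalized distances between
    the per-processor averaged SR values (one scalar per processor here)."""
    mean_sr = np.asarray(mean_sr, dtype=float)
    n = mean_sr.size
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # normalized absolute difference (illustrative normalization)
                d[i, j] = abs(mean_sr[i] - mean_sr[j]) / max(mean_sr[i], mean_sr[j])
    return d

# Hypothetical mean SRs for three processors over one site
dm = sr_distance_matrix([0.10, 0.12, 0.11])
```

By construction the result has a zero main diagonal and satisfies d_ij = d_ji, matching the properties stated in the text.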

Comparison with AERONET Corrected Data

The SR products from L-8 OLI and S-2 MSI were compared to a reference SR dataset computed with the 6S radiative transfer (RT) code [22] and AERONET measurements. The AOT, aerosol model, and column water vapour were derived from the AERONET sunphotometer measurements and used in the RT model. In this way, the SRs were retrieved from the TOA reflectances of the Level-1C products [11,12,13]. A constant aerosol model (size distribution and refractive indices) was derived for each site using all good-quality almucantar inversions available and parameterized as a function of optical depth and Angstrom exponent, following an approach similar to that of Dubovik et al. [23]. The choice of 6S as the RT model for the computation of the “reference” SR could constitute a moderate, but not negligible, advantage for the AC codes that use this same model in their RT simulations, i.e., LaSRC. However, RT codes tend to agree well within the 1% level (except for those that do not account for polarization), as demonstrated in a previous benchmarking exercise on RT simulations [24,25]. Still, for a reflectance of 0.3, a 1% uncertainty in the transmission can result in an uncertainty of 0.003 in the surface reflectance, while a 1% uncertainty in the path radiance can add up to 0.001. These values are not negligible with regard to the results shown in the SR validation tables. Moreover, in this case, a subset of 9 km × 9 km around the AERONET station was analysed. The pixel-by-pixel comparison between each of the spectral bands and the corresponding AERONET surface reflectance data was performed for all the subsets. Only the pixels that were not labeled as clouds, cloud shadows, snow, water, or high aerosol loads were considered in this analysis. These quality-approved pixels were the result of the intersection of the Quality Assessment (QA) band estimated by LaSRC and each processor’s quality flags in every analysis case.
Therefore, the quality masks of the processors were not blended in this inter-comparison approach. In addition, in order to exclude water pixels from the analysis, specific thresholds were set on the Band 6 and Band 7 pixel values of OLI, as well as on Band 11 and Band 12 of the MSI instrument, accordingly. The residuals $\Delta\rho_{i,\lambda}^{SR}$ between the resulting SR of each processor participating in ACIX, $\rho_{i,\lambda}^{SR\,PROCESSOR}$, and the reference AERONET SR, $\rho_{i,\lambda}^{SR\,AERONET}$, were calculated for every pixel $i$, with $i$ ranging from 1 to $n_{\lambda}$, the total number of pixels per wavelength $\lambda$:

$$\Delta\rho_{i,\lambda}^{SR} = \rho_{i,\lambda}^{SR\,PROCESSOR} - \rho_{i,\lambda}^{SR\,AERONET}$$

The statistical metrics accuracy (A), precision (P), and uncertainty (U) [11,12,13] were then estimated as follows:

$$A = \frac{1}{n_{\lambda}}\sum_{i=1}^{n_{\lambda}} \Delta\rho_{i,\lambda}^{SR}$$

$$P = \sqrt{\frac{1}{n_{\lambda}-1}\sum_{i=1}^{n_{\lambda}} \left(\Delta\rho_{i,\lambda}^{SR} - A\right)^{2}}$$

$$U = \sqrt{\frac{1}{n_{\lambda}}\sum_{i=1}^{n_{\lambda}} \left(\Delta\rho_{i,\lambda}^{SR}\right)^{2}}$$
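The accuracy, precision, and uncertainty metrics defined above can be computed per band as in this sketch; the toy residual values are illustrative, standing for the per-pixel differences between a processor’s SR and the AERONET-based reference:

```python
import numpy as np

def apu(sr_processor, sr_reference):
    """Accuracy (mean bias), precision (dispersion of the residuals about
    the bias), and uncertainty (RMS of the residuals) for one band."""
    residuals = np.asarray(sr_processor, dtype=float) - np.asarray(sr_reference, dtype=float)
    n = residuals.size
    accuracy = residuals.mean()
    precision = np.sqrt(((residuals - accuracy) ** 2).sum() / (n - 1))
    uncertainty = np.sqrt((residuals ** 2).mean())
    return accuracy, precision, uncertainty

# Toy example: four processor SR pixels against a zero reference
a, p, u = apu([0.01, -0.01, 0.02, 0.0], [0.0, 0.0, 0.0, 0.0])
```

Note that U combines A and P: a processor can have excellent precision yet a large uncertainty if its SRs are systematically biased.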
Moreover, scatter plots between the submitted SR (y axis) against the AERONET SR (x axis) assisted in assessing the variability and bias in the corrected reflectance.

3. Overview of ACIX Results

Due to the large volume of data, only a representative and noteworthy part is highlighted in this section. In addition, as important differences were not observed between the mandatory and optional implementations at the resolutions of these analyses, only the optional implementations are presented here. However, larger differences might have been observed if the comparison had been performed at a higher spatial resolution. In the optional cases in particular, the participants could include in the AC processing chain all the corrections they considered essential. The extensive presentation of all the results can be found on the ACIX web site (http://calvalportal.ceos.org/projects/acix). The results are presented by sensor, case study, and inter-comparison category.

3.1. Landsat-8

In total, four AC processors, i.e., ATCOR, FORCE, iCOR, and LaSRC, were applied to most of the L-8 datasets. GA-PABT was only implemented on the Australian site (Canberra), while ACOLITE and SeaDAS were only applied to the coastal sites. Because of the small sample size in these cases, the interpretation of the inter-comparison results may be biased; therefore, the corresponding analyses are not presented in this paper. These results, however, can be found on the ACIX web site. The data involved in the AOT analysis were filtered to exclude (i) cloud; (ii) cloud shadow; (iii) pixels adjacent to cloud; (iv) cirrus cloud; (v) no-data values; and (vi) interpolated values. All the quality masks provided by the participants were taken into consideration, and a combined mask was used to exclude the unwanted pixels.

3.1.1. Aerosol Optical Thickness

Figure 1 shows the scatterplots of L-8-derived AOT compared to AERONET measurements. The plots are presented per processor, including all the dates over all the sites as they were provided, correspondingly. The dot-dashed line is the 1:1 line of the two compared value sets. If the AOT estimates were correct, all points would fall on the 1:1 line. This agreement is observed for most of the points in the case of iCOR, showing that the processor performed well regardless of the diversity of land cover types and aerosol conditions. The arid areas seemed to be the main problem for the processors that employ the dark dense vegetation (DDV) method and/or estimate the AOT over dark water pixels, namely ATCOR and FORCE. Therefore, fixed AOT values were set for Banizoumbou, Capo Verde, and Sede Boker in these cases. The greatest discrepancies, though, were detected for high aerosol values, where the AOT was mostly underestimated except for LaSRC, which performed well. However, LaSRC did not manage to accurately estimate the AOT over coastal scenes; this was expected, since the processor was only suitable for AOT retrievals over land areas at the time of the ACIX implementation. Similar results were observed for LaSRC over Davos, where the snow cover yielded overestimations. For the rest of the experimental cases, all the processors assessed the AOT quite accurately.
The statistical analysis of AOT retrieved values compared to the reference AERONET measurements was performed for 9 km × 9 km subsets. Table 4 summarizes the statistics over all the sites per AC processor. Among all AC processors, iCOR has the lowest root mean square (RMS) value, showing the overall good agreement between AOT estimates and reference AERONET measurements over diverse land cover types and atmospheric conditions. The rest of the processors produced results with quite similar error values.

3.1.2. Surface Reflectance Products

The surface reflectance products obtained by the processors for each of the seven OLI bands were compared pixel-by-pixel with the corresponding reference dataset (Section 2.2.2). Only pixels that were characterized as land and clear, and were not labeled as snow, water, high aerosol, or shadow by the participants’ quality masks, were considered in this comparison. Figure 2 shows the L-8-derived SR for OLI Band 4 (Red, 0.64–0.67 µm); one plot per processor is presented. The results include the retrieved SR values of all the sites and dates submitted in every case. The accuracy, precision, and uncertainty (APU) (Section 2.2.2) were calculated and displayed on the plots, together with the theoretical error budget for Landsat SR (0.005 + 0.05 × ρ), where ρ is the surface reflectance magnitude [1]. More detailed inter-comparison results, including all bands and the analysis per test site, are available on the ACIX web site. In general, when the APU lines fall below the specification line (magenta), the results are considered good on average.
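The theoretical error budget can be applied as a simple pass/fail check on a band’s uncertainty; a minimal sketch, with illustrative numeric values:

```python
def within_landsat_sr_spec(uncertainty, reflectance):
    """Compare a band's uncertainty to the theoretical Landsat SR error
    budget 0.005 + 0.05 * rho, where rho is the surface reflectance [1]."""
    return uncertainty <= 0.005 + 0.05 * reflectance

# At rho = 0.3 the budget allows an uncertainty of up to 0.02
ok = within_landsat_sr_spec(0.015, 0.3)
```

Because the budget grows with ρ, an uncertainty of 0.015 passes at ρ = 0.3 but would fail at ρ = 0.1, where the allowance is only 0.01.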
As can be observed in Figure 2, the uncertainty does not exceed the specification for most of the points involved in the comparison with the reference. In particular, ATCOR and LaSRC perform better, with low APU scores. The overall results of the APU analysis are summarized in Table 5, indicating that ATCOR, FORCE, and LaSRC provide accurate and robust SR estimates for all the cases. iCOR differs slightly from the first three processors, apart from Band 5, for which the highest discrepancy was observed. The number of points involved in the analysis varies among the processors, as the number of submitted results also varied accordingly. The best uncertainty values per band are underlined in Table 5.

3.2. Sentinel-2

Eight processors, i.e., CorA, FORCE, iCOR, LaSRC, MACCS, S2-AC2020, GFZ-AC, and Sen2Cor, provided results for Sentinel-2 datasets over most of the sites (Table 1). Similar to the L-8 case, GA-PABT was only implemented on the Australian site (Canberra), while ACOLITE only on the water/coastal sites, and their results are not included in this paper. iCOR did not deliver any WV products, while LAC provided only SR products. It is worth noting at this point that overall, the AC codes involved in ACIX were in their early validation stage for S-2 data. In addition, the S-2A spectral response functions were inaccurately known at that time, mainly having an effect on bands B01 and B02.

3.2.1. Aerosol Optical Thickness

Figure 3 shows the scatterplots of the AOT estimates derived from the S-2 datasets by each processor versus the corresponding AERONET measurements. The dot-dashed line refers to exact agreement between the two sets of data, meaning that the further the points fall from the line, the greater the differences between the two datasets. FORCE managed to retrieve the AOT with good accuracy for most of the sites, apart from a few dates/points at Banizoumbou and Sede_Boker. The arid sites were found to be challenging for all the AC processors, mainly because of the absence of DDV pixels. iCOR set AOT values to zero in some of these cases, while other processors, i.e., CorA, FORCE, S2-AC2020, and Sen2Cor, set default values. MACCS also encountered difficulties in dealing with the high SR values of arid sites and retrieving the AOT correctly. However, these areas were the ones mostly processed, due to the data availability and the multi-temporal constraints of the processor. They therefore made up the majority of a rather small sample, which can probably partly explain its overall poor performance. GFZ-AC included no image-based AOT retrieval method, but extracted the AOT from ECMWF forecast data. The large grid cell size could be a reason for the discrepancies observed in this case. For the rest of the experimental cases, the AC processors in general produced good AOT results. However, some individual instances per processor were observed with large discrepancies between the AOT estimates and AERONET measurements.
The statistics of AOT estimates from S-2 observations are presented in Table 6. LaSRC achieved overall the best agreement between the AOT estimates and the reference AERONET measurements, as indicated by the low RMS value. For S2-AC2020, Sen2Cor, iCOR, CorA, and FORCE, a similar, good performance was observed over all land cover and aerosol types.

3.2.2. Water Vapour (WV)

Water Vapour was an additional product derived from S-2 observations. Seven processors included the WV estimation in their approaches, i.e., CorA, FORCE, LaSRC, MACCS, S2-AC2020, GFZ-AC, and Sen2Cor. The inter-comparison analysis was similar to the one implemented to inter-compare the AOT values. It should be noted that the pixels labeled as ‘Water’ in the participants’ quality masks were excluded from the analysis of WV retrievals.
Figure 4 shows the plots of WV estimates compared to AERONET WV measurements. The points in the plots correspond to retrieved values over the sites and dates provided by each processor. Overall, the processors succeeded in retrieving the WV more accurately than the AOT, as most of the points fall close to the 1:1 line. In particular, a very good agreement is observed for the MACCS estimates, although in this case, the results provided were limited to specific sites. A bias was observed for GFZ-AC, leading to an overestimation of the WV retrievals across most of the cases. The rest of the processors performed very well overall, apart from a few exceptions over arid and equatorial forest sites.
Table 7 summarizes the results of the statistical analysis of WV estimates from S-2 observations. In agreement with the plots of Figure 4, overall, the processors managed to quantify the WV accurately. However, large differences were observed between the mean and max values, attesting to the existence of some outliers, which degrade the statistical performance for the majority of the processors and increase the RMS values. Moreover, except for S-2 bands 9 and 10, the absorption by WV is usually below 5%, and the performances observed for WV should therefore translate into negligible noise added to the surface reflectances.

3.2.3. Surface Reflectance Products

The surface reflectance products for the S-2 MSI bands were compared on a pixel basis with the reference SRs (Section 2.2.2). Similar to the analysis of the L-8 OLI SRs, the pixels involved were labeled as land and clear, and were filtered from snow, water, high aerosol, and shadow based on the participants’ quality masks. As has already been mentioned, the APU analysis of all the SR values obtained by every processor and for every site is available on the ACIX web site. Band 9 (Water vapour) and Band 10 (SWIR–Cirrus) were excluded from this analysis because they are not intended for land applications. Figure 5 shows a representative example of the APU outcomes for MSI Band 4 (Red, central wavelength 665 nm) for all the datasets processed by each processor. The overall analysis of the plots shows that FORCE, LaSRC, MACCS, and Sen2Cor managed to estimate the SRs quite well over all the sites. The good performance is confirmed by the low values of accuracy (A), proving that the SR products are not biased. In addition, the U curves fall below the line of specified uncertainty, showing that these processors met the requirement of the theoretical SR reference [1].
The results of the APU analysis over all bands and sites are summarized in Table 8, indicating that LaSRC, FORCE, and MACCS provided accurate and robust SR estimates for all the cases. However, MACCS provided SRs only over specific sites, due to the basic requirement of the underlying multi-temporal algorithm [26] for regularly acquired scenes in order to perform optimally. As S-2 only started steadily providing images every 10 days in April 2016, the number of points involved in the MACCS APU analysis is approximately a third of the estimates provided by the rest of the participants. In addition, Sen2Cor managed to produce accurate results across all the visible bands, while higher discrepancies were observed for the infrared bands. As has already been mentioned, in this study the reference SRs were computed with the 6S RT code, which is the same as the one used in some of the AC codes. This can provide a modest but non-negligible advantage to these AC codes, e.g., LaSRC. The best uncertainty scores per band are underlined in Table 8.

4. Conclusions

ACIX was designed as an open and free initiative to compare AC codes applicable to either L-8 or S-2 imagery. Therefore, every developer of an AC algorithm was welcome to participate in the exercise. Indeed, in this first implementation of ACIX, several participants from different institutes, companies, and agencies around the world contributed by defining the inter-comparison protocol and processing a large volume of data. However, different factors, e.g., time constraints, processor tuning, processor limitations, etc., prevented the application of some AC algorithms to the whole L-8 and/or S-2 dataset. Due to this variability in the submitted results, it was not feasible to draw common conclusions across all the algorithms, but fortunately, these cases were not the majority. Being completed for the first time, ACIX has proven to be successful in (a) addressing the strengths and weaknesses of the processors over diverse land cover types and atmospheric conditions; (b) quantifying the discrepancies of the AOT and WV products compared to AERONET measurements; and (c) identifying the similarities among the processors by analysing and presenting all the results in the same manner.
The ACIX results are a unique source of information on the performance of notable AC processors, and will be made publicly available on the CEOS Cal/Val portal. Based on these outcomes, the user and scientific communities can be informed about the state-of-the-art approaches, including their highlights and shortcomings, across different sensors, products, and sites. It should be noted here that the developers were determined to participate in the exercise, even though the processors were not yet mature enough to handle different source data and land cover types. Considering the S-2 datasets, for instance, ACIX started only six months after the beginning of S-2 Level-1 data provision to all users. The research community was still inexperienced during that phase, and time and effort were needed to adapt the processors to the new data requirements. The discrepancies observed in the ACIX inter-comparison results have, in many cases, helped the developers to understand the performance of, and identify the flaws in, their algorithms. As a matter of fact, the participants have already modified and improved their processors and will have a chance to present the enhanced versions during the following ACIX implementation.
The continuation of the exercise has already been discussed and agreed upon, with some new implementation parameters proposed. More datasets need to be exploited in order to obtain firmer conclusions, so at least a one-year period of complete time series from L-8, S-2A, and S-2B will be employed. It is also important that all the participants apply their processors over all sites, in order to obtain an overall assessment of their relative performance. The sites will be redefined as well, and more representative cases of land cover and aerosol types will be included. Analyses of performance over aquatic sites (i.e., coastal and inland waters) and comparisons of cloud masks will also be considered for inclusion within the study, as they constitute a significant part of the performance and usability of an SR product.
Building on the experience of the first ACIX implementation, the inter-comparison strategy will be refined, complementing the current metrics with comparisons to other sources of measurements, e.g., RadCalNet, and with analyses at a higher spatial resolution (pixel scale) in order to allow testing of the adjacency effect correction. Criteria that assess the temporal consistency of time series would also give an indication of the noise affecting L2A time series, including the effects of undetected clouds or shadows. The next phase of ACIX is anticipated to involve more participants and to assess more datasets.
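Such a temporal-consistency criterion can be illustrated with a simple estimator: for a stable target, the standard deviation of successive differences of an SR time series, scaled by 1/√2, approximates the per-observation noise. The sketch below is only an illustration of this idea, not a metric defined by ACIX; the function name is ours, and the scaling assumes uncorrelated noise over a slowly varying surface.

```python
import numpy as np

def temporal_noise(sr_series):
    """Rough noise estimate for an L2A surface reflectance time series:
    standard deviation of successive differences, scaled by 1/sqrt(2).
    Undetected clouds or shadows inflate this value."""
    diffs = np.diff(np.asarray(sr_series, dtype=float))
    return diffs.std(ddof=0) / np.sqrt(2)
```

A perfectly constant series returns zero; residual cloud or shadow contamination shows up as an inflated estimate relative to the sensor noise floor.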

Acknowledgments

The authors would like to thank all the AERONET Principal Investigators and their staff for establishing and maintaining the 19 sites used in this investigation. We would also like to thank the editors and the anonymous reviewers for their constructive comments, which helped us to improve the quality of the article.

Author Contributions

Georgia Doxani, Eric Vermote, Jean-Claude Roger and Ferran Gascon contributed to the definition of the protocol, the inter-comparison analysis and the preparation of the manuscript. Eric Vermote, Stefan Adriaensen, David Frantz, Olivier Hagolle, André Hollstein, Grit Kirches, Fuqin Li, Jérôme Louis, Antoine Mangin, Nima Pahlevan, Bringfried Pflug and Quinten Vanhellmont contributed to the definition of the protocol, implemented their AC processors on the input data and participated in the preparation of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Vermote, E.F.; Kotchenova, S. Atmospheric correction for the monitoring of land surfaces. J. Geophys. Res. 2008, 113.
2. Franch, B.; Vermote, E.F.; Roger, J.C.; Murphy, E.; Becker-Reshef, I.; Justice, C.; Claverie, M.; Nagol, J.; Csiszar, I.; Meyer, D.; et al. A 30+ Year AVHRR Land Surface Reflectance Climate Data Record and Its Application to Wheat Yield Monitoring. Remote Sens. 2017, 9, 296.
3. Hagolle, O.; Huc, M.; Villa Pascual, D.; Dedieu, G. A multi-temporal and multi-spectral method to estimate aerosol optical thickness over land, for the atmospheric correction of FormoSat-2, LandSat, VENµS and Sentinel-2 images. Remote Sens. 2015, 7, 2668–2691.
4. Ng, W.T.; Rima, P.; Einzmann, K.; Immitzer, M.; Atzberger, C.; Eckert, S. Assessing the Potential of Sentinel-2 and Pléiades Data for the Detection of Prosopis and Vachellia spp. in Kenya. Remote Sens. 2017, 9, 74.
5. Van der Werff, H.; Van der Meer, F. Sentinel-2A MSI and Landsat 8 OLI Provide Data Continuity for Geological Remote Sensing. Remote Sens. 2016, 8, 883.
6. Radoux, J.; Chomé, G.; Jacques, D.C.; Waldner, F.; Bellemans, N.; Matton, N.; Lamarche, C.; d’Andrimont, R.; Defourny, P. Sentinel-2’s Potential for Sub-Pixel Landscape Feature Detection. Remote Sens. 2016, 8, 488.
7. Pahlevan, N.; Sarkar, S.; Franz, B.A.; Balasubramanian, S.V.; He, J. Sentinel-2 MultiSpectral Instrument (MSI) data processing for aquatic science applications: Demonstrations and validations. Remote Sens. Environ. 2017, 201, 47–56.
8. Pahlevan, N.; Schott, J.R.; Franz, B.A.; Zibordi, G.; Markham, B.; Bailey, S.; Schaaf, C.B.; Ondrusek, M.; Greb, S.; Strait, C.M. Landsat 8 remote sensing reflectance (Rrs) products: Evaluations, intercomparisons, and enhancements. Remote Sens. Environ. 2017, 190, 289–301.
9. Rouquié, B.; Hagolle, O.; Bréon, F.-M.; Boucher, O.; Desjardins, C.; Rémy, S. Using Copernicus Atmosphere Monitoring Service Products to Constrain the Aerosol Type in the Atmospheric Correction Processor MAJA. Remote Sens. 2017, 9, 1230.
10. Dörnhöfer, K.; Göritz, A.; Gege, P.; Pflug, B.; Oppelt, N. Water Constituents and Water Depth Retrieval from Sentinel-2A—A First Evaluation in an Oligotrophic Lake. Remote Sens. 2016, 8, 941.
11. Ju, J.; Roy, D.P.; Vermote, E.; Masek, J.; Kovalskyy, V. Continental-scale validation of MODIS-based and LEDAPS Landsat ETM+ atmospheric correction methods. Remote Sens. Environ. 2012, 122, 175–184.
12. Claverie, M.; Vermote, E.F.; Franch, B.; Masek, J.G. Evaluation of the Landsat-5 TM and Landsat-7 ETM+ surface reflectance products. Remote Sens. Environ. 2015, 169, 390–403.
13. Vermote, E.; Justice, C.; Claverie, M.; Franch, B. Preliminary analysis of the performance of the Landsat 8/OLI land surface reflectance product. Remote Sens. Environ. 2016, 185, 46–56.
14. Holben, B.N.; Eck, T.F.; Slutsker, I.; Tanre, D.; Buis, J.P.; Setzer, A.; Vermote, E.; Reagan, J.A.; Kaufman, Y.J.; Nakajima, T.; et al. AERONET—A federated instrument network and data archive for aerosol characterization. Remote Sens. Environ. 1998, 66, 1–16.
15. Smirnov, A.; Holben, B.N.; Eck, T.F.; Dubovik, O.; Slutsker, I. Cloud-screening and quality control algorithms for the AERONET database. Remote Sens. Environ. 2000, 73, 337–349.
16. Richter, R. Correction of satellite imagery over mountainous terrain. Appl. Opt. 1998, 37, 4004–4015.
17. Richter, R.; Schläpfer, D. Atmospheric/Topographic Correction for Satellite Imagery (DLR Report DLR-IB 565-02/15); German Aerospace Center (DLR): Wessling, Germany, 2015.
18. Defourny, P.; Arino, O.; Boettcher, M.; Brockmann, C.; Kirches, G.; Lamarche, C.; Radoux, J.; Ramoino, F.; Santoro, M.; Wevers, J. CCI-LC ATBDv3 Phase II. Land Cover Climate Change Initiative—Algorithm Theoretical Basis Document v3. Issue 1.1, 2017. Available online: https://www.esa-landcover-cci.org/?q=documents# (accessed on 20 February 2018).
19. Frantz, D.; Röder, A.; Stellmes, M.; Hill, J. An Operational Radiometric Landsat Preprocessing Framework for Large-Area Time Series Applications. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3928–3943.
20. Li, F.; Jupp, D.L.B.; Thankappan, M.; Lymburner, L.; Mueller, N.; Lewis, A.; Held, A. A Physics-based Atmospheric and BRDF Correction for Landsat Data over Mountainous Terrain. Remote Sens. Environ. 2012, 124, 756–770.
21. Franz, B.A.; Bailey, S.W.; Kuring, N.; Werdell, P.J. Ocean color measurements with the Operational Land Imager on Landsat-8: Implementation and evaluation in SeaDAS. J. Appl. Remote Sens. 2015, 9, 096070.
22. Vermote, E.F.; Tanré, D.; Deuze, J.L.; Herman, M.; Morcette, J.J. Second simulation of the satellite signal in the solar spectrum, 6S: An overview. IEEE Trans. Geosci. Remote Sens. 1997, 35, 675–686.
23. Dubovik, O.; Holben, B.; Eck, T.F.; Smirnov, A.; Kaufman, Y.J.; King, M.D.; Tanré, D.; Slutsker, I. Variability of absorption and optical properties of key aerosol types observed in worldwide locations. J. Atmos. Sci. 2002, 59, 590–608.
24. Kotchenova, S.Y.; Vermote, E.F.; Matarrese, R.; Klemm, F.J. Validation of a vector version of the 6S radiative transfer code for atmospheric correction of satellite data. Part I: Path radiance. Appl. Opt. 2006, 45, 6762–6774.
25. Kotchenova, S.Y.; Vermote, E.F. Validation of a vector version of the 6S radiative transfer code for atmospheric correction of satellite data. Part II. Homogeneous Lambertian and anisotropic surfaces. Appl. Opt. 2007, 46, 4455–4464.
26. Petrucci, B.; Huc, M.; Feuvrier, T.; Ruffel, C.; Hagolle, O.; Lonjou, V.; Desjardins, C. MACCS: Multi-Mission Atmospheric Correction and Cloud Screening tool for high-frequency revisit data processing. In Proceedings of the Image and Signal Processing for Remote Sensing XXI, Toulouse, France, 21–24 September 2015.
Figure 1. The scatterplots of AOT estimates at 550 nm based on Landsat-8 observations compared to the AERONET measurements from all the sites. The main plots refer to the AOT values up to 1, while in the sub-plots (upper right) higher AOT values are also included.
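For context on the Figure 1 comparison: AERONET sun photometers do not measure AOT at 550 nm directly, so a common way to obtain the reference value at that wavelength is to interpolate a neighbouring AERONET channel with the Ångström power law. The sketch below illustrates this standard conversion under that assumption; the function and argument names are ours.

```python
def aot_at_550(aot_ref, wavelength_nm, angstrom_exp):
    """Interpolate an AOT measurement to 550 nm using the Angstrom
    power law: tau(lam) = tau(lam0) * (lam / lam0) ** (-alpha)."""
    return aot_ref * (550.0 / wavelength_nm) ** (-angstrom_exp)
```

For example, an AOT of 0.2 measured at 500 nm with a positive Ångström exponent yields a smaller AOT at the longer 550 nm wavelength.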
Figure 2. The accuracy (red line), precision (green line), and uncertainty (blue line) as computed in bins (blue bars) for OLI Band 4 (Red). The total number of points used in the computations is also given in the plot. The magenta line represents the theoretical SR reference for Landsat SR (0.005 + 0.05 × ρ).
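The accuracy, precision, and uncertainty (APU) curves shown in these plots can be computed from matched pairs of retrieved and reference surface reflectance: accuracy is the mean bias, precision the standard deviation of the residuals, and uncertainty the RMSE, so that U² = A² + P². A minimal sketch (the helper names are ours; the reference line is the one quoted in the caption):

```python
import numpy as np

def apu(retrieved, reference):
    """Accuracy (mean bias), Precision (std of residuals) and
    Uncertainty (RMSE) of retrieved vs. reference surface reflectance."""
    residuals = np.asarray(retrieved, float) - np.asarray(reference, float)
    accuracy = residuals.mean()
    precision = residuals.std(ddof=0)          # spread around the bias
    uncertainty = np.sqrt((residuals ** 2).mean())  # total error (RMSE)
    return accuracy, precision, uncertainty

def sr_spec(rho):
    """Theoretical SR uncertainty reference used in the plots."""
    return 0.005 + 0.05 * rho
```

In the figures these quantities are evaluated per reflectance bin and compared against the `sr_spec` line.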
Figure 3. The scatterplots of AOT estimates at 550 nm based on Sentinel-2 observations versus the AERONET measurements. The main plots refer to the AOT values up to 0.8, while in the sub-plots (upper right), higher AOT values are also included.
Figure 4. The scatterplots of WV estimates based on Sentinel-2 observations versus the AERONET measurements.
Figure 5. The accuracy (red line), precision (green line), and uncertainty (blue line) as computed in bins (blue bars) for MSI Band 4 (Red). The total number of points used in the computations is also given in the plot. The magenta line represents the theoretical SR reference (0.005 + 0.05 × ρ).
Table 1. The list of ACIX participants.
| AC Processor | Participants | Affiliation | Reference | Data Submitted (Landsat-8 / Sentinel-2) |
|---|---|---|---|---|
| ACOLITE | Quinten Vanhellmont | Royal Belgian Institute for Natural Sciences [Belgium] | - | |
| ATCOR/S2-AC2020 | Bringfried Pflug, Rolf Richter, Aliaksei Makarau | DLR German Aerospace Center [Germany] | [16,17] | |
| CorA [Brockmann] | Grit Kirches, Carsten Brockmann | Brockmann Consult GmbH [Germany] | [18] | - |
| FORCE | David Frantz, Joachim Hill | Trier University [Germany] | [19] | |
| iCOR [OPERA] | Stefan Adriaensen | VITO [Belgium] | - | |
| GA-PABT | Fuqin Li | Geoscience Australia [Australia] | [20] | |
| LAC | Antoine Mangin | ACRI [France] | - | - |
| LaSRC | Eric Vermote | GSFC NASA [USA] | [13] | |
| MACCS | Olivier Hagolle | CNES [France] | [3] | - |
| GFZ-AC [SCAPE-M] | André Hollstein | GFZ German Research Centre for Geosciences [Germany] | - | - |
| SeaDAS | Nima Pahlevan | GSFC NASA [USA] | [7,8,21] | - |
| Sen2Cor v2.2.2 | Jerome Louis; Bringfried Pflug | Telespazio France [France]; DLR German Aerospace Center [Germany] | - | - |
Table 2. The 19 AERONET sites involved in ACIX.
| | Test Sites * | Zone ** | Land Cover | AERONET Station Lat., Lon. |
|---|---|---|---|---|
| Temperate | Carpentras [France] | Temperate | vegetated, bare soil, coastal | 44.083, 5.058 |
| | Davos [Switzerland] | Temperate | forest, snow, agriculture | 46.813, 9.844 |
| | Beijing [China] | Temperate | urban, mountains | 39.977, 116.381 |
| | Canberra [Australia] | Temperate | urban, vegetated, water | −35.271, 149.111 |
| | Pretoria_CSIR-DPSS [South Africa] | Temperate | urban, semi-arid | −25.757, 28.280 |
| | Sioux_Falls [USA] | Temperate | cropland, vegetated | 43.736, −96.626 |
| | GSFC [USA] | Temperate | urban, forest, cropland, water | 38.992, −76.840 |
| | Yakutsk [Russia] | Temperate | forest, river, snow | 61.662, 129.367 |
| Arid | Banizoumbou [Niger] | Tropical | desert, cropland | 13.541, 2.665 |
| | Capo_Verde [Capo Verde] | Tropical | desert, ocean | 16.733, −22.935 |
| | SEDE_BOKER [Israel] | Temperate | desert | 34.782, 30.855 |
| Equatorial Forest | Alta_Floresta [Brazil] | Tropical | cropland, urban, forest | −9.871, −56.104 |
| | ND_Marbel_Univ [Philippines] | Tropical | cropland, urban, forest | 6.496, 124.843 |
| Boreal | Rimrock [USA] | Temperate | semi-arid | 46.487, −116.992 |
| Coastal | Thornton C-power [Belgium] | Temperate | water, vegetated | 51.532, 2.955 |
| | Gloria [Romania] | Temperate | water, vegetated | 44.600, 29.360 |
| | Sirmione_Museo_GC [Italy] | Temperate | water, vegetated, urban | 45.500, 10.606 |
| | Venice [Italy] | Temperate | water, vegetated, urban | 45.314, 12.508 |
| | WaveCIS_Site_CSI_6 [USA] | Temperate | water, vegetated | 28.867, −90.483 |
* Selected considering the AERONET data availability. The nomenclature for the site names follows the AERONET sites. ** The latitude-zone nomenclature is 23.5° < Temperate < 66.5° and −23.5° < Tropical < 23.5°, with the equivalent bands for southern-hemisphere latitudes.
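The zone convention of Table 2 can be expressed as a simple classifier on the absolute latitude. The sketch below follows that convention; the function name and the "Polar" label for |lat| > 66.5° are our own additions.

```python
def latitude_zone(lat_deg):
    """Zone label following the Table 2 convention, applied to |latitude|
    so that it covers both hemispheres."""
    a = abs(lat_deg)
    if a < 23.5:
        return "Tropical"
    if a < 66.5:
        return "Temperate"
    return "Polar"   # our label for latitudes poleward of 66.5 deg
```

Applied to the table, Carpentras (44.083°) is Temperate and Banizoumbou (13.541°) is Tropical, matching the Zone column.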
Table 3. The matrix of the distances, taken pairwise, between the AC processors.
| | AC Processor 1 | AC Processor 2 | AC Processor 3 | … | AC Processor n |
|---|---|---|---|---|---|
| AC Processor 1 | 0 | d12 | d13 | … | d1n |
| AC Processor 2 | d21 | 0 | d23 | … | d2n |
| AC Processor 3 | d31 | d32 | 0 | … | d3n |
| … | … | … | … | … | … |
| AC Processor n | dn1 | dn2 | dn3 | … | 0 |
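The matrix in Table 3 is symmetric (dij = dji) with a zero diagonal, so only the upper triangle needs to be computed. A sketch of its assembly, assuming each processor's SR product has been resampled to a common grid and using the mean absolute difference as an illustrative distance (the actual metric is defined by the inter-comparison protocol; the function name is ours):

```python
import numpy as np

def distance_matrix(products):
    """Pairwise distances between AC processors' SR products.
    `products` maps processor name -> SR array on a common grid."""
    names = list(products)
    n = len(names)
    dist = np.zeros((n, n))          # zero diagonal by construction
    for i in range(n):
        for j in range(i + 1, n):
            d = np.mean(np.abs(products[names[i]] - products[names[j]]))
            dist[i, j] = dist[j, i] = d   # enforce symmetry d_ij = d_ji
    return names, dist
```

Any other pixel-wise metric (e.g., RMS difference) can be substituted without changing the assembly logic.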
Table 4. AOT statistics of the comparison between retrieved and reference values per processor over all the sites. The lowest RMS values are underlined.
AC Processor − Reference AOT

| AC Processor | No. of Samples | Min | Mean | ±RMS (stdv) | Max |
|---|---|---|---|---|---|
| ATCOR | 120 | 0 | 0.122 | 0.207 | 1.844 |
| FORCE | 124 | 0.002 | 0.112 | 0.211 | 1.745 |
| iCOR | 111 | 0.002 | 0.095 | 0.119 | 1.015 |
| LaSRC | 119 | 0.001 | 0.233 | 0.387 | 2.017 |
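The summary statistics reported in Tables 4, 6, and 7 can be reproduced from the matched retrieval/reference pairs. A sketch under the assumption, suggested by the "AC Processor − Reference" header, that the min, mean, and max columns are computed on the absolute differences (the function name and the dictionary keys are ours; whether the ±RMS column is an RMS or a standard deviation of the signed differences is not fully specified by the caption, and RMS is used here):

```python
import numpy as np

def diff_stats(retrieved, reference):
    """Min, mean, RMS and max of |retrieved - reference| over matched samples."""
    d = np.abs(np.asarray(retrieved, float) - np.asarray(reference, float))
    return {"n": d.size, "min": d.min(), "mean": d.mean(),
            "rms": np.sqrt((d ** 2).mean()), "max": d.max()}
```

Feeding, e.g., the per-overpass AOT retrievals of one processor together with the coincident AERONET values yields one row of the table.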
Table 5. OLI surface reflectance accuracy (A), precision (P), and uncertainty (U) results of every processor and regarding all the test sites. The number of points (nbp) involved in the APU analysis varies due to the different number of Landsat scenes processed and submitted by every processor.
| OLI Band | Metric | ATCOR | FORCE | LaSRC | iCOR |
|---|---|---|---|---|---|
| nbp | | 5094039 | 4981438 | 6109550 | 3985227 |
| 1 | A | 0.009 | 0.009 | −0.005 | −0.004 |
| | P | 0.010 | 0.008 | 0.010 | 0.011 |
| | U | 0.013 | 0.012 | 0.012 | 0.012 |
| 2 | A | 0.001 | −0.001 | −0.004 | −0.004 |
| | P | 0.007 | 0.006 | 0.009 | 0.010 |
| | U | 0.007 | 0.006 | 0.010 | 0.010 |
| 3 | A | 0.000 | −0.009 | −0.004 | 0.000 |
| | P | 0.005 | 0.006 | 0.007 | 0.009 |
| | U | 0.005 | 0.010 | 0.008 | 0.009 |
| 4 | A | 0.000 | −0.009 | −0.004 | 0.000 |
| | P | 0.005 | 0.006 | 0.006 | 0.010 |
| | U | 0.005 | 0.011 | 0.007 | 0.010 |
| 5 | A | 0.005 | 0.000 | −0.005 | 0.010 |
| | P | 0.005 | 0.005 | 0.007 | 0.010 |
| | U | 0.008 | 0.005 | 0.008 | 0.014 |
| 6 | A | −0.001 | −0.023 | −0.002 | 0.006 |
| | P | 0.004 | 0.012 | 0.003 | 0.006 |
| | U | 0.004 | 0.026 | 0.004 | 0.008 |
| 7 | A | −0.001 | −0.008 | 0.001 | 0.006 |
| | P | 0.006 | 0.007 | 0.003 | 0.005 |
| | U | 0.006 | 0.010 | 0.003 | 0.007 |
Table 6. AOT statistics of the comparison between retrieved and reference values per processor over all the sites. The lowest RMS values are underlined.
AC Processor − Reference AOT

| AC Processor | No. of Samples | Min | Mean | ±RMS (Stdv) | Max |
|---|---|---|---|---|---|
| CorA | 47 | 0 | 0.133 | 0.155 | 0.757 |
| FORCE | 48 | 0.003 | 0.116 | 0.169 | 0.871 |
| iCOR | 37 | 0.002 | 0.150 | 0.151 | 0.599 |
| LaSRC | 48 | 0.002 | 0.115 | 0.097 | 0.602 |
| MACCS | 24 | 0.002 | 0.176 | 0.200 | 0.778 |
| S2-AC2020 | 36 | 0.002 | 0.107 | 0.144 | 0.652 |
| GFZ-AC | 41 | 0.001 | 0.159 | 0.223 | 0.920 |
| Sen2Cor | 47 | 0.005 | 0.158 | 0.147 | 0.805 |
Table 7. WV statistics of the comparison between retrieved and reference values per processor over all the sites. The lowest RMS values are underlined.
AC Processor − Reference WV

| AC Processor | No. of Samples | Min | Mean | ±RMS (Stdv) | Max |
|---|---|---|---|---|---|
| CorA | 36 | 0.008 | 0.370 | 0.332 | 1.312 |
| FORCE | 43 | 0.001 | 0.215 | 0.305 | 1.504 |
| LaSRC | 41 | 0.021 | 0.297 | 0.303 | 1.906 |
| MACCS | 20 | 0.002 | 0.269 | 0.387 | 1.654 |
| S2-AC2020 | 29 | 0.005 | 0.344 | 0.437 | 2.180 |
| GFZ-AC | 39 | 0.027 | 0.457 | 0.283 | 1.246 |
| Sen2Cor | 41 | 0.012 | 0.280 | 0.346 | 1.630 |
Table 8. Accuracy (A), precision (P), and uncertainty (U) scores per band for the S-2 SR products of every processor and over all the test sites. The number of points (nbp) involved in the APU analysis varies due to the different number of S-2 scenes processed and submitted by every processor.
| MSI Band | Metric | CorA | FORCE | iCOR | LaSRC | MACCS | S2-AC2020 | GFZ-AC | Sen2Cor |
|---|---|---|---|---|---|---|---|---|---|
| nbp | | 23873202 | 29568870 | 23808647 | 36863274 | 12538144 | 34243490 | 34159390 | 30335882 |
| 1 | A | −0.006 | −0.002 | −0.010 | −0.010 | - | −0.006 | 0.026 | −0.003 |
| | P | 0.096 | 0.009 | 0.024 | 0.010 | - | 0.017 | 0.014 | 0.011 |
| | U | 0.096 | 0.009 | 0.026 | 0.014 | - | 0.018 | 0.029 | 0.011 |
| 2 | A | 0.000 | −0.004 | 0.000 | −0.007 | −0.008 | −0.004 | 0.023 | −0.001 |
| | P | 0.021 | 0.007 | 0.028 | 0.008 | 0.010 | 0.021 | 0.016 | 0.009 |
| | U | 0.021 | 0.008 | 0.028 | 0.011 | 0.013 | 0.022 | 0.029 | 0.009 |
| 3 | A | 0.003 | −0.012 | 0.013 | −0.005 | −0.008 | 0.000 | 0.031 | 0.004 |
| | P | 0.024 | 0.006 | 0.034 | 0.006 | 0.008 | 0.023 | 0.023 | 0.010 |
| | U | 0.025 | 0.014 | 0.036 | 0.008 | 0.012 | 0.023 | 0.039 | 0.011 |
| 4 | A | 0.002 | −0.007 | 0.018 | −0.003 | −0.007 | 0.002 | 0.022 | 0.006 |
| | P | 0.027 | 0.005 | 0.036 | 0.006 | 0.007 | 0.025 | 0.020 | 0.012 |
| | U | 0.027 | 0.009 | 0.040 | 0.007 | 0.010 | 0.026 | 0.030 | 0.013 |
| 5 | A | 0.008 | −0.008 | 0.027 | −0.002 | −0.005 | 0.007 | 0.031 | 0.020 |
| | P | 0.029 | 0.005 | 0.038 | 0.006 | 0.006 | 0.012 | 0.022 | 0.018 |
| | U | 0.030 | 0.009 | 0.046 | 0.006 | 0.008 | 0.014 | 0.038 | 0.027 |
| 6 | A | 0.005 | 0.001 | 0.024 | −0.001 | −0.003 | 0.004 | 0.024 | 0.017 |
| | P | 0.032 | 0.005 | 0.033 | 0.005 | 0.006 | 0.010 | 0.042 | 0.011 |
| | U | 0.032 | 0.005 | 0.041 | 0.005 | 0.007 | 0.011 | 0.049 | 0.021 |
| 7 | A | 0.006 | −0.002 | 0.025 | −0.003 | −0.007 | 0.005 | 0.020 | 0.014 |
| | P | 0.033 | 0.005 | 0.031 | 0.005 | 0.005 | 0.009 | 0.047 | 0.010 |
| | U | 0.034 | 0.006 | 0.040 | 0.005 | 0.008 | 0.010 | 0.051 | 0.017 |
| 8 | A | 0.008 | 0.017 | 0.032 | 0.001 | −0.001 | 0.011 | 0.025 | 0.022 |
| | P | 0.033 | 0.010 | 0.034 | 0.005 | 0.005 | 0.026 | 0.047 | 0.014 |
| | U | 0.034 | 0.019 | 0.047 | 0.005 | 0.006 | 0.028 | 0.053 | 0.026 |
| 8a | A | −0.008 | 0.000 | 0.023 | −0.002 | −0.008 | 0.003 | 0.016 | 0.013 |
| | P | 0.033 | 0.005 | 0.028 | 0.004 | 0.005 | 0.011 | 0.049 | 0.008 |
| | U | 0.034 | 0.005 | 0.036 | 0.005 | 0.009 | 0.011 | 0.051 | 0.015 |
| 11 | A | 0.021 | −0.010 | 0.018 | 0.002 | −0.003 | 0.009 | 0.017 | 0.020 |
| | P | 0.035 | 0.005 | 0.019 | 0.003 | 0.003 | 0.007 | 0.011 | 0.009 |
| | U | 0.041 | 0.011 | 0.026 | 0.003 | 0.004 | 0.011 | 0.020 | 0.022 |
| 12 | A | 0.020 | 0.004 | 0.013 | 0.004 | 0.000 | 0.008 | 0.014 | 0.025 |
| | P | 0.030 | 0.006 | 0.013 | 0.003 | 0.002 | 0.006 | 0.019 | 0.014 |
| | U | 0.036 | 0.007 | 0.018 | 0.005 | 0.003 | 0.010 | 0.024 | 0.028 |

Share and Cite

MDPI and ACS Style

Doxani, G.; Vermote, E.; Roger, J.-C.; Gascon, F.; Adriaensen, S.; Frantz, D.; Hagolle, O.; Hollstein, A.; Kirches, G.; Li, F.; et al. Atmospheric Correction Inter-Comparison Exercise. Remote Sens. 2018, 10, 352. https://doi.org/10.3390/rs10020352
