1. Introduction
The radiance observed by a hyperspectral imager depends on the illumination and observation geometry and on the interaction of solar radiation with the ground and atmosphere [1]. Gas absorption is the main factor modulating the atmospheric transmission spectrum. Furthermore, scattering by molecules and aerosols plays a significant role in the extinction of radiation [2], so that the radiation received by the sensor can be ascribed to different contributions, depending on the scattering events occurring along each photon's trajectory and its interaction with the ground. The estimate of these different contributions to the received radiation can be carried out by means of a direct Radiative Transfer Model (RTM) [3].
An atmospheric correction procedure performs the so-called inversion of the direct model: it retrieves the surface spectral reflectance from the (calibrated) at-sensor spectral radiance [4,5,6,7] and permits the characterization of the observed surface, regardless of the effects of atmosphere, illumination and observation geometry. Ground surface spectral reflectance is the starting point for terrain classification, the determination of different environmental parameters and the Earth radiation budget estimate [8]. Inversion is performed by simulating, via the RTM, the propagation of solar radiation through the atmospheric medium and its interaction with ground and air components.
Due to atmospheric scattering, a single sensor pixel collects, together with the radiance directly received from the corresponding ground pixel along the ground-to-sensor direction, two different contributions [5,6,9]: radiation diffused by the atmosphere without interaction with the ground (called path radiance) and radiation scattered from the ground surrounding the generic ground pixel into its Instantaneous Field Of View (IFOV) by one or more scattering events. The latter contribution produces the mixing of signals coming from different (mainly adjacent) pixels. Such a mixing effect arises from the sum of two terms. The first is given by the light illuminating the pixel of interest after successive reflections and scattering between the surface and the atmosphere (due to light “trapping” between the ground and the overlying air/aerosol) [9]. The second term takes into account the ground-leaving radiation received into the pixel’s IFOV after reflection by the surrounding ground (adjacency effect) and is the main subject of this paper [10].
Accurate atmospheric correction procedures are usually supervised, such as those implemented in widely used software packages like FLAASH® (Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes) [11,12] and ATCOR® (Atmospheric & Topographic Correction) [13,14], which are both based on the outputs of the MODTRAN® (MODerate resolution atmospheric TRANsmission) RTM software [14,15,16]. These codes incorporate both scattering processes and the absorption by gaseous components. Iterative procedures have mainly been developed for the retrieval of spectral reflectance without a priori knowledge of atmospheric characteristics, e.g., the iterative procedure introduced in [17], which was developed for hyperspectral imagers working in the visible–Short Wave InfraRed (SWIR) spectral range. The iterative approach to atmospheric correction has also been used in the retrieval of ocean colour from multispectral/hyperspectral images [18,19].
Since the first studies on atmospheric correction, efforts have been made to evaluate the impact of adjacency effects on such procedures [20,21,22,23,24]. Generally, adjacency effects are evaluated by using a priori information on the spectral reflectance of neighbouring pixels. Tanré et al. proposed one of the first approaches to the problem by defining a function F(r) that models the effect of surrounding pixels as a function of their distance r from the observed one [21]. This function was later incorporated into the Radiative Transfer Equation (RTE) software 5S® and 6S® [9,22]. A similar, yet more sophisticated, approach was adopted by Minomura et al., in which the adjacency effect is related to the aerosol optical thickness [25]. Richter adopted a different approach to model adjacency, which relies on the difference between the target reflectance and the average reflectance and on the ratio between diffuse and direct transmittance [26]. Atmospheric correction procedures often use the average reflectance of the surrounding pixels to evaluate the adjacency contribution to the at-sensor radiance [27,28]. The procedure implemented in a recent release of ATCOR® uses a spatial filter to derive the apparent reflectance of the adjacent area [29]. A different approach is the one developed for the detection and correction of water pixels affected by adjacency effects, based on the comparison of spectra [30].
Starting from version 5.3.2, the MODTRAN® software [31] has been designed to produce dedicated outputs that can be easily implemented in the atmospheric correction procedure proposed by Verhoef [6,10]. The latter procedure evaluates effects related to the presence of adjacent areas by assuming a contribution that depends on their spatially averaged reflectance.
This paper proposes a method that iteratively applies the model adopted by Verhoef in order to correct for the contribution of surrounding pixels, assuming that the direct radiative transfer model provided by the MODTRAN® 5.3.2 output is representative of the actual atmospheric conditions. The reflectance of the pixels in the image is computed iteratively by successive approximations, without the need for any a priori knowledge or assumptions about their reflectance. The novelty of the proposed method lies in its iterative approach to evaluating the contribution of surrounding pixels: a first run of the atmospheric correction procedure is performed by assuming that the spectral reflectance of the surrounding pixels is equal to that of the pixel under investigation. This information is used in the subsequent iteration steps to calculate a new value for the mean spectral reflectance of the surrounding pixels, in order to obtain a more accurate evaluation of the reflectance image.
Below, the iterative procedure is described and the results obtained by applying it to both synthetic and real images are presented and discussed.
2. Materials and Methods
The atmospheric correction procedure proposed in this paper is an iterative procedure that uses MODTRAN® to model the propagation of electromagnetic radiation through the atmosphere. MODTRAN®, one of the most widely used radiative transfer models, assumes that the atmosphere is a stratified medium made up of plane-parallel layers between the ground and the Top Of Atmosphere (TOA) [16]. Starting from version 5, the MODTRAN® radiative transfer code implements the modelling approach proposed by Verhoef [1,6,10,31] for adjacency effects. The atmospheric correction input parameters are the Sun and observation (i.e., sensor) geometries and the atmospheric scattering and absorption properties, calculated from the constituent (i.e., gas and aerosol) profiles at a given spectral resolution. The proposed approach considers flat terrain, absence of cloud cover and Lambertian surfaces, and neglects light polarization.
Following the nomenclature adopted by Verhoef [6,10], the spectral irradiance impinging on the generic ground pixel (with reflectance $\rho$) can be expressed as the sum of different contributions (Figure 1a): direct (modulated by atmospheric absorption and propagated along the Sun-to-ground-pixel line), diffuse (impinging on the pixel after one or more atmospheric scattering events) and diffuse after multiple interactions between the atmosphere (with spherical albedo $S$) and the surrounding ground (with average reflectance $\bar{\rho}$). As a consequence, the irradiance illuminating the generic ground pixel depends on the illumination/observation geometry, on the atmospheric medium, and on the contribution due to the reflectance (also called “hemispherical reflectance”) of the neighbouring terrain.
The radiance reaching the sensor (a function of the irradiance impinging on the ground pixel and of its reflectance $\rho$) can further be divided into the following contributions (Figure 1b): the directly transmitted radiance, modulated by the ground-pixel-to-sensor atmospheric transmittance; the atmosphere-backscattered radiation (with no ground interaction); and the diffuse radiance scattered into the IFOV after being reflected by adjacent ground pixels.
It is interesting to note that, without contributions 2 (Figure 1a) and 4 (Figure 1b), the radiance acquired by each sensor pixel would be independent of the others. Pixel interdependence is caused by atmospheric scattering, either through the multiple ground reflections of contribution 2 (Figure 1a) or through the adjacency contribution 4 (Figure 1b).
For a generic wavelength $\lambda$ and a Sun zenith angle $\theta_s$, we consider $E_g^0(\lambda,\theta_s)$ as the total ground irradiance (direct plus diffuse, i.e., made up of contributions 0 and 1 of Figure 1a) impinging on a black ground (i.e., having zero reflectance). Therefore, the irradiance $E_g(\lambda,\theta_s)$ illuminating the generic ground pixel (having non-zero ground reflectance) can be written as the sum of contributions 0, 1 and 2:
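In the notation assumed here (with $E_g^0(\lambda,\theta_s)$ the black-ground total irradiance, $S$ the spherical albedo of the atmosphere and $\bar{\rho}$ the average reflectance of the surroundings), a plausible form of Equation (1), describing the series of successive ground–atmosphere interactions, is

$E_g(\lambda,\theta_s) = E_g^0(\lambda,\theta_s)\,\bigl[1 + S\bar{\rho} + (S\bar{\rho})^2 + \cdots\bigr] = \dfrac{E_g^0(\lambda,\theta_s)}{1 - S\bar{\rho}}$   (1)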
By substituting the corresponding limit of the geometric series in (1), we get the total irradiance illuminating the pixel in the general case, considering for contribution 2 an average reflectance $\bar{\rho}$.
Following the MODTRAN® notation as in [31] and defining $\theta_v$ and $\varphi$ as the view zenith angle and the relative Sun–observer azimuth angle, the observed at-sensor radiance $L_{obs}$ can be written as the sum of three terms: the path radiance $L_{path}$, the radiance received directly from the ground pixel and modulated by the ground-to-sensor transmittance, and the radiance from its neighbouring pixels, which takes into account their spatially averaged reflectance $\bar{\rho}$ (Figure 1b):
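In the Verhoef four-stream notation (the symbol names here are an assumption), Equation (2) can be sketched as

$L_{obs}(\theta_v,\varphi) = L_{path}(\theta_v,\varphi) + \dfrac{E_g^0(\lambda,\theta_s)}{\pi\,(1 - S\bar{\rho})}\,\bigl[\tau_{oo}\,\rho + \tau_{do}\,\bar{\rho}\bigr]$   (2)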
where $\tau_{oo}$ and $\tau_{do}$ are, respectively, the direct and diffuse transmittance of the atmosphere along the ground-to-sensor path, and $L_{path}$ is the contribution given by the path radiance. These terms are determined by the MODTRAN® RTM output for a given atmospheric model, Sun position and observation geometry. The terms that depend on the observation geometry vary according to the position of the pixels in the at-sensor radiance image. To calculate them, we hypothesize scene scanning along the sensor trajectory (common in both push-broom and whisk-broom acquisition), fixing the observation geometry for each column of the image. Each view-dependent term is provided by the RTM for three different values of $\theta_v$ and $\varphi$ corresponding, respectively, to the leftmost, central and rightmost columns of the image, and is then calculated by interpolation for each pixel of the image line.
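As an illustration of this per-column handling, the following minimal sketch shows how a view-dependent, channel-averaged quantity (e.g., the diffuse ground-to-sensor transmittance) could be interpolated from the three MODTRAN runs; the identifiers and the choice of a simple three-point (quadratic) interpolation are assumptions, not taken from the actual implementation.

```java
// Illustrative sketch (not the actual implementation): across-track interpolation
// of a view-dependent, channel-averaged quantity, starting from the three MODTRAN
// runs performed for the leftmost, central and rightmost image columns.
public final class AcrossTrackInterpolation {

    /** Three-point (Lagrange) interpolation over the column index. */
    public static double[] interpolate(double left, double centre, double right, int nCols) {
        double[] out = new double[nCols];
        if (nCols == 1) { out[0] = centre; return out; }
        double xl = 0.0, xc = (nCols - 1) / 2.0, xr = nCols - 1.0;
        for (int j = 0; j < nCols; j++) {
            double wl = (j - xc) * (j - xr) / ((xl - xc) * (xl - xr));
            double wc = (j - xl) * (j - xr) / ((xc - xl) * (xc - xr));
            double wr = (j - xl) * (j - xc) / ((xr - xl) * (xr - xc));
            out[j] = left * wl + centre * wc + right * wr; // value for column j
        }
        return out;
    }
}
```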
Equation (2) can be expressed in terms of apparent reflectance $\rho^{*}$ by normalizing for the lighting conditions, where $E_{TOA}$ is the TOA solar irradiance:
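Normalizing Equation (2) by the TOA solar term $E_{TOA}\cos\theta_s/\pi$, and noting that $E_g^0/(E_{TOA}\cos\theta_s) = \tau_{ss}+\tau_{sd}$, a plausible form of Equation (3) (symbols as assumed above) is

$\rho^{*} = \rho^{*}_{path} + \dfrac{(\tau_{ss}+\tau_{sd})\,\tau_{oo}}{1 - S\bar{\rho}}\,\rho + \dfrac{(\tau_{ss}+\tau_{sd})\,\tau_{do}}{1 - S\bar{\rho}}\,\bar{\rho}$   (3)

where $\rho^{*}_{path} = \pi L_{path}/(E_{TOA}\cos\theta_s)$.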
Here $\tau_{ss}$ and $\tau_{sd}$ are, respectively, the direct and diffuse transmittance of the atmosphere along the Sun-to-ground path. Equation (3) is made up of three components: the first term never interacts with the ground; the second term takes into account the interaction of solar radiation with the observed pixel; and the third term considers the adjacency effects due to neighbouring pixels having spatially averaged reflectance $\bar{\rho}$.
The apparent spectral reflectance $\rho^{*}(\lambda)$ must be integrated over the spectral width of the considered instrumental channel. After making the angular and spectral dependences implicit, Equation (3) can be re-written as
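Denoting the channel integration by $\langle\cdot\rangle$, Equation (4) can be sketched (with the same assumed notation) as

$\langle\rho^{*}\rangle = \langle\rho^{*}_{path}\rangle + \Bigl\langle \dfrac{(\tau_{ss}+\tau_{sd})\,\tau_{oo}\,\rho}{1 - S\bar{\rho}} \Bigr\rangle + \Bigl\langle \dfrac{(\tau_{ss}+\tau_{sd})\,\tau_{do}\,\bar{\rho}}{1 - S\bar{\rho}} \Bigr\rangle, \qquad \langle x\rangle = \int_{\Delta\lambda} x(\lambda)\,R(\lambda)\,d\lambda$   (4)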
where the operator $\langle\cdot\rangle$ takes into account the integration over the channel spectral response, in which $\Delta\lambda$ is the spectral interval and $R(\lambda)$ is the spectral response function normalized to unit integral.
It is an acceptable hypothesis that the spectral variation of the reflectance terms within the channel is modest; thus, the terms $\rho$ and $\bar{\rho}$ (reflectances spectrally averaged in the channel) can be factored out of the spectral integration. Reintroducing the geometric series (as in [31]) and considering only its first two terms, Equation (4) can be rewritten as
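Under these approximations, a plausible form of Equation (5) is

$\langle\rho^{*}\rangle \approx \langle\rho^{*}_{path}\rangle + \rho\,\bigl\langle(\tau_{ss}+\tau_{sd})\,\tau_{oo}\,(1 + S\bar{\rho})\bigr\rangle + \bar{\rho}\,\bigl\langle(\tau_{ss}+\tau_{sd})\,\tau_{do}\,(1 + S\bar{\rho})\bigr\rangle$   (5)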
This last formulation suggests that, for each spectral channel, the atmospheric albedo contribution can be regarded as made up of two terms: one related to the pixel of interest and one related to the surrounding pixels.
2.1. The Image Simulation Tool PRIMUS
The image simulation tool PRIMUS (PRISMA Realistic IMage Utility for Simulation) [32] takes into account the instrumental characteristics, the acquisition geometry and the major phenomena and interactions that affect the acquired hyperspectral images. The simulation process considers the illumination and acquisition geometry, the spatial and spectral variability of the target, the atmosphere (aerosol and gas constituents) and its interaction with the soil. PRIMUS uses the approach proposed by Verhoef and Bach [6] to take into account adjacency effects: the at-sensor radiance is calculated using Equation (3), assuming that the surrounding pixels are characterized by a spectral reflectance equal to their average value $\bar{\rho}$. Finally, the simulation process considers the main effects introduced by the instrument, such as sensor sampling, instrument optical characteristics and noise. The image simulation tool is composed of three independent blocks:
The scenario builder, which creates a reflectance ground map (associating combinations of spectra to the classes of thematic map);
The atmospheric propagation calculator, which calculates the at-sensor radiance data cube using as input the reflectance map provided by the scenario builder. This step also simulates adjacency effects;
The sensor simulator, which computes the synthetic images of the selected scenario considering instrumental effects: foreoptics transmittance, Modulation Transfer Function (MTF), detector sampling (spectral and spatial), and noise (e.g., photonic noise and detector thermal noise).
PRIMUS is a powerful simulation tool that can be used to test and validate algorithms and procedures, thanks to the controlled flux of data and the presence of a verified “ground truth”.
In the framework of this paper, PRIMUS was used to generate synthetic images to test critical aspects of the iterative procedure described in the next section, in particular the case of sharp spatial/spectral gradients inside the image, when the contribution of adjacency effects to the apparent reflectance is maximum. PRIMUS is an end-to-end simulator that offers the possibility of comparing retrieved data with the input data (acting as ground truth). The simulation is performed at high spectral resolution, which allows convolution with the sensor response of each spectral channel.
3. Iterative Atmospheric Correction Procedure
The proposed iterative method aimed at solving Equation (5) by assuming that, at the first iteration, the spectral reflectance of the surrounding pixels was equal to the observed one, i.e., $\bar{\rho} = \rho$. Following the iterative approach, the first run of the correction procedure evaluated the spectral reflectance of each pixel of the entire image under the assumption $\bar{\rho} = \rho$. The reflectance retrieved in the first run was used in the following step to calculate the new value of the mean spectral reflectance of the surrounding pixels, $\bar{\rho}$. The new value of $\bar{\rho}$ was then used in a new evaluation of the reflectance image. The iteration process can be summarized as follows:
STEP 0:
Assuming $\bar{\rho} = \rho$, the corresponding channel-averaged terms (Equations (6) and (7)) follow accordingly. Under these assumptions, the apparent reflectance can be expressed as in Equation (8), and the at-ground spectral reflectance at order 0, $\rho^{(0)}$, is given by Equation (9).
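A plausible closed form of this zero-order inversion, obtained from Equation (3) with $\bar{\rho}=\rho$ (notation as assumed above), is

$\rho^{(0)} = \dfrac{\rho^{*} - \rho^{*}_{path}}{(\tau_{ss}+\tau_{sd})(\tau_{oo}+\tau_{do}) + S\,(\rho^{*} - \rho^{*}_{path})}$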
For each pixel of the processed image, the value of the reflectance in (9) was used to determine the mean spectral reflectance $\bar{\rho}^{(0)}$, averaged over the entire image. Therefore, $\bar{\rho}^{(0)}$ then became the input for the next iteration.
STEP 1:
Using the average $\bar{\rho}^{(0)}$ as the mean value for the reflectance of the surrounding pixels, the spectral reflectance at the first order, $\rho^{(1)}$, was expressed as in Equation (10).
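A plausible form of this first-order expression, obtained by inverting Equation (3) with the average $\bar{\rho}^{(0)}$ held fixed (notation as assumed above), is

$\rho^{(1)} = \dfrac{1}{\tau_{oo}}\left[\dfrac{(\rho^{*} - \rho^{*}_{path})\,(1 - S\bar{\rho}^{(0)})}{\tau_{ss}+\tau_{sd}} - \tau_{do}\,\bar{\rho}^{(0)}\right]$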
Then the average value $\bar{\rho}^{(1)}$ was calculated by spatial averaging, as in the previous step, and used as input for the next iteration.
At present, the number of iterations is set by the user. An alternative solution is to stop the iteration as soon as the minimum of a user-defined functional is reached. A candidate functional could be the global (i.e., calculated over the entire image or over the main regions of interest) relative variation of the reflectance image with respect to the one calculated in the previous step.
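A minimal sketch of how the iteration loop and such a variation-based stopping criterion could be arranged is given below, for a single spectral channel and a single across-track geometry. The identifiers (tauSun = $\tau_{ss}+\tau_{sd}$, tauOO, tauDO, sAlb, rhoPath) are illustrative placeholders for the channel-averaged MODTRAN-derived quantities, not those of the actual code.

```java
// Illustrative sketch of the per-channel iterative retrieval with an optional
// stopping criterion based on the mean absolute variation between successive estimates.
public final class IterativeRetrievalSketch {

    /** Inverts the forward model for one pixel, given the current mean reflectance rhoBar. */
    static double invertPixel(double rhoApp, double rhoPath, double tauSun,
                              double tauOO, double tauDO, double sAlb, double rhoBar) {
        double y = rhoApp - rhoPath;                      // remove the path term
        double rho = (y * (1.0 - sAlb * rhoBar) / tauSun - tauDO * rhoBar) / tauOO;
        return Math.max(0.0, Math.min(1.0, rho));         // clip to the physical range [0, 1]
    }

    static double[] retrieve(double[] rhoApp, double rhoPath, double tauSun,
                             double tauOO, double tauDO, double sAlb,
                             int maxIterations, double tolerance) {
        int n = rhoApp.length;
        double[] rho = new double[n];
        // STEP 0: closed-form inversion under the assumption rhoBar = rho for each pixel.
        for (int i = 0; i < n; i++) {
            double y = rhoApp[i] - rhoPath;
            rho[i] = Math.max(0.0, Math.min(1.0, y / (tauSun * (tauOO + tauDO) + sAlb * y)));
        }
        // STEP k (k >= 1): use the spatial average of the previous estimate as rhoBar.
        for (int k = 1; k <= maxIterations; k++) {
            double rhoBar = mean(rho);
            double variation = 0.0;
            for (int i = 0; i < n; i++) {
                double updated = invertPixel(rhoApp[i], rhoPath, tauSun, tauOO, tauDO, sAlb, rhoBar);
                variation += Math.abs(updated - rho[i]);
                rho[i] = updated;
            }
            if (variation / n < tolerance) {
                break;                                    // asymptotic value reached
            }
        }
        return rho;
    }

    static double mean(double[] values) {
        double sum = 0.0;
        for (double v : values) sum += v;
        return sum / values.length;
    }
}
```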
The model proposed by Verhoef and Bach [6] and described in the previous section assumes, for the evaluation of the average value $\bar{\rho}$ in Equation (1), an equal contribution from all the surrounding pixels in the image (i.e., a spatial average over the entire image). In principle, our approach can exploit the possibility of using, when available, a non-uniform distribution for spatially weighting the average reflectance $\bar{\rho}$, in order to take into account the contribution of the surrounding pixels as a function of their distance from the target pixel. In the trivial case in which the weighting function is the uniform distribution, the value of $\bar{\rho}$ is the reflectance spatially averaged over the entire image and spectrally averaged in the channel, as adopted by Verhoef and in the present paper.
Implementation of the Procedure
The iterative procedure was implemented as a stand-alone software tool using as input the output data of the MODTRAN® 5.3.2 radiative transfer software. In this version of MODTRAN, the four-stream model [6] produces outputs useful for the calculation of the terms introduced in Equation (2). In particular, the following five quantities were extracted from the dedicated *.acd MODTRAN output file and spectrally averaged by using a user-selected filter (Gaussian, triangular or rectangular) with the central wavelength and the Full Width at Half Maximum (FWHM) of each spectral band of the sensor, implementing the calculation of Equations (7)–(11): the Sun-to-ground diffuse transmittance of the atmosphere $\tau_{sd}$, the Sun-to-ground direct transmittance $\tau_{ss}$, the observer-to-ground embedded diffuse transmittance $\tau_{do}$, the observer (or sensor)-to-ground direct transmittance $\tau_{oo}$, and the spherical albedo of the atmosphere $S$. The remaining terms of Equation (2) were calculated from these quantities and from the MODTRAN output, and the Sun's zenith angle was computed for user-selected geographic coordinates and time of day.
Concerning the observation geometry, we assumed a pushbroom acquisition mode (across-track acquisition); that is, the sensor observes a spectrally resolved single line on the ground, orthogonal to the flight direction. In pushbroom geometry, the observation angles are constant along the flight direction (along-track), i.e., for each column of the image. Differences in observation angles for each pixel in each row of the image are managed by interpolating three different MODTRAN runs with the proper $\theta_v$ and $\varphi$ angles for, respectively, the left, central and right columns of the image (with respect to the flight direction).
The spectrally averaged values of each parameter are angularly interpolated, and the zero-order reflectance image is calculated. Further iterations allow for the calculation of the N-th order reflectance $\rho^{(N)}$, integrated over the spectral width of the considered band as in (11). The selection of the input data, the settings for the output and the number of iterations can be customised.
The code is entirely developed in the Java language and runs on every OS implementing a Java Virtual Machine able to execute standard Java bytecode.
4. Results
The iterative algorithm was applied to two different images: a simulated image, produced with the in-house developed hyperspectral image simulator PRIMUS (introduced in Section 2.1), and a real image acquired by the CHRIS (Compact High Resolution Imaging Spectrometer) sensor on the PROBA-1 (Project for On-Board Autonomy-1) platform. The simulated image was used to test the algorithm's performance on a supervised data set in which almost all the parameters involved in hyperspectral image acquisition can be controlled. The test on the CHRIS image was exploited to assess the performance of the algorithm when applied to a realistic case study.
4.1. Atmospheric Correction Procedure Applied to the Simulated Image
For the purposes of this paper, the simulation performed by PRIMUS was simplified by directly providing as input a high-spectral-resolution ground-reflectance datacube and by neglecting the simulation of electronic noise and MTF (the effect of the detector was limited to the spectral convolution of the image in each spectral channel). The input for the simulation of the at-sensor-radiance image was a synthetic, high-spectral-resolution, 30 × 20 pixel reflectance datacube with 601 spectral bands at a step of 1 nm in the range of 400 to 1000 nm. This ground reflectance image was generated to represent a critical case in which the adjacency contribution could influence pixel mixing, leading to an incorrect estimation of the ground reflectance. The image was, consequently, not representative of natural ground features; on the contrary, it presented different combinations of spatial and spectral ground patterns. This ground reflectance image was used as input to generate the corresponding at-sensor radiance image. The sensor simulated by PRIMUS has 64 channels with a Gaussian spectral response, mirroring the characteristics of existing hyperspectral sensors in the same spectral range. The positions and spectral widths of the channels are reported in Table 1. As a consequence of the spectral convolution with the simulated sensor's channels, the simulated at-sensor-radiance image maintained the original spatial dimension of 30 × 20 pixels but with only 64 spectral bands in the 400–1000 nm range. At this point, the PRIMUS tool provided the at-sensor-radiance input image to test our retrieval procedure. The atmospheric vertical profile used for the simulation is shown in Figure 2.
The image is made up of six different subimages, characterized by six different 10 × 10 pixel reflectance patterns. The dots in Figure 3 show the pixel positions for which the reflectance was extracted. The pixels were chosen according to the criteria described in the following list.
Pixels surrounded by spatially uniform reflectance:
- A: pixel with black reflectance (spectrally constant, $\rho = 0$) surrounded by pixels with the same reflectance value;
- B: pixel with grey reflectance (spectrally constant) surrounded by pixels with the same reflectance value;
- C: pixel with spectral reflectance linearly increasing from 0.1 (at 400 nm) to 0.9 (at 1000 nm) surrounded by pixels with the same reflectance value;
Pixels surrounded by a chessboard spatial pattern, spectrally constant:
- D: pixel with grey reflectance (spectrally constant) surrounded by a chessboard spatial pattern of pixels with reflectance between 0.5 and 1.0;
- E: pixel with unitary reflectance (spectrally constant, $\rho = 1$) surrounded by a chessboard spatial pattern of pixels with reflectance increasing from 0.0 (at 400 nm) to 0.5 (at 1000 nm);
Pixels surrounded by a random spatial pattern, spectrally variable:
- F: pixel with ground reflectance varying from 0.5 (at 400 nm) to 1.0 (at 1000 nm) surrounded by a random spatial pattern, spectrally variable;
- G: pixel with ground reflectance varying between 0.0 and 1.0 with two maxima (at 500 nm and 900 nm) surrounded by a random spatial pattern, spectrally variable;
- H: pixel with ground reflectance varying between 0.0 and 1.0 with two maxima (at 500 nm and 900 nm) surrounded by a random spatial pattern, spectrally variable.
The procedure for determining the ground reflectance was run iteratively. The result after each iteration was compared with the ground truth to assess the quality of the reconstructed reflectance.
The retrieved reflectance spectrum was compared with the corresponding ground truth image, which is, by definition, the image observed by the sensor, normalized for illumination effects and observed without the presence of the atmosphere. As a consequence, the ground truth has the physical dimensions of a reflectance. It was obtained directly by the convolution of the high-spectral-resolution reflectance image (the same used as input for the at-sensor radiance simulation by PRIMUS) with the sensor spectral response for each channel (as in Table 1).
Figure 3 shows, for the first and last spectral channels, the change in the retrieved reflectance as the iteration number increases. The ground truth reflectance is also shown, together with the position of the eight points whose spectra were individually analysed. Each point was representative of different ground conditions: pixels surrounded by spatially uniform reflectance (A, B, C); pixels surrounded by a chessboard spatial pattern, spectrally constant (D, E); pixels surrounded by a random spatial pattern, spectrally variable (F, G, H).
For points A to H, Figure 4, Figure 5 and Figure 6 illustrate the results of successive iterations compared with the “ground truth” data. The reflectance spectra and the differences from the ground truth are plotted for all the selected points. Note that the algorithm sets to 1 all reflectance values greater than 1. We observed that the ground pattern affected the first retrieval of the ground reflectance (iteration 0), given that it is a spectrum retrieved under the assumption of a spatially constant reflectance value. The retrieved values of reflectance converged to the ground truth (grey line) after the first iteration.
An analogous analysis was carried out on the entire image for each spectral channel: the difference between the entire retrieved reflectance image and the ground truth was evaluated as the iteration number increased. Figure 7 shows, as a function of wavelength, (a) the difference between the spatially averaged reflectance image and the spatially averaged ground truth and (b) the maximum error between the retrieved reflectance and the ground truth after each iteration.
The analysis was extended to the root-mean-square (RMS) error, as shown in Figure 8. The RMS was calculated, respectively: (a) as the root-mean-square of the difference between the retrieved reflectance at points A to H and the corresponding ground truth, averaged over all points of the spectrum; (b) as the root-mean-square of the difference between all the pixels of the retrieved reflectance image and the ground truth, averaged both spatially and spectrally.
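As a formula, case (b) corresponds to (a sketch, with $N_{pix}$ image pixels, $N_{ch}$ spectral channels, $\rho_{ret}$ the retrieved reflectance and $\rho_{GT}$ the ground truth)

$\mathrm{RMS} = \sqrt{\dfrac{1}{N_{pix}\,N_{ch}}\displaystyle\sum_{i=1}^{N_{pix}}\sum_{k=1}^{N_{ch}}\bigl[\rho_{ret}(i,k)-\rho_{GT}(i,k)\bigr]^{2}}$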
4.2. Atmospheric Correction on CHRIS-PROBA Image
CHRIS on PROBA-1 [32], developed by SIRA (UK) and mounted on the mini-satellite PROBA-1, was the first hyperspectral sensor capable of acquiring the same image at five different viewing angles (Viewing Zenith Angle equal to 0°, ±36°, ±55°). In the framework of the ESA project EOPI cat. 1-LBR, Project ID 2832, “Assimilation of biophysical and biochemical variables in biochemical and hydrological models at landscape scale”, the CHRIS sensor acquired a large number of images over the Cal/Val test site, managed by our institution, inside the Migliarino–San Rossore–Massaciuccoli Regional Park (Italy). CHRIS performed acquisitions over 18 spectral bands in the visible and near infrared with a spatial resolution of 18 m. Table 2 summarizes the CHRIS instrumental characteristics.
During the CHRIS acquisitions, an in-field campaign for data validation and assimilation was carried out at San Rossore. A FieldSpec spectroradiometer by ASD Inc. was used to acquire in situ spectral reflectance measurements of different materials belonging to three different classes: soil, vegetation and manmade materials. Measurements mainly consisted of the spectral reflectance of vegetation and soils, but spectra of manmade materials such as cement and tiles were also acquired. Measurements of inland waters were also carried out, in particular on the freshwater canals in the park. As far as vegetation was concerned, the reflectance spectra of grass, leaves (poplar, holm, oak and pine), shrubs (bramble), dry grass and dry shrubs were acquired. Spectral reflectance measurements of soils and sands were performed in different areas. During the acquisitions, the change in solar illumination was tracked by taking one white-reference measurement every 10 min on a 25 × 25 cm Spectralon panel. The spectrum of each material was obtained as the average of 10 measurements. Figure 9 shows different acquired natural and manmade spectra.
Samples of natural targets were also collected and their reflectance was measured in the laboratory using a PERKIN ELMER LAMBDA 19 spectroradiometer. Both in-field and laboratory measurements were used to validate the results of the atmospheric correction procedure.
A selected CHRIS image, characterized by high radiometric quality and clear-sky acquisition, together with the corresponding ground truth acquired for selected pixels, made it possible to test our iterative procedure on a real dataset. Figure 10a shows the at-sensor radiance image acquired with a Viewing Zenith Angle (VZA) of 0° by CHRIS over San Rossore Park on 25 July 2003.
Figure 10b shows the radiance spectra extracted from pixels containing natural or manmade targets, where the discontinuity at 551 nm introduced in the last release (Release 4.1) of the CHRIS radiometrically calibrated data is evident. This release mitigated the unexpected peaks (present in Release 3.1) at 490 nm (close to an O3 feature), 753 nm (close to an O2 feature), and 911 nm (close to an H2O feature), but introduced a new one at 551 nm [33]. The corresponding reflectance spectra obtained by applying the iterative atmospheric correction procedure are shown in Figure 10c. As in the previous case with simulated data, the iterations were stopped at the tenth step after observing that the retrieved reflectance spectrum had reached an asymptotic value. On the basis of the results obtained for the simulated data, we assumed at first that the asymptotic value found by the algorithm was the true reflectance spectrum. This asymptotic value was then compared with the one directly measured in-field for selected pixels.
Following this approach, for each iteration the difference between the reflectance estimated at the generic iteration step and the asymptotic one was evaluated. This difference was spectrally dependent, as shown in Figure 11a. The value of its maximum as the iteration step increased is shown in Figure 11b, where it was evaluated for all the targets shown in Figure 10b,c.
Finally, the spectral reflectance values were numerically compared with reflectance spectra measured in situ and used as ground truth data. The comparison was performed using data coming from selected areas within San Rossore Park. The selection criteria are listed below:
– areas should be easily accessible, in order to perform the measurements during the acquisition;
– areas should, as far as possible, be uniform within a spatial dimension of 54 × 54 m (equal to three times the CHRIS Ground Sampling Distance (GSD)), in order to avoid mixed pixels when comparing ground measurements with reflectance spectra retrieved from the atmospherically corrected CHRIS images.
Taking into account the considerations listed above, one pixel relative to sand and another relative to vegetation were selected in the image. Their comparison is shown in Figure 12a,b, where the retrieved reflectance spectra at the first and last iterations are reported together with the corresponding ground truth values. The maximum and mean values of the relative error between the ground truth and the retrieved values as the iteration number increases are reported in Figure 12c. The graph stops at step 7, since an asymptotic value was reached. The relative error as a function of wavelength is shown in Figure 12d. The difference between the ground truth and the (interpolated) retrieved reflectance spectra for the first 4 iterations is reported in Figure 12e,f.
6. Conclusions
This paper presented an atmospheric correction procedure based on the direct model implemented in the widely known MODTRAN® radiative transfer model and making direct use of the output of the MODTRAN software package starting from version 5.3.2.
The procedure demonstrated the ability to invert the direct model simulated by MODTRAN, thereby determining iteratively the ground reflectance of each pixel of the image and separating the contribution of the surrounding pixels. The procedure was tested both on synthetic images (to take advantage of a simulated, i.e., controlled, environment and a fully known ground truth) and on real hyperspectral images with ground reflectance measurements. The proposed iterative procedure converged rapidly, after step 2, for both the simulated and real cases and for different ground reflectance spectra and patterns.
In the considered case of a real CHRIS image, the mean relative error in the retrieved reflectance after the second iteration remained under 6%, with peaks of approximately 10%. A major source of error can be attributed to the RTM parametrization, in particular to the molecular absorption model, which shows larger errors around the atmospheric absorption bands. As a consequence, better a priori knowledge of the gaseous components of the atmosphere could improve the retrieval accuracy.
The procedure can exploit the possibility of using, whenever available, alternative models to evaluate the surrounding pixels’ contribution (i.e., an analytical relation as a function of the distance from the target pixel), thus allowing the use of different radiative transfer models.
The algorithm can also be implemented in more sophisticated atmospheric correction procedures, aiming to autonomously tune both the abundances of the main atmospheric components (gases and aerosols) and the ground spectral reflectance map.