Article

Statistical Distances and Their Applications to Biophysical Parameter Estimation: Information Measures, M-Estimates, and Minimum Contrast Methods

Department of Geography, Swansea University, Singleton Park, Swansea SA2 8PP, UK
*
Author to whom correspondence should be addressed.
Remote Sens. 2013, 5(3), 1355-1388; https://doi.org/10.3390/rs5031355
Submission received: 16 January 2013 / Revised: 4 March 2013 / Accepted: 7 March 2013 / Published: 14 March 2013

Abstract

Radiative transfer models predicting the bidirectional reflectance factor (BRF) of leaf canopies are powerful tools that relate biophysical parameters such as leaf area index (LAI), fractional vegetation cover fV and the fraction of photosynthetically active radiation absorbed by the green parts of the vegetation canopy (fAPAR) to remotely sensed reflectance data. One of the most successful approaches to biophysical parameter estimation is the inversion of detailed radiative transfer models through the construction of Look-Up Tables (LUTs). The solution of the inverse problem requires additional information on canopy structure, soil background and leaf properties, and the relationships between these parameters and the measured reflectance data are often nonlinear. The commonly used approach for optimization of a solution is based on minimization of the least squares estimate between model and observations (referred to as cost function or distance; here we will also use the terms “statistical distance”, “divergence” or “metric”, which are common in the statistical literature). This paper investigates how least-squares minimization and alternative distances affect the solution to the inverse problem. The paper provides a comprehensive list of different cost functions from the statistical literature, which can be divided into three major classes: information measures, M-estimates and minimum contrast methods. We found that, for the conditions investigated, Least Square Estimation (LSE) is not an optimal statistical distance for the estimation of biophysical parameters. Our results indicate that other statistical distances, such as the two power measures and the Hellinger, Pearson chi-squared, Arimoto and Koenker–Basset distances, result in better estimates of biophysical parameters than LSE; in some cases the parameter estimation was improved by 15%.

1. Introduction

Biophysical parameters estimated from satellite data are important inputs to ecological models and land-surface models [1,2]. Various algorithms have been developed to estimate biophysical parameters from remotely-sensed reflectance data [3,4]. The forward problem, i.e., to predict the reflected radiation and canopy light interactions given the structure and optical properties of the canopy and surface, is well understood and several models exist that produce realistic results [4]. The inverse problem is difficult to solve, however, because the problem is underdetermined. A commonly adopted way to overcome this is to add additional constraints or make a priori assumptions regarding the properties of the land surface, see [5–7].

Biophysical parameters are estimated from satellite data by inverting a sample of the bidirectional reflectance factor, BRF(λ) = f(Angular Geometry, Structural Parameters), where structural parameters (canopy properties, soil background reflectance, etc.) are input parameters and BRF(λ) is the model output (wavelength-dependent reflectance). Numerical solution of this inverse problem adjusts the model parameters such that model-predicted values closely match the measured values [7,8]. The match between model output and data is usually based on minimizing the sum of least squares [9].

Four approaches can be distinguished to estimate biophysical parameters from satellite data; the advantages and limitations of various approaches of biophysical parameter estimation are discussed in [10,11]. A first approach is to estimate biophysical parameters from an empirical relationship with a spectral index, see for example [1]. A second approach is to invert an analytical model; this approach puts a high demand on computing resources if the analytical model is complex. A third approach is to use machine learning, for example by training a neural network using the inputs and outputs of a BRF model [12,13]. A fourth approach is to use LUTs. This is an attractive way to estimate biophysical parameters for various reasons. Solutions of the model can be constrained to a range of realistic input parameters, optimization is fast and the complexity of the analytical model is retained [14]. In the present study we adopt a LUT-based inversion using the FLIGHT radiative transfer model [9,15].
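To make the LUT-based approach concrete, the sketch below matches an observed spectrum to the nearest LUT entry under an arbitrary cost function. It is a minimal illustration, not the FLIGHT inversion code: the array names and the default least-squares cost are assumptions for this example.

```python
import numpy as np

def invert_lut(r_obs, lut_refl, lut_params, cost=None):
    """Return the parameters of the LUT entry whose simulated reflectance
    is closest to the observed spectrum under a chosen cost function.

    r_obs      : (n_bands,) observed BRF spectrum
    lut_refl   : (n_entries, n_bands) simulated BRF spectra
    lut_params : (n_entries, n_params) parameters of each entry (e.g. LAI, fV, fAPAR)
    cost       : callable(q, p) -> float; defaults to least squares (LSE)
    """
    if cost is None:
        cost = lambda q, p: np.sum((q - p) ** 2)   # classical least-squares distance
    distances = np.array([cost(r_obs, row) for row in lut_refl])
    return lut_params[np.argmin(distances)]        # entry minimising the chosen distance
```

Replacing the default cost by any of the statistical distances discussed below changes only the `cost` argument; the LUT itself is unchanged.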

The estimation of biophysical parameters from satellite data is hampered by uncertainties and errors that arise from a number of sources. These include uncertainties in instrument calibration, variations in atmospheric composition or simplifying assumptions in the representation of canopy and soil background [16,17]. Errors in the representation of the canopy and soil background are of particular concern in the present study since they have non-zero mean and are not normally distributed. Outliers and nonlinearities distort the residuals and in these cases a key assumption for using LSE is violated, namely that the residuals behave as white noise with zero mean. For this reason, we investigate three broad classes of statistical distances (in the remote sensing literature referred to as cost functions, elsewhere also known as metrics or divergence measures) that are based on different error distributions. These classes are: information measures of divergence [18], M-estimates [19], and minimum contrast methods [20,21].

The first class of distance or divergence measures is referred to as information measures; optimization using these measures is based on minimization of distances between two probability distributions (Section 3). Thus, we need to rewrite the BRF as a probability distribution function to apply these measures. The information measures can be further divided into three sub-classes. The first subclass is referred to as f-divergences; the earliest member of this class was introduced by Kullback and Leibler [22]. This class of measures is based on the distance between probability distributions (Section 3.1). The f-divergences are not bounded, i.e., they range between 0 and ∞. A second subclass of divergence measures, referred to as blended measures, allows bounds to be calculated explicitly ([23] and Section 3.3). A third subclass—consisting of generalized (h, f)-divergences or superposition of two functions, see Section 3.2—is a generalization of the f-divergences [24].

The second class, of M-estimates, is a broad class of functions to which, among others, least squares optimization belongs. For this class of measures, the BRF is not considered a probability distribution. Within the class of M-estimates are a large number of functions with robust or resistant properties (Section 4).

The third class, of minimum contrast estimates, considers the spectral domain. We express the BRF as a spectral density function (Section 5) to apply these measures.

The present paper has two aims. The first is to provide a review of available statistical distances and divergences to date (Sections 3–5 and Appendix). Only a few of these measures have been applied to parameter estimation from remote sensing data. The second aim is to apply the distance and divergence measures to the estimation of biophysical parameters from satellite data. The availability of a large number of statistical distances gives a high degree of flexibility, since it allows model optimization for a wide range of error distributions. We illustrate this in the numerical experiments, where retrieval of biophysical parameters is tested for simulated needleleaf and broadleaf forests using simulated ground BRF observations. The present paper tests the use of alternative distance measures on simulated observations. This allows assessment of distance measures in a well-controlled environment with known errors in estimated biophysical parameters for a wide range of simulated land-surface conditions. The method will be tested on real observations in a follow-on study [25].

The paper is organized as follows. In Section 2 the estimation of biophysical parameters from Earth Observation (EO) data is expressed in a form comparable with statistical distance theories. Sections 3–5 describe the statistical distance and divergence measures that performed best in our study. Section 6 provides a description of the BRF simulations with FLIGHT; this includes a description of the land-surface scenes and the generation of LUTs. In Section 7 the statistical distance and divergence measures are applied to the estimation of LAI, fV and fAPAR by numerical inversion of the LUTs. The Appendix contains an extensive list of distance measures with references to examples of applications in the peer-reviewed literature.

We acknowledge that the range of conditions (vegetation type, simulated error distribution, land-surface properties, BRF sampling) is limited; the results of this study can therefore only be used as a guideline.

2. Statement of the Problem

The present section formulates the BRF in a way that is appropriate for the application of statistical distances and divergence measures. First we represent the following elements in the LUT: Ri(λ1, ..., λn, θ̄) is a realization of the BRF dependent on wavelength, solar zenith angle, relative azimuth and view zenith angle, LAI, fV, leaf area distribution, ground reflectance, etc. In this notation λ is the wavelength and λ1, ..., λn ∈ Λ, i = 1, ..., N, where, N is the number of entries (rows) in the LUT and θ̄ =(η̄, ζ̄) is a vector with unknown biophysical parameters η̄ = (η1, ..., ηk) of interest to our study (e.g., LAI, fV, fAPAR) and ζ̄ = (ζ1, ..., ζr) is a vector with parameters that we do not need to estimate, either because they are already known (e.g., solar zenith angle, relative azimuth and view zenith angle) or because their value is obtained by other means and is not estimated in the inversion (e.g., crown shape, soil reflectance). Denoting satellite observations by R*(λ1, ..., λn), we estimate the unknown parameters, the elements of the vector η̄*, by minimizing a measure that provides the best “closeness” between R in the LUT and R*.

Let Γ be a class of measures (distances) Γ(R*(λj), Ri(λj, η̄, ζ̄)) between two BRF functions: the observations and the LUT. The classical statistical method of inversion of the radiative transfer model (i.e., estimation of the required η̄*) can be formulated as a semi-parametric problem

$$ \bar{\eta}^{*} = \arg\min_{\bar{\eta}} \sup_{\bar{\zeta}} \, \Gamma\!\left[ R^{*}(\lambda_j),\, R_i(\lambda_j, \bar{\eta}, \bar{\zeta}) \right] $$
The purpose is to find the best estimate for η̄* by solving the minimization problem (1) using different statistical distances and divergences between simulated satellite signals (“observations”) and LUTs. We consider the parameters $\eta_s^*$ and $\eta_{s,i}$, $1 \le i \le N$, close if $|\eta_s^* - \eta_{s,i}| = \min\{ |\eta_s^* - \eta_{s,i}|,\ 1 \le s \le k \}$, $1 \le i \le N$. The classical approach to this minimization problem is known as LSE, which is based on the minimization of the quadratic function
$$ \sum_{\lambda_j \in \Lambda} \left( R^{*}(\lambda_j) - R_i(\lambda_j, \bar{\eta}, \bar{\zeta}) \right)^{2} \;\to\; \min_{\bar{\eta}} $$

We consider alternative statistical distances, which can be divided into three classes. The majority of statistical distances belong to the so-called class of information measures. This class considers distances or measures of divergence between two probability distributions. To apply these functions to biophysical parameter estimation, the BRF must be normalized such that the sum of probabilities is 1. For the observations the normalized expression becomes

$$ Q = (q_1^{*}, \dots, q_n^{*}) = \left( \frac{R^{*}(\lambda_1)}{\sum_{\lambda_j \in \Lambda} R^{*}(\lambda_j)}, \dots, \frac{R^{*}(\lambda_n)}{\sum_{\lambda_j \in \Lambda} R^{*}(\lambda_j)} \right) $$
and for the LUT entries it is
$$ P_i = (p_1^{i}, \dots, p_n^{i}) = \left( \frac{R_i(\lambda_1)}{\sum_{\lambda_j \in \Lambda} R_i(\lambda_j)}, \dots, \frac{R_i(\lambda_n)}{\sum_{\lambda_j \in \Lambda} R_i(\lambda_j)} \right) $$
with $\sum_{l=1}^{n} q_l = 1$ and $\sum_{l=1}^{n} p_l^{i} = 1$ for 1 ≤ i ≤ N. Thus the BRF functions can be rewritten as discrete probability mass functions by a simple normalization.

Finally, to compare the results obtained with different distance measures, the mean absolute error in parameter retrieval is defined; for example, to assess the residual error in estimated LAI the following merit function is used:

$$ \mathrm{Error}_{\mathrm{LAI}} = \frac{1}{M} \sum_{i=1}^{M} \left| \mathrm{LAI}_{\mathrm{true}} - \mathrm{LAI}_{\mathrm{LUT}} \right| $$

In the next three sections we discuss the different statistical distances evaluated in the present study. The distances are applied to two reflectance distributions: P refers to the LUT reflectances and Q to the observed “true” reflectances. To simplify the notation we drop the index i.
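The normalization of Equations (3) and (4) and the merit function of Equation (5) can be written compactly as follows; this is a minimal sketch, with function names chosen for illustration rather than taken from the authors' implementation.

```python
import numpy as np

def to_probability(r):
    """Normalise a BRF spectrum so that its values sum to one (Equations (3)-(4))."""
    r = np.asarray(r, dtype=float)
    return r / r.sum()

def mean_absolute_error(true_vals, retrieved_vals):
    """Merit function of Equation (5): mean absolute retrieval error, e.g. for LAI."""
    return float(np.mean(np.abs(np.asarray(true_vals) - np.asarray(retrieved_vals))))
```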

3. Information Measures

Information theory was born in 1948 when Shannon [26] published his revolutionary paper motivated by the problem of efficiently transmitting information over a noisy channel. Since Mahalanobis [27] introduced the concept of distances between two probability distributions, several other distance measures have been suggested in the statistical literature and these have been referred to as measures of distance between two distributions, measures of separation, measures of discriminatory information and measures of variation-distance. While these measures were not always introduced for the same reason, they all increase when two distributions become “further away” from each other.

Divergence is an important concept in information theory and it is useful in many applications such as multimedia classification, neuroscience, optimization of the performance of density estimation methods, and cluster analysis. Distance measures also allow a wide range of tests to see if samples are from the same distribution.

Entropies are defined over the space of distributions and form the basis of independence/dependence concepts. For this reason, Shannon’s mutual information function has been increasingly utilized in the literature [28]. Shannon’s relative entropy and almost all other entropies fail to be “metrics”, as they violate either symmetry, or the triangle inequality, or both. For these reasons, it is more appropriate to refer to these entropies as measures of divergence rather than measures of distance.

Informally, entropy can be understood as “the quantity of surprise one should feel upon reading the result of a measurement”. More formally, we can write: if event A occurs with probability P (A), define the “information” I(A) gained by knowing that A has occurred to be

$$ I(A) = -\log_2 P(A) $$
The intuitive idea is that the rarer an event A, the more information we gain if we know it has occurred.

3.1. f-Divergence Information Measures

Kullback and Leibler (KL) [22] first introduced the concept of information divergences, which are non-symmetric measures between two distributions P and Q. Typically P represents the “true” distribution of the data and Q represents a model or an approximation of P. In information theory, KL divergence can be interpreted as cross Shannon entropy. This class has been extended in many directions since its initial application in decoding schemes and in signal processing. In particular, Rényi proposed a generalization of Shannon entropy [29], one of a family of functionals for quantifying the diversity, uncertainty or randomness of a system. The Rényi entropies are important in ecology and statistics as indices of diversity. Later, KL and Rényi related divergences were included in a broader class of divergences called f-divergences, introduced by Csiszár [30]. This class can be formulated as follows.

A general class of divergence measures is given by

$$ \Gamma_F[P, Q] = \sum_{l=1}^{n} F(p_l, q_l) $$
where
  • $0 \le p_1, \dots, p_n \le 1$, $0 \le q_1, \dots, q_n \le 1$, $\sum_{l=1}^{n} q_l = 1$ and $\sum_{l=1}^{n} p_l = 1$;

  • F(p, q) is a strictly convex function of p, so that ΓF(P, Q) is a strictly convex function of (p1, ..., pn);

  • For fixed Q, ΓF(P, Q) attains its unconstrained global minimum when pl = ql for all l, i.e., if P = Q;

  • for a given strictly convex twice differentiable function f(.) we define

    $$ \Gamma_f[P, Q] = \sum_{l=1}^{n} q_l\, f\!\left( \frac{p_l}{q_l} \right) $$

Note that the global minimum value is equal to f(1). Thus for a given f
$$ D_f[P, Q] = \Gamma_f[P, Q] - f(1) \ge 0, \qquad D_f[P, Q] = 0 \iff P = Q $$

Many measures have been added to this class from different areas of science and new divergences are still being discovered. Here we present some of these f-divergence measures, see [31]; an additional list can be found in Section A.1 of the Appendix, and a short numerical sketch follows the list below.

  • Let f(x) = x ln(x) and f(1) = 0. The corresponding measure is

    $$ D_f[P, Q] = \sum_{l=1}^{n} p_l \ln\!\left( \frac{p_l}{q_l} \right) $$
    This measure is called the KL divergence; it also corresponds to the maximum likelihood distance. It has wide application in coding theory, signal processing, data compression, data storage and data communication, as well as other areas of science.

  • Let $f(x) = 1/x$ and f(1) = 1; then

    $$ D_f[P, Q] = \sum_{l=1}^{n} \frac{q_l^{2}}{p_l} - 1 = \sum_{l=1}^{n} \frac{(q_l - p_l)^{2}}{p_l} $$
    This measure is called the Pearson chi-square; it is used in the chi-squared test first proposed by K. Pearson.

  • The χα-Vajda divergence corresponds to the function f(x) = |x − 1|^α and f(1) = 0, where α ≥ 1; then

    $$ D_f[P, Q] = \sum_{l=1}^{n} \frac{|p_l - q_l|^{\alpha}}{q_l^{\,\alpha - 1}} $$

  • Let f ( x ) = ( x 1 ) 2 and f(1) = 0, then

    $$ D_f[P, Q] = \sum_{l=1}^{n} q_l \left( \sqrt{\frac{p_l}{q_l}} - 1 \right)^{2} = \sum_{l=1}^{n} \left( \sqrt{p_l} - \sqrt{q_l} \right)^{2} $$
    is called the squared Hellinger measure. It is a bounded distance and has been applied to parameter estimation in censored models (where the variable of interest is only observable under certain conditions), as well as in many other areas of science.

  • The Hellinger measure can be generalized in the following form to give more flexibility in parameter estimation. Assume $f(x) = \left( x^{1/\alpha} - 1 \right)^{\alpha}$ with $\alpha = 2j$, $j = 1, 2, 3, \dots$, and f(1) = 0; then

    $$ D_f[P, Q] = \sum_{l=1}^{n} \left( p_l^{1/2j} - q_l^{1/2j} \right)^{2j}, \qquad j = 1, 2, 3, \dots $$

  • Let $f(x) = (1 - x)^{2j}$, f(1) = 0; then the power divergence has the form

    $$ D_f[P, Q] = \sum_{l=1}^{n} q_l \left( 1 - \frac{p_l}{q_l} \right)^{2j}, \qquad j = 1, 2, 3, \dots $$

  • Power divergence measures [32] with minimum at zero. This class was introduced to unite efficiency with robustness properties; the class is also Fisher consistent:

    $$ D_f[P, Q] = \sum_{l=1}^{n} \frac{p_l \left\{ (p_l / q_l)^{\alpha} - 1 \right\}}{\alpha(\alpha + 1)} $$
    which gives for α = −2, −1, −1/2, 0, 1 the following already known measures: the Neyman chi-squared measure divided by 2, the Kullback–Leibler divergence, the twice-squared Hellinger distance, the likelihood disparity, and the Pearson’s chi-squared divided by 2.
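The sketch below implements several of the f-divergences listed above, following the conventions of Equations (7), (8), (10) and (13). It assumes that P and Q are strictly positive probability vectors of equal length (i.e., the normalized BRFs of Equations (3) and (4)); the function names are illustrative.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence, f(x) = x ln x (Equation (7))."""
    return np.sum(p * np.log(p / q))

def pearson_chi2(p, q):
    """Pearson chi-squared divergence, f(x) = 1/x (Equation (8))."""
    return np.sum((q - p) ** 2 / p)

def squared_hellinger(p, q):
    """Squared Hellinger distance, f(x) = (sqrt(x) - 1)^2 (Equation (10))."""
    return np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

def power_divergence(p, q, alpha):
    """Cressie-Read power divergence family (Equation (13)); alpha not in {0, -1}."""
    return np.sum(p * ((p / q) ** alpha - 1.0)) / (alpha * (alpha + 1.0))

# Example with two normalised "spectra"
p = np.array([0.2, 0.3, 0.5])
q = np.array([0.25, 0.25, 0.5])
print(kl_divergence(p, q), squared_hellinger(p, q), power_divergence(p, q, -0.5))
```

For α = −1/2 the power divergence reproduces twice the squared Hellinger distance, consistent with the list of special cases above.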

3.2. Other Divergence Measures between Two Probability Distributions

There are another two important sub-classes based on the divergence between probability distributions that do not belong to the class of f-divergences. These sub-classes are referred to as (h, f)-measures and f-entropy measures.

The first of these sub-classes, (h, f)-measures, was introduced by [33]. It can be written in the form $D_f^h[P, Q] = h\!\left( D_f[P, Q] \right)$, where h is a differentiable increasing function mapping from $\left[ 0,\, f(0) + \lim_{t \to \infty} f(t)/t \right]$ onto $[0, \infty]$. Under different assumptions, it is shown that the asymptotic distributions of the (h, f)-divergence statistics are either normal or chi-square. These divergences were developed for hypothesis testing on multinomial populations and to test goodness of fit and independence. This class is based on the superposition of two functions and gives a large degree of flexibility to deal with outliers.

One example of these measures is given below; an additional list can be found in Section A.2 of the Appendix.

Rényi divergence with

$$ h(x) = \frac{1}{\alpha(\alpha - 1)} \log\!\left( \alpha(\alpha - 1)x + 1 \right); \qquad f(x) = \frac{x^{\alpha} - \alpha(x - 1) - 1}{\alpha(\alpha - 1)}; \qquad \alpha \neq 0, 1 $$
$$ D_f^{h}[P, Q] = \frac{1}{\alpha(\alpha - 1)} \log\!\left( \sum_{l=1}^{n} \left[ q_l (p_l / q_l)^{\alpha} - \alpha(p_l - q_l) - q_l \right] + 1 \right) $$
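Since Σ p_l = Σ q_l = 1, the expression above reduces to the familiar Rényi divergence of order α. A minimal sketch under that assumption (the function name is illustrative):

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence of order alpha (alpha != 0, 1).

    With sum(p) = sum(q) = 1, the (h, f)-expression above reduces to
    log(sum(p**alpha * q**(1 - alpha))) / (alpha * (alpha - 1)).
    """
    return np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha * (alpha - 1.0))
```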

The second subclass is referred to as entropy measures and can be introduced as follows. Let X be a random variable with probability distribution P. Shannon’s entropy [26] has the form

$$ H(X) = H(P) = -\sum_{l=1}^{n} p_l \log p_l $$
while the cross-entropy is
$$ H(P/Q) = -\sum_{l=1}^{n} p_l \log q_l $$
It is easy to verify that the KL divergence Equation (7) is related to Shannon’s entropy, i.e., Df[P, Q] = H(P/Q) − H(P).

In order to present a systematic way of studying the different entropy measures, Burbea and Rao introduced the so-called f-entropies, by

$$ H_f(X) = H_f(P) = \sum_{l=1}^{n} f(p_l) $$
where f : (0, ∞) → ℝ is a continuous concave function and $f(0) = \lim_{t \to 0} f(t) \in (-\infty, \infty)$. It turned out that some important entropy measures cannot be written as f-entropies. For this reason, [24] defined the (h, f)-entropy as follows,
$$ H_f^{h}(X) = H_f^{h}(P) = h\!\left( H_f(P) \right) $$
where either f : (0, ∞) → R is concave and h : RR is differentiable and increasing, or f : (0, ∞) → R is convex and h : RR is differentiable and decreasing.

Based on the concavity property of the (h, f)-entropy, a new generalization was introduced in [34]:

$$ D_f^{h}[P, Q] = H_f^{h}\!\left( \frac{P + Q}{2} \right) - \frac{H_f^{h}(P) + H_f^{h}(Q)}{2} $$

These measures of divergence were introduced to provide a systematic way of studying different entropy measures. They are used in applications associated with random variables with finite support, such as genetic diversity between populations, the study of taxonomy in biology, tests of population homogeneity in genetics and the analysis of discriminant techniques.

An example of these measures is given below; an additional list can be found in Section A.2 of the Appendix.

Arimoto (1971)

$$ f(x) = x^{1/\alpha}, \qquad h(x) = \frac{x^{\alpha} - 1}{\alpha - 1}, \qquad \alpha > 0,\ \alpha \neq 1 $$
$$ D_f^{h}(P, Q) = \frac{1}{\alpha - 1} \left[ \left( \sum_{l=1}^{n} \left( \frac{p_l + q_l}{2} \right)^{1/\alpha} \right)^{\alpha} - \frac{1}{2} \left[ \left( \sum_{l=1}^{n} p_l^{1/\alpha} \right)^{\alpha} + \left( \sum_{l=1}^{n} q_l^{1/\alpha} \right)^{\alpha} \right] \right] $$

3.3. Blended f-Disparities

A third group of divergences is referred to as blended divergences. Lindsay [32] found that inference based on statistics of the f-divergence type (obtained by replacing either one or both probability distributions by suitable estimators) requires either bounded differentiability of f or boundedness of f itself. He introduced a new class of divergences by modifying the weights inside the integral expression of Pearson’s chi-squared divergence, called the “blended weight chi-squared disparity” BWCS(β) and the “blended weight Hellinger disparity” BWHD(β), β ∈ [0, 1]. In general, all these new classes of disparities share the following property: at the limiting values β = 0 or β = 1, the two original divergences on which the blend was based are recovered. Definitions and theorems can be found in [23].

All blended f-disparities have been used in goodness-of-fit tests in medical statistics and have been shown to be an excellent compromise between Pearson’s chi-square and the log-likelihood ratio tests.

To illustrate the theory of blended divergences, we give several examples below.

A blended weighting scheme that generalizes the Hellinger distance is:

$$ D_{\alpha}(P, Q) = \frac{1}{2} \sum_{l=1}^{n} \frac{(p_l - q_l)^{2}}{\left( \alpha \sqrt{p_l} + (1 - \alpha) \sqrt{q_l} \right)^{2}}, \qquad \alpha \in (0, 1) $$
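A minimal sketch of this blended weight Hellinger disparity, assuming strictly positive probability vectors; the limiting behaviour in the comments follows from the blending property described above.

```python
import numpy as np

def blended_hellinger(p, q, alpha):
    """Blended weight Hellinger disparity with blending parameter alpha in (0, 1).

    alpha = 1/2 recovers twice the squared Hellinger distance, while the limits
    alpha -> 0 and alpha -> 1 give Pearson- and Neyman-type chi-squared disparities.
    """
    denom = (alpha * np.sqrt(p) + (1.0 - alpha) * np.sqrt(q)) ** 2
    return 0.5 * np.sum((p - q) ** 2 / denom)
```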

4. Nonlinear Regression and M-Estimates

Robust regression is a form of regression analysis designed to circumvent some limitations of traditional parametric and non-parametric methods. Regression analysis seeks to find the relationship between one or more independent variables and a dependent variable. Certain widely used methods of regression, such as LSE, have favorable properties if their underlying assumptions are true, but can give misleading results if those assumptions are violated. In cases where errors are not normally distributed, outliers occur, or ordinary LSE assumptions are violated in some other way, the validity of the regression results is compromised if a non-robust regression technique is used. M-estimators (see for example [35]) form a broad class of estimators that exhibit certain robust properties. Estimates with robust regression methods can be more stable with respect to anomalous errors.

We apply M-estimates to the estimation of biophysical parameters from reflectances as follows. Let us consider the BRF R(λj, θ̄) as a nonlinear regression function g(λj, θ̄) of its parameter set, which is observed at wavelength λj ∈ Λ with some noise εj of complicated nature. The observation set R*(λj) = xj can be represented as follows

$$ x_j = g(\lambda_j, \bar{\theta}) + \varepsilon_j, \qquad j = 1, \dots, n $$
where E εj = 0, i.e., εj is random noise with zero mean, not Gaussian in general.

In our case the unknown parameter vector of interest is η̄ = (LAI, fV, fAPAR). We use the M-estimates generated by a loss function ρ(x), x ∈ ℝ; see [19,35] for motivation and properties of these estimates.

The M-estimate of the unknown parameter vector η̄ obtained from the observations xj, j = 1, ..., n, is given by the solution of the minimization problem:

$$ Q_n(\bar{\eta}) = \sum_{j=1}^{n} \rho\!\left( x_j - g(\lambda_j, \bar{\eta}) \right) \;\to\; \min_{\bar{\eta}} $$
The function ψ(x) = ρ′(x) is called the score function and the minimization problem (Equation (23)) can be written in the following equivalent way:
$$ \sum_{j=1}^{n} \psi\!\left( x_j - g(\lambda_j, \bar{\eta}) \right) g'_{\bar{\eta}}(\lambda_j, \bar{\eta}) = 0 $$

The classical LSE corresponds to the case ρ(x) = x², ψ(x) = 2x; in this case the random noise terms εj are i.i.d. random variables with a Gaussian distribution. In general, for nonlinear regression, the εj may be independent but non-Gaussian, or even dependent random variables. It is well known that LSE regression methods are consistent, asymptotically normal and asymptotically efficient. However, when the density function of the errors is non-Gaussian or has a non-symmetric, skewed distribution, LSE estimates are no longer efficient and their application can result in large losses of efficiency. Robust methods replace the sum of squares by more suitable loss functions.

The following examples belong to the class of M-estimates and can be found in [35–37] and elsewhere. The full list of M-estimates can be found in Section A.2 of the Appendix.

  • If the errors are normally distributed, $f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^{2}/2}$, then

    $$ \rho(x) = x^{2}, \qquad \psi(x) = 2x $$
    is the LSE and it is non-robust. It is used widely in many applications in remote sensing, biology, economics and other areas of science since it has the desirable properties described above.

  • The function

    $$ \rho_c(x) = \begin{cases} c\,x, & x \ge 0 \\ (c - 1)\,x, & x < 0 \end{cases} $$
    defines the class of Koenker–Basset estimators [38], with 0 < c < 1. In contrast to classical methods based on least-squares residuals, this measure is robust and has an explicit probabilistic meaning. It was introduced for the estimation of an unknown parameter η̄ in the nonlinear regression model (Equation (23)) when the samples are independent, but the errors εj = xj − g(λj, η̄) have some skewness (non-symmetric) property. The empirical quantile function may be defined in terms of solutions to a simple optimization problem. Explicitly,
    $$ \hat{Q}(c) = \inf\left\{ y \;\middle|\; \sum_{j=1}^{n} \rho_c(\varepsilon_j - y) = \min \right\} $$
    where ρc is the function in Equation (26). One can also interpret this as (E εj = 0)
    $$ P\{ \varepsilon_j \ge 0 \} = c, \qquad P\{ \varepsilon_j < 0 \} = 1 - c $$
    This distance measure has been applied in econometrics to study wage distributions and to fit linear regression quantiles to censored data; a short numerical sketch follows below.
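A minimal sketch of the check loss in Equation (26) applied to spectral residuals; the function name and the example arrays are illustrative assumptions, not values from the study.

```python
import numpy as np

def koenker_basset_loss(residuals, c):
    """Koenker-Basset (check) loss of Equation (26), 0 < c < 1.

    Positive residuals are weighted by c and negative residuals by (c - 1),
    so the loss is asymmetric and suited to skewed error distributions.
    """
    r = np.asarray(residuals, dtype=float)
    return float(np.sum(np.where(r >= 0.0, c * r, (c - 1.0) * r)))

# Example: distance between an observed and a LUT spectrum (illustrative values)
r_obs = np.array([0.05, 0.12, 0.30])
r_lut = np.array([0.06, 0.10, 0.33])
print(koenker_basset_loss(r_obs - r_lut, c=0.7))
```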

5. Minimum Contrast Estimation

To apply minimum contrast measures to our problem, we interpret the BRF as a spectrum or spectral density of some stochastic process. The basic idea behind a minimum contrast estimator is to minimize the distance (contrast) between a parametric model and a non-parametric spectral density. The estimates obtained are first-order efficient and are also attractive because they have robust properties. This class of estimates is close to the class of quasi-likelihood estimators, where instead of independence (which does not hold in many cases) we use asymptotic independence, as discussed below.

We adopt the following terminology from time series analysis to interpret our observations as measurements in the spectral domain [39]. Let {Zt} be a stationary process with spectral density f(λ) = fθ(λ), λ ∈ Λ ⊆ ℝ, with expectation m, and θ ∈ Θ ⊂ ℝ^p. Our aim is to estimate the unknown parameter θ, that is, to identify the true spectrum fθ*(λ).

To implement this idea we consider the so-called quasi-likelihood method [40]. The BRF observations in the spectral domain are written in the form In(λj), λj ∈ Λ = {λ1, ..., λn}, where

$$ I_n(\lambda) = \frac{1}{2\pi n} \left| \sum_{j=1}^{n} (Z_j - m)\, e^{-ij\lambda} \right|^{2}, \qquad \lambda \in \Lambda $$

This is the so-called periodogram, a non-parametric estimate of the spectral density f(λ).

Under some general conditions [39], at the Fourier frequencies λj ∈ Λ, the random variables In(λj) are asymptotically independent and have an exponential distribution, that is

$$ \lim_{n \to \infty} P\{ I_n(\lambda_j) \le u \} = 1 - e^{-u / f(\lambda_j, \theta)}, \qquad u \ge 0 $$
The pdf that corresponds to the distribution function on the right-hand side of Equation (31) takes the form $\frac{1}{f(\lambda_j, \theta)}\, e^{-u / f(\lambda_j, \theta)}$.

Thus, one can construct a quasi-likelihood function or its logarithm

$$ \log L\!\left( I_n(\lambda_1), \dots, I_n(\lambda_n) \right) = \log \prod_{j=1}^{n} \frac{1}{f(\lambda_j, \theta)}\, e^{-I_n(\lambda_j)/f(\lambda_j, \theta)} = -\sum_{j} \left[ \log f(\lambda_j, \theta) + \frac{I_n(\lambda_j)}{f(\lambda_j, \theta)} \right] $$
which has to be maximized in order to estimate θ. It means that we need to minimize the so-called Whittle functional
$$ Q_n(\theta) = \sum_{j} \left[ \log f(\lambda_j, \theta) + \frac{I_n(\lambda_j)}{f(\lambda_j, \theta)} \right] $$
and the quasi-likelihood estimate is
$$ \hat{\theta}_n = \operatorname*{arg\,min}_{\theta \in \Theta} Q_n(\theta) $$

The Whittle estimator was also extended to cover correlated signal-plus-noise models, providing a formal asymptotic distribution theory specifically tailored for parameter estimation. This approach was first applied in time series for exponential volatility models; it then attracted attention in financial econometrics and in related fields. These models are able to represent some of the stylized features of financial returns, such as a lack of correlation in levels but strong dependence in squares and log-squares, and the leverage effect.

The Whittle estimator belongs to a class of more general estimates known as minimum contrast estimates, see [20,21]. To demonstrate the idea, let us assume that the true value of the unknown parameter θ* ∈ int Θ, the interior of Θ. A contrast function for θ* is a deterministic function F(θ*, θ), Fθ* : Θ → ℝ+, which has a unique minimum at θ = θ*.

A contrast process for Fθ* is a sequence of random variables Un(θ), n = 1, 2, ..., such that the following ergodic-like condition holds in probability:

$$ \lim_{n \to \infty} \left[ U_n(\theta) - U_n(\theta^{*}) \right] = F(\theta, \theta^{*}) $$

The minimum contrast estimator is a value of θ for which the function Un(θ) takes its minimum, or

$$ \hat{\theta}_n = \operatorname*{arg\,min}_{\theta \in \Theta} U_n(\theta) $$

Under some sets of conditions the minimum contrast estimators are consistent, see [41]. Often the contrast function can be chosen as a distance L(fθ, g) between two spectral densities fθ(λ) and g(λ), which can be written in the form:

$$ L(f_\theta, g) = \int_{\Lambda} K\!\left\{ f_\theta(\lambda) / g(\lambda) \right\} d\lambda $$
where K(x) is a three times continuously differentiable function on (0, ∞) and has a unique minimum at x = 1. The contrast process in practice can be approximated as follows:
$$ U_n(\theta) = \int_{\Lambda} K\!\left\{ f_\theta(\lambda) / I_n(\lambda) \right\} d\lambda \approx \sum_{\lambda_j \in \Lambda} K\!\left( \frac{f_\theta(\lambda_j)}{I_n(\lambda_j)} \right) $$

The following examples of distances L(fθ, g) are widely used in parametric estimation in time-series analysis in the frequency domain, in particular for autoregressive and moving-average models.

One example is given below; the full list of such distances can be found in Section A.3 of the Appendix.

Let $K(x) = \log x + \frac{1}{x}$. This criterion is equivalent to the quasi-Gaussian maximum likelihood (73) and has the following form

$$ L(f_\theta, g) = \sum_{\lambda_j \in \Lambda} \left\{ \log\!\left( f_\theta(\lambda_j) / g(\lambda_j) \right) + g(\lambda_j) / f_\theta(\lambda_j) \right\} $$

Note that the function K(x) has a unique minimum at x = 1. To obtain a minimum value of zero we subtract K(1) from each term under the sum. In practical applications we use the notation from Section 2, i.e., in the above methodology the abstract parameter θ is replaced by the parameter vector of interest η̄ = (η1, ..., ηk).
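A minimal sketch of the corresponding minimum contrast (Whittle-type) functional with K(x) = log x + 1/x, shifted by K(1) so that the minimum is zero; here the LUT spectrum plays the role of f_θ and the observed BRF that of I_n, and the function name is an assumption for illustration.

```python
import numpy as np

def whittle_contrast(f_theta, i_n):
    """Whittle-type contrast with K(x) = log(x) + 1/x, shifted by K(1) = 1
    so that the minimum value is zero when f_theta equals i_n everywhere."""
    x = np.asarray(f_theta, dtype=float) / np.asarray(i_n, dtype=float)
    return float(np.sum(np.log(x) + 1.0 / x - 1.0))
```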

Additional cost functions from the literature [4248] are summarized in the Appendix.

6. Methodology

6.1. Radiative Transfer Modeling

The performance of the distance measures in terms of retrieving biophysical parameters (LAI, fAPAR, fV) from reflectance values was tested on simulations. This approach has the advantage that the estimated biophysical parameters can be compared directly with model input parameters and that therefore errors associated with the use of different distance measures can be established with accuracy. Moreover, we can test distance measures on a large number of simulations and this provides a good indication of their robustness.

Simulations were carried out following the approach of Prieto-Blanco et al. [9]. We used two models in conjunction to simulate a set of ground BRF observations and to generate the LUTs. Simulations were carried out for three needleleaf and two broadleaf scenes. We used PROSPECT [49] to simulate light scattering and absorption by leaves, and FLIGHT [15] to simulate light scattering and absorption by vegetation canopies. The models used are state of the art and provide realistic simulations of the interaction of solar radiation with the vegetation canopy and the soil.

PROSPECT [49] calculates leaf transmittance and reflectance from 400 to 2,500 nm. In PROSPECT4 each leaf is considered as a stack of N absorbing plates with rough surfaces giving rise to scattering of light. Absorption is calculated as the linear summation of the concentrations of chlorophyll, water, and dry matter, each with their specific absorption coefficients [49]. The PROSPECT input parameters are described in Table 1. The inputs include: N, the leaf structure parameter; Cab, the chlorophyll a + b concentration (μg/cm2); Cw, the equivalent water thickness (g/cm2); Cm, the dry matter content (g/cm2). Chlorophyll content (Cab) in leaves is linked to the maximum photosynthetic capacity of vegetation and varies with leaf development stage, productivity, stress and nitrogen levels. For the LUTs of conifers, a maximum and minimum value for Cab was entered to reflect a range of conditions.

FLIGHT [15] is a 3D radiative transfer model for light interaction with vegetation canopies using Monte Carlo simulation of photon transport. The original model traced the photons’ trajectories forwards from the source until they were absorbed in the canopy or left the canopy boundary. Subsequent improvements include calculation of paths back from any view direction to the intercepted surface facets [50,51], simulation of fine angular resolution, simulation of photosynthesis and simulation of LiDAR signals [52]. A hybrid representation is used to model the discontinuous nature of the forest canopy. Large-scale structure is represented by geometric primitives defining shapes and positions of the tree crowns and trunks, here estimated from a statistical distribution. Within each crown, foliage is approximated by structural parameters of area density, angular distribution and size and optical properties of reflectance and transmittance. These parameters are approximated as homogeneous within each boundary, but may vary between crowns. Simulation of 3D photon trajectories allows accurate evaluation of multiple scattering within crowns, and between distinct crowns, trunks and ground surface. FLIGHT simulations have previously been compared with other 3D canopy radiative transfer models as part of the Radiation Model Intercomparison (RAMI) project [4]. The recent analysis within RAMI of six selected 3D models, including FLIGHT, showed dispersion within 1% over a large range of canopy descriptions, see [53].

Radiation was simulated in 15 spectral bands (500, 560, 630, 690, 700, 740, 790, 830, 870, 1,035, 1,200, 1,250, 1,650, 2,100, 2,250 nm). A previous study [54] suggested such a selection of bands could provide approximately 90% of the information about the land surface that is provided by a full spectrum, although this study was based on field spectroscopy. The set is chosen here to demonstrate the retrieval method and selection of error metrics, but the method is applicable to any set of bands or potential view directions, where the study should be repeated to determine optimal error metrics.

6.2. Sites

The simulations were carried out on two main types of forest: conifer and broadleaf forests. Three conifer representatives were chosen from the former BOREAS sites [55], each characterized by a different dominant species: the Old Black Spruce (Picea mariana) site (OBS), the Old Jack Pine (Pinus banksiana) site (OJP) and the Young Jack Pine (Pinus banksiana) site. Vegetation of these sites has a complex structure, needles show a high degree of clumping and there is mutual shadowing by crowns. These sites are therefore known to pose a challenge to biophysical parameter estimation. Detailed crown, leaf and soil background measurements are available for these sites as these have been extensively studied in [56,57]. Chlorophyll content in coniferous canopies has been estimated in [58]. Changes in leaf chlorophyll produce large differences in leaf reflectance and transmittance spectra, therefore three values of Cab were used to obtain a wide range of possible values [59], (Table 1). Broadleaf simulations were carried out for an oak and beech forest since these are among the most important species for European forestry [60,61].

Table 1 shows the leaf optical properties for the conifer and broadleaf forests that were used in PROSPECT; values are based on [62]. Tables 2 and 3 show the vegetation structure parameters and the angular configurations for the FLIGHT model. Five characteristic soil spectra from the Purdue spectral library were selected (Table 2) [63,64].

6.3. Generation of Look-Up Tables

The LUT contains a total of N = 90,404 entries of BRF reflectances. These are reflectance values calculated for parameters sampled at regular intervals of solar zenith angle, view zenith angle, relative azimuth angle, LAI, fV and Cab, see Table 1. Crown shape parameters can be found in Table 3. fAPAR is obtained by summing the individual fAPAR values at each band, weighted by the fraction of downwelling light within the band:

$$ f_{\mathrm{APAR}} = \sum_{i=1}^{n} F_{a,i}\, W_i $$
where Fa,i is the mean fraction of radiant energy absorbed by the canopy at band i, calculated from FLIGHT output, and Wi are the weights, see also [9].
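A one-line implementation of this weighted sum; the function and argument names are illustrative.

```python
import numpy as np

def broadband_fapar(f_a, w):
    """Broadband fAPAR as the weighted sum of per-band absorbed fractions F_a,i,
    with weights W_i the fraction of downwelling light within band i."""
    return float(np.sum(np.asarray(f_a, dtype=float) * np.asarray(w, dtype=float)))
```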

6.4. Simulation of Observations

We simulated BRF values using the coefficients in Tables 1–3. For each realization the observation tables contain a total of M = 5,000 entries for the three conifer sites (OBS, OJP, YJP) and M = 5,000 for the broadleaf sites (beech and oak). The tables were simulated in the same way as the LUT but using a random selection from the range of parameters in the last column of Table 2 (LAI, fV, angular geometry, atmospheric aerosol depth). Note that the parameters from Tables 1 and 3 stay the same. The observations are simulated from a wider range of conditions that are not present in the conifer and broadleaf LUTs (soils, leaf area distribution). Further errors due to sensor noise and calibration are not considered here, but have been examined in previous studies [9,65] and should be considered prior to application to particular instruments. The tables of “observed” ground reflectances were constructed to contain complex errors for needleleaf and broadleaf forests. These errors, originating from the mismatch between LUT and “observations”, represent conditions encountered in real applications where we have to match a model representing only a selection of conditions with richly varied real-world conditions.

Canopy reflectance models demonstrate increasing sensitivity to soil reflectance at lower vegetation cover. Soil reflectance is one of the most sensitive parameters in canopy reflectance models [17]. Errors are biased and asymmetric, and for this reason we expect some robust distances to perform better than LSE. This effect is more pronounced at lower values of LAI (<3).

Our simulations are necessarily restricted and represent only a subset of all parameters that can be varied within PROSPECT and FLIGHT. We believe that these simulations of errors are more realistic than the commonly adopted method of adding Gaussian noise to the spectrum. Errors due to incorrect assumptions on soil or leaf spectral properties will be spectrally correlated.

7. Results

The statistical distances listed in Sections 3–5 were evaluated as follows. Reflectances in the LUTs were matched to “observed” reflectances: each of the M = 5,000 entries in the “observations” table was matched with one entry in the LUT for each type of forest. The inversion finds parameters for the nearest angular geometry in the LUT, i.e., the angular geometry is known; other parameters are assumed to be unknown. The performance of the statistical distances was then assessed on the estimated biophysical parameters (LAI, fV, fAPAR). Some statistical distances allow parameters that govern the shape of the error distribution to be varied; the choice of these parameters thus leads to another optimization problem in itself. For these cases we tested a range of parameters and chose the value that minimizes the error in the estimated biophysical parameters. All distances were tested by comparing the estimated biophysical parameters with the a priori known parameters. A sketch of this evaluation procedure is given below.
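The sketch below illustrates the evaluation loop: for each simulated observation the best-matching LUT entry is found under a given cost function and the mean absolute LAI error (Equation (5)) is recorded. The names and the two example cost functions are illustrative assumptions, not the study's code.

```python
import numpy as np

def retrieval_error(obs_refl, obs_lai, lut_refl, lut_lai, cost):
    """Mean absolute LAI retrieval error (Equation (5)) for one cost function.

    obs_refl : (M, n_bands) simulated "observed" spectra
    obs_lai  : (M,) true LAI of each observation
    lut_refl : (N, n_bands) LUT spectra
    lut_lai  : (N,) LAI of each LUT entry
    cost     : callable(q, p) -> float
    """
    errors = []
    for spectrum, lai_true in zip(obs_refl, obs_lai):
        d = np.array([cost(spectrum, row) for row in lut_refl])
        errors.append(abs(lai_true - lut_lai[np.argmin(d)]))
    return float(np.mean(errors))

# Two candidate cost functions: LSE on raw reflectances, and squared Hellinger
# on normalised spectra (Section 3 requires normalisation to probabilities).
lse = lambda q, p: np.sum((q - p) ** 2)
hellinger = lambda q, p: np.sum((np.sqrt(q / q.sum()) - np.sqrt(p / p.sum())) ** 2)
```

Comparing `retrieval_error` for different cost functions on the same observation and LUT tables reproduces the type of comparison reported in Tables 4 and 5.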

For the purpose of clarity we present a selection of results that consists of the best performing measures from each of the three classes of statistical distance measures. The results show significant variability in retrieval accuracy depending on the chosen divergence measure. Overall, the optimal measures in each case show an improvement over using LSE, see Tables 4 and 5.

Table 4 shows the best distance measures (bold italic) for estimating biophysical parameters on broadleaf forests from BRF as well as results obtained with LSE (bold text). Table 5 shows the same but for estimating biophysical parameters on conifer forest.

The improvement obtained in estimating biophysical parameters for broadleaf forest is further illustrated in Figure 1. In Figure 1 residual errors in broadleaf biophysical parameters obtained from BRF reflectances by using LSE are compared with errors obtained by Hellinger (Equation (10)) and power divergence (Equation (12)). The range of errors (maximum and minimum errors) for these methods is the same, however the alternative divergences show a marked improvement in the error distribution for all biophysical parameters.

Figure 2 provides a further illustration of the reduction in the error in biophysical parameter estimation for needleleaf forests from BRF reflectance data obtained by power divergences in Equations (12) and (13) (see also Table 5).

The results summarized in Tables 4 and 5 and Figures 1 and 2 illustrate that the use of alternative distance measures can significantly improve parameter estimation. We found that the following distance measures perform well for the cases described above:

  • (1) For the broadleaf forest the best distances were:

We found that compared with LSE, the Koenker–Basset distances (Equations (26) and (35)) gave better results in all cases. The improvement compared with LSE was 15% for LAI, 10% for fV and 11% for fAPAR.

  • (2) For conifer forest the best distances were:

Similar to the broadleaf case, we found that the Koenker–Basset metric (Equations (26) and (35)) improved estimation in all cases. All biophysical parameters, LAI, fV and fAPAR, improved by around 10% in this case.

8. Discussion

8.1. Recommendations for Distance Choice

When optimizing parameterized models for which the error distribution (shape and bias) is known, the user can choose an appropriate cost function based on specific physical properties of the model and metrics. When we deal with non-parametric models or non-linear models with many parameters (such as the present case, where we simulate the BRF using PROSPECT and FLIGHT), it may be useful to check a range of available distances to find the optimal cost function. For problems similar to the present study we can provide the following guidance.

8.2. Considering the Shape of the Distribution

For non-symmetric error distributions the recommended cost function is Koenker–Basset (Equation (26)). Such non-symmetric error distributions may arise, for example, from undetected sub-pixel cloud in pre-processing. Based on the shape of this function, it is expected that the parameter c becomes close to one with increasing skewness of the error distribution. We expect skewed error distributions to be common for biophysical parameter estimation from satellite data, especially when there is a mismatch between the soil reflectance specified in the LUT and the “true” soil reflectance.

For symmetric error distributions we can recommend Hellinger (Equation (10)), Arimoto (Equation (20)) and power divergence (Equation (12)) and standard LSE (Equation (25)).

For right heavy-tailed and left semi-heavy-tailed distributions (i.e., tails that behave as a negative exponential divided by a power function) we can recommend the non-symmetric power divergence (Equation (13)), Vajda (Equation (9)) and Pearson chi-square divergence (Equation (8)) cost functions. As can be seen from Figures 1 and 2, application of specific metrics to the biophysical parameters makes the errors more localized, removes heavy tails and makes the distributions more symmetric. This is due to the non-symmetric nature of the metrics themselves. To the best of our knowledge, these effects are usually not addressed in remote sensing inversion problems.

Special attention should be paid to the class of spectral metrics, since it represents informational distances in the spectral domain. Since the transformation to the spectral domain usually makes the observations asymptotically independent, it is plausible that the spectral metric (Equation (35)) that corresponds to quasi-maximum likelihood provides good results. This is also consistent with the statistical theory that the maximum likelihood estimator is asymptotically optimal. For this reason we recommend using this or a similar cost function.

8.3. Considering Properties of the Estimated Parameters

For all types of forest we found that for fV the optimal metric is power divergence (Equation (12)). This symmetric cost function represents a Rényi-type entropy that maximizes the entropy distribution using power-law behavior. This gives us a better understanding of the nature of the errors in this case.

In the case when parameters of the model are linearly correlated, we observe consistency in the optimal cost function for these parameters (for example LAI, fV and fAPAR in our case). Thus, we can recommend using the same cost function for linearly correlated parameters. However, this does not hold if the correlation has a more complicated nature.

9. Conclusions

Over 60 statistical distances from three major classes, information measures, M-estimates and minimum contrast methods, were obtained from the mathematical literature. A comprehensive list of these statistical distances was provided. The statistical distances were tested to see whether, compared with LSE, they improved the estimation of biophysical parameters for needleleaf and broadleaf forests. We found that the commonly used LSE distance is not the optimal cost function for the cases studied and that better results can be obtained using alternative cost functions.

For the numerical experiments we use PROSPECT and FLIGHT to simulate “observed” reflectance values in 15 different spectral bands. We generate LUTs for a limited set of land-surface and atmospheric conditions. However, for the observations we generate reflectance values for a wider range of conditions and thus introduce a mixture of errors caused by variations in angular geometry, LAI, fV, soil reflectance and leaf angle distribution. For the biophysical parameter estimation we match the observed reflectances to the reflectances of the LUT with different cost functions. The largest sources of (biased) error, i.e., the mismatch between observations and LUT, are potentially related to soils, since only a limited amount of variability associated with these variables was incorporated in the LUTs. We conclude that our analysis resembles a common problem for the estimation of biophysical parameters from satellite data, i.e., one estimates biophysical parameters assuming a limited set of ground conditions. A cost function that is based on an asymmetric, biased or heavy-tailed error distribution can therefore result in better estimates of biophysical parameters than LSE, which is based on a normal error distribution.

We found that the information measures from Section 3 provide better results when the BRF is normalized; see Equations (3) and (4). This result may not be valid for a smaller number of wavebands.

A caveat of the present study is that we analyzed only a limited subset of a wide range of possibilities, and for different applications it is likely that different cost functions may be more suitable. We are preparing a study where cost functions are used on real observations as opposed to simulated observations [25].

Alternative divergence measures and distances have been known in the statistical literature for some time and could find an application in many areas besides remote sensing, such as in biology, geography and geophysics.

The approach outlined in the present study can be extended to other applications that use LUT optimization, interpolation, linearization of parameter space, etc. It can be used in addition to or as an alternative to data training and machine learning schemes. We believe that the use of alternative statistical measures has great potential for remote sensing applications.

Acknowledgments

We thank Ana Prieto for advice and provision of LUT code. This work was funded by the NERC National Centre for Earth Observation (NCEO). Three anonymous reviewers are thanked for their constructive comments to improve the paper.

References

  1. Sellers, P.; Los, S.O.; Tucker, C.J.; Justice, C.O.; Dazlich, D.A.; Collatz, G.J.; Randall, D.A. A revised land-surface parameterization (SiB2) for atmospheric GCMs. Part 2: The generation of global fields of terrestrial biophysical parameters from satellite data. J. Clim 1996, 9, 706–737. [Google Scholar]
  2. Jonckheere, I.; Fleck, S.; Nackaerts, K.; Muysa, B.; Coppin, P.; Weiss, M.; Baret, F. Review of methods for in situ leaf area index determination Part I. Theories, sensors and hemispherical photography. Agric. For. Meteorol 2004, 212, 19–35. [Google Scholar]
  3. Verhoef, W. Light scattering by leaf layers with application to canopy reflectance modeling. The SAIL model. Remote Sens. Environ 1984, 16, 125–141. [Google Scholar]
  4. Widlowski, J.-L.; Taberner, M.; Pinty, B.; Bruniquel-Pinel, V.; Disney, M.; Fernandes, R.; Gastellu-Etchegorry, J.-P.; Gobron, N.; Kuusk, A.; Lavergne, T.; et al. The third Radiation transfer Model Intercomparison (RAMI) exercise: Documenting progress in canopy reflectance models. In J. Geophys. Res.; 2007. [Google Scholar] [CrossRef]
  5. Iaquinta, J.; Pinty, B.; Privette, J.L. Inversion of a physically based bidirectional reflectance model of vegetation. IEEE Trans. Geosci. Remote Sens 1997, 35, 687–698. [Google Scholar]
  6. Gao, F.; Jin, Y.F.; Li, X.W.; Schaaf, C.B.; Strahler, A.H. Bidirectional NDVI and atmospherically resistant BRDF inversion for vegetation canopy. IEEE Trans. Geosci. Remote Sens 2002, 40, 1269–1278. [Google Scholar]
  7. Qi, J.; Kerr, H.; Moran, M.S.; Weltz, M.; Hueye, A.R.; Dorooshian, S.; Bryant, R. Leaf area index estimates using remotely sensed data and BRDF models in a semiarid region. Remote Sens. Environ 2003, 73, 18–30. [Google Scholar]
  8. Pinty, B.; Verstraete, M.M.; Dickinson, R.E. Physical model of the bi-directional reflectance of vegetation canopies. Inversion and validation. J. Geophys. Res 1990, 95, 11767–11775. [Google Scholar]
  9. Prieto-Blanco, A.; North, P.R.J.; Barnsley, M.J.; Fox, N. Satellite-driven modelling of Net Primary Productivity (NPP): Theoretical analysis. Remote Sens. Environ 2009, 113, 137–147. [Google Scholar]
  10. Strahler, A.H. Vegetation canopy reflectance modelling. Recent developments and remote sensing perspectives. Remote Sens. Rev 1997, 15, 179–194. [Google Scholar]
  11. Qiu, J.; Gao, W.; Lesht, B.M. Inverting optical reflectance to estimate surface properties of vegetation canopies. Int. J. Remote Sens 1998, 19, 641–656. [Google Scholar]
  12. Verrelst, J.; Monoz, J.; Alonso, L.; Delegido, J.; Rivera, J.P.; Camp-Valls, G.; Moreno, J. Machine learning regression algorithms for biophysical parameter retrieval: Opportunities for sentinel-2 and -3. Remote Sens. Environ 2012, 118, 127–139. [Google Scholar]
  13. Richter, K.; Hank, T.B.; Vuolo, F.; Mauser, W.; D’Urso, G. Optimal exploitation of the sentinel-2 spectral capabilities for crop leaf area index mapping. Remote Sens 2012, 4, 561–582. [Google Scholar]
  14. Gascon, F.; Gastellu-Etchegorry, J.-P.; Lefevre-Fonollosa, M.-J.; Dufrene, E. Retrieval of forest biophysical variables by inverting a 3-D radiative transfer model and using high and very high resolution imagery. Int. J. Remote Sens 2004, 25, 5601–5616. [Google Scholar]
  15. North, P.R.J. Three-dimensional forest light interaction model using a Monte Carlo method. IEEE Trans. Geosci. Remote Sens 1996, 34, 946–956. [Google Scholar]
  16. Dawson, T.P.; Curran, P.J.; North, P.R.J.; Plummer, S.E. The propagation of foliar biochemical absorption features in forest canopy reflectance: A theoretical analysis. Remote Sens. Environ 1999, 67, 147–159. [Google Scholar]
  17. Privette, J.L.; Myneni, R.B.; Emery, W.J.; Pinty, B. Inversion of a soil bidirectional reflectance model for use with vegetation reflectance models. J. Geophys. Res 1995, 100, 497–525. [Google Scholar]
  18. Pardo, L. Statistical Inference Based on Divergence Measures. In Statistics: A Series of Textbooks and Monographs; Chapman and Hall/CRC: New York, NY, USA, 2006. [Google Scholar]
  19. Staudte, R.G.; Sheather, S.J. Robust Estimation and Testing; John Wiley & Sons, Inc: New York, NY, USA, 1990. [Google Scholar]
  20. Taniguchi, M. On estimation of parameters on Gaussian stationary processes. J. Appl. Prob 1981, 16, 575–591. [Google Scholar]
  21. Taniguchi, M. Minimum contrast estimation for spectral densities of stationary processes. J. R. Stat. Soc 1987, 9, 315–325. [Google Scholar]
  22. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat 1951, 22, 79–86. [Google Scholar]
  23. Kûs, V. Blended divergences with examples. Kybernetika 2003, 39, 43–54. [Google Scholar]
  24. Salicrú, M.; Menéndez, M.L.; Morales, D.; Pardo, L. Asymptotic distribution of (h, φ)-entropies. Commun. Statist. Theor. Method 1993, 22, 2015–2031. [Google Scholar]
  25. Leonenko, G.; Los, S.O.; North, P.R.J. Retrieval of leaf area index from MODIS surface reflectance by model inversion using different minimization criteria. Unpublished work. 2013. [Google Scholar]
  26. Shannon, C.E. The mathematical theory of communication. Bell Syst. Tech. J 1948, 27, 379–423. [Google Scholar]
  27. Mahalanobis, P.C. On the generalised distance in statistics. Proc. Natl. Acad. Sci. India 1936, 2, 49–55. [Google Scholar]
  28. Granger, C.; Lin, J.L. Using the mutual information coefficient to identify lags in nonlinear models. J. Time Ser. Anal 1994, 15, 371–384. [Google Scholar]
  29. Rényi, A. Probability Theory; North-Holland: London, UK, 1970. [Google Scholar]
  30. Csiszár, I. Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten. Magyar Tud. Akad. Mat. Kutato Int. Kozl 1963, 8, 85–108. [Google Scholar]
  31. Kapur, J.N. Maximum-Entropy Models in Science and Engineering; John Wiley & Sons, Inc: New York, NY, USA, 1989. [Google Scholar]
  32. Lindsay, B.G. Efficiency versus robustness: The case for minimum hellinger distance and related methods. Ann. Stat 1994, 22, 1081–1114. [Google Scholar]
  33. Menéndez, M.L.; Morales, D.; Pardo, L.; Salicrú, M. Asymptotic behavior and statistical approach of divergence measures in multinomial populations: A unified study. Stat. Pap 1995, 36, 1–29. [Google Scholar]
  34. Pardo, L.; Morales, D.; Salicrú, M.; Menéndez, M.L. Statistics in applied categorical data analysis with stratified sampling. Utilitas Math 1993, 44, 145–164. [Google Scholar]
  35. Huber, P.J. Robust Statistics; Wiley: New York, NY, USA, 1981. [Google Scholar]
  36. Hampel, F.R.; Rousseeuw, P.J.; Ronchetti, E.M.; Stahel, W. Robust Statistics. The Approach Based on Influence Functions; Wiley: New York, NY, USA, 1989. [Google Scholar]
  37. Edlund, O.; Ekblom, H. Computing the constrained M-estimates for regression. Comput. Stat. Data Anal 2005, 49, 19–32. [Google Scholar]
  38. Koenker, R.; Bassett, G. Regression quantiles. Econometrica 1978, 46, 33–50. [Google Scholar]
  39. Brillinger, D.R. Time Series: Data Analysis and Theory; Holden-Day: San Francisco, CA, USA, 1981. [Google Scholar]
  40. Heyde, C.C. Quasi-Likelihood and Its Application. A General Approach to Optimal Parameter Estimation; Springer: New York, NY, USA, 1997. [Google Scholar]
  41. Dacunha-Castelle, D.; Duflo, M. Probability and Statistics; Springer: New York, NY, USA, 1986. [Google Scholar]
  42. Liese, F.; Vajda, I. Convex Statistical Distances; Teubner: Leipzig, Germany, 1987. [Google Scholar]
  43. Bregman, L.M. The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys 1967, 7, 200–217. [Google Scholar]
  44. Basu, A.; Park, C.; Lindsay, B.G.; Li, H. Some variants of minimum disparity estimation. Comput. Stat. Data Anal 2004, 45, 741–763. [Google Scholar]
  45. Read, T.R.C.; Cressie, N.A.C. Goodness of Fit Statistics for Discrete Multivariate Data; Springer-Verlag: New York, NY, USA, 1988. [Google Scholar]
  46. Sharma, B.D.; Mittal, D.P. New non-additive measures of relative information. J. Combinatorics Inf. Sys Sci 1977, 2, 122–133. [Google Scholar]
  47. Itakura, F.; Saito, S. Analysis Synthesis Telephony Based on the Maximum Likelihood Method. Proceedings of the 6th International Congress on Acoustics, Tokyo, Japan, 21–28 August 1968; pp. C17–C20.
  48. Rényi, A. On Measures of Entropy and Information. Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 30 June–30 July 1961; 1, pp. 547–561.
  49. Jacquemoud, S.; Ustin, S.L.; Verdebout, J.; Schmuck, G.; Andreoli, G.; Hosgood, B. Estimating leaf biochemistry using the PROSPECT leaf optical properties model. Remote Sens. Environ 1996, 56, 194–202. [Google Scholar]
  50. Disney, M.I.; Lewis, P.; North, P.R.J. Monte Carlo ray tracing in optical canopy reflectance modelling. Remote Sens. Rev 2000, 18, 197–226. [Google Scholar]
  51. Barton, C.V.M.; North, P.R.J. Remote sensing of canopy light use efficiency using the photochemical reflectance index: Model and sensitivity analysis. Remote Sens. Environ 2001, 78, 264–273. [Google Scholar]
  52. North, P.R.J.; Rosette, J.A.B.; Suárez, J.C.; Los, S.O. A Monte Carlo radiative transfer model of satellite waveform LiDAR. Int. J. Remote Sens 2010, 31, 1343–1358. [Google Scholar]
  53. Widlowski, J.-L.; Pinty, B.; Disney, M.; Gastellu-Etchegorry, J.-P.; Lavergne, T.; Lewis, P.E.; North, P.R.J.; Thompson, R.; Verstraete, M.M. The RAMI on-line model checker (ROMC): A web-based benchmarking facility for canopy reflectance models. Remote Sens. Environ 2008, 112, 1144–1150. [Google Scholar]
  54. Thenkabail, P.S.; Enclona, E.A.; Ashton, M.S.; Van Der Meer, B. Accuracy assessments of hyperspectral waveband performance for vegetation analysis applications. Remote Sens. Environ 2004, 91, 354–376. [Google Scholar]
  55. Newcomer, J.; Landis, D.; Conrad, S.; Curd, S.; Huemmrich, K.; Knapp, D.; Morrell, A.; Nickeson, J.; Rinker, P.A.D.; Strub, R.; et al. Collected Data for the Boreas Ecosystem-Atmosphere Study; BOREAS Information System NASA/Goddard Space Flight Center: Greenbelt, MD, USA, 2000; (CD-ROM). [Google Scholar]
  56. Sellers, P.; Hall, F.; Kelly, R.; Black, A.; Baldocchi, D.; Berry, J.; Ryan, M.; Ranson, J.; Crill, K.; Lettenmaier, D.; et al. BOREAS. Experiment overview, scientific results and future directions. J. Geophys. Res 1997, 102, 28731–28770. [Google Scholar]
  57. Gamon, J.; Huemmrich, K.; Peddle, D.; Chen, J.; Fuentes, D.; Hall, F.; Kimball, J.S.; Goetz, S.; Gu, J.; McDonald, K.C.; et al. Remote sensing in Boreas: Lessons learned. Remote Sens. Environ 2004, 89, 139–162. [Google Scholar]
  58. Verrelst, J.; Schaepman, M.E.; Malenovsky, Z.; Clevers, J.G.P.W. Effects of woody elements on simulated canopy reflectance: Implications for forest chlorophyll content retrieval. Remote Sens. Environ 2010, 114, 647–656. [Google Scholar]
  59. Kempeneers, P.; Zarco-Tejada, P.J.; North, P.R.J.; De Backer, S.; Delalieux, S.; Sepulcre-Cantó, G.; Morales, F.; Van Aardt, J.A.N.; Sagardoy, R.; Coppin, P.; Scheunders, P. Model inversion for chlorophyll estimation in open canopies from hyperspectral imagery. Int. J. Remote Sens 2008, 29, 5093–5111. [Google Scholar]
  60. Rock, J.; Puettmann, K.J.; Gockel, H.A.; Schulte, A. Spatial aspects of the influence of silver birch (Betula pendula L.) on growth and quality of young oaks (Quercus spp.) in central Germany. Forestry 2004, 77, 235–247. [Google Scholar]
  61. Grote, R.; Reiter, I.M. Competition-dependent modelling of foliage biomass in forest stands. Trees 2004, 18, 596–607. [Google Scholar]
  62. Yang, Z.; Shi, R. Calculation of Mesophyll Structure Parameter and Its Effect on Leaf Spectral Reflectance. Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium (IGARSS ’05), Seoul, Korea, 25–29 July 2005; pp. 1299–1301.
  63. Baumgardner, M.F.; LeRoy, F.S.; Biehl, L.L.; Stoner, E.R. Reflectance properties of soils. Adv. Agron 1985, 38, 1–42. [Google Scholar]
  64. North, P.R.J. Estimation of fAPAR, LAI, and vegetation fractional cover from ATSR-2 imagery. Remote Sens. Environ 2002, 80, 114–121. [Google Scholar]
  65. Moses, W.J.; Bowles, J.H.; Lucke, R.L.; Corson, M.R. Impact of signal-to-noise ratio in a hyperspectral sensor on the accuracy of biophysical parameter estimation in case II waters. Opt. Express 2012, 20, 4309–4330. [Google Scholar]

Appendix

A.1. List of f-Divergence Information Measures

Additional information about the measures below can be found in [31]. The first group lists measures whose minimum value, attained at P = Q, equals f(1); a short numerical sketch evaluating several of the listed measures is given at the end of this subsection.

  • Let $f(x) = x^2$ and f(1) = 1; then

    $D_f[P,Q] = \sum_{l=1}^{n} \frac{p_l^2}{q_l} = \sum_{l=1}^{n} \frac{(p_l - q_l)^2}{q_l} + 1$
    This measure is called the Neyman chi-square divergence. It was introduced to test for goodness of fit in the case of multinomial probabilities. It has wide application in medical statistics.

  • Let f(x) = (x − 1) ln(x) and f(1) = 0, then

    $D_f[P,Q] = \sum_{l=1}^{n} (p_l - q_l)\left(\ln(p_l) - \ln(q_l)\right)$
    This measure is called Jeffreys–Kullback–Leibler. It was introduced to obtain a symmetrical KL divergence.

  • The K-divergence of Lin is used in the analysis of contingency tables. It corresponds to the function $f(x) = x\ln(x) - x\ln\left(\frac{1+x}{2}\right)$ with f(1) = 0 and takes the form

    $D_f[P,Q] = \sum_{l=1}^{n} p_l \ln\left(\frac{2 p_l}{p_l + q_l}\right)$

  • The L-divergence of Lin is a symmetric version of the K-divergence. It corresponds to the function $f(x) = x\ln(x) - (1+x)\ln\left(\frac{1+x}{2}\right)$ with f(1) = 0, thus

    $D_f[P,Q] = \sum_{l=1}^{n} \left[ p_l \ln(p_l) + q_l \ln(q_l) - (p_l + q_l)\ln\left(\frac{p_l + q_l}{2}\right) \right]$

  • Let $f(x) = x^{\alpha}$ with f(1) = 1 and x > 0; then

    $\Gamma_f[P,Q] = \sum_{l=1}^{n} q_l \left(\frac{p_l}{q_l}\right)^{\alpha} = \sum_{l=1}^{n} p_l^{\alpha} q_l^{1-\alpha}$
    Thus,
    $D_f[P,Q] = \sum_{l=1}^{n} p_l^{\alpha} q_l^{1-\alpha} - 1 \geq 0$
    is a measure of discrepancy when α > 1. Havrda and Charvát suggested the first non-additive measure of entropy for 0 < α < 1,
    $D_f[P,Q] = \frac{\sum_{l=1}^{n} p_l^{\alpha} q_l^{1-\alpha} - 1}{e^{\alpha - 1} - 1}$
    For α → 1 this reduces to the Kullback–Leibler distance. When 0 < α < 1, $\sum_{l=1}^{n} p_l^{\alpha} q_l^{1-\alpha}$ is a positive concave function, so its logarithm is also concave.

  • We can use a higher-order cross-entropy, the so-called cross-entropy of order α,

    $D_f[P,Q] = \frac{1}{\alpha - 1} \ln\left(\sum_{l=1}^{n} p_l^{\alpha} q_l^{1-\alpha}\right), \quad \alpha > 0, \ \alpha \neq 1$
    This measure was suggested by Rényi [29]; it plays an important role in ecology and statistics as an index of diversity and is related to the Shannon entropy of integer order.

  • Liese and Vajda [42] proposed the bounded asymptotic Rényi measure, which has applications in signal detection problems. For all α ≠ 0, 1 we define

    $D_f[P,Q] = \frac{1}{\alpha(\alpha - 1)} \ln\left(\sum_{l=1}^{n} p_l^{\alpha} q_l^{1-\alpha}\right)$

  • The harmonic Toussaint measure corresponds to the function $f(x) = \frac{x(x-1)}{x+1}$ with f(1) = 0. It has applications in measuring the dissimilarity between musical rhythms, in music information retrieval and copyright-infringement resolution, and in computational music theory and evolutionary studies of music. The measure has the following form

    $D_f[P,Q] = \sum_{l=1}^{n} \left( p_l - \frac{2 p_l q_l}{p_l + q_l} \right)$

Distances with the minimum at zero:
  • The negative exponential disparity measure is used as an estimator that is asymptotically fully efficient and is robust against outliers and inliers, see [32]:

    $D_f[P,Q] = \sum_{l=1}^{n} q_l \left( \exp\left(-\frac{p_l - q_l}{q_l}\right) - 1 \right)$

  • The Bregman divergences [43] are not true distance measures, because they do not satisfy the triangle inequality and are not symmetric. Bregman divergences are important for two reasons: first, they generalize squared Euclidean distance to a class of distances that all share similar properties; second, they bear a strong connection to exponential families of distributions.

    $D_f[P,Q] = \sum_{l=1}^{n} \frac{p_l^{a} + (a-1)\,q_l^{a} - a\,p_l q_l^{a-1}}{a - 1}, \quad a \neq 1$
    The symmetrized version of the Bregman divergences can be presented in the form
    $D_f[P,Q] = \sum_{l=1}^{n} (p_l - q_l)\left[ p_l^{a-1} - q_l^{a-1} \right]$

  • The powered Pearson divergence is reasonably efficient and robust [44], and has wide applicability in genetics.

    $D_f[P,Q] = \frac{1}{2\alpha^2} \sum_{l=1}^{n} q_l \left( \frac{p_l^{\alpha} - q_l^{\alpha}}{q_l^{\alpha}} \right)^2, \quad \alpha \in (0, 1]$
    It includes Pearson's chi-squared measure (α = 1) and the Hellinger measure (α = 1/2).

  • The Cressie and Read power divergence [45] results in stable disparity measures and solutions when outliers are added to the data. Under certain general conditions this estimator has an asymptotic breakdown point of 50%.

    $D_f[P,Q] = \frac{1}{\alpha(\alpha + 1)} \left[ \sum_{l=1}^{n} p_l \left( \left[ p_l/q_l \right]^{\alpha} - 1 \right) \right], \quad -\infty < \alpha < \infty$

  • The Sharma and Mittal divergences [46] are two generalizations of the Kullback–Leibler measure. One is called the α-order and β-degree divergence measure and the other the 1-order and β-degree divergence measure:

    $D_f[P,Q] = \frac{1}{\beta - 1} \left[ \left( \sum_{l=1}^{n} p_l^{\alpha} q_l^{1-\alpha} \right)^{\frac{\beta - 1}{\alpha - 1}} - 1 \right], \quad \alpha, \beta \neq 1$
    $D_f[P,Q] = \frac{1}{\beta - 1} \left[ \exp\left( (\beta - 1) \sum_{l=1}^{n} p_l \log\left(\frac{p_l}{q_l}\right) \right) - 1 \right], \quad \beta \neq 1$

  • Finally, the β-divergence [47] is given by

    $D_f[P,Q] = \begin{cases} \displaystyle \sum_{l=1}^{n} \frac{1}{\beta(\beta - 1)} \left( p_l^{\beta} + (\beta - 1) q_l^{\beta} - \beta p_l q_l^{\beta - 1} \right), & \beta \notin \{0, 1\} \\ \displaystyle \sum_{l=1}^{n} p_l\left( \log p_l - \log q_l \right) + (q_l - p_l), & \beta = 1 \\ \displaystyle \sum_{l=1}^{n} \frac{p_l}{q_l} - \log\left(\frac{p_l}{q_l}\right) - 1, & \beta = 0 \end{cases}$

This divergence was introduced by Itakura and Saito for the estimation of short-time speech spectra using an autoregressive model. It became popular in speech and acoustics research and has been applied to denoising and up-mixing (mono-to-stereo conversion) of music.
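
As a concrete illustration of how several of the divergences listed above are evaluated in practice, the following minimal sketch compares two discrete distributions, for example a model-predicted and an observed reflectance sample after each has been normalized to sum to one. It is not code from the original study; the vectors `p_model` and `q_obs` are hypothetical values chosen only for illustration, and the Neyman chi-square is written in its shifted form, which is zero when the two distributions coincide.

```python
import numpy as np

def normalize(x):
    """Scale a nonnegative vector so that it sums to one."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def neyman_chi2(p, q):
    """Neyman chi-square divergence (shifted so it vanishes when p == q)."""
    return np.sum((p - q) ** 2 / q)

def jeffreys_kl(p, q):
    """Jeffreys-Kullback-Leibler (symmetrized KL) divergence."""
    return np.sum((p - q) * (np.log(p) - np.log(q)))

def lin_k(p, q):
    """K-divergence of Lin."""
    return np.sum(p * np.log(2.0 * p / (p + q)))

def neg_exp_disparity(p, q):
    """Negative exponential disparity."""
    return np.sum(q * (np.exp(-(p - q) / q) - 1.0))

# Hypothetical normalized reflectance samples (illustrative values only).
p_model = normalize([0.05, 0.08, 0.32, 0.41, 0.14])
q_obs   = normalize([0.06, 0.07, 0.30, 0.43, 0.14])

for name, fn in [("Neyman chi-square", neyman_chi2),
                 ("Jeffreys-KL", jeffreys_kl),
                 ("Lin K-divergence", lin_k),
                 ("Negative exponential", neg_exp_disparity)]:
    print(f"{name:22s} {fn(p_model, q_obs):.6f}")
```

All of these functions assume strictly positive probabilities, which is why the inputs are normalized and contain no zeros.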

A.2. List of the Measures between Two Probability Distributions

List of (h, f)-measures [18]:

  • Sharma–Mittal divergence [46] with

    $h(x) = \frac{1}{\beta - 1}\left( \left(1 + \alpha(\alpha - 1)x\right)^{\frac{\beta - 1}{\alpha - 1}} - 1 \right); \quad f(x) = \frac{x^{\alpha} - \alpha(x - 1) - 1}{\alpha(\alpha - 1)}; \quad \alpha \neq 0, 1$
    $D_f^h[P,Q] = \frac{1}{\beta - 1}\left( \left[ 1 + \sum_{l=1}^{n} \left( q_l (p_l/q_l)^{\alpha} - \alpha(p_l - q_l) - q_l \right) \right]^{\frac{\beta - 1}{\alpha - 1}} - 1 \right)$

  • The Bhattacharyya divergence has interesting applications in signal selection and has the following form

    $h(x) = -\log(-x + 1); \quad f(x) = -x^{1/2} + \frac{1}{2}(x + 1)$
    $D_f^h[P,Q] = -\log\left( 1 + \sum_{l=1}^{n} \left[ \sqrt{p_l q_l} - \frac{1}{2}(p_l + q_l) \right] \right) = -\log\left( \sum_{l=1}^{n} \sqrt{p_l q_l} \right)$

List of the entropy measures:
  • Shannon (1948) [26]

    $f(x) = -x \log x, \quad h(x) = x$
    $D_f^h(P,Q) = -\sum_{l=1}^{n} \left(\frac{p_l + q_l}{2}\right) \log\left(\frac{p_l + q_l}{2}\right) + \frac{1}{2}\left( \sum_{l=1}^{n} p_l \log(p_l) + \sum_{l=1}^{n} q_l \log(q_l) \right)$

  • Rényi (1961) [48]

    $f(x) = x^{\alpha}, \quad h(x) = \frac{1}{\alpha(1 - \alpha)} \log x, \quad \alpha \neq 0, 1$
    $D_f^h(P,Q) = \frac{1}{\alpha(1 - \alpha)} \left[ \log\left( \sum_{l=1}^{n} \left(\frac{p_l + q_l}{2}\right)^{\alpha} \right) - \frac{1}{2}\left( \log \sum_{l=1}^{n} p_l^{\alpha} + \log \sum_{l=1}^{n} q_l^{\alpha} \right) \right]$

  • Varma (1966) [18]

    $f(x) = x^{\alpha - \beta + 1}, \quad h(x) = \frac{1}{\beta - \alpha} \log x, \quad \beta - 1 < \alpha < \beta, \ \beta \geq 1$
    $D_f^h(P,Q) = \frac{1}{\beta - \alpha} \left[ \log\left( \sum_{l=1}^{n} \left(\frac{p_l + q_l}{2}\right)^{\alpha - \beta + 1} \right) - \frac{1}{2}\left( \log \sum_{l=1}^{n} p_l^{\alpha - \beta + 1} + \log \sum_{l=1}^{n} q_l^{\alpha - \beta + 1} \right) \right]$

  • Varma (1966) [18]

    $f(x) = x^{\alpha/\beta}, \quad h(x) = \frac{1}{\beta(\beta - \alpha)} \log x, \quad 0 < \alpha < \beta, \ \beta \geq 1$
    $D_f^h(P,Q) = \frac{1}{\beta(\beta - \alpha)} \left[ \log\left( \sum_{l=1}^{n} \left(\frac{p_l + q_l}{2}\right)^{\alpha/\beta} \right) - \frac{1}{2}\left( \log \sum_{l=1}^{n} p_l^{\alpha/\beta} + \log \sum_{l=1}^{n} q_l^{\alpha/\beta} \right) \right]$

  • Havrda and Charvat (1967) [18]

    $f(x) = \frac{1}{1 - \alpha}\left(x^{\alpha} - x\right), \quad h(x) = x, \quad \alpha > 0, \ \alpha \neq 1$
    $D_f^h(P,Q) = \frac{1}{1 - \alpha} \left[ \sum_{l=1}^{n} \left( \left(\frac{p_l + q_l}{2}\right)^{\alpha} - \frac{p_l + q_l}{2} \right) - \frac{1}{2} \sum_{l=1}^{n} \left( (p_l^{\alpha} - p_l) + (q_l^{\alpha} - q_l) \right) \right]$

  • Sharma and Mittal [46]

    $f(x) = x \log x, \quad h(x) = \frac{\exp((\alpha - 1)x) - 1}{1 - \alpha}, \quad \alpha > 0, \ \alpha \neq 1$
    $D_f^h(P,Q) = \frac{1}{1 - \alpha} \exp\left( (\alpha - 1) \sum_{l=1}^{n} \left(\frac{p_l + q_l}{2}\right) \log\left(\frac{p_l + q_l}{2}\right) \right) - \frac{1}{1 - \alpha} \left[ \frac{1}{2}\left( \exp\left( (\alpha - 1) \sum_{l=1}^{n} p_l \log p_l \right) + \exp\left( (\alpha - 1) \sum_{l=1}^{n} q_l \log q_l \right) \right) \right]$

  • Sharma and Mittal [46]

    $f(x) = x^{\beta}, \quad h(x) = \frac{1}{1 - \alpha}\left( x^{\frac{\alpha - 1}{\beta - 1}} - 1 \right), \quad \alpha > 0, \ \beta > 0, \ \alpha, \beta \neq 1$
    $D_f^h(P,Q) = \frac{1}{1 - \alpha} \left[ \left( \sum_{l=1}^{n} \left(\frac{p_l + q_l}{2}\right)^{\beta} \right)^{\frac{\alpha - 1}{\beta - 1}} - \frac{1}{2}\left( \left[ \sum_{l=1}^{n} p_l^{\beta} \right]^{\frac{\alpha - 1}{\beta - 1}} + \left[ \sum_{l=1}^{n} q_l^{\beta} \right]^{\frac{\alpha - 1}{\beta - 1}} \right) \right]$

  • Ferreri (1980) [18]

    $f(x) = (1 + \alpha x)\log(1 + \alpha x), \quad h(x) = \left(1 + \frac{1}{\alpha}\right)\log(1 + \alpha) - \frac{x}{\alpha}, \quad \alpha > 0$
    $D_f^h(P,Q) = -\frac{1}{\alpha} \sum_{l=1}^{n} \left(1 + \alpha\left[\frac{p_l + q_l}{2}\right]\right) \log\left(1 + \alpha\left[\frac{p_l + q_l}{2}\right]\right) + \frac{1}{\alpha}\left[ \frac{1}{2}\left( \sum_{l=1}^{n} (1 + \alpha p_l)\log(1 + \alpha p_l) + \sum_{l=1}^{n} (1 + \alpha q_l)\log(1 + \alpha q_l) \right) \right]$

  • Kapur (1972) [18]

    $f(x) = \frac{x^{\alpha} + (1 - x)^{\alpha} - 1}{1 - \alpha}, \quad h(x) = x, \quad \alpha \neq 1$
    $D_f^h(P,Q) = \frac{1}{1 - \alpha} \sum_{l=1}^{n} \left[ \left(\frac{p_l + q_l}{2}\right)^{\alpha} + \left(1 - \frac{p_l + q_l}{2}\right)^{\alpha} - 1 \right] - \frac{1}{1 - \alpha}\left[ \frac{1}{2}\left( \sum_{l=1}^{n} \left[ p_l^{\alpha} + (1 - p_l)^{\alpha} - 1 \right] + \sum_{l=1}^{n} \left[ q_l^{\alpha} + (1 - q_l)^{\alpha} - 1 \right] \right) \right]$

  • Burbea (1984) [18]

    $f(x) = \frac{x^{\alpha} - (1 - x)^{\alpha + 1} + (\alpha - 1)^{-1}(2^{\alpha} - 2)\,x}{\alpha - 2}, \quad h(x) = x, \quad \alpha > 0, \ \alpha \neq 1$
    $D_f^h(P,Q) = \frac{1}{\alpha - 2} \sum_{l=1}^{n} \left[ \left(\frac{p_l + q_l}{2}\right)^{\alpha} - \left(1 - \frac{p_l + q_l}{2}\right)^{\alpha + 1} + (\alpha - 1)^{-1}(2^{\alpha} - 2)\left(\frac{p_l + q_l}{2}\right) \right] - \frac{1}{2(\alpha - 2)} \sum_{l=1}^{n} \left[ p_l^{\alpha} - (1 - p_l)^{\alpha + 1} + (\alpha - 1)^{-1}(2^{\alpha} - 2)\,p_l \right] - \frac{1}{2(\alpha - 2)} \sum_{l=1}^{n} \left[ q_l^{\alpha} - (1 - q_l)^{\alpha + 1} + (\alpha - 1)^{-1}(2^{\alpha} - 2)\,q_l \right]$
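
To make the (h, f) construction concrete, the short sketch below evaluates the Bhattacharyya divergence, which reduces to −log Σ√(p_l q_l), and the Shannon-based entropy divergence from the list above, for two strictly positive probability vectors. It is a minimal hedged illustration rather than code from the paper; the sample vectors are hypothetical.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya divergence: minus the log of the Bhattacharyya coefficient."""
    return -np.log(np.sum(np.sqrt(p * q)))

def shannon_entropy(p):
    """Shannon entropy H(P) = -sum p log p."""
    return -np.sum(p * np.log(p))

def shannon_entropy_divergence(p, q):
    """Entropy-based divergence H((P+Q)/2) - (H(P) + H(Q)) / 2."""
    m = 0.5 * (p + q)
    return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

# Hypothetical probability vectors (strictly positive, summing to one).
p = np.array([0.10, 0.20, 0.40, 0.30])
q = np.array([0.15, 0.25, 0.35, 0.25])

print("Bhattacharyya divergence:        ", bhattacharyya(p, q))
print("Shannon-based entropy divergence:", shannon_entropy_divergence(p, q))
```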

A.3. List of the Blended f-Disparities

More information on the measures below can be found in [23]; a short numerical sketch of two of the blended measures follows the list.

  • Pearson–Neyman blend with corresponding blended divergence

    $D_{\beta}(P,Q) := \frac{1}{2} \sum_{l=1}^{n} \frac{(p_l - q_l)^2}{\beta p_l + (1 - \beta) q_l}$
    This blend coincides with the generalized LeCam divergence. It is also bounded:
    $0 \leq D_{\beta}(P,Q) \leq \frac{1}{2(1 - \beta)} + \frac{1}{2\beta}$

  • Blended power divergence, variant A. For $a \in \mathbb{R} \setminus \{0, 1\}$ we have

    $D_{a,\beta}(P,Q) = \frac{1}{a(a - 1)} \left[ \sum_{l=1}^{n} \frac{p_l^{a} + q_l^{a}}{(\beta p_l + (1 - \beta) q_l)^{a - 1}} - 2 \right], \quad a \neq 0, 1$
    Note that $D_{0,\beta}(P,Q)$, which corresponds to the reversed Kullback blend, is unbounded for all β ∈ [0, 1). The Kullback-reversed blend $D_{1,\beta}(P,Q)$ is bounded for all β ∈ (0, 1), with
    $0 \leq D_{1,\beta}(P,Q) \leq -\ln(1 - \beta) - \ln\beta$
    where $D_{1,\beta}(P,Q) = \sum_{l=1}^{n} \left[ p_l \ln\left(\frac{p_l}{\beta p_l + (1 - \beta) q_l}\right) + q_l \ln\left(\frac{q_l}{\beta p_l + (1 - \beta) q_l}\right) \right]$

  • Blended power divergence, variant B. For 0 < |a| < 1 and β ∈ (0, 1),

    $D_{a,\beta}(P,Q) = \mathrm{sign}(a) \sum_{l=1}^{n} \left[ \frac{p_l^{a+1} + q_l^{a+1}}{(\beta p_l + (1 - \beta) q_l)^{a}} - (\beta p_l + (1 - \beta) q_l) \right], \quad 0 < |a| < 1$
    is bounded, but not symmetric.
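
All of the blended measures above are driven by the mixture βp_l + (1 − β)q_l. The sketch below, a hypothetical illustration rather than code from the paper, evaluates the Pearson–Neyman blend and the blend D_{1,β} for several values of β, showing how the blending weight interpolates between the two underlying divergences.

```python
import numpy as np

def pearson_neyman_blend(p, q, beta):
    """Pearson-Neyman blended divergence (generalized LeCam)."""
    m = beta * p + (1.0 - beta) * q
    return 0.5 * np.sum((p - q) ** 2 / m)

def kullback_blend(p, q, beta):
    """Blend D_{1,beta}: KL of P and of Q against the mixture beta*P + (1-beta)*Q."""
    m = beta * p + (1.0 - beta) * q
    return np.sum(p * np.log(p / m) + q * np.log(q / m))

# Hypothetical distributions.
p = np.array([0.10, 0.20, 0.40, 0.30])
q = np.array([0.25, 0.25, 0.25, 0.25])

for beta in (0.1, 0.5, 0.9):
    print(f"beta={beta:.1f}  Pearson-Neyman={pearson_neyman_blend(p, q, beta):.4f}"
          f"  D_1,beta={kullback_blend(p, q, beta):.4f}")
```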

A.4. List of M-Estimates

Information on the measures below is based on [35–37]; a short worked example using the Huber estimator follows the list.

  • More general estimates, associated with the Laplace distribution,

    $\rho(x) = |x|^p, \quad \psi(x) = p\,\mathrm{sign}(x)|x|^{p-1}, \quad 1 \leq p \leq 2$
    are known as Lp-estimates. They have been used in the statistics of speech and image data processing, especially when the data are observed in a transform domain such as the wavelet or discrete Fourier transform domains. For example, the overcomplete wavelet transform coefficients of images are found to have sparse distributions, a property that has been exploited extensively in coding and denoising. It appears that p must be fairly moderate to provide a relatively robust estimator or, in other words, an estimator scarcely perturbed by outlying data.

  • For positive α, the function

    $\rho(x) = \frac{1}{2\alpha} - \frac{\exp(-\alpha x^2)}{2\alpha}, \quad \psi(x) = x \exp\{-\alpha x^2\}$
    leads to the so-called alpha estimator. It has been applied in a special class of change-point models, where the change is defined as a shift in the observation means.

  • For ν, s > 0, we can determine the trigonometric and the hyperbolic estimators

    $\rho(x) = \nu\left( x \arctan(sx) - \frac{\log(s^2 x^2 + 1)}{2s} \right), \quad \psi(x) = \nu \arctan(sx)$
    $\rho(x) = \frac{\nu \log(\cosh(sx))}{s}, \quad \psi(x) = \nu \tanh(sx)$
    Errors in this case follow a logistic distribution.

  • For the Cauchy distribution f(x) = 1/(π(1 + x²)) we have

    $\rho(x) = \frac{c^2}{2} \log\left( \frac{x^2}{c^2} + 1 \right), \quad \psi(x) = \frac{x}{1 + x^2/c^2}$
    Cauchy's function, also known as the Lorentzian function, does not guarantee a unique solution (unicity). With a descending first derivative, such a function has a tendency to yield erroneous solutions. It has been applied in image analysis, in particular to the problem of parametric image registration.

  • Influence functions trimmed at 0 < c < ∞ are presented in the form

    $\psi(x) = x\,\mathbf{1}_{(-c,c)}(x), \quad \psi(x) = \mathbf{1}_{(-c,c)}(x)\,\mathrm{sign}(x)$
    and they specify the trimmed least squares and the trimmed absolute error estimators, respectively.

  • The Welsh distance has the form

    $\rho(x) = \frac{c^2}{2}\left( 1 - \exp(-x^2/c^2) \right)$
    As can be seen from the influence function, the influence of large errors only decreases linearly with their size. However, it has the same problem as the Cauchy distance.

  • The Geman and McClure function tries to reduce the effect of large errors further, but it also cannot guarantee unicity.

    $\rho(x) = \frac{x^2}{1 + x^2}$
    It has been applied successfully in optical flow estimation, image restoration and vision-based recognition of continuous dynamic hand gestures.

  • The Tukey function also encounters the problem of unicity; it can be written as

    $\rho(x) = \begin{cases} \frac{c^2}{6}\left( 3\frac{x^2}{c^2} - 3\frac{x^4}{c^4} + \frac{x^6}{c^6} \right), & |x| \leq c \\ c^2/6, & |x| > c \end{cases}$
    It has applications in economics, computer vision and satellite retrievals.

  • The Huber function has the following form

    $\rho_c(x) = \begin{cases} \frac{1}{2}x^2, & |x| < c \\ c|x| - \frac{1}{2}c^2, & |x| \geq c \end{cases}$
    and the score function is represented as
    $\psi_c(x) = \max\{\min(x, c), -c\}, \quad c > 0$
    It has some optimal properties, see [35], and the function ψc(x) can be approximated by a twice differentiable score function. Huber's M-estimation has been applied in GPS positioning and in the modeling of complex technical experiments, where it reduces the effect of outliers. This estimator is so satisfactory that it has been recommended for almost all situations; however, from time to time, difficulties are encountered, which may be related to a lack of stability.
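
The practical role of ρ and ψ is easiest to see in a simple location problem: the M-estimate minimizes the sum of ρ over the residuals, and ψ determines how strongly each residual pulls on the solution. The sketch below is a minimal hypothetical example, not the estimation code used in this study; it computes a Huber location estimate by iteratively reweighted least squares and compares it with the sample mean on data containing one outlier.

```python
import numpy as np

def huber_psi(r, c=1.345):
    """Huber score function: psi_c(r) = max(-c, min(r, c))."""
    return np.clip(r, -c, c)

def huber_location(y, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted least squares."""
    y = np.asarray(y, dtype=float)
    mu = np.median(y)                       # robust starting value
    mad = np.median(np.abs(y - mu))         # median absolute deviation
    scale = mad / 0.6745 if mad > 0 else 1.0
    for _ in range(max_iter):
        r = (y - mu) / scale
        w = np.ones_like(r)                 # weights psi(r)/r, with limit 1 at r = 0
        nz = r != 0
        w[nz] = huber_psi(r[nz], c) / r[nz]
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

y = [1.1, 0.9, 1.0, 1.2, 0.8, 6.0]          # hypothetical data with one outlier
print("sample mean     :", np.mean(y))
print("Huber M-estimate:", huber_location(y))
```

With the outlier present the mean is pulled towards 6.0, while the Huber estimate stays close to 1, which is the behaviour the bounded score function is designed to produce.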

A.5. Minimum Contrast Estimation List

Detailed information on the following measures can be found in [20,21]; a short numerical sketch evaluating these contrast criteria follows the list.

  • Let K(x) = −log x + x; then

    $L(f_{\theta}, g) = \sum_{\lambda_j \in \Lambda} \left\{ -\log\left( f_{\theta}(\lambda_j)/g(\lambda_j) \right) + f_{\theta}(\lambda_j)/g(\lambda_j) \right\}$

  • Let K(x) = (log x)²; then

    $L(f_{\theta}, g) = \sum_{\lambda_j \in \Lambda} \left\{ \log f_{\theta}(\lambda_j) - \log g(\lambda_j) \right\}^2$

  • Let K(x) = x log x − x; then

    $L(f_{\theta}, g) = \sum_{\lambda_j \in \Lambda} \frac{f_{\theta}(\lambda_j)}{g(\lambda_j)} \left\{ \log\left( \frac{f_{\theta}(\lambda_j)}{g(\lambda_j)} \right) - 1 \right\}$

  • Let $K(x) = (x^{\alpha} - 1)^2$, where 0 < α < ∞; then

    $L(f_{\theta}, g) = \sum_{\lambda_j \in \Lambda} \left\{ \left( f_{\theta}(\lambda_j)/g(\lambda_j) \right)^{\alpha} - 1 \right\}^2$
    This is an α-entropy criterion for a Gaussian process.
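
In the minimum contrast setting the criterion depends only on the ratio f_θ(λ_j)/g(λ_j) over a grid of frequencies, with K penalizing departures of this ratio from one. The sketch below is a hypothetical illustration: the spectral-density family `f_theta`, the reference density `g` and the frequency grid are invented for the example, and α = 0.5 is assumed for the last criterion.

```python
import numpy as np

# Frequency grid and a fixed reference spectral density g (hypothetical).
lam = np.linspace(0.1, np.pi, 50)
g = 1.0 / (1.0 + 0.8 * np.cos(lam)) ** 2

def f_theta(theta):
    """Simple one-parameter family of spectral densities (illustrative only)."""
    return 1.0 / (1.0 + theta * np.cos(lam)) ** 2

# The four contrast functions K(x) applied to the ratio x = f_theta / g.
contrasts = {
    "-log x + x":    lambda x: -np.log(x) + x,
    "(log x)^2":     lambda x: np.log(x) ** 2,
    "x log x - x":   lambda x: x * np.log(x) - x,
    "(x^0.5 - 1)^2": lambda x: (np.sqrt(x) - 1.0) ** 2,
}

for theta in (0.5, 0.8, 0.9):
    ratio = f_theta(theta) / g
    scores = {name: float(np.sum(K(ratio))) for name, K in contrasts.items()}
    print(f"theta = {theta:.1f}:", {k: round(v, 3) for k, v in scores.items()})
```

Because g was generated with the parameter value 0.8, each criterion is smallest at theta = 0.8 among the values tried, where the ratio equals one at every frequency; the first and third contrasts take a non-zero constant value there rather than zero.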

Figure 1. Comparison of residual errors resulting from the application of two different distance measures used to estimate biophysical parameters for broad leaf canopies with a look-up table (LUT). Biophysical parameters, including leaf area index, LAI, fraction of photosynthetically active radiation absorbed by the canopy, fAPAR, and vegetation cover fraction, fV, are estimated from simulated ground reflectance values. The rows from top to bottom show the comparison of residual errors in estimating LAI with the least squares estimate (LSE) and Hellinger Equation (10) (top row), of residual errors in fV with LSE and the power divergence Equation (12) with power 1 and j = 4 (centre row) and of residual errors in fAPAR with LSE and the Hellinger equation (bottom row). The columns from left to right show frequency distributions of residual errors using LSE as a cost function (left column) and the residual errors using either a Hellinger Equation (10) for LAI and fAPAR or a power divergence Equation (12) with power 1 and j = 4 for fV (centre column). The right column shows the quantile–quantile plots comparing the frequency distributions of the left and centre columns expressed as absolute errors. The errors in biophysical parameters associated with the alternative cost functions are smaller and better behaved (more symmetrical and smaller bias).
Figure 2. Comparison of residual errors resulting from the application of two different distance measures used to estimate biophysical parameters for needleleaf canopies with a look-up table (LUT). Biophysical parameters, including leaf area index, LAI, fraction of photosynthetically active radiation absorbed by the canopy, fAPAR, and vegetation cover fraction, fV, are estimated from simulated ground reflectance values. The rows from top to bottom show the comparison of residual errors in estimating LAI with the least squares estimate (LSE) and power Equation (13) with power 2 and α = −5 (top row), of residual errors in fV with LSE and the power divergence Equation (12) with power 1 and j = 4 (centre row), and of residual errors in fAPAR with LSE and the power Equation (13) with power 2 and α = −5 (bottom row). The columns from left to right show frequency distributions of residual errors using LSE as a cost function (left column) and the residual errors using either a power equation for LAI and fAPAR or a power divergence Equation (12) with power 1 and j = 4 for fV (centre column). The right column shows the quantile–quantile plots comparing the frequency distributions of the left and centre columns expressed as absolute errors. The errors in biophysical parameters associated with the alternative cost functions are smaller and better behaved (more symmetrical and smaller bias).
Table 1. PROSPECT input parameters. N is the leaf structure parameter; Cab are the chlorophyll a + b concentrations (μg/cm2); Cw is the equivalent water thickness (g/cm2); and Cm is the dry matter content (g/cm2).
Parameter | CONIFER: OBS     | CONIFER: OJP        | CONIFER: YJP     | BROADLEAF: Beech | BROADLEAF: Oak
N         | 2.47             | 2.55                | 2.55             | 1.43, 1.5, 1.6   | 1.61, 1.97, 2.64
Cab       | 29, 19.39, 27.56 | 27.07, 13.10, 24.27 | 21.89, 19, 29.03 | 44.7             | 65.1
Cw        | 0.04             | 0.01                | 0.03             | 0.02             | 0.008
Cm        | 0.028            | 0.012               | 0.012            | 0.003            | 0.006
Table 2. FLIGHT input parameters. The column “range” represents the minimum and maximum values of the input parameters; column “step” is the increment for the LUT; column “Observed” represents the range over which a random number “r.n.” is selected to generate satellite observed BRF, R*(λ1, ..., λn). Thus the simulations and LUT are generated from different input parameters; in particular, different values are used for viewing geometry, soil reflectance, fractional cover, and leaf area index (Section 6.4).
Parameter                   | Range                  | Step | "Observed"
Solar zenith angle          | 30°–70°                | 10°  | r.n. ∈ 30°–70°
View zenith angle           | 0°–60°                 | 10°  | r.n. ∈ 0°–60°
Relative azimuth angle      | 0°–180°                | 30°  | r.n. ∈ 0°–180°
Fraction of green leaves    | 0.8                    | -    | same
Fraction of shoot material  | 0.05                   | -    | same
Fraction of bark in foliage | 0.15                   | -    | same
Leaf angle distribution     | Spruce leaf, Spherical | -    | same
Soil roughness index        | 0                      | -    | 0
Soil reflectance            | sandy loam             |      | drummer2, jal, lonrina, onaway, talbott
Frac. cover by trees        | 0.1–0.9                | 0.1  | r.n. ∈ 0.0–0.9
LAI                         | 0–7, LAI ≤ 8 FC        | 1    | r.n. ∈ 0.0–7.0
Table 3. FLIGHT input parameters: Crown shapes, where “c” represents cone shape and “e” represents ellipsoid shape.
Parameter                          | Conifer forest     | Broadleaf forest
Crown shape                        | cone               | ellipsoid
Type of forest                     | OBS  | OJP | YJP   | Beech | Oak
Crown shape                        | "c"  | "c" | "c"   | "e"   | "e"
Crown radius (m)                   | 0.45 | 1.3 | 0.85  | 1.2   | 2.6
Crown center to top dist (m)       | 9    | 7.2 | 4     | 4.2   | 3.2
Minimum height to first branch (m) | 0.49 | 6.9 | 0.49  | 6.4   | 7.1
Maximum height to first branch (m) | 0.51 | 7.1 | 0.51  | 10.2  | 9.2
Table 4. Summary statistics for the performance of different distance measures in estimating biophysical parameters from ground reflectance data (BRF); reflectances were simulated for broadleaf trees.
Distance                          | Type             | Parameters | Error LAI | Error fV | Error fAPAR
Equation (7) Kullback–Leibler     | inf. meas.       |            | 0.63      | 0.25     | 0.10
Equation (10) Hellinger           | inf. meas.       |            | 0.61      | 0.26     | 0.09
Equation (12) power               | inf. meas.       | j = 4      | 0.69      | 0.24     | 0.12
Equation (11) Gen Hellinger       | inf. meas.       | j = 2      | 0.66      | 0.26     | 0.10
Equation (14) Rényi               | inf. meas.       | α = 0.5    | 0.63      | 0.26     | 0.10
Equation (21) Blended Gen Helling | inf. meas.       | β = 0.9    | 0.63      | 0.26     | 0.10
Equation (20) Arimoto             | inf. meas.       | α = 0.8    | 0.62      | 0.26     | 0.10
Equation (25) LSE                 | M-estim          |            | 1.79      | 0.34     | 0.21
Equation (26) Koenker–B.          | M-estim          | α = 0.99   | 0.92      | 0.29     | 0.12
Equation (35)                     | min. cont. meth. |            | 1.01      | 0.26     | 0.16
Table 5. Summary statistics for the estimation of biophysical parameters from ground reflectance values (BRF) for needleleaf canopies.
Distance                    | Type             | Parameters | Error LAI | Error fV | Error fAPAR
Equation (9) Vajda          | inf. meas.       | α = 3      | 0.74      | 0.21     | 0.08
Equation (8) Pearson χ2     | inf. meas.       |            | 0.69      | 0.22     | 0.07
Equation (13) power         | inf. meas.       | α = −5     | 0.68      | 0.22     | 0.07
Equation (12) power         | inf. meas.       | j = 4      | 0.77      | 0.19     | 0.08
Equation (14) Rényi         | inf. meas.       | α = 0.5    | 0.70      | 0.22     | 0.078
Equation (21) Blend Helling | inf. meas.       | β = 0.9    | 0.69      | 0.22     | 0.076
Equation (25) LSE           | M-estim          |            | 1.35      | 0.30     | 0.16
Equation (26) Koenker–B.    | M-estim          | α = 0.2    | 1.29      | 0.29     | 0.16
Equation (35)               | min. cont. meth. |            | 0.95      | 0.28     | 0.12
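
Tables 4 and 5 compare cost functions within the same LUT-based retrieval: each simulated observation is matched against every LUT entry, and the biophysical parameters of the entry that minimizes the chosen statistical distance are returned. The sketch below shows that selection step in stripped-down, hypothetical form; `lut_brf`, `lut_lai` and the two distance functions are illustrative stand-ins, not the actual LUT or code behind the tables.

```python
import numpy as np

def lse(p, q):
    """Least squares distance between two reflectance vectors."""
    return np.sum((p - q) ** 2)

def hellinger(p, q):
    """Hellinger-type distance on reflectance vectors normalized to sum to one."""
    p, q = p / p.sum(), q / q.sum()
    return np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

def invert_lut(observed_brf, lut_brf, lut_lai, distance):
    """Return the LAI of the LUT entry that minimizes the chosen distance."""
    scores = np.array([distance(observed_brf, entry) for entry in lut_brf])
    return lut_lai[np.argmin(scores)]

# Hypothetical three-band LUT: simulated BRFs and the LAI used to generate them.
lut_brf = np.array([[0.05, 0.20, 0.40],
                    [0.04, 0.18, 0.45],
                    [0.03, 0.15, 0.50]])
lut_lai = np.array([1.0, 3.0, 5.0])

observed = np.array([0.042, 0.170, 0.440])   # a noisy "observation"
print("LAI selected with LSE:      ", invert_lut(observed, lut_brf, lut_lai, lse))
print("LAI selected with Hellinger:", invert_lut(observed, lut_brf, lut_lai, hellinger))
```

In the full retrieval the same selection is repeated for every observation and for each candidate cost function, and the residuals between retrieved and true parameter values are what Tables 4 and 5 summarize.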
