Article

On Fourier-Based Inequality Indices

by Giuseppe Toscani 1,2
1 Department of Mathematics “L. Casorati”, University of Pavia, 27100 Pavia, Italy
2 Institute of Applied Mathematics and Information Technologies “E. Magenes”, 27100 Pavia, Italy
Entropy 2022, 24(10), 1393; https://doi.org/10.3390/e24101393
Submission received: 6 September 2022 / Revised: 21 September 2022 / Accepted: 26 September 2022 / Published: 29 September 2022

Abstract:
Inequality indices are quantitative scores that take values in the unit interval, with a zero score denoting complete equality. They were originally created to measure the heterogeneity of wealth metrics. In this study, we focus on a new inequality index based on the Fourier transform that demonstrates a number of intriguing characteristics and shows great potential for applications. By extension, it is demonstrated that other inequality measures, such as the Gini and Pietra indices, can be usefully stated in terms of the Fourier transform, allowing us to illuminate characteristics in a novel and straightforward manner.

1. Introduction

As recently discussed in [1,2,3], the challenge of measuring the statistical heterogeneity of measures arises in most fields of science and engineering, and it is one of the fundamental features of data analysis.
In economics and the social sciences, the size measures of interest are wealth measures, and in this context many inequality indices have been introduced [4,5,6,7]. Specifically, inequality indices quantify the socio-economic divergence of a given wealth measure from the state of perfect equality. In this area, the most used measure of inequality is the Gini index, first proposed by the Italian statistician Corrado Gini more than a century ago [8,9]. However, although it has an economic origin, the use of the Gini index is not limited to wealth alone [10].
A second important index of inequality, also introduced in economics, is the Pietra index [11]. As discussed in [12], the Pietra index is an elemental measure of statistical heterogeneity with a number of properties that render it not only an alternative to the popular Gini index, but rather a far more natural and meaningful quantitative tool for the measurement of egalitarianism and, consequently, for the measurement of statistical heterogeneity at large.
Several other indices have been introduced as well. An alternative to the Gini index was introduced by Bonferroni in 1930 in a textbook for students at Bocconi University in Milan [13]. The main properties and representations of the Bonferroni index and its connections with the Gini index and other measures were studied in [3]. Furthermore, it is important to mention the Kolkata index, first introduced in [14] as a measure of inequality, whose connections with the Gini and Pietra indices have been studied in [1,15].
An indispensable tool for measuring statistical heterogeneity of measures is the Lorenz function and its graphical representation, the Lorenz curve [16]. For wealth measures, the Lorenz curve plots the percentage of total income earned by the various sectors of the population, ordered by the increasing size of their incomes. The Lorenz curve is typically represented as a curve in the unit square of opposite vertices in the origin of the axes and the point ( 1 , 1 ) , starting from the origin and ending at the point ( 1 , 1 ) .
The diagonal of the square exiting the origin is the line of perfect equality, representing a situation in which all individuals have the same income. Since the diagonal is the line of perfect equality, one can say that the closer the Lorenz curve is to the diagonal, the more equal is the distribution of income.
This idea of closeness between the line of perfect equality and the Lorenz curve can be expressed in many ways, each of which gives rise to a possible measure of inequality. Thus, starting from the Lorenz curve, several indices of inequality can be defined, including the Gini index. Various indices were obtained by looking at the maximal distance between the line of perfect equality and the Lorenz curve, either horizontally or vertically, or alternatively parallel to the other diagonal of the unit square [2].
Despite the enormous amount of research illustrating the fields of application of inequality indices, the use of arguments based on Fourier transforms appears rather limited. In particular, although the Gini index can easily be expressed in terms of the Fourier transform, to our knowledge this expression has never been considered in applications. The same conclusion can be drawn for the Pietra index, whose expression in terms of the Fourier transform is very useful for understanding its nature, and for introducing from it other Fourier-based measures of inequality, including the one considered in this paper.
We would like to point out that having inequality indices expressed in terms of the Fourier transform could be very interesting for a variety of applications. Indeed, the Fourier transform makes it possible to model many (often very surprising) phenomena ranging from environmental problems to image processing and the social sciences. To name a few: environmental pollution [17], image processing [18,19], and the description of markets as quantum processes, where supply and demand strategies are described as reciprocal Fourier transforms [20].
The objective of this paper is to introduce a new inequality index based on the Fourier transform, which satisfies some properties that make it very interesting for possible applications.
Denote by $\mathcal{P}_s(\mathbb{R})$, $s \ge 1$, the class of all probability measures $F$ on the Borel subsets of $\mathbb{R}$ such that
$$ m_s(F) = \int_{\mathbb{R}} |x|^s \, dF(x) < +\infty. $$
Further, denote by $\tilde{\mathcal{P}}_s(\mathbb{R})$ the class of probability measures $F \in \mathcal{P}_s(\mathbb{R})$ which possess a positive mean value
$$ m(F) = \int_{\mathbb{R}} x \, dF(x) > 0, $$
and by $\mathcal{P}_s^+(\mathbb{R})$ the subset of probability measures $F \in \mathcal{P}_s(\mathbb{R})$ such that $F(x) = 0$ for $x \le 0$. Let $\mathcal{F}_s$ be the set of Fourier transforms
$$ \hat f(\xi) = \int_{\mathbb{R}} e^{-i\xi x} \, dF(x) $$
of probability measures $F$ in $\tilde{\mathcal{P}}_s(\mathbb{R})$. On $\mathcal{F}_s$ we introduce an inequality index, named $T(F)$, given by the formula
$$ T(F) = \frac{1}{2} \sup_{\xi \in \mathbb{R}} \left| \hat f(\xi) - \frac{\hat f'(\xi)}{\hat f'(0)} \right|. \qquad (1) $$
In definition (1), $\hat f'(\xi) = d\hat f(\xi)/d\xi$ denotes the derivative of the Fourier transform $\hat f(\xi)$ with respect to its argument $\xi$. Indeed, $F \in \mathcal{P}_s(\mathbb{R})$, with $s \ge 1$, implies that $\hat f(\xi)$ is continuously differentiable on the whole real line.
In the following, we will show that the functional $T(F)$ is a measure of inequality which satisfies most of the properties required of a good measure of sparsity and/or heterogeneity [10]. The interest in having a measure of inequality based on the Fourier transform, such as $T(F)$, is twofold. On the one hand, it is very simple to calculate the value taken by this measure at probability distributions for which the characteristic function is explicitly available. This is the case, among others, of the Poisson distribution and, for probability measures defined on the whole real line $\mathbb{R}$, of the stable laws. On the other hand, in the case of a discrete probability measure, the use of the Fourier transform makes it possible to develop very fast computational procedures [21,22].
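As an illustration of this last point, here is a minimal numerical sketch (not part of the original paper) that approximates $T(F)$ for a discrete measure by a grid search in $\xi$. It adopts the convention $\hat f(\xi)=\int e^{-i\xi x}\,dF(x)$ used above; restricting the search to $(-2\pi,2\pi)$ is an assumption, motivated by the remark below on where the supremum is usually attained.

```python
import numpy as np

def T_index(values, probs, xi=np.linspace(-2 * np.pi, 2 * np.pi, 20001)):
    """Grid-based sketch of the index T(F) of Eq. (1) for a discrete measure.

    values, probs : support points and probabilities of F (mean must be > 0).
    xi            : grid over which the supremum is approximated (an assumption).
    Convention: hat f(xi) = sum_k p_k exp(-i xi x_k), so hat f'(0) = -i m.
    """
    values = np.asarray(values, dtype=float)
    probs = np.asarray(probs, dtype=float)
    m = np.dot(probs, values)                    # mean value m(F) > 0
    phase = np.exp(-1j * np.outer(xi, values))   # e^{-i xi x_k}
    f_hat = phase @ probs                        # hat f(xi)
    df_hat = phase @ (-1j * values * probs)      # hat f'(xi)
    return 0.5 * np.max(np.abs(f_hat - df_hat / (-1j * m)))

# Bernoulli(p): the closed form T = 1 - p (see Section 4.1) versus the grid value.
p = 0.3
print(T_index([0.0, 1.0], [1 - p, p]), 1 - p)
```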
From a certain point of view, the measure of inequality defined by (1) has many points of contact with the inequality measures obtained from the Lorenz curve through the concept of maximum distance.
In fact, the index (1) expresses the maximum value of the modulus of the difference between the Fourier transform of a probability measure F of positive mean m and its derivative normalized by the mean. In the economic context, the closer the Fourier transform of the probability measure is to its derivative normalized by dividing it by the mean, the more equal is the distribution of income. In other words, the line of perfect equality in the Lorenz square is here replaced by the Fourier transform of a Dirac delta function located at a point different from zero.
It is interesting to note that, as will become clear from the examples, the maximum value is usually attained in the finite interval $(-2\pi, 2\pi)$.
Before studying the new index $T(\cdot)$ defined in (1) and listing its properties, we will begin with a brief introduction to the use of the Fourier transform to express the classical Gini and Pietra indices. This will be done in Section 2. As we shall see, the use of the Fourier transform allows us to clarify the functional setting in which these indices live. It is worth mentioning that, unlike the classical Gini and Pietra indices, neither the Bonferroni index nor the Kolkata index seems to be expressible in closed form in terms of the Fourier transform.
Next, Section 3 will be devoted to the study of the main properties of the new inequality measure. Various examples will be collected in Section 4. Last, Section 5 illustrates how some properties of the index can be fruitfully used in connection with linear kinetic models.

2. A Fourier Approach to Gini and Pietra Indices

2.1. A Fourier-Based Expression of Gini Index

In the rest of the paper, for any fixed constant $a>0$, we will denote by $F_a(x)$ the Heaviside step function defined by
$$ F_a(x) := \begin{cases} 0, & x < a,\\ 1, & x \ge a.\end{cases} $$
Clearly, $F_a(x)$ is the cumulative measure function of a random variable which is almost surely equal to a. It belongs to $\mathcal P_s(\mathbb{R})$ for any $s\ge1$, and $m(F_a) = a$.
To obtain an explicit expression in Fourier transform for the Gini index, which admits many equivalent formulations [23], we will resort to its well-known form in terms of a continuous probability measure. For a probability measure $F \in \mathcal P_s^+(\mathbb{R})$ with mean m, the Gini index is defined by the formula
$$ G(F) = 1 - \frac1m\int_{\mathbb{R}_+}\left(1-F(x)\right)^2dx. $$
Since $F\in\mathcal P_s^+(\mathbb{R})$, $F(x)=0$ for $x\le0$. Hence, resorting to the definition of the Heaviside step function $F_0(x)$, we have the identity
$$ \int_{\mathbb{R}_+}\left(1-F(x)\right)^2dx = \int_{\mathbb{R}}\left|F_0(x)-F(x)\right|^2dx. $$
For any given pair of probability measures $F, G \in \mathcal P_s(\mathbb{R})$, the Parseval formula implies
$$ \int_{\mathbb{R}}|F(x)-G(x)|^2\,dx = \frac{1}{2\pi}\int_{\mathbb{R}}|\hat F(\xi)-\hat G(\xi)|^2\,d\xi, $$
where $\hat F$ and $\hat G$ are the Fourier transforms of the probability measures F, G. If
$$ \hat f(\xi) = \int_{\mathbb{R}} e^{-i\xi x}\,dF(x), \qquad \hat g(\xi) = \int_{\mathbb{R}} e^{-i\xi x}\,dG(x), $$
it holds that
$$ \hat F(\xi) - \hat G(\xi) = \frac{\hat f(\xi)-\hat g(\xi)}{i\xi}. \qquad(5) $$
Indeed, considering that $F(-\infty)-G(-\infty) = F(+\infty)-G(+\infty) = 0$, integration by parts gives
$$ \int_{\mathbb{R}}\left(F(x)-G(x)\right)e^{-i\xi x}\,dx = \int_{\mathbb{R}}\left(F(x)-G(x)\right)\frac{d}{dx}\!\left(\frac{e^{-i\xi x}}{-i\xi}\right)dx = \left[\left(F(x)-G(x)\right)\frac{e^{-i\xi x}}{-i\xi}\right]_{-\infty}^{+\infty} + \frac{1}{i\xi}\int_{\mathbb{R}} e^{-i\xi x}\,d\left(F(x)-G(x)\right) = \frac{\hat f(\xi)-\hat g(\xi)}{i\xi}. $$
Consequently, we have the identity
$$ \int_{\mathbb{R}}|F(x)-G(x)|^2\,dx = \frac{1}{2\pi}\int_{\mathbb{R}}\frac{|\hat f(\xi)-\hat g(\xi)|^2}{\xi^2}\,d\xi. $$
Therefore, for any probability measure $F\in\mathcal P_s^+(\mathbb{R})$, the Gini index has a simple expression in Fourier transform, given by
$$ G(F) = 1 - \frac{1}{2\pi m}\int_{\mathbb{R}}\frac{|1-\hat f(\xi)|^2}{\xi^2}\,d\xi. \qquad(7) $$
Remark 1.
For a given constant $q>0$, let $\dot H^{-q}$ denote the homogeneous Sobolev space of fractional order with negative index $-q$, endowed with the norm
$$ \|h\|^2_{\dot H^{-q}} = \int_{\mathbb{R}}|\xi|^{-2q}\,|\hat h(\xi)|^2\,d\xi. $$
Then, the variable part of the Gini index coincides with the scaling-invariant distance between the probability measure F and the Heaviside step function $F_0$ in the homogeneous Sobolev space $\dot H^{-1}$.
Remark 2.
Considering that the value zero in (7) is obtained when $\hat f(\xi) = e^{-im\xi}$, namely when $F = F_m$, we can rewrite the Gini index as
$$ G(F) = \frac{1}{2\pi m}\left(\int_{\mathbb{R}}\frac{|1-e^{-im\xi}|^2}{\xi^2}\,d\xi - \int_{\mathbb{R}}\frac{|1-\hat f(\xi)|^2}{\xi^2}\,d\xi\right). \qquad(9) $$
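Expression (7) lends itself to direct numerical evaluation. The following small sketch (not part of the original paper) computes the Fourier-side integral for an exponential law, whose Gini index is known to equal 1/2; the rate and the finite cutoff of the $\xi$-integral are arbitrary assumptions.

```python
import numpy as np
from scipy.integrate import quad

# Check of the Fourier expression (7) for the Gini index on an Exponential(lam) law.
lam = 2.0
m = 1.0 / lam                                    # mean of the exponential law

def integrand(xi):
    f_hat = 1.0 / (1.0 + 1j * xi / lam)          # hat f(xi) = int e^{-i xi x} dF(x)
    return abs(1.0 - f_hat) ** 2 / xi ** 2

# the integrand is even and decays like 1/xi^2, so a finite cutoff suffices
val, _ = quad(integrand, 1e-9, 1e4, limit=200)
gini_fourier = 1.0 - 2.0 * val / (2.0 * np.pi * m)
print(gini_fourier)                              # ~ 0.5, the known Gini index
```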

2.2. Another Fourier-Based Inequality Measure

Expression (9) suggests considering a related expression in which the dispersion of the probability measure F of mean value $m>0$ coincides with its scaling-invariant $\dot H^{-1}$–distance from the Heaviside step function $F_m$ with the same mean value m. We define
$$ H(F) = \frac{1}{2\pi m}\int_{\mathbb{R}}\frac{|\hat f(\xi)-e^{-im\xi}|^2}{\xi^2}\,d\xi. \qquad(10) $$
Unlike the Gini index, which requires $F\in\mathcal P_s^+(\mathbb{R})$, the inequality measure H(F) is well-defined for any measure $F\in\tilde{\mathcal P}_s(\mathbb{R})$.
It is interesting to remark that, similarly to the Gini index, the inequality measure H(F), for $F\in\mathcal P_s^+(\mathbb{R})$, is bounded above by 1. This property is shown in Appendix A.
The interest in having an inequality index that quantifies the statistical heterogeneity of probability measures defined on the whole real line $\mathbb{R}$ in terms of the Fourier transform is evident. As an example, let us compute the value of the functional H for a Gaussian probability measure F of mean $m>0$ and variance $\sigma^2$. Since the Fourier transform of the Gaussian density is given by
$$ \hat f(\xi) = \exp\left(-im\xi - \frac{\sigma^2}{2}\xi^2\right), \qquad(11) $$
we easily obtain
$$ H(F) = \frac{1}{2\pi m}\int_{\mathbb{R}}\frac{\left(1-\exp\left(-\frac{\sigma^2}{2}\xi^2\right)\right)^2}{\xi^2}\,d\xi. $$
Integration by parts yields
$$ \int_0^{+\infty}\frac{\left(1-\exp\left(-\frac{\sigma^2}{2}\xi^2\right)\right)^2}{\xi^2}\,d\xi = \left[-\left(1-\exp\left(-\frac{\sigma^2}{2}\xi^2\right)\right)^2\frac1\xi\right]_0^{+\infty} + \int_0^{+\infty}\frac{2}{\xi}\left(1-\exp\left(-\frac{\sigma^2}{2}\xi^2\right)\right)\sigma^2\xi\exp\left(-\frac{\sigma^2}{2}\xi^2\right)d\xi = 2\sigma\int_0^{+\infty}\left(e^{-x^2/2}-e^{-x^2}\right)dx = \sigma\left(\sqrt2-1\right)\sqrt\pi. $$
Thus, for a Gaussian probability measure F of mean $m>0$ and variance $\sigma^2$ we have the value
$$ H(F) = \frac{\sqrt2-1}{\sqrt\pi}\,\frac{\sigma}{m} \approx 0.234\,\frac{\sigma}{m}, $$
namely a value proportional to the coefficient of variation σ/m, with an explicit constant strictly less than one.
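The closed-form constant derived above can be checked against definition (10) directly. The following sketch is not part of the original paper; the parameter values m = 3 and σ = 0.5 are arbitrary choices for illustration.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the Gaussian value of H(F) obtained from definition (10).
m, sigma = 3.0, 0.5

def integrand(xi):
    return (1.0 - np.exp(-0.5 * sigma ** 2 * xi ** 2)) ** 2 / xi ** 2

val, _ = quad(integrand, 1e-12, np.inf)          # half-line integral (integrand is even)
H_numeric = 2.0 * val / (2.0 * np.pi * m)
H_closed = (np.sqrt(2.0) - 1.0) / np.sqrt(np.pi) * sigma / m
print(H_numeric, H_closed)                       # the two values should agree
```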

2.3. A Fourier-Based Expression of Pietra Index

For a probability measure $F\in\mathcal P_s^+(\mathbb{R})$ with mean m, the Pietra index P(F) [11,12] is defined by the formula
$$ P(F) = \frac1m\int_m^{+\infty}\left(1-F(x)\right)dx. \qquad(13) $$
As remarked in [12], the definition (13) seems to disregard the part of the measure below the mean. This, however, is not true, and (13) makes use of the full information encapsulated in the probability law of the random variable X with measure F.
There is a simple way to verify the previous assertion. Indeed, since
$$ m = \int_{\mathbb{R}_+}\left(1-F(x)\right)dx, $$
it holds
$$ H(F) = \frac1m\int_{\mathbb{R}_+}\left[F(x)-F_m(x)\right]^2dx = \frac1m\int_{\mathbb{R}_+}\left[1-F(x)-\left(1-F_m(x)\right)\right]^2dx $$
$$ = \frac1m\int_{\mathbb{R}_+}\left[1-F(x)\right]^2dx + \frac1m\int_{\mathbb{R}_+}\left[1-F_m(x)\right]^2dx - \frac2m\int_{\mathbb{R}_+}\left[1-F(x)\right]\left[1-F_m(x)\right]dx $$
$$ = \frac1m\int_{\mathbb{R}_+}\left[1-F(x)\right]^2dx + 1 - \frac2m\int_0^m\left[1-F(x)\right]dx = \frac1m\int_{\mathbb{R}_+}\left[1-F(x)\right]^2dx - 1 + \frac2m\int_m^{+\infty}\left[1-F(x)\right]dx = -G(F) + 2P(F). $$
Hence, we have the identity
$$ P(F) = \frac12\left[G(F) + H(F)\right]. $$
In other words, the Pietra index of a probability measure $F\in\tilde{\mathcal P}_s^+$ is the arithmetic mean of the two indices G(F) and H(F), where H is defined in (10).
Resorting to the Fourier expressions of the Gini and H indices, we then obtain for the Pietra index the expression
$$ P(F) = \frac12\left[1 - \frac{1}{2\pi m}\int_{\mathbb{R}}\frac{|1-\hat f(\xi)|^2}{\xi^2}\,d\xi + \frac{1}{2\pi m}\int_{\mathbb{R}}\frac{|\hat f(\xi)-e^{-im\xi}|^2}{\xi^2}\,d\xi\right]. \qquad(15) $$
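The identity $P(F)=\tfrac12[G(F)+H(F)]$ can be probed numerically. The sketch below (not part of the original paper) evaluates G and H for an exponential law from their Fourier expressions on a truncated $\xi$-grid and compares $(G+H)/2$ with the known Pietra value $1/e$; the grid and cutoff are arbitrary assumptions.

```python
import numpy as np

# Consistency check of P = (G + H)/2 for an Exponential(lam) law (G = 1/2, P = 1/e).
lam = 1.0
m = 1.0 / lam
xi = np.linspace(1e-6, 2000.0, 2_000_001)            # truncated xi-grid (assumption)
dxi = xi[1] - xi[0]

f_hat = 1.0 / (1.0 + 1j * xi / lam)                   # hat f(xi) = int e^{-i xi x} dF(x)

G = 1.0 - 2.0 * np.sum(np.abs(1.0 - f_hat) ** 2 / xi ** 2) * dxi / (2.0 * np.pi * m)
H = 2.0 * np.sum(np.abs(f_hat - np.exp(-1j * m * xi)) ** 2 / xi ** 2) * dxi / (2.0 * np.pi * m)

print(G, H, (G + H) / 2.0, 1.0 / np.e)                # (G + H)/2 ~ 0.3679
```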
Remark 3.
The Fourier expression (15) clarifies that the Pietra index is obtained by taking into account at the same time the distances in $\dot H^{-1}$ of a probability measure in $\mathcal P_s^+(\mathbb{R})$ from the Dirac delta functions located at zero and, respectively, at the mean value m. From this point of view, the Pietra index appears as a well-balanced inequality index. This feature is hidden in the classical definition.
Remark 4.
Since
$$ H(F) = 2P(F) - G(F), $$
the values of the inequality index H(F) for a large number of probability measures can easily be computed by resorting to the tables of values of the Gini and Pietra indices.
Remark 5.
If one considers only one-dimensional discrete measures, the inequality index H(F) defined by (10) coincides with a particular case of the discrepancy function recently introduced in [22], where the discrepancy measures the $L^2$ distance between the characteristic functions of two given discrete measures, weighted by the function $k^{-2}$, with $k=1,2,\dots,N$. In this case, one of the two discrete measures is a Dirac delta function located at the mean value.

2.4. Towards New Inequality Indices

Apart from the scaling constant, the functional H(F) coincides with the square of the $L^2(\mathbb{R})$–norm of the function
$$ h(\xi) = \frac{|\hat f(\xi) - e^{-im\xi}|}{|\xi|}. $$
It is simple to verify that a further scaling-invariant functional can be obtained by considering the $L^\infty(\mathbb{R})$–norm of $h(\xi)$. This functional is given by
$$ H_\infty(F) = \frac{1}{2m}\sup_{\xi\in\mathbb{R}}\frac{|\hat f(\xi)-e^{-im\xi}|}{|\xi|}. \qquad(16) $$
Resorting to the triangle inequality, we can easily conclude that, if $F\in\tilde{\mathcal P}_s^+$, $H_\infty$ satisfies the standard bounds
$$ 0 \le H_\infty(F) < 1. $$
Indeed, for $F\in\mathcal P_s^+$ with mean value m,
$$ 2\,H_\infty(F) = \frac1m\sup_{\xi\in\mathbb{R}}\frac{|\hat f(\xi)-e^{-im\xi}|}{|\xi|} \le \frac1m\sup_{\xi\in\mathbb{R}}\frac{|1-e^{-im\xi}|}{|\xi|} + \frac1m\sup_{\xi\in\mathbb{R}}\frac{|1-\hat f(\xi)|}{|\xi|}. $$
Now
$$ \frac1m\sup_{\xi\in\mathbb{R}}\frac{|1-e^{-im\xi}|}{|\xi|} = \frac1m\sup_{\xi\in\mathbb{R}}\frac{\sqrt{2(1-\cos m\xi)}}{|\xi|} = \lim_{\xi\to0}\frac{\sqrt{2(1-\cos m\xi)}}{m|\xi|} = 1. $$
Moreover, since by (5)
$$ \frac{1-\hat f(\xi)}{i\xi} = \int_{\mathbb{R}}\left(F_0(x)-F(x)\right)e^{-i\xi x}\,dx, $$
we obtain
$$ \frac1m\,\frac{|1-\hat f(\xi)|}{|\xi|} = \frac1m\left|\int_{\mathbb{R}}\left(F_0(x)-F(x)\right)e^{-i\xi x}\,dx\right| \le \frac1m\int_{\mathbb{R}}\left(F_0(x)-F(x)\right)dx = \frac1m\int_{\mathbb{R}_+}\left(1-F(x)\right)dx = 1. $$
The two bounds together give $2H_\infty(F)\le2$, that is, $H_\infty(F)\le1$.
The functional $H_\infty$ is a particular case of a metric for probability measures which has been used to study convergence to equilibrium for the Boltzmann equation. This is an argument that, in the kinetic theory of rarefied gases, goes back to [24], where convergence to equilibrium for the Boltzmann equation for Maxwell pseudo-molecules was studied in terms of a metric for Fourier transforms (cf. also [25,26,27] for further applications).
The metric introduced in [24] in connection with the Boltzmann equation for Maxwell molecules was subsequently applied in various contexts, which include kinetic models for wealth measures [28], thus establishing a number of common points between kinetic modeling and inequality measures.
For a given pair of random variables X and Y distributed according to F and G, these metrics read
$$ d_r(X,Y) = d_r(F,G) = \sup_{\xi\in\mathbb{R}}\frac{|\hat f(\xi)-\hat g(\xi)|}{|\xi|^r}, \qquad r>0. $$
As shown in [24], the metric $d_r(F,G)$ is finite whenever the probability measures F and G have equal moments up to $[r]$, namely the integer part of $r\in\mathbb{R}_+$, or equal moments up to $r-1$ if $r\in\mathbb{N}$, and it is equivalent to the weak* convergence of measures for all $r>0$. Among other properties, it is easy to see [24,28] that, for two pairs of random variables X, Y (with X independent of Y) and Z, $\tilde Z$ (with Z independent of $\tilde Z$), and any constant c,
$$ d_r(X+Y, Z+\tilde Z) \le d_r(X,Z) + d_r(Y,\tilde Z), \qquad d_r(cX, cY) = |c|^r\,d_r(X,Y). \qquad(19) $$
These properties classify $d_r$ as an ideal probability metric in the sense of Zolotarev [29]. Properties of $H_\infty(F)$ can easily be extracted from (19) by considering that, if X is a random variable with probability measure F of mean value m,
$$ H_\infty(F) = H_\infty(X) = \frac{1}{2m}\,d_1(F, F_m). $$
In particular, the second property in (19) implies the scaling invariance of $H_\infty$.
Moreover, the first inequality in (19) implies that, for any pair of independent variables X and Y, with means $m_X$ (respectively $m_Y$), by choosing Z and $\tilde Z$ with probability measures $F_{m_X}$ (respectively $F_{m_Y}$),
$$ H_\infty(X+Y) \le \frac{m_X}{m_X+m_Y}\,H_\infty(X) + \frac{m_Y}{m_X+m_Y}\,H_\infty(Y), \qquad(20) $$
namely a property of sub-additivity for convolutions. Moreover, if Y is distributed with probability measure $F_{m_Y}$, inequality (20) gives
$$ H_\infty(X+Y) \le \frac{m_X}{m_X+m_Y}\,H_\infty(X) < H_\infty(X). \qquad(21) $$
Inequality (21) is a typical feature of sparsity measures: it translates to the case of a continuous variable the property that adding a constant to each coefficient decreases sparsity [10].
In view of its properties, the functional $H_\infty(\cdot)$ appears to be a good measure of inequality. Unfortunately, the computation of the values of $H_\infty$ for most probability measures is cumbersome. In particular, it does not seem possible to compute explicitly the value of $H_\infty(X)$ even in the simplest case in which the variable X takes only two positive values. Consequently, we cannot assess whether, for a given $\epsilon\ll1$, there exists a probability measure with index $1-\epsilon$. Indeed, as we saw in Section 2.2, this basic property follows from the analysis of the values taken by the inequality index at two-valued random variables.
The upper bound 1 can also be found by resorting to the Lagrange theorem. Indeed, since $F\in\mathcal P_s(\mathbb{R})$, the function $h(\xi) = |\hat f(\xi) - e^{-im\xi}|$ is continuously differentiable on the whole real line and satisfies $h(0) = 0$.
Therefore, by the Lagrange theorem, for any given $\xi\in\mathbb{R}$ there exists $\xi_0\in\mathbb{R}$ such that
$$ \frac{h(\xi)}{\xi} = \frac{h(\xi)-h(0)}{\xi-0} = h'(\xi_0), $$
which implies
$$ \sup_{\xi\in\mathbb{R}}\frac{h(\xi)}{|\xi|} \le \sup_{\xi\in\mathbb{R}}|h'(\xi)|. $$
Since $|e^{-im\xi}| = 1$, we have the identity
$$ h(\xi) = \left|\hat f(\xi) - e^{-im\xi}\right| = \left|\hat f(\xi)\,e^{im\xi} - 1\right|, $$
and
$$ |h'(\xi)| \le \left|\hat f'(\xi)\,e^{im\xi} + im\,\hat f(\xi)\,e^{im\xi}\right| = \left|\hat f'(\xi) + im\,\hat f(\xi)\right|. $$
We therefore obtain
$$ \frac1m\sup_{\xi\in\mathbb{R}}\frac{|\hat f(\xi)-e^{-im\xi}|}{|\xi|} \le \frac1m\sup_{\xi\in\mathbb{R}}\left|\hat f'(\xi)+im\,\hat f(\xi)\right| = \sup_{\xi\in\mathbb{R}}\left|\hat f(\xi) - \frac{\hat f'(\xi)}{\hat f'(0)}\right|. \qquad(24) $$
Using the argument leading to the upper bound in (24), one easily concludes that $H_\infty(F)$ is bounded above by $T(F)$, where $T(F)$ is the functional defined by (1), and that $H_\infty(F)\le1$.

3. A New Fourier-Based Index of Inequality

This section is devoted to studying in more detail the main properties of the inequality index T(F), as given by (1). Depending on convenience, given a random variable X with probability measure $F\in\tilde{\mathcal P}_s(\mathbb{R})$, we will write indifferently T(X) or T(F).
The inequality index T satisfies various properties, which we list and prove in the following.

3.1. Scaling

For any constant $c>0$, the index T(F) is invariant with respect to the scaling $F(x)\to F(cx)$. The scaling invariance of T(F) can easily be seen by noticing that, if $\hat f(\xi)$ is the Fourier transform of F(x), then $\hat f(\xi/c)$ is the Fourier transform of F(cx), and
$$ \frac{\frac{d}{d\xi}\hat f(\xi/c)}{\left.\frac{d}{d\xi}\hat f(\xi/c)\right|_{\xi=0}} = \left.\frac{\hat f'(\eta)}{\hat f'(0)}\right|_{\eta=\xi/c}, $$
which implies
$$ \frac12\sup_{\xi\in\mathbb{R}}\left|\hat f(\xi/c)-\left.\frac{\hat f'(\eta)}{\hat f'(0)}\right|_{\eta=\xi/c}\right| = \frac12\sup_{\eta\in\mathbb{R}}\left|\hat f(\eta)-\frac{\hat f'(\eta)}{\hat f'(0)}\right| = T(F). $$

3.2. Lower and Upper Bounds

If $F\in\tilde{\mathcal P}_s(\mathbb{R})$, the values of the functional T(F) lie between zero and one, where the value zero (minimal inequality) is attained in correspondence with a Heaviside probability measure $F_m$, with $m>0$. Indeed, let $F\in\mathcal P_s^+(\mathbb{R})$. Since $|\hat f(\xi)|\le\hat f(0)=1$, and
$$ \hat f'(\xi) = -i\int_{\mathbb{R}_+}x\,e^{-i\xi x}\,dF(x), $$
so that $\hat f'(0) = -i\,m(F) = -im$ and $|\hat f'(\xi)|\le|\hat f'(0)| = m$, it is easy to conclude, by the triangle inequality, that T(F) satisfies the bounds
$$ 0\le T(F)\le1, $$
and T(F)=0 if and only if $\hat f(\xi)$ satisfies the differential equation
$$ \hat f'(\xi) = \hat f'(0)\,\hat f(\xi), $$
with $\hat f(0)=1$, so that the unique solution is given by $\hat f(\xi) = e^{-im\xi}$, namely by the Fourier transform of a Dirac delta function located at the mean value $x=m(F)>0$. Note, however, that even if the functional T is defined on the whole class $\tilde{\mathcal P}_s(\mathbb{R})$, the upper bound is lost if the probability measure $F\notin\mathcal P_s^+(\mathbb{R})$, since in this case the inequality $|\hat f'(\xi)/\hat f'(0)|\le1$ does not hold.
The value T(F)=1, corresponding to maximal inequality, is approached if we compute the value of T(X) when the random variable X of mean value m is a two-valued random variable with
$$ P(X=0) = 1-\epsilon; \qquad P\left(X=\frac m\epsilon\right) = \epsilon; \qquad \epsilon\ll1. $$
In this case
$$ \hat f(\xi) = 1-\epsilon+\epsilon\exp\left(-i\,\frac m\epsilon\,\xi\right), $$
while
$$ \hat f'(\xi) = -im\exp\left(-i\,\frac m\epsilon\,\xi\right). $$
Therefore
$$ 2\,T(X) = \sup_{\xi\in\mathbb{R}}\left|1-\epsilon+\epsilon\exp\left(-i\,\frac m\epsilon\,\xi\right)-\exp\left(-i\,\frac m\epsilon\,\xi\right)\right| = (1-\epsilon)\sup_{\xi\in\mathbb{R}}\left|1-\exp\left(-i\,\frac m\epsilon\,\xi\right)\right| = (1-\epsilon)\sup_{\xi\in\mathbb{R}}\sqrt{2\left(1-\cos\frac m\epsilon\xi\right)} = 2(1-\epsilon), \quad\text{and}\quad T(X)=1-\epsilon. $$

3.3. Convexity

Let $F, G\in\tilde{\mathcal P}_s(\mathbb{R})$ be two probability measures with the same mean value, say m. Then, for any given $\tau\in(0,1)$ it holds $\tau\hat f'(0)+(1-\tau)\hat g'(0) = \hat f'(0) = \hat g'(0)$, so that
$$ T(\tau F+(1-\tau)G) = \frac12\sup_{\xi\in\mathbb{R}}\left|\tau\hat f(\xi)+(1-\tau)\hat g(\xi) - \frac{\tau\hat f'(\xi)+(1-\tau)\hat g'(\xi)}{\tau\hat f'(0)+(1-\tau)\hat g'(0)}\right| = \frac12\sup_{\xi\in\mathbb{R}}\left|\tau\left(\hat f(\xi)-\frac{\hat f'(\xi)}{\hat f'(0)}\right) + (1-\tau)\left(\hat g(\xi)-\frac{\hat g'(\xi)}{\hat g'(0)}\right)\right| \le \tau\,T(F) + (1-\tau)\,T(G). $$
This shows the convexity of the functional T on the set of probability measures with the same mean.

3.4. Sub-Additivity for Convolutions

The most important property characterizing the inequality index T is linked to its behavior in the presence of convolutions. For any given pair of Fourier transforms of probability measures in $\tilde{\mathcal P}_s(\mathbb{R})$, let us set
$$ \hat h(\xi) = \hat f(\xi)\,\hat g(\xi). $$
Then, since $|\hat f(\xi)|\le\hat f(0)=1$ and $|\hat g(\xi)|\le\hat g(0)=1$,
$$ \sup_{\xi\in\mathbb{R}}\left|\hat h(\xi)-\frac{\hat h'(\xi)}{\hat h'(0)}\right| = \sup_{\xi\in\mathbb{R}}\left|\hat f(\xi)\hat g(\xi)-\frac{\hat f'(\xi)\hat g(\xi)+\hat f(\xi)\hat g'(\xi)}{\hat f'(0)+\hat g'(0)}\right| $$
$$ = \sup_{\xi\in\mathbb{R}}\left|\frac{\hat f'(0)}{\hat f'(0)+\hat g'(0)}\,\hat g(\xi)\left(\hat f(\xi)-\frac{\hat f'(\xi)}{\hat f'(0)}\right)+\frac{\hat g'(0)}{\hat f'(0)+\hat g'(0)}\,\hat f(\xi)\left(\hat g(\xi)-\frac{\hat g'(\xi)}{\hat g'(0)}\right)\right| $$
$$ \le \frac{|\hat f'(0)|}{|\hat f'(0)+\hat g'(0)|}\,\sup_{\xi\in\mathbb{R}}\left|\hat f(\xi)-\frac{\hat f'(\xi)}{\hat f'(0)}\right| + \frac{|\hat g'(0)|}{|\hat f'(0)+\hat g'(0)|}\,\sup_{\xi\in\mathbb{R}}\left|\hat g(\xi)-\frac{\hat g'(\xi)}{\hat g'(0)}\right|. $$
Therefore, if X and Y are independent random variables with probability measures in $\tilde{\mathcal P}_s(\mathbb{R})$ and mean values $m_X$ (respectively $m_Y$), the inequality index T satisfies the inequality
$$ T(X+Y) \le \frac{m_X}{m_X+m_Y}\,T(X) + \frac{m_Y}{m_X+m_Y}\,T(Y). \qquad(25) $$
In particular, if Y is a random variable that takes the value $m>0$ with probability 1 (so that $\hat g(\xi) = e^{-im\xi}$ and T(Y)=0),
$$ T(X+Y) \le \frac{m_X}{m_X+m}\,T(X) < T(X). $$
Since X + Y corresponds to adding the constant m to X, this property asserts that adding a constant wealth to each agent decreases inequality.
Furthermore, if the random variables $X_1$ and $X_2$ are independent and distributed with the same law as X, thanks to the scaling property
$$ T\left(\frac{X_1+X_2}{2}\right) = T(X_1+X_2) \le T(X), \qquad(27) $$
while the mean of $(X_1+X_2)/2$ is equal to the mean of X.
Remark 6.
Inequality (27) is fully operational in the case where the two variables $X_1$ and $X_2$ are characterized either by a continuous probability measure or take on an infinite number of values. Only in this case, in fact, does the probability measure remain of the same type under the operation of convolution.
Suppose in fact that the variables $X_i$, $i=1,2$, are independent Bernoulli variables such that
$$ P(X_i=0)=P(X_i=1)=\frac12, \qquad i=1,2. $$
The probability measure of $X_i$, $i=1,2$, has Fourier transform
$$ \hat f(\xi) = \frac12\left(1+e^{-i\xi}\right), $$
and the probability measure of the convolution corresponds to the Fourier transform
$$ \hat f(\xi)^2 = \frac14\left(1+2e^{-i\xi}+e^{-2i\xi}\right). $$
Hence, the random variable $Y=X_1+X_2$ takes the three values 0, 1, 2 with probabilities
$$ P(Y=0)=P(Y=2)=\frac14, \qquad P(Y=1)=\frac12. $$
Clearly, it makes little sense to relate the heterogeneity of a two-valued random variable to that of a three-valued random variable.
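Even so, the bound (27) can still be verified numerically on this Bernoulli example. The sketch below is not part of the original paper; it approximates T by a grid search over $\xi\in[-2\pi,2\pi]$, which is an assumption on where the supremum is attained.

```python
import numpy as np

# Numerical illustration of the convolution bound (27) for X1, X2 i.i.d. Bernoulli(1/2).
def T_index(values, probs, xi=np.linspace(-2 * np.pi, 2 * np.pi, 40001)):
    values, probs = np.asarray(values, float), np.asarray(probs, float)
    m = probs @ values
    phase = np.exp(-1j * np.outer(xi, values))
    f_hat = phase @ probs
    df_hat = phase @ (-1j * values * probs)
    return 0.5 * np.max(np.abs(f_hat - df_hat / (-1j * m)))

T_X = T_index([0, 1], [0.5, 0.5])                 # two-valued variable: T = 1/2
T_Y = T_index([0, 1, 2], [0.25, 0.5, 0.25])       # convolution X1 + X2: T = 1/4
print(T_X, T_Y, T_Y <= T_X)                       # 0.5, 0.25, True
```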

3.5. Adding a Noise

Another important consequence of inequality (25) is related to the situation in which the random variable Y represents a noise (of mean value $m>0$) that is present when measuring the inequality index of X. The classical choice is that the additive noise is represented by a Gaussian variable of mean m and variance $\sigma^2$.
If this is the case, the Fourier transform of the Gaussian density is given by (11), which is such that
$$ \hat f'(\xi) = \left(-im - \sigma^2\xi\right)\hat f(\xi); \qquad \hat f'(0) = -im. $$
Hence, since $|\hat f(\xi)|\le\hat f(0)=1$,
$$ \left|\hat f(\xi) - \frac{\hat f'(\xi)}{\hat f'(0)}\right| = \left|\hat f(\xi)\right|\,\left|1 - \frac{-im-\sigma^2\xi}{-im}\right| = \frac{\sigma}{m}\,|\sigma\xi|\,\left|\hat f(\xi)\right|. $$
Finally, if Y denotes the Gaussian random variable of mean $m>0$ and variance $\sigma^2$, we obtain
$$ T(Y) = \frac{\sigma}{2m}\sup_{\xi\in\mathbb{R}}|\xi|\,e^{-\xi^2/2} = \frac{\sigma}{2m}\,\frac{1}{\sqrt e}. $$
As we showed in Section 2.2 for the index H defined by (10), for a Gaussian variable the inequality index T(Y) is proportional to the coefficient of variation of Y. We have in this case
$$ T(X+Y) \le \frac{m_X}{m_X+m}\,T(X) + \frac{\sigma}{m_X+m}\,\frac{1}{\sqrt e}, \qquad(28) $$
namely an explicit upper bound for the inequality index in terms of the mean value and the variance of the Gaussian noise.
Remark 7.
It is important to note that inequality (28) remains valid even if the mean value of the Gaussian noise is assumed equal to zero. In this case, letting $m\to0$, we obtain the upper bound
$$ T(X+Y) \le T(X) + \frac{\sigma}{m_X}\,\frac{1}{\sqrt e}. $$

4. Examples

In this section we will recover the values of the inequality index T for some well-known probability measures. With few exceptions, whenever the explicit expression of the Fourier transform of the probability measure is available, the computation of the value of the inequality index $T(\cdot)$ is straightforward. The list of probability measures that can be treated via the Fourier transform is substantial, and includes both discrete and continuous distributions. For an in-depth look at this topic, the interested reader can consult the book [30].
We do not consider in this paper the possibility to make use of the fast Fourier transform to compute the values of the functional T in the case of a random variable taking only a finite number of values, a situation that we intend to treat in a companion paper.

4.1. Two-Valued Random Variables

Let X be a Bernoulli random variable, characterized by the probability measure with Fourier transform
$$ \hat f(\xi) = 1-p+p\,e^{-i\xi}, \qquad 0<p<1. $$
Then, since $\hat f'(\xi) = -ip\,e^{-i\xi}$, it immediately follows that
$$ T(X) = \frac12(1-p)\sup_{\xi\in\mathbb{R}}\left|1-e^{-i\xi}\right| = 1-p. $$
For given positive constants a, b, let $Y=aX+b$. Then Y is characterized by the Fourier transform
$$ \hat h(\xi) = \hat f(a\xi)\,e^{-ib\xi}. $$
We have
$$ \left|\hat h(\xi)-\frac{\hat h'(\xi)}{\hat h'(0)}\right| = \frac{ap}{ap+b}\left|\hat f(a\xi)-\frac{\hat f'(a\xi)}{\hat f'(0)}\right| = \frac{ap}{ap+b}\,(1-p)\left|1-e^{-ia\xi}\right|, $$
so that
$$ T(Y) = \frac{a\,p(1-p)}{ap+b}. $$
Choosing $\alpha=b$ and $\beta=a+b$, with $\beta>\alpha$, we then conclude that a two-valued random variable Y such that
$$ P(Y=\alpha)=1-p, \qquad P(Y=\beta)=p $$
has inequality index
$$ T(Y) = \frac{(\beta-\alpha)\,p(1-p)}{\alpha(1-p)+\beta p}. $$
The same value is assumed by the Gini and Pietra indices of Y.
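Both the closed form just obtained and its agreement with a direct grid evaluation can be checked numerically. The values of α, β and p in the following sketch (not part of the original paper) are arbitrary.

```python
import numpy as np

# Grid-based check of the two-valued closed form for T(Y); alpha, beta, p arbitrary.
alpha, beta, p = 1.0, 4.0, 0.3
m = alpha * (1 - p) + beta * p
xi = np.linspace(-2 * np.pi, 2 * np.pi, 40001)

f_hat = (1 - p) * np.exp(-1j * alpha * xi) + p * np.exp(-1j * beta * xi)
df_hat = -1j * alpha * (1 - p) * np.exp(-1j * alpha * xi) - 1j * beta * p * np.exp(-1j * beta * xi)

T_grid = 0.5 * np.max(np.abs(f_hat - df_hat / (-1j * m)))
T_closed = (beta - alpha) * p * (1 - p) / (alpha * (1 - p) + beta * p)
print(T_grid, T_closed)        # both ~ 0.3316
```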

4.2. Poisson Distribution

The Poisson distribution is characterized by the Fourier transform
$$ \hat f(\xi) = \exp\left(\lambda\left(e^{-i\xi}-1\right)\right). $$
In this case
$$ \left|\hat f(\xi)-\frac{\hat f'(\xi)}{\hat f'(0)}\right| = \left|e^{-i\xi}-1\right|\,\left|\hat f(\xi)\right| = \sqrt{2(1-\cos\xi)}\,\exp\left(-\lambda(1-\cos\xi)\right). $$
Let us set $0\le1-\cos\xi = x^2\le2$. Then
$$ T(F) = \frac{\sqrt2}{2}\sup_{0\le x\le\sqrt2}x\,e^{-\lambda x^2}. $$
If $\lambda\le1/4$, the maximum is attained at $\bar x=\sqrt2$, and $T(F)=e^{-2\lambda}$. If $\lambda>1/4$, the maximum is attained at the point $\bar x = 1/\sqrt{2\lambda}$, and in this case
$$ T(F) = \frac{1}{2\sqrt\lambda}\,e^{-1/2}. $$
Hence, if F is a Poisson probability measure of mean λ, we have
$$ T(F) = \begin{cases} e^{-2\lambda} & \text{if } \lambda\le1/4,\\[4pt] \dfrac{1}{2\sqrt{\lambda}}\,e^{-1/2} & \text{if } \lambda>1/4.\end{cases} $$
Note that, as a function of λ, the functional T(F) is differentiable at the point $\lambda=1/4$, and it decreases as λ increases. Hence, small values of λ correspond to large heterogeneity.
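The piecewise expression can be verified numerically, since the modulus computed above is an explicit $2\pi$-periodic function of ξ. The sketch below (not part of the original paper) compares a grid supremum with the closed form for a few values of λ.

```python
import numpy as np

# Grid check of the piecewise formula for T of a Poisson(lam) law.
def T_poisson_grid(lam, xi=np.linspace(0.0, 2.0 * np.pi, 200001)):
    # |f_hat(xi) - f_hat'(xi)/f_hat'(0)| = sqrt(2(1-cos xi)) * exp(-lam(1-cos xi))
    mod = np.sqrt(2.0 * (1.0 - np.cos(xi))) * np.exp(-lam * (1.0 - np.cos(xi)))
    return 0.5 * np.max(mod)

def T_poisson_closed(lam):
    return np.exp(-2.0 * lam) if lam <= 0.25 else 0.5 / np.sqrt(lam * np.e)

for lam in (0.1, 0.25, 1.0, 4.0):
    print(lam, T_poisson_grid(lam), T_poisson_closed(lam))
```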
Remark 8.
It is interesting to remark that the value of the Gini index of a Poisson distribution, say F, cannot be computed explicitly by resorting to its expression in Fourier transform, as given by Formula (7). The same conclusion holds if we try to compute the values of H(F), as given by (10), and of $H_\infty(F)$, defined in (16).
Remark 9.
The previous computations can be extended, at the cost of more complicated calculations, to evaluate the explicit values of the index T for distributions obtained by summing independent Poisson variables. Perhaps the most interesting case corresponds to the Skellam distribution [31,32], that is, the discrete probability distribution of the difference of two independent random variables $X_1$ and $X_2$, each Poisson-distributed with expected values $\lambda_1$ and, respectively, $\lambda_2$, with $\lambda_1>\lambda_2$.

4.3. Stable Laws

As a further example of probability measures defined on the whole real line $\mathbb{R}$, we will compute the value of T for a stable law [33]. We restrict ourselves here to the case of symmetric alpha-stable distributions of scale parameter $\sigma>0$ and shift parameter $m>0$, characterized by the Fourier transform
$$ \hat f_\alpha(\xi) = \exp\left(-i\xi m - |\sigma\xi|^\alpha\right), \qquad \alpha>1. $$
Note that the Gaussian distribution of mean m and variance $2\sigma^2$ corresponds to the choice $\alpha=2$.
For these distributions
$$ \left|\hat f_\alpha(\xi)-\frac{\hat f_\alpha'(\xi)}{\hat f_\alpha'(0)}\right| = \frac{\alpha}{m}\,\sigma^\alpha|\xi|^{\alpha-1}\,\left|\hat f_\alpha(\xi)\right| = \frac{\alpha\sigma}{m}\,|\sigma\xi|^{\alpha-1}\exp\left(-|\sigma\xi|^\alpha\right). $$
Consequently,
$$ T(F_\alpha) = \frac{\sigma\alpha}{2m}\sup_{x\ge0}x^{(\alpha-1)/\alpha}\,e^{-x}. $$
Evaluating the supremum, we obtain
$$ T(F_\alpha) = \frac{\sigma\alpha}{2m}\left(\frac{\alpha-1}{\alpha}\right)^{(\alpha-1)/\alpha}\exp\left(-\frac{\alpha-1}{\alpha}\right). $$
For $\alpha=1$ the distribution reduces to a Cauchy distribution with scale parameter σ and shift parameter m. In this case
$$ T(F_1) = \frac{\sigma}{2m}\sup_{x\ge0}e^{-x} = \frac{\sigma}{2m}. $$
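A numerical sanity check (not part of the paper) of the stable-law formula, for an arbitrary choice of α, σ and m; the truncation of the ξ-grid is an assumption justified by the exponential decay of the modulus.

```python
import numpy as np

# Grid check of the closed form for T of a symmetric alpha-stable law.
alpha, sigma, m = 1.5, 0.8, 2.0
xi = np.linspace(1e-6, 50.0, 500001)

mod = (alpha / m) * sigma * np.abs(sigma * xi) ** (alpha - 1.0) * np.exp(-np.abs(sigma * xi) ** alpha)
T_grid = 0.5 * np.max(mod)

beta = (alpha - 1.0) / alpha
T_closed = (sigma * alpha) / (2.0 * m) * beta ** beta * np.exp(-beta)
print(T_grid, T_closed)        # both ~ 0.149 for these parameters
```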

4.4. An Interesting Case: The Uniform Distribution

The uniform distribution on the interval $(-a,a)$, with $a>0$, is characterized by the Fourier transform
$$ \hat f(\xi) = \frac{\sin(a\xi)}{a\xi}. \qquad(33) $$
Hence, if X is a random variable uniformly distributed on $(-a,a)$, for any constant $b>0$ the variable $X+b$ is uniformly distributed on the interval $(-a+b, a+b)$, and the Fourier transform of the probability measure of $X+b$, of mean value b, is given by
$$ \hat g(\xi) = \hat f(\xi)\,e^{-ib\xi}. $$
Then
$$ \sup_{\xi\in\mathbb{R}}\left|\hat g(\xi)-\frac{\hat g'(\xi)}{\hat g'(0)}\right| = \sup_{\xi\in\mathbb{R}}\left|\hat f(\xi)\,e^{-ib\xi} - \frac{\hat f'(\xi)\,e^{-ib\xi} - ib\,\hat f(\xi)\,e^{-ib\xi}}{-ib}\right| = \frac1b\sup_{\xi\in\mathbb{R}}\left|\hat f'(\xi)\right|. $$
Next, since $\hat f$ is given by (33),
$$ \hat f'(\xi) = \frac{a\xi\cos(a\xi)-\sin(a\xi)}{a\,\xi^2}, $$
which implies
$$ \sup_{\xi\in\mathbb{R}}\left|\hat f'(\xi)\right| = a\,\sup_{\xi\in\mathbb{R}}\left|\frac{\xi\cos\xi-\sin\xi}{\xi^2}\right| = a\,\delta_u, $$
where $\delta_u$ is a positive constant. Hence, if X is uniformly distributed on the interval $(-a,a)$ and $b>0$,
$$ T(X+b) = \frac{a}{2b}\,\delta_u. $$
In particular, if $b>a$, by setting $\alpha=b-a$ and $\beta=b+a$, we conclude that, if Y is a random variable uniformly distributed on the interval $(\alpha,\beta)\subset\mathbb{R}_+$, it holds
$$ T(Y) = \frac{\delta_u}{2}\,\frac{\beta-\alpha}{\beta+\alpha}. $$
In this case, in contrast with the Gini index, which takes the explicit value
$$ G(Y) = \frac13\,\frac{\beta-\alpha}{\beta+\alpha}, $$
the value of the coefficient $\delta_u$ can be obtained only numerically. It is however interesting to remark that, in the case of a uniform distribution, the values of the two indices exhibit deep similarities.
A rough estimate of the constant $\delta_u$ follows by studying the function
$$ u(x) = \frac{\sin x - x\cos x}{x^2}, \qquad x\ge0. $$
It is easy to show that any extremal point $\bar x$ of the function u(x) solves the equation
$$ (x^2-2)\sin x + 2x\cos x = 0, $$
which implies
$$ \sin\bar x - \bar x\cos\bar x = \frac{\bar x^2}{2}\,\sin\bar x. $$
Consequently, if $\bar x$ is an extremal point of u(x),
$$ |u(\bar x)| = \frac12\,|\sin\bar x| \le \frac12. $$
Hence $\delta_u\le1/2$.
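The constant $\delta_u$ can also be estimated numerically from its definition. The sketch below (not in the original paper) evaluates u(x) on a truncated grid, which is sufficient because $|u(x)|\le 2/x$ for $x\ge1$.

```python
import numpy as np

# Numerical estimate of delta_u = sup_x |(sin x - x cos x)/x^2|.
x = np.linspace(1e-6, 200.0, 2_000_001)
u = (np.sin(x) - x * np.cos(x)) / x ** 2
delta_u = np.max(np.abs(u))
print(delta_u)          # ~ 0.436, consistent with the bound delta_u <= 1/2
```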
To end this section, we list in Table 1 the values of the inequality index T for some probability measures on $\mathbb{R}_+$ and $\mathbb{R}$ allowing explicit computations. It is remarkable that the Fourier-based index T is well adapted to computing the heterogeneity index of discrete probability measures, such as the negative binomial distribution or the geometric distribution, which are explicitly expressible in terms of the Fourier transform. We leave the details of the evaluation to the reader.

5. An Application to Kinetic Theory of Wealth Distribution

Kinetic modelling of agent-based markets is based on a few universal assumptions [28]. First, agents are indistinguishable, so that an agent's state at any instant of time $t\ge0$ is completely characterized by his current wealth $w\ge0$. Second, the time variation of the wealth distribution is entirely due to binary trades between agents. A trade represents a binary interaction in which part of the money of each agent is modified according to well-defined rules. When two agents undertake a trade, their pre-trade wealths v, w change into the post-trade wealths $v^*$, $w^*$ according to a linear exchange rule:
$$ v^* = p_1v + q_1w, \qquad w^* = q_2v + p_2w. $$
The interaction coefficients $p_i$ and $q_i$, $i=1,2$, are, in general, non-negative random parameters.
The first explicit description of a binary wealth-exchange model dates back to the seminal work of Angle [34], (cf. also [35]), even if the intimate relation to statistical mechanics was only described about a decade later [36,37]. In each binary interaction, winner and loser are randomly chosen, and the loser pays a random fraction of his wealth to the winner. From here, Chakraborti and Chakrabarti [38] developed the class of strictly conservative exchange models, which preserve the total wealth in each individual trade,
$$ v^* + w^* = v + w. \qquad(36) $$
In its most basic version, the microscopic interaction is determined by one single parameter $\lambda\in(0,1)$, which is the global saving propensity. In the interactions, each agent retains the fraction λ of its pre-trade wealth, while the rest, $(1-\lambda)(v+w)$, is shared equally between the two trading partners,
$$ v^* = \lambda v + \frac12(1-\lambda)(v+w), \qquad w^* = \lambda w + \frac12(1-\lambda)(v+w). \qquad(37) $$
The wealth distribution f(v,t) of the system of agents coincides with the agent density and satisfies the associated spatially homogeneous Boltzmann equation
$$ \frac{\partial f}{\partial t} + f = Q_+(f,f), \qquad(38) $$
on the real half-line $v\ge0$. The collisional gain operator $Q_+$ acts on test functions $\varphi(v)$ as
$$ Q_+(f,f)[\varphi] = \int_{\mathbb{R}_+}\varphi(v)\,Q_+(f,f)(v)\,dv = \frac12\int_{\mathbb{R}_+^2}\left(\varphi(v^*)+\varphi(w^*)\right)f(v)\,f(w)\,dv\,dw. $$
Because of (37), the average wealth of the society is conserved in time, so that
$$ m(t) = \int_{\mathbb{R}_+}w\,f(w,t)\,dw = m, $$
where $m>0$ is finite. A useful way of writing Equation (38) is to resort to the Fourier transform [28]. Assuming the initial distribution of wealth is in $\mathcal P_s^+$, with $s>1$, the transformed kernel reads
$$ \hat Q_+\left(\hat f,\hat f\right)(\xi) = \hat f\left(\frac{1-\lambda}{2}\,\xi\right)\hat f\left(\frac{1+\lambda}{2}\,\xi\right), $$
where, since the initial density has a bounded mean,
$$ \hat f_0'(0) = -im. $$
Hence, the Boltzmann equation (38) can be rewritten in terms of the Fourier transform of f(v,t) as
$$ \frac{\partial\hat f(\xi,t)}{\partial t} + \hat f(\xi,t) = \hat f\left(\frac{1-\lambda}{2}\,\xi, t\right)\hat f\left(\frac{1+\lambda}{2}\,\xi, t\right). \qquad(42) $$
It immediately follows that the functions $e^{-i\mu\xi}$, with $\mu>0$, namely the Fourier transforms of Dirac delta functions concentrated at the wealth μ, are stationary solutions of Equation (42).
More can be said if we assume that $s\ge2$. Then the moment of order two of the initial distribution is finite and, applying (41) with $\varphi(v) = (v-m)^2$ and recalling that the mean value is preserved during the evolution, one shows that the variance of f(v,t) satisfies
$$ \frac{d}{dt}\int_{\mathbb{R}_+}(v-m)^2\,f(v,t)\,dv = -\frac12\left(1-\lambda^2\right)\int_{\mathbb{R}_+}(v-m)^2\,f(v,t)\,dv. $$
As a result, all agents tend, for large times, to become equally rich. Indeed, the steady state $f_\infty(v)$ is a Dirac delta concentrated at the mean wealth, and it is approached at the exponential rate $(1-\lambda^2)/2$.
To remain within the framework of inequality indices, the previous result implies that, if the initial distribution belongs to $\mathcal P_s^+$ with $s\ge2$, the coefficient of variation is monotonically decreasing towards zero at the explicit rate $(1-\lambda^2)/4$.
This result is lost as soon as the value of s is less than 2. It is however interesting to remark that the inequality index T(F(t)), where F(v,t) is the probability measure associated with the solution $\hat f(\xi,t)$ of Equation (42), is monotonically decreasing in time as soon as $s\ge1$. Indeed, if we set
$$ h(\xi,t) = \hat f(\xi,t) - \frac{\hat f'(\xi,t)}{\hat f'(0,t)}, $$
it is easy to show that h(ξ,t) satisfies the equation
$$ \frac{\partial h(\xi,t)}{\partial t} + h(\xi,t) = \hat f\left(\frac{1-\lambda}{2}\xi,t\right)\hat f\left(\frac{1+\lambda}{2}\xi,t\right) - \frac{1}{\hat f'(0,t)}\,\frac{\partial}{\partial\xi}\left[\hat f\left(\frac{1-\lambda}{2}\xi,t\right)\hat f\left(\frac{1+\lambda}{2}\xi,t\right)\right], $$
which implies
$$ \left|\frac{\partial h(\xi,t)}{\partial t} + h(\xi,t)\right| \le \sup_{\xi\in\mathbb{R}}\left|\hat f\left(\frac{1-\lambda}{2}\xi,t\right)\hat f\left(\frac{1+\lambda}{2}\xi,t\right) - \frac{1}{\hat f'(0,t)}\,\frac{\partial}{\partial\xi}\left[\hat f\left(\frac{1-\lambda}{2}\xi,t\right)\hat f\left(\frac{1+\lambda}{2}\xi,t\right)\right]\right|. $$
If now X(t) and Y(t) are random variables with probability measures whose Fourier transforms are $\hat f\left(\frac{1-\lambda}{2}\xi,t\right)$ (respectively $\hat f\left(\frac{1+\lambda}{2}\xi,t\right)$), which have mean values $m(1-\lambda)/2$ (respectively $m(1+\lambda)/2$), Formula (25) for convolutions gives
$$ T(X(t)+Y(t)) \le \frac{1-\lambda}{2}\,T(X(t)) + \frac{1+\lambda}{2}\,T(Y(t)). $$
On the other hand, by scaling invariance $T(X(t)) = T(Y(t)) = T(Z(t))$, where the probability measure of Z(t) has Fourier transform $\hat f(\xi,t)$. Hence, Equation (44) implies
$$ \left|\frac{\partial h(\xi,t)}{\partial t} + h(\xi,t)\right| \le \sup_{\xi\in\mathbb{R}}|h(\xi,t)|, $$
which, for any given $t_0<t$, by the Gronwall inequality implies [28]
$$ T(Z(t)) \le T(Z(t_0)), $$
and, consequently, the monotonicity in time of the inequality index T(F(t)) of the probability measure solution of the kinetic Equation (38). It is remarkable that this result, which does not require the condition $s>1$, is a direct consequence of the convolution property of the inequality index T. Hence, the monotonicity result does not hold if we resort to the Gini and Pietra indices.
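The monotonicity of T along the kinetic dynamics can be illustrated by a direct Monte Carlo simulation of the saving-propensity trade rule (37). The sketch below is not part of the original paper: it uses a simple discrete-time update in which all agents are paired at every step, an empirical characteristic function, and a truncated ξ-grid; population size, saving propensity, time horizon and initial data are arbitrary choices.

```python
import numpy as np

# Monte Carlo sketch of the trade rule (37): T, estimated from the empirical
# characteristic function, decreases as the wealths concentrate at the mean.
rng = np.random.default_rng(0)
N, lam, steps = 2000, 0.7, 100
w = rng.exponential(1.0, size=N)                     # initial wealths, mean ~ 1

def T_empirical(w, xi=np.linspace(1e-3, 2 * np.pi, 1501)):
    m = w.mean()
    phase = np.exp(-1j * np.outer(xi, w))            # e^{-i xi w_j}
    f_hat = phase.mean(axis=1)                       # empirical hat f(xi)
    df_hat = (-1j * w * phase).mean(axis=1)          # empirical hat f'(xi)
    return 0.5 * np.max(np.abs(f_hat - df_hat / (-1j * m)))

for t in range(steps + 1):
    if t % 20 == 0:
        print(t, T_empirical(w))                     # monotonically decreasing in t
    idx = rng.permutation(N)
    i, j = idx[: N // 2], idx[N // 2:]
    tot = w[i] + w[j]
    w[i] = lam * w[i] + 0.5 * (1 - lam) * tot        # trade rule (37)
    w[j] = lam * w[j] + 0.5 * (1 - lam) * tot
```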

6. Conclusions

Inequality indices are quantitative scores that take values in the unit interval, with the zero score characterizing perfect equality. Measuring the statistical heterogeneity of measures arises in most fields of science and engineering, which makes it important to know the strengths and possible weaknesses of heterogeneity measures in applications [1,2,4,5,6,7,10]. In this paper, we draw attention to a new inequality index, based on the Fourier transform, which exhibits a number of interesting properties that make it very promising for applications. In comparison with the well-known and widely used Gini index, which can also be expressed by resorting to the Fourier transform, the new index T allows one to compute explicitly the heterogeneity of various probability measures, such as the Poisson distribution, whose heterogeneity cannot be computed explicitly by resorting to the Gini index. Moreover, this new Fourier-based index enjoys an interesting property of sub-additivity for convolutions, which in principle makes it interesting for applications to models of kinetic theory containing mass- and mean-preserving bilinear operators [28].

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

This work has been written within the activities of GNFM group of INdAM (National Institute of High Mathematics), and partially supported by IMATI (Institute of Applied Mathematics and Information Technologies “Enrico Magenes”).

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Lemma A1.
Let $F\in\mathcal P_s^+(\mathbb{R})$ be a probability measure of mean value m. Then
$$ 0 \le H(F) < 1. \qquad(A1) $$
Proof. 
Let $F\in\mathcal P_s^+(\mathbb{R})$ be a probability measure of mean value m. Thanks to the Parseval formula, the value of expression (10) in Fourier space coincides with the value
$$ H(F) = \frac1m\int_{\mathbb{R}_+}|F(x)-F_m(x)|^2\,dx. $$
The simplest case in which we can explicitly evaluate H(F) is when $F(x)\in\mathcal P_s^+(\mathbb{R})$ is the measure function of a random variable X taking only two non-negative values, that is, for $0<p<1$,
$$ P(X=m-a) = p; \qquad P(X=m+b) = 1-p, \qquad a,b>0,\ a\le m. $$
Since X has mean value m, a and b are related to p by the relation
$$ p\,a = (1-p)\,b. \qquad(A3) $$
In this case, it is a simple exercise to verify that
$$ \frac1m\int_{\mathbb{R}_+}|F(x)-F_m(x)|^2\,dx = \frac1m\left[p^2a + (1-p)^2b\right], $$
so that, thanks to (A3),
$$ H(F) = \frac{p\,a}{m}. $$
Therefore, since $a\le m$ and $p<1$, we conclude that $H(F)<1$.
An interesting application of the previous expression is obtained by assuming $a=m$ and $p=1-\epsilon$, with $0<\epsilon\ll1$. In this case the random variable X of mean value m is such that
$$ P(X=0) = 1-\epsilon; \qquad P\left(X=\frac m\epsilon\right) = \epsilon. $$
In economics, this situation describes a population in which most agents have zero wealth, while a small part possesses an extremely high wealth, the mean wealth being kept fixed. In this case $H(F) = 1-\epsilon$.
Let us now consider a random variable X of mean value m that takes three non-negative values $x_1<x_2<m<x_3$, where
$$ P(X=x_k) = p_k, \quad k=1,2,3; \qquad p_1+p_2+p_3 = 1. $$
In this case
$$ \int_{\mathbb{R}_+}|F(x)-F_m(x)|^2\,dx = p_1^2(x_2-x_1) + (p_1+p_2)^2(m-x_2) + p_3^2(x_3-m). $$
Let
$$ \bar x = \frac{p_1}{p_1+p_2}\,x_1 + \frac{p_2}{p_1+p_2}\,x_2. \qquad(A4) $$
Then $\bar x\in(x_1,x_2)$, and
$$ (p_1+p_2)\,\bar x = p_1x_1 + p_2x_2. $$
Let Y be the two-valued random variable defined by
$$ P(Y=\bar x) = p_1+p_2; \qquad P(Y=x_3) = p_3. $$
Then, thanks to (A4), Y has mean value m. Moreover, if we denote by G the measure function of Y,
$$ \int_{\mathbb{R}_+}|G(x)-F_m(x)|^2\,dx = (p_1+p_2)^2(m-\bar x) + p_3^2(x_3-m). $$
Owing to (A4) we obtain
$$ (p_1+p_2)^2(x_2-\bar x) = (p_1+p_2)^2\left[x_2 - \frac{p_1}{p_1+p_2}x_1 - \frac{p_2}{p_1+p_2}x_2\right] = p_1(p_1+p_2)(x_2-x_1) \ge p_1^2(x_2-x_1). $$
Consequently,
$$ p_1^2(x_2-x_1) + (p_1+p_2)^2(m-x_2) \le (p_1+p_2)^2(x_2-\bar x) + (p_1+p_2)^2(m-x_2) = (p_1+p_2)^2(m-\bar x), $$
which implies
$$ H(F) \le H(G) < 1. $$
The same conclusion holds if we consider a random variable X of mean value m that takes the three non-negative values $x_1<m<x_2<x_3$, and we choose the value $\bar x\in(x_2,x_3)$ as in (A4). The previous computations show that, by suitably choosing this point, we can build, starting from a random variable taking three values, a random variable taking two values, with the same mean and with a larger value of the functional H, which by the previous computations is less than 1. At this point, we can iterate the procedure and conclude that the upper bound in (A1) holds for the measure function $F\in\mathcal P_s^+(\mathbb{R})$ of any discrete random variable X, and finally for any $F\in\mathcal P_s^+(\mathbb{R})$. □

References

1. Banerjee, B.; Chakrabarti, B.K.; Mitra, M.; Mutuswami, S. Inequality measures: The Kolkata index in comparison with other measures. Front. Phys. 2020, 8, 562182.
2. Eliazar, I. A tour of inequality. Ann. Phys. 2018, 389, 306–332.
3. Eliazar, I.; Giorgi, G.M. From Gini to Bonferroni to Tsallis: An inequality-indices trek. Metron 2020, 78, 119–153.
4. Betti, G.; Lemmi, A. (Eds.) Advances on Income Inequality and Concentration Measures; Routledge: New York, NY, USA, 2008.
5. Coulter, P.B. Measuring Inequality: A Methodological Handbook; Westview Press: Boulder, CO, USA, 1989.
6. Cowell, F. Measuring Inequality; Oxford University Press: Oxford, UK, 2011.
7. Hao, L.; Naiman, D.Q. Assessing Inequality; Sage: Los Angeles, CA, USA, 2010.
8. Gini, C. Sulla misura della concentrazione e della variabilità dei caratteri. Atti Del R. Ist. Veneto Sci. Lett. Arti 1914, 73, 1203–1248; English translation in Metron 2005, 3–38.
9. Gini, C. Measurement of inequality of incomes. Econ. J. 1921, 31, 124–126.
10. Hurley, N.; Rickard, S. Comparing measures of sparsity. IEEE Trans. Inf. Theory 2009, 55, 4723–4741.
11. Pietra, G. Delle relazioni tra gli indici di variabilità. Nota I. Atti Reg. Ist. Veneto Sci. Lett. Arti 1915, 74 Pt II, 775–792.
12. Eliazar, I.; Sokolov, I.M. Measuring statistical heterogeneity: The Pietra index. Physica A 2010, 389, 117–125.
13. Bonferroni, C.E. Elementi di Statistica Generale; Libreria Seber: Torino, Italy, 1930.
14. Ghosh, A.; Chattopadhyay, N.; Chakrabarti, B.K. Inequality in societies, academic institutions and science journals: Gini and k-indices. Physica A 2014, 410, 30–34.
15. Banerjee, B.; Chakrabarti, B.K.; Mitra, M.; Mutuswami, S. On the Kolkata index as a measure of income inequality. Physica A 2020, 545, 123178.
16. Lorenz, M. Methods of measuring the concentration of wealth. Publ. Am. Stat. Assoc. 1905, 5, 209–219.
17. Veerasingam, S.; Ranjani, R.; Venkatachalapathy, R.; Bagaev, A.; Mukhanov, V.; Litvinyuk, D.; Mugilarasan, M.; Gurumoorthi, K.; Guganathan, L.; Aboobacker, V.M.; et al. Contributions of Fourier transform infrared spectroscopy in microplastic pollution research: A review. J. Appl. Ecol. 2005, 42, 1121–1128.
18. Couteron, P.; Pelissier, R.; Nicolini, E.A.; Paget, D. Predicting tropical forest stand structure parameters from Fourier transform of very high-resolution remotely sensed canopy images. J. Appl. Ecol. 2005, 42, 1121–1128.
19. Grigoryan, A.M.; Agaian, S.S. New look on quantum representation of images: Fourier transform representation. Quantum Inf. Process. 2020, 19, 148.
20. Makowski, M.; Piotrowski, E.W.; Fraçkiewicz, P.; Szopa, M. Interpretation for the Principle of Minimum Fisher Information. Entropy 2021, 23, 1464.
21. Auricchio, G.; Codegoni, A.; Gualandi, S.; Toscani, G.; Veneroni, M. On the equivalence between Fourier-based and Wasserstein metrics. Rend. Lincei Mat. Appl. 2020, 31, 627–649.
22. Auricchio, G.; Codegoni, A.; Gualandi, S.; Zambon, L. The Fourier discrepancy function. Commun. Math. Sci. 2022, to appear.
23. Xu, K. How Has the Literature on Gini’s Index Evolved in the Past 80 Years? Available online: https://www.mathstat.dal.ca/~kuan/howgini.pdf (accessed on 5 September 2022).
24. Gabetta, G.; Toscani, G.; Wennberg, B. Metrics for probability measures and the trend to equilibrium for solutions of the Boltzmann equation. J. Statist. Phys. 1995, 81, 901–934.
25. Carrillo, J.A.; Toscani, G. Contractive probability metrics and asymptotic behavior of dissipative kinetic equations. Riv. Mat. Univ. Parma 2007, 6, 75–198.
26. Matthes, D.; Toscani, G. On steady measures of kinetic models of conservative economies. J. Statist. Phys. 2008, 130, 1087–1117.
27. Toscani, G.; Villani, C. Probability metrics and uniqueness of the solution to the Boltzmann equation for a Maxwell gas. J. Statist. Phys. 1999, 94, 619–637.
28. Pareschi, L.; Toscani, G. Interacting Multiagent Systems. Kinetic Equations & Monte Carlo Methods; Oxford University Press: Oxford, UK, 2013.
29. Zolotarev, V.M. Metric distances in spaces of random variables and their measures. Math. USSR-Sb. 1976, 30, 373.
30. Oberhettinger, F. Fourier Transforms of Distributions and Their Inverses: A Collection of Tables; Academic Press: Cambridge, MA, USA, 1973.
31. Skellam, J.G. The frequency distribution of the difference between two Poisson variates belonging to different populations. J. R. Stat. Soc. Ser. A 1946, 109, 296.
32. Skellam, J.G. Random dispersal in theoretical populations. Biometrika 1951, 38, 196–218.
33. Zolotarev, V.M. One-Dimensional Stable Distributions; American Mathematical Society: Providence, RI, USA, 1986.
34. Angle, J. The surplus theory of social stratification and the size distribution of personal wealth. Soc. Forces 1986, 65, 293–326.
35. Angle, J. The inequality process as a wealth maximizing process. Physica A 2006, 367, 388–414.
36. Drǎgulescu, A.; Yakovenko, V.M. Statistical mechanics of money. Eur. Phys. J. B 2000, 17, 723–729.
37. Ispolatov, S.; Krapivsky, P.L.; Redner, S. Wealth distributions in asset exchange models. Eur. Phys. J. B 1998, 2, 267–276.
38. Chakraborti, A.; Chakrabarti, B.K. Statistical mechanics of money: How saving propensity affects its distributions. Eur. Phys. J. B 2000, 17, 167–170.
Table 1. Values of the index T for some probability measures.

| Measure | Density | Fourier transform | Index $T(\cdot)$ |
|---|---|---|---|
| Exponential | $\lambda e^{-\lambda x}$ | $\left(1+i\xi/\lambda\right)^{-1}$ | $\dfrac14$ |
| Gamma | $\dfrac{1}{\Gamma(k)\,\theta^k}\,x^{k-1}e^{-x/\theta}$ | $\left(1+i\theta\xi\right)^{-k}$ | $\dfrac{1}{2\sqrt k}\left(1+\dfrac1k\right)^{-(k+1)/2}$, $k>0$ |
| Chi-squared | $\dfrac{1}{2^{k/2}\Gamma(k/2)}\,x^{k/2-1}e^{-x/2}$ | $\left(1+2i\xi\right)^{-k/2}$ | $\dfrac{1}{\sqrt{2k}}\left(1+\dfrac2k\right)^{-(k+2)/4}$, $k\ge1$ |
| Laplace | $\dfrac{1}{2\sigma}\,e^{-|x-m|/\sigma}$ | $\dfrac{e^{-im\xi}}{1+\sigma^2\xi^2}$ | $\dfrac{3\sqrt3}{16}\,\dfrac{\sigma}{m}$ |

