Article

Limit Laws for Sums of Logarithms of k-Spacings

Laboratoire de Statistique Théorique et Appliquée, Université Paris VI, 7 Avenue du Château, F 92340 Bourg-la-Reine, France
Entropy 2025, 27(4), 411; https://doi.org/10.3390/e27040411
Submission received: 10 March 2025 / Revised: 8 April 2025 / Accepted: 9 April 2025 / Published: 10 April 2025
(This article belongs to the Special Issue The Random Walk Path of Pál Révész in Probability)

Abstract

Let $Z_1, \dots, Z_n$ be an i.i.d. sample from the absolutely continuous distribution function $F(z) := \mathbb{P}(Z \le z)$, with density $f(z) := \frac{d}{dz}F(z)$. Let $Z_{1,n} < \dots < Z_{n,n}$ be the order statistics generated by $Z_1, \dots, Z_n$. Let $Z_{0,n} = a := \inf\{z : F(z) > 0\}$ and $Z_{n+1,n} = b := \sup\{z : F(z) < 1\}$ denote the end-points of the common distribution of these observations, and assume that the density $f$ is Riemann integrable and bounded away from $0$ over each interval $[a',b'] \subset (a,b)$. For a specified $k \ge 1$, we establish the asymptotic normality of the sum of logarithms of the $k$-spacings $Z_{i+k-1,n} - Z_{i-1,n}$ for $i = 1, \dots, n-k+2$. Our results complete previous investigations in the literature conducted by Blumenthal, Cressie, Shao and Hahn, and the references therein.

1. Introduction and Results

Let $X_1, X_2, \dots$ be a sequence of independent replicæ of a non-degenerate random variable $X$, with distribution function $F(x) := \mathbb{P}(X \le x)$, defined for $x \in \overline{\mathbb{R}} := \mathbb{R} \cup \{-\infty\} \cup \{\infty\}$. We denote by $a := \inf\{x : F(x) > 0\}$ and $b := \sup\{x : F(x) < 1\}$ the distribution end-points, and we assume that a version of the density $f(x) = \frac{d}{dx}F(x)$ of $X$ exists for $x \in \mathbb{R}$ and is Riemann integrable and bounded away from $0$ over each interval $[a',b'] \subset (a,b)$. For each $n \ge 0$, we set $X_{0,n} := a$ and $X_{n+1,n} := b$, and for each $n \ge 1$, we denote the order statistics of $X_1, \dots, X_n$ by $X_{1,n}, \dots, X_{n,n}$, which fulfill almost surely the strict inequalities
$$a = X_{0,n} < X_{1,n} < \dots < X_{n,n} < b = X_{n+1,n}.$$
Given a specified integer $k \ge 1$, we are concerned with the limiting behavior as $n \to \infty$ of the sums of the logarithms of the $k$-spacings $\{D_{i,n}(k) : 1 \le i \le n-k+2\}$, defined for $n \ge k \ge 1$ by
$$D_{i,n}(k) := X_{i+k-1,n} - X_{i-1,n} \quad \text{for } i = 1, \dots, n-k+2.$$
Since the first and last among the $k$-spacings in (2), namely, $D_{1,n}(k)$ and $D_{n-k+2,n}(k)$, are possibly infinite, we set
$$p_0 := \begin{cases} 1 & \text{when } a > -\infty, \\ 2 & \text{when } a = -\infty, \end{cases} \qquad q_0 := \begin{cases} 1 & \text{when } b < \infty, \\ 2 & \text{when } b = \infty, \end{cases}$$
and consider only the finite $k$-spacings $\{D_{i,n}(k) : p_0 \le i \le n-k+3-q_0\}$.
Our goal is to investigate the limiting behavior, as $n \to \infty$, of the statistic
$$T_n(k;p,q) := \sum_{i=p}^{n-k+3-q} \log\Bigl\{\frac{n}{k}\,D_{i,n}(k)\Bigr\},$$
defined for each set of integers $k \ge 1$, $p \ge p_0$, $q \ge q_0$, and $n \ge k+p+q-3$. In the present paper, we establish the asymptotic normality of $T_n(k;p,q)$ when $k$, $p$, and $q$ are fixed, and $n \to \infty$. The motivation of these statistics is to provide tests of goodness of fit of the null hypothesis $(H.0)$ that $X$ is uniformly distributed on $(0,1)$, with $f(x) = \mathbb{1}_{(0,1)}(x)$, against the alternative. Darling [1] introduced the statistic $T_n := T_n(1;2,2)$, and, later, Blumenthal [2] showed that, under the assumption that $f$ is continuous on $(a,b) \subset (-\infty,\infty)$ and bounded away from $0$ on this interval, we have, as $n \to \infty$,
$$n^{-1/2}\left\{T_n + n\gamma + n\,\mathbf{E}\log f(X)\right\} \overset{d}{\longrightarrow} N\bigl(0,\ \zeta(2)+1+\operatorname{Var}\log f(X)\bigr).$$
Here, and in the sequel, we write "$\overset{d}{\longrightarrow}$" to denote weak convergence and "$\overset{d}{=}$" to denote equality in distribution. We let $N(\mu,\sigma^2)$ stand for the Gaussian distribution with mean $\mu$ and variance $\sigma^2$. Throughout, we use the conventions that $\sum_{\emptyset}(\cdot) := 0$ and $\prod_{\emptyset}(\cdot) := 1$. We denote by $\gamma = 0.577215\ldots$ Euler's constant (see, e.g., (14) below) and set $\zeta(2) = \frac{\pi^2}{6}$ for the value taken by the Riemann zeta function $\zeta(z)$ for $z = 2$ (we refer to Remark 2 in the sequel for some basic facts concerning these mathematical objects).
Under the null hypothesis $(H.0)$ that $X$ is uniformly distributed on $(0,1)$, which implies that $f(X) = 1$ a.s., (6) reduces to
$$n^{-1/2}\left\{T_n + n\gamma\right\} \overset{d}{\longrightarrow} N\bigl(0,\ \zeta(2)+1\bigr).$$
For $0 < \alpha < 1$, denote by $\nu_\alpha$ the upper $\alpha$ quantile of the $N(0,1)$ distribution. It follows from (6) that the test rejecting $(H.0)$ when $n^{-1/2}\{T_n + n\gamma\} \ge \nu_\alpha\sqrt{\zeta(2)+1}$ is asymptotically consistent with size $\alpha$ when $n \to \infty$. This result may be refined by using the exact distribution of $n^{-1/2}\{T_n + n\gamma\}$, which was obtained in tractable form by Deheuvels and Derzko [3]. The corresponding results allow for the practical use of the so-called Darling–Blumenthal test of uniformity.
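As an illustration of the Darling–Blumenthal test described above, the following minimal simulation sketch (ours, not from the paper; it assumes NumPy is available, and the helper name is hypothetical) computes $T_n = T_n(1;2,2)$ from the interior 1-spacings of a uniform sample and standardizes it with the centering $n\gamma$:

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def darling_statistic(x):
    """T_n(1;2,2): sum of log(n * D_i) over the n - 1 finite interior 1-spacings."""
    x = np.sort(np.asarray(x))
    n = x.size
    spacings = np.diff(x)            # D_{2,n}(1), ..., D_{n,n}(1)
    return np.sum(np.log(n * spacings))

rng = np.random.default_rng(0)
n = 5000
t_n = darling_statistic(rng.uniform(size=n))
# Under (H.0), n^{-1/2}{T_n + n*gamma} is asymptotically a centered Gaussian.
z = (t_n + n * EULER_GAMMA) / np.sqrt(n)
```

Comparing $z$ with $\nu_\alpha$ times the limiting standard deviation then yields the test decision.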
Some practical problems arise for the use of the above-described test in the presence of ties (see, e.g., pp. 118–124 in Hájek and Šidák [4]), when some of the observed spacings in the sequence $\{D_{i,n}(1) : 2 \le i \le n\}$ are null, in which case $T_n$ is not properly defined. In practice, the use of the $k$-spacings $\{D_{i,n}(k) : 2 \le i \le n-k+1\}$ for a sufficiently large choice of $k \ge 1$ allows us to overcome this difficulty. This motivates the study of the limiting behavior of $T_n(k;p,q)$ for a specified choice of the integer $k \ge 1$.
We work under the assumptions listed in $(F.1{-}3)$ below:
(F.1)
$\operatorname{Var}\log f(X) < \infty$;
(F.2)
Either $a > -\infty$ and $f$ is Riemann integrable and bounded away from $0$ in a right neighborhood of $a$, or
$a = -\infty$ and $f$ is monotone in a right neighborhood of $a$;
(F.3)
Either $b < \infty$ and $f$ is Riemann integrable and bounded away from $0$ in a left neighborhood of $b$, or
$b = \infty$ and $f$ is monotone in a left neighborhood of $b$.
Our main result is stated in Theorem 1 below. We set $\psi(z) := \frac{d}{dz}\log\Gamma(z)$ for the digamma function (see Remark 2 in the sequel for details on Euler's constant $\gamma$, Riemann's $\zeta(\cdot)$ function, the Gamma function $\Gamma(\cdot)$, and the polygamma functions $\psi^{(m)}(\cdot)$ for $m = 0, 1, \dots$).
Theorem 1.
Under $(F.1{-}3)$, for each specified set of integers $k \ge 1$, $p \ge p_0$, and $q \ge q_0$, we have, as $n \to \infty$,
$$n^{-1/2}\left\{T_n(k;p,q) - n\,\psi(k) + n\log k + n\,\mathbf{E}\log f(X)\right\} \overset{d}{\longrightarrow} N\bigl(0,\ \psi'(k)+1+\operatorname{Var}\log f(X)\bigr).$$
Remark 1.
Under the assumptions above, for any specified set of integers $k \ge 1$, $p, p' \ge p_0$, and $q, q' \ge q_0$, we have
$$\bigl|T_n(k;p,q) - T_n(k;p',q')\bigr| = O_P(1) \quad \text{as } n \to \infty.$$
Therefore, when the conclusion of Theorem 1 holds for some specified pair of integers $(p,q)$ with $p \ge p_0$ and $q \ge q_0$, it holds for all other pairs of integers $(p',q')$ fulfilling this condition. Because of this, we set $T_n(k) := T_n(k;p_0,q_0)$ and give a proof of the theorem for $p = p_0$ and $q = q_0$.
The proof of Theorem 1 is given in the next section, together with additional results of interest. We mention at this point some historical details about the sums of the logarithms of spacings and related topics. The study of spacings has received considerable attention in the literature, ever since the pioneering work of Darling [1] (see, e.g., Pyke [5,6], Deheuvels [7], and the references therein). To the best of our knowledge, the best result coming close to Theorem 1 was obtained by Cressie [8,9] for p = q = 2 and k 1 . Cressie established a variant of (8) under the assumption that the density f of X is a bounded step function (see Theorem 5.1, p. 352 in [8]). For p = q = 2 and k = 1 , a version of Theorem 1 was given by Blumenthal [2] under rather strenuous conditions on f, assumed to be, at least, twice differentiable. Shao and Hahn [10] largely improved Blumenthal’s theorem by showing that (8) holds for p = q = 1 , k = 1 , and when < a < b < under the assumption that f is Riemann integrable and bounded away from 0 on ( a , b ) . Recently, Deheuvels and Derzko [11] (see also [3]) relaxed the assumptions of Blumenthal and Shao and Hahn by giving a version of Theorem 1 for k = 1 , allowing a and b to be possibly infinite. The present paper improves these results by covering the case of an arbitrary k 1 under less strenuous conditions on f. We refer to Shao and Hahn [10], del Pino [12,13], Czekała [14], and the references therein for discussions and further references on the statistical applications of this theorem.
Remark 2.
(1) The following relations and definitions hold, relating Euler's constant $\gamma = 0.577215\ldots$ to the Riemann zeta function $\zeta(\cdot)$ in (6) (see, e.g., Spanier and Oldham [15] and Gradshteyn and Ryzhik [16], p. xxix). We have, for each $r > 1$ and $m = 1, 2, \dots$,
$$\zeta(r) := \frac{1}{\Gamma(r)}\int_0^\infty \frac{t^{r-1}\,dt}{e^t - 1} = \sum_{j=1}^\infty \frac{1}{j^r} \quad \text{for } r > 1,$$
$$\zeta(2m) = \frac{2^{2m-1}\pi^{2m}\,|B_{2m}|}{(2m)!},$$
where $B_n$ is the $n$-th Bernoulli number. In particular, for $r = 2$ and $m = 1$,
$$\zeta(2) = \frac{\pi^2}{6} = \frac{1}{\Gamma(2)}\int_0^\infty \frac{t\,dt}{e^t - 1} = \sum_{j=1}^\infty \frac{1}{j^2}.$$
The Bernoulli numbers $\{B_n : n \ge 0\}$ are defined as the constants in the expansion
$$\frac{t}{e^t - 1} = \sum_{n=0}^\infty B_n\,\frac{t^n}{n!}, \quad \text{which converges for } |t| < 2\pi.$$
The Euler constant $\gamma$ may be defined by either one of the relations
$$\gamma := -\int_0^\infty (\log t)\,e^{-t}\,dt = \lim_{r \downarrow 1}\Bigl\{\zeta(r) - \frac{1}{r-1}\Bigr\} = \lim_{n \to \infty}\Bigl\{\sum_{j=1}^{n-1}\frac{1}{j} - \log n\Bigr\}.$$
(2) The Euler Gamma function $\Gamma(\cdot)$, digamma function $\psi(\cdot)$, and polygamma functions $\psi^{(m)}(\cdot)$ are respectively defined via the relations (see, e.g., §§6.3–6.4 in Abramowitz and Stegun [17]), for $z > 0$ and $m \ge 1$,
$$\Gamma(z) := \int_0^\infty t^{z-1}e^{-t}\,dt,$$
$$\psi(z) := \frac{d}{dz}\log\Gamma(z) = -\gamma + \int_0^\infty \frac{e^{-t} - e^{-zt}}{1 - e^{-t}}\,dt,$$
which fulfills
$$\lim_{z \to \infty}\bigl\{\psi(z) - \log z\bigr\} = 0,$$
$$\psi^{(m)}(z) := \frac{d^{m+1}}{dz^{m+1}}\log\Gamma(z) = (-1)^{m+1}\int_0^\infty \frac{t^m e^{-zt}}{1 - e^{-t}}\,dt.$$
In particular, when $z = n \ge 1$ is an integer, we obtain (see, e.g., Formulas 6.3.2 and 6.4.2 in [17])
$$\Gamma(n) = (n-1)! = \prod_{j=1}^{n-1} j,$$
$$\psi(n) = -\gamma + \sum_{j=1}^{n-1}\frac{1}{j}, \qquad \psi(1) = -\gamma,$$
which fulfill
$$\lim_{n \to \infty}\bigl\{\psi(n) - \log n\bigr\} = 0, \qquad \psi^{(m)}(n) = (-1)^{m+1}\,m!\,\Bigl\{\zeta(m+1) - \sum_{j=1}^{n-1}\frac{1}{j^{m+1}}\Bigr\},$$
$$\psi'(n) = \zeta(2) - \sum_{j=1}^{n-1}\frac{1}{j^2} = \sum_{j=n}^\infty \frac{1}{j^2} = \int_0^\infty \frac{t\,e^{-nt}}{1 - e^{-t}}\,dt = \int_0^1 \frac{u^{n-1}\{-\log u\}}{1-u}\,du.$$
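The integer-argument formulas above are easy to check numerically; the following sketch (ours, not from the paper; it assumes SciPy) verifies $\psi(n) = -\gamma + \sum_{j=1}^{n-1} 1/j$ and $\psi'(n) = \zeta(2) - \sum_{j=1}^{n-1} 1/j^2$:

```python
import numpy as np
from scipy.special import digamma, polygamma

EULER_GAMMA = 0.5772156649015329
ZETA_2 = np.pi ** 2 / 6

for n in (1, 2, 5, 40):
    harmonic = sum(1.0 / j for j in range(1, n))          # sum_{j=1}^{n-1} 1/j
    harmonic_2 = sum(1.0 / j ** 2 for j in range(1, n))   # sum_{j=1}^{n-1} 1/j^2
    assert abs(digamma(n) - (-EULER_GAMMA + harmonic)) < 1e-12
    assert abs(polygamma(1, n) - (ZETA_2 - harmonic_2)) < 1e-12
```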
(3) Routine computations show that, as $n \to \infty$,
$$\bigl\{\psi(n+1) - \log(n+1)\bigr\} - \bigl\{\psi(n) - \log n\bigr\} = \frac{1+o(1)}{2n^2},$$
whence, as $n \to \infty$,
$$n\,\bigl\{\psi(n) - \log n\bigr\} \longrightarrow -\frac{1}{2}.$$
(4) In view of (12) and (20), we readily obtain that, for $k = 1, 2, \dots$,
$$\psi'(k) = \zeta(2) - \sum_{j=1}^{k-1}\frac{1}{j^2} = \sum_{j=k}^\infty \frac{1}{j^2} = \sum_{j=k}^\infty \frac{1}{j(j+1)} + \sum_{j=k}^\infty\Bigl\{\frac{1}{j^2} - \frac{1}{j(j+1)}\Bigr\} = \frac{1}{k} + \sum_{j=k}^\infty\Bigl\{\frac{1}{j^2} - \frac{1}{j(j+1)}\Bigr\} = \frac{1}{k} + \sum_{j=k}^\infty \frac{1}{j^2(j+1)}$$
$$= \frac{1}{k} + \frac{1}{2k^2} + \sum_{j=k}^\infty \frac{1}{j^2(j+1)} - \frac{1}{2}\sum_{j=k}^\infty\Bigl\{\frac{1}{j^2} - \frac{1}{(j+1)^2}\Bigr\} = \frac{1}{k} + \frac{1}{2k^2} + \frac{1}{2}\sum_{j=k}^\infty \frac{1}{j^2(j+1)^2}$$
$$= \frac{1}{k} + \frac{1}{2k^2} + \frac{1}{6k^3} - \frac{1}{6}\sum_{j=k}^\infty\Bigl[\Bigl\{\frac{1}{j^3} - \frac{1}{(j+1)^3}\Bigr\} - \frac{3j^2 + 3j}{j^3(j+1)^3}\Bigr] = \frac{1}{k} + \frac{1}{2k^2} + \frac{1}{6k^3} - \frac{1}{6}\sum_{j=k}^\infty \frac{1}{j^3(j+1)^3}.$$
This, in turn, implies that, as $k \to \infty$,
$$(2k^2 - 2k + 1)\,\psi'(k) - 2k + 1 = (2k^2 - 2k + 1)\Bigl\{\zeta(2) - \sum_{j=1}^{k-1}\frac{1}{j^2}\Bigr\} - 2k + 1 = \frac{1}{3k} + \frac{1}{6k^2} + \frac{1+o(1)}{6k^3} \longrightarrow 0,$$
so that the limiting variance of $n^{-1/2}\,T_n(k;p,q)$ in (8) equals
$$\operatorname{Var}\log f(X) + \frac{1+o(1)}{3k} \longrightarrow \operatorname{Var}\log f(X),$$
as $k \to \infty$.
(5) Likewise, we infer from (14) that, as $k \to \infty$,
$$\psi(k) - \log k = -\gamma + \sum_{j=1}^{k-1}\frac{1}{j} - \log k = \sum_{j=k}^\infty\Bigl\{\log\Bigl(1 + \frac{1}{j}\Bigr) - \frac{1}{j}\Bigr\} = -\frac{1+o(1)}{2k},$$
so that the limiting centering factor of $n^{-1}\,T_n(k;p,q)$ in (8) equals
$$\psi(k) - \log k - \mathbf{E}\log f(X) = -\mathbf{E}\log f(X) - \frac{1+o(1)}{2k} \longrightarrow -\mathbf{E}\log f(X),$$
as $k \to \infty$. By all this, for large specified values of $k$, $n^{-1}\,T_n(k;p,q)$ follows approximately, as $n \to \infty$, a normal distribution, with expectation close to $-\mathbf{E}\log f(X)$ and variance close to $n^{-1}\operatorname{Var}\log f(X)$. This gives a heuristic motivation for the use of the statistic $n^{-1}\,T_n(k;p,q)$ (taken with specified large values of $k$) to estimate the factor $\mathbf{E}\log f(X)$.
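The heuristic above can be tried out numerically. The sketch below (ours, not from the paper; sample size, $k$, and the helper name are arbitrary choices, and SciPy is assumed) uses the centering $\psi(k) - \log k$ to estimate $\mathbf{E}\log f(X)$ from a standard normal sample, for which $\mathbf{E}\log f(X) = -\frac{1}{2}\log(2\pi) - \frac{1}{2} \approx -1.419$:

```python
import numpy as np
from scipy.special import digamma

def log_density_mean_estimate(x, k=10):
    """Estimate E[log f(X)] from overlapping k-spacings, using the
    centering psi(k) - log(k) suggested by the limit law (finite spacings only)."""
    x = np.sort(np.asarray(x))
    n = x.size
    spacings = x[k:] - x[:-k]                      # X_{i+k-1,n} - X_{i-1,n}
    t_bar = np.mean(np.log((n / k) * spacings))    # average of log((n/k) D_{i,n}(k))
    return digamma(k) - np.log(k) - t_bar

rng = np.random.default_rng(1)
sample = rng.normal(size=20000)
est = log_density_mean_estimate(sample, k=10)
true_value = -0.5 * np.log(2 * np.pi) - 0.5        # E log f(X) for the N(0,1) density
```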

2. Proofs

2.1. Properties of the Gauss Hypergeometric Function

In our proofs, we make use of a series of identities related to hypergeometric functions, which are of independent interest. For any $x \in \mathbb{R}$ and $n \in \mathbb{N}$, define the Pochhammer function by
$$(x)_n := x(x+1)\cdots(x+n-1) \quad \text{for } n = 1, 2, \dots, \qquad (x)_0 := 1.$$
We note that, whenever $x > 0$ and $n \in \mathbb{N} := \{0, 1, \dots\}$,
$$(x)_n = \frac{\Gamma(x+n)}{\Gamma(x)}.$$
In particular, we have $(1)_n = n!$ for each $n \in \mathbb{N}$. We refer to 18:3:1, 18:3:2, and 18:10:1 in Spanier and Oldham [15] for additional properties of the Pochhammer function. Recalling (16), (23) and (24), we obtain readily that, for $-x \notin \{0, 1, \dots, n-1\}$, $(x)_n \neq 0$ and
$$\frac{d}{dx}(x)_n = (x)_n \sum_{j=0}^{n-1}\frac{1}{x+j} = (x)_n\bigl\{\psi(x+n) - \psi(x)\bigr\},$$
$$\frac{d^2}{dx^2}(x)_n = (x)_n\Bigl[\psi'(x+n) - \psi'(x) + \bigl\{\psi(x+n) - \psi(x)\bigr\}^2\Bigr].$$
The usual Gauss hypergeometric function is defined for $-c \notin \mathbb{N}$ and $z \in \mathbb{C}$ with $|z| < 1$ by
$$F(a,b;c;z) := {}_2F_1(a,b;c;z) := \sum_{m=0}^\infty \frac{(a)_m (b)_m}{(c)_m}\,\frac{z^m}{m!}.$$
The function $F(a,b;c;z)$ is defined for $|z| = 1$ when $\operatorname{Re}(c-a-b) > 0$ (see, e.g., Ch. 4 in Rainville). In particular (see, e.g., 60:7:2 in [15]), when this condition holds,
$$F(a,b;c;1) = \frac{\Gamma(c)\,\Gamma(c-a-b)}{\Gamma(c-a)\,\Gamma(c-b)}.$$
In particular, we have
$$F(1,b;c;1) = \sum_{m=0}^\infty \frac{(b)_m}{(c)_m} = \frac{\Gamma(c)\,\Gamma(c-b-1)}{\Gamma(c-1)\,\Gamma(c-b)} = \frac{c-1}{c-b-1}.$$
The general hypergeometric function of order $(p,q)$ is defined for integers $p \le q+1$ by
$${}_pF_q(a_1,\dots,a_p;b_1,\dots,b_q;z) := \sum_{m=0}^\infty \frac{(a_1)_m\cdots(a_p)_m}{(b_1)_m\cdots(b_q)_m}\,\frac{z^m}{m!}.$$
The following identity relates higher-order hypergeometric functions to lower-order ones. We have
$${}_{p+1}F_{q+1}(a_1,\dots,a_p,r;b_1,\dots,b_q,s;z) = \frac{\Gamma(s)}{\Gamma(r)\,\Gamma(s-r)}\int_0^1 t^{r-1}(1-t)^{s-r-1}\,{}_pF_q(a_1,\dots,a_p;b_1,\dots,b_q;tz)\,dt.$$
In particular,
$${}_3F_2(1,b,r;c,s;1) = \sum_{m=0}^\infty \frac{(b)_m (r)_m}{(c)_m (s)_m} = \frac{\Gamma(s)}{\Gamma(r)\,\Gamma(s-r)}\int_0^1 t^{r-1}(1-t)^{s-r-1}\,F(1,b;c;t)\,dt = \frac{\Gamma(s)}{\Gamma(r)\,\Gamma(s-r)}\int_0^1 t^{r-1}(1-t)^{s-r-1}\sum_{m=0}^\infty \frac{(b)_m}{(c)_m}\,t^m\,dt.$$
For $a > 1$, $b > 1$, $c > 1$, and $0 \le x < 1$, we have (see, e.g., 60:10:3 in [15])
$$\int_0^x F(a,b;c;t)\,dt = \frac{c-1}{(a-1)(b-1)}\bigl\{F(a-1,b-1;c-1;x) - 1\bigr\}.$$
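The antiderivative identity above can be checked numerically; the following sketch (ours, with arbitrary parameter values; it assumes SciPy's `hyp2f1` and `quad`) compares both sides:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

a, b, c, x = 2.5, 1.5, 3.0, 0.7
# Left-hand side: numerical integral of F(a,b;c;t) over [0, x]
lhs, _ = quad(lambda t: hyp2f1(a, b, c, t), 0, x)
# Right-hand side: closed form via the lower-parameter hypergeometric function
rhs = (c - 1) / ((a - 1) * (b - 1)) * (hyp2f1(a - 1, b - 1, c - 1, x) - 1)
assert abs(lhs - rhs) < 1e-8
```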
Proposition 1.
We have, for $c > b+1$, $-c \notin \mathbb{N}$,
$$\sum_{m=0}^\infty \frac{(b)_m}{(c)_m} = \frac{c-1}{c-b-1},$$
$$\sum_{m=0}^\infty \frac{(b)_m}{(c)_m}\bigl\{\psi(c+m) - \psi(c)\bigr\} = \frac{b}{(c-b-1)^2},$$
and
$$\sum_{m=0}^\infty \frac{(b)_m}{(c)_m}\bigl\{\psi(b+m) - \psi(b)\bigr\} = \frac{c-1}{(c-b-1)^2}.$$
Proof. 
By combining (27) with (28), taken with $a = 1$, so that $(1)_m = m!$, and $z = 1$, we obtain
$$\sum_{m=0}^\infty \frac{(b)_m}{(c)_m} = F(1,b;c;1) = \frac{\Gamma(c)\,\Gamma(c-b-1)}{\Gamma(c-1)\,\Gamma(c-b)} = \frac{c-1}{c-b-1},$$
which is (32). By combining (25) with (32), we obtain, in turn, that
$$\frac{\partial}{\partial c}\sum_{m=0}^\infty \frac{(b)_m}{(c)_m} = \sum_{m=0}^\infty \frac{\partial}{\partial c}\,\frac{(b)_m}{(c)_m} = -\sum_{m=0}^\infty \frac{(b)_m}{(c)_m}\bigl\{\psi(c+m) - \psi(c)\bigr\} = \frac{\partial}{\partial c}\Bigl\{\frac{c-1}{c-b-1}\Bigr\} = -\frac{b}{(c-b-1)^2},$$
which is (33). The proof of (34) follows along the same lines with the formal replacement of $c$ by $b$. Namely, we obtain
$$\frac{\partial}{\partial b}\sum_{m=0}^\infty \frac{(b)_m}{(c)_m} = \sum_{m=0}^\infty \frac{\partial}{\partial b}\,\frac{(b)_m}{(c)_m} = \sum_{m=0}^\infty \frac{(b)_m}{(c)_m}\bigl\{\psi(b+m) - \psi(b)\bigr\} = \frac{\partial}{\partial b}\Bigl\{\frac{c-1}{c-b-1}\Bigr\} = \frac{c-1}{(c-b-1)^2},$$
which is (34). □
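A quick numerical check of (32)–(34) (our sketch, not from the paper, with the arbitrary choice $b = 1.5$, $c = 4.5$; the Pochhammer ratios are evaluated in log space via `gammaln` to avoid overflow):

```python
import numpy as np
from scipy.special import digamma, gammaln

b, c = 1.5, 4.5                    # any c > b + 1 works
m = np.arange(20000)
# (b)_m / (c)_m = exp(lgamma(b+m) - lgamma(b) - lgamma(c+m) + lgamma(c))
ratio = np.exp(gammaln(b + m) - gammaln(b) - gammaln(c + m) + gammaln(c))
s32 = np.sum(ratio)                                           # expect (c-1)/(c-b-1)
s33 = np.sum(ratio * (digamma(c + m) - digamma(c)))           # expect b/(c-b-1)^2
s34 = np.sum(ratio * (digamma(b + m) - digamma(b)))           # expect (c-1)/(c-b-1)^2
```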
Proposition 2.
We have, for $c > b > 0$,
$$\sum_{m=1}^\infty \frac{(b)_m}{m\,(c)_m} = \psi(c) - \psi(c-b),$$
$$\sum_{m=1}^\infty \frac{(b)_m}{m\,(c)_m}\bigl\{\psi(c+m) - \psi(c)\bigr\} = \psi'(c-b) - \psi'(c),$$
and
$$\sum_{m=1}^\infty \frac{(b)_m}{m\,(c)_m}\bigl\{\psi(b+m) - \psi(b)\bigr\} = \psi'(c-b).$$
Proof. 
By (27) and (28), we have, for $0 \le x < 1$,
$$\frac{c}{h\,b}\bigl\{F(h,b;c;x) - 1\bigr\} = \frac{c}{h\,b}\sum_{m=1}^\infty \frac{(h)_m (b)_m}{(c)_m}\,\frac{x^m}{m!} = \sum_{m=0}^\infty \frac{(1+h)_m (1+b)_m}{(1+c)_m}\,\frac{x^{m+1}}{(m+1)!},$$
whence, by letting $x \uparrow 1$ and making use of (28),
$$\frac{c}{h\,b}\bigl\{F(h,b;c;1) - 1\bigr\} = \frac{c}{h\,b}\left\{\frac{\Gamma(c)\,\Gamma(c-h-b)}{\Gamma(c-h)\,\Gamma(c-b)} - 1\right\} = \sum_{m=0}^\infty \frac{(1+h)_m (b+1)_m}{(c+1)_m}\,\frac{1}{(m+1)!}.$$
When $h \downarrow 0$, we have, for each $m \ge 1$,
$$\frac{(1+h)_m}{(m+1)!} \longrightarrow \frac{1}{m+1},$$
so that
$$\lim_{h \to 0}\sum_{m=0}^\infty \frac{(1+h)_m (b+1)_m}{(m+1)!\,(c+1)_m} = \sum_{m=0}^\infty \frac{(b+1)_m}{(c+1)_m}\,\frac{1}{m+1} = \frac{c}{b}\,\lim_{h \to 0}\frac{1}{h}\left\{\frac{\Gamma(c)\,\Gamma(c-h-b)}{\Gamma(c-h)\,\Gamma(c-b)} - 1\right\}.$$
Next, we make use of (16), which yields the expansions
$$\frac{\Gamma(c-h)}{\Gamma(c)} = 1 - h\,(1+o(1))\,\psi(c) \quad \text{as } h \to 0,$$
and
$$\frac{\Gamma(c-h-b)}{\Gamma(c-b)} = 1 - h\,(1+o(1))\,\psi(c-b) \quad \text{as } h \to 0.$$
It follows readily that
$$\lim_{h \to 0}\frac{1}{h}\left\{\frac{\Gamma(c)\,\Gamma(c-h-b)}{\Gamma(c-h)\,\Gamma(c-b)} - 1\right\} = \psi(c) - \psi(c-b).$$
By all this, we obtain
$$\sum_{m=1}^\infty \frac{(b)_m}{m\,(c)_m} = \frac{b}{c}\sum_{m=0}^\infty \frac{(b+1)_m}{(c+1)_m}\,\frac{1}{m+1} = \psi(c) - \psi(c-b),$$
which is (35). Given (35), we infer from (25) and (35) that
$$\psi'(c) - \psi'(c-b) = \frac{\partial}{\partial c}\bigl\{\psi(c) - \psi(c-b)\bigr\} = \lim_{h \to 0}\frac{1}{h}\sum_{m=1}^\infty\left\{\frac{(b)_m}{m\,(c+h)_m} - \frac{(b)_m}{m\,(c)_m}\right\} = \sum_{m=1}^\infty \frac{(b)_m}{m}\,\lim_{h \to 0}\frac{1}{h}\left\{\frac{1}{(c+h)_m} - \frac{1}{(c)_m}\right\} = -\sum_{m=1}^\infty \frac{(b)_m}{m\,(c)_m^2}\,\frac{d}{dc}(c)_m = -\sum_{m=1}^\infty \frac{(b)_m}{m\,(c)_m}\bigl\{\psi(c+m) - \psi(c)\bigr\},$$
which is (36). Likewise, we infer from (25) and (35) that
$$\psi'(c-b) = \frac{\partial}{\partial b}\bigl\{\psi(c) - \psi(c-b)\bigr\} = \lim_{h \to 0}\frac{1}{h}\sum_{m=1}^\infty\left\{\frac{(b+h)_m}{m\,(c)_m} - \frac{(b)_m}{m\,(c)_m}\right\} = \sum_{m=1}^\infty \frac{1}{m\,(c)_m}\,\lim_{h \to 0}\frac{1}{h}\bigl\{(b+h)_m - (b)_m\bigr\} = \sum_{m=1}^\infty \frac{1}{m\,(c)_m}\,\frac{d}{db}(b)_m = \sum_{m=1}^\infty \frac{(b)_m}{m\,(c)_m}\bigl\{\psi(b+m) - \psi(b)\bigr\},$$
which yields (37). □
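Likewise, (35)–(37) can be confirmed numerically (our sketch, not from the paper; same log-space trick, with the arbitrary choice $c > b > 0$):

```python
import numpy as np
from scipy.special import digamma, polygamma, gammaln

b, c = 1.5, 4.5                    # any c > b > 0 works
m = np.arange(1, 40000)
# (b)_m / (m (c)_m), evaluated in log space to avoid overflow
ratio = np.exp(gammaln(b + m) - gammaln(b) - gammaln(c + m) + gammaln(c)) / m
s35 = np.sum(ratio)                                           # expect psi(c) - psi(c-b)
s36 = np.sum(ratio * (digamma(c + m) - digamma(c)))           # expect psi'(c-b) - psi'(c)
s37 = np.sum(ratio * (digamma(b + m) - digamma(b)))           # expect psi'(c-b)
```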

2.2. Preliminary Results and Moment Calculations

The special case where $X$ follows a uniform distribution on $(0,1)$ will play an instrumental role in our proofs. For a general $F$, keeping in mind that the existence of $f(x) = \frac{d}{dx}F(x)$ implies that $F$ is continuous, we set $U_1 = F(X_1), U_2 = F(X_2), \dots$, and we observe that these random variables are independent, each with a uniform distribution on $(0,1)$. For each $n \ge 1$, we denote by
$$U_{0,n} := 0 < U_{1,n} = F(X_{1,n}) < \dots < U_{n,n} = F(X_{n,n}) < U_{n+1,n} := 1,$$
the order statistics of $0, 1, U_1, \dots, U_n$, with the convention that $U_{0,n} = F(X_{0,n}) = F(a) = 0$ and $U_{n+1,n} = F(X_{n+1,n}) = F(b) = 1$ for $n \ge 0$. We note that the inequalities in (38) hold a.s. We therefore assume, without loss of generality, that they are fulfilled on the probability space on which $\{X_n : n \ge 1\}$ is defined. The uniform $k$-spacings are then given, for $1 \le k \le n+1$, by
$$\Delta_{i,n}(k) = U_{i+k-1,n} - U_{i-1,n} \quad \text{for } i = 1, \dots, n-k+2.$$
For $r \ge 0$, denote by $S_r \overset{d}{=} \Gamma(r)$ a random variable following a Gamma distribution with mean $r$. Namely, $S_0 := 0$, and, for $r > 0$, $S_r$ has density on $\mathbb{R}$ given by
$$h_r(s) := \begin{cases} \dfrac{s^{r-1}}{\Gamma(r)}\,e^{-s} & \text{for } s > 0, \\ 0 & \text{for } s \le 0, \end{cases}$$
where $\Gamma(r) := \int_0^\infty s^{r-1}e^{-s}\,ds$ for $r > 0$. When $r = 1$, $S_1 \overset{d}{=} \Gamma(1)$ is exponentially distributed with a unit mean. In this special case, we use the alternative notation $S_1 \overset{d}{=} E(1)$. In general, for $\lambda > 0$, we denote by $Z \overset{d}{=} E(\lambda)$ an exponentially distributed r.v. $Z$ with mean $1/\lambda$, fulfilling $\lambda Z \overset{d}{=} E(1)$. For $p > 0$ and $q > 0$, we denote by $R_{p,q} \overset{d}{=} \beta(p,q)$ a random variable following a Beta distribution with parameters $p$ and $q$, meaning that $R_{p,q}$ has density given by
$$g_{p,q}(s) := \begin{cases} \dfrac{s^{p-1}(1-s)^{q-1}}{\beta(p,q)} & \text{for } 0 < s < 1, \\ 0 & \text{for } s \notin (0,1). \end{cases}$$
The functions $\beta(\cdot,\cdot)$ and $\Gamma(\cdot)$ are related by Euler's formula. For any $p > 0$ and $q > 0$, we have
$$\beta(p,q) := \int_0^1 u^{p-1}(1-u)^{q-1}\,du = \frac{\Gamma(p)\,\Gamma(q)}{\Gamma(p+q)}.$$
We extend this definition when either $p = 0$ or $q = 0$ by setting
$$R_{0,q} = 0 \overset{d}{=} \beta(0,q)\ \ (q > 0) \quad \text{and} \quad R_{p,0} = 1 \overset{d}{=} \beta(p,0)\ \ (p > 0).$$
We refer to Ch. 17, 19, and 25 in Johnson, Kotz, and Balakrishnan [18,19] for useful details concerning the Gamma, exponential, and Beta distributions. In particular, we have the following useful distributional identity (see, e.g., p. 12 in David [20]). For any $0 \le j \le j+i \le n+1$,
$$U_{i+j,n} - U_{j,n} \overset{d}{=} \beta(i,\,n-i+1).$$
In particular, we have the distributional identity, for any $0 \le i \le n+1$,
$$U_{i,n} \overset{d}{=} 1 - U_{n-i+1,n} \overset{d}{=} \beta(i,\,n-i+1).$$
The following lemma plays an instrumental role in our proofs.
Lemma 1.
For $p > 0$, $q > 0$, and $r > 0$, let $S_p \overset{d}{=} \Gamma(p)$, $S_q \overset{d}{=} \Gamma(q)$, and $S_r \overset{d}{=} \Gamma(r)$ be three independent Gamma-distributed r.v.'s. Then, the r.v.'s
$$R_{p,q} := \frac{S_p}{S_p + S_q} \overset{d}{=} \beta(p,q) \quad \text{and} \quad T_{p,q} := S_p + S_q \overset{d}{=} \Gamma(p+q),$$
are independent and follow $\beta(p,q)$ and $\Gamma(p+q)$ distributions, respectively. Set further
$$R_{p,q,r} := \frac{S_p + S_q}{S_p + S_q + S_r} \overset{d}{=} \beta(p+q,\,r),$$
$$R'_{p,q,r} := \frac{S_p + S_r}{S_p + S_q + S_r} \overset{d}{=} \beta(p+r,\,q),$$
and
$$T_{p,q,r} := S_p + S_q + S_r \overset{d}{=} \Gamma(p+q+r).$$
Then, the r.v. $T_{p,q,r}$ is independent of the random pair $(R_{p,q,r},\,R'_{p,q,r})$.
Proof. 
Several variants of the above results have been given in the literature (see, e.g., §25.2, p. 212 in Johnson, Kotz and Balakrishnan [19]). As the proofs are simple, we give details, limiting ourselves to (46). By the change of variables $u = s/(s+t)$ and $v = s+t$, namely $s = uv$ and $t = (1-u)v$, the joint density of $(R_{p,q},\,T_{p,q})$ is given by
$$g(u,v) = h_p(uv)\,h_q\bigl((1-u)v\bigr)\left|\det\begin{pmatrix} \partial s/\partial u & \partial s/\partial v \\ \partial t/\partial u & \partial t/\partial v \end{pmatrix}\right| = \frac{(uv)^{p-1}\bigl\{(1-u)v\bigr\}^{q-1}e^{-v}}{\Gamma(p)\,\Gamma(q)}\cdot v = \frac{u^{p-1}(1-u)^{q-1}}{\beta(p,q)}\cdot\frac{v^{p+q-1}e^{-v}}{\Gamma(p+q)} = g_{p,q}(u)\,h_{p+q}(v),$$
where we have used Euler's formula $\beta(p,q) = \Gamma(p)\Gamma(q)/\Gamma(p+q)$, which is sufficient for our needs. □
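A Monte Carlo sanity check of (46) (our sketch, not from the paper; seed, sample size, and tolerances are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q, N = 2.0, 3.0, 200_000
s_p = rng.gamma(p, size=N)
s_q = rng.gamma(q, size=N)
r = s_p / (s_p + s_q)     # should follow beta(p, q), with mean p/(p+q)
t = s_p + s_q             # should follow Gamma(p+q), with mean p+q
assert abs(r.mean() - p / (p + q)) < 0.01
assert abs(t.mean() - (p + q)) < 0.05
# near-zero empirical correlation, as the claimed independence requires
assert abs(np.corrcoef(r, t)[0, 1]) < 0.02
```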
In view of (38), set $Y_1 = -\log U_1, Y_2 = -\log U_2, \dots$, and observe that $Y_1, Y_2, \dots$ constitutes a sequence of independent $E(1)$, unit mean exponential random variables. For each $n \ge 1$, the order statistics of $Y_1, \dots, Y_n$ fulfill the relations
$$0 < Y_{1,n} = -\log U_{n,n} < \dots < Y_{i,n} = -\log U_{n-i+1,n} < \dots < Y_{n,n} = -\log U_{1,n} < \infty.$$
Set, for convenience, $Y_{0,n} = -\log U_{n+1,n} = 0$ for $n \ge 1$. We will need the following useful fact, closely related to Lemma 1 (refer to Sukhatme [21] and Malmquist [22], and see, e.g., pp. 20–21 in David [20]):
Fact 1.
For each $n \ge 1$, the random variables
$$\omega_{i,n} := (n-i+1)\bigl\{Y_{i,n} - Y_{i-1,n}\bigr\} \overset{d}{=} E(1), \quad i = 1, \dots, n,$$
are independent, each following an exponential $E(1)$ distribution.
It will be convenient, later on, to make use of the relation following from (51),
$$Y_{i,n} = -\log U_{n-i+1,n} = \sum_{j=1}^{i} \frac{\omega_{j,n}}{n-j+1}, \quad i = 1, \dots, n.$$
In Lemma 2 below, we evaluate the moments of the logarithms of Gamma-distributed random variables, which will play an instrumental role later on. As usual, we make use of the convention that $\sum_{\emptyset}(\cdot) = 0$.
Lemma 2.
Let $k \ge 1$ be an integer, and let $S_k \overset{d}{=} \Gamma(k)$ be a Gamma-distributed random variable with mean $k$. Then, for each $k \ge 1$,
$$\gamma_k := \mathbf{E}\log S_k = \psi(k) = -\gamma + \sum_{j=1}^{k-1}\frac{1}{j},$$
$$r_k := \mathbf{E}\bigl\{S_k\log S_k\bigr\} = k\,\psi(k) + 1 = k\,\gamma_k + 1 = k\Bigl\{-\gamma + \sum_{j=1}^{k-1}\frac{1}{j}\Bigr\} + 1,$$
and
$$s_k := \mathbf{E}\bigl\{\log S_k\bigr\}^2 = \psi(k)^2 + \psi'(k) = \zeta(2) - \sum_{j=1}^{k-1}\frac{1}{j^2} + \gamma_k^2 = \sum_{j=k}^\infty \frac{1}{j^2} + \Bigl\{\gamma - \sum_{j=1}^{k-1}\frac{1}{j}\Bigr\}^2 = \frac{\pi^2}{6} - \sum_{j=1}^{k-1}\frac{1}{j^2} + \Bigl\{\gamma - \sum_{j=1}^{k-1}\frac{1}{j}\Bigr\}^2.$$
Proof. 
Recalling the definition (40) of $h_r(s)$ for $r > 0$ and the definition (53) of $\gamma_k$, we obtain, by integrating by parts, for $k \ge 2$,
$$\gamma_k = \int_0^\infty (\log s)\,h_k(s)\,ds = -\int_0^\infty (\log s)\,\frac{s^{k-1}}{(k-1)!}\,d\bigl\{e^{-s}\bigr\} = \Bigl[-(\log s)\,\frac{s^{k-1}}{(k-1)!}\,e^{-s}\Bigr]_{s=0}^{s=\infty} - \int_0^\infty (\log s)\,\frac{s^{k-2}}{(k-2)!}\,d\bigl\{e^{-s}\bigr\} + \int_0^\infty \frac{s^{k-2}}{(k-1)!}\,e^{-s}\,ds = \gamma_{k-1} + \frac{1}{k-1}.$$
For $k = 1$, these relations reduce to (see, e.g., Formula 4.331, p. 573, in Gradshteyn and Ryzhik [16])
$$\gamma_1 = \int_0^\infty (\log s)\,h_1(s)\,ds = \int_0^\infty (\log s)\,e^{-s}\,ds = -\gamma.$$
Recalling the definition (18) of $\psi^{(m)}$ for $m = 1, 2, \dots$, and the definition (53) of $\gamma_k$, by a straightforward induction on $k$, we infer from the above relations that, for an arbitrary (integer) $k \ge 1$,
$$\gamma_k = \gamma_{k-1} + \frac{1}{k-1} = \dots = \gamma_1 + \sum_{j=1}^{k-1}\frac{1}{j} = \psi(k),$$
which is (53). Likewise, in view of (54) and (56), by integrating by parts, we see that, for $k \ge 2$,
$$r_k = \int_0^\infty s(\log s)\,h_k(s)\,ds = -\int_0^\infty (\log s)\,\frac{s^k}{(k-1)!}\,d\bigl\{e^{-s}\bigr\} = \Bigl[-(\log s)\,\frac{s^k}{(k-1)!}\,e^{-s}\Bigr]_{s=0}^{s=\infty} - \int_0^\infty (\log s)\,\frac{k\,s^{k-1}}{(k-1)!}\,d\bigl\{e^{-s}\bigr\} + \int_0^\infty \frac{s^{k-1}}{(k-1)!}\,e^{-s}\,ds = k\,\gamma_k + 1 = k\Bigl\{-\gamma + \sum_{j=1}^{k-1}\frac{1}{j}\Bigr\} + 1,$$
which is (54). In the same spirit, to establish (56), we integrate by parts to obtain the recursion formula, for $k \ge 2$,
$$s_k = \int_0^\infty (\log s)^2\,h_k(s)\,ds = -\int_0^\infty (\log s)^2\,\frac{s^{k-1}}{(k-1)!}\,d\bigl\{e^{-s}\bigr\} = \Bigl[-(\log s)^2\,\frac{s^{k-1}}{(k-1)!}\,e^{-s}\Bigr]_{s=0}^{s=\infty} - \int_0^\infty (\log s)^2\,\frac{s^{k-2}}{(k-2)!}\,d\bigl\{e^{-s}\bigr\} + \frac{2}{k-1}\int_0^\infty (\log s)\,\frac{s^{k-2}}{(k-2)!}\,e^{-s}\,ds = s_{k-1} + \frac{2\,\gamma_{k-1}}{k-1}.$$
By combining (58) with (60), we readily obtain that
$$s_k - \gamma_k^2 = s_k - \Bigl\{\gamma_{k-1} + \frac{1}{k-1}\Bigr\}^2 = s_{k-1} + \frac{2\,\gamma_{k-1}}{k-1} - \Bigl\{\gamma_{k-1} + \frac{1}{k-1}\Bigr\}^2 = s_{k-1} - \gamma_{k-1}^2 - \frac{1}{(k-1)^2} = s_{k-2} - \gamma_{k-2}^2 - \frac{1}{(k-2)^2} - \frac{1}{(k-1)^2} = \dots = s_1 - \gamma_1^2 - \sum_{j=1}^{k-1}\frac{1}{j^2}.$$
For $k = 1$, we combine (57) with the fact that (see, e.g., Formula 4.335, p. 574, in Gradshteyn and Ryzhik [16])
$$s_1 - \gamma_1^2 = \int_0^\infty (\log s)^2\,e^{-s}\,ds - \gamma^2 = \frac{\pi^2}{6} = \zeta(2) = \sum_{j=1}^\infty \frac{1}{j^2}.$$
In view of (53), (60) and (61), the relation (56) is straightforward. □
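The moment formulas of Lemma 2 can be confirmed by direct numerical integration against the $\Gamma(k)$ density (our sketch, not from the paper, for the arbitrary choice $k = 3$; it assumes SciPy):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma, polygamma, gamma as gamma_fn

k = 3
h_k = lambda s: s ** (k - 1) * np.exp(-s) / gamma_fn(k)   # Gamma(k) density
gamma_k, _ = quad(lambda s: np.log(s) * h_k(s), 0, np.inf)          # expect psi(k)
r_k, _ = quad(lambda s: s * np.log(s) * h_k(s), 0, np.inf)          # expect k*psi(k) + 1
s_k, _ = quad(lambda s: np.log(s) ** 2 * h_k(s), 0, np.inf)         # expect psi(k)^2 + psi'(k)
```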
Lemma 3.
Let $S_k \overset{d}{=} \Gamma(k)$ denote a Gamma-distributed random variable with expectation $k$. Then, for each integer $k \ge 1$, we have
$$\sigma_k^2 := \operatorname{Var}\log S_k = s_k - \gamma_k^2 = \psi'(k) = \zeta(2) - \sum_{i=1}^{k-1}\frac{1}{i^2} = \frac{\pi^2}{6} - \sum_{i=1}^{k-1}\frac{1}{i^2} = \sum_{i=k}^\infty \frac{1}{i^2}.$$
Proof. 
Even though the relation (62) is a direct consequence of (53) and (56), we give below an alternate proof of this statement based upon Lemma 1. The corresponding arguments will be instrumental for the proof of the forthcoming Proposition 3. Making use of (46) in Lemma 1, we may write, for each integer $m \ge 1$ and $\ell \ge 1$,
$$\log S_m = \log\Bigl\{\frac{S_m}{S_m + S_\ell}\Bigr\} + \log\bigl\{S_m + S_\ell\bigr\} =: -V_1 + V_2,$$
where
$$V_1 := -\log\Bigl\{\frac{S_m}{S_m + S_\ell}\Bigr\} \overset{d}{=} -\log Z,$$
with $Z \overset{d}{=} \beta(m,\ell)$, and
$$V_2 := \log\bigl\{S_m + S_\ell\bigr\} \overset{d}{=} \log Y,$$
with $Y \overset{d}{=} \Gamma(m+\ell)$, are independent r.v.'s. Following the arguments of Lemma 1, we note that, in the above relations, $S_m \overset{d}{=} \Gamma(m)$ and $S_\ell \overset{d}{=} \Gamma(\ell)$ are two independent Gamma-distributed random variables, with expectations equal to $m$ and $\ell$, respectively. In view of (50), by combining (44) with (45) and (46), we readily obtain that
$$V_1 = -\log\Bigl\{\frac{S_m}{S_m + S_\ell}\Bigr\} \overset{d}{=} -\log U_{m,m+\ell-1} = Y_{\ell,m+\ell-1} = \sum_{j=1}^{\ell}\frac{\omega_{j,m+\ell-1}}{m+\ell-j} \overset{d}{=} -\log\bigl(1 - U_{\ell,m+\ell-1}\bigr) = -\log\Bigl(1 - \exp\Bigl(-\sum_{j=1}^{m}\frac{\omega_{j,m+\ell-1}}{m+\ell-j}\Bigr)\Bigr).$$
Set for convenience $\rho_p := \sigma_p^2 = \operatorname{Var}\log S_p$, with $S_p \overset{d}{=} \Gamma(p)$ for $p \ge 1$; we infer from (63) and (64) that
$$\rho_m = \operatorname{Var} V_2 + \operatorname{Var} V_1 = \rho_{m+\ell} + \operatorname{Var} V_1 = \rho_{m+\ell} + \sum_{j=1}^{\ell}\frac{1}{(m+\ell-j)^2} = \rho_{m+\ell} + \sum_{i=m}^{m+\ell-1}\frac{1}{i^2}.$$
By combining (57) and (61) with (62), we see that $\rho_1 = \zeta(2)$. Therefore, (62) follows readily from (66), taken with $m = 1$ and $m+\ell = k$. □
Lemma 4.
Let $Z \overset{d}{=} \beta(m,\ell)$, where $m \ge 1$ and $\ell \ge 1$ are integers. Then,
$$\mathbf{E}(-\log Z) = \sum_{i=m}^{m+\ell-1}\frac{1}{i} = \psi(m+\ell) - \psi(m),$$
and
$$\operatorname{Var}(\log Z) = \sum_{i=m}^{m+\ell-1}\frac{1}{i^2} = \psi'(m) - \psi'(m+\ell).$$
Proof. 
We may write, by (19) and (20),
$$\mathbf{E}(-\log Z) = \mathbf{E}(V_1) = \mathbf{E}\Bigl\{\sum_{j=1}^{\ell}\frac{\omega_{j,m+\ell-1}}{m+\ell-j}\Bigr\} = \sum_{j=1}^{\ell}\frac{1}{m+\ell-j} = \psi(m+\ell) - \psi(m),$$
and
$$\operatorname{Var}(\log Z) = \operatorname{Var}(V_1) = \operatorname{Var}\Bigl\{\sum_{j=1}^{\ell}\frac{\omega_{j,m+\ell-1}}{m+\ell-j}\Bigr\} = \sum_{j=1}^{\ell}\frac{1}{(m+\ell-j)^2} = \psi'(m) - \psi'(m+\ell),$$
as sought. □
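Similarly, Lemma 4 can be verified by integrating against the $\beta(m,\ell)$ density (our sketch, not from the paper, for the arbitrary choice $m = 3$, $\ell = 4$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma, polygamma, beta as beta_fn

m, l = 3, 4
g = lambda s: s ** (m - 1) * (1 - s) ** (l - 1) / beta_fn(m, l)   # beta(m, l) density
e_neg_log, _ = quad(lambda s: -np.log(s) * g(s), 0, 1)            # expect psi(m+l) - psi(m)
e_log_sq, _ = quad(lambda s: np.log(s) ** 2 * g(s), 0, 1)
var_log = e_log_sq - e_neg_log ** 2                               # expect psi'(m) - psi'(m+l)
```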
Lemma 5.
Let $\{\omega_m : m \ge 1\}$ denote an i.i.d. sequence of exponentially distributed $E(1)$ random variables. For any $m \ge 1$, set $S_m = \omega_1 + \dots + \omega_m \overset{d}{=} \Gamma(m)$ and $U^*_{m,n} := S_m/S_{n+1} \overset{d}{=} \beta(m,\,n-m+1)$. Then, for any $1 \le k \le n+1$, we have
$$\mathbf{E}\bigl\{\log S_k\,\log S_{n+1}\bigr\} = \psi(k)\,\psi(n+1) + \psi'(n+1),$$
and
$$\operatorname{Cov}\bigl(\log S_k,\,\log S_{n+1}\bigr) = \psi'(n+1).$$
Proof. 
We have
$$\mathbf{E}\bigl\{\log S_k\,\log S_{n+1}\bigr\} = \mathbf{E}\Bigl[\Bigl\{\log\frac{S_k}{S_{n+1}} + \log S_{n+1}\Bigr\}\log S_{n+1}\Bigr] = \mathbf{E}\bigl[\bigl\{\log U^*_{k,n} + \log S_{n+1}\bigr\}\log S_{n+1}\bigr].$$
Since $U^*_{k,n}$ and $S_{n+1}$ are independent, it follows that
$$\mathbf{E}\bigl\{\log S_k\,\log S_{n+1}\bigr\} = \mathbf{E}\bigl\{\log U^*_{k,n}\bigr\}\,\mathbf{E}\bigl\{\log S_{n+1}\bigr\} + \mathbf{E}\bigl[\bigl\{\log S_{n+1}\bigr\}^2\bigr].$$
Making use of (67) and (57), we see that $\mathbf{E}\log U^*_{k,n} = \psi(k) - \psi(n+1)$, $\mathbf{E}\log S_k = \psi(k)$, $\mathbf{E}\log S_{n+1} = \psi(n+1)$, and $\mathbf{E}\{\log S_{n+1}\}^2 = \psi(n+1)^2 + \psi'(n+1)$. By all this, we obtain
$$\mathbf{E}\bigl\{\log S_k\,\log S_{n+1}\bigr\} = \bigl\{\psi(k) - \psi(n+1)\bigr\}\psi(n+1) + \psi(n+1)^2 + \psi'(n+1) = \psi(k)\,\psi(n+1) + \psi'(n+1),$$
which is (69). Given (69), the proof of (70) follows from the relations $\mathbf{E}\log S_k = \psi(k)$ and $\mathbf{E}\log S_{n+1} = \psi(n+1)$. We note that, when $k = n+1$, (69) yields
$$\mathbf{E}\bigl\{\log S_{n+1}\bigr\}^2 = \psi(n+1)^2 + \psi'(n+1),$$
which is in agreement with (56). □
Proposition 3.
Let $0 \le \ell \le k$ be integers, and let $S_{k-\ell} \overset{d}{=} \Gamma(k-\ell)$, $S_\ell \overset{d}{=} \Gamma(\ell)$, and $S'_\ell \overset{d}{=} \Gamma(\ell)$ be independent Gamma-distributed random variables. Then, we have
$$s_{k,\ell} := \mathbf{E}\bigl[\log\bigl\{S_{k-\ell} + S_\ell\bigr\}\,\log\bigl\{S_{k-\ell} + S'_\ell\bigr\}\bigr] = \psi(k)^2 + \psi'(k+\ell) - \bigl\{\psi(k+\ell) - \psi(k)\bigr\}^2 + \sum_{m=1}^\infty \frac{(\ell)_m}{m\,(k+\ell)_m}\bigl\{\psi(k+\ell+m) - \psi(k+m)\bigr\}.$$
Proof. 
Set for convenience $j = k - \ell$. When $\ell = 0$, we have $S_\ell = S'_\ell = 0$, and, therefore, by (56), $s_{k,0} = \psi(k)^2 + \psi'(k)$, which is in agreement with (71). Likewise, when $\ell = k$, $S_{k-\ell} = 0$, and, hence, $s_{k,k} = \gamma_k^2 = \psi(k)^2$, which is also in agreement with (71). In fact, when $k = \ell$, (71) may be rewritten into
$$s_{k,k} = \psi(k)^2 + \psi'(2k) - \bigl\{\psi(2k) - \psi(k)\bigr\}^2 + \sum_{m=1}^\infty \frac{(k)_m}{m\,(2k)_m}\bigl\{\psi(2k+m) - \psi(k+m)\bigr\}$$
$$= \psi(k)^2 + \psi'(2k) - \bigl\{\psi(2k) - \psi(k)\bigr\}^2 + \sum_{m=1}^\infty \frac{(k)_m}{m\,(2k)_m}\bigl\{\psi(2k+m) - \psi(2k)\bigr\} - \sum_{m=1}^\infty \frac{(k)_m}{m\,(2k)_m}\bigl\{\psi(k+m) - \psi(k)\bigr\} + \bigl\{\psi(2k) - \psi(k)\bigr\}\sum_{m=1}^\infty \frac{(k)_m}{m\,(2k)_m}$$
$$= \psi(k)^2 + \psi'(2k) - \bigl\{\psi(2k) - \psi(k)\bigr\}^2 + \bigl\{\psi'(k) - \psi'(2k)\bigr\} - \psi'(k) + \bigl\{\psi(2k) - \psi(k)\bigr\}^2 = \psi(k)^2,$$
where we have made use of (35), (36) and (37), taken with $b = k$ and $c = 2k$. Given that the values of $s_{k,0}$ and $s_{k,k}$ are in agreement with (71), we may limit ourselves to establishing this relation when $\ell$ and $j$ fulfill $\ell \ge 1$ and $j \ge 1$. In the remainder of our proof, we therefore assume that this condition is fulfilled.
We make use of the notation and conclusions of Lemma 1 to write
$$\Sigma := \mathbf{E}\bigl[\log\bigl\{S_j + S_\ell\bigr\}\,\log\bigl\{S_j + S'_\ell\bigr\}\bigr] = \mathbf{E}\bigl[\log\bigl\{R_{j,\ell,\ell'}\,T_{j,\ell,\ell'}\bigr\}\,\log\bigl\{R'_{j,\ell,\ell'}\,T_{j,\ell,\ell'}\bigr\}\bigr] = \mathbf{E}\bigl[\bigl\{\log T_{j,\ell,\ell'}\bigr\}^2\bigr] + \mathbf{E}\bigl\{\log T_{j,\ell,\ell'}\bigr\}\Bigl[\mathbf{E}\bigl\{\log R_{j,\ell,\ell'}\bigr\} + \mathbf{E}\bigl\{\log R'_{j,\ell,\ell'}\bigr\}\Bigr] + \mathbf{E}\bigl[\log R_{j,\ell,\ell'}\,\log R'_{j,\ell,\ell'}\bigr],$$
where the random pair
$$R_{j,\ell,\ell'} := \frac{S_j + S_\ell}{S_j + S_\ell + S'_\ell} \overset{d}{=} \beta(j+\ell,\,\ell),$$
$$R'_{j,\ell,\ell'} := \frac{S_j + S'_\ell}{S_j + S_\ell + S'_\ell} \overset{d}{=} \beta(j+\ell,\,\ell),$$
is independent of
$$T_{j,\ell,\ell'} := S_j + S_\ell + S'_\ell \overset{d}{=} \Gamma(j+2\ell).$$
Now, since $T_{j,\ell,\ell'} \overset{d}{=} S_{j+2\ell} \overset{d}{=} \Gamma(j+2\ell) = \Gamma(k+\ell)$, we infer from (53) and (56) that
$$\mathbf{E}\bigl(\log T_{j,\ell,\ell'}\bigr) = \psi(k+\ell),$$
and
$$\mathbf{E}\bigl(\bigl\{\log T_{j,\ell,\ell'}\bigr\}^2\bigr) = \psi(k+\ell)^2 + \psi'(k+\ell).$$
Next, set
$$W_1 := -\log R_{j,\ell,\ell'} = -\log\Bigl\{\frac{S_j + S_\ell}{S_j + S_\ell + S'_\ell}\Bigr\},$$
and
$$W_2 := -\log R'_{j,\ell,\ell'} = -\log\Bigl\{\frac{S_j + S'_\ell}{S_j + S_\ell + S'_\ell}\Bigr\}.$$
We infer from (74) and (75), in combination with (67) and (68), taken with the formal change of $(m,\ell)$ into $(j+\ell,\ell)$, that
$$\mathbf{E}(W_1) = \mathbf{E}(W_2) = \mathbf{E}\bigl(-\log R_{j,\ell,\ell'}\bigr) = \mathbf{E}\bigl(-\log R'_{j,\ell,\ell'}\bigr) = \psi(j+2\ell) - \psi(j+\ell) = \psi(k+\ell) - \psi(k).$$
By all this, we infer from (74), (77) and (78) that
$$\Sigma = \psi(k+\ell)^2 + \psi'(k+\ell) - 2\,\psi(k+\ell)\bigl\{\psi(k+\ell) - \psi(k)\bigr\} + \mathbf{E}(W_1 W_2) = \psi(k)^2 - \bigl\{\psi(k+\ell) - \psi(k)\bigr\}^2 + \psi'(k+\ell) + \mathbf{E}(W_1 W_2).$$
Next, we observe that the joint distribution of $(W_1, W_2)$ coincides with that of $(W_1^*, W_2^*)$, where
$$W_1^* = -\log U_{j+\ell,\,j+2\ell-1}, \qquad W_2^* = -\log\bigl(1 - U_{\ell,\,j+2\ell-1}\bigr).$$
We then observe that
$$U := \frac{U_{\ell,\,j+2\ell-1}}{U_{j+\ell,\,j+2\ell-1}} \overset{d}{=} U_{\ell,\,j+\ell-1} \overset{d}{=} \beta(\ell,\,j),$$
is independent of $V := U_{j+\ell,\,j+2\ell-1} \overset{d}{=} \beta(j+\ell,\,\ell)$. Given this fact, we make use of the Taylor expansion
$$-\log(1 - uv) = \sum_{m=1}^\infty \frac{u^m v^m}{m} \quad \text{for } |uv| < 1,$$
to obtain that
$$\mathbf{E}(W_1 W_2) = \mathbf{E}(W_1^* W_2^*) = \mathbf{E}\bigl[(-\log V)\bigl(-\log(1 - UV)\bigr)\bigr] = \int_0^1\!\!\int_0^1 (-\log v)\bigl(-\log(1 - uv)\bigr)\,\frac{u^{\ell-1}(1-u)^{j-1}}{\beta(\ell,j)}\,du\;\frac{v^{j+\ell-1}(1-v)^{\ell-1}}{\beta(j+\ell,\ell)}\,dv = \sum_{m=1}^\infty \frac{1}{m}\int_0^1 \frac{u^{\ell+m-1}(1-u)^{j-1}}{\beta(\ell,j)}\,du \times \int_0^1 (-\log v)\,\frac{v^{j+\ell+m-1}(1-v)^{\ell-1}}{\beta(j+\ell,\ell)}\,dv.$$
Recall Euler's formula $\beta(r,s) = \Gamma(r)\Gamma(s)/\Gamma(r+s)$ and the definition of the Pochhammer symbol $(a)_m = \Gamma(a+m)/\Gamma(a)$ when $a > 0$ and, in general,
$$(a)_m = a(a+1)\cdots(a+m-1) \quad \text{for } m = 1, 2, \dots, \qquad (a)_0 = 1.$$
Recalling that $j + \ell = k$, we infer from (81) that
$$\mathbf{E}(W_1 W_2) = \sum_{m=1}^\infty \frac{\beta(\ell+m,\,j)\,\beta(j+\ell+m,\,\ell)}{m\,\beta(\ell,j)\,\beta(j+\ell,\ell)}\bigl\{\psi(k+\ell+m) - \psi(k+m)\bigr\} = \sum_{m=1}^\infty \frac{\Gamma(\ell+m)\,\Gamma(k+\ell)}{m\,\Gamma(\ell)\,\Gamma(k+\ell+m)}\bigl\{\psi(k+\ell+m) - \psi(k+m)\bigr\} = \sum_{m=1}^\infty \frac{(\ell)_m}{m\,(k+\ell)_m}\bigl\{\psi(k+\ell+m) - \psi(k+m)\bigr\},$$
which, when combined with (80), readily yields (71). □
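Proposition 3 lends itself to a numerical cross-check (our sketch, not from the paper; truncation point, seed, parameters, and tolerance are arbitrary choices): the closed form is compared with a Monte Carlo evaluation of the left-hand side, and the two special cases $s_{k,0}$ and $s_{k,k}$ are recovered:

```python
import numpy as np
from scipy.special import digamma, polygamma, gammaln

def s_kl(k, l, M=2000):
    """Right-hand side of Proposition 3, with the series truncated at M terms."""
    base = digamma(k) ** 2 + polygamma(1, k + l) - (digamma(k + l) - digamma(k)) ** 2
    if l == 0:
        return base
    m = np.arange(1, M)
    ratio = np.exp(gammaln(l + m) - gammaln(l) - gammaln(k + l + m) + gammaln(k + l)) / m
    return base + np.sum(ratio * (digamma(k + l + m) - digamma(k + m)))

# special cases: s_{k,0} = psi(k)^2 + psi'(k) and s_{k,k} = psi(k)^2
assert abs(s_kl(3, 0) - (digamma(3) ** 2 + polygamma(1, 3))) < 1e-12
assert abs(s_kl(3, 3) - digamma(3) ** 2) < 1e-8

# Monte Carlo evaluation of E[log(S_{k-l}+S_l) log(S_{k-l}+S'_l)]
rng = np.random.default_rng(3)
k, l, N = 3, 2, 1_000_000
shared = rng.gamma(k - l, size=N)                        # the common S_{k-l} component
mc = np.mean(np.log(shared + rng.gamma(l, size=N)) *
             np.log(shared + rng.gamma(l, size=N)))
```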
Let k 1 be fixed, and assume that X is uniformly distributed on ( 0 , 1 ) . Set T n : = T n ( k , 1 , 1 ) to be as in (5).
Proposition 4.
Under the assumptions above, we have
n 1 / 2 T n ( k ) log k + ψ ( k ) d N 0 , ψ ( k ) + 1 .
Proof. 
Denote by { ω n : n 1 } an i.i.d. sequence of exponentially distributed E ( 1 ) r.v.’s, with mean 1. For each n 0 , set
S n : = i = 1 n ω i = d Γ ( n ) ,
and set, in view of Lemma 1, for each 1 i k n + 2 ,
Δ i , n ( k ) * : = S i + k 1 S i 1 S n + 1 = d U k , n = d β ( k , n k + 1 ) .
We keep in mind that { Δ i , n ( k ) * : 0 i n k + 2 } = d { Δ i , n ( k ) : 0 i n k + 2 } and that { Δ i , n ( k ) * : 0 i n k + 2 } is independent of S n + 1 = d Γ ( n + 1 ) . Set further
T n * ( k ) : = i = 1 n k + 1 log 1 k S n + 1 Δ i , n ( k ) * = i = 1 n k + 1 log 1 k ( S i + k 1 S i 1 ) = i = 1 n k + 1 log S i + k 1 S i 1 + log k ,
and
T n * * ( k ) : = ( n k ) log n log S n + 1 .
Set, likewise,
T n * ( k ) : = i = 1 n k + 1 log n k Δ i , n ( k ) * = i = 1 n k + 1 log S i + k 1 S i 1 + log k log n log S n + 1 = T n * ( k ) + T n * * ( k ) .
Observe that T n * ( k ) = d T n ( k ) . Moreover, the r.v.’s T n * ( k ) and T n * * ( k ) are independent. Note further that, for each 0 i n k + 2 , S i + k 1 S i 1 = d Γ ( k ) and S n + 1 = d Γ ( n + 1 ) . By (53), it follows that, for each 0 i n k + 2 ,
\[
\mathbb{E}\,\log\big( S_{i+k-1} - S_{i-1} \big) = \psi(k) \quad \text{and} \quad \mathbb{E}\,\log S_{n+1} = \psi(n+1),
\]
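The first of these identities can be illustrated by a seeded simulation (sketch only; the integer-argument digamma is written via the harmonic series):

```python
import math
import random

random.seed(2025)

EULER_GAMMA = 0.5772156649015329
def psi(m):
    # digamma at a positive integer m
    return -EULER_GAMMA + sum(1.0 / j for j in range(1, m))

k = 4
trials = 100000
acc = 0.0
for _ in range(trials):
    g = sum(random.expovariate(1.0) for _ in range(k))  # Gamma(k) variate
    acc += math.log(g)

# E log Gamma(k) = psi(k)
assert abs(acc / trials - psi(k)) < 0.01
```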
whence
\[
\mathbb{E}\,T_n^{*}(k) = (n-k+1)\big\{ \psi(k) - \log k \big\}
\]
and
\[
\mathbb{E}\,T_n^{**}(k) = (n-k+1)\big\{ \log n - \psi(n+1) \big\}.
\]
We have, therefore,
\[
\mathbb{E}\big\{ T_n^{*}(k) + T_n^{**}(k) \big\} = \mathbb{E}\,T_n^{*}(k) + \mathbb{E}\,T_n^{**}(k) = (n-k+1)\big\{ \psi(k) - \log k + \log n - \psi(n+1) \big\}.
\]
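Since each of the $n-k+1$ summands contributes identically, this expectation is exact for every finite $n$; a seeded simulation sketch (illustrative parameter values, digamma via the harmonic series):

```python
import math
import random

random.seed(42)

EULER_GAMMA = 0.5772156649015329
def psi(m):
    # digamma at a positive integer m
    return -EULER_GAMMA + sum(1.0 / j for j in range(1, m))

n, k = 20, 3
trials = 100000
acc = 0.0
for _ in range(trials):
    omega = [random.expovariate(1.0) for _ in range(n + 1)]
    S = [0.0]
    for w in omega:
        S.append(S[-1] + w)
    # sum of log((n/k) * normalized k-spacings)
    acc += sum(math.log((n / k) * (S[i + k - 1] - S[i - 1]) / S[n + 1])
               for i in range(1, n - k + 2))

expected = (n - k + 1) * (psi(k) - math.log(k) + math.log(n) - psi(n + 1))
assert abs(acc / trials - expected) < 0.05
```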
Next, we note that the $\mathbb{R}^2$-valued r.v.’s $\{(\log(S_{i+k-1} - S_{i-1}),\, S_i - S_{i-1}) : i \ge 1\}$ form a stationary $k$-dependent sequence. Since, by (53) and (56), for all $i \ge 1$,
\[
\mathrm{Var}\big( \log(S_{i+k-1} - S_{i-1}) \big) = \psi'(k) \quad \text{and} \quad \mathrm{Var}\big( S_i - S_{i-1} \big) = 1,
\]
the partial sums of this sequence are asymptotically normal in R 2 . It follows readily that, as n ,
\[
\begin{aligned}
&n^{-1/2}\Big( T_n^{*}(k) - \mathbb{E}\,T_n^{*}(k),\; T_n^{**}(k) - \mathbb{E}\,T_n^{**}(k) \Big) \\
&\qquad = n^{-1/2}\Big( \sum_{i=1}^{n-k+1} \big\{ \log(S_{i+k-1} - S_{i-1}) - \psi(k) \big\},\; (n-k+1)\big\{ \psi(n+1) - \log S_{n+1} \big\} \Big) \xrightarrow{d} N\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix},\; \begin{pmatrix} \psi'(k) & 0 \\ 0 & 1 \end{pmatrix} \right).
\end{aligned}
\]
Here, we have made use of the fact that, as n ,
\[
\psi(n+1) - \log n = \frac{1 + o(1)}{2n},
\]
so that, in (88),
\[
n^{1/2}\big\{ \psi(n+1) - \log n \big\} \to 0.
\]
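The expansion used here, $\psi(n+1) - \log n = (1+o(1))/(2n)$, is easy to confirm numerically (illustrative sketch; `psi` below is the integer-argument digamma written via the harmonic series):

```python
import math

EULER_GAMMA = 0.5772156649015329
def psi(m):
    # digamma at a positive integer: psi(m) = -gamma + H_{m-1}
    return -EULER_GAMMA + sum(1.0 / j for j in range(1, m))

for n in (10, 100, 1000):
    ratio = 2 * n * (psi(n + 1) - math.log(n))
    # ratio tends to 1 as n grows (it equals 1 - 1/(6n) + O(1/n^2))
    assert abs(ratio - 1.0) < 1.0 / n
```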
Likewise, making use of (70), we see that
\[
\mathrm{Cov}\big( T_n^{*}(k),\, T_n^{**}(k) \big) = -\sum_{i=1}^{n-k+1} \mathrm{Cov}\big( \log(S_{i+k-1} - S_{i-1}),\, \log S_{n+1} \big) = -(n-k+1)\, \mathrm{Cov}\big( \log S_k,\, \log S_{n+1} \big) = -(n-k+1)\, \psi'(n+1) \to -1 \quad \text{as } n \to \infty.
\]
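The covariance identity borrowed from (70), namely $\mathrm{Cov}(\log S_k, \log S_{n+1}) = \psi'(n+1)$, can be illustrated by a seeded simulation (a sketch under the stated distributional assumptions, with the integer-argument trigamma written as $\pi^2/6$ minus a partial sum of $\sum 1/j^2$):

```python
import math
import random

random.seed(7)

def trigamma(m):
    # psi'(m) at a positive integer: pi^2/6 - sum_{j=1}^{m-1} 1/j^2
    return math.pi ** 2 / 6 - sum(1.0 / j ** 2 for j in range(1, m))

k, n = 3, 30
trials = 100000
sx = sy = sxy = 0.0
for _ in range(trials):
    omega = [random.expovariate(1.0) for _ in range(n + 1)]
    x = math.log(sum(omega[:k]))      # log S_k
    y = math.log(sum(omega))          # log S_{n+1}
    sx += x
    sy += y
    sxy += x * y
cov = sxy / trials - (sx / trials) * (sy / trials)

# Cov(log S_k, log S_{n+1}) = psi'(n + 1)
assert abs(cov - trigamma(n + 1)) < 5e-3
```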
Making use of (69), an easy argument shows that, in turn,
\[
n^{-1/2} \sum_{i=1}^{n-k+1} \big\{ \log\big( S_{i+k-1} - S_{i-1} \big) - \psi(k) + \log n - \log S_{n+1} \big\} \xrightarrow{d} N\big( 0,\; 1 + \psi'(k) \big).
\]
In view of (87), we readily obtain (83) from this last relation. □
Remark 3.
Let $G(u) := \inf\{x : F(x) \ge u\}$, for $0 < u < 1$, denote the quantile function of $X$. Assume that both $F(\cdot)$ and $G(\cdot)$ are continuous. In this case, we may define the quantile density function of $X$ by $g(u) = 1/f(G(u))$, which is continuous for $0 < u < 1$. We may then write, for $1 \le i \le n$,
\[
X_{i+k-1,n} - X_{i-1,n} = G\big( U_{i+k-1,n} \big) - G\big( U_{i-1,n} \big) = \frac{1 + o_P(1)}{f(G(i/n))} \big\{ U_{i+k-1,n} - U_{i-1,n} \big\}.
\]
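To make the role of the quantile density concrete, here is a small self-contained check for the standard exponential law (an illustrative choice, not one used in the paper): the quantile density $g(u) = 1/f(G(u))$ coincides with the derivative $G'(u)$.

```python
import math

# standard exponential: F(x) = 1 - exp(-x), f(x) = exp(-x), G(u) = -log(1 - u)
def G(u):
    return -math.log(1.0 - u)

def f(x):
    return math.exp(-x)

for u in (0.1, 0.5, 0.9):
    g = 1.0 / f(G(u))                         # quantile density g(u) = 1 / f(G(u))
    h = 1e-6
    g_num = (G(u + h) - G(u - h)) / (2 * h)   # numerical derivative G'(u)
    assert abs(g - g_num) < 1e-4 * g
```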
Having proved Theorem 1 for f ( x ) = 1 , the conclusion for a general f follows by routine arguments based on this observation, relating uniform spacings to general spacings. We omit the details.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

I am honored to dedicate this modest contribution to the memory of Pál Révész, who inspired my own research in probability and statistics for many decades. His celebrated book [23] was the starting point of a number of investigations related to empirical processes. His scientific influence in the field goes beyond description. I also thank the referees for their careful reading of the manuscript and insightful comments.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Darling, D.A. On a class of problems related to the random division of an interval. Ann. Math. Stat. 1953, 24, 239–253. [Google Scholar] [CrossRef]
  2. Blumenthal, S. Logarithms of sample spacings. SIAM J. Appl. Math. 1968, 16, 1184–1191. [Google Scholar] [CrossRef]
  3. Deheuvels, P.; Derzko, G. Exact laws for products of uniform spacings. Austrian J. Stat. 2003, 32, 29–47. [Google Scholar]
  4. Hájek, J.; Šidák, Z. Theory of Rank Tests; Academic Press: New York, NY, USA, 1967. [Google Scholar]
  5. Pyke, R. Spacings. J. R. Stat. Soc. B 1965, 27, 395–436. [Google Scholar] [CrossRef]
  6. Pyke, R. Spacings revisited. In Proceedings of the 6th Berkeley Symposium; University of California Press: Berkeley, CA, USA, 1972; Volume 1, pp. 417–427. [Google Scholar]
  7. Deheuvels, P. Spacings and applications. In Proceedings of the 4th Pannonian Symposium on Mathematical Statistics, Bad Tatzmannsdorf, Austria, 4–10 September 1983; pp. 1–30. [Google Scholar]
  8. Cressie, N. On the logarithms of high-order spacings. Biometrika 1976, 63, 343–355. [Google Scholar] [CrossRef]
  9. Cressie, N. Power results for tests based on high-order gaps. Biometrika 1978, 65, 214–218. [Google Scholar] [CrossRef]
  10. Shao, Y.; Hahn, M.G. Limit theorems for the logarithm of sample spacings. Stat. Probab. Lett. 1995, 24, 121–132. [Google Scholar] [CrossRef]
  11. Deheuvels, P.; Derzko, G. Tests of fit based on products of spacings. In Probability, Statistics and Modelling in Public Health; Springer: Boston, MA, USA, 2005. [Google Scholar]
  12. del Pino, G. On the asymptotic distribution of some goodness of fit tests based on spacings. Bull. Inst. Math. Stat. 1975, 4, 137. [Google Scholar]
  13. del Pino, G. On the asymptotic distribution of k-spacings with application to goodness-of-fit tests. Ann. Stat. 1979, 7, 1058–1075. [Google Scholar] [CrossRef]
  14. Czekała, F. Normalizing constants for a statistic based on logarithm of disjoint m-spacings. Appl. Math. 1996, 23, 405–416. [Google Scholar] [CrossRef]
  15. Spanier, J.; Oldham, K.B. An Atlas of Functions; Hemisphere Publ. Co.: Washington, DC, USA, 1987. [Google Scholar]
  16. Gradshteyn, I.S.; Ryzhik, I.M. Tables of Integrals, Series and Products; Academic Press: New York, NY, USA, 1965. [Google Scholar]
  17. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions; Dover: New York, NY, USA, 1970. [Google Scholar]
  18. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions, Volume 1, 2nd ed.; Wiley: New York, NY, USA, 1994. [Google Scholar]
  19. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions, Volume 2, 2nd ed.; Wiley: New York, NY, USA, 1995. [Google Scholar]
  20. David, H.A. Order Statistics, 2nd ed.; Wiley: New York, NY, USA, 1981. [Google Scholar]
  21. Sukhatme, P.V. Tests of significance for samples of the χ2 population with two degrees of freedom. Ann. Eugen. 1937, 8, 52–56. [Google Scholar] [CrossRef]
  22. Malmquist, S. On a property of order statistics from a rectangular distribution. Skand. Aktuarietidskr. 1950, 33, 214–222. [Google Scholar] [CrossRef]
  23. Csörgő, M.; Révész, P. Strong Approximations in Probability and Statistics; Akadémiai Kiadó, Budapest, and Academic Press: New York, NY, USA, 1981. [Google Scholar]