Article

Monotonic Random Variables According to a Direction

by José Juan Quesada-Molina 1,† and Manuel Úbeda-Flores 2,*,†
1 Department of Applied Mathematics, University of Granada, 18071 Granada, Spain
2 Department of Mathematics, University of Almería, 04120 Almería, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2024, 13(4), 275; https://doi.org/10.3390/axioms13040275
Submission received: 26 February 2024 / Revised: 6 April 2024 / Accepted: 17 April 2024 / Published: 20 April 2024
(This article belongs to the Special Issue Advances in Classical and Applied Mathematics)

Abstract:
In this paper, we introduce the concept of monotonicity according to a direction for a set of random variables. This concept extends well-known multivariate dependence notions, such as corner set monotonicity, and can be used to detect dependence in multivariate distributions not detected by other known concepts of dependence. Additionally, we establish relationships with other known multivariate dependence concepts, outline some of their salient properties, and provide several examples.
MSC:
60E15; 62H05

1. Introduction

There are numerous methodologies and approaches available for the exploration and analysis of the intricate relationships of dependence among random variables. As underscored by Jogdeo [1], this area stands as a cornerstone of extensive research within the expansive domains of probability theory and statistics. The investigation of dependence among variables is fundamental in understanding the underlying structure and behavior of complex systems, making it a focal point of study across various scientific disciplines.
When delving into the examination of a multivariate model, it becomes imperative to conduct a thorough analysis of the specific type of dependence structure it encapsulates. This meticulous scrutiny is essential for discerning the suitability of a particular model for a given dataset or practical application. By comprehensively understanding the nature of the dependence present, researchers can make informed decisions regarding model selection and parameter estimation, thereby enhancing the robustness and reliability of their analyses.
Within the vast landscape of studied dependence types, our attention is particularly drawn to the nuanced distinctions between positive and negative dependence. Positive dependence expresses a tendency for the variables to move in the same direction, exhibiting a mutual influence that often reflects synergistic relationships. Conversely, negative dependence denotes an inverse relationship, where the movement of one variable is accompanied by a corresponding opposite movement in another, indicative of regulatory or inhibitory interactions.
By elucidating the intricacies of positive and negative dependence, researchers gain valuable insights into the underlying dynamics of the systems under study. This deeper understanding not only enriches theoretical frameworks but also has practical implications in various fields, including finance, engineering, and epidemiology. Moreover, it underscores the importance of considering diverse dependence structures in statistical modeling, ensuring that analyses accurately capture the complexities of real-world phenomena.
Positive dependence is defined by any criterion capable of mathematically characterizing the inclination of components within an n-variate random vector to assume concordant values [2]. As emphasized by Barlow and Proschan [3], the concepts of (positive) dependence in the multivariate context are more extensive and intricate compared to the bivariate case.
The literature contains various extensions of the bivariate dependence concepts to the multivariate domain (we refer to [4,5,6,7] for more details). Our objective in this study is to extend certain established notions of multivariate positive and negative dependence. This includes the exploration of concepts such as orthant dependence and corner set monotonicity, investigating their connections with other dependence concepts and presenting several associated properties.
The paper is organized as follows. We begin with some preliminaries (Section 2) pertaining to the properties of multivariate dependence. This section serves to lay the foundation for our subsequent analyses by elucidating key concepts and frameworks, essential for understanding the complexities of dependence structures among random variables. Following the preliminaries, in Section 3, we delve into the concept of monotonic random variables with respect to a given direction. This extends the notion of corner set monotonicity and provides a more nuanced understanding of the directional dependence present in multivariate systems. We further explore several properties pertaining to these monotonic random variables and provide some examples. Finally, Section 4 is dedicated to presenting our conclusions drawn from the analyses and discussions shown in the preceding sections.

2. Preliminaries

In the sequel, by convention, we use "increasing" (respectively, "decreasing") and "nondecreasing" (respectively, "nonincreasing") interchangeably. In addition, a subset $A \subseteq \mathbb{R}^d$, with $d \geq 1$, is an increasing set if its indicator function $\chi_A$ is increasing.
Let $n \geq 2$ be a natural number. Let $(\Omega, \mathcal{F}, P)$ be a probability space, where $\Omega$ is a nonempty set, $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$, and $P$ is a probability measure on $\mathcal{F}$, and let $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ be an $n$-dimensional random vector from $\Omega$ to $\overline{\mathbb{R}}^n = [-\infty, \infty]^n$.
The orthant dependence according to a direction is defined as follows [8]: Let $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_n)$ be a vector in $\mathbb{R}^n$ such that $|\alpha_i| = 1$ for all $i = 1, 2, \ldots, n$. An $n$-dimensional random vector $\mathbf{X}$—or its joint distribution function—is said to be orthant positive (respectively, negative) dependent according to direction $\boldsymbol{\alpha}$—written PD($\boldsymbol{\alpha}$) (respectively, ND($\boldsymbol{\alpha}$))—if

$$P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > x_i)\right] \geq \prod_{i=1}^{n} P[\alpha_i X_i > x_i] \quad \text{for all } (x_1, x_2, \ldots, x_n) \in \overline{\mathbb{R}}^n \tag{1}$$

(respectively, with the reversed inequality in (1)).
Note that for some choices of direction $\boldsymbol{\alpha}$—e.g., for $\boldsymbol{\alpha} = \mathbf{1} = (1, 1, \ldots, 1)$ or $\boldsymbol{\alpha} = -\mathbf{1} = (-1, -1, \ldots, -1)$—we obtain different known (bivariate and multivariate) dependence concepts in the literature, such as positive quadrant dependence, positive upper orthant dependence, etc. (we refer to [2,7,9,10,11,12] for more details).
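To make inequality (1) concrete, the following sketch—ours, purely illustrative, assuming NumPy—estimates both sides of (1) by Monte Carlo for a bivariate normal with negative correlation, where the direction $\boldsymbol{\alpha} = (1, -1)$ is a natural candidate for PD($\boldsymbol{\alpha}$); the helper `check_pd_alpha` is hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bivariate normal with negative correlation; flipping the sign of the
# second coordinate via alpha = (1, -1) yields positive dependence.
n_samples = 200_000
cov = np.array([[1.0, -0.6], [-0.6, 1.0]])
sample = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_samples)

alpha = np.array([1.0, -1.0])
z = alpha * sample  # componentwise alpha_i * X_i

def check_pd_alpha(z, x):
    """Estimate P[intersection(alpha_i X_i > x_i)] and prod P[alpha_i X_i > x_i]."""
    joint = np.mean(np.all(z > x, axis=1))
    product = np.prod(np.mean(z > x, axis=0))
    return joint, product

for x in ([0.0, 0.0], [0.5, -0.5], [-1.0, 1.0]):
    joint, product = check_pd_alpha(z, np.asarray(x))
    print(f"x={x}: joint={joint:.4f} >= product={product:.4f}? {joint >= product}")
```

In every case, the joint exceedance probability should dominate the product of the marginal ones, in agreement with (1).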
Let $\mathbf{X}$ be an $n$-dimensional random vector. The following two multivariate positive dependence notions—on corner set monotonicity—were introduced in [13], where the expression "nonincreasing in $\mathbf{x}$"—and similarly for nondecreasing—means that it is nonincreasing in each of the components of $\mathbf{x}$, and $\mathbf{X} \leq \mathbf{x}$ means $X_i \leq x_i$ for all $i = 1, 2, \ldots, n$:
  • $\mathbf{X}$ is left corner set decreasing, denoted by LCSD($\mathbf{X}$), if
    $$P[\mathbf{X} \leq \mathbf{x} \,|\, \mathbf{X} \leq \mathbf{x}'] \text{ is nonincreasing in } \mathbf{x}' \text{ for all } \mathbf{x}. \tag{2}$$
  • $\mathbf{X}$ is right corner set increasing, denoted by RCSI($\mathbf{X}$), if
    $$P[\mathbf{X} > \mathbf{x} \,|\, \mathbf{X} > \mathbf{x}'] \text{ is nondecreasing in } \mathbf{x}' \text{ for all } \mathbf{x}. \tag{3}$$
The corresponding negative dependence concepts LCSI($\mathbf{X}$) (left corner set increasing) and RCSD($\mathbf{X}$) (right corner set decreasing) are defined in a similar manner by exchanging "nondecreasing" and "nonincreasing" in (2) and (3), respectively.
Let $H$ be the joint distribution function of $\mathbf{X}$. We note that condition (2) can be written as

$$\frac{H(\mathbf{x} \wedge \mathbf{x}')}{H(\mathbf{x}')} \text{ is nonincreasing in } \mathbf{x}' \text{ for all } \mathbf{x},$$

where $\mathbf{x} \wedge \mathbf{x}' = (\min\{x_1, x_1'\}, \min\{x_2, x_2'\}, \ldots, \min\{x_n, x_n'\})$. Denoting by $\overline{H}$ the survival function of $H$, i.e., $\overline{H}(\mathbf{x}) = P[\mathbf{X} > \mathbf{x}]$, condition (3) can be written as

$$\frac{\overline{H}(\mathbf{x} \vee \mathbf{x}')}{\overline{H}(\mathbf{x}')} \text{ is nondecreasing in } \mathbf{x}' \text{ for all } \mathbf{x},$$

where $\mathbf{x} \vee \mathbf{x}' = (\max\{x_1, x_1'\}, \max\{x_2, x_2'\}, \ldots, \max\{x_n, x_n'\})$.
For properties of these notions and relationships with other multivariate dependence concepts, see, e.g., refs. [7,14].
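As a quick numerical illustration of condition (2), the sketch below—ours, assuming SciPy's `multivariate_normal.cdf`—evaluates the ratio $H(\mathbf{x} \wedge \mathbf{x}')/H(\mathbf{x}')$ for a bivariate normal with positive correlation (a distribution known to be MTP2, and hence LCSD) as $\mathbf{x}'$ increases componentwise; the printed ratios should be nonincreasing.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Bivariate normal with rho = 0.7: MTP2, hence LCSD, so the ratio
# H(x ^ x') / H(x') should be nonincreasing in x'.
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.7], [0.7, 1.0]])

def lcsd_ratio(x, x_prime):
    """P[X <= x | X <= x'] = H(min(x, x')) / H(x')."""
    return mvn.cdf(np.minimum(x, x_prime)) / mvn.cdf(x_prime)

x = np.array([0.3, -0.2])          # fixed corner x
for t in (-1.0, 0.0, 1.0, 2.0):    # x' moving up componentwise
    x_prime = np.array([t, t])
    print(f"x' = ({t:+.1f}, {t:+.1f}):  ratio = {lcsd_ratio(x, x_prime):.4f}")
```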

3. Monotonic Dependence According to a Direction

In this section, we undertake a comprehensive examination of the concepts of left corner set and right corner set dependence according to a direction, delineating their definitions within the framework of directional dependence for a set of random variables. Building upon the foundations laid out in Section 2, where the concepts of LCSD and RCSI were recalled, we extend these notions to incorporate directional considerations, thus offering a more nuanced understanding of dependence structures. It is worth noting that a similar analytical approach could be applied to explore negative dependence concepts, mirroring the methodology employed for positive dependence. Furthermore, we not only define these directional dependence concepts but also delve into some of their salient properties.

3.1. Definition

We begin with the key definition of this work, in which, for a direction $\boldsymbol{\alpha}$ and an $n$-dimensional random vector $\mathbf{X}$, $\boldsymbol{\alpha}\mathbf{X}$ denotes the vector $(\alpha_1 X_1, \alpha_2 X_2, \ldots, \alpha_n X_n)$.
Definition 1.
Let $\mathbf{X}$ be an $n$-dimensional random vector and $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_n) \in \mathbb{R}^n$ such that $|\alpha_i| = 1$ for all $i = 1, 2, \ldots, n$. The random vector $\mathbf{X}$—or its joint distribution function—is said to be increasing (respectively, decreasing) according to the direction $\boldsymbol{\alpha}$—denoted by I($\boldsymbol{\alpha}$) (respectively, D($\boldsymbol{\alpha}$))—if

$$P[\boldsymbol{\alpha}\mathbf{X} > \mathbf{x} \,|\, \boldsymbol{\alpha}\mathbf{X} > \mathbf{x}']$$

is nondecreasing (respectively, nonincreasing) in $\mathbf{x}'$ for all $\mathbf{x}$.
In the sequel, we focus on the I( α ) concept. Similar results can be obtained for the D( α ) concept, so we omit them.
Observe that the I($\boldsymbol{\alpha}$) concept generalizes the LCSD and RCSI concepts defined in Section 2; that is, I($-\mathbf{1}$) (respectively, I($\mathbf{1}$)) corresponds to LCSD (respectively, RCSI).
We wish to emphasize that, in general, the I($\boldsymbol{\alpha}$) concept signifies positive dependence. This means that large values of the variables $X_j$, for $j \in J$, are associated with small values of the variables $X_j$, for $j \in I \setminus J$, where $I = \{1, 2, \ldots, n\}$ and $J = \{i \in I : \alpha_i = 1\}$. Therefore, if a random vector $\mathbf{X}$ is I($\boldsymbol{\alpha}$), then

$$P\left[\bigcap_{j \in J} (X_j > x_j), \bigcap_{j \in I \setminus J} (X_j < x_j) \,\Big|\, \bigcap_{j \in J} (X_j > x_j'), \bigcap_{j \in I \setminus J} (X_j < x_j')\right]$$

is nondecreasing in each $x_j'$ for $j \in J$, and nonincreasing in each $x_j'$ for $j \in I \setminus J$, for all $\mathbf{x}$.
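Definition 1 can also be probed empirically. The sketch below—ours, illustrative only, assuming NumPy—estimates $P[\boldsymbol{\alpha}\mathbf{X} > \mathbf{x} \,|\, \boldsymbol{\alpha}\mathbf{X} > \mathbf{x}']$ from a large sample of a positively correlated bivariate normal (which is MTP2 and hence, by the results of the next subsection, I($\mathbf{1}$)) and scans $\mathbf{x}'$ upward; the estimates should be nondecreasing up to Monte Carlo noise. The helper `cond_prob` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Positively correlated bivariate normal: MTP2, hence I(1) by the
# results of Section 3.2; here alpha = (1, 1).
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
sample = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)
alpha = np.array([1.0, 1.0])
z = alpha * sample

def cond_prob(z, x, x_prime):
    """Empirical P[alpha*X > x | alpha*X > x']."""
    cond = np.all(z > x_prime, axis=1)
    return np.mean(np.all(z[cond] > x, axis=1)) if cond.any() else np.nan

x = np.array([0.5, 0.5])
for t in (-2.0, -1.0, 0.0, 0.5, 1.0):
    p = cond_prob(z, x, np.array([t, t]))
    print(f"x' = ({t:+.1f}, {t:+.1f}):  P[aX > x | aX > x'] ~ {p:.4f}")
```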

3.2. Relationships with Other Multivariate Dependence Concepts

In this subsection, our focus lies on exploring the connections between the I( α ) dependence notion and several established multivariate dependence concepts within the context of directional analysis. By examining these relationships, we aim to elucidate how the I( α ) concept aligns with or diverges from other well-known measures of dependence, thus providing a more comprehensive understanding of their interplay. Through this investigation, we seek to uncover potential insights into the nature of multivariate dependence and its implications in various analytical scenarios.
We begin our study by recalling the "increasing in sequence" dependence concept.
Definition 2.
([15]). Let $X_1, X_2, \ldots, X_n$ be $n$ random variables and $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_n) \in \mathbb{R}^n$ such that $|\alpha_i| = 1$ for all $i = 1, 2, \ldots, n$. The random variables $X_1, X_2, \ldots, X_n$ are said to be increasing in sequence according to direction $\boldsymbol{\alpha}$—denoted by IS($\boldsymbol{\alpha}$)—if, for any $x_i \in \mathbb{R}$,

$$P[\alpha_i X_i > x_i \,|\, \alpha_1 X_1 > x_1, \ldots, \alpha_{i-1} X_{i-1} > x_{i-1}]$$

is nondecreasing in $x_1, x_2, \ldots, x_{i-1} \in \mathbb{R}$ for all $i = 2, 3, \ldots, n$.
The relationship between I( α ) and IS( α ) dependence concepts is given in the following result.
Proposition 1.
If a random vector X is I(α), then it is IS(α).
Proof. 
Let $x_1, x_2, \ldots, x_n, x_1', x_2', \ldots, x_n' \in \mathbb{R}$ such that $x_i \leq x_i'$ for $1 \leq i \leq n$. Since $\mathbf{X}$ is I($\boldsymbol{\alpha}$), for any $i$, $2 \leq i \leq n$, we have

$$P\left[\alpha_i X_i > x_i \,\Big|\, \bigcap_{j=1}^{i-1} (\alpha_j X_j > x_j)\right] = P\left[\bigcap_{j=1}^{n} (\alpha_j X_j > t_j) \,\Big|\, \bigcap_{j=1}^{n} (\alpha_j X_j > s_j)\right] \leq P\left[\bigcap_{j=1}^{n} (\alpha_j X_j > t_j) \,\Big|\, \bigcap_{j=1}^{n} (\alpha_j X_j > s_j')\right] = P\left[\alpha_i X_i > x_i \,\Big|\, \bigcap_{j=1}^{i-1} (\alpha_j X_j > x_j')\right],$$

where

$$t_j = \begin{cases} x_i, & j = i, \\ -\infty, & j \neq i, \end{cases} \qquad s_j = \begin{cases} x_j, & j = 1, 2, \ldots, i-1, \\ -\infty, & j = i, \ldots, n, \end{cases} \qquad s_j' = \begin{cases} x_j', & j = 1, 2, \ldots, i-1, \\ -\infty, & j = i, \ldots, n; \end{cases}$$

whence $\mathbf{X}$ is IS($\boldsymbol{\alpha}$), which completes the proof. □
The converse of Proposition 1 does not hold in general: see, for instance, (Ref. [16], Exercise 5.33) for a counterexample in the bivariate case.
In [15], the authors establish a significant result demonstrating that the IS( α ) condition implies PD( α ). Building upon this crucial insight, we can readily derive the following result, thereby highlighting the logical consequence of this implication.
Corollary 1.
If a random vector X  is I(α), then it is PD(α).
The next definition involves the concept of multivariate totally positive of order two.
Definition 3.
([17]). Let $\mathbf{X}$ be an $n$-dimensional random vector with joint density function $f$, and $\boldsymbol{\alpha} \in \mathbb{R}^n$, with $|\alpha_i| = 1$ for all $i = 1, 2, \ldots, n$. Then, $\mathbf{X}$ is said to be multivariate totally positive of order two according to direction $\boldsymbol{\alpha}$—denoted by MTP2($\boldsymbol{\alpha}$)—if

$$f(\boldsymbol{\alpha}(\mathbf{x} \vee \mathbf{y}))\, f(\boldsymbol{\alpha}(\mathbf{x} \wedge \mathbf{y})) \geq f(\boldsymbol{\alpha}\mathbf{x})\, f(\boldsymbol{\alpha}\mathbf{y}) \tag{4}$$

for all $\mathbf{x}, \mathbf{y} \in \overline{\mathbb{R}}^n$.
The relationship between the notions I( α ) and MTP2 ( α ) is given in the following result.
Proposition 2.
If a random vector X  is MTP2 ( α ) , then it is I(α).
Proof. 
Let $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ be an $n$-dimensional random vector such that $\mathbf{X}$ is MTP2($\boldsymbol{\alpha}$). Given $x_i, x_i' \in \overline{\mathbb{R}}$, for $i = 1, 2, \ldots, n$, we consider three cases:
  • If $x_i > x_i'$ for all $i = 1, 2, \ldots, n$, we have that
    $$P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > x_i) \,\Big|\, \bigcap_{i=1}^{n} (\alpha_i X_i > x_i')\right] = \frac{P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > x_i)\right]}{P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > x_i')\right]}$$
    is nondecreasing in $x_1', x_2', \ldots, x_n'$.
  • If $x_i \leq x_i'$ for all $i = 1, 2, \ldots, n$, we have
    $$P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > x_i) \,\Big|\, \bigcap_{i=1}^{n} (\alpha_i X_i > x_i')\right] = 1,$$
    and hence it is nondecreasing in $x_1', x_2', \ldots, x_n'$.
  • Given $j \in \{1, 2, \ldots, n\}$, consider, without loss of generality, $x_i \leq x_i'$ for $1 \leq i \leq j$ and $x_i > x_i'$ for $j+1 \leq i \leq n$. Then, we have
    $$P[\boldsymbol{\alpha}\mathbf{X} > \mathbf{x} \,|\, \boldsymbol{\alpha}\mathbf{X} > \mathbf{x}'] = \frac{P\left[\bigcap_{i=1}^{n}(\alpha_i X_i > x_i), \bigcap_{i=1}^{n}(\alpha_i X_i > x_i')\right]}{P\left[\bigcap_{i=1}^{n}(\alpha_i X_i > x_i')\right]} = \frac{P\left[\bigcap_{i=1}^{j}(\alpha_i X_i > x_i'), \bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i)\right]}{P\left[\bigcap_{i=1}^{n}(\alpha_i X_i > x_i')\right]}. \tag{5}$$
    Since
    $$P\left[\bigcap_{i=1}^{n}(\alpha_i X_i > x_i'')\right] \leq P\left[\bigcap_{i=1}^{n}(\alpha_i X_i > x_i')\right]$$
    for all $x_i', x_i'' \in \overline{\mathbb{R}}$ such that $x_i' \leq x_i''$ for $1 \leq i \leq n$, we have that (5) is nondecreasing in $x_{j+1}', \ldots, x_n'$. In order to prove that (5) is also nondecreasing in $x_1', \ldots, x_j'$, considering $x_1'', \ldots, x_j'' \in \overline{\mathbb{R}}$ such that $x_i' \leq x_i''$ for $1 \leq i \leq j$, we need to verify
    $$P\left[\bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i) \,\Big|\, \bigcap_{i=1}^{n}(\alpha_i X_i > x_i')\right] \leq P\left[\bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i) \,\Big|\, \bigcap_{i=1}^{j}(\alpha_i X_i > x_i''), \bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i')\right]. \tag{6}$$
    For that, it suffices to show that the determinant
    $$D = \begin{vmatrix} P\left[\bigcap_{i=1}^{j}(\alpha_i X_i > x_i'), \bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i)\right] & P\left[\bigcap_{i=1}^{j}(\alpha_i X_i > x_i''), \bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i)\right] \\ P\left[\bigcap_{i=1}^{n}(\alpha_i X_i > x_i')\right] & P\left[\bigcap_{i=1}^{j}(\alpha_i X_i > x_i''), \bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i')\right] \end{vmatrix}$$
    is non-positive (note that, in this case, the quotient of the entries in the first column would be less than the quotient of the entries in the second column, yielding (6)). First, subtracting the second column from the first, we have
    $$D = \begin{vmatrix} P\left[\bigcap_{i=1}^{j}(x_i' < \alpha_i X_i \leq x_i''), \bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i)\right] & P\left[\bigcap_{i=1}^{j}(\alpha_i X_i > x_i''), \bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i)\right] \\ P\left[\bigcap_{i=1}^{j}(x_i' < \alpha_i X_i \leq x_i''), \bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i')\right] & P\left[\bigcap_{i=1}^{j}(\alpha_i X_i > x_i''), \bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i')\right] \end{vmatrix},$$
    and now subtracting the first row from the second, we obtain
    $$D = \begin{vmatrix} P\left[\bigcap_{i=1}^{j}(x_i' < \alpha_i X_i \leq x_i''), \bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i)\right] & P\left[\bigcap_{i=1}^{j}(\alpha_i X_i > x_i''), \bigcap_{i=j+1}^{n}(\alpha_i X_i > x_i)\right] \\ P\left[\bigcap_{i=1}^{j}(x_i' < \alpha_i X_i \leq x_i''), \bigcap_{i=j+1}^{n}(x_i' < \alpha_i X_i \leq x_i)\right] & P\left[\bigcap_{i=1}^{j}(\alpha_i X_i > x_i''), \bigcap_{i=j+1}^{n}(x_i' < \alpha_i X_i \leq x_i)\right] \end{vmatrix}. \tag{7}$$
    Since $\mathbf{X}$ is MTP2($\boldsymbol{\alpha}$), from Ref. [17] (Propositions 2 and 4), we have
    $$h(\mathbf{y})\, h(\mathbf{y}') \geq h(\mathbf{y}^j)\, h(\mathbf{y}'^j) \tag{8}$$
    for any pair of vectors $\mathbf{y}, \mathbf{y}' \in \overline{\mathbb{R}}^n$ such that $y_i \leq y_i'$ for all $i = 1, 2, \ldots, n$, and for any $1 \leq j \leq n-1$, where $\mathbf{y}^j = (y_1, \ldots, y_j, y_{j+1}', \ldots, y_n')$ and $\mathbf{y}'^j = (y_1', \ldots, y_j', y_{j+1}, \ldots, y_n)$, and $h$ is the joint density function of the random vector $(\alpha_1 X_1, \alpha_2 X_2, \ldots, \alpha_n X_n)$. By integrating both sides of (8) in $\mathbf{y}, \mathbf{y}'$, with $x_i' < y_i \leq x_i'' < y_i'$ for $i = 1, 2, \ldots, j$ and $x_i' < y_i \leq x_i < y_i'$ for $i = j+1, \ldots, n$, we obtain
    $$\int_{x_1'}^{x_1''}\!\!\cdots\!\int_{x_j'}^{x_j''}\!\int_{x_{j+1}'}^{x_{j+1}}\!\!\cdots\!\int_{x_n'}^{x_n}\!\int_{x_1''}^{+\infty}\!\!\cdots\!\int_{x_j''}^{+\infty}\!\int_{x_{j+1}}^{+\infty}\!\!\cdots\!\int_{x_n}^{+\infty} h(\mathbf{y})\, h(\mathbf{y}')\, d\mathbf{y}'\, d\mathbf{y} \geq \int_{x_1'}^{x_1''}\!\!\cdots\!\int_{x_j'}^{x_j''}\!\int_{x_{j+1}'}^{x_{j+1}}\!\!\cdots\!\int_{x_n'}^{x_n}\!\int_{x_1''}^{+\infty}\!\!\cdots\!\int_{x_j''}^{+\infty}\!\int_{x_{j+1}}^{+\infty}\!\!\cdots\!\int_{x_n}^{+\infty} h(\mathbf{y}^j)\, h(\mathbf{y}'^j)\, d\mathbf{y}'\, d\mathbf{y}.$$
    It easily follows that the determinant $D$ in (7) is non-positive.
In the three cases, we obtain that X is I( α ), which completes the proof. □
In order to conclude this subsection, we summarize the relationships among the different dependence concepts outlined above in the following scheme:
$$\mathrm{MTP2}(\boldsymbol{\alpha}) \Longrightarrow \mathrm{I}(\boldsymbol{\alpha}) \Longrightarrow \mathrm{IS}(\boldsymbol{\alpha}) \Longrightarrow \mathrm{PD}(\boldsymbol{\alpha}).$$

3.3. Properties

The subsequent results encapsulate essential properties of the I($\boldsymbol{\alpha}$) families. These properties span a diverse range of scenarios, encompassing independent random variables, subsets of I($\boldsymbol{\alpha}$) random variables, and the concatenation of I($\boldsymbol{\alpha}$) random vectors. Furthermore, these results touch upon topics such as weak convergence, thereby providing a comprehensive framework for analyzing and understanding the dynamics of multivariate dependence within the realm of I($\boldsymbol{\alpha}$) families.
Proposition 3.
Every set of independent random variables is I($\boldsymbol{\alpha}$) for any direction $\boldsymbol{\alpha} \in \mathbb{R}^n$.
Proof. 
If the random variables $X_1, X_2, \ldots, X_n$ are independent, then for any $\boldsymbol{\alpha} \in \mathbb{R}^n$ and for all $\mathbf{x}, \mathbf{x}' \in \overline{\mathbb{R}}^n$, we have

$$P[\boldsymbol{\alpha}\mathbf{X} > \mathbf{x} \,|\, \boldsymbol{\alpha}\mathbf{X} > \mathbf{x}'] = \prod_{i=1}^{n} P[\alpha_i X_i > x_i \,|\, \alpha_i X_i > x_i'].$$

Given $i \in \{1, 2, \ldots, n\}$, consider the probability $P[\alpha_i X_i > x_i \,|\, \alpha_i X_i > x_i']$. We study two cases:
  • If $x_i \leq x_i'$, we have
    $$P[\alpha_i X_i > x_i \,|\, \alpha_i X_i > x_i'] = 1.$$
  • If $x_i > x_i'$, we have
    $$P[\alpha_i X_i > x_i \,|\, \alpha_i X_i > x_i'] = \frac{P[\alpha_i X_i > x_i]}{P[\alpha_i X_i > x_i']}.$$
    We consider two subcases:
    (a) If $x_i' \leq x_i'' \leq x_i$, then we have
    $$P[\alpha_i X_i > x_i'] \geq P[\alpha_i X_i > x_i''],$$
    and therefore
    $$P[\alpha_i X_i > x_i \,|\, \alpha_i X_i > x_i'] \leq P[\alpha_i X_i > x_i \,|\, \alpha_i X_i > x_i''].$$
    (b) If $x_i' \leq x_i \leq x_i''$, then we have
    $$P[\alpha_i X_i > x_i \,|\, \alpha_i X_i > x_i'] \leq 1 = P[\alpha_i X_i > x_i \,|\, \alpha_i X_i > x_i''].$$
In any case, we obtain that the probability $P[\alpha_i X_i > x_i \,|\, \alpha_i X_i > x_i']$ is nondecreasing in $x_i'$ for any $x_i \in \overline{\mathbb{R}}$ and for all $i = 1, 2, \ldots, n$, whence the result follows. □
We now turn our attention to subsets of I($\boldsymbol{\alpha}$) random variables.
Proposition 4.
Every subset of I($\boldsymbol{\alpha}$) random variables is I($\boldsymbol{\alpha}'$), where $\boldsymbol{\alpha}'$ is the vector obtained by excluding from $\boldsymbol{\alpha}$ the components associated with the random variables not included in the subset.
Proof. 
Assume that $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ is I($\boldsymbol{\alpha}$), and let $\mathbf{X}_k = (X_{i_1}, X_{i_2}, \ldots, X_{i_k})$ be a subvector of $\mathbf{X}$. Let $I = \{1, 2, \ldots, n\}$. For any $x_{i_1}, x_{i_2}, \ldots, x_{i_k}, x_{i_1}', x_{i_2}', \ldots, x_{i_k}' \in \overline{\mathbb{R}}$, and by considering $x_i = x_i' = -\infty$ for every $i \in I \setminus \{i_1, i_2, \ldots, i_k\}$, we have

$$P\left[\bigcap_{j=1}^{k} (\alpha_{i_j} X_{i_j} > x_{i_j}) \,\Big|\, \bigcap_{j=1}^{k} (\alpha_{i_j} X_{i_j} > x_{i_j}')\right] = P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > x_i) \,\Big|\, \bigcap_{i=1}^{n} (\alpha_i X_i > x_i')\right].$$

Thus, given $x_{i_j}'' \in \overline{\mathbb{R}}$, $1 \leq j \leq k$, such that $x_{i_j}' \leq x_{i_j}''$, and taking $x_i'' = -\infty$ for every $i \in I \setminus \{i_1, i_2, \ldots, i_k\}$, we obtain

$$P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > x_i) \,\Big|\, \bigcap_{i=1}^{n} (\alpha_i X_i > x_i'')\right] = P\left[\bigcap_{j=1}^{k} (\alpha_{i_j} X_{i_j} > x_{i_j}) \,\Big|\, \bigcap_{j=1}^{k} (\alpha_{i_j} X_{i_j} > x_{i_j}'')\right].$$

Since $\mathbf{X}$ is I($\boldsymbol{\alpha}$), we conclude that $\mathbf{X}_k$ is I($(\alpha_{i_1}, \alpha_{i_2}, \ldots, \alpha_{i_k})$), completing the proof. □
We now show that the I($\boldsymbol{\alpha}$) property is retained when strictly increasing functions are applied to the components of an I($\boldsymbol{\alpha}$) random vector.
Proposition 5.
If the random vector $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ is I($\boldsymbol{\alpha}$), and $g_1, g_2, \ldots, g_n$ are $n$ real-valued and strictly increasing functions, then the random vector $(g_1(X_1), g_2(X_2), \ldots, g_n(X_n))$ is I($\boldsymbol{\alpha}$).
Proof. 
Let $\mathbf{y}, \mathbf{y}', \mathbf{y}'' \in \overline{\mathbb{R}}^n$ such that $y_i' \leq y_i''$ for all $i = 1, 2, \ldots, n$. Since $X_1, X_2, \ldots, X_n$ are I($\boldsymbol{\alpha}$) and $\alpha_i g_i^{-1}(\alpha_i y_i') \leq \alpha_i g_i^{-1}(\alpha_i y_i'')$ for every $i = 1, 2, \ldots, n$, we have

$$\begin{aligned} P\left[\bigcap_{i=1}^{n} (\alpha_i g_i(X_i) > y_i) \,\Big|\, \bigcap_{i=1}^{n} (\alpha_i g_i(X_i) > y_i')\right] &= P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > \alpha_i g_i^{-1}(\alpha_i y_i)) \,\Big|\, \bigcap_{i=1}^{n} (\alpha_i X_i > \alpha_i g_i^{-1}(\alpha_i y_i'))\right] \\ &\leq P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > \alpha_i g_i^{-1}(\alpha_i y_i)) \,\Big|\, \bigcap_{i=1}^{n} (\alpha_i X_i > \alpha_i g_i^{-1}(\alpha_i y_i''))\right] \\ &= P\left[\bigcap_{i=1}^{n} (\alpha_i g_i(X_i) > y_i) \,\Big|\, \bigcap_{i=1}^{n} (\alpha_i g_i(X_i) > y_i'')\right], \end{aligned}$$

i.e., $(g_1(X_1), g_2(X_2), \ldots, g_n(X_n))$ is I($\boldsymbol{\alpha}$), which completes the proof. □
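To see Proposition 5 in action, the following sketch—ours, assuming NumPy and reusing the empirical check from Section 3.1—applies the strictly increasing functions $g_1(t) = e^t$ and $g_2(t) = t^3$ to a sample from an I($\mathbf{1}$) Gaussian vector and repeats the monotonicity check on the transformed data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Strictly increasing transforms g1(t) = exp(t), g2(t) = t**3 applied to
# an I(1) Gaussian sample; by Proposition 5, I(1) should be preserved.
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
sample = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)
transformed = np.column_stack([np.exp(sample[:, 0]), sample[:, 1] ** 3])

def cond_prob(z, x, x_prime):
    """Empirical P[Z > x | Z > x'] (componentwise)."""
    cond = np.all(z > x_prime, axis=1)
    return np.mean(np.all(z[cond] > x, axis=1)) if cond.any() else np.nan

x = np.array([1.5, 0.2])                 # threshold on the transformed scale
for t in (0.1, 0.5, 1.0):                # x' = (g1(t), g2(t)) increases with t
    p = cond_prob(transformed, x, np.array([np.exp(t), t ** 3]))
    print(f"t = {t:.1f}:  P ~ {p:.4f}")  # should be nondecreasing in t
```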
For the next result, we need some additional notation. Given $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_n) \in \mathbb{R}^n$ and $\boldsymbol{\beta} = (\beta_1, \beta_2, \ldots, \beta_m) \in \mathbb{R}^m$, $(\boldsymbol{\alpha}, \boldsymbol{\beta})$ will denote concatenation, that is, $(\boldsymbol{\alpha}, \boldsymbol{\beta}) = (\alpha_1, \ldots, \alpha_n, \beta_1, \ldots, \beta_m) \in \mathbb{R}^{n+m}$. Similar notation will be used in the case of random vectors.
Proposition 6.
If X = ( X 1 , X 2 , , X n ) is I(α), Y = ( Y 1 , Y 2 , , Y m ) is I(β), and X and Y are independent, then ( X , Y ) is I( α , β ).
Proof. 
Let $\mathbf{x}, \mathbf{x}', \mathbf{x}'' \in \overline{\mathbb{R}}^n$ and $\mathbf{y}, \mathbf{y}', \mathbf{y}'' \in \overline{\mathbb{R}}^m$ such that $\mathbf{x}' \leq \mathbf{x}''$ and $\mathbf{y}' \leq \mathbf{y}''$. Since $\mathbf{X}$ is I($\boldsymbol{\alpha}$), $\mathbf{Y}$ is I($\boldsymbol{\beta}$), and $\mathbf{X}$ and $\mathbf{Y}$ are independent, then we have

$$\begin{aligned} & P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > x_i), \bigcap_{j=1}^{m} (\beta_j Y_j > y_j) \,\Big|\, \bigcap_{i=1}^{n} (\alpha_i X_i > x_i'), \bigcap_{j=1}^{m} (\beta_j Y_j > y_j')\right] \\ &\quad = P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > x_i) \,\Big|\, \bigcap_{i=1}^{n} (\alpha_i X_i > x_i')\right] \cdot P\left[\bigcap_{j=1}^{m} (\beta_j Y_j > y_j) \,\Big|\, \bigcap_{j=1}^{m} (\beta_j Y_j > y_j')\right] \\ &\quad \leq P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > x_i) \,\Big|\, \bigcap_{i=1}^{n} (\alpha_i X_i > x_i'')\right] \cdot P\left[\bigcap_{j=1}^{m} (\beta_j Y_j > y_j) \,\Big|\, \bigcap_{j=1}^{m} (\beta_j Y_j > y_j'')\right] \\ &\quad = P\left[\bigcap_{i=1}^{n} (\alpha_i X_i > x_i), \bigcap_{j=1}^{m} (\beta_j Y_j > y_j) \,\Big|\, \bigcap_{i=1}^{n} (\alpha_i X_i > x_i''), \bigcap_{j=1}^{m} (\beta_j Y_j > y_j'')\right], \end{aligned}$$

whence $(\mathbf{X}, \mathbf{Y})$ is I($(\boldsymbol{\alpha}, \boldsymbol{\beta})$). □
The following result pertains to a closure property of the I( α ) family of multivariate distributions and, similarly, of the D( α ) family.
Proposition 7.
The family of I(α) distribution functions is closed under weak convergence.
Proof. 
Let $\{\mathbf{X}_n\}_{n \in \mathbb{N}}$ be a sequence of $p$-dimensional random vectors such that $\mathbf{X}_n = (X_1^n, X_2^n, \ldots, X_p^n)$ is I($\boldsymbol{\alpha}$) for all $n \in \mathbb{N}$, and $\{\mathbf{X}_n\}_{n \in \mathbb{N}}$ converges weakly to $\mathbf{X}$. If $\mathbf{x}, \mathbf{x}', \mathbf{x}'' \in \overline{\mathbb{R}}^p$ are such that $x_i' \leq x_i''$ for all $i = 1, 2, \ldots, p$, then we have

$$\begin{aligned} P\left[\bigcap_{i=1}^{p} (\alpha_i X_i > x_i) \,\Big|\, \bigcap_{i=1}^{p} (\alpha_i X_i > x_i')\right] &= \lim_{n \to +\infty} P\left[\bigcap_{i=1}^{p} (\alpha_i X_i^n > x_i) \,\Big|\, \bigcap_{i=1}^{p} (\alpha_i X_i^n > x_i')\right] \\ &\leq \lim_{n \to +\infty} P\left[\bigcap_{i=1}^{p} (\alpha_i X_i^n > x_i) \,\Big|\, \bigcap_{i=1}^{p} (\alpha_i X_i^n > x_i'')\right] \\ &= P\left[\bigcap_{i=1}^{p} (\alpha_i X_i > x_i) \,\Big|\, \bigcap_{i=1}^{p} (\alpha_i X_i > x_i'')\right]; \end{aligned}$$

therefore, $\mathbf{X}$ is I($\boldsymbol{\alpha}$), whence the result follows. □

3.4. Examples

In this section, we present examples that illustrate the I($\boldsymbol{\alpha}$) concept of dependence. Through the following three examples—involving both continuous and discrete cases—we aim to elucidate the behavior and implications of this type of dependence in various contexts, with bearing on statistical analysis, decision-making processes, and other pertinent areas of study.
Example 1.
Let $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ be a random vector with multivariate normal distribution $N(\boldsymbol{\mu}, \Sigma)$, where $\boldsymbol{\mu} = (\mu_1, \mu_2, \ldots, \mu_n)$ and $\Sigma$ is the covariance matrix. Let $(r_{ij}) = \Sigma^{-1}$, and suppose $r_{ij} < 0$ for all $(i, j)$ with $1 \leq i < j \leq n$—a similar study can be conducted by considering $r_{ij} > 0$. The probability density function of $\mathbf{X}$ is given by

$$f(x_1, x_2, \ldots, x_n) = (2\pi)^{-n/2} |\Sigma|^{-1/2} \exp\left[-\frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} r_{ij} (x_i - \mu_i)(x_j - \mu_j)\right].$$

Then, for every pair $(i, j)$ with $1 \leq i < j \leq n$, we can express the probability density function as follows:

$$f(x_1, x_2, \ldots, x_n) = f_1(\mathbf{x}^{(i)})\, f_2(\mathbf{x}^{(j)}) \exp(-r_{ij} x_i x_j),$$

where $\mathbf{x}^{(k)} = (x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n)$ for $k = i, j$ and appropriate functions $f_1, f_2$. Now, given $x_i, x_j, x_i', x_j'$ such that $x_i \leq x_i'$ and $x_j \leq x_j'$, and $(\alpha_i, \alpha_j)$ such that $|\alpha_k| = 1$ for $k = i, j$, we have

$$\begin{aligned} & f(x_1, \ldots, \alpha_i x_i', \ldots, \alpha_j x_j', \ldots, x_n)\, f(x_1, \ldots, \alpha_i x_i, \ldots, \alpha_j x_j, \ldots, x_n) \\ &\qquad - f(x_1, \ldots, \alpha_i x_i', \ldots, \alpha_j x_j, \ldots, x_n)\, f(x_1, \ldots, \alpha_i x_i, \ldots, \alpha_j x_j', \ldots, x_n) \\ &\quad = f_1(\mathbf{x}^{(i)})\, f_1(\mathbf{x}'^{(i)})\, f_2(\mathbf{x}^{(j)})\, f_2(\mathbf{x}'^{(j)}) \cdot \left[\exp\left(-r_{ij}\alpha_i\alpha_j(x_i' x_j' + x_i x_j)\right) - \exp\left(-r_{ij}\alpha_i\alpha_j(x_i' x_j + x_i x_j')\right)\right]. \end{aligned} \tag{9}$$

Since

$$\alpha_i\alpha_j(x_i' x_j' + x_i x_j - x_i' x_j - x_i x_j') = \alpha_i\alpha_j (x_i' - x_i)(x_j' - x_j) \geq 0$$

as long as $\alpha_i\alpha_j > 0$, and $-r_{ij} > 0$, (9) is non-negative if, and only if, $\alpha_i\alpha_j > 0$. Then we have that, for any $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_n) \in \mathbb{R}^n$ such that $|\alpha_i| = 1$ for all $i = 1, 2, \ldots, n$, the random vector $\mathbf{X}$ is MTP2($\boldsymbol{\alpha}$) if, and only if, $\alpha_i\alpha_j > 0$ for any choice of $(i, j)$—see Theorem 3 of [17]. From Proposition 2, we conclude that $\mathbf{X}$ is I($\boldsymbol{\alpha}$) for $\boldsymbol{\alpha} = \mathbf{1}$ and $\boldsymbol{\alpha} = -\mathbf{1}$.
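The precision-matrix condition in Example 1 is easy to test numerically. The sketch below—ours, illustrative only, assuming NumPy and SciPy—builds a 3-dimensional covariance matrix whose inverse has negative off-diagonal entries and spot-checks the MTP2 inequality (4) with $\boldsymbol{\alpha} = \mathbf{1}$ at random pairs of points.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)

# Precision matrix with negative off-diagonal entries r_ij < 0 (i != j),
# as in Example 1; the covariance matrix is its inverse.
precision = np.array([[ 2.0, -0.5, -0.3],
                      [-0.5,  2.0, -0.4],
                      [-0.3, -0.4,  2.0]])
f = multivariate_normal(mean=np.zeros(3), cov=np.linalg.inv(precision)).pdf

# Spot-check f(x v y) f(x ^ y) >= f(x) f(y) at random pairs of points.
ok = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = f(np.maximum(x, y)) * f(np.minimum(x, y))
    ok &= lhs >= f(x) * f(y) - 1e-12
print("MTP2 inequality held at all sampled pairs:", bool(ok))
```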
Example 2.
Let $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ be a random vector with Dirichlet distribution $\mathrm{Dir}(\boldsymbol{\gamma})$, with $\boldsymbol{\gamma} = (\gamma_1, \gamma_2, \ldots, \gamma_n; \gamma_{n+1})$, and such that $\gamma_i > 0$ for all $i = 1, 2, \ldots, n$ and $\gamma_{n+1} \geq 1$. The probability density function is given by

$$f(x_1, x_2, \ldots, x_n) = \frac{\Gamma\left(\sum_{i=1}^{n+1} \gamma_i\right)}{\prod_{i=1}^{n+1} \Gamma(\gamma_i)} \prod_{i=1}^{n} x_i^{\gamma_i - 1} \left(1 - \sum_{i=1}^{n} x_i\right)^{\gamma_{n+1} - 1},$$

with $x_i \geq 0$ and $\sum_{i=1}^{n} x_i \leq 1$. Given any selection of $(i, j)$, with $1 \leq i < j \leq n$, and any real numbers $x_i, x_j, x_i', x_j'$ such that $x_i \leq x_i'$ and $x_j \leq x_j'$, we have

$$\begin{aligned} & f(x_1, \ldots, x_i', \ldots, x_j', \ldots, x_n)\, f(x_1, \ldots, x_i, \ldots, x_j, \ldots, x_n) - f(x_1, \ldots, x_i', \ldots, x_j, \ldots, x_n)\, f(x_1, \ldots, x_i, \ldots, x_j', \ldots, x_n) \\ &\quad = \left(\frac{\Gamma\left(\sum_{i=1}^{n+1} \gamma_i\right)}{\prod_{i=1}^{n+1} \Gamma(\gamma_i)}\right)^{2} \prod_{\substack{k=1 \\ k \neq i,j}}^{n} x_k^{2(\gamma_k - 1)}\, x_i^{\gamma_i - 1} (x_i')^{\gamma_i - 1} x_j^{\gamma_j - 1} (x_j')^{\gamma_j - 1} \\ &\qquad \cdot \Bigg\{ \left[\Bigg(1 - \sum_{\substack{k=1 \\ k \neq i,j}}^{n} x_k - x_i' - x_j'\Bigg)\Bigg(1 - \sum_{\substack{k=1 \\ k \neq i,j}}^{n} x_k - x_i - x_j\Bigg)\right]^{\gamma_{n+1} - 1} - \left[\Bigg(1 - \sum_{\substack{k=1 \\ k \neq i,j}}^{n} x_k - x_i' - x_j\Bigg)\Bigg(1 - \sum_{\substack{k=1 \\ k \neq i,j}}^{n} x_k - x_i - x_j'\Bigg)\right]^{\gamma_{n+1} - 1} \Bigg\}. \end{aligned} \tag{10}$$

Since

$$\begin{aligned} & \Bigg(1 - \sum_{\substack{k=1 \\ k \neq i,j}}^{n} x_k - x_i' - x_j'\Bigg)\Bigg(1 - \sum_{\substack{k=1 \\ k \neq i,j}}^{n} x_k - x_i - x_j\Bigg) - \Bigg(1 - \sum_{\substack{k=1 \\ k \neq i,j}}^{n} x_k - x_i' - x_j\Bigg)\Bigg(1 - \sum_{\substack{k=1 \\ k \neq i,j}}^{n} x_k - x_i - x_j'\Bigg) \\ &\quad = x_i' x_j + x_i x_j' - x_i' x_j' - x_i x_j = -(x_i' - x_i)(x_j' - x_j) \leq 0 \end{aligned}$$

and $\gamma_{n+1} \geq 1$, then (10) is non-positive; therefore, we have that $\mathbf{X}$ is MRR2($\mathbf{1}$), that is, $\mathbf{X}$ is a multivariate reverse rule of order two—the corresponding negative analog to (4), obtained by reversing the inequality sign in (Ref. [17], Theorem 3)—according to the direction $(1, 1, \ldots, 1)$. Thus, by applying the corresponding negative dependence concept, in a manner similar to that provided in Proposition 2 for the positive dependence concept, we conclude that $\mathbf{X}$ is D($\mathbf{1}$).
Example 3.
Let $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ be a random vector with a multinomial distribution with parameters $N$ (number of trials) and $\mathbf{p} = (p_1, p_2, \ldots, p_n)$ (event probabilities) such that $p_i \geq 0$ for all $i = 1, 2, \ldots, n$ and $0 < \sum_{i=1}^{n} p_i < 1$. The joint probability mass function is given by

$$f(x_1, x_2, \ldots, x_n) = \frac{N!}{\prod_{i=1}^{n} x_i! \left(N - \sum_{i=1}^{n} x_i\right)!} \prod_{i=1}^{n} p_i^{x_i} \left(1 - \sum_{i=1}^{n} p_i\right)^{N - \sum_{i=1}^{n} x_i},$$

where $\sum_{i=1}^{n} x_i \leq N$. The multinomial distribution is the conditional distribution of independent Poisson random variables given their sum. As a consequence of (Ref. [5], Theorem 4.3) and (Ref. [17], Theorem 3), we have that $\mathbf{X}$ is MRR2($\mathbf{1}$). Thus, we conclude that $\mathbf{X}$ is D($\mathbf{1}$).
Remark 1.
We want to note that, by reasoning similarly to Example 3, any random vector with a multivariate hypergeometric distribution—the conditional distribution of independent binomial random variables given their sum—is also D($\mathbf{1}$).
Now we provide an illustrative example demonstrating the application of Proposition 7 regarding weak convergence.
Example 4.
Let $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ be a random vector with joint distribution function

$$H_\theta(\mathbf{x}) = \exp\left[-\left(\sum_{i=1}^{n} e^{-\theta x_i}\right)^{1/\theta}\right]$$

for all $\mathbf{x} \in \overline{\mathbb{R}}^n$, and $\theta \geq 1$. This parametric family of distribution functions is a multivariate generalization of the Type B bivariate extreme-value distributions (see [16,18]). By applying (Ref. [19], Theorem 2.11)—which involves log-convex functions [20]—we have that the random vector $\mathbf{X}$ is I($\mathbf{1}$). Consider the sequence of distribution functions $\{H_\theta\}_{\theta \in \mathbb{N}}$. When $\theta$ goes to $\infty$, we get $H_\infty(\mathbf{x}) = \min(F_1(x_1), F_2(x_2), \ldots, F_n(x_n))$, where $F_i$, with $i = 1, 2, \ldots, n$, are the one-dimensional marginals of $H_\theta$; therefore, as a consequence of Proposition 7, we obtain that $H_\infty$ is I($\mathbf{1}$) as well.
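The limit in Example 4 can be observed numerically. The following sketch—ours, assuming NumPy—evaluates $H_\theta$ at a fixed point for increasing $\theta$, using a log-sum-exp rewriting for numerical stability, and compares it with $\min_i F_i(x_i)$, where $F_i(t) = \exp(-e^{-t})$ are the common one-dimensional marginals.

```python
import numpy as np

def H(theta, x):
    """H_theta(x) = exp(-(sum_i exp(-theta * x_i))**(1/theta))."""
    x = np.asarray(x, dtype=float)
    m = np.max(-theta * x)                       # log-sum-exp shift
    lse = m + np.log(np.sum(np.exp(-theta * x - m)))
    return np.exp(-np.exp(lse / theta))

x = [0.3, 1.1, -0.2]
limit = np.exp(-np.exp(-min(x)))                 # min_i F_i(x_i), Gumbel marginals
for theta in (1, 2, 5, 20, 100):
    print(f"theta = {theta:>3}:  H = {H(theta, x):.6f}   (limit {limit:.6f})")
```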

4. Conclusions

In this paper, we have undertaken a significant endeavor by introducing a novel concept of monotonicity, characterized by its directionality, for a set of random variables. This extension of existing multivariate dependence concepts represents a substantial contribution to the field, offering a more nuanced understanding of dependence structures. Moreover, we have not only defined this directional monotonicity concept but also delved into its implications by establishing relationships with other well-known multivariate dependence concepts. These comparative analyses shed light on the interconnectedness and compatibility between different analytical approaches, enriching our understanding of multivariate dependence. The exploration of I ( α ) stochastic orders—closely resembling those studied in [21]—is ongoing and represents a fertile ground for future research.

Author Contributions

Conceptualization, J.J.Q.-M.; methodology, J.J.Q.-M. and M.Ú.-F.; validation, J.J.Q.-M. and M.Ú.-F.; investigation, J.J.Q.-M. and M.Ú.-F.; writing—original draft preparation, M.Ú.-F.; writing—review and editing, M.Ú.-F.; visualization, J.J.Q.-M. and M.Ú.-F.; supervision, J.J.Q.-M. and M.Ú.-F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Innovation (Spain) grant number PID2021-122657OB-I00.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors thank three anonymous reviewers for their comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jogdeo, K. Concepts of dependence. In Encyclopedia of Statistical Sciences; Kotz, S., Johnson, N.L., Eds.; Wiley: New York, NY, USA, 1982; Volume 1, pp. 324–334. [Google Scholar]
  2. Colangelo, A.; Scarsini, M.; Shaked, M. Some notions of multivariate positive dependence. Insur. Math. Econ. 2005, 37, 13–26. [Google Scholar] [CrossRef]
  3. Barlow, R.E.; Proschan, F. Statistical Theory of Reliability and Life Testing: Probability Models; To Begin With: Silver Spring, MD, USA, 1981. [Google Scholar]
  4. Block, H.W.; Ting, M.-L. Some concepts of multivariate dependence. Comm. Statist. A Theory Methods 1981, 10, 749–762. [Google Scholar] [CrossRef]
  5. Block, H.W.; Savits, T.H.; Shaked, M. Some concepts of negative dependence. Ann. Probab. 1982, 10, 765–772. [Google Scholar] [CrossRef]
  6. Colangelo, A.; Müller, A.; Scarsini, M. Positive dependence and weak convergence. J. Appl. Prob. 2006, 43, 48–59. [Google Scholar] [CrossRef]
  7. Joe, H. Multivariate Models and Dependence Concepts; Chapman & Hall: London, UK, 1997. [Google Scholar]
  8. Quesada-Molina, J.J.; Úbeda-Flores, M. Directional dependence of random vectors. Inf. Sci. 2012, 215, 67–74. [Google Scholar] [CrossRef]
  9. Kimeldorf, G.; Sampson, A.R. A framework for positive dependence. Ann. Inst. Statist. Math. 1989, 41, 31–45. [Google Scholar] [CrossRef]
  10. Lehmann, E.L. Some concepts of dependence. Ann. Math. Statist. 1966, 37, 1137–1153. [Google Scholar] [CrossRef]
  11. Shaked, M. A general theory of some positive dependence notions. J. Multivariate Anal. 1982, 12, 199–218. [Google Scholar] [CrossRef]
  12. Karlin, S.; Rinott, Y. Classes of orderings of measures and related correlation inequalities. I. Multivariate totally positive distributions. J. Multivariate Anal. 1980, 10, 467–498. [Google Scholar] [CrossRef]
  13. Harris, R. A multivariate definition for increasing hazard rate distribution functions. Ann. Math. Statist. 1970, 41, 713–717. [Google Scholar] [CrossRef]
  14. Popović, B.V.; Ristić, M.M.; Genç, A.İ. Dependence properties of multivariate distributions with proportional hazard rate marginals. Appl. Math. Model. 2020, 77, 182–198. [Google Scholar] [CrossRef]
  15. Quesada-Molina, J.J.; Úbeda-Flores, M. Monotonic in sequence random variables according to a direction. University of Almería: Almería, Spain, 2024; to be submitted. [Google Scholar]
  16. Nelsen, R.B. An Introduction to Copulas, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
  17. de Amo, E.; Quesada-Molina, J.J.; Úbeda-Flores, M. Total positivity and dependence of order statistics. AIMS Math. 2023, 8, 30717–30730. [Google Scholar] [CrossRef]
  18. Johnson, N.L.; Kotz, S. Distributions in Statistics: Continuous Multivariate Distributions; John Wiley & Sons: New York, NY, USA, 1972. [Google Scholar]
  19. Müller, A.; Scarsini, M. Archimedean copulae and positive dependence. J. Multivar. Anal. 2005, 93, 434–445. [Google Scholar] [CrossRef]
  20. Kingman, J.F.C. A convexity property of positive matrices. Quart. J. Math. Oxford 1961, 12, 283–284. [Google Scholar] [CrossRef]
  21. de Amo, E.; Rodríguez-Griñolo, M.R.; Úbeda-Flores, M. Directional dependence orders of random vectors. Mathematics 2024, 12, 419. [Google Scholar] [CrossRef]