Article

The Elasticity of a Random Variable as a Tool for Measuring and Assessing Risks

by
Ernesto-Jesús Veres-Ferrer
and
Jose M. Pavía
*
Quantitative Methods Area, Department of Applied Economics, Universitat de Valencia, 46022 Valencia, Spain
*
Author to whom correspondence should be addressed.
Risks 2022, 10(3), 68; https://doi.org/10.3390/risks10030068
Submission received: 15 February 2022 / Revised: 3 March 2022 / Accepted: 14 March 2022 / Published: 18 March 2022

Abstract

Elasticity is a very popular concept in economics and physics, recently exported and reinterpreted in the statistical field, where it has given form to the so-called elasticity function. This function has proved to be a very useful tool for quantifying and evaluating risks, with applications in disciplines as varied as public health and financial risk management. In this study, we consider the elasticity function in random terms, defining its probability distribution, which allows us to measure for each stochastic process the probability of finding elastic or inelastic situations (i.e., with elasticities greater or less than 1). This new tool, together with new results on the most notable points of the elasticity function covered in this research, offers a new approach to risk assessment, facilitating proactive risk management. The paper also includes other contributions of interest, such as new results that relate elasticity and inverse hazard functions, the derivation of the functional form of the cumulative distribution function of a probability model with constant elasticity and how the elasticities of functionally dependent variables are related. The interested reader can also find in the paper examples of how elasticity cumulative distribution functions are calculated, and an extensive list of probability models with their associated elasticity functions and distributions.

1. Introduction

Elasticity is a very popular concept in economics and physics, recently exported to the statistical field where it has been reinterpreted (Belzunce et al. 1995; Veres-Ferrer and Pavía 2014). The elasticity function is closely related to the reverse hazard function or reverse hazard (failure) rate (see, e.g., Chechile 2011) and generalises, for probability distributions with any support, the so-called proportional reverse hazard function (Veres-Ferrer and Pavía 2017a).
Prior to the introduction of the concept of elasticity in the statistical field, reverse hazard rates and proportional reverse hazard rates were almost exclusively used in survival analysis studies, mostly in reliability engineering for assessing waiting times, hidden failures, inactivity times or the study of the probability of the successful functioning of systems (see, e.g., Block et al. 1998; Chandra and Roy 2001; Xie et al. 2002; Poursaeed 2010; Desai et al. 2011), and for studying the ordering of random variables (see, e.g., Belzunce et al. 1998; Gupta and Nanda 2001; Finkelstein 2002; Shaked and Shanthikumar 2006), but rarely employed in other fields. The exceptions to this are seen in the research studies of Kalbfleisch and Lawless (1991) and Singh and Maddala (1976), who used them to analyse data on AIDS and HIV infection and income inequality, respectively.
The reinterpretation of elasticity in statistical terms and its better comprehension (Veres-Ferrer and Pavía 2012, 2014) has indirectly expanded the use of (proportional) reverse hazard rates and has directly opened up new areas for their application. In addition to fostering their classical usefulness in survival analysis and stochastic ordering (see, e.g., Navarro et al. 2014; Oliveira and Torrado 2015; Balakrishnan et al. 2017; Kundu and Ghosh 2017; Hazra et al. 2017; Arab and Oliveira 2019; Esna-Ashari et al. 2020), they are being applied to new areas, such as epidemiology (Veres-Ferrer and Pavía 2021), risk management in business (Pavía et al. 2012; Torrado and Oliveira 2013; Pavía and Veres-Ferrer 2016; Torrado and Navarro 2021), and trading and bidding in markets (Yang et al. 2021; Auster and Kellner 2022). Furthermore, these tools are proving their usefulness in the field of medicine (Popović et al. 2021), for the characterisation of stochastic distributions (Veres-Ferrer and Pavía 2017a, 2017b) and as a means for proposing new probability models (Szymkowiak 2019; Kotelo 2019; Anis and De 2020). Additional applications can be found, among others, in Yan and Luo (2018), Behdani et al. (2019), Nanda and Kayal (2019) or Zhang and Yan (2021).
The elasticity of a random variable accounts for the variation experienced by its cumulative distribution function as a function of variations of its values in the log-scale; that is, how the relative accumulation of probability behaves throughout the support of the variable. In other words, it measures the odds in favour of advancing further to higher values in a proportionate sense. This provides valuable information for proper risk management (e.g., economic or health) and enables the situation of the underlying process, a system or an action protocol to be assessed. For instance, an elasticity greater than 1 (or a slightly lower number) in a time stochastic process that measures failures, casualties or deaths is a clear indicator of increasing risk (see, e.g., Pavía and Veres-Ferrer 2016; Veres-Ferrer and Pavía 2021), which reports on the deterioration in the evolution of the variable. From this perspective, therefore, we can affirm that knowing in advance the probabilities of observing elasticities higher than one (or, in general, within a range of interest) of a given random process whose behaviour is synthesised in its probability distribution would constitute a very useful tool for proactive risk management.
To address the previous issue, this study considers elasticity as a function of the associated random variable which, in turn, means it can be interpreted as a random variable. This makes it possible to determine for each specific probability model what would be the associated probability of finding elastic or inelastic situations (i.e., with elasticities greater or less than 1). This has important implications. On the one hand, it would allow resources to be correctly dimensioned a priori to meet the materialisation of risks. On the other hand, if we can modify or influence the risk structure, this would make it possible to intelligently intervene in the process, keeping the risks controlled at levels considered appropriate after carrying out a cost-benefit analysis. Specifically, this could be done by evaluating a priori what the impact of an alternative design of systems or protocols would be, or what consequences the implementation of a set of restrictive or preventive measures would have (in terms of probability of materialisation of the risk).
The consideration of elasticity as a random variable is not the only relevant contribution of this paper. As is inferred from the previous discussion and has been revealed in other research studies (see, e.g., Pavía and Veres-Ferrer 2016; Veres-Ferrer and Pavía 2021), unit elasticities play a critical role in risk management, as they mark points of change. Hence, in this paper we also devote some space to studying how they relate to other singular points of the distribution. In addition to these two main contributions, this paper also includes some other (likely less relevant) results.
The rest of the paper is structured as follows. Section 2 defines elasticity in the probabilistic context and interprets it in its role as a function of a random variable. Section 3 presents some results that relate relevant points of elasticity, cumulative and density functions. Section 4 briefly delves into Veres-Ferrer and Pavía (2017a) by providing new results that relate elasticity and inverse risk functions. Section 5 derives the functional form that a probability distribution with constant elasticity must have. Section 6 establishes the relationship between the elasticities of a random variable and of some of its transforms. Section 7 presents an application of how knowing the probability distribution of the elasticity can be used to take decisions regarding risk management. Section 8 shows through examples how to obtain probability distributions of elasticities. On the one hand, it presents three examples of probability models for which the analytical calculation of the elasticity distribution function is possible, with each of the examples belonging to a type of random variable with different elasticity: elastic, inelastic or mixed. On the other hand, it develops, through the case of a standardised Normal, how the probability distribution of elasticity can be approximated when it cannot be obtained analytically. The number of examples is greatly expanded in Section 9. In this section the reader can find several summary tables with the distribution and elasticity functions corresponding to elasticities of almost 40 probability models. The associated calculations are provided in the Appendix A. To facilitate navigation through the large number of examples, each example is identified with a code, which is the same in the tables and in the Appendix A. The paper ends with a short Conclusion section.

2. The Elasticity of a Random Variable as a Random Variable

Let X be a continuous random variable, with real support in D = [a, b] (or D = ]a, b[, [a, b[, or ]a, b]). Denoting F(x) as its cumulative distribution function and f(x) as its density function, the elasticity of X is defined by Equation (1):

e(x) = \frac{d \ln F(x)}{d \ln x} = \frac{F'(x)}{F(x)} \, |x| = \frac{|x| \, f(x)}{F(x)} > 0 \qquad \forall x \in (a, b] \qquad (1)
where the absolute value is introduced by definition, so that the elasticity is non-negative. The elasticity function, e(x), in the same way as other functions (such as the characteristic function, the odds ratio or the (reverse) hazard function), uniquely characterises the probability distribution of the random variable (Veres-Ferrer and Pavía 2012). This makes it easier to reason about new probability models, by introducing hypotheses and knowledge about the random process from a different perspective (Veres-Ferrer and Pavía 2014).
The interpretation of e ( x ) mimics the interpretation of the classic concept of elasticity in economics. A value of null elasticity, e ( x ) = 0 , captures a situation of perfect inelasticity. In these points, infinitesimal changes do not provoke changes in the accumulation of probability. Values 0 < e ( x ) < 1 describe inelastic situations. In these points, infinitesimal increases of x cause relatively smaller increases in the accumulation of probabilities. Unitary elasticities, e ( x ) = 1 , occur in situations where infinitesimal changes in x produce changes of the same quantity in the accumulation of probabilities. Finally, elasticities greater than one, e ( x ) > 1 , happen in elastic situations. In these points, infinitesimal increases in x give rise to greater increases in the accumulation of probabilities. At the limit, when e ( x ) tends to infinity, a perfect elasticity situation is reached. In these points, an infinitesimal increase in x leads to a theoretically infinite increase in the accumulation of probability.
If instead of considering elasticity as a function of the values of the random variable, we consider it as a function of the random variable, it makes perfect sense to look for its probability distribution, which can be synthesised through its cumulative distribution function, F e ( y ) :
F_e(y) = P(e(X) \le y) = P\!\left( \frac{|X| \cdot f_X(X)}{F_X(X)} \le y \right) \qquad (2)
That is, when e(x) is monotonically increasing,

F_e(y) = P\!\left( X \le e^{-1}(y) \right) = F_X\!\left( e^{-1}(y) \right) \qquad (3)

(for a monotonically decreasing elasticity, the complementary probability 1 - F_X(e^{-1}(y)) is used instead, as in several of the examples below).
The value of x at which its probability distribution changes elasticity, going from elastic to inelastic or vice versa, usually provides particularly relevant information about the underlying random process behaviour. For example, it allows the identification of sensitive change points of the random variable, such as the moment of remission in the evolution of a pandemic (Veres-Ferrer and Pavía 2021) or the confirmation of the goodness of an alarm system (Pavía and Veres-Ferrer 2016).
Based on this criterion, it is possible to classify random variables with non-constant elasticity as totally inelastic when F_e(1) = 1, as preferably inelastic when F_e(1) > 1 - F_e(1), neutral when F_e(1) = 1/2, preferably elastic when F_e(1) < 1 - F_e(1), and totally elastic when F_e(1) = 0. Knowledge of this function, therefore, makes it possible to quantify the probability that a system is out of control or to compare the risks associated to different systems.
Analytical determination of the elasticity distribution function can be complex. In fact, on occasions, to determine the probability that the probabilistic model provides elastic or inelastic values, it is necessary to resort to approximate procedures or numerical analysis. The problem is significantly simplified when, in expression (3), it is possible to solve for X analytically. Section 8 develops some representative examples of situations where (3) can and cannot be solved analytically. Many further examples can be found in Appendix A.
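As an illustration of how this probability can be obtained numerically when no closed form is available, the following minimal sketch (our own, assuming scipy is available; the function names are not part of the original study) approximates F_e(y) by Monte Carlo for any frozen scipy.stats model:

```python
import numpy as np
from scipy import stats

def elasticity(dist, x):
    """Elasticity e(x) = |x| * f(x) / F(x) of a frozen scipy.stats distribution."""
    return np.abs(x) * dist.pdf(x) / dist.cdf(x)

def elasticity_cdf(dist, y, n=200_000, seed=0):
    """Monte Carlo approximation of F_e(y) = P(e(X) <= y)."""
    rng = np.random.default_rng(seed)
    x = dist.rvs(size=n, random_state=rng)
    return np.mean(elasticity(dist, x) <= y)

# The Exponential model is inelastic everywhere (see Appendix A.2.1),
# so the estimated F_e(1) should be (approximately) 1.
print(elasticity_cdf(stats.expon(), 1.0))
```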

3. On the Relationships between Elasticity, Density and Cumulative Distribution Functions at Points of Unit Elasticity

From what has been stated in the two previous sections it can be inferred that, once the elasticity function of a probability distribution is known, a notable set of values of a random variable X is the set of points with unit elasticity. This set, which can be empty or infinite, is composed of those points x_{e(x)=1} for which X verifies (4):

|x_{e(x)=1}| = \frac{F(x_{e(x)=1})}{f(x_{e(x)=1})} \qquad (4)
Obviously, a point of unitary elasticity is not necessarily a point where the elasticity reaches its maximum value or presents an inflection, although it can be so under certain hypotheses. It is interesting, therefore, to study the properties of the notable points of a probability distribution and their relations with other singular points of the distribution. Next, we show four propositions where we study some of these relationships. These statements are only true for positive domain random variables.
The point at which the probability density presents an inflection point does not determine an analogous behaviour for elasticity. However, this can be the case when the elasticity at that point is unitary.
Proposition 1.
Let X be a random variable of the positive domain. Let x_I > 0 be an inflection point of the density function f(x) of the variable, verifying \frac{d f(x)}{dx}\big|_{x_I} = f'(x_I) = 0. Let us suppose that at this point the elasticity is unitary, \frac{x_I \cdot f(x_I)}{F(x_I)} = 1; then x_I is also an inflection point for the elasticity function.
Proof. 
The first three derivatives of the elasticity function are given by the following expressions:

\frac{d\,e(x)}{dx}\bigg|_{x \ge 0} = \frac{f(x)F(x) + x f'(x)F(x) - x f^2(x)}{F^2(x)}

\frac{d^2 e(x)}{dx^2}\bigg|_{x \ge 0} = \frac{2 f'(x)F^2(x) + x f''(x)F^2(x) - 2 f^2(x)F(x) - 3 x f(x)f'(x)F(x) + 2 x f^3(x)}{F^3(x)}

\frac{d^3 e(x)}{dx^3}\bigg|_{x \ge 0} = \frac{3 f''(x)F^3(x) + x f'''(x)F^3(x) - 9 f(x)f'(x)F^2(x) - 4 x f(x)f''(x)F^2(x) - 3 x (f'(x))^2 F^2(x) + 12 x f^2(x)f'(x)F(x) + 6 f^3(x)F(x) - 6 x f^4(x)}{F^4(x)}

From which, using the hypothesis

\frac{d f(x)}{dx}\bigg|_{x_I} = f'(x_I) = 0, \quad \frac{d^2 f(x)}{dx^2}\bigg|_{x_I} = f''(x_I) = 0, \quad \frac{d^3 f(x)}{dx^3}\bigg|_{x_I} = f'''(x_I) \neq 0 \quad \text{and} \quad x_I = \frac{F(x_I)}{f(x_I)}

we obtain:

\frac{d^2 e(x)}{dx^2}\bigg|_{x_I;\, f'(x_I)=f''(x_I)=0} = \frac{-2 f^2(x_I)F(x_I) + 2 x_I f^3(x_I)}{F^3(x_I)}

\frac{d^2 e(x)}{dx^2}\bigg|_{x_I;\, f'(x_I)=f''(x_I)=0;\; x_I=\frac{F(x_I)}{f(x_I)}} = 0 \quad \text{and}

\frac{d^3 e(x)}{dx^3}\bigg|_{x_I;\, f'(x_I)=f''(x_I)=0} = \frac{x_I f'''(x_I)F^3(x_I) + 6 f^3(x_I)F(x_I) - 6 x_I f^4(x_I)}{F^4(x_I)}

\frac{d^3 e(x)}{dx^3}\bigg|_{x_I;\, f'(x_I)=f''(x_I)=0;\; x_I=\frac{F(x_I)}{f(x_I)}} = \frac{x_I f'''(x_I)}{F(x_I)} \neq 0
From this, it follows that x I is an inflection point in the elasticity function. □
It is also possible to relate the inflection points of the distribution function with the behaviour, maximum or minimum, of elasticity.
Proposition 2.
Let X be a random variable of the positive domain. Let x_I be an inflection point of the distribution function F(x) of the random variable, and let us suppose that, at this point, the elasticity is unitary, \frac{x_I \cdot f(x_I)}{F(x_I)} = 1; then x_I is an extreme (maximum or minimum) of the elasticity function.
Proof. 
Using the expressions for \frac{d e(x)}{dx} and \frac{d^2 e(x)}{dx^2} obtained in Proposition 1 and that, by hypothesis, \frac{d^2 F(x)}{dx^2}\big|_{x_I} = f'(x_I) = 0, \frac{d^3 F(x)}{dx^3}\big|_{x_I} = f''(x_I) \neq 0 and x_I = \frac{F(x_I)}{f(x_I)}, we have that:

\frac{d\,e(x)}{dx}\bigg|_{x_I;\, f'(x_I)=0} = \frac{f(x_I)F(x_I) - x_I f^2(x_I)}{F^2(x_I)} \quad \text{and}

\frac{d\,e(x)}{dx}\bigg|_{x_I;\, f'(x_I)=0;\; x_I=\frac{F(x_I)}{f(x_I)}} = 0

\frac{d^2 e(x)}{dx^2}\bigg|_{x_I;\, f'(x_I)=0;\, f''(x_I)\neq 0} = \frac{x_I f''(x_I)F^2(x_I) - 2 f^2(x_I)F(x_I) + 2 x_I f^3(x_I)}{F^3(x_I)}

\frac{d^2 e(x)}{dx^2}\bigg|_{x_I;\, f'(x_I)=0;\, f''(x_I)\neq 0;\; x_I=\frac{F(x_I)}{f(x_I)}} = \frac{x_I f''(x_I)}{F(x_I)} \neq 0

From this, it follows that when f''(x_I) > 0 the elasticity function presents a minimum and that when f''(x_I) < 0 the elasticity function presents a maximum. □
The point at which the probability density presents a maximum or a minimum does not determine a similar behaviour for elasticity. However, as in the outcome of Proposition 1, it does when the elasticity is unitary.
Proposition 3.
Let X be a random variable of the positive domain. Let x_M > 0 be a point at which the density function f(x) of the variable reaches a maximum (minimum). Let us suppose that the elasticity at this point is unitary, x_M = \frac{F(x_M)}{f(x_M)}; then the elasticity also reaches a maximum (minimum) at this point.
Proof. 
Considering the expressions for \frac{d e(x)}{dx} and \frac{d^2 e(x)}{dx^2} obtained in Proposition 1 and using that, by hypothesis, \frac{d f(x)}{dx}\big|_{x_M} = f'(x_M) = 0, \frac{d^2 f(x)}{dx^2}\big|_{x_M} = f''(x_M) < 0 \; (> 0), and x_M = \frac{F(x_M)}{f(x_M)}, we have that:

\frac{d\,e(x)}{dx}\bigg|_{x_M;\, f'(x_M)=0} = \frac{f(x_M)F(x_M) - x_M f^2(x_M)}{F^2(x_M)} \quad \text{and}

\frac{d\,e(x)}{dx}\bigg|_{x_M;\, f'(x_M)=0;\; x_M=\frac{F(x_M)}{f(x_M)}} = 0

\frac{d^2 e(x)}{dx^2}\bigg|_{x_M;\, f'(x_M)=0} = \frac{x_M f''(x_M)F^2(x_M) - 2 f^2(x_M)F(x_M) + 2 x_M f^3(x_M)}{F^3(x_M)}

\frac{d^2 e(x)}{dx^2}\bigg|_{x_M;\, f'(x_M)=0;\; x_M=\frac{F(x_M)}{f(x_M)}} = \frac{x_M f''(x_M)}{F(x_M)} < 0 \; (> 0) \quad \text{since} \quad f''(x_M) < 0 \; (> 0)
From this, it follows that x M is the maximum (minimum) in the elasticity function. □
Corollary 1.
Under the circumstances of the previous Proposition 3, the random variable turns out to be inelastic (elastic) ∀x.
Proof. 
If the elasticity function reaches a maximum (minimum) at x_M, with e(x_M) = 1, then ∀x ≠ x_M it gives e(x) < 1 (e(x) > 1). □
Proposition 4.
Let X be a random variable of the positive domain. Let x_M > 0 be a point at which the elasticity function e(x) reaches a maximum or minimum. Let us suppose that, at this point, the elasticity is also unitary, x_M = \frac{F(x_M)}{f(x_M)}, and also that \frac{d^2 f(x)}{dx^2}\big|_{x_M} = f''(x_M) < 0 \; (f''(x_M) > 0); then the density function reaches a maximum (minimum) at x_M.
Proof. 
Considering the expression for \frac{d e(x)}{dx} of Proposition 1 and using that, by hypothesis, \frac{d e(x)}{dx}\big|_{x_M} = 0 and x_M = \frac{F(x_M)}{f(x_M)}, it follows that:

0 = \frac{d\,e(x)}{dx}\bigg|_{x_M} = \frac{f(x_M)F(x_M) + \frac{F(x_M)}{f(x_M)} f'(x_M)F(x_M) - \frac{F(x_M)}{f(x_M)} f^2(x_M)}{F^2(x_M)} = \frac{f'(x_M)}{f(x_M)} \;\Rightarrow\; f'(x_M) = 0

Since f'(x_M) = 0 and, by hypothesis, f''(x_M) < 0 (> 0), the density function reaches a maximum (minimum) at x_M. □

4. Some Relationships between Elasticities and Reverse Hazard Rates

The inverse hazard function, which can be interpreted as a conditional probability—the probability of a state change happening in an infinitesimal interval preceding a value x , given that the state change takes place at x or before   x (Chechile 2011)—is closely related to the elasticity function (Veres-Ferrer and Pavía 2014, 2017a). Proposition 5 exploits this relationship to infer the behaviour of the inverse hazard function at the points where the first derivative of the elasticity function vanishes, while Proposition 6 links both functions in terms of dominance.
Proposition 5.
Let X be a random variable of the positive domain. Let x_M > 0 be a point at which the first derivative of the elasticity e(x) is null. Then the reverse hazard function is decreasing at x_M.
Proof. 
The reverse hazard function is given by v(x) = \frac{f(x)}{F(x)} and its derivative by:

\frac{d\,v(x)}{dx}\bigg|_{x \ge 0} = \frac{f'(x)F(x) - f^2(x)}{F^2(x)}
So, considering the expression of d   e ( x ) d x obtained from Proposition 1, and using that by hypothesis d   e ( x ) d x | x M = e ( x M ) = 0 , this gives:
\frac{d\,e(x)}{dx}\bigg|_{x_M} = \frac{f(x_M)F(x_M) + x_M f'(x_M)F(x_M) - x_M f^2(x_M)}{F^2(x_M)} = 0

\Rightarrow\; \frac{x_M f'(x_M)F(x_M) - x_M f^2(x_M)}{F^2(x_M)} = -\frac{f(x_M)}{F(x_M)}

and through substitution:

\frac{d\,v(x)}{dx}\bigg|_{x_M} = -\frac{f(x_M)}{x_M F(x_M)} = -\frac{v(x_M)}{x_M} < 0
Specifically, if the elasticity shows a maximum or minimum at x_M > 0, the reverse hazard function is decreasing at this point.
Proposition 6.
Let X be a random variable with domain of definition D; then the following is verified: e(x) \le v(x) \iff |x| \le 1 and e(x) \ge v(x) \iff |x| \ge 1.
Proof. 
e(x) \le v(x) \iff \frac{|x| f(x)}{F(x)} \le \frac{f(x)}{F(x)} \iff |x| \le 1 \quad \text{and}

e(x) \ge v(x) \iff \frac{|x| f(x)}{F(x)} \ge \frac{f(x)}{F(x)} \iff |x| \ge 1
For example, if X is Uniform U(0, b), with b > 1, its elasticity e(x) and reverse hazard v(x) functions (see A.3.9a of Appendix A) are:

e(x) = 1 \quad \forall x \in [0, b] \qquad \text{and} \qquad v(x) = \frac{1}{x} \quad \forall x \in [0, b]

verifying that:

e(x) = 1 \le \frac{1}{x} = v(x) \quad \forall\, 0 < x \le 1 \qquad \text{and} \qquad e(x) = 1 \ge \frac{1}{x} = v(x) \quad \forall\, 1 \le x \le b
Corollary 2.
Let X be a random variable with domain D. Its elasticity is less than or equal to its reverse hazard at all points of D if, and only if, D \subseteq [-1, +1].
Proof. 
e(x) \le v(x) \;\; \forall x \in D \iff \frac{|x| f(x)}{F(x)} \le \frac{f(x)}{F(x)} \;\; \forall x \in D \iff |x| \le 1 \;\; \forall x \in D \iff D \subseteq [-1, +1]
For example, if X is Uniform U(-1, +1), its elasticity e(x) and reverse hazard v(x) functions (see A.3.9d of Appendix A) are:

e(x) = \frac{|x|}{x+1} \quad \forall x \in [-1, +1] \qquad \text{and} \qquad v(x) = \frac{1}{x+1} \quad \forall x \in [-1, +1]

verifying:

e(x) = \frac{|x|}{x+1} \le \frac{1}{x+1} = v(x) \quad \forall x \in [-1, +1]
Corollary 3.
Let X be a random variable with domain D. Its elasticity is greater than or equal to its reverse hazard at all points of D if, and only if, D \subseteq\, ]-\infty, -1] \cup [+1, +\infty[.
Proof. 
e(x) \ge v(x) \;\; \forall x \in D \iff \frac{|x| f(x)}{F(x)} \ge \frac{f(x)}{F(x)} \;\; \forall x \in D \iff |x| \ge 1 \;\; \forall x \in D \iff D \subseteq\, ]-\infty, -1] \cup [+1, +\infty[
For example, if X is U(1, b), with b > 1, its elasticity e(x) and reverse hazard v(x) functions (see A.3.9b of Appendix A) are:

e(x) = \frac{|x|}{x-1} \quad \forall x \in [1, b] \qquad \text{and} \qquad v(x) = \frac{1}{x-1} \quad \forall x \in [1, b]

verifying:

e(x) = \frac{|x|}{x-1} \ge \frac{1}{x-1} = v(x) \iff |x| \ge 1

5. Probability Models with Constant Elasticity

Constant elasticities and, in particular, unitary elasticities characterise processes where the elasticities do not evolve, presenting situations where risks either vanish or increase to an untenable limit. In this section, we characterise the functional form of these types of distributions.
Proposition 7.
If X is a continuous random variable with a constant elasticity throughout its domain of definition D, then X is non-negative, bounded above and bounded below by 0.
Proof. 
e(x) = k > 0 \;\; \forall x \in D \;\Rightarrow\; \frac{f_X(x)}{F_X(x)} = \frac{k}{|x|} \;\Rightarrow\; \ln F_X(x) = k \cdot \mathrm{sig}(x) \cdot \ln|x| + \ln C

and, since X > 0 and C > 0,

\ln F_X(x) = k \ln x + \ln C = \ln(C x^k) \;\Rightarrow\; F_X(x) = C x^k

As

F_X(+\infty) = 1 \quad \text{and} \quad C(+\infty)^k = +\infty \;\Rightarrow\; \exists\, b \;\text{such that}\; X \le b < +\infty, \;\text{with}\; F_X(b) = 1, \quad \text{and}

X > 0 \;\Rightarrow\; \exists\, a \ge 0 \;\text{such that}\; F_X(a) = 0 = C a^k \;\Rightarrow\; a = 0 \;\Rightarrow\; D = [0, b]
Corollary 4.
If X is a random variable with a constant elasticity throughout its domain of definition, then its distribution function is:
F_X(x) = \begin{cases} 0, & x < 0 \\ \left(\dfrac{x}{b}\right)^k, & 0 \le x \le b \\ 1, & x > b \end{cases}
Proof. 
It follows from Proposition 7 that D = [0, b] and F_X(x) = C x^k, and from 1 = F_X(b) = C b^k it follows that C = \frac{1}{b^k}. □
We see that the elasticity e(x) = k > 0 is well defined, since it verifies the three properties that characterise an elasticity function (Veres-Ferrer and Pavía 2012):
(a)
Non-negative and continuous
(b)
\lim_{x \to +\infty} \frac{k}{x} = 0
(c)
\lim_{x \to 0} \int_x^b \frac{k}{u}\, du = +\infty
Distribution I-4 (see Appendix A.1.4) is a particular case of this model for b = 1. When k > 1, the random variable is always elastic (P(e(x) > 1) = 1) and when k < 1, the random variable is always inelastic (P(e(x) < 1) = 1). In both cases, considering elasticity as a random variable, this gives a degenerate random variable:
F_e(y) = \begin{cases} 0, & y < k \\ 1, & y \ge k \end{cases}
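As a quick numerical check of this constant-elasticity family (a minimal sketch under our own choice of b and k, not values taken from the paper), the elasticity computed directly from F(x) = (x/b)^k is indeed constant and equal to k:

```python
import numpy as np

# Constant-elasticity model of Corollary 4: F(x) = (x/b)**k on [0, b].
b, k = 2.0, 0.5

def F(x):                     # cumulative distribution function
    return (x / b) ** k

def f(x):                     # density, dF/dx
    return k * x ** (k - 1) / b ** k

x = np.linspace(0.01, b, 200)
e = x * f(x) / F(x)           # elasticity |x| f(x) / F(x) (here x > 0)
print(np.allclose(e, k))      # True: the elasticity equals k everywhere
```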
The following two Corollaries can be proposed:
Corollary 5.
If X is a random variable with a unit elasticity throughout its domain of definition, then it is Uniform at D = [ 0 , b ] for some b > 0 .
This result is reflected in model 3.9.a, where P ( e ( x ) = 1 ) = 1 is verified.
Corollary 6.
The uniform random variable at D = [ 0 , b ] is the only variable with a unit elasticity in its entire domain of definition.

6. Relationships between the Elasticities of Functionally Related Random Variables

This section continues to study properties of elasticity, specifically relating elasticities of random variables that are functionally linked. This knowledge is relevant because it can simplify the calculations of elasticities in certain models. Proposition 8 presents the overall result, Proposition 9 develops the relationships for some of the more common transformations and Proposition 10 derives the general expression for the class of distributions whose distribution function can be parameterised as a power of another distribution function.
Proposition 8.
Let Y = g(X) be a transformation of the random variable X, with g a differentiable function with inverse g^{-1}; then:

e_{Y=g(X)}(y) = \frac{|y| \, \frac{d\,g^{-1}(y)}{dy}}{\left|g^{-1}(y)\right|}\; e_X\!\left(g^{-1}(y)\right)
Proof. 
If Y = g ( X ) , the following is verified:
F_Y(y) = F_X\!\left(g^{-1}(y)\right) \qquad \text{and} \qquad f_Y(y) = \frac{d\,g^{-1}(y)}{dy}\, f_X\!\left(g^{-1}(y)\right)
From which it follows that the elasticities have the following relationship:
e_{Y=g(X)}(y) = \frac{|y| \cdot f_Y(y)}{F_Y(y)} = \frac{|y|\,\frac{d\,g^{-1}(y)}{dy}\, f_X\!\left(g^{-1}(y)\right)}{F_X\!\left(g^{-1}(y)\right)} = \frac{|y|\,\frac{d\,g^{-1}(y)}{dy}}{\left|g^{-1}(y)\right|}\; e_X\!\left(g^{-1}(y)\right)
For certain common transformations it is possible to obtain simple expressions that relate the elasticities of the original variable and the transformed variable. The following proposition shows the relationships for five transformations (linear, quadratic, square root, logarithm, and exponential):
Proposition 9.
I. Linear. Let Y = a + bX, b > 0; then the relationship between the respective elasticities is:

e_Y(y) = \left|\frac{y}{y-a}\right| \cdot e_X\!\left(\frac{y-a}{b}\right)
Proof. 
In a linear relationship between random variables, the following relationships are known:
f_Y(y) = \frac{1}{b}\, f_X\!\left(\frac{y-a}{b}\right) \qquad\qquad F_Y(y) = F_X\!\left(\frac{y-a}{b}\right)
From which the elasticities have the following relationship:
e_Y(y) = \frac{|y| \cdot \frac{1}{b}\, f_X\!\left(\frac{y-a}{b}\right)}{F_X\!\left(\frac{y-a}{b}\right)} = \frac{|y|}{b} \cdot \frac{\left|\frac{y-a}{b}\right| f_X\!\left(\frac{y-a}{b}\right)}{\left|\frac{y-a}{b}\right| F_X\!\left(\frac{y-a}{b}\right)} = \left|\frac{y}{y-a}\right| \cdot e_X\!\left(\frac{y-a}{b}\right)
For example, considering the random variable of model 1.5 of the summary, where Y = a + bX, with a > 0 and b > 0, we have that a \le Y \le a + b, which gives:

e_{Y=a+bX}(y) = \frac{y}{y-a} \cdot \left(1 + \frac{b}{y+b-a}\right)

Considering the relationship between the distribution and elasticity functions (Veres-Ferrer and Pavía 2017a), this elasticity function provides the following probability distribution for the variable Y:

F_{Y=a+bX}(y) = \begin{cases} 0, & y < a \\ \dfrac{2(y-a)^2}{b\,(y+b-a)}, & a \le y \le a+b \\ 1, & y > a+b \end{cases}
II. Quadratic. Let Y = X 2 , then the following is verified:
e_{Y=X^2}(y) = \frac{\sqrt{y}\,\left(f_X(\sqrt{y}) + f_X(-\sqrt{y})\right)}{2\left(F_X(\sqrt{y}) - F_X(-\sqrt{y})\right)}
Specifically, when the variable is a non-negative support variable, X 0 , the relationship between the respective elasticities is:
e_{Y=X^2}(y) = \frac{1}{2} \cdot e_X(\sqrt{y})
Proof. 
Since Y = X 2 , the following relationships are verified:
f_Y(y) = \frac{1}{2\sqrt{y}}\left(f_X(\sqrt{y}) + f_X(-\sqrt{y})\right) \qquad\qquad F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y})
From which the elasticities have the following relationship:
e_Y(y) = \frac{y \cdot \frac{1}{2\sqrt{y}}\left(f_X(\sqrt{y}) + f_X(-\sqrt{y})\right)}{F_X(\sqrt{y}) - F_X(-\sqrt{y})} = \frac{\sqrt{y}\left(f_X(\sqrt{y}) + f_X(-\sqrt{y})\right)}{2\left(F_X(\sqrt{y}) - F_X(-\sqrt{y})\right)}
And, specifically, when X 0 :
e_{Y=X^2}(y) = \frac{\sqrt{y}\, f_X(\sqrt{y})}{2\, F_X(\sqrt{y})} = \frac{1}{2} \cdot e_X(\sqrt{y})
For example, considering again the model 1.5 in the summary table, since Y = X^2, and with 0 \le Y \le 1, this gives:

e_{Y=X^2}(y) = \frac{1}{2} \cdot \frac{2+\sqrt{y}}{1+\sqrt{y}} = \frac{2+\sqrt{y}}{2+2\sqrt{y}}

Considering the relationship between the distribution and elasticity functions, this elasticity function has the following probability distribution:

F_{Y=X^2}(y) = \begin{cases} 0, & y < 0 \\ \dfrac{2y}{1+\sqrt{y}}, & 0 \le y \le 1 \\ 1, & y > 1 \end{cases}
III. Square root. Let Y = \sqrt{X}, with X \ge 0; then the following is verified: e_Y(y) = 2 \cdot e_X(y^2).
Proof. 
Since Y = X , the following relationships are verified:
f_Y(y) = 2y \cdot f_X(y^2) \qquad\qquad F_Y(y) = F_X(y^2)
From which the elasticities have the following relationship:
e_Y(y) = \frac{y \cdot 2y \cdot f_X(y^2)}{F_X(y^2)} = 2 \cdot e_X(y^2)
For example, considering again the model 1.5 in the table, since Y = \sqrt{X}, and with 0 \le Y \le 1:

e_{Y=\sqrt{X}}(y) = 2 \cdot \frac{2+y^2}{1+y^2} = \frac{4+2y^2}{1+y^2}

Considering the relationship between the distribution and elasticity functions, this elasticity function has the following probability distribution:

F_{Y=\sqrt{X}}(y) = \begin{cases} 0, & y < 0 \\ \dfrac{2y^4}{1+y^2}, & 0 \le y \le 1 \\ 1, & y > 1 \end{cases}
IV. Logarithm. Let Y = \ln X, with X \ge 0; then the following is verified: e_{Y=\ln X}(y) = |y| \cdot e_X(e^y).
Proof. 
Since Y = ln X the following relationships are verified:
f_Y(y) = e^y \cdot f_X(e^y) \qquad\qquad F_Y(y) = F_X(e^y)
From which the elasticities have the following relationship:
e_Y(y) = \frac{|y| \cdot e^y \cdot f_X(e^y)}{F_X(e^y)} = |y| \cdot e_X(e^y)
For example, considering again example 1.5, since Y = \ln X, with 0 \le X \le 1, it follows that:

e_{Y=\ln X}(y) = |y| \cdot \frac{2+e^y}{1+e^y}

And considering the relationship between the distribution and elasticity functions, this elasticity function has the following probability distribution:

F_{Y=\ln X}(y) = \begin{cases} \dfrac{2e^{2y}}{1+e^y}, & y \le 0 \\ 1, & y > 0 \end{cases}
V. Exponential. Let Y = e^X (Y > 0); then the following is verified: e_{Y=e^X}(y) = \dfrac{e_X(\ln y)}{\ln y}.
Proof. 
Since Y = e X , the following relationships are verified:
f_Y(y) = \frac{1}{y} \cdot f_X(\ln y) \qquad\qquad F_Y(y) = F_X(\ln y)
From which the elasticities have the following relationship:
e_Y(y) = \frac{y \cdot \frac{1}{y} \cdot f_X(\ln y)}{F_X(\ln y)} = \frac{e_X(\ln y)}{\ln y}
For example, considering again the model 1.5 in the table, since Y = e^X, the following can be easily derived:

e_{Y=e^X}(y) = \frac{1}{\ln y} \cdot \frac{2+\ln y}{1+\ln y}

And considering the relationship between the distribution and elasticity functions, this elasticity function has the following probability distribution:

F_{Y=e^X}(y) = \begin{cases} 0, & y < 1 \\ \dfrac{2(\ln y)^2}{\ln y + 1}, & 1 \le y \le e \\ 1, & y > e \end{cases}
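The transformation rules above are easy to verify symbolically. The following sketch (our own check, assuming sympy is available) confirms Proposition 8 through the square-root case of Proposition 9 for model I-5:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Model I-5 from the paper: F_X(x) = 2*x**2/(1 + x) on [0, 1].
F_X = 2*x**2 / (1 + x)
e_X = sp.simplify(x * sp.diff(F_X, x) / F_X)      # elasticity (2 + x)/(1 + x)

# Transform Y = sqrt(X): F_Y(y) = F_X(y**2); Proposition 9.III predicts
# e_Y(y) = 2 * e_X(y**2).
F_Y = F_X.subs(x, y**2)
e_Y = sp.simplify(y * sp.diff(F_Y, y) / F_Y)
print(sp.simplify(e_Y - 2*e_X.subs(x, y**2)))     # 0, confirming the relation
```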
Finally, for a random variable X with distribution function F ( x ) , we consider the class of distribution functions parameterised as G ( x ) = ( F ( x ) ) γ ,     γ > 0 (Marshall and Olkin 2007), known as the inverse proportional hazard ratio distribution, for which the following is verified (Popović et al. 2021):
G(x) = (F(x))^\gamma, \;\; \gamma > 0 \;\Rightarrow\; r_G(x) = \gamma \cdot r_F(x)

where r_F(x) and r_G(x) are the respective reverse hazard rates of F and G. This result can be directly extended to the respective elasticities.
Proposition 10.
Given X is a random variable with the distribution function F ( x ) , the following is verified:
G(x) = (F(x))^\gamma, \;\; \gamma > 0 \;\Rightarrow\; e_G(x) = \gamma \cdot e_F(x)

Distributions which belong to this class are the well-known Burr distribution, with F(x) = (1 - e^{-x^2})^\mu (Burr 1942), the generalised exponential distribution, with F(x) = (1 - e^{-x})^\beta (Gupta and Kundu 1999), and the Topp–Leone distribution, with F(x) = (2x - x^2)^\upsilon, x \in (0, 1) (Topp and Leone 1955).

7. Exemplifying the Use of the Distribution Function of the Elasticity

This paper reinterprets the elasticity function in random terms and states that the linked probability distribution function can be employed to assess and manage risks. In this section, we exemplify, using real data, how this new tool could be used to help make decisions related to risk. To do this, we revisit the problem considered in Pavía and Veres-Ferrer (2016) of whether a card issuer could build a card-risk alarm system using just the information provided by the cardholders regarding incidents of theft, loss or fraudulent use of their cards.
The data available in Table 1 of Pavía and Veres-Ferrer (2016), based on a total of 1069 claims, show the empirical distribution of the time delay in reporting credit card events by customers, raising the question as to whether customers are diligent enough when reporting incidents. Using these data, we can estimate the theoretical distribution of the underlying random variable and from this derive the probability distribution function of its elasticity function.
Although several models can be fitted to this data set, a gamma distribution seems appropriate. The null hypothesis that the data follow a gamma distribution with scale parameter β = 42.15 and shape parameter α = 0.304, estimated by maximum likelihood conditional on a gamma distribution, is not rejected at the usual significance levels (p-value = 0.1280). Likewise, the null hypothesis that α ≥ 1 is rejected at all levels (p-value < 0.0001). So, assuming that the time elapsed between the incident occurring and it being reported is distributed as a gamma with α < 1, it follows that P(e(x) ≤ 1) = 1, i.e., that the corresponding variable is inelastic ∀x. This means that no reliable alarm system could be constructed based on customer diligence in reporting incidents. This result is in line with that reported in Pavía and Veres-Ferrer (2016).
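A workflow of this kind can be sketched as follows; the file name and the raw array of individual delays are hypothetical (the original analysis works from the tabulated empirical distribution in Pavía and Veres-Ferrer (2016)), and scipy is assumed:

```python
import numpy as np
from scipy import stats

# Hypothetical raw reporting delays (in days), one value per claim.
delays = np.loadtxt("reporting_delays.txt")

# Maximum-likelihood gamma fit with the location fixed at 0.
alpha, loc, beta = stats.gamma.fit(delays, floc=0)
print(f"shape alpha = {alpha:.3f}, scale beta = {beta:.2f}")

# Goodness of fit against the fitted gamma (Kolmogorov-Smirnov).
print(stats.kstest(delays, "gamma", args=(alpha, loc, beta)))

# For a gamma with alpha < 1 the elasticity never exceeds 1,
# i.e., P(e(X) <= 1) = 1, so no useful alarm threshold exists.
if alpha < 1:
    print("Fitted shape < 1: the reporting-delay variable is inelastic for all x.")
```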

8. Showing by Way of Examples How to Calculate Elasticity Probability Distributions

Equations (2) and (3) provide general expressions for calculating the cumulative distribution function of the elasticity of a probability model. This section shows, through four examples, how to operationalise such a relationship. In the first three examples, the analytical expressions of elasticity and its distribution function are obtained for three models that cover the three possible typologies for elasticity: elastic, inelastic and mixed. The fourth example, corresponding to a standardised Gaussian distribution, shows how to obtain an approximation of the cumulative distribution function of elasticity when it is not possible to obtain the analytical expression. The examples shown in this section are extended further in the following section, where the probability models and their respective functions of elasticity, e(x), and cumulative distribution of elasticity, F_e(y), are summarised in table format. The calculations for the additional models presented in Section 9 are collected in Appendix A.
Example 1.
Elastic probability distribution throughout its domain of definition.
Distribution I-5 (see Appendix A.1.5) corresponds to the random variable X that has the following functions of distribution F(x), density f(x) and elasticity e(x):

F_X(x) = \begin{cases} 0, & x < 0 \\ \dfrac{2x^2}{1+x}, & 0 \le x \le 1 \\ 1, & x > 1 \end{cases}

f_X(x) = \frac{2x(2+x)}{(1+x)^2} \qquad x \in [0, 1]

e_X(x) = \frac{x \cdot \frac{2x(2+x)}{(1+x)^2}}{\frac{2x^2}{1+x}} = \frac{2+x}{1+x} \qquad x \in [0, 1]
This distribution is elastic throughout its domain, e(x) > 1 ∀x ∈ [0, 1]. Specifically, its elasticity decreases from e(x=0) = 2 to e(x=1) = 3/2, the latter value being reached at the upper limit of its support. In other words, P(e(x) > 1) = 1.
Considering elasticity as a random variable, its cumulative distribution function is:
F_e(y) = P(e(x) \le y) = P\!\left(\frac{2+x}{1+x} \le y\right) = P\!\left(\frac{2-y}{y-1} \le x\right) = 1 - \frac{2\left(\frac{2-y}{y-1}\right)^2}{1+\frac{2-y}{y-1}} = \frac{-2y^2+9y-9}{y-1}

giving:

F_e(y) = \begin{cases} 0, & y < \tfrac{3}{2} \\ \dfrac{-2y^2+9y-9}{y-1}, & \tfrac{3}{2} \le y \le 2 \\ 1, & y > 2 \end{cases}
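This calculation can be reproduced symbolically; the following sketch (our own, assuming sympy) inverts the decreasing elasticity of model I-5 and recovers the same F_e(y):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Model I-5: F(x) = 2*x**2/(1 + x) on [0, 1]; elasticity e(x) = (2 + x)/(1 + x).
F = 2*x**2 / (1 + x)
e = sp.simplify(x * sp.diff(F, x) / F)

# e is decreasing, so P(e(X) <= y) = P(X >= x(y)) with e(x(y)) = y.
x_of_y = sp.solve(sp.Eq(e, y), x)[0]
F_e = sp.simplify(1 - F.subs(x, x_of_y))
print(F_e)   # equivalent to (-2*y**2 + 9*y - 9)/(y - 1) on [3/2, 2]
```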
Example 2.
Inelastic probability distribution throughout its domain of definition.
The Topp–Leone model (see Appendix A.2.5) presents the random variable X defined as the absolute difference of two standard uniforms (Kotz and Van Dorp 2004), which has the following functions of distribution F(x), density f(x) and elasticity e(x):

F_X(x) = \begin{cases} 0, & x < 0 \\ 2x - x^2, & 0 \le x \le 1 \\ 1, & x > 1 \end{cases}

f_X(x) = 2 - 2x \qquad x \in [0, 1]

e_X(x) = \frac{x(2-2x)}{2x - x^2} = \frac{2-2x}{2-x} \qquad x \in [0, 1]
The probability distribution of this random variable, typical of a Topp–Leone distribution with υ = 1, does not show a behavioural change in its elasticity: it is always inelastic, as it decreases from unit elasticity to perfect inelasticity: P(e(x) ≤ 1) = 1 and 0 ≤ e(x) ≤ 1.
Considering elasticity as a random variable, its cumulative distribution function is:
F_e(y) = P(e(x) \le y) = P\!\left(\frac{2-2x}{2-x} \le y\right) = P\!\left(\frac{2(1-y)}{2-y} \le x\right) = 1 - \left[\frac{4(1-y)}{2-y} - \frac{4(1-y)^2}{(2-y)^2}\right] = \frac{y^2}{(2-y)^2}

giving:

F_e(y) = \begin{cases} 0, & y < 0 \\ \dfrac{y^2}{(2-y)^2}, & 0 \le y \le 1 \\ 1, & y > 1 \end{cases}
Example 3.
Probability distribution of mixed behaviour.
The random variable X, introduced in Appendix A.3.2, follows a Pareto type I distribution in [x_0, +\infty[. Its distribution, density and elasticity functions are (Veres-Ferrer and Pavía 2018):

F_X(x) = \begin{cases} 0, & x < x_0 \\ 1 - \left(\dfrac{x_0}{x}\right)^b, & x \ge x_0 \end{cases} \qquad x_0 > 0, \; b > 0

f_X(x) = \frac{b\, x_0^b}{x^{b+1}} \qquad x \ge x_0 > 0, \; b > 0

e_X(x) = \frac{b\, x_0^b}{x^b - x_0^b} \qquad x \ge x_0 > 0, \; b > 0

The elasticity function of the distribution, which takes values in [0, +\infty[, decreases asymptotically from perfect elasticity to perfect inelasticity, with a change from an elastic to an inelastic situation, e_X(x) = 1, at x = x_0 (1+b)^{1/b}.
Considering elasticity as a random variable, its cumulative distribution function is:
F_e(y) = P(e(x) \le y) = P\!\left(\frac{b\, x_0^b}{x^b - x_0^b} \le y\right) = P\!\left(\frac{(b+y)\, x_0^b}{y} \le x^b\right) = P\!\left(x_0 \left(\frac{b+y}{y}\right)^{1/b} \le x\right) = \frac{y}{b+y}

giving:

F_e(y) = \begin{cases} 0, & y \le 0 \\ \dfrac{y}{b+y}, & y > 0 \end{cases}

which does not depend on x_0. Additionally, P(e(x) > 1) = 1 - F_e(1) = 1 - \frac{1}{b+1} = \frac{b}{b+1}.
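A quick Monte Carlo check of this result (a sketch with arbitrary values of x_0 and b, not taken from the paper) compares the empirical P(e(X) > 1) with the analytical b/(b+1):

```python
import numpy as np

rng = np.random.default_rng(1)
x0, b = 2.0, 3.0

# Sample a Pareto type I variable on [x0, +inf) by inverse transform.
u = rng.uniform(size=500_000)
x = x0 * (1 - u) ** (-1 / b)

# Elasticity of the Pareto I model: e(x) = b * x0**b / (x**b - x0**b).
e = b * x0**b / (x**b - x0**b)

# Empirical P(e(X) > 1) versus the analytical value b / (b + 1) = 0.75.
print(np.mean(e > 1), b / (b + 1))
```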
Example 4.
Standardised Normal Distribution Z .
Let Z be a standardised Normal variable. As is well known, for this variable there is no analytical expression for its cumulative distribution function, nor for its elasticity function, although it is possible to approximate it numerically (Veres-Ferrer and Pavía 2018). For negative values, Z presents a decreasing elasticity, from perfect elasticity to perfect inelasticity at z = 0. From z = 0, the elasticity of Z increases until it reaches a maximum at z ≈ 0.8401 (with a value e(z) ≈ 0.29453, less than the unit), decreasing again from that moment asymptotically towards perfect inelasticity. The change in elasticity, from elastic to inelastic, e(z) = 1, therefore occurs at the negative abscissa z ≈ −0.7518. Hence, the probability that the standardised Normal provides an elastic abscissa is P(e(z) > 1) = P(Z < −0.7518) ≈ 0.2261.
From the above, using Proposition 9 it is verified for any Normal, X \sim N(\mu, \sigma), that:

e_{N(\mu,\sigma)}(x = \mu - 0.7518\,\sigma) = \left|\frac{\mu - 0.7518\,\sigma}{-0.7518\,\sigma}\right| \cdot e_Z(-0.7518) = \left|\frac{\mu - 0.7518\,\sigma}{0.7518\,\sigma}\right|

If \mu = 0, this gives e_{N(0,\sigma)}(x = -0.7518\,\sigma) = e_Z(-0.7518) = 1; that is, at x = -0.7518\,\sigma the variable X = N(0, \sigma) reaches unit elasticity.
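The figures quoted above are easy to reproduce numerically; the following sketch (our own, assuming scipy) locates the unit-elasticity abscissa, the positive-side maximum and the probability of an elastic value:

```python
import numpy as np
from scipy import stats, optimize

z = stats.norm()
e = lambda x: np.abs(x) * z.pdf(x) / z.cdf(x)     # elasticity of N(0, 1)

# Negative abscissa where the elasticity crosses 1 (about -0.7518).
z_unit = optimize.brentq(lambda x: e(x) - 1.0, -5.0, -1e-6)

# Local maximum of the elasticity on the positive axis (about 0.8401, value about 0.2945).
res = optimize.minimize_scalar(lambda x: -e(x), bounds=(1e-3, 5.0), method="bounded")

# Probability of an elastic abscissa: P(e(Z) > 1) = P(Z < z_unit) (about 0.2261).
print(z_unit, res.x, -res.fun, z.cdf(z_unit))
```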

9. Summary Tables with Examples of Elasticity Functions and Distributions

The following tables summarise the behaviour of elasticity for various stochastic models. In particular, Table 1 and Table 2 present examples of perfectly elastic and inelastic models, respectively. Table 3 is devoted to examples of models with mixed behaviour. Models based on continuous Uniform and Log-uniform distributions are displayed in Table 4 and Table 5, respectively. Finally, Table 6 presents examples of some of the most popular continuous models, for which there are no analytical solutions.
The reader interested in more mathematical details of the results presented in the tables should consult the Appendix A that accompanies this paper. The codes employed in the tables to identify each model coincide with the numbers of the subsections in Appendix A where the corresponding mathematical derivations are developed.

10. Conclusions

The elasticity function of a random variable is a function, recently introduced in statistical practice, which allows characterisation of the probability distribution of any random variable. In this study, the elasticity function is extended to the stochastic field, an extension which enables the measurement of the probability that it takes values in any interval of interest, in particular, values greater than 1. As recent practice has shown, the change registered by a process passing from an inelastic to an elastic situation (or vice versa) has important consequences for risk management, with implications in business, epidemiological or insurance problems.
The consideration of elasticity as a random variable, together with a greater understanding of the properties of the elasticity function, with special relevance to its notable points (those in which the elasticity is unitary), can offer a new direction in risk assessment. This will facilitate proactive management and an a priori evaluation of the effects that a modification of protocols or the implementation of a set of actions could have. Once the consequences of the changes have been measured in probabilistic terms, we could calculate what effects they would have in terms of elasticity, determine in which regions this is greater or less than one, or measure the probability that the elasticity is greater or less than a certain value. From a theoretical perspective, in addition, the latter allows probability models to be classified as perfectly elastic, perfectly inelastic and with mixed behaviour, being preferably elastic or inelastic.
The paper also includes other contributions of interest, such as new results that relate the elasticity and inverse hazard functions, the derivation of the functional form that the cumulative distribution function must have for a probability model with constant elasticity, and how the elasticities of functionally dependent variables are related. The paper concludes by showing, through examples, how cumulative distribution functions of elasticity functions are calculated and by offering, in several summary tables, a broad list of models with their elasticity functions and distributions, including details about some of their most relevant characteristics.

Author Contributions

Conceptualisation, E.-J.V.-F. and J.M.P.; methodology, E.-J.V.-F.; validation, E.-J.V.-F. and J.M.P.; formal analysis, E.-J.V.-F.; writing—original draft preparation, E.-J.V.-F. and J.M.P.; writing—review and editing, E.-J.V.-F. and J.M.P.; supervision, E.-J.V.-F. and J.M.P.; project administration, E.-J.V.-F. and J.M.P.; funding acquisition, J.M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Generalitat Valenciana through project AICO/2021/257 (Consellería d’Innovació, Universitats, Ciència i Societat Digital).

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to thank Marie Hodkinson for translating the text of the paper into English and two anonymous reviewers and the assistant editor for their valuable comments and suggestions. The authors also acknowledge the support of Generalitat Valenciana through project AICO/2021/257 (Consellería d’Innovació, Universitats, Ciència i Societat Digital).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A

This Appendix presents to the interested reader some details not included in the paper regarding the results presented in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 of Section 9.

Appendix A.1. Always Elastic Probability Distributions

Appendix A.1.1. Distribution I-1

Let X be a continuous random variable, with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) (Veres-Ferrer and Pavía 2012):
F(x) = \begin{cases} 0, & x \le 0 \\ \dfrac{x}{1+\sqrt{1-x^2}}, & 0 < x < 1 \\ 1, & x \ge 1 \end{cases}

f(x) = \frac{1}{\sqrt{1-x^2}\left(1+\sqrt{1-x^2}\right)} > 0 \qquad x \in (0, 1)

e(x) = \frac{1}{\sqrt{1-x^2}} \qquad x \in (0, 1)
This distribution is (i) elastic throughout the domain of definition of the variable, e(x) > 1 ∀x ∈ (0, 1), (ii) increases from unit elasticity and (iii) tends towards perfect elasticity, 1 ≤ e(x) < +∞, at the upper limit of its domain of definition. Therefore, P(e(x) > 1) = 1.
In addition, considering the elasticity as a function of the random variable, its cumulative distribution function is:
F_e(y) = P(e(x) \le y) = P\!\left(\frac{1}{\sqrt{1-x^2}} \le y\right) = P\!\left(x^2 \le 1 - \frac{1}{y^2}\right) = P\!\left(0 \le x \le \sqrt{1-\frac{1}{y^2}}\right) = \sqrt{\frac{y-1}{y+1}}

giving:

F_e(y) = \begin{cases} 0, & y \le 1 \\ \sqrt{\dfrac{y-1}{y+1}}, & y > 1 \end{cases}

Appendix A.1.2. Distribution II-1

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x )
F(x) = \begin{cases} 0, & x < 0 \\ \dfrac{ax}{4-ax}, & 0 \le x \le \dfrac{2}{a}, \; a > 0 \\ 1, & x > \dfrac{2}{a} \end{cases}

f(x) = \frac{4a}{(4-ax)^2} \qquad 0 \le x \le \frac{2}{a}, \; a > 0

e(x) = \frac{x \cdot \frac{4a}{(4-ax)^2}}{\frac{ax}{4-ax}} = \frac{4}{4-ax} \qquad 0 \le x \le \frac{2}{a}
This distribution is (i) elastic throughout its domain of definition, e(x) > 1 ∀x ∈ [0, 2/a], and (ii) increases from unit elasticity until elasticity e(x) = 2, which is reached at the upper limit of its domain. Therefore, P(e(x) > 1) = 1.
Considering the elasticity as a function of the random variable, its probability distribution is:
F_e(y) = P(e(x) \le y) = P\!\left(\frac{4}{4-ax} \le y\right) = P\!\left(4 \le 4y - axy\right) = P\!\left(x \le \frac{4(y-1)}{ay}\right) = \frac{\frac{4(y-1)}{y}}{4 - \frac{4(y-1)}{y}} = y - 1

giving:

F_e(y) = \begin{cases} 0, & y < 1 \\ y-1, & 1 \le y \le 2 \\ 1, & y > 2 \end{cases}
which is a Uniform distribution in (1,2) and which does not depend on parameter a .

Appendix A.1.3. Distribution III-1

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
F(x) = \begin{cases} 0, & x < 1 \\ \dfrac{x \ln x}{e}, & 1 \le x \le e \\ 1, & x > e \end{cases}

f(x) = \frac{\ln x + 1}{e} \qquad x \in [1, e]

e(x) = \frac{\ln x + 1}{\ln x} \qquad x \in [1, e]
This distribution is (i) elastic throughout the domain of definition of the variable, e(x) > 1 ∀x ∈ [1, e], and (ii) decreasing from perfect elasticity to elasticity e(x=e) = 2, which is reached at the upper limit of its domain. Therefore, P(e(x) > 1) = 1.
Considering the elasticity as a function of the random variable, its probability distribution is:
F_e(y) = P(e(x) \le y) = P\!\left(\frac{\ln x + 1}{\ln x} \le y\right) = P\!\left(e^{\frac{1}{y-1}} \le x\right) = 1 - \frac{1}{e}\, e^{\frac{1}{y-1}} \ln\!\left(e^{\frac{1}{y-1}}\right) = 1 - \frac{1}{y-1}\, e^{\frac{2-y}{y-1}}

giving:

F_e(y) = \begin{cases} 0, & y < 2 \\ 1 - \dfrac{1}{y-1}\, e^{\frac{2-y}{y-1}}, & 2 \le y < +\infty \end{cases}

Appendix A.1.4. Distribution IV-1

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
F(x) = \begin{cases} 0, & x < 0 \\ x^\alpha, & 0 \le x \le 1, \; \alpha > 1 \\ 1, & x > 1 \end{cases}

f(x) = \alpha \cdot x^{\alpha-1} \qquad x \in [0, 1]

e(x) = \alpha \qquad x \in [0, 1]

This distribution is elastic throughout the domain of definition of the variable, e(x) > 1 ∀x ∈ [0, 1], so that P(e(x) > 1) = 1. Considering it as a random variable, the elasticity is degenerate on being constant:

F_e(y) = P(e(x) \le y) = \begin{cases} 0, & y < \alpha \\ 1, & y \ge \alpha \end{cases}

Appendix A.1.5. Distribution V-1

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
F(x) = \begin{cases} 0, & x < 0 \\ \dfrac{2x^2}{1+x}, & 0 \le x \le 1 \\ 1, & x > 1 \end{cases}

f(x) = \frac{2x(2+x)}{(1+x)^2} \qquad x \in [0, 1]

e(x) = \frac{x \cdot \frac{2x(2+x)}{(1+x)^2}}{\frac{2x^2}{1+x}} = \frac{2+x}{1+x} \qquad x \in [0, 1]
This distribution is (i) elastic throughout the domain of definition of the variable, e(x) > 1 ∀x ∈ [0, 1], and (ii) decreasing from elasticity e(x=0) = 2 until elasticity e(x=1) = 3/2, a value which is reached at the upper limit of its domain. Therefore, P(e(x) > 1) = 1.
Considering the elasticity as a function of the random variable, its probability distribution is:
F_e(y) = P(e(x) \le y) = P\!\left(\frac{2+x}{1+x} \le y\right) = P\!\left(\frac{2-y}{y-1} \le x\right) = 1 - \frac{2\left(\frac{2-y}{y-1}\right)^2}{1+\frac{2-y}{y-1}} = \frac{-2y^2+9y-9}{y-1}

giving:

F_e(y) = \begin{cases} 0, & y < \tfrac{3}{2} \\ \dfrac{-2y^2+9y-9}{y-1}, & \tfrac{3}{2} \le y \le 2 \\ 1, & y > 2 \end{cases}

Appendix A.2. Always Inelastic Probability Distributions

Appendix A.2.1. Exponential Distribution

Let X be an Exponential random variable, then its functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) are (Veres-Ferrer and Pavía 2012):
F(x) = \begin{cases} 0, & x \le 0 \\ 1 - e^{-\lambda x}, & x > 0 \end{cases}

f(x) = \lambda e^{-\lambda x} \qquad x > 0, \; \lambda > 0

e(x) = \frac{\lambda x e^{-\lambda x}}{1 - e^{-\lambda x}} = \frac{\lambda x}{e^{\lambda x} - 1} \qquad x > 0
The probability distribution of this random variable shows no behavioural change in its elasticity: it is (i) decreasing throughout its domain, (ii) reaching its upper limit when x approaches 0 and (iii) tending asymptotically to perfect inelasticity as x approaches infinity, e(x) < 1 ∀x > 0. Therefore, even without knowing the probability distribution of the elasticity, it can be stated that P(e(x) ≤ 1) = 1.

Appendix A.2.2. Distribution II-2

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) (Veres-Ferrer and Pavía 2018):
F(x) = \begin{cases} 0, & x < 0 \\ x \cdot e^{\frac{1-x^2}{2}}, & 0 \le x \le 1 \\ 1, & x > 1 \end{cases}

f(x) = (1-x^2)\, e^{\frac{1-x^2}{2}} \qquad 0 \le x \le 1

e(x) = 1 - x^2 \qquad 0 \le x \le 1
This distribution does not show a change in its elasticity. It is inelastic throughout its domain of definition, decreasing parabolically from unit elasticity to perfect inelasticity. Therefore, P(e(x) ≤ 1) = 1, 0 ≤ x ≤ 1.
Considering elasticity as a function of a random variable, its probability distribution is:
F_e(y) = P(e(x) \le y) = P(1-x^2 \le y) = P\!\left(\sqrt{1-y} \le x\right) = 1 - \sqrt{1-y}\; e^{\frac{1-(1-y)}{2}} = 1 - \sqrt{1-y}\; e^{\frac{y}{2}}

giving:

F_e(y) = \begin{cases} 0, & y < 0 \\ 1 - \sqrt{1-y}\; e^{\frac{y}{2}}, & 0 \le y \le 1 \\ 1, & y > 1 \end{cases}

Appendix A.2.3. Distribution III-2

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) (Veres-Ferrer and Pavía 2018):
F(x) = \begin{cases} 0, & x < 0 \\ x\, e^{-(x-1)}, & 0 \le x \le 1 \\ 1, & x > 1 \end{cases}

f(x) = (1-x)\, e^{-(x-1)} \qquad 0 \le x \le 1

e(x) = 1 - x \qquad 0 \le x \le 1

The probability distribution of this random variable does not show a behavioural change in its elasticity; it is always inelastic. Indeed, the elasticity of the distribution decreases linearly from unit elasticity to perfect inelasticity: P(e(x) ≤ 1) = 1, 0 ≤ x ≤ 1.
Considering elasticity as a function of a random variable, its probability distribution is:
F_e(y) = P(e(x) \le y) = P(1-y \le x) = 1 - (1-y)\, e^{y}

giving:

F_e(y) = \begin{cases} 0, & y < 0 \\ 1 - (1-y)\, e^{y}, & 0 \le y \le 1 \\ 1, & y > 1 \end{cases}

Appendix A.2.4. Distribution IV-2

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
F(x) = \begin{cases} 0, & x < 0 \\ \dfrac{2\sqrt{ax}}{1+\sqrt{ax}}, & 0 \le x \le \dfrac{1}{a}, \; a \ge 1 \\ 1, & x > \dfrac{1}{a} \end{cases}

f(x) = \frac{a}{\sqrt{ax}\left(1+\sqrt{ax}\right)^2} \qquad 0 \le x \le \frac{1}{a}

e(x) = \frac{x \cdot \frac{a}{\sqrt{ax}(1+\sqrt{ax})^2}}{\frac{2\sqrt{ax}}{1+\sqrt{ax}}} = \frac{1}{2\left(1+\sqrt{ax}\right)} \qquad 0 \le x \le \frac{1}{a}

This distribution is (i) inelastic throughout the domain of the variable, e(x) < 1 ∀x ∈ [0, 1/a], (ii) decreasing and (iii) takes values in [1/4, 1/2]. Therefore, P(e(x) > 1) = 0.
Considering elasticity as a function of a random variable, its probability distribution is:
F_e(y) = P(e(x) \le y) = P\!\left(\frac{1}{2(1+\sqrt{ax})} \le y\right) = P\!\left(1 \le 2y + 2y\sqrt{ax}\right) = P\!\left(\frac{(1-2y)^2}{4ay^2} \le x\right) = 1 - \frac{2 \cdot \frac{1-2y}{2y}}{1+\frac{1-2y}{2y}} = 4y - 1

giving:

F_e(y) = \begin{cases} 0, & y < \tfrac{1}{4} \\ 4y - 1, & \tfrac{1}{4} \le y \le \tfrac{1}{2} \\ 1, & y > \tfrac{1}{2} \end{cases}

which is a Uniform distribution in [1/4, 1/2] and does not depend on the parameter a.

Appendix A.2.5. Topp–Leone Distribution

Let X be the random variable defined as the absolute difference of two standard uniforms (Kotz and Van Dorp 2004), and which is a Topp–Leone distribution, with υ = 1 . This variable has the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
F(x) = \begin{cases} 0, & x < 0 \\ 2x - x^2, & 0 \le x \le 1 \\ 1, & x > 1 \end{cases}

f(x) = 2 - 2x \qquad 0 \le x \le 1

e(x) = \frac{2-2x}{2-x} \qquad 0 \le x \le 1
The probability distribution of this random variable does not show a behavioural change in its elasticity; it is always inelastic and decreases from unit elasticity to perfect inelasticity: P(e(x) ≤ 1) = 1, 0 ≤ x ≤ 1 and 0 ≤ e(x) ≤ 1.
Considering elasticity as a function of a random variable, its probability distribution is:
F_e(y) = P(e(x) \le y) = P\!\left(\frac{2-2x}{2-x} \le y\right) = P\!\left(\frac{2(1-y)}{2-y} \le x\right) = 1 - \frac{4(1-y)}{2-y} + \frac{4(1-y)^2}{(2-y)^2} = \frac{y^2}{(2-y)^2}

giving:

F_e(y) = \begin{cases} 0, & y < 0 \\ \dfrac{y^2}{(2-y)^2}, & 0 \le y \le 1 \\ 1, & y > 1 \end{cases}

Appendix A.2.6. Distribution VI-2

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
F(x) = \begin{cases} 0, & x < 0 \\ \dfrac{bx}{bx+a}, & x \ge 0; \; a, b > 0 \end{cases}

f(x) = \frac{ab}{(bx+a)^2} \qquad x \ge 0; \; a, b > 0

e(x) = \frac{a}{bx+a} \qquad x \ge 0
The probability distribution of this random variable does not show a behavioural change in its elasticity, (i) being always inelastic, (ii) decreasing, (iii) tending to unity when x approaches 0 and asymptotically to perfect inelasticity when x tends to infinity. Therefore, P(e(x) ≤ 1) = 1 and 0 ≤ e(x) ≤ 1.
Considering elasticity as a function of a random variable, its probability distribution is:
F_e(y) = P(e(x) \le y) = P\!\left(\frac{a}{bx+a} \le y\right) = P\!\left(\frac{a-ay}{by} \le x\right) = 1 - \frac{b \cdot \frac{a-ay}{by}}{a + b \cdot \frac{a-ay}{by}} = y

giving:

F_e(y) = \begin{cases} 0, & y < 0 \\ y, & 0 \le y \le 1 \\ 1, & y > 1 \end{cases}
which is a Standard Uniform distribution and does not depend on the parameters a and b .

Appendix A.2.7. Distribution VII-2

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
F(x) = \begin{cases} 0, & x < 0 \\ \sqrt{\dfrac{bx}{bx+a}}, & x \ge 0; \; a, b > 0 \end{cases}

f(x) = \frac{ab}{2\sqrt{bx}\,(bx+a)^{3/2}} \qquad x \ge 0; \; a, b > 0

e(x) = \frac{x \cdot \frac{ab}{2\sqrt{bx}(bx+a)^{3/2}}}{\sqrt{\frac{bx}{bx+a}}} = \frac{a}{2(bx+a)} \qquad x \ge 0
The probability distribution of this random variable does not show a behavioural change in its elasticity: (i) it is always inelastic, (ii) decreasing, (iii) takes the value 1/2 at the abscissa x = 0 and (iv) tends asymptotically to perfect inelasticity as x tends to infinity. Therefore, P(e(x) ≤ 1) = 1 and 0 ≤ e(x) ≤ 1/2 < 1.
Considering elasticity as a function of a random variable, its probability distribution is:
F_e(y) = P(e(x) \le y) = P\!\left(\frac{a}{2(bx+a)} \le y\right) = P\!\left(\frac{a-2ay}{2by} \le x\right) = 1 - \sqrt{\frac{b \cdot \frac{a-2ay}{2by}}{a + b \cdot \frac{a-2ay}{2by}}} = 1 - \sqrt{1-2y} \qquad 0 < y < \tfrac{1}{2}

giving:

F_e(y) = \begin{cases} 0, & y \le 0 \\ 1 - \sqrt{1-2y}, & 0 < y \le \tfrac{1}{2} \\ 1, & y > \tfrac{1}{2} \end{cases}
which is a distribution that does not depend on the parameters a and b .

Appendix A.2.8. Distribution VIII-2

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
F(x) = \begin{cases} 0, & x < 0 \\ \dfrac{2x}{1+\sqrt{x}}, & 0 \le x \le 1 \\ 1, & x > 1 \end{cases}

f(x) = \frac{2+\sqrt{x}}{\left(1+\sqrt{x}\right)^2} \qquad 0 \le x \le 1

e(x) = \frac{x \cdot \frac{2+\sqrt{x}}{(1+\sqrt{x})^2}}{\frac{2x}{1+\sqrt{x}}} = \frac{2+\sqrt{x}}{2\left(1+\sqrt{x}\right)} \qquad 0 \le x \le 1

The probability distribution of this random variable does not show a behavioural change in its elasticity: (i) it is always inelastic and (ii) decreasing from the unit elasticity e(x=0) = 1 to the elasticity e(x=1) = 3/4, which is reached at the upper limit of its domain of definition. So, P(e(x) > 1) = 0 and 3/4 ≤ e(x) ≤ 1.
Considering elasticity as a function of a random variable, its probability distribution is:
$$ F_e(y) = P\left(e(x)\le y\right) = P\!\left(\frac{2+\sqrt{x}}{2\left(1+\sqrt{x}\right)}\le y\right) = P\!\left(x\ge\frac{4(1-y)^{2}}{(2y-1)^{2}}\right) = 1-\frac{\frac{8(1-y)^{2}}{(2y-1)^{2}}}{1+\frac{2(1-y)}{2y-1}} = \frac{18y-8y^{2}-9}{2y-1} $$
giving:
$$ F_e(y) = \begin{cases} 0, & y<\tfrac{3}{4} \\ \frac{18y-8y^{2}-9}{2y-1}, & \tfrac{3}{4}\le y\le 1 \\ 1, & y>1 \end{cases} $$

Appendix A.3. Mixed Behaviour Probability Distributions

Appendix A.3.1. Upper Limited Unit Exponential Distribution

Let X be the upper limited unit exponential continuous random variable (Labatut et al. 2009), characterised by being the only continuous random variable for which its density and distribution functions are equal, with the following distribution functions F ( x ) , density f ( x ) and elasticity e ( x ) (Veres-Ferrer and Pavía 2012):
$$ F(x) = \begin{cases} e^{x}, & x\le 0 \\ 1, & x>0 \end{cases} $$
$$ f(x) = \begin{cases} e^{x}, & x\le 0 \\ 0, & x>0 \end{cases} $$
$$ e(x) = |x|, \qquad x\le 0 $$
The elasticity of the distribution decreases from perfect elasticity to perfect inelasticity, the latter reached at 0, which is the upper limit of the domain of definition of the random variable. The change in elasticity occurs at the abscissa x = −1, where it goes from elastic to inelastic.
Considering elasticity as a function of a random variable, its probability distribution is:
$$ F_e(y) = P\left(e(x)\le y\right) = P\left(|x|\le y\right) = P\left(x\ge -y\right) = 1-e^{-y} $$
that is, an Exponential distribution with parameter λ = 1:
$$ F_e(y) = \begin{cases} 0, & y\le 0 \\ 1-e^{-y}, & y>0 \end{cases} $$
Consequently, the probability that the model provides elastic results is P(e(x) > 1) = F(−1) = 1 − F_e(1) = e^{−1} ≈ 0.3679.
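This probability is easy to confirm by simulation. In the minimal sketch below, X is generated as minus a standard exponential variate, which reproduces F(x) = e^x on (−∞, 0]; the sample size and seed are arbitrary:

```python
# Simulation of the upper-limited unit exponential and check of P(e(X) > 1) = e^{-1}.
import numpy as np

rng = np.random.default_rng(2)
x = -rng.exponential(size=200_000)   # support (-inf, 0], F(x) = e^x
elas = np.abs(x)                     # e(x) = |x|

print(np.mean(elas > 1), np.exp(-1))   # both ~= 0.3679
```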

Appendix A.3.2. Pareto I Distribution

Let X be a random variable with a Pareto distribution on [ x₀, +∞ [; then (Veres-Ferrer and Pavía 2018):
$$ F(x) = \begin{cases} 0, & x<x_{0} \\ 1-\left(\frac{x_{0}}{x}\right)^{b}, & x\ge x_{0}>0;\ b>0 \end{cases} $$
$$ f(x) = \frac{b\,x_{0}^{b}}{x^{b+1}}, \qquad x\ge x_{0}>0;\ b>0 $$
$$ e(x) = \frac{b\,x_{0}^{b}}{x^{b}-x_{0}^{b}}, \qquad x>x_{0}>0;\ b>0 $$
The elasticity of the distribution, with values in ]0, +∞[, decreases asymptotically from perfect elasticity to perfect inelasticity, so there is a change in elasticity, specifically at x = x₀(1 + b)^{1/b}, where it goes from elastic to inelastic.
Considering elasticity as a function of a random variable, its probability distribution is:
$$ F_e(y) = P\left(e(x)\le y\right) = P\!\left(\frac{b\,x_{0}^{b}}{x^{b}-x_{0}^{b}}\le y\right) = P\!\left(x^{b}\ge\frac{(b+y)\,x_{0}^{b}}{y}\right) = P\!\left(x\ge x_{0}\left(\frac{b+y}{y}\right)^{1/b}\right) = \frac{y}{b+y} $$
giving:
$$ F_e(y) = \begin{cases} 0, & y\le 0 \\ \frac{y}{b+y}, & y>0 \end{cases} $$
which does not depend on x₀. Additionally, it is verified that P(e(x) > 1) = 1 − F_e(1) = 1 − 1/(b + 1) = b/(b + 1).
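As an illustration (x₀ = 2 and b = 1.5 are arbitrary choices, not values used in the paper), the following sketch checks both F_e(y) = y/(b + y) and P(e(x) > 1) = b/(b + 1) by simulation:

```python
# Simulation check of the elasticity distribution of the Pareto I model.
import numpy as np

x0, b = 2.0, 1.5
rng = np.random.default_rng(3)
u = rng.uniform(size=200_000)
x = x0 / (1 - u) ** (1 / b)                # inverse transform of F(x) = 1 - (x0/x)^b
elas = b * x0**b / (x**b - x0**b)          # e(x)

print(np.mean(elas > 1), b / (b + 1))      # both ~= 0.6
for y in (0.5, 1.0, 2.0):
    print(y, np.mean(elas <= y), y / (b + y))
```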

Appendix A.3.3. Distribution III-3

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
$$ F(x) = \begin{cases} 0, & x<0 \\ \frac{\sqrt{ax}}{6-ax}, & 0\le x\le\frac{4}{a},\ a>0 \\ 1, & x>\frac{4}{a} \end{cases} $$
$$ f(x) = \frac{6a+a^{2}x}{2\sqrt{ax}\,(6-ax)^{2}}, \qquad 0<x\le\frac{4}{a} $$
$$ e(x) = \frac{6+ax}{2\,(6-ax)}, \qquad 0\le x\le\frac{4}{a} $$
The elasticity of the distribution, which takes values in the interval [ e(0) = 1/2, e(4/a) = 5/2 ], is increasing, with unit elasticity, expressing a change in elasticity, reached at x = 2/a, at which point it goes from inelastic to elastic.
Considering elasticity as a function of a random variable, its probability distribution is:
$$ F_e(y) = P\left(e(x)\le y\right) = P\!\left(\frac{6+ax}{2\,(6-ax)}\le y\right) = P\!\left(x\le\frac{12y-6}{a\,(1+2y)}\right) = \frac{\sqrt{\frac{12y-6}{1+2y}}}{6-\frac{12y-6}{1+2y}} = \frac{\sqrt{24y^{2}-6}}{12} $$
a result that is independent of the parameter a. So:
$$ F_e(y) = \begin{cases} 0, & y<\tfrac{1}{2} \\ \frac{\sqrt{24y^{2}-6}}{12}, & \tfrac{1}{2}\le y\le\tfrac{5}{2} \\ 1, & y>\tfrac{5}{2} \end{cases} $$
and P(e(x) > 1) = 1 − √18/12 ≈ 0.6464.

Appendix A.3.4. Gumbel Type II Distribution

Let X be a random variable with a Gumbel type II distribution (Gumbel 1958), with parameters α > 0 and b > 0. Note that when b = 1 this is the Fréchet (1927) random variable, also known as the inverse Weibull distribution (Khan et al. 2008).
Its functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) are the following:
$$ F(x) = \begin{cases} 0, & x\le 0 \\ e^{-b\,x^{-\alpha}}, & x>0;\ \alpha,b>0 \end{cases} $$
$$ f(x) = \alpha b\,x^{-\alpha-1}\,e^{-b\,x^{-\alpha}}, \qquad x>0 $$
$$ e(x) = \frac{x\cdot\alpha b\,x^{-\alpha-1}\,e^{-b\,x^{-\alpha}}}{e^{-b\,x^{-\alpha}}} = \alpha b\,x^{-\alpha}, \qquad x>0 $$
The elasticity of the distribution decreases from perfect elasticity and tends to perfect inelasticity. The change in elasticity occurs at the abscissa x = (αb)^{1/α}, where it goes from elastic to inelastic.
Considering elasticity as a function of a random variable, its probability distribution is:
$$ F_e(y) = P\left(e(x)\le y\right) = P\left(\alpha b\,x^{-\alpha}\le y\right) = P\!\left(x\ge\left(\frac{\alpha b}{y}\right)^{1/\alpha}\right) = 1-e^{-y/\alpha} $$
a distribution that is independent of the parameter b:
$$ F_e(y) = \begin{cases} 0, & y\le 0 \\ 1-e^{-y/\alpha}, & y>0 \end{cases} $$
Consequently, the probability that the model provides elastic results is P(e(x) > 1) = e^{−1/α}. Note that, given the independence of F_e(y) from the parameter b, the distribution of the elasticity function of the Fréchet variable is the same as that of the Gumbel type II variable.
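The independence of F_e(y) from b can also be checked numerically. The sketch below (with the arbitrary choices α = 2 and b = 5) simulates the Gumbel type II variable by inverse transform:

```python
# The elasticity of the Gumbel type II model should follow F_e(y) = 1 - exp(-y/alpha).
import numpy as np

alpha, b = 2.0, 5.0
rng = np.random.default_rng(4)
u = rng.uniform(size=200_000)
x = (-np.log(u) / b) ** (-1 / alpha)      # inverse transform of F(x) = exp(-b x^(-alpha))
elas = alpha * b * x ** (-alpha)          # e(x) = alpha * b * x^(-alpha)

for y in (0.5, 1.0, 3.0):
    print(y, np.mean(elas <= y), 1 - np.exp(-y / alpha))
print("P(e>1):", np.mean(elas > 1), np.exp(-1 / alpha))
```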

Appendix A.3.5. Distribution V-3

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
$$ F(x) = \begin{cases} 0, & x<1 \\ \sqrt{\ln x}, & 1\le x\le e \\ 1, & x>e \end{cases} $$
$$ f(x) = \frac{1}{2x\sqrt{\ln x}}, \qquad 1<x\le e $$
$$ e(x) = \frac{1}{2\ln x}, \qquad 1<x\le e $$
The elasticity of the distribution, which takes values in [1/2, +∞[, is decreasing. Unit elasticity, expressing a change in elasticity, is reached at x = √e, going from elastic to inelastic.
Considering elasticity as a function of a random variable, its probability distribution is:
$$ F_e(y) = P\left(e(x)\le y\right) = P\!\left(\frac{1}{2\ln x}\le y\right) = P\!\left(x\ge e^{1/(2y)}\right) = 1-\frac{1}{\sqrt{2y}} $$
giving:
$$ F_e(y) = \begin{cases} 0, & y<\tfrac{1}{2} \\ 1-\frac{1}{\sqrt{2y}}, & y\ge\tfrac{1}{2} \end{cases} $$
and P(e(x) ≤ 1) = 1 − 1/√2 ≈ 0.2929.

Appendix A.3.6. Distribution VI-3

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
$$ F(x) = \begin{cases} 0, & x<a \\ \frac{x-a}{x}, & x\ge a;\ a>0 \end{cases} $$
$$ f(x) = \frac{a}{x^{2}}, \qquad x\ge a $$
$$ e(x) = \frac{x\cdot\frac{a}{x^{2}}}{\frac{x-a}{x}} = \frac{a}{x-a}, \qquad x>a $$
The elasticity of the distribution, which decreases from e(a⁺) = +∞ towards e(+∞) = 0, is decreasing. Unit elasticity, expressing a change in elasticity, is reached at x = 2a, where it goes from elastic to inelastic.
Considering elasticity as a function of a random variable, its probability distribution is:
$$ F_e(y) = P\left(e(x)\le y\right) = P\!\left(\frac{a}{x-a}\le y\right) = P\!\left(x\ge\frac{a(1+y)}{y}\right) = 1-\frac{\frac{a(1+y)}{y}-a}{\frac{a(1+y)}{y}} = \frac{y}{1+y} $$
a result that is independent of the parameter a. So:
$$ F_e(y) = \begin{cases} 0, & y<0 \\ \frac{y}{1+y}, & y\ge 0 \end{cases} $$
and P(e(x) > 1) = 1 − 1/2 = 1/2, so the distribution is neutral with respect to the possible values of its elasticity.

Appendix A.3.7. Distribution VII-3

Let X be the random variable with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
$$ F(x) = \begin{cases} 0, & x<1 \\ \frac{e\cdot\ln x}{x}, & 1\le x\le e \\ 1, & x>e \end{cases} $$
$$ f(x) = e\cdot\frac{1-\ln x}{x^{2}}, \qquad 1\le x\le e $$
$$ e(x) = \frac{1-\ln x}{\ln x}, \qquad 1<x\le e $$
The elasticity of the distribution, which decreases from e(x → 1⁺) = +∞ to e(x = e) = 0, is decreasing. Unit elasticity, expressing a change in elasticity, is reached at x = √e, where it goes from elastic to inelastic.
Considering elasticity as a function of a random variable, its probability distribution is:
$$ F_e(y) = P\left(e(x)\le y\right) = P\!\left(\frac{1-\ln x}{\ln x}\le y\right) = P\!\left(x\ge e^{1/(1+y)}\right) = 1-\frac{e\cdot\frac{1}{1+y}}{e^{1/(1+y)}} = 1-\frac{e^{y/(1+y)}}{1+y} $$
giving:
$$ F_e(y) = \begin{cases} 0, & y<0 \\ 1-\frac{e^{y/(1+y)}}{1+y}, & y\ge 0 \end{cases} $$
and P(e(x) > 1) = √e/2 ≈ 0.8244.

Appendix A.3.8. Standard Logistic Distribution

Let X be the logistic random variable (Balakrishnan 1992), with parameters M = 1 and N = 1 . Its distribution functions F ( x ) , density f ( x ) and elasticity e ( x ) are the following (Veres-Ferrer and Pavía 2012):
$$ F(x) = \frac{1}{1+e^{-x}}, \qquad x\in\mathbb{R} $$
$$ f(x) = \frac{e^{-x}}{\left(1+e^{-x}\right)^{2}}, \qquad x\in\mathbb{R} $$
$$ e(x) = \frac{|x|\,e^{-x}}{1+e^{-x}}, \qquad x\in\mathbb{R} $$
For negative values of X , elasticity is decreasing from perfect elasticity, reaching perfect inelasticity for x = 0 ; while in its positive values, the elasticity of the variable initially grows without exceeding unity, to decrease again towards perfect inelasticity as the value of X increases (Veres-Ferrer and Pavía 2017a).
Given the impossibility of analytically solving for the variable X in the expression of the elasticity function, the approximate calculation of the elasticity function, with M = 1 and N = 1, provides, for the positive values of x, a maximum elasticity of 0.278465 (lower than unity) at the abscissa x ≈ 1.2785. The change in elasticity from elastic to inelastic occurs at x ≈ −1.2785. Consequently, the probability that the distribution provides an elastic abscissa is:
$$ P(e(x)>1) \approx P(X\le -1.2785) = \frac{1}{1+e^{1.2785}} \approx 0.2178 $$
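The approximate values quoted above can be reproduced with standard numerical routines; the following SciPy-based sketch (an illustration, not the authors' code) locates the unit-elasticity abscissa and the positive-side maximum:

```python
# Numerical study of the standard logistic elasticity e(x) = |x| f(x)/F(x).
import numpy as np
from scipy.optimize import brentq, minimize_scalar
from scipy.stats import logistic

e = lambda x: np.abs(x) * logistic.pdf(x) / logistic.cdf(x)

x_unit = brentq(lambda x: e(x) - 1.0, -10, -0.1)              # change of elasticity (negative side)
res = minimize_scalar(lambda x: -e(x), bounds=(0.01, 10), method="bounded")

print(x_unit, logistic.cdf(x_unit))   # ~ -1.2785 and P(e > 1) ~ 0.2178
print(res.x, e(res.x))                # ~ 1.2785 and maximum elasticity ~ 0.2785
```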

Appendix A.3.9. Uniform Distribution

Let X be a Uniform U ( a , b ) , with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) (Veres-Ferrer and Pavía 2012):
$$ F(x) = \begin{cases} 0, & x\le a \\ \frac{x-a}{b-a}, & x\in(a,b),\ a<b \\ 1, & x\ge b \end{cases} $$
$$ f(x) = \frac{1}{b-a}, \qquad x\in(a,b) $$
$$ e(x) = \frac{|x|}{x-a}, \qquad x\in(a,b) $$
If a < 0 and b ≤ 0, the elasticity is decreasing, starting from perfect elasticity. If a < 0 and b > 0, the elasticity is decreasing, starting from perfect elasticity until reaching perfect inelasticity at the value 0 of the random variable, and increasing from that moment on, without reaching unit elasticity, up to its maximum value at the upper end of the domain of definition of the variable. If a = 0, the elasticity is unitary throughout the domain of definition. Finally, if a > 0, the distribution is elastic, with a decreasing elasticity from perfect elasticity to its minimum, which is reached at the upper end of the domain of definition of the variable.
To determine the probability distribution of the elasticity function when considering a random variable function, we must distinguish according to the sign of the parameters a and b .
A.3.9a. Uniform Distribution with a = 0
If a = 0 , elasticity is unitary throughout the domain of definition of the variable. Consequently, being a degenerate distribution:
$$ P(e(x)>1) = P(e(x)<1) = 0 \quad\text{and}\quad P(e(x)=1) = 1 $$
verifying that:
$$ F_e(y) = \begin{cases} 0, & y<1 \\ 1, & y\ge 1 \end{cases} $$
Specifically, with the transformation V = F_X(X), it is known that V ~ U(0, 1). Consequently, e_V(v) = 1, 0 ≤ v ≤ 1.
A.3.9b. Uniform Distribution with a , b > 0
If a, b > 0, elasticity is decreasing and takes values in e(x) ∈ [ b/(b − a), +∞ [. The distribution is always elastic, P(e(x) > 1) = 1. Its probability distribution, considered as a function of a random variable, is:
$$ F_e(y) = P\left(e(x)\le y\right) = P\!\left(\frac{x}{x-a}\le y\right) = P\!\left(x\ge\frac{ay}{y-1}\right) = 1-\frac{\frac{ay}{y-1}-a}{b-a} = \frac{y(b-a)-b}{(b-a)(y-1)} $$
giving:
$$ F_e(y) = \begin{cases} 0, & y<\frac{b}{b-a} \\ \frac{y(b-a)-b}{(b-a)(y-1)}, & y\ge\frac{b}{b-a} \end{cases} $$
A.3.9c1. Uniform Distribution with a < b ≤ 0
If a < b ≤ 0, elasticity is decreasing and takes values in e(x) ∈ [ −b/(b − a), +∞ [, with −b/(b − a) ≥ 0. Its probability distribution, considered as a function of a random variable, is:
$$ F_e(y) = P\left(e(x)\le y\right) = P\!\left(\frac{|x|}{x-a}\le y\right) = P\!\left(\frac{-x}{x-a}\le y\right) = P\!\left(x\ge\frac{ay}{y+1}\right) = \frac{b-\frac{ay}{y+1}}{b-a} = \frac{y(b-a)+b}{(b-a)(y+1)} $$
giving:
$$ F_e(y) = \begin{cases} 0, & y<\frac{-b}{b-a} \\ \frac{y(b-a)+b}{(b-a)(y+1)}, & y\ge\frac{-b}{b-a} \end{cases} $$
If −b/(b − a) > 1, the distribution is always elastic, so P(e(x) > 1) = 1.
A.3.9c2. Uniform Distribution with a < b ≤ 0 and −b/(b − a) < 1
If −b/(b − a) < 1, which implies that b > a/2, there is a change in elasticity at x = a/2, going from elastic to inelastic. In this case:
$$ P(e(x)>1) = 1-F_e(1) = 1-\frac{2b-a}{2(b-a)} = \frac{-a}{2(b-a)} $$
A.3.9d. Uniform Distribution with a < 0 and   b > 0
If a < 0 and b > 0, there is a change in the elasticity, which occurs at negative values of X. For its positive values, elasticity is increasing from 0 and is bounded by b/(b − a) < 1. Elasticity takes values in e(x) ∈ [0, +∞[ and reaches its unit value at the abscissa x = a/2.
To obtain the distribution of elasticity, considered as a function of a random variable, it is necessary to distinguish three situations:
Situation 1. If b/(b − a) ≤ y < +∞, only negative values of X are restricted, and then:
$$ F_e(y) = P\left(e(x)\le y\right) = P\!\left(\frac{|x|}{x-a}\le y\right) = P\!\left(x\ge\frac{ay}{y+1}\right) = \frac{b-\frac{ay}{y+1}}{b-a} = \frac{b+y(b-a)}{(b-a)(y+1)} $$
verifying that F_e(+∞) = 1 and F_e(b/(b − a)) = 2b/(2b − a).
The following situation, in which 0 ≤ y < b/(b − a), involves negative values of X in x ∈ ]ab/(2b − a), 0[, as well as positive values in x ∈ [0, b[.
Situation 2. If 0 ≤ y < b/(b − a) and x < 0:
$$ P\!\left(\frac{-x}{x-a}\le y,\; x<0\right) = P\!\left(\frac{ay}{y+1}\le x<0\right) = \frac{-ay}{(b-a)(y+1)} $$
Situation 3. If 0 ≤ y < b/(b − a) and x > 0:
$$ P\!\left(\frac{x}{x-a}\le y,\; x>0\right) = P\!\left(0<x\le\frac{-ay}{1-y}\right) = \frac{-ay}{(b-a)(1-y)} $$
Therefore, for 0 ≤ y < b/(b − a), adding both contributions gives:
$$ F_e(y) = \frac{-ay}{(b-a)(y+1)}+\frac{-ay}{(b-a)(1-y)} = \frac{-2ay}{(b-a)(1-y^{2})} $$
verifying that F_e(0) = 0 and F_e(b/(b − a)) = 2b/(2b − a). The final result gives the following:
$$ F_e(y) = \begin{cases} 0, & y<0 \\ \frac{-2ay}{(b-a)(1-y^{2})}, & 0\le y<\frac{b}{b-a} \\ \frac{b+y(b-a)}{(b-a)(1+y)}, & \frac{b}{b-a}\le y<+\infty \end{cases} $$
It is verified that P(e(x) > 1) = 1 − (2b − a)/(2(b − a)) = −a/(2(b − a)), since b/(b − a) < 1.
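A simulation sketch (a = −2 and b = 3 are arbitrary choices) can be used to check the two branches of F_e(y) and the value of P(e(x) > 1) just derived:

```python
# Check of the two-branch elasticity distribution of the Uniform(a, b) with a < 0 < b.
import numpy as np

a, b = -2.0, 3.0
rng = np.random.default_rng(5)
x = rng.uniform(a, b, size=300_000)
elas = np.abs(x) / (x - a)                         # e(x) = |x| / (x - a)

def Fe(y):
    if y < b / (b - a):                            # branch 0 <= y < b/(b-a)
        return -2 * a * y / ((b - a) * (1 - y**2))
    return (b + y * (b - a)) / ((b - a) * (1 + y)) # branch y >= b/(b-a)

for y in (0.3, 0.6, 1.0, 2.0):
    print(y, np.mean(elas <= y), Fe(y))
print("P(e>1):", np.mean(elas > 1), -a / (2 * (b - a)))
```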

Appendix A.3.10. Log-Uniform Distribution

Let X be the random variable with reciprocal distribution or log-uniform distribution (Hamming 1970), with the following functions of distribution F ( x ) , density f ( x ) and elasticity e ( x ) :
$$ F(x) = \begin{cases} 0, & x<a \\ \frac{\ln x-\ln a}{\ln b-\ln a}, & 0<a\le x\le b \\ 1, & x>b \end{cases} $$
$$ f(x) = \frac{1}{x\,(\ln b-\ln a)}, \qquad 0<a\le x\le b $$
$$ e(x) = \frac{x\cdot\frac{1}{x\,(\ln b-\ln a)}}{\frac{\ln x-\ln a}{\ln b-\ln a}} = \frac{1}{\ln x-\ln a}, \qquad 0<a<x\le b $$
The elasticity function is decreasing and takes values in [ 1/(ln b − ln a), +∞ [.
Considering elasticity as a function of a random variable, its probability distribution is:
$$ F_e(y) = P\left(e(x)\le y\right) = P\!\left(\frac{1}{\ln(x/a)}\le y\right) = P\!\left(\ln\frac{x}{a}\ge\frac{1}{y}\right) = P\!\left(x\ge a\,e^{1/y}\right) = 1-\frac{1}{y\,(\ln b-\ln a)} $$
giving:
$$ F_e(y) = \begin{cases} 0, & y<\frac{1}{\ln(b/a)} \\ 1-\frac{1}{y\,(\ln b-\ln a)}, & y\ge\frac{1}{\ln(b/a)} \end{cases} $$
A.3.10a. Log-Uniform Distribution with b/a ≤ e
If b/a ≤ e, the distribution is elastic throughout the domain of definition of the variable, i.e., e(x) ≥ 1 ∀x ∈ [a, b], decreasing from perfect elasticity to its minimum value 1/(ln b − ln a). Therefore, P(e(x) > 1) = 1.
A.3.10b. Log-Uniform Distribution with b/a > e
If b/a > e, there is a change in elasticity at x = a·e. The distribution is elastic and decreasing in [a, a·e] and inelastic in ]a·e, b], reaching its minimum elasticity 1/(ln b − ln a) at the upper extreme of the interval of definition of X. In addition, considering the distribution of the elasticity function, it is verified that P(e(x) ≤ 1) = F_e(1) = (ln b − ln a − 1)/(ln b − ln a).

Appendix A.4. Models without Analytical Expression of Elasticity

Appendix A.4.1. Standardised Normal Distribution

Despite the impossibility of analytically solving for the variable in the expression of the elasticity function, we can study its properties numerically. The approximate calculation of the elasticity of the standardised normal distribution shows, for negative values of Z, a decreasing elasticity from perfect elasticity to perfect inelasticity at z = 0, and an increasing elasticity from that moment up to a maximum, which is reached at z ≈ 0.8401 (with an elasticity value of 0.29453, less than unity), decreasing from that moment asymptotically towards perfect inelasticity. The change in elasticity, from elastic to inelastic, occurs at the abscissa z ≈ −0.7518. Hence, the probability that the Standard Normal provides an elastic abscissa is P(e(x) > 1) ≈ P(Z ≤ −0.7518) ≈ 0.2261.
For any Normal X = N(μ, σ), using Proposition 9 it is verified that:
$$ e_{N(\mu,\sigma)}\left(x=\mu-0.7518\,\sigma\right) = \left|\frac{\mu-0.7518\,\sigma}{0.7518\,\sigma}\right|\cdot e_{Z}(z=-0.7518) = \left|\frac{\mu-0.7518\,\sigma}{0.7518\,\sigma}\right| $$
From which, if μ = 0, this gives e_{N(0,σ)}(x = −0.7518σ) = e_Z(z = −0.7518) = 1, so at x = −0.7518σ the variable X ~ N(0, σ) reaches unit elasticity.
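The quantities used above (z ≈ −0.7518, the maximum at z ≈ 0.8401 and P(e(x) > 1) ≈ 0.2261) can be reproduced numerically, for example with the SciPy-based sketch below (an illustration, not the authors' code):

```python
# Numerical study of the standard normal elasticity e(z) = |z| phi(z)/Phi(z).
import numpy as np
from scipy.optimize import brentq, minimize_scalar
from scipy.stats import norm

e = lambda z: np.abs(z) * norm.pdf(z) / norm.cdf(z)

z_unit = brentq(lambda z: e(z) - 1.0, -8, -0.1)                              # ~ -0.7518
z_max = minimize_scalar(lambda z: -e(z), bounds=(0.01, 8), method="bounded").x

print(z_unit, norm.cdf(z_unit))   # ~ -0.7518, P(e > 1) ~ 0.2261
print(z_max, e(z_max))            # ~ 0.8401, maximum elasticity ~ 0.2945
```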

Appendix A.4.2. Weibull Distribution

Let X be a Weibull random variable. For this distribution, it is not possible to obtain the analytical expression of its elasticity function and, consequently, of its probability distribution. The approximate calculation of the elasticity function of a Weibull(α > 0, β > 0) confirms its decreasing trend towards perfect inelasticity, with values in e(x) ∈ ]0, α[.
Examples of the approximate calculation of elasticity illustrate different behaviours of the distribution. Thus, the Weibull of parameters α ≤ 1 and β = 1 is always inelastic, with decreasing elasticity tending to perfect inelasticity. So, P(e(x) > 1) = 0.
Furthermore, the Weibull of parameters α = 3 and β = 1 has a decreasing elasticity with values in the interval ]0, 3[, tending to perfect inelasticity. The unit elasticity is reached at the abscissa x ≈ 1.2394, so the distribution is elastic in ]0, 1.2394[ and inelastic in ]1.2394, +∞[; therefore, P(e(x) > 1) ≈ P(X ≤ 1.2394) ≈ 0.8510. Similarly, a Weibull of parameters α = 2 and β = 4 reaches unit elasticity at x ≈ 4.4836, being elastic in ]0, 4.4836[ and inelastic in ]4.4836, +∞[, so P(e(x) > 1) ≈ P(X ≤ 4.4836) ≈ 0.7153.
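For instance, the unit-elasticity abscissa and P(e(x) > 1) of the Weibull(α = 3, β = 1) example can be recovered numerically with the following sketch (which assumes β plays the role of the scale parameter of scipy.stats.weibull_min; the choice is immaterial here since β = 1):

```python
# Locate where the Weibull(3, 1) elasticity equals 1 and compute P(e(X) > 1).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import weibull_min

alpha, beta = 3.0, 1.0
e = lambda x: x * weibull_min.pdf(x, alpha, scale=beta) / weibull_min.cdf(x, alpha, scale=beta)

x_unit = brentq(lambda x: e(x) - 1.0, 0.1, 10.0)
print(x_unit, weibull_min.cdf(x_unit, alpha, scale=beta))   # ~ 1.2394 and ~ 0.8510
```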

Appendix A.4.3. Student’s t Distribution

Let t_n be Student's t-distribution with n degrees of freedom. Whatever the value of n, it presents a decreasing elasticity for its negative abscissae until reaching perfect inelasticity at x = 0; from that point, on the positive abscissae, the elasticity grows to a relative maximum of less than unity and then decreases, tending asymptotically to perfect inelasticity. Specifically, elasticity takes values e(x) ∈ ]0, n[.
For example, with one degree of freedom the Student's t-distribution is inelastic. It decreases in ]−∞, 0[, taking values e(x) ∈ ]0, 1[. Starting from x = 0, the elasticity increases up to the abscissa x ≈ 0.8019, where it takes the value e(x = 0.8019) ≈ 0.2172 < 1. Thereafter it decreases, tending asymptotically to perfect inelasticity. Consequently, P(e(x) > 1) = 0.
The elasticity of a Student's t-distribution with, for example, ten degrees of freedom (i) takes values in e(x) ∈ ]0, 10[, (ii) is elastic and decreasing in the interval x ∈ ]−∞, −0.8[, (iii) reaches unit elasticity at x ≈ −0.8 and (iv) is always inelastic thereafter. In the interval x ∈ ]−0.8, 0[ it decreases until the abscissa x = 0, where perfect inelasticity is reached. Thereafter, it grows in x ∈ ]0, 0.83415[, with e(x = 0.83415) ≈ 0.2845 < 1. After this relative maximum, the elasticity decreases asymptotically to perfect inelasticity. Consequently, P(e(x) > 1) ≈ P(X ≤ −0.8) ≈ 0.2212.
The Student's t-distribution with, for example, 50 degrees of freedom takes values e(x) ∈ ]0, 50[. Similarly to the Student's t-distribution with 10 degrees of freedom, it is elastic in the interval x ∈ ]−∞, −0.7608[, reaching unit elasticity at x ≈ −0.7608. Thereafter, it is always inelastic. It also increases in x ∈ ]0, 0.8387[, reaching a maximum value in this interval of e(x = 0.8387) ≈ 0.2925 < 1. From this relative maximum, the elasticity decreases asymptotically towards perfect inelasticity. Consequently, P(e(x) > 1) ≈ P(X ≤ −0.7608) ≈ 0.2252.

Appendix A.4.4. Fisher-Snedecor F Distribution

Let F_{n,m} be a Snedecor's or Fisher–Snedecor F-distribution; its elasticity function is decreasing for all its degrees of freedom, tending asymptotically to perfect inelasticity, the more rapidly the larger its degrees of freedom. Specifically, with n being the degrees of freedom of the numerator, the elasticity takes values in e(x) ∈ ]0, n/2[.
An F-distribution with one degree of freedom in its numerator, and for any value in the denominator, is always inelastic and decreasing, tending more rapidly to perfect inelasticity as the degrees of freedom of the denominator increase. Its elasticity takes values in the interval ]0, 0.5[ and P(e(x) > 1) = 0.
An F-distribution with one degree of freedom in the denominator, and for any value n in the numerator, is decreasing, elastic for the smallest values of x, and reaches unit elasticity before continuing to decrease asymptotically to perfect inelasticity, with greater rapidity the larger the degrees of freedom of the numerator. For example, the elasticity of X ~ F_{50,1} takes values in e(x) ∈ ]0, 25[, reaching unit elasticity at x ≈ 0.6758 and verifying that P(e(x) > 1) ≈ P(X ≤ 0.6758) ≈ 0.2295.
When both degrees of freedom exceed one there is also a change in elasticity. For example, for X ~ F_{75,15} the elasticity takes values in e(x) ∈ ]0, 37.5[, reaches unit elasticity at the abscissa x ≈ 1.3646 and, in this case, P(e(x) > 1) ≈ P(X ≤ 1.3646) ≈ 0.2560.

Appendix A.4.5. Chi-Squared Distribution

In the same way as the F-distribution, the elasticity of the random variable χ n 2 is decreasing for all its degrees of freedom, tending asymptotically to perfect inelasticity, progressively slower as its degrees of freedom increase. Furthermore, with n being the degrees of freedom, elasticity also takes values in e ( x ) ] 0 , n 2 [ .
The distributions χ n 2 when n = 1 ,   2 are always inelastic and decreasing. Its elasticities take values in the intervals ] 0 ,   0.5 [ and ] 0 ,   1 [ , respectively, and, in both cases, P ( e ( x ) > 1 ) = 0 .
When n > 2 the elasticities continue to be decreasing, being elastic for the values closest to the origin and inelastic for higher values. As the degrees of freedom increase, the approach to perfect inelasticity is slower. For example, χ 10 2 has unit elasticity at the abscissa x 12.6451 , so it is elastic e ( x ) > 1 in the interval ] 0 ,   12.6451 [ and inelastic e ( x ) < 1 in ] 12.6451 , + [ , verifying that P ( e ( x ) > 1 ) 0.7558 . Similarly, χ 25 2 has unit elasticity at the abscissa x 32.6518 , being elastic, e ( x ) > 1 , in the interval ] 0 ,   32.6518 [ and inelastic, e ( x ) < 1 , in ] 32.6518 , + [ , verifying that P ( e ( x ) > 1 ) 0.8600 .

Appendix A.4.6. Beta Distribution

Let X be a B e t a ( p , q ) variable. Its elasticity depends on the relationship between its parameters.
A.4.6a. Beta Distribution with p > 1 and q < 1
If p > 1 and q < 1 , elasticity is increasing with values in ] p , + [ , so it is elastic x [ 0 , 1 ] : P ( e ( x ) > 1 ) = 1 .
A.4.6b. Beta Distribution with p < 1 and q > 1
If p < 1 and q > 1 , elasticity is decreasing, taking values in ] 0 , p [ , being, therefore, inelastic x [ 0 , 1 ] : P ( e ( x ) > 1 ) = 0 .
A.4.6c. Beta Distribution with p ,   q > 1
If p ,   q > 1 , elasticity is decreasing, taking values in ] 0 ,   p [ , which presents a change in elasticity. For example, with X ~ B e t a ( 3 ,   2 ) , elasticity takes values in ] 0 ,   3 [ and in x 0.8889 its elasticity is unitary, e ( x ) = 1 , so in the interval x ] 0 ,   0.8889 [ the distribution is elastic, and when x ] 0.8889 , 1 [ it is inelastic. It is verified that P ( e ( x ) > 1 ) P ( X 0.8889 ) 0.93644 .
A.4.6d. Beta Distribution with p ,   q < 1
If p, q < 1, elasticity is increasing with values in ]p, +∞[, so this also presents a change in elasticity. For example, if X ~ Beta(0.8, 0.5), elasticity takes values in ]0.8, +∞[ and at x ≈ 0.5259 the elasticity is unitary, e(x) = 1, so in the interval x ∈ ]0, 0.5259[ the distribution is inelastic and for x ∈ ]0.5259, 1[ it is elastic. It is verified that P(e(x) > 1) ≈ P(X ≥ 0.5259) ≈ 0.6222.
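The Beta(3, 2) example of case (c) can be checked with a few lines of SciPy code (a hedged numerical sketch, not part of the paper):

```python
# Unit-elasticity abscissa and P(e(X) > 1) for the Beta(3, 2) example.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import beta as beta_dist

p, q = 3.0, 2.0
e = lambda x: x * beta_dist.pdf(x, p, q) / beta_dist.cdf(x, p, q)

x_unit = brentq(lambda x: e(x) - 1.0, 0.01, 0.99)
print(x_unit, beta_dist.cdf(x_unit, p, q))   # ~ 0.8889 and P(e > 1) ~ 0.9364
```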

Appendix A.4.7. Gamma Distribution

Let X be a Gamma(α, β) variable. Its elasticity is decreasing ∀x and takes values in the interval ]0, α[. The parameter β influences the speed with which the elasticity approaches perfect inelasticity: for equal values of α, the lower its value, the faster the elasticity approaches 0.
If α < 1, the distribution is inelastic ∀x, so P(e(x) > 1) = 0. If α > 1 there is a change in elasticity, being elastic at the values nearest to 0 and inelastic for higher values. For example, in a Gamma(10, 1) the elasticity is unitary at x ≈ 13.0931, so the distribution is elastic in x ∈ ]0, 13.0931[ and inelastic in x ∈ ]13.0931, +∞[, verifying that P(e(x) > 1) ≈ P(X ≤ 13.0931) ≈ 0.8402.
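Similarly, the Gamma(10, 1) example can be verified numerically (a sketch that assumes the second parameter is a scale parameter, which is immaterial here since it equals 1):

```python
# Unit-elasticity abscissa and P(e(X) > 1) for the Gamma(10, 1) example.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import gamma

alpha, scale = 10.0, 1.0
e = lambda x: x * gamma.pdf(x, alpha, scale=scale) / gamma.cdf(x, alpha, scale=scale)

x_unit = brentq(lambda x: e(x) - 1.0, 0.1, 100.0)
print(x_unit, gamma.cdf(x_unit, alpha, scale=scale))   # ~ 13.09 and ~ 0.84
```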

Appendix A.4.8. Standard Cauchy Distribution

Let X be the standard variable with a Cauchy distribution; then X is inelastic ∀x, and P(e(x) > 1) = 0. Elasticity is increasing in the interval x ∈ ]−∞, −0.8785[, with a relative maximum at x ≈ −0.8785, e(x = −0.8785) ≅ 0.1284. It then decreases for x ∈ ]−0.8785, 0[, starts to increase again in x ∈ ]0, 1.2505[, presenting its absolute maximum at x ≈ 1.2505, with e(x = 1.2505) ≅ 0.2172, and from this point on it decreases again towards perfect inelasticity.

References

  1. Anis, Mohammed Zafar, and Debsurya De. 2020. An Expository Note on Unit-Gompertz Distribution with Applications. Statistica 80: 469–90. [Google Scholar]
  2. Arab, Idir, and Paulo Eduardo Oliveira. 2019. Iterated failure rate monotonicity and ordering relations within Gamma and Weibull distributions. Probability in the Engineering and Informational Sciences 33: 64–80. [Google Scholar] [CrossRef] [Green Version]
  3. Auster, Sarah, and Christian Kellner. 2022. Robust bidding and revenue in descending price auctions. Journal of Economic Theory 199: 105072. [Google Scholar] [CrossRef]
  4. Balakrishnan, N., G. Barmalzan, and S. Kosari. 2017. On joint weak reversed hazard rate order under symmetric copulas. Mathematical Methods of Statistics 26: 311–18. [Google Scholar] [CrossRef]
  5. Balakrishnan, Narayanaswamy. 1992. Handbook of the Logistic Distribution. New York: Marcel Dekker. [Google Scholar]
  6. Behdani, Zahra, Gholam Reza Mohtashami Borzadaran, and Bahram Sadeghpour Gildeh. 2019. Connection of Generalized Failure Rate and Generalized Reversed Failure Rate with Inequality Curves. International Journal of Reliability, Quality and Safety Engineering 26: 1950006. [Google Scholar] [CrossRef]
  7. Belzunce, F., J. Candel, and J. M. Ruiz. 1995. Ordering of truncated distributions through concentration curves. Sankhyā A 57: 375–83. [Google Scholar]
  8. Belzunce, F., J. Candel, and J. M. Ruiz. 1998. Ordering and asymptotic properties of residual income distributions. Sankhyā B 60: 331–48. [Google Scholar]
  9. Block, Henry W., Thomas H. Savits, and Harshinder Singh. 1998. The reversed hazard rate function. Probability in the Engineering and Informational Sciences 12: 69–90. [Google Scholar] [CrossRef]
  10. Burr, Irving W. 1942. Cumulative frequency distribution. The Annals of Mathematical Statistics 13: 215–32. [Google Scholar] [CrossRef]
  11. Chandra, Nimai Kumar, and Dilip Roy. 2001. Some results on reverse hazard rate. Probability in the Engineering and Informational Sciences 15: 95–102. [Google Scholar] [CrossRef]
  12. Chechile, Richard A. 2011. Properties of reverse hazard functions. Journal of Mathematical Psychology 55: 203–22. [Google Scholar] [CrossRef]
  13. Desai, D., V. Mariappan, and M. Sakhardande. 2011. Nature of reversed hazard rate: An investigation. International Journal of Performability Engineering 7: 165–71. [Google Scholar]
  14. Esna-Ashari, Maryam, Mahdi Alimohammadi, and Erhard Cramer. 2020. Some new results on likelihood ratio ordering and aging properties of generalized order statistics. Communications in Statistics—Theory and Methods, 1–25. [Google Scholar] [CrossRef]
  15. Finkelstein, Maxim S. 2002. On the reversed hazard rate. Reliability Engineering & System Safety 78: 71–75. [Google Scholar]
  16. Fréchet, M. 1927. Sur la loi de probabilité de l’écart maximum. Annales De La Société Polonaise De Mathématique 6: 93. [Google Scholar]
  17. Gumbel, Emil Julius. 1958. Statistics of Extremes. New York: Columbia University Press. [Google Scholar]
  18. Gupta, Rameshwar D., and Asok K. Nanda. 2001. Some results on reversed hazard rate ordering. Communications in Statistics—Theory and Methods 30: 2447–57. [Google Scholar]
  19. Gupta, Rameshwar D., and Debasis Kundu. 1999. Generalized exponential distributions. The Australian & New Zealand Journal of Statistics 41: 173–88. [Google Scholar]
  20. Hamming, R. M. 1970. On the distribution of numbers. The Bell System Technical Journal 49: 1609–25. [Google Scholar] [CrossRef]
  21. Hazra, Nil Kamal, Mithu Rani Kuiti, Maxim Finkelstein, and Asok K. Nanda. 2017. On stochastic comparisons of maximum order statistics from the location-scale family of distributions. Journal of Multivariate Analysis 160: 31–41. [Google Scholar] [CrossRef]
  22. Kalbfleisch, J. D., and J. F. Lawless. 1991. Regression models for right truncated data with applications to AIDS incubation times and reporting lags. Statistica Sinica 1: 19–32. [Google Scholar]
  23. Khan, M. Shuaib, G. R. Pasha, and Ahmed Hesham Pasha. 2008. Theoretical Analysis of Inverse Weibull Distribution. Wseas Transactions on Mathematics 7: 30–38. [Google Scholar]
  24. Kotelo, Taoana Thomas. 2019. Mixture Failure Rate Modeling with Applications. Ph.D. dissertation, University of the Free State, Bloemfontein, South Africa. [Google Scholar]
  25. Kotz, Samuel, and Johan Rene Van Dorp. 2004. Beyond Beta: Other Continuous Families of Distributions with Bounded Support and Applications. London: World Scientific Publishing Co. [Google Scholar]
  26. Kundu, Chanchal, and Amit Ghosh. 2017. Inequalities involving expectations of selected functions in reliability theory to characterize distributions. Communications in Statistics—Theory and Methods 17: 8468–78. [Google Scholar] [CrossRef] [Green Version]
  27. Labatut, Gregorio, José Pozuelo, and Ernesto J. Veres-Ferrer. 2009. Modelización temporal de las ratios contables en la detección del fracaso empresarial en la PYME española. Revista Española de Financiación y Contabilidad XXXVIII: 423–47. [Google Scholar]
  28. Marshall, Albert W., and Ingram Olkin. 2007. Life Distributions: Structure of Nonparametric, Semiparametric, and Parametric Families. New York: Springer. [Google Scholar]
  29. Nanda, Phalguni, and Suchandan Kayal. 2019. Mean inactivity time of lower record values. Communications in Statistics—Theory and Methods 48: 5145–64. [Google Scholar] [CrossRef]
  30. Navarro, Jorge, Yolanda del Águila, Miguel A. Sordo, and Alfonso Suárez-Lorrens. 2014. Preservation of reliability classes under the formation of coherent systems. Applied Stochastic Models in Business and Industry 30: 444–54. [Google Scholar] [CrossRef]
  31. Oliveira, Paulo Eduardo, and Nuria Torrado. 2015. On proportional reversed failure rate class. Statistical Papers 56: 999–1013. [Google Scholar] [CrossRef]
  32. Pavía, Jose M., and Ernesto Veres-Ferrer. 2016. Is the cardholder an efficient alarm system to detect credit card incidents? International Journal of Consumer Studies 40: 229–34. [Google Scholar] [CrossRef]
  33. Pavía, Jose M., Ernesto J. Veres-Ferrer, and Gabriel Foix-Escura. 2012. Credit Card Incidents and Control Systems. International Journal of Information Management 32: 501–3. [Google Scholar] [CrossRef]
  34. Popović, Božidar V., Ali İ. Genç, and Filippo Domma. 2021. Generalized proportional reversed hazard rate distributions with application in medicine. Statistical Methods & Applications, 1–22. [Google Scholar] [CrossRef]
  35. Poursaeed, M. H. 2010. A note on the mean past and the mean residual life of a (n −k +1)-out-of-n system under multi monitoring. Statistical Papers 51: 409–19. [Google Scholar] [CrossRef]
  36. Shaked, Moshe, and J. George Shanthikumar. 2006. Stochastic Orders. New York: Springer. [Google Scholar]
  37. Singh, S. K., and G. S. Maddala. 1976. A function for size distributions of incomes. Econometrica 44: 963–70. [Google Scholar] [CrossRef]
  38. Szymkowiak, Magdalena. 2019. Measures of ageing tendency. Journal of Applied Probability 56: 358–83. [Google Scholar] [CrossRef]
  39. Topp, Chester W., and Fred C. Leone. 1955. A family of J-shaped frequency functions. Journal of the American Statistical Association 50: 209–19. [Google Scholar] [CrossRef]
  40. Torrado, Nuria, and Jorge Navarro. 2021. Ranking the extreme claim amounts in dependent individual risk models. Scandinavian Actuarial Journal 2021: 218–47. [Google Scholar] [CrossRef]
  41. Torrado, Nuria, and Paulo Eduardo Oliveira. 2013. On closure properties of risk aversion measures. Preprint 2013: 13–45. [Google Scholar]
  42. Veres-Ferrer, Ernesto Jesus, and Jose M. Pavía. 2012. La elasticidad: una nueva herramienta para caracterizar distribuciones de probabilidad. Rect@: Revista Electrónica de Comunicaciones y Trabajos de ASEPUMA 13: 145–58. [Google Scholar]
  43. Veres-Ferrer, Ernesto Jesus, and Jose M. Pavía. 2014. On the relationship between the reversed hazard rate and elasticity. Statistical Papers 55: 275–84. [Google Scholar] [CrossRef]
  44. Veres-Ferrer, Ernesto Jesus, and Jose M. Pavía. 2017a. Properties of the elasticity of a continuous random variable. A special look to its behaviour and speed of change. Communications in Statistics—Theory and Methods 46: 3054–69. [Google Scholar]
  45. Veres-Ferrer, Ernesto Jesus, and Jose M. Pavía. 2017b. The elasticity function of a discrete random variable and its properties. Communications in Statistics—Theory and Methods 46: 8631–46. [Google Scholar] [CrossRef]
  46. Veres-Ferrer, Ernesto Jesus, and Jose M. Pavía. 2018. El uso de la elasticidad para la determinación del cambio en la acumulación de probabilidad. Anales de Economía Aplicada 2018: 1469–89. [Google Scholar]
  47. Veres-Ferrer, Ernesto Jesus, and Jose M. Pavía. 2021. Elasticity as a measure for online determination of remission points in ongoing epidemics. Statistics in Medicine 40: 865–84. [Google Scholar] [CrossRef] [PubMed]
  48. Xie, M., O. Gaudoin, and C. Bracquemond. 2002. Redefining failure rate function for discrete distributions. International Journal of Reliability, Quality and Safety Engineering 9: 275–85. [Google Scholar] [CrossRef]
  49. Yan, Rongfang, and Tianqi Luo. 2018. On the optimal allocation of active redundancies in series system. Communications in Statistics—Theory and Methods 47: 2379–88. [Google Scholar] [CrossRef]
  50. Yang, Luyi, Zhongbin Wang, and Shiliang Cui. 2021. A model of queue scalping. Management Science 67: 6803–21. [Google Scholar] [CrossRef]
  51. Zhang, Li, and Rongfang Yan. 2021. Stochastic comparisons of series and parallel systems with dependent and heterogeneous Topp-Leone generated components. AIMS Mathematics 6: 2031–47. [Google Scholar] [CrossRef]
Table 1. Examples of perfectly elastic models, F e ( 1 ) = 0 .
Table 1. Examples of perfectly elastic models, F e ( 1 ) = 0 .
Model/Function of
Distribution ,   F X ( x )
Function of
Elasticity ,   e ( x )
Distribution of
Elasticity ,   F e ( y )
P ( e ( x ) 1 )
1.1
{ 0 , x 0 x 1 + 1 x 2 , 0 < x < 1 1 , x b
1 1 x 2 { 0 , y 1 y 1 y + 1 , y > 1 0
1.2
{ 0 , x < 0 a x 4 a x , 0 x 2 a , a > 0 1 , x > 2 a
4 4 a x Uniform (1, 2)
{ 0 , y < 1 y 1 , 1 y 2 1 , y > 2
0
1.3
{ 0 , x < 1 x · ln x e , 1 x e 1 , x > e
ln x + 1 ln x { 0 , y < 2 1 1 y 1 e 2 y y 1 , 2 y < + 0
1.4
{ 0 , x < 0 x α , 0 x 1 ,   α 1 1 , x > 1
α { 0 , y < α 1 , y α 0
1.5
{ 0 , x < 0 2 x 2 1 + x , 0 x 1 1 , x > 1
2 + x 1 + x { 0 , y < 3 2 2 y 2 + 9 y 9 y 1 , 3 2 y 2 1 , y > 2 0
Table 2. Examples of perfectly inelastic models, F e ( 1 ) = 1 .
Table 2. Examples of perfectly inelastic models, F e ( 1 ) = 1 .
Model/Function of
Distribution ,   F X ( x )
Function of
Elasticity ,   e ( x )
Distribution of
Elasticity ,   F e ( y )
P ( e ( x ) 1 )
2.1
Exponential (λ > 0)
λ x e λ x 1 -1
2.2
{ 0 , x < 0 x · e 1 x 2 , 2 0 x 1 1 , x > 1
1 x 2 { 0 , y < 0 1 1 y · e y 2 , 0 y 1 1 , y > 1 1
2.3
{ 0 , x < 0 x e x 1 , 0 x 1 1 , x > 1
1 x { 0 , y < 0 1 ( 1 y ) e y , 0 y 1 1 , y > 1 1
2.4
{ 0 , x < 0 2 a x 1 + a x , 0 x 1 a , a 1 1 , x > 1 a
1 2 ( 1 + a x ) Uniform ( 1 4 , 1 2 ) { 0 , y < 1 4 4 y 1 , 1 4 y 1 2 1 , y > 1 2 1
2.5
Absolute difference variable of two standard uniforms
{ 0 , x < 0 2 x x 2 , 0 x 1 1 , x > 1
(case specific to the Topp–Leone distribution, with υ = 1 )
2 2 x 2 x { 0 , y < 0 y 2 ( 2 y ) 2 , 0 y 1 1 , y > 1 1
2.6
{ 0 , x < 0 b x b x + a , x 0 ;   a , b > 0
a b x + a Uniform ( 0 ,   1 ) { 0 , y < 0 y , 0 y 1 1 , y > 1 1
2.7
{ 0 , x 0 b x b x + a , x > 0 ;   a , b > 0
a 2 ( b x + a ) { 0 , y 0 1 1 2 y , 0 < y 1 2 1 , y > 1 2 1
2.8
{ 0 , x < 0 2 x 1 + x , 0 x 1 1 , x > 1
2 + x 2 ( 1 + x ) { 0 , y < 3 4 18 y 8 y 2 9 2 y 1 , 3 4 y 1 1 , y > 1 1
Table 3. Examples of Models of mixed behaviour, 0 < F e ( 1 ) < 1 .
Table 3. Examples of Models of mixed behaviour, 0 < F e ( 1 ) < 1 .
Model/Function of
Distribution ,   F X ( x )
Function of
Elasticity ,   e ( x )
Distribution of
Elasticity ,   F e ( y )
P ( e ( x ) 1 )
3.1
Unit exponential, upper bound
{ e x , x 0 1 , x > 0
| x | Exponential ( λ = 1 )
{ 0 , y 0 1 e y , y > 0
0.6321
Elasticity is equal to 1
at x = −1
3.2
Pareto
{ 0 , x < x 0 1 ( x 0 x ) b , x x 0 > 0 ;   b > 0
b · x 0 b x b x 0 b { 0 , y 0 y b + y , y > 0 1 b + 1
Elasticity is equal to 1
at x = x 0 1 + b b
3.3
{ 0 , x < 0 a x 6 a x , 0 x 4 a , a > 0 1 , x > 4 a
6 + a x 2 · ( 6 a x ) { 0 , y < 1 2 24 y 2 6 12 , 1 2 y 5 2 1 , y > 5 2 0.3536
Elasticity is equal to 1
at x = 2 a
3.4
Gumbel type II
{ 0 , x 0 e b x α , x > 0 ;   α > 0
If b = 1 , it is the Fréchet variable, or Weibull inverse distribution
α b x α { 0 , y 0 1 e y α , y > 0 e 1 α
Elasticity is equal to 1
at x = ( α b ) 1 α
3.5
{ 0 , x < 1 ln x , 1 x e 1 , x > e
1 2 ln x { 0 , y < 1 2 1 1 2 y , y 1 2 0.2929
Elasticity is equal to 1
at x = √e
3.6
{ 0 , x < a x a x , x a ;   a > 0
a x a { 0 , y < 0 y 1 + y , y 0 0.5
Elasticity is equal to 1
at x = 2 a
3.7
{ 0 , x < 1 e · ln x x , 1 x e 1 , x > e
1 ln x ln x { 0 , y < 0 1 e y 1 + y 1 + y , y 0 0.1756
Elasticity is equal to 1
at x = √e
3.8
Logistic ( M = 1 , N = 1 )
1 1 + e x           x
| x | · e x 1 + e x   - 0.7822
Elasticity is equal to 1
at x ≈ −1.2785
Table 4. Examples of (3.9) Uniform distributions models.
Table 4. Examples of (3.9) Uniform distributions models.
Model/Function of
Distribution ,   F X ( x )
Function of
Elasticity ,   e ( x )
Distribution of
Elasticity ,   F e ( y )
P ( e ( x ) 1 )
(a) Uniform U ( 0 , b ) 1 { 0 , y < 1 1 , y 1
Degenerate distribution
1
(b) Uniform U ( a , b )
0 < a < b
x x a { 0 , y < b b a y · ( b a ) b ( b a ) · ( y 1 ) , y b b a 0
(c1) Uniform U ( a , b )
a < b 0 and b b a > 1
| x | x a { 0 , y < b b a y · ( b a ) + b ( b a ) · ( y + 1 ) , y b b a 0
(c2) Uniform U ( a , b )
a < b 0 and b b a < 1
| x | x a { 0 , y < b b a y · ( b a ) + b ( b a ) · ( y + 1 ) , y b b a 2 b a 2 ( b a )
Elasticity is equal to 1
at x = a 2
(d) Uniform U ( a , b )
a < 0     and     b > 0
| x | x a { 0 , y < 0 2 a y ( b a ) ( 1 y 2 ) , 0 y < b b a b + y ( b a ) ( b a ) ( 1 + y ) , b b a y < + 2 b a 2 ( b a )
Elasticity is equal to 1
at x = a 2
Table 5. Examples of (3.10) reciprocal distributions or log-uniform distributions models.
Table 5. Examples of (3.10) reciprocal distributions or log-uniform distributions models.
Model/Function of
Distribution ,   F X ( x )
Function of
Elasticity ,   e ( x )
Distribution of
Elasticity ,   F e ( y )
P ( e ( x ) 1 )
(a) Reciprocal distributions or log-uniform distributions, with
b a e { 0 , x < a ln x ln a ln b ln a , 0 < a x b 1 , x > b
1 ln x ln a { 0 , y < 1 ln a b 1 1 y · ( ln b ln a ) , y 1 ln a b 0
(b) Reciprocal distributions or log-uniform distributions, with
b a e { 0 , x < a ln x ln a ln b ln a , 0 < a x b 1 , x > b
1 ln x ln a { 0 , y < 1 ln a b 1 1 y · ( ln b ln a ) , y 1 ln a b ln b ln a 1 ln b ln a
Elasticity is equal to 1
at x = e · a
Table 6. Examples of models without analytical expression for the elasticity.
Table 6. Examples of models without analytical expression for the elasticity.
Model/Function of
Distribution ,   F X ( x )
Function of
Elasticity ,   e ( x )
Distribution of
Elasticity ,   F e ( y )
P ( e ( x ) 1 )
4.1.a
Standardised Normal Z
Decreasing in ]−∞, 0[ and ]0.8401, +∞[
Increasing in ]0, 0.8401[
- P(e(x) ≤ 1) ≈ 0.7739.
Elasticity is equal to 1
at z ≈ −0.7518
4.1.b
Normal ( μ , σ )
For example, in
x = μ 0.7518 · σ e N ( μ , σ ) = | μ 0.7518 · σ 0.7518 · σ |

-
If X = N ( 0 , σ ) ,
P ( e ( x ) 1 ) 0.7739
and in x = 0.7518 · σ , elasticity is unitary
4.2
Weibull ( α > 0 , β > 0 )
Decreasing
e ( x ) ] 0 , α [
a) Inelastic if α 1 and β = 1
b) If α = 3 and β = 1 , elastic in ] 0 , 1.2394 [
and inelastic in
] 1.2394 , + [
c) α = 2 and β = 4 , elastic in ] 0 , 4.4836 [
and inelastic in
] 4.4836 + [
-(a) If α 1 and β = 1 , inelastic:
P ( e ( x ) 1 ) = 1
(b) If α = 3 and β = 1 , preferably elastic. Elasticity is equal to 1 in x 1.2394 and P ( e ( x ) 1 ) 0.1490
(c) If α = 2 and β = 4 , preferably elastic. Elasticity is equal to 1 in x 4.4836 , and P ( e ( x ) 1 ) 0.2847
4.3
t n Student
e ( x ) ] 0 , n [
Decreasing in ]−∞, 0[ and ]t₀, +∞[,
with e(t₀) < 1.
Increasing in ]0, t₀[
-(a) If n = 1 , inelastic, P ( e ( x ) 1 ) = 1 and e ( t 0 = 0.8019 ) 0.2172 < 1
(b) If n = 10, preferably inelastic. Elasticity is equal to 1 at t = −0.8, e(t₀ = 0.83415) ≈ 0.2845 < 1 and P(e(x) ≤ 1) ≈ 0.7788
(c) If n = 50, preferably inelastic. Elasticity is equal to 1 at t = −0.7608, e(t₀ = 0.8387) ≈ 0.2925 < 1 and P(e(x) ≤ 1) ≈ 0.7748
4.4
Snedecor’s F n , m
e ( x ) ] 0 , n 2 [
Decreasing
-(a) If n = 1 , inelastic ∀m,
P ( e ( x ) 1 ) = 1
(b) If   m = 1 , there is a change ∀n, preferably being inelastic. For example, with n = 50 , the elasticity is equal to 1 in x 0.6758 , and P ( e ( x ) 1 ) 0.7705
(c) If n , m > 1 , preferably inelastic. There is a change in elasticity. For example, if n = 75 and m = 15 , the elasticity is equal to 1 in x 1.3646 and P ( e ( x ) 1 ) 0.7440
4.5
χ n 2
e ( x ) ] 0 ,   n 2 [
Decreasing
-(a) If n = 1 ,   2 , inelastic,
P ( e ( x ) 1 ) = 1 .
(b) If n > 2 , there is a change in elasticity. For example, if n = 10 , preferably elastic. The elasticity is equal to 1 in x 12.6451 and P ( e ( x ) 1 ) 0.2442
(c) If n = 25 , preferably elastic. The elasticity is equal to 1 in x 32.6518   and P ( e ( x ) 1 ) 0.1400
4.6
B e t a ( p , q )
(a) If p > 1 and q < 1 , increasing, with values in e ( x ) ] p , + [
(b) If p < 1 and q > 1 , decreasing, with values in e ( x ) ] p , 0 [
(c) If   p , q > 1 , decreasing, with values in e ( x ) ] p , 0 [
(d) If p , q < 1 , increasing, with values in e ( x ) ] p , + [
-(a) P ( e ( x ) 1 ) = 0 , elastic
(b) P ( e ( x ) 1 ) = 1 , inelastic
(c) There is a change in elasticity. For example, for X ~ B e t a ( 3 ,   2 ) , the elasticity is equal to 1 in x 0.8889   and P ( e ( x ) 1 ) 0.0636
(d) There is a change in elasticity. For example, for X ~ B e t a ( .8 ,   .5 ) , the elasticity is equal to 1 in x 0.5259 and P ( e ( x ) 1 ) 0.3778
4.7
Gamma ( α ,   β )
e ( x ) ] α , 0 [
Decreasing
-(a) If α < 1, inelastic, P(e(x) ≤ 1) = 1.
(b) If α > 1 there is a change in elasticity. For example, for X ~ Gamma(10, 1), the elasticity is equal to 1 at x ≈ 13.0931, and P(e(x) ≤ 1) ≈ 0.1598.
4.8
Cauchy standard
Inelastic, increasing in ] , 0.8785 [ ] 0 , 1.2505 [ and decreasing in the rest - P ( e ( x ) 1 ) = 1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
