Article

Order Properties Concerning Tsallis Residual Entropy

by Răzvan-Cornel Sfetcu 1,* and Vasile Preda 2,3,1
1 Faculty of Mathematics and Computer Science, University of Bucharest, Str. Academiei 14, 010014 Bucharest, Romania
2 “Gheorghe Mihoc-Caius Iacob” Institute of Mathematical Statistics and Applied Mathematics, Calea 13 Septembrie 13, 050711 Bucharest, Romania
3 “Costin C. Kiriţescu” National Institute of Economic Research, Calea 13 Septembrie 13, 050711 Bucharest, Romania
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(3), 417; https://doi.org/10.3390/math12030417
Submission received: 27 November 2023 / Revised: 19 January 2024 / Accepted: 24 January 2024 / Published: 27 January 2024
(This article belongs to the Special Issue Recent Trends in Convex Analysis and Mathematical Inequalities)

Abstract: With the help of Tsallis residual entropy, we introduce the Tsallis quantile entropy order between two random variables. We give necessary and sufficient conditions for this order, study its closure and reversed closure properties under parallel and series operations, and show that it is preserved in the proportional hazard rate model, the proportional reversed hazard rate model, the proportional odds model and the record values model.
MSC:
60E15; 60K10; 62B10; 62N05; 90B25; 94A17

1. Introduction

The concept of entropy, defined mathematically by Shannon in [1], measures the uncertainty of a physical system and has applications in many scientific and technological areas such as physics, probability theory, statistics, communication theory and economics. The notion emerged from thermodynamics and statistical mechanics. In communication theory, a data transmission system has three elements: a source of data, a communication channel and a receiver. Based on the signal received through the channel, Shannon sought to identify what sort of data had been generated, and many methods for encoding, compressing and transmitting messages were considered. Shannon's source coding theorem, also known as Shannon's first theorem, establishes error-free encoding; this result was later extended to noisy channels in Shannon's noisy-channel coding theorem. In recent years, Shannon entropy has been studied intensively and many generalizations have appeared (Tsallis entropy, Rényi entropy, Varma entropy, Kaniadakis entropy, relative entropy, weighted entropy, cumulative entropy, etc.).
In [2], Tsallis replaced the classical logarithm appearing in Shannon entropy with another formula, defining in this way what we today call Tsallis entropy. This new entropy has many applications, especially in physics; more precisely: superstatistics (see [3]), spectral statistics (see [4]), earthquakes (see [5,6,7]), stock exchanges (see [8,9]), plasma (see [10]), income distribution (see [11]), non-coding human DNA (see [12]), the internet (see [13]) and statistical mechanics (see [2,14]). For more information about Tsallis entropy, we recommend reading [15].
Among the applications of other entropies (Rényi entropy, Varma entropy, Kaniadakis entropy, relative entropy, weighted entropy, etc.), we can list the following: Markov chains (see [16,17,18]), model selection (see [19,20]), combinatorics (see [21,22]), finance (see [23,24,25]), Lie symmetries (see [26,27]), and machine learning (see [28,29]).
There are several papers in which the authors compare random variables from the point of view of residual entropies: for Shannon residual entropy, see [30,31,32]; for Rényi residual entropy, see [33,34]; for Varma residual entropy, see [35]; and for Awad-Varma residual entropy, see [36]. Other orders between random variables can be found in [37,38,39,40,41,42,43,44].
Rao et al. [45] introduced an alternative measure to Shannon entropy, known as the cumulative residual entropy (CRE), by considering the survival function instead of the probability density function. Because the survival function is more regular than the probability density function, CRE is considered more stable and enjoys better mathematical properties. Moreover, the distribution function exists even when the probability density function does not (see, e.g., the generalized lambda, power-Pareto and Govindarajulu distributions). Sati and Gupta [46] introduced a cumulative residual Tsallis entropy and extended it to its dynamic form based on the residual lifetime. Rajesh and Sunoj [47] introduced an alternative form of the cumulative residual Tsallis entropy and proved some results with applications in reliability. Toomaj and Atabay [48] elaborated further consequences of the alternative cumulative residual Tsallis entropy introduced by Rajesh and Sunoj [47], including stochastic ordering, expressions and bounds, and proposed a normalized version of the cumulative residual Tsallis entropy, which can be used as a dispersion measure in place of the coefficient of variation. Kumar [49] obtained characterization results based on the dynamic cumulative residual Tsallis entropy. In many realistic situations, uncertainty is not necessarily related to the future and can refer to the past as well. For instance, if at time t a system which is observed only at certain preassigned inspection times is found to be down, then the uncertainty of the system life relies on the past, i.e., on which instant in $(0,t)$ it failed. A wide variety of research is available on entropy measures and their applications to past lifetimes; for more detail, one can refer to Di Crescenzo and Longobardi [50,51], Sachlas and Papaioannou [52] and Di Crescenzo and Toomaj [53]. Studies on the cumulative Tsallis entropy for past lifetimes are available in Nair et al. [54], Calì et al. [55], Khammar and Jahanshahi [56], Sunoj et al. [57] and Alomani and Kayid [58]. Baratpour and Khammar [59] studied the Tsallis entropy of order statistics. The quantile-based approach has some advantages: it provides an alternative methodology for deriving the cumulative Tsallis entropy of past lifetimes, and it extends the domain of application of this entropy to many flexible quantile functions which serve as useful lifetime models yet possess no explicit distribution function.
The paper is organized as follows. After this Introduction, in Section 2 (Background and Notations) we present the main notions and notation used throughout the article. In Section 3 (Fundamental Results) we present the main theorem (Theorem 1), which is used in all of our results; in this section, we also prove that the dispersive order and the convex transform order imply the Tsallis quantile entropy order. In Section 4 (Closure and Reversed Closure Properties) we show the closure and reversed closure properties of the Tsallis quantile entropy order under parallel and series operations. In the last four sections, we show the preservation of the Tsallis quantile entropy order in some stochastic models: the proportional hazard rate model (Section 5), the proportional reversed hazard rate model (Section 6), the proportional odds model (Section 7) and the record values model (Section 8).

2. Background and Notations

Throughout this paper, we assume that all expectations are finite and all ratios and powers are well defined. For information on notions of probability theory, we recommend [60].
We consider a non-negative random variable X with an absolutely continuous cumulative distribution function $F_X$, survival function $\bar F_X \overset{\mathrm{def}}{=} 1-F_X$ and probability density function $f_X$ (X represents a living thing or the lifetime of a device).
The Shannon entropy of X is given by
$$H_X=\mathbb{E}_Z\big[-\log f_X(Z)\big],$$
where "log" is the natural logarithm function and Z is a non-negative random variable distributed identically to X.
Let $\alpha\in\mathbb{R}\setminus\{1\}$. The Tsallis logarithm is given via
$$\log_T(x)=\begin{cases}\dfrac{x^{\alpha-1}-1}{\alpha-1} & \text{if } x>0,\\ 0 & \text{if } x=0.\end{cases}$$
From this point onward, we assume that α > 0 .
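As an illustration (ours, not part of the original paper), the following short Python sketch evaluates the Tsallis logarithm for $x>0$ and checks numerically that it recovers the natural logarithm as $\alpha\to 1$, which is why Tsallis entropy generalizes Shannon entropy; the function name and sample values are our own choices.

```python
import numpy as np

def tsallis_log(x, alpha):
    # Tsallis logarithm for x > 0; by convention log_T(0) = 0.
    return (x**(alpha - 1) - 1) / (alpha - 1)

# As alpha -> 1, log_T converges to the natural logarithm:
for alpha in (0.9, 0.99, 1.001):
    print(alpha, tsallis_log(2.0, alpha))   # values approach log(2) ~ 0.6931
print(np.log(2.0))
```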
The Tsallis entropy of X is defined by
$$H_X^T=\mathbb{E}_Z\big[-\log_T f_X(Z)\big].$$
In this paper, we work with the Tsallis residual entropy, defined via
$$H_X^T(t)=\mathbb{E}_Z\left[\left.-\frac{1}{\bar F_X(t)}\log_T\frac{f_X(Z)}{\bar F_X(t)}\,\right|\,Z>t\right]\quad\text{for any } t\geq 0.$$
We recall that the quantile function of X is given by
$$Q_X(u)\overset{\mathrm{def}}{=}F_X^{-1}(u)=\inf\{x\in[0,\infty)\,|\,F_X(x)\geq u\}\quad\text{for any } u\in[0,1].$$
We have $F_X(Q_X(u))=u$ for any $u\in[0,1]$. Differentiating both sides of this equality with respect to u, we obtain $F_X'(Q_X(u))\,Q_X'(u)=1$ for any $u\in[0,1]$. With the notation $q_X(u)=Q_X'(u)$ for any $u\in[0,1]$, it follows that $q_X(u)\,f_X(Q_X(u))=1$ for any $u\in[0,1]$.
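For instance, for an exponential distribution the identity $q_X(u)\,f_X(Q_X(u))=1$ can be verified directly; the sketch below (our illustration, with an arbitrary rate parameter) checks it numerically.

```python
import numpy as np

lam = 2.0                          # rate of X ~ Exp(lam); illustrative choice
u = np.linspace(0.01, 0.99, 9)

Q = -np.log(1 - u) / lam           # quantile function Q_X(u)
q = 1 / (lam * (1 - u))            # quantile density q_X(u) = Q_X'(u)
f_at_Q = lam * np.exp(-lam * Q)    # f_X(Q_X(u)) = lam * (1 - u)

print(np.allclose(q * f_at_Q, 1.0))   # True: q_X(u) * f_X(Q_X(u)) = 1
```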
Let $\Psi_X^T(u)=H_X^T(Q_X(u))$ for any $u\in[0,1]$.
For any $u\in[0,1]$, we obtain
$$\Psi_X^T(u)=\mathbb{E}_Z\left[\left.-\frac{1}{1-u}\log_T\frac{f_X(Z)}{1-u}\,\right|\,Z>Q_X(u)\right]=\mathbb{E}_U\left[\left.-\frac{1}{1-u}\log_T\frac{f_X(Q_X(U))}{1-u}\,\right|\,u<U<1\right],$$
where U is a random variable uniformly distributed on $[0,1]$.
In this paper, we are concerned with comparing two absolutely continuous non-negative random variables from the point of view of Tsallis residual entropy. More precisely, if X and Y are absolutely continuous non-negative random variables, we compare $\Psi_X^T(u)$ and $\Psi_Y^T(u)$ for any $u\in[0,1]$.
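A crude Monte Carlo sketch of this comparison for two exponential random variables, using the representation of $\Psi_X^T(u)$ given above (our illustration; the function names, sample size and parameter values are our own choices, and the computation relies on the formula as reconstructed here):

```python
import numpy as np

rng = np.random.default_rng(0)

def tsallis_log(x, alpha):
    # Tsallis logarithm for x > 0
    return (x**(alpha - 1) - 1) / (alpha - 1)

def psi_T(f_at_Q, u, alpha, size=200_000):
    # Monte Carlo estimate of
    # Psi_X^T(u) = E_U[ -(1/(1-u)) * log_T( f_X(Q_X(U)) / (1-u) ) | u < U < 1 ]
    U = rng.uniform(u, 1.0, size=size)          # U conditioned on (u, 1)
    return np.mean(-tsallis_log(f_at_Q(U) / (1 - u), alpha) / (1 - u))

alpha, u = 1.5, 0.3
# For X ~ Exp(lam): f_X(Q_X(U)) = lam * (1 - U).
psi_X = psi_T(lambda U: 2.0 * (1 - U), u, alpha)   # X ~ Exp(2)
psi_Y = psi_T(lambda U: 1.0 * (1 - U), u, alpha)   # Y ~ Exp(1)
print(psi_X <= psi_Y)   # True: the more concentrated Exp(2) has smaller Psi^T
```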
In the proofs, we will make use of the lemma below.
Lemma 1
(see [33]). Let $g:[0,\infty)\to[0,\infty)$ be an increasing function and let $h:[0,1]\times[0,\infty)\to\mathbb{R}$ be such that
$$\mathbb{E}_U\big[h(u,U)\,\big|\,u<U<1\big]\geq 0\quad\text{for any } u\in[0,1].$$
Then
$$\mathbb{E}_U\big[h(u,U)\,g(U)\,\big|\,u<U<1\big]\geq 0\quad\text{for any } u\in[0,1].$$

3. Fundamental Results

Definition 1.
We say that X is smaller than Y in the Tsallis quantile entropy order (and denote this by $X\leq^T Y$) if $\Psi_X^T(u)\leq\Psi_Y^T(u)$ for any $u\in[0,1]$.
In recent years, stochastic orders and inequalities have been used intensively in many areas of probability and statistics, such as reliability theory, queuing theory, survival analysis, biology, economics, insurance, actuarial science, operations research and management science. The simplest way of comparing two distribution functions is to compare the associated means. Because this comparison is based on only two numbers (the means), it is sometimes not very informative; moreover, the means may not exist. In many applications, we have more detailed information about the two distribution functions than just their means. If we compare two distribution functions with the same mean (or that are centered about the same value), we can compare the dispersion of these distributions. The simplest way of doing this is to compare the associated standard deviations; but, again, this comparison depends on only two numbers (the standard deviations), which are at times not very informative, and, as above, the standard deviations may not exist. The concept of stochastic order plays a major role in the theory and practice of statistics. It generally refers to a set of relations that may hold between a pair of distributions of random variables. In reliability theory, stochastic orders comparing life distributions based on different characteristics are used to study aging properties, to develop bounds on reliability functions, to compare the performance of policies and systems and to derive new inference procedures. Many such orders are defined in terms of concepts based on distribution functions.
The theorem below is the main result of this paper.
Theorem 1.
The following assertions are equivalent:
1. $X\leq^T Y$.
2. $\mathbb{E}_Z\left[\left.\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\dfrac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\geq 0$ for any $t\geq 0$.
Proof.
From Definition 1, $X\leq^T Y$ if and only if
$$\mathbb{E}_U\left[\left.-\frac{1}{1-u}\log_T\frac{f_X(Q_X(U))}{1-u}\,\right|\,u<U<1\right]\leq\mathbb{E}_U\left[\left.-\frac{1}{1-u}\log_T\frac{f_Y(Q_Y(U))}{1-u}\,\right|\,u<U<1\right]\quad\text{for any } u\in[0,1].$$
If we substitute $Z=Q_X(U)$ in the preceding inequality, the following equivalences are valid for any $u\in[0,1]$:
$$X\leq^T Y\iff\mathbb{E}_Z\left[\left.\log_T\frac{f_X(Z)}{\bar F_X(F_X^{-1}(u))}\,\right|\,Z>F_X^{-1}(u)\right]\geq\mathbb{E}_Z\left[\left.\log_T\frac{f_Y(Q_Y(F_X(Z)))}{\bar F_X(F_X^{-1}(u))}\,\right|\,Z>F_X^{-1}(u)\right]$$
$$\iff\mathbb{E}_Z\left[\left.\log_T\frac{f_X(Z)}{\bar F_X(F_X^{-1}(u))}-\log_T\frac{f_Y(Q_Y(F_X(Z)))}{\bar F_X(F_X^{-1}(u))}\,\right|\,Z>F_X^{-1}(u)\right]\geq 0$$
$$\iff\mathbb{E}_Z\left[\left.\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>F_X^{-1}(u)\right]\geq 0,$$
where the last equivalence uses the identity $\log_T(a/c)-\log_T(b/c)=(b^{\alpha-1}/c^{\alpha-1})\log_T(a/b)$ together with the fact that $c=\bar F_X(F_X^{-1}(u))$ is a positive constant. In order to obtain the conclusion, it is sufficient to denote $t=F_X^{-1}(u)$. □
Definition 2
(see [61]). We say that:
1. X is smaller than Y in the dispersive order (and write $X\leq_{disp} Y$) if
$$f_X(x)\geq f_Y\big(F_Y^{-1}(F_X(x))\big)\quad\text{for any } x\geq 0.$$
2. X is smaller than Y in the convex transform order (and write $X\leq_c Y$) if the function
$$[0,\infty)\ni x\mapsto\frac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}\quad\text{is increasing.}$$
The dispersive order is a basic concept for comparing spread among probability distributions, with applications to order statistics, spacings and convolutions of independent random variables. The convex transform order is used to make precise comparisons between the skewness of probability distributions on the real line. From the point of view of the aging interpretation, this order can be seen as identifying aging rates in a way that also works when lifetimes do not start simultaneously (for more details concerning these two orders, the reader can consult [61]).
Theorem 2.
If $X\leq_{disp} Y$, then $X\leq^T Y$.
Proof.
Assume that $X\leq_{disp} Y$. Then $f_X(x)\geq f_Y\big(F_Y^{-1}(F_X(x))\big)$ for any $x\geq 0$; hence $\log_T\dfrac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}\geq 0$ for any $x\geq 0$, and the conclusion follows from Theorem 1. □
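For example (our illustration, not from the paper): with $X\sim\mathrm{Exp}(2)$ and $Y\sim\mathrm{Exp}(1)$ we have $F_Y^{-1}(F_X(x))=2x$ and $f_X(x)=2e^{-2x}\geq e^{-2x}=f_Y(2x)$, so $X\leq_{disp}Y$ and Theorem 2 gives $X\leq^T Y$. The sketch below checks the two pointwise inequalities behind this argument numerically.

```python
import numpy as np

def tsallis_log(x, alpha):
    return (x**(alpha - 1) - 1) / (alpha - 1)

lam_x, lam_y, alpha = 2.0, 1.0, 0.7     # X ~ Exp(2), Y ~ Exp(1); alpha > 0, alpha != 1
x = np.linspace(0.0, 5.0, 501)

y = (lam_x / lam_y) * x                 # F_Y^{-1}(F_X(x)) for exponentials
fx = lam_x * np.exp(-lam_x * x)         # f_X(x)
fy = lam_y * np.exp(-lam_y * y)         # f_Y(F_Y^{-1}(F_X(x)))

print(np.all(fx >= fy))                 # True: X <=_disp Y
# The integrand in assertion 2 of Theorem 1 is then non-negative pointwise:
print(np.all(fy**(alpha - 1) * tsallis_log(fx / fy, alpha) >= 0))   # True
```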
Theorem 3.
If $X\leq_c Y$ and $f_Y(0)\leq f_X(0)$, then $X\leq^T Y$.
Proof.
Assume that $X\leq_c Y$. Then the function $[0,\infty)\ni x\mapsto\dfrac{f_X(x)}{f_Y(F_Y^{-1}(F_X(x)))}$ is increasing; hence
$$\frac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}\geq\frac{f_X(0)}{f_Y(0)}\geq 1\quad\text{for any } x\geq 0,$$
so $\log_T\dfrac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}\geq 0$ for any $x\geq 0$. With Theorem 1, we obtain the conclusion. □

4. Closure and Reversed Closure Properties

We consider $X_1,\dots,X_n$ and $Y_1,\dots,Y_n$ to be independent and identically distributed (i.i.d.) copies of X and Y, respectively, and
$$X_{1:n}=\min\{X_1,\dots,X_n\},\quad X_{n:n}=\max\{X_1,\dots,X_n\},$$
$$Y_{1:n}=\min\{Y_1,\dots,Y_n\},\quad Y_{n:n}=\max\{Y_1,\dots,Y_n\}.$$
Theorem 4.
If $X\leq^T Y$, then $X_{n:n}\leq^T Y_{n:n}$.
Proof.
Because $X\leq^T Y$, we can determine with Theorem 1 that
$$\mathbb{E}_Z\left[\left.\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.\tag{1}$$
It can be seen that, for any $x\geq 0$,
$$F_{X_{n:n}}(x)=\big(F_X(x)\big)^n,$$
$$F_{Y_{n:n}}(x)=\big(F_Y(x)\big)^n,$$
$$f_{X_{n:n}}(x)=n\big(F_X(x)\big)^{n-1}f_X(x),$$
$$f_{Y_{n:n}}(x)=n\big(F_Y(x)\big)^{n-1}f_Y(x),$$
$$F_{Y_{n:n}}^{-1}\big(F_{X_{n:n}}(x)\big)=F_Y^{-1}\big(F_X(x)\big),$$
$$f_{Y_{n:n}}\big(F_{Y_{n:n}}^{-1}(F_{X_{n:n}}(x))\big)=n\big(F_X(x)\big)^{n-1}f_Y\big(F_Y^{-1}(F_X(x))\big)$$
and
$$\frac{f_{X_{n:n}}(x)}{f_{Y_{n:n}}\big(F_{Y_{n:n}}^{-1}(F_{X_{n:n}}(x))\big)}=\frac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}.$$
Then
$$\mathbb{E}_Z\left[\left.\Big(f_{Y_{n:n}}\big(F_{Y_{n:n}}^{-1}(F_{X_{n:n}}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X_{n:n}}(Z)}{f_{Y_{n:n}}\big(F_{Y_{n:n}}^{-1}(F_{X_{n:n}}(Z))\big)}\,\right|\,Z>t\right]$$
$$=\mathbb{E}_Z\left[\left.\Big(n\big(F_X(Z)\big)^{n-1}\Big)^{\alpha}\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\quad\text{for any } t\geq 0.$$
Because the function
$$[0,\infty)\ni x\mapsto\Big(n\big(F_X(x)\big)^{n-1}\Big)^{\alpha}\quad\text{is non-negative and increasing,}$$
it follows, via inequality (1) and Lemma 1, that
$$\mathbb{E}_Z\left[\left.\Big(f_{Y_{n:n}}\big(F_{Y_{n:n}}^{-1}(F_{X_{n:n}}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X_{n:n}}(Z)}{f_{Y_{n:n}}\big(F_{Y_{n:n}}^{-1}(F_{X_{n:n}}(Z))\big)}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.$$
The relationship $X_{n:n}\leq^T Y_{n:n}$ follows from Theorem 1. □
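The distributional identities used in this proof are easy to confirm by simulation; for instance (our illustration, with arbitrary parameters), the empirical CDF of the maximum of n i.i.d. exponentials matches $(F_X(t))^n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, t = 3, 1.0, 1.2

# Empirical P(X_{n:n} <= t) from 100,000 simulated maxima of n i.i.d. Exp(lam):
maxima = rng.exponential(1 / lam, size=(100_000, n)).max(axis=1)
print((maxima <= t).mean())

# Theoretical value F_{X_{n:n}}(t) = (F_X(t))^n used in the proof:
print((1 - np.exp(-lam * t))**n)
```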
Theorem 5.
If $X_{1:n}\leq^T Y_{1:n}$, then $X\leq^T Y$.
Proof.
Because $X_{1:n}\leq^T Y_{1:n}$, we have, by Theorem 1, that
$$\mathbb{E}_Z\left[\left.\Big(f_{Y_{1:n}}\big(F_{Y_{1:n}}^{-1}(F_{X_{1:n}}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X_{1:n}}(Z)}{f_{Y_{1:n}}\big(F_{Y_{1:n}}^{-1}(F_{X_{1:n}}(Z))\big)}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.\tag{2}$$
We can see that, for any $x\geq 0$,
$$\bar F_{X_{1:n}}(x)=\big(\bar F_X(x)\big)^n,$$
$$\bar F_{Y_{1:n}}(x)=\big(\bar F_Y(x)\big)^n,$$
$$f_{X_{1:n}}(x)=n\big(\bar F_X(x)\big)^{n-1}f_X(x),$$
$$f_{Y_{1:n}}(x)=n\big(\bar F_Y(x)\big)^{n-1}f_Y(x),$$
$$F_{Y_{1:n}}^{-1}\big(F_{X_{1:n}}(x)\big)=F_Y^{-1}\big(F_X(x)\big),$$
$$f_{Y_{1:n}}\big(F_{Y_{1:n}}^{-1}(F_{X_{1:n}}(x))\big)=n\big(\bar F_X(x)\big)^{n-1}f_Y\big(F_Y^{-1}(F_X(x))\big)$$
and
$$\frac{f_{X_{1:n}}(x)}{f_{Y_{1:n}}\big(F_{Y_{1:n}}^{-1}(F_{X_{1:n}}(x))\big)}=\frac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}.$$
Then
$$\mathbb{E}_Z\left[\left.\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]$$
$$=\mathbb{E}_Z\left[\left.\Big(n\big(\bar F_X(Z)\big)^{n-1}\Big)^{-\alpha}\Big(f_{Y_{1:n}}\big(F_{Y_{1:n}}^{-1}(F_{X_{1:n}}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X_{1:n}}(Z)}{f_{Y_{1:n}}\big(F_{Y_{1:n}}^{-1}(F_{X_{1:n}}(Z))\big)}\,\right|\,Z>t\right]\quad\text{for any } t\geq 0.$$
Because the function
$$[0,\infty)\ni x\mapsto\Big(n\big(\bar F_X(x)\big)^{n-1}\Big)^{-\alpha}\quad\text{is non-negative and increasing,}$$
it follows, via inequality (2) and Lemma 1, that
$$\mathbb{E}_Z\left[\left.\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.$$
By applying Theorem 1, we obtain that $X\leq^T Y$. □
The natural next step is to generalize the preceding two theorems from a fixed number n to a random variable N.
We consider $X_1,X_2,\dots$ and $Y_1,Y_2,\dots$ to be sequences of independent and identically distributed copies of X and Y, respectively. Let N be a positive integer-valued random variable with probability mass function $p_N(n)=P(N=n)$, $n=1,2,\dots$, independent of the $X_i$'s and the $Y_i$'s. Take
$$X_{1:N}=\min\{X_1,\dots,X_N\},\quad X_{N:N}=\max\{X_1,\dots,X_N\}$$
and
$$Y_{1:N}=\min\{Y_1,\dots,Y_N\},\quad Y_{N:N}=\max\{Y_1,\dots,Y_N\}.$$
Theorem 6.
If $X\leq^T Y$, then $X_{N:N}\leq^T Y_{N:N}$.
Proof.
Because $X\leq^T Y$, we can determine by Theorem 1 that
$$\mathbb{E}_Z\left[\left.\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.\tag{3}$$
One can see that, for any $x\geq 0$,
$$F_{X_{N:N}}(x)=\sum_{n=1}^{\infty}\big(F_X(x)\big)^n p_N(n),$$
$$F_{Y_{N:N}}(x)=\sum_{n=1}^{\infty}\big(F_Y(x)\big)^n p_N(n),$$
$$f_{X_{N:N}}(x)=\sum_{n=1}^{\infty}n\big(F_X(x)\big)^{n-1}p_N(n)\cdot f_X(x)$$
and
$$f_{Y_{N:N}}(x)=\sum_{n=1}^{\infty}n\big(F_Y(x)\big)^{n-1}p_N(n)\cdot f_Y(x).$$
It was proven in [61] that
$$F_{Y_{N:N}}^{-1}\big(F_{X_{N:N}}(x)\big)=F_Y^{-1}\big(F_X(x)\big)\quad\text{for any } x\geq 0.$$
Hence, for any $x\geq 0$,
$$f_{Y_{N:N}}\big(F_{Y_{N:N}}^{-1}(F_{X_{N:N}}(x))\big)=\sum_{n=1}^{\infty}n\big(F_X(x)\big)^{n-1}p_N(n)\cdot f_Y\big(F_Y^{-1}(F_X(x))\big)$$
and
$$\frac{f_{X_{N:N}}(x)}{f_{Y_{N:N}}\big(F_{Y_{N:N}}^{-1}(F_{X_{N:N}}(x))\big)}=\frac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}.$$
Then
$$\mathbb{E}_Z\left[\left.\Big(f_{Y_{N:N}}\big(F_{Y_{N:N}}^{-1}(F_{X_{N:N}}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X_{N:N}}(Z)}{f_{Y_{N:N}}\big(F_{Y_{N:N}}^{-1}(F_{X_{N:N}}(Z))\big)}\,\right|\,Z>t\right]$$
$$=\mathbb{E}_Z\left[\left.\left(\sum_{n=1}^{\infty}n\big(F_X(Z)\big)^{n-1}p_N(n)\right)^{\alpha}\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\quad\text{for any } t\geq 0.$$
Because the function
$$[0,\infty)\ni x\mapsto\left(\sum_{n=1}^{\infty}n\big(F_X(x)\big)^{n-1}p_N(n)\right)^{\alpha}\quad\text{is non-negative and increasing,}$$
it follows, via inequality (3) and Lemma 1, that
$$\mathbb{E}_Z\left[\left.\Big(f_{Y_{N:N}}\big(F_{Y_{N:N}}^{-1}(F_{X_{N:N}}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X_{N:N}}(Z)}{f_{Y_{N:N}}\big(F_{Y_{N:N}}^{-1}(F_{X_{N:N}}(Z))\big)}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.$$
The conclusion thus follows from Theorem 1. □
Theorem 7.
If $X_{1:N}\leq^T Y_{1:N}$, then $X\leq^T Y$.
Proof.
Because $X_{1:N}\leq^T Y_{1:N}$, we can determine by Theorem 1 that
$$\mathbb{E}_Z\left[\left.\Big(f_{Y_{1:N}}\big(F_{Y_{1:N}}^{-1}(F_{X_{1:N}}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X_{1:N}}(Z)}{f_{Y_{1:N}}\big(F_{Y_{1:N}}^{-1}(F_{X_{1:N}}(Z))\big)}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.\tag{4}$$
We can see that, for any $x\geq 0$,
$$\bar F_{X_{1:N}}(x)=\sum_{n=1}^{\infty}\big(\bar F_X(x)\big)^n p_N(n),$$
$$\bar F_{Y_{1:N}}(x)=\sum_{n=1}^{\infty}\big(\bar F_Y(x)\big)^n p_N(n),$$
$$f_{X_{1:N}}(x)=\sum_{n=1}^{\infty}n\big(\bar F_X(x)\big)^{n-1}p_N(n)\cdot f_X(x)$$
and
$$f_{Y_{1:N}}(x)=\sum_{n=1}^{\infty}n\big(\bar F_Y(x)\big)^{n-1}p_N(n)\cdot f_Y(x).$$
It was proven in [61] that
$$F_{Y_{1:N}}^{-1}\big(F_{X_{1:N}}(x)\big)=F_Y^{-1}\big(F_X(x)\big)\quad\text{for any } x\geq 0.$$
Hence, for any $x\geq 0$,
$$f_{Y_{1:N}}\big(F_{Y_{1:N}}^{-1}(F_{X_{1:N}}(x))\big)=\sum_{n=1}^{\infty}n\big(\bar F_X(x)\big)^{n-1}p_N(n)\cdot f_Y\big(F_Y^{-1}(F_X(x))\big)$$
and
$$\frac{f_{X_{1:N}}(x)}{f_{Y_{1:N}}\big(F_{Y_{1:N}}^{-1}(F_{X_{1:N}}(x))\big)}=\frac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}.$$
Then
$$\mathbb{E}_Z\left[\left.\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]$$
$$=\mathbb{E}_Z\left[\left.\left(\sum_{n=1}^{\infty}n\big(\bar F_X(Z)\big)^{n-1}p_N(n)\right)^{-\alpha}\Big(f_{Y_{1:N}}\big(F_{Y_{1:N}}^{-1}(F_{X_{1:N}}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X_{1:N}}(Z)}{f_{Y_{1:N}}\big(F_{Y_{1:N}}^{-1}(F_{X_{1:N}}(Z))\big)}\,\right|\,Z>t\right]\quad\text{for any } t\geq 0.$$
Because the function
$$[0,\infty)\ni x\mapsto\left(\sum_{n=1}^{\infty}n\big(\bar F_X(x)\big)^{n-1}p_N(n)\right)^{-\alpha}\quad\text{is non-negative and increasing,}$$
it follows, via inequality (4) and Lemma 1, that
$$\mathbb{E}_Z\left[\left.\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.$$
By Theorem 1, we conclude that $X\leq^T Y$. □

5. Preservation of Tsallis Quantile Entropy Order in the Proportional Hazard Rate Model

We consider the following proportional hazard rate model (see [61]): for θ > 0, let $X^{(\theta)}$ and $Y^{(\theta)}$ be two absolutely continuous non-negative random variables with survival functions $\big(\bar F_X\big)^{\theta}$ and $\big(\bar F_Y\big)^{\theta}$, respectively.
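For intuition (our example, not from the paper): if $X\sim\mathrm{Exp}(\lambda)$, then $(\bar F_X(x))^{\theta}=e^{-\theta\lambda x}$, so the proportional hazard rate model simply rescales the hazard rate and $X^{(\theta)}\sim\mathrm{Exp}(\theta\lambda)$. A quick numerical check, with illustrative parameter values of our own:

```python
import numpy as np

lam, theta = 1.5, 0.5            # illustrative rate and PHR parameter
x = np.linspace(0.0, 4.0, 401)

surv_X = np.exp(-lam * x)        # survival function of X ~ Exp(lam)
surv_X_theta = surv_X**theta     # PHR model: (F_bar_X(x))^theta

# The PHR transform of an exponential is again exponential, with rate theta*lam:
print(np.allclose(surv_X_theta, np.exp(-theta * lam * x)))   # True
```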
Theorem 8.
1. If $0<\theta\leq 1$ and $X\leq^T Y$, then $X^{(\theta)}\leq^T Y^{(\theta)}$.
2. If $\theta\geq 1$ and $X^{(\theta)}\leq^T Y^{(\theta)}$, then $X\leq^T Y$.
Proof.
For any $x\geq 0$, we can obtain:
$$\bar F_{X^{(\theta)}}(x)=\big(\bar F_X(x)\big)^{\theta},$$
$$\bar F_{Y^{(\theta)}}(x)=\big(\bar F_Y(x)\big)^{\theta},$$
$$f_{X^{(\theta)}}(x)=\theta\big(\bar F_X(x)\big)^{\theta-1}f_X(x),$$
$$f_{Y^{(\theta)}}(x)=\theta\big(\bar F_Y(x)\big)^{\theta-1}f_Y(x),$$
$$F_{Y^{(\theta)}}^{-1}\big(F_{X^{(\theta)}}(x)\big)=F_Y^{-1}\big(F_X(x)\big),$$
$$f_{Y^{(\theta)}}\big(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(x))\big)=\theta\big(\bar F_X(x)\big)^{\theta-1}f_Y\big(F_Y^{-1}(F_X(x))\big)$$
and
$$\frac{f_{X^{(\theta)}}(x)}{f_{Y^{(\theta)}}\big(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(x))\big)}=\frac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}.$$
Then:
$$\mathbb{E}_Z\left[\left.\Big(f_{Y^{(\theta)}}\big(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X^{(\theta)}}(Z)}{f_{Y^{(\theta)}}\big(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(Z))\big)}\,\right|\,Z>t\right]$$
$$=\mathbb{E}_Z\left[\left.\Big(\theta\big(\bar F_X(Z)\big)^{\theta-1}\Big)^{\alpha}\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\quad\text{for any } t\geq 0.$$
1. If $0<\theta\leq 1$ and $X\leq^T Y$, then the function
$$[0,\infty)\ni x\mapsto\Big(\theta\big(\bar F_X(x)\big)^{\theta-1}\Big)^{\alpha}\quad\text{is non-negative and increasing}$$
and
$$\mathbb{E}_Z\left[\left.\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.$$
Using Lemma 1, we can determine that $X^{(\theta)}\leq^T Y^{(\theta)}$.
2. If $\theta\geq 1$ and $X^{(\theta)}\leq^T Y^{(\theta)}$, then the function
$$[0,\infty)\ni x\mapsto\Big(\theta\big(\bar F_X(x)\big)^{\theta-1}\Big)^{-\alpha}\quad\text{is non-negative and increasing}$$
and
$$\mathbb{E}_Z\left[\left.\Big(f_{Y^{(\theta)}}\big(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X^{(\theta)}}(Z)}{f_{Y^{(\theta)}}\big(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(Z))\big)}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.$$
Using Lemma 1, we can determine that $X\leq^T Y$. □

6. Preservation of Tsallis Quantile Entropy Order in the Proportional Reversed Hazard Rate Model

We consider the following proportional reversed hazard rate model (see [61]): for θ > 0, let $X^{(\theta)}$ and $Y^{(\theta)}$ be two absolutely continuous non-negative random variables with distribution functions $\big(F_X\big)^{\theta}$ and $\big(F_Y\big)^{\theta}$, respectively.
Theorem 9.
1. If $\theta\geq 1$ and $X\leq^T Y$, then $X^{(\theta)}\leq^T Y^{(\theta)}$.
2. If $0<\theta\leq 1$ and $X^{(\theta)}\leq^T Y^{(\theta)}$, then $X\leq^T Y$.
Proof.
We can determine for any $x\geq 0$:
$$F_{X^{(\theta)}}(x)=\big(F_X(x)\big)^{\theta},$$
$$F_{Y^{(\theta)}}(x)=\big(F_Y(x)\big)^{\theta},$$
$$f_{X^{(\theta)}}(x)=\theta\big(F_X(x)\big)^{\theta-1}f_X(x),$$
$$f_{Y^{(\theta)}}(x)=\theta\big(F_Y(x)\big)^{\theta-1}f_Y(x),$$
$$F_{Y^{(\theta)}}^{-1}\big(F_{X^{(\theta)}}(x)\big)=F_Y^{-1}\big(F_X(x)\big),$$
$$f_{Y^{(\theta)}}\big(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(x))\big)=\theta\big(F_X(x)\big)^{\theta-1}f_Y\big(F_Y^{-1}(F_X(x))\big)$$
and
$$\frac{f_{X^{(\theta)}}(x)}{f_{Y^{(\theta)}}\big(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(x))\big)}=\frac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}.$$
Then:
$$\mathbb{E}_Z\left[\left.\Big(f_{Y^{(\theta)}}\big(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X^{(\theta)}}(Z)}{f_{Y^{(\theta)}}\big(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(Z))\big)}\,\right|\,Z>t\right]$$
$$=\mathbb{E}_Z\left[\left.\Big(\theta\big(F_X(Z)\big)^{\theta-1}\Big)^{\alpha}\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\quad\text{for any } t\geq 0.$$
1. If $\theta\geq 1$ and $X\leq^T Y$, then the function
$$[0,\infty)\ni x\mapsto\Big(\theta\big(F_X(x)\big)^{\theta-1}\Big)^{\alpha}\quad\text{is non-negative and increasing}$$
and
$$\mathbb{E}_Z\left[\left.\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.$$
Using Lemma 1, we can determine that $X^{(\theta)}\leq^T Y^{(\theta)}$.
2. If $0<\theta\leq 1$ and $X^{(\theta)}\leq^T Y^{(\theta)}$, then the function
$$[0,\infty)\ni x\mapsto\Big(\theta\big(F_X(x)\big)^{\theta-1}\Big)^{-\alpha}\quad\text{is non-negative and increasing}$$
and
$$\mathbb{E}_Z\left[\left.\Big(f_{Y^{(\theta)}}\big(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X^{(\theta)}}(Z)}{f_{Y^{(\theta)}}\big(F_{Y^{(\theta)}}^{-1}(F_{X^{(\theta)}}(Z))\big)}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.$$
Using Lemma 1, we can determine that $X\leq^T Y$. □

7. Preservation of Tsallis Quantile Entropy Order in the Proportional Odds Model

We work with the following proportional odds model (see [62]): for θ > 0, the proportional odds random variables $X_p$ and $Y_p$ are defined by the survival functions
$$\bar F_{X_p}(x)=\frac{\theta\,\bar F_X(x)}{1-(1-\theta)\bar F_X(x)}\quad\text{and}\quad\bar F_{Y_p}(x)=\frac{\theta\,\bar F_Y(x)}{1-(1-\theta)\bar F_Y(x)},$$
respectively, for any $x\geq 0$.
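As a sanity check (our illustration, with parameter values of our own), for $X\sim\mathrm{Exp}(1)$ and θ = 2 the transformed function $\bar F_{X_p}$ is indeed a survival function: it starts at 1, is non-increasing and tends to 0.

```python
import numpy as np

lam, theta = 1.0, 2.0
x = np.linspace(0.0, 8.0, 801)

sx = np.exp(-lam * x)                        # F_bar_X(x)
sxp = theta * sx / (1 - (1 - theta) * sx)    # F_bar_{X_p}(x), proportional odds transform

print(np.isclose(sxp[0], 1.0))     # starts at 1
print(np.all(np.diff(sxp) <= 0))   # non-increasing
print(sxp[-1] < 1e-3)              # tends to 0
```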
Theorem 10.
1. If $\theta\geq 1$ and $X\leq^T Y$, then $X_p\leq^T Y_p$.
2. If $0<\theta\leq 1$ and $X_p\leq^T Y_p$, then $X\leq^T Y$.
Proof.
For any $x\geq 0$ we have
$$\bar F_{X_p}(x)=\frac{\theta\,\bar F_X(x)}{1-(1-\theta)\bar F_X(x)},$$
$$\bar F_{Y_p}(x)=\frac{\theta\,\bar F_Y(x)}{1-(1-\theta)\bar F_Y(x)},$$
$$f_{X_p}(x)=\frac{\theta}{\big(1-(1-\theta)\bar F_X(x)\big)^2}\cdot f_X(x),$$
$$f_{Y_p}(x)=\frac{\theta}{\big(1-(1-\theta)\bar F_Y(x)\big)^2}\cdot f_Y(x),$$
$$F_{Y_p}^{-1}\big(F_{X_p}(x)\big)=F_Y^{-1}\big(F_X(x)\big),$$
$$f_{Y_p}\big(F_{Y_p}^{-1}(F_{X_p}(x))\big)=\frac{\theta}{\big(1-(1-\theta)\bar F_X(x)\big)^2}\cdot f_Y\big(F_Y^{-1}(F_X(x))\big)$$
and
$$\frac{f_{X_p}(x)}{f_{Y_p}\big(F_{Y_p}^{-1}(F_{X_p}(x))\big)}=\frac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}.$$
Then
$$\mathbb{E}_Z\left[\left.\Big(f_{Y_p}\big(F_{Y_p}^{-1}(F_{X_p}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X_p}(Z)}{f_{Y_p}\big(F_{Y_p}^{-1}(F_{X_p}(Z))\big)}\,\right|\,Z>t\right]$$
$$=\mathbb{E}_Z\left[\left.\left(\frac{\theta}{\big(1-(1-\theta)\bar F_X(Z)\big)^2}\right)^{\alpha}\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\quad\text{for any } t\geq 0.$$
1. Assume that $X\leq^T Y$ and $\theta\geq 1$. Then
$$\mathbb{E}_Z\left[\left.\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0$$
and the function
$$[0,\infty)\ni x\mapsto\left(\frac{\theta}{\big(1-(1-\theta)\bar F_X(x)\big)^2}\right)^{\alpha}\quad\text{is non-negative and increasing.}$$
Hence, by Lemma 1, we obtain $X_p\leq^T Y_p$.
2. Assume that $X_p\leq^T Y_p$ and $0<\theta\leq 1$. Then
$$\mathbb{E}_Z\left[\left.\Big(f_{Y_p}\big(F_{Y_p}^{-1}(F_{X_p}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{X_p}(Z)}{f_{Y_p}\big(F_{Y_p}^{-1}(F_{X_p}(Z))\big)}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0$$
and the function
$$[0,\infty)\ni x\mapsto\left(\frac{\theta}{\big(1-(1-\theta)\bar F_X(x)\big)^2}\right)^{-\alpha}\quad\text{is non-negative and increasing.}$$
Hence, by Lemma 1, we obtain $X\leq^T Y$. □

8. Preservation of Tsallis Quantile Entropy Order in the Record Values Model

Let $\{X_i\,|\,i\geq 1\}$ and $\{Y_i\,|\,i\geq 1\}$ be sequences of i.i.d. random variables distributed as X and Y, respectively, with survival functions $\bar F_X$ and $\bar F_Y$, respectively, and density functions $f_X$ and $f_Y$, respectively. We consider the nth record times $T_n^X$ and $T_n^Y$, defined via $T_1^X=1$ and $T_{n+1}^X=\min\{j>T_n^X\,|\,X_j>X_{T_n^X}\}$ for any $n\geq 1$, and, respectively, $T_1^Y=1$ and $T_{n+1}^Y=\min\{j>T_n^Y\,|\,Y_j>Y_{T_n^Y}\}$.
We denote $R_n^X\overset{\mathrm{def}}{=}X_{T_n^X}$ and $R_n^Y\overset{\mathrm{def}}{=}Y_{T_n^Y}$, respectively, and call them the nth record values (see [63]).
For any $x\geq 0$, we can obtain
$$\bar F_{R_n^X}(x)=\bar F_X(x)\sum_{j=0}^{n-1}\frac{\big(\Lambda_X(x)\big)^j}{j!}=\bar\Gamma_n\big(\Lambda_X(x)\big),$$
$$\bar F_{R_n^Y}(x)=\bar F_Y(x)\sum_{j=0}^{n-1}\frac{\big(\Lambda_Y(x)\big)^j}{j!}=\bar\Gamma_n\big(\Lambda_Y(x)\big),$$
$$f_{R_n^X}(x)=\frac{1}{\Gamma(n)}\Lambda_X^{n-1}(x)\,f_X(x)$$
and
$$f_{R_n^Y}(x)=\frac{1}{\Gamma(n)}\Lambda_Y^{n-1}(x)\,f_Y(x),$$
where $\bar\Gamma_n$ is the survival function of a Gamma random variable with shape parameter n and scale parameter 1, $\Lambda_X(x)=-\log\bar F_X(x)$ is the cumulative failure rate function of X and $\Lambda_Y(x)=-\log\bar F_Y(x)$ is the cumulative failure rate function of Y.
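For example (our illustration, with parameter values of our own), for $X\sim\mathrm{Exp}(\lambda)$ we have $\Lambda_X(x)=\lambda x$, and the record-value density $f_{R_n^X}$ above becomes exactly the Gamma(n, scale $1/\lambda$) density, which the following sketch confirms numerically:

```python
import numpy as np
from math import gamma
from scipy import stats

lam, n = 2.0, 3
x = np.linspace(1e-6, 10.0, 2001)

# f_{R_n^X}(x) = Lambda_X(x)^{n-1} f_X(x) / Gamma(n), with Lambda_X(x) = lam * x:
f_rec = (lam * x)**(n - 1) * lam * np.exp(-lam * x) / gamma(n)

# For exponential records this equals the Gamma(n, scale=1/lam) density:
print(np.allclose(f_rec, stats.gamma.pdf(x, a=n, scale=1 / lam)))   # True
```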
Theorem 11.
Let $m,n\in\mathbb{N}\overset{\mathrm{def}}{=}\{1,2,\dots\}$.
1. If $X\leq^T Y$, then $R_n^X\leq^T R_n^Y$.
2. If $n>m\geq 1$ and $R_m^X\leq^T R_m^Y$, then $R_n^X\leq^T R_n^Y$.
Proof.
1. If $X\leq^T Y$, then
$$\mathbb{E}_Z\left[\left.\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.$$
We have, for any $x\geq 0$,
$$F_{R_n^Y}^{-1}\big(F_{R_n^X}(x)\big)=F_Y^{-1}\big(F_X(x)\big),$$
$$f_{R_n^Y}\big(F_{R_n^Y}^{-1}(F_{R_n^X}(x))\big)=\frac{1}{\Gamma(n)}\Lambda_X^{n-1}(x)\,f_Y\big(F_Y^{-1}(F_X(x))\big)$$
and
$$\frac{f_{R_n^X}(x)}{f_{R_n^Y}\big(F_{R_n^Y}^{-1}(F_{R_n^X}(x))\big)}=\frac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}.$$
Then
$$\mathbb{E}_Z\left[\left.\Big(f_{R_n^Y}\big(F_{R_n^Y}^{-1}(F_{R_n^X}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{R_n^X}(Z)}{f_{R_n^Y}\big(F_{R_n^Y}^{-1}(F_{R_n^X}(Z))\big)}\,\right|\,Z>t\right]$$
$$=\mathbb{E}_Z\left[\left.\left(\frac{1}{\Gamma(n)}\Lambda_X^{n-1}(Z)\right)^{\alpha}\big(f_Y(F_Y^{-1}(F_X(Z)))\big)^{\alpha-1}\log_T\frac{f_X(Z)}{f_Y(F_Y^{-1}(F_X(Z)))}\,\right|\,Z>t\right]\quad\text{for any } t\geq 0.$$
Because the function
$$[0,\infty)\ni x\mapsto\left(\frac{1}{\Gamma(n)}\Lambda_X^{n-1}(x)\right)^{\alpha}\quad\text{is non-negative and increasing,}$$
we obtain via Lemma 1 that
$$\mathbb{E}_Z\left[\left.\Big(f_{R_n^Y}\big(F_{R_n^Y}^{-1}(F_{R_n^X}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{R_n^X}(Z)}{f_{R_n^Y}\big(F_{R_n^Y}^{-1}(F_{R_n^X}(Z))\big)}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0,$$
i.e., $R_n^X\leq^T R_n^Y$.
2. If $R_m^X\leq^T R_m^Y$, then
$$\mathbb{E}_Z\left[\left.\Big(f_{R_m^Y}\big(F_{R_m^Y}^{-1}(F_{R_m^X}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{R_m^X}(Z)}{f_{R_m^Y}\big(F_{R_m^Y}^{-1}(F_{R_m^X}(Z))\big)}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0.$$
For any $x\geq 0$, we can determine
$$F_{R_m^Y}^{-1}\big(F_{R_m^X}(x)\big)=F_{R_n^Y}^{-1}\big(F_{R_n^X}(x)\big)=F_Y^{-1}\big(F_X(x)\big),$$
$$\frac{f_{R_m^X}(x)}{f_{R_m^Y}\big(F_{R_m^Y}^{-1}(F_{R_m^X}(x))\big)}=\frac{f_{R_n^X}(x)}{f_{R_n^Y}\big(F_{R_n^Y}^{-1}(F_{R_n^X}(x))\big)}=\frac{f_X(x)}{f_Y\big(F_Y^{-1}(F_X(x))\big)}$$
and
$$f_{R_n^Y}\big(F_{R_n^Y}^{-1}(F_{R_n^X}(x))\big)=\frac{\Gamma(m)}{\Gamma(n)}\big(\Lambda_X(x)\big)^{n-m}f_{R_m^Y}\big(F_{R_m^Y}^{-1}(F_{R_m^X}(x))\big).$$
Then
$$\mathbb{E}_Z\left[\left.\Big(f_{R_n^Y}\big(F_{R_n^Y}^{-1}(F_{R_n^X}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{R_n^X}(Z)}{f_{R_n^Y}\big(F_{R_n^Y}^{-1}(F_{R_n^X}(Z))\big)}\,\right|\,Z>t\right]$$
$$=\mathbb{E}_Z\left[\left.\left(\frac{\Gamma(m)}{\Gamma(n)}\big(\Lambda_X(Z)\big)^{n-m}\right)^{\alpha}\Big(f_{R_m^Y}\big(F_{R_m^Y}^{-1}(F_{R_m^X}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{R_m^X}(Z)}{f_{R_m^Y}\big(F_{R_m^Y}^{-1}(F_{R_m^X}(Z))\big)}\,\right|\,Z>t\right]\quad\text{for any } t\geq 0.$$
Because the function
$$[0,\infty)\ni x\mapsto\left(\frac{\Gamma(m)}{\Gamma(n)}\big(\Lambda_X(x)\big)^{n-m}\right)^{\alpha}\quad\text{is non-negative and increasing}$$
and
$$\mathbb{E}_Z\left[\left.\Big(f_{R_m^Y}\big(F_{R_m^Y}^{-1}(F_{R_m^X}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{R_m^X}(Z)}{f_{R_m^Y}\big(F_{R_m^Y}^{-1}(F_{R_m^X}(Z))\big)}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0,$$
using Lemma 1, we obtain that
$$\mathbb{E}_Z\left[\left.\Big(f_{R_n^Y}\big(F_{R_n^Y}^{-1}(F_{R_n^X}(Z))\big)\Big)^{\alpha-1}\log_T\frac{f_{R_n^X}(Z)}{f_{R_n^Y}\big(F_{R_n^Y}^{-1}(F_{R_n^X}(Z))\big)}\,\right|\,Z>t\right]\geq 0\quad\text{for any } t\geq 0,$$
i.e., $R_n^X\leq^T R_n^Y$. □

9. Conclusions

We introduced the Tsallis quantile entropy order between two random variables, found necessary and sufficient conditions for it and proved closure and reversed closure properties of this order under parallel and series operations. We also showed that the Tsallis quantile entropy order is preserved in some stochastic models, namely the proportional hazard rate model, the proportional reversed hazard rate model, the proportional odds model and the record values model. In this way, we generalize results from other papers, the difference being that we work with Tsallis residual entropy instead of Shannon residual entropy (used in [30,31,32]), Rényi residual entropy (used in [33,34]), Varma residual entropy (used in [35]) or Awad-Varma residual entropy (used in [36]).

Author Contributions

Conceptualization, R.-C.S. and V.P.; methodology, R.-C.S. and V.P.; software, R.-C.S. and V.P.; validation, R.-C.S. and V.P.; formal analysis, R.-C.S. and V.P.; investigation, R.-C.S. and V.P.; writing—original draft, R.-C.S. and V.P.; writing—review and editing, R.-C.S. and V.P.; visualization, R.-C.S. and V.P.; supervision, R.-C.S. and V.P.; project administration, R.-C.S. and V.P. All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors are very much indebted to the anonymous referees and to the editors for their most valuable comments and suggestions which improved the quality of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  3. Beck, C.; Cohen, E.G.D. Superstatistics. Phys. A 2003, 322, 267–275. [Google Scholar] [CrossRef]
  4. Tsekouras, G.A.; Tsallis, C. Generalized entropy arising from a distribution of q indices. Phys. Rev. E 2005, 71, 046144. [Google Scholar] [CrossRef] [PubMed]
  5. Abe, S.; Suzuki, N. Law for the distance between successive earthquakes. J. Geophys. Res. 2003, 108, 2113. [Google Scholar] [CrossRef]
  6. Darooneh, A.H.; Dadashinia, C. Analysis of the spatial and temporal distributions between successive earthquakes: Nonextensive statistical mechanics viewpoint. Phys. A 2008, 387, 3647–3654. [Google Scholar] [CrossRef]
  7. Hasumi, T. Hypocenter interval statistics between successive earthquakes in the two-dimensional Burridge-Knopoff model. Phys. A 2009, 388, 477–482. [Google Scholar] [CrossRef]
  8. Jiang, Z.Q.; Chen, W.; Zhou, W.X. Scaling in the distribution of intertrade durations of Chinese stocks. Phys. A 2008, 387, 5818–5825. [Google Scholar] [CrossRef]
  9. Kaizoji, T. An interacting-agent model of financial markets from the viewpoint of nonextensive statistical mechanics. Phys. A 2006, 370, 109–113. [Google Scholar] [CrossRef]
  10. Lima, J.; Silva, R., Jr.; Santos, J. Plasma oscillations and nonextensive statistics. Phys. Rev. E 2000, 61, 3260. [Google Scholar] [CrossRef]
  11. Soares, A.D.; Moura, N.J., Jr.; Ribeiro, M.B. Tsallis statistics in the income distribution of Brazil. Chaos Solitons Fractals 2016, 88, 158–171. [Google Scholar] [CrossRef]
  12. Oikonomou, N.; Provata, A.; Tirnakli, U. Nonextensive statistical approach to non-coding human DNA. Phys. A 2008, 387, 2653–2659. [Google Scholar] [CrossRef]
  13. Abe, S.; Suzuki, N. Itineration of the Internet over nonequilibrium stationary states in Tsallis statistics. Phys. Rev. E 2003, 67, 016106. [Google Scholar] [CrossRef]
  14. Preda, V.; Dedu, S.; Sheraz, M. New measure selection for Hunt-Devolder semi-Markov regime switching interest rate models. Phys. A 2014, 407, 350–359. [Google Scholar] [CrossRef]
  15. Tsallis, C. Introduction to Nonextensive Statistical Mechanics; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  16. Barbu, V.S.; Karagrigoriou, A.; Preda, V. Entropy, divergence rates and weighted divergence rates for Markov chains. I: The alpha-gamma and beta-gamma case. Proc. Rom. Acad. Ser. A Math. Phys. Tech. Sci. Inf. Sci. 2017, 18, 293–301. [Google Scholar]
  17. Barbu, V.S.; Karagrigoriou, A.; Preda, V. Entropy and divergence rates for Markov chains. II: The weighted case. Proc. Rom. Acad. Ser. A Math. Phys. Tech. Sci. Inf. Sci. 2018, 19, 3–10. [Google Scholar]
  18. Barbu, V.S.; Karagrigoriou, A.; Preda, V. Entropy and divergence rates for Markov chains. III: The Cressie and Read case and applications. Proc. Rom. Acad. Ser. A Math. Phys. Tech. Sci. Inf. Sci. 2018, 19, 413–421. [Google Scholar]
  19. Toma, A. Model selection criteria using divergences. Entropy 2014, 16, 2686–2698. [Google Scholar] [CrossRef]
  20. Toma, A.; Karagrigoriou, A.; Trentou, P. Robust model selection criteria based on pseudodistances. Entropy 2020, 22, 304. [Google Scholar] [CrossRef] [PubMed]
  21. Raşa, I. Convexity properties of some entropies. Results Math. 2018, 73, 105. [Google Scholar] [CrossRef]
  22. Raşa, I. Convexity properties of some entropies. II. Results Math. 2019, 74, 154. [Google Scholar] [CrossRef]
  23. Preda, V.; Dedu, S.; Iatan, I.; Dănilă Cernat, I.; Sheraz, M. Tsallis entropy for loss models and survival models involving truncated and censored random variables. Entropy 2022, 24, 1654. [Google Scholar] [CrossRef]
  24. Trivellato, B. The minimal k-entropy martingale measure. Int. J. Theor. Appl. Financ. 2012, 15, 1250038. [Google Scholar] [CrossRef]
  25. Trivellato, B. Deformed exponentials and applications to finance. Entropy 2013, 15, 3471–3489. [Google Scholar] [CrossRef]
  26. Hirică, I.-E.; Pripoae, C.-L.; Pripoae, G.-T.; Preda, V. Lie symmetries of the nonlinear Fokker-Planck equation based on weighted Kaniadakis entropy. Mathematics 2022, 10, 2776. [Google Scholar] [CrossRef]
  27. Pripoae, C.-L.; Hirică, I.-E.; Pripoae, G.-T.; Preda, V. Lie symmetries of the nonlinear Fokker-Planck equation based on weighted Tsallis entropy. Carpathian J. Math. 2022, 38, 597–617. [Google Scholar] [CrossRef]
  28. Iatan, I.; Dragan, M.; Dedu, S.; Preda, V. Using probabilistic models for data compression. Mathematics 2022, 10, 3847. [Google Scholar] [CrossRef]
  29. Wang, X.; Li, Y.; Qiao, Q.; Tavares, A.; Liang, Y. Water quality prediction based on machine learning and comprehensive weighting methods. Entropy 2023, 25, 1186. [Google Scholar] [CrossRef] [PubMed]
  30. Ebrahimi, N. How to measure uncertainty in the residual lifetime distribution. Sankhyā A 1996, 58, 48–56. [Google Scholar]
  31. Ebrahimi, N.; Pellerey, F. New partial ordering of survival functions based on the notion of uncertainty. J. Appl. Probab. 1995, 32, 202–211. [Google Scholar] [CrossRef]
  32. Sunoj, S.M.; Sankaran, P.G. Quantile based entropy function. Statist. Probab. Lett. 2012, 82, 1049–1053. [Google Scholar] [CrossRef]
  33. Nanda, A.K.; Sankaran, P.G.; Sunoj, S.M. Rényi’s residual entropy: A quantile approach. Statist. Probab. Lett. 2014, 85, 114–121. [Google Scholar] [CrossRef]
  34. Yan, L.; Kang, D.-T. Some new results on the Rényi quantile entropy ordering. Stat. Methodol. 2016, 33, 55–70. [Google Scholar] [CrossRef]
  35. Sfetcu, S.-C. Varma quantile entropy order. Analele Ştiinţifice Univ. Ovidius Constanţa 2021, 29, 249–264. [Google Scholar] [CrossRef]
  36. Sfetcu, R.-C.; Sfetcu, S.-C.; Preda, V. Ordering Awad-Varma entropy and applications to some stochastic models. Mathematics 2021, 9, 280. [Google Scholar] [CrossRef]
  37. Furuichi, S.; Minculete, N.; Mitroi, F.-C. Some inequalities on generalized entropies. J. Inequal. Appl. 2012, 2012, 226. [Google Scholar] [CrossRef]
  38. Furuichi, S.; Minculete, N. Refined Young inequality and its application to divergences. Entropy 2021, 23, 514. [Google Scholar] [CrossRef]
  39. Răducan, A.M.; Rădulescu, C.Z.; Rădulescu, M.; Zbăganu, G. On the probability of finding extremes in a random set. Mathematics 2022, 10, 1623. [Google Scholar] [CrossRef]
  40. Rădulescu, M.; Rădulescu, C.Z.; Zbăganu, G. Conditions for the existence of absolutely optimal portfolios. Mathematics 2021, 9, 2032. [Google Scholar] [CrossRef]
  41. Băncescu, I. Some classes of statistical distributions. Properties and applications. Analele Ştiinţifice Univ. Ovidius Constanţa 2018, 26, 43–68. [Google Scholar] [CrossRef]
  42. Catană, L.-I.; Răducan, A. Stochastic order for a multivariate uniform distributions family. Mathematics 2020, 8, 1410. [Google Scholar] [CrossRef]
  43. Catană, L.-I. Stochastic orders for a multivariate Pareto distribution. Analele Ştiinţifice Univ. Ovidius Constanţa 2021, 29, 53–69. [Google Scholar] [CrossRef]
  44. Suter, F.; Cernat, I.; Dragan, M. Some information measures properties of the GOS-concomitants from the FGM family. Entropy 2022, 24, 1361. [Google Scholar] [CrossRef] [PubMed]
  45. Rao, M.; Chen, Y.; Vemuri, B.C.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228. [Google Scholar] [CrossRef]
  46. Sati, M.M.; Gupta, N. Some characterization results on dynamic cumulative residual Tsallis entropy. J. Probab. Stat. 2015, 8, 694203. [Google Scholar] [CrossRef]
  47. Rajesh, G.; Sunoj, S.M. Some properties of cumulative Tsallis entropy of order α. Stat. Pap. 2019, 60, 933–943. [Google Scholar] [CrossRef]
  48. Toomaj, A.; Atabay, H.A. Some new findings on the cumulative residual Tsallis entropy. J. Comput. Appl. Math. 2022, 400, 113669. [Google Scholar] [CrossRef]
  49. Kumar, V. Characterization results based on dynamic Tsallis cumulative residual entropy. Commun. Stat. Theory Methods 2017, 46, 8343–8354. [Google Scholar] [CrossRef]
  50. Di Crescenzo, A.; Longobardi, M. Entropy-based measure of uncertainty in past lifetime distributions. J. Appl. Probab. 2002, 39, 434–440. [Google Scholar] [CrossRef]
  51. Di Crescenzo, A.; Longobardi, M. On cumulative entropies. J. Stat. Plan. Inference 2009, 139, 4072–4087. [Google Scholar] [CrossRef]
  52. Sachlas, A.; Papaioannou, T. Residual and past entropy in actuarial science and survival models. Methodol. Comput. Appl. Probab. 2014, 16, 79–99. [Google Scholar] [CrossRef]
  53. Di Crescenzo, A.; Toomaj, A. Extension of the past lifetime and its connection to the cumulative entropy. J. Appl. Probab. 2015, 52, 1156–1174. [Google Scholar] [CrossRef]
  54. Nair, N.U.; Sankaran, P.G.; Balakrishnan, N. Quantile-Based Reliability Analysis; Springer: New York, NY, USA, 2013. [Google Scholar]
  55. Calì, C.; Longobardi, M.; Ahmadi, J. Some properties of cumulative Tsallis entropy. Phys. A 2017, 486, 1012–1021. [Google Scholar] [CrossRef]
  56. Khammar, A.H.; Jahanshahi, S.M.A. On weighted cumulative residual Tsallis entropy and its dynamic version. Phys. A 2018, 491, 678–692. [Google Scholar] [CrossRef]
  57. Sunoj, S.M.; Krishnan, A.S.; Sankaran, P.G. A quantile-based study of cumulative residual Tsallis entropy measures. Phys. A 2018, 494, 410–421. [Google Scholar] [CrossRef]
  58. Alomani, G.; Kayid, M. Further properties of Tsallis entropy and its application. Entropy 2023, 25, 199. [Google Scholar] [CrossRef]
  59. Baratpour, S.; Khammar, A.H. Results on Tsallis entropy of order statistics and record values. Istat. J. Turk. Stat. Assoc. 2016, 8, 60–73. [Google Scholar]
  60. Athreya, K.; Lahiri, S. Measure Theory and Probability Theory; Springer Science+Business Media, LLC.: New York, NY, USA, 2006. [Google Scholar]
  61. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer Science+Business Media, LLC.: New York, NY, USA, 2007. [Google Scholar]
  62. Navarro, J.; del Aguila, Y.; Asadi, M. Some new results on the cumulative residual entropy. J. Statist. Plann. Inference 2010, 140, 310–322. [Google Scholar] [CrossRef]
  63. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. Records; John Wiley & Sons: New York, NY, USA, 1998. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
