1. Introduction
Information-theoretic measures, particularly entropy and its variants, have recently emerged as powerful tools in reliability inference and statistical modeling, providing new perspectives on uncertainty quantification, system characterization, and inferential methodologies. Within this framework, residual extropy offers a complementary dual to entropy, enabling innovative approaches to the study of consecutive r-out-of-n:G systems, which represent a fundamental class of multicomponent reliability structures. While much of the existing literature has focused on the information properties of general technical systems, growing attention has turned toward consecutive r-out-of-n configurations and their variants, motivated by their wide range of engineering applications. Such systems are typically classified as linear or circular, depending on the spatial arrangement of their components, and as good (G) or failure (F) systems according to their operating criteria. In particular, a linear consecutive r-out-of-n:G system consists of n independent and identically distributed components arranged sequentially, with system performance determined by the operation of at least r contiguous components. For illustration, consider a street with n parallel parking spaces designed for standard-sized vehicles; a bus, due to its larger dimensions, requires two contiguous spaces. The parking system is operational if at least two adjacent spaces are available, which corresponds to a linear consecutive 2-out-of-n:G system (see Gera [1]). In particular, the consecutive system framework includes the series system as the case r = n and the parallel system as the case r = 1. These structural variations highlight the versatility of consecutive systems and their importance in practical applications, where they serve as effective models for diverse reliability scenarios. Consequently, the reliability properties of consecutive systems have been widely studied under various assumptions, with significant contributions from several authors [2,3,4,5,6,7,8]. More recently, Eryilmaz [9] showed that the lifetime distribution of linear consecutive r-out-of-n:G systems admits a simple and tractable form when n ≤ 2r. Motivated by this simplification, the present study focuses on scenarios where this condition holds, as it facilitates both the mathematical analysis and the derivation of useful results.
Assume that the lifetime of each component in these systems is represented by X1, X2, …, Xn, where the corresponding order statistics are denoted by X(1) ≤ X(2) ≤ ⋯ ≤ X(n). We assume that the lifetimes of the components follow a probability density function (pdf) f and a cumulative distribution function (cdf) F. The lifetime of the system is denoted by T_{r|n:G}. Thus, if n ≤ 2r, the reliability (survival) function of T_{r|n:G} is given by the following (see, e.g., Eryilmaz [9]):

  S̄_{r|n:G}(t) = (n − r + 1)F̄^r(t) − (n − r)F̄^{r+1}(t),  t > 0,  (1)

where F̄(t) = 1 − F(t), t > 0, is the reliability function of the component lifetimes. It follows that:

  f_{r|n:G}(t) = [r(n − r + 1)F̄^{r−1}(t) − (r + 1)(n − r)F̄^r(t)] f(t),  t > 0.  (2)
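To make the closed-form reliability for n ≤ 2r concrete, the sketch below (our own helper names, written as an illustrative check rather than part of the original derivation) compares it with an exact brute-force enumeration of all component states.

```python
from itertools import product

def consec_reliability(p, n, r):
    """Reliability of a linear consecutive r-out-of-n:G system with i.i.d.
    components of reliability p, using the closed form valid for n <= 2r:
    (n - r + 1) p^r - (n - r) p^(r + 1)."""
    assert n <= 2 * r <= 2 * n
    return (n - r + 1) * p**r - (n - r) * p**(r + 1)

def system_works(states, r):
    """True if at least r contiguous components are working."""
    run = best = 0
    for s in states:
        run = run + 1 if s else 0
        best = max(best, run)
    return best >= r

def exact_reliability(p, n, r):
    """Exact reliability by enumerating all 2^n component state vectors."""
    total = 0.0
    for states in product([0, 1], repeat=n):
        if system_works(states, r):
            k = sum(states)
            total += p**k * (1 - p)**(n - k)
    return total
```

The enumeration grows as 2^n, so it is only a verification device for small n; the closed form is what the analysis below relies on.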
Ebrahimi et al. [10] significantly advanced the link between information theory and reliability by examining the information properties of order statistics. Building on Shannon’s seminal work [11], differential entropy has since become a fundamental concept in probability theory and one of the most widely used measures of uncertainty.
Let X be a nonnegative random variable with the pdf f. It is known that the Shannon differential entropy of X is expressed as H(X) = −E[log f(X)], provided that the expectation exists. Lad et al. [12] recently introduced extropy, a novel measure of uncertainty dual to entropy. Thus, the extropy of a nonnegative random variable X supported on the interval [0, ∞) with pdf f and cdf F is defined as follows:

  J(X) = −(1/2) ∫_0^∞ f²(x) dx = −(1/2) E[f(F^{−1}(U))],  (3)

where F^{−1}(u) = inf{x : F(x) ≥ u}, for u ∈ [0, 1], denotes the quantile function, or left continuous inverse, of F, and the random variable U is uniformly distributed on [0, 1]. Like entropy, as J(X) increases, the distribution approaches uniformity, indicating that extropy evaluates the uniformity of the distribution. As the probability density function becomes less concentrated, predicting the outcome of a random draw from f becomes more difficult. Distributions with sharp peaks are associated with low extropy, whereas those with more evenly spread probabilities correspond to higher extropy.
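As a quick numerical illustration of the definition (our own sketch, not part of the original text), extropy can be approximated by quadrature; for an exponential density with rate lam, the closed form J(X) = −lam/4 is recovered, since the integral of f² equals lam/2.

```python
import math

def extropy_numeric(pdf, a, b, steps=20000):
    """Approximate J(X) = -(1/2) * integral of pdf(x)^2 over [a, b]
    using the composite trapezoidal rule; b truncates the tail."""
    h = (b - a) / steps
    total = 0.5 * (pdf(a)**2 + pdf(b)**2)
    total += sum(pdf(a + i * h)**2 for i in range(1, steps))
    return -0.5 * total * h

# Exponential(lam): integral of f^2 is lam/2, hence J(X) = -lam/4.
lam = 2.0
exp_pdf = lambda x: lam * math.exp(-lam * x)
```

A sharper density (larger lam) gives a more negative extropy, matching the remark above about peaked distributions.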
Although entropy and extropy serve as measures of uncertainty and dispersion, they exhibit important differences that prevent them from inducing the same ordering over distributions. One key distinction lies in their ranges: differential entropy can take any value in (−∞, ∞), whereas extropy is always negative for absolutely continuous distributions, taking values in (−∞, 0). Moreover, for any continuous distribution with density f, it holds that J(X) ≤ −(1/2)e^{−H(X)}, a consequence of Jensen’s inequality E[log f(X)] ≤ log E[f(X)] applied to the representation −2J(X) = E[f(X)]. Additionally, if σ² denotes the variance of X, then σ < ∞ implies J(X) ≤ −3/(10√5 σ), ensuring that distributions with small variance have extropy bounded away from zero. This is due to the fact that concentration forces the density to be large somewhere, but the converse is not true: a given extropy value does not constrain the variance.
A notable advantage of extropy is its computational tractability, especially for complex distributions such as mixture distributions. While entropy often involves intractable integrals for mixture models, extropy can frequently be expressed in closed form or computed more efficiently due to its quadratic structure in the probability density function. This makes extropy particularly useful in applications involving model comparison, clustering, or information-based inference with mixture densities. However, extropy also has limitations. Unlike entropy, which has deep connections to fundamental principles in physics (e.g., the second law of thermodynamics) and information theory (e.g., Shannon’s source coding theorem), extropy lacks such foundational interpretations. Additionally, because extropy is always bounded above by zero and behaves differently under transformations (e.g., scaling), it may be less intuitive or less directly applicable in certain theoretical contexts where entropy’s properties are essential.
In various situations, engineers must consider and quantify the uncertainty in a system’s lifetime, as it directly impacts system reliability. Let X denote the lifetime of a new system, and let us assume that the uncertainty of the system is quantified by J(X). In certain scenarios, operators may possess information regarding the system’s current age. For instance, they may know that the system is operating at time t and may be interested in measuring the uncertainty of its residual lifetime, i.e., X_t = [X − t | X > t]. In these situations, J(X) is no longer useful. Consequently, the residual extropy is introduced by (see, e.g., Toomaj et al. [13])

  J(X; t) = −(1/(2F̄²(t))) ∫_t^∞ f²(x) dx,  t > 0.  (4)

The pdf of X_t is represented by the following:

  f_t(x) = f(x + t)/F̄(t),  x > 0.  (5)

Additionally,

  F_t^{−1}(u) = F^{−1}(1 − (1 − u)F̄(t)) − t,  u ∈ (0, 1),  (6)

denotes the quantile function of X_t.
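The residual extropy in (4) can also be checked numerically; the sketch below (our own helper names) uses the exponential distribution, whose memorylessness makes J(X; t) = −lam/4 constant in t.

```python
import math

def residual_extropy_num(pdf, sf, t, upper, steps=20000):
    """Approximate J(X; t) = -1/(2*sf(t)^2) * integral_t^upper pdf(x)^2 dx
    via the trapezoidal rule; `upper` truncates the infinite tail."""
    h = (upper - t) / steps
    total = 0.5 * (pdf(t)**2 + pdf(upper)**2)
    total += sum(pdf(t + i * h)**2 for i in range(1, steps))
    return -total * h / (2.0 * sf(t)**2)

lam = 1.5
f = lambda x: lam * math.exp(-lam * x)
S = lambda x: math.exp(-lam * x)
# Memorylessness: J(X; t) = -lam/4 for every t >= 0.
```

For aging (non-exponential) lifetimes the same routine reveals how J(X; t) drifts with t, which is the theme of the monotonicity results later in the paper.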
Extropy and its dynamic versions have attracted considerable attention in recent years, particularly in the contexts of order statistics, record values, and system reliability. Toomaj et al. [
13] demonstrated that extropy effectively ranks the uniformity of a broad class of absolutely continuous distributions and highlighted several theoretical advantages of this measure over entropy. Notably, they derived a closed-form expression for the extropy of finite mixture distributions and extended the analysis to dynamic settings by introducing residual and past extropy. Building on this foundation, Qiu [
14] and Qiu and Jia [
15,
16] investigated the extropy and residual extropy of order statistics and record values, establishing key results concerning characterization, monotonicity, and bounds. In engineering applications, Qiu et al. [
17] explored the extropy of lifetimes in mixed systems under the assumption of independent and identically distributed component lifetimes. More recently, Shrahili and Kayid [
18] offered new perspectives on the residual extropy of order statistics, while Saha and Kayal [
19] introduced copula-based extropy measures, elucidating their connections to dependence structures.
Parallel developments have also expanded the landscape of information-theoretic tools in reliability theory. These include cumulative residual entropy, extended fractional cumulative past entropy, paired ϕ-entropy, weighted (residual) varentropy, and dynamic varentropy, among others [
20,
21,
22,
23,
24]. Most recently, Kayid and Shrahili [
25] analyzed consecutive k-out-of-n systems through the lens of fractional generalized cumulative residual entropy, deriving several structural properties.
Despite these advances, the residual extropy of consecutive systems, particularly consecutive r-out-of-n:G systems, remains largely unexplored. This paper fills that gap by studying the variability and uncertainty of system lifetimes using residual extropy. A key advantage of residual extropy over many alternative information measures is its computational tractability, especially for complex or composite lifetime distributions. This analytical simplicity enhances its practical utility, making it a powerful and accessible tool for quantifying uncertainty in reliability modeling.
Therefore, the remainder of this paper is organized as follows.
Section 2 derives an explicit expression for the residual extropy of consecutive r-out-of-n:G systems and establishes its connection to the residual extropy of samples from a uniform distribution. Preservation properties under stochastic orderings and useful bounds are also presented.
Section 3 provides several monotonicity and characterization results.
Section 4 investigates the extropy of conditional consecutive r-out-of-n:G systems under the assumption that all components are functional at time t.
Section 5 introduces a parametric estimator based on exponential component lifetimes for estimating the residual extropy and illustrates its performance using both simulated and real datasets. Finally,
Section 6 summarizes the main findings and concludes the study.
Throughout this paper, we consider non-negative random variables denoted by Z and Y. These variables have absolutely continuous cdfs denoted by F and G, survival functions denoted by F̄ and Ḡ, and pdfs denoted by f and g, respectively. The terms “increasing” and “decreasing” are used in a non-strict sense. We adopt the following notions:
Z is less than Y in the usual stochastic order, denoted by Z ≤st Y, if F̄(x) ≤ Ḡ(x) for all x;
Z is less than Y in the hazard rate order, denoted by Z ≤hr Y, if Ḡ(x)/F̄(x) is increasing in x;
Z is less than Y in the dispersive order, denoted by Z ≤d Y, if F^{−1}(β) − F^{−1}(α) ≤ G^{−1}(β) − G^{−1}(α) for all 0 < α ≤ β < 1, where F^{−1} and G^{−1} are the left continuous inverses of F and G, respectively. For formal definitions and properties of these notions, we refer readers to the work of Shaked and Shanthikumar [26].
2. Residual Extropy of Consecutive System
In the subsequent analysis, we focus on investigating the residual extropy of T_{r|n:G}. This uncertainty is measured through the density of [T_{r|n:G} − t | T_{r|n:G} > t], which reflects the predictability of the residual lifetime of consecutive r-out-of-n:G systems. Let X1, …, Xn denote the lifetimes of the components of these systems, and let the ordered lifetimes of the components be represented by X(1) ≤ ⋯ ≤ X(n). Throughout this paper, we denote the lifetime of the series system as T_{n|n:G} =st X_{1:n}, where =st indicates equality in distribution. The residual extropy of the consecutive r-out-of-n:G system, denoted by J(T_{r|n:G}; t), is expressed as the extropy of [T_{r|n:G} − t | T_{r|n:G} > t]. To this aim, it is known that the transformed component lifetimes Ui = F(Xi) for i = 1, …, n are i.i.d. random variables uniformly distributed on the interval [0, 1]. So, the pdf of V_{r|n} = F(T_{r|n:G}), when n ≤ 2r, can be expressed as follows:

  g_{r|n}(v) = r(n − r + 1)(1 − v)^{r−1} − (r + 1)(n − r)(1 − v)^r,  0 < v < 1,  (8)

where 0 < v < 1. The pdf (8) is derived by observing that the Jacobian of the transformation v = F(t) is given by dv/dt = f(t). Consequently, we have f_{r|n:G}(t) = g_{r|n}(F(t)) f(t). On the other hand, the reliability function of V_{r|n} can be obtained as

  S̄_{r|n}(v) = (n − r + 1)(1 − v)^r − (n − r)(1 − v)^{r+1},  0 < v < 1.  (9)
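The transformed pdf and reliability function above admit a simple numerical sanity check; the sketch below (our own helper names) verifies that g_{r|n} integrates to one and equals the negative derivative of the survival function.

```python
def g_pdf(v, n, r):
    """pdf of V = F(T) for the consecutive r-out-of-n:G system (n <= 2r)."""
    return r * (n - r + 1) * (1 - v)**(r - 1) - (r + 1) * (n - r) * (1 - v)**r

def g_sf(v, n, r):
    """Reliability (survival) function of V."""
    return (n - r + 1) * (1 - v)**r - (n - r) * (1 - v)**(r + 1)

def trapezoid(fun, a, b, steps=20000):
    """Composite trapezoidal rule for a smooth integrand on [a, b]."""
    h = (b - a) / steps
    return h * (0.5 * (fun(a) + fun(b)) + sum(fun(a + i * h) for i in range(1, steps)))
```

Under n ≤ 2r the bracketed factor in g_{r|n} stays non-negative, so g_{r|n} is a genuine density on (0, 1).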
In the rest of this paper, we use the notation Y ~ TB(a, b, t) to indicate that the random variable Y has a truncated beta distribution with the density function

  f_Y(y) = y^{a−1}(1 − y)^{b−1} / B_t(a, b),  0 < y < t,  (10)

where B_t(a, b) = ∫_0^t u^{a−1}(1 − u)^{b−1} du denotes the lower incomplete beta function. The next lemma is crucial for the subsequent analyses and simplifies the computation of the residual extropy of consecutive r-out-of-n:G systems.
Lemma 1. Let V_{r|n} stand for the lifetime of the consecutive r-out-of-n:G system having the i.i.d. component lifetimes uniformly distributed in [0, 1]. Then, for n/2 ≤ r < n and 0 < t < 1, we have

  J(V_{r|n}; t) = −(1/(2S̄²_{r|n}(t))) ∫_t^1 [g_{r|n}(v)]² dv,  (11)

where g_{r|n} and S̄_{r|n} are given in (8) and (9).
Proof. Let us define a_{r|n} = r(n − r + 1) and b_{r|n} = (r + 1)(n − r). Using Equations (4), (8) and (9), J(V_{r|n}; t) is expressed as

  J(V_{r|n}; t) = −(1/(2S̄²_{r|n}(t))) ∫_t^1 [a_{r|n}(1 − v)^{r−1} − b_{r|n}(1 − v)^r]² dv;

this completes the proof. □
The lemma above covers the range n/2 ≤ r < n. For the boundary case r = n, corresponding to the series system, the next result follows directly, and its proof is therefore omitted.
Lemma 2. If V_{n|n} represents the lifetime of the series system with the i.i.d. component lifetimes uniformly distributed in [0, 1], then, for all 0 < t < 1, we have

  J(V_{n|n}; t) = −n²/(2(2n − 1)(1 − t)).
Remark 1. An explicit expression, as stated in Lemma 1, can be directly obtained from Equation (11) after some algebraic manipulation. Writing a_{r|n} = r(n − r + 1) and b_{r|n} = (r + 1)(n − r), we have

  ∫_t^1 [g_{r|n}(v)]² dv = a²_{r|n}(1 − t)^{2r−1}/(2r − 1) − a_{r|n} b_{r|n}(1 − t)^{2r}/r + b²_{r|n}(1 − t)^{2r+1}/(2r + 1).

After simplifying, for all 0 < t < 1, we have the following representation:

  J(V_{r|n}; t) = −[a²_{r|n}(1 − t)^{2r−1}/(2r − 1) − a_{r|n} b_{r|n}(1 − t)^{2r}/r + b²_{r|n}(1 − t)^{2r+1}/(2r + 1)] / (2[(n − r + 1)(1 − t)^r − (n − r)(1 − t)^{r+1}]²).
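The series case admits a particularly clean check. Under the uniform model, V_{n|n} has pdf n(1 − v)^{n−1} and survival (1 − v)^n, and direct integration gives J(V; t) = −n²/(2(2n − 1)(1 − t)); the sketch below (our own helper names) confirms this against quadrature.

```python
def series_uniform_residual_extropy(n, t):
    """Closed form J(V_{n|n}; t) = -n^2 / (2 (2n - 1)(1 - t))."""
    return -n**2 / (2.0 * (2 * n - 1) * (1 - t))

def residual_extropy_num(pdf, sf, t, upper, steps=20000):
    """Trapezoidal approximation of J(X; t) = -1/(2 sf(t)^2) * int_t^upper pdf^2."""
    h = (upper - t) / steps
    total = 0.5 * (pdf(t)**2 + pdf(upper)**2)
    total += sum(pdf(t + i * h)**2 for i in range(1, steps))
    return -total * h / (2.0 * sf(t)**2)
```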
In the upcoming theorem, we express the residual extropy of T_{r|n:G} using the previously mentioned transformations and referencing Lemma 1.
Theorem 1. Let T_{r|n:G} stand for the lifetime of the consecutive r-out-of-n:G system having the i.i.d. component lifetimes with pdf f and cdf F, respectively. Then, for n/2 ≤ r < n, we have

  J(T_{r|n:G}; t) = −(1/(2S̄²_{r|n}(F(t)))) ∫_{F(t)}^1 [g_{r|n}(u)]² f(F^{−1}(u)) du,  (12)

in which g_{r|n}(u) = r(n − r + 1)(1 − u)^{r−1} − (r + 1)(n − r)(1 − u)^r, for all 0 < u < 1.
Proof. Assume that a_{r|n} and b_{r|n} are defined as in the proof of Lemma 1. In view of (1), (2) and (4), using the change of variable u = F(x) yields the following representation:

  J(T_{r|n:G}; t) = −(1/(2S̄²_{r|n:G}(t))) ∫_t^∞ [g_{r|n}(F(x))]² f²(x) dx = −(1/(2S̄²_{r|n}(F(t)))) ∫_{F(t)}^1 [g_{r|n}(u)]² f(F^{−1}(u)) du.

The final equality is an immediate consequence of Lemma 1; this completes the proof. □
In the case of a series system, the following theorem provides a formal statement of its residual extropy properties and highlights its role as a boundary case among consecutive r-out-of-n:G systems.
Theorem 2. If T_{n|n:G} represents the lifetime of the series system having the i.i.d. component lifetimes with pdf f and cdf F, respectively, then, for all t > 0, we have

  J(T_{n|n:G}; t) = −(n²/(2F̄^{2n}(t))) ∫_{F(t)}^1 (1 − u)^{2n−2} f(F^{−1}(u)) du,  (14)

in which F^{−1}(u) denotes the quantile function of F, for all 0 < u < 1.
The age of the components in a consecutive r-out-of-n:G system plays a crucial role in determining the behavior of the residual extropy of the system lifetime.
Definition 1. Let X be a nonnegative random variable with probability density function f, survival function F̄, and hazard rate function defined as λ(t) = f(t)/F̄(t). The random variable X is said to exhibit an increasing failure rate (IFR) property if λ(t) increases with t, and it shows a decreasing failure rate (DFR) property if λ(t) decreases with t.
The following theorem establishes the relationship between the IFR property of the parent distribution and the residual extropy of the consecutive r-out-of-n:G system lifetime. Several well-established probability distributions exhibit the IFR property under suitable parameter restrictions, including the Weibull and gamma distributions with shape parameter at least one, the exponential distribution, and the linear failure rate distribution. Consequently, Theorem 3 is applicable to these distributions as well.
Theorem 3. If X is IFR, then for all n ≤ 2r, J(T_{r|n:G}; t) is decreasing in t.
Proof. We recall that if n ≤ 2r and the component lifetimes are IFR, then T_{r|n:G} is IFR according to Theorem 4 of Eryilmaz and Navarro [
27]. Therefore, the proof is completed using Theorem 5.3 of Toomaj et al. [
13]. □
The following example illustrates the application of Theorems 1–3.
Example 1. Suppose that T_{r|n:G}, such that n ≤ 2r for r < n, is the lifetime of a linear consecutive r-out-of-n:G system having i.i.d. component lifetimes following the Rayleigh distribution with survival function given by

  F̄(x) = e^{−x²},  x > 0.

The Rayleigh distribution coincides with the chi distribution with two degrees of freedom; its square is exponential. Through a straightforward calculation, it can be shown that:

  f(F^{−1}(u)) = 2(1 − u)√(−log(1 − u)),  0 < u < 1.

Due to Lemma 1, we obtain the integral in Equation (12) as a combination of terms of the form

  ∫_{F(t)}^1 (1 − u)^k √(−log(1 − u)) du,  k = 2r − 1, 2r, 2r + 1.

On the other hand, one can see that each of these terms can be evaluated through the upper incomplete gamma function. Thus, for all t > 0, Equation (12) and the previous expressions yield the residual extropy J(T_{r|n:G}; t) in a semi-explicit form for all n ≤ 2r. In the particular case where r = n, the residual extropy for the series system, as derived in Equation (14), is

  J(T_{n|n:G}; t) = −√(n/8) e^{2nt²} Γ(3/2, 2nt²),

where Γ(a, z) is known as the upper incomplete gamma function and is defined as follows:

  Γ(a, z) = ∫_z^∞ u^{a−1} e^{−u} du,  z ≥ 0.

Generally, obtaining an explicit analytical expression for J(T_{r|n:G}; t) is a challenge. So, we employ a computational approach to investigate the behavior of J(T_{r|n:G}; t) for special choices of r and n over time t.
Figure 1 summarizes the numerical analysis, illustrating the relationship between J(T_{r|n:G}; t) and t for selected values of r and n. These trends align with Theorem 3, which shows that the residual extropy decreases with t for IFR random variables.
This decreasing trend carries important practical implications. Residual extropy quantifies the uncertainty about the remaining lifetime of a system given that it has survived up to time t. A decline in residual extropy over time indicates that the system becomes more predictable as it ages: the probability density function of the residual lifetime becomes increasingly concentrated, reducing uncertainty. In reliability terms, this aligns with the behavior of IFR systems, which are more likely to fail as they become older.
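The decreasing trend described above can be reproduced numerically. The sketch below (our own helper names; Rayleigh components with F̄(x) = e^{−x²} and the assumed illustrative case r = 3, n = 4) evaluates the system residual extropy on a grid of t and checks that it decreases, as Theorem 3 predicts for IFR components.

```python
import math

def g_pdf(v, n, r):
    """pdf of V = F(T) for the consecutive r-out-of-n:G system (n <= 2r)."""
    return r * (n - r + 1) * (1 - v)**(r - 1) - (r + 1) * (n - r) * (1 - v)**r

def g_sf(v, n, r):
    return (n - r + 1) * (1 - v)**r - (n - r) * (1 - v)**(r + 1)

def system_residual_extropy(t, n, r, F, f, upper, steps=20000):
    """J(T; t) = -1/(2 S(t)^2) * int_t^upper [g(F(x)) f(x)]^2 dx, by trapezoid."""
    pdf_T = lambda x: g_pdf(F(x), n, r) * f(x)
    h = (upper - t) / steps
    total = 0.5 * (pdf_T(t)**2 + pdf_T(upper)**2)
    total += sum(pdf_T(t + i * h)**2 for i in range(1, steps))
    return -total * h / (2.0 * g_sf(F(t), n, r)**2)

# Rayleigh components: F(x) = 1 - exp(-x^2), f(x) = 2 x exp(-x^2), IFR (hazard 2x).
F = lambda x: 1.0 - math.exp(-x * x)
f = lambda x: 2.0 * x * math.exp(-x * x)
values = [system_residual_extropy(t, 4, 3, F, f, 6.0) for t in (0.2, 0.6, 1.0, 1.4)]
```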
An explicit expression for the residual extropy of the series system under the Rayleigh distribution has been derived. However, no closed form exists for the residual extropy of the consecutive r-out-of-n:G system for this distribution when r < n. This limitation also arises for other distributions, where no closed form exists for the residual extropy of consecutive r-out-of-n:G systems. To address this challenge, we derive bounds for the residual extropy and, in particular, establish a theorem that provides a lower bound expressed in terms of the residual extropy of the corresponding system under the uniform distribution on [0, 1] and the mode of the original distribution.
Theorem 4. Under the conditions of Theorem 1, suppose we have M = f(m) < ∞, where m is the mode of the pdf f. Then, for all t > 0, we obtain

  J(T_{r|n:G}; t) ≥ M J(V_{r|n}; F(t)).
Proof. For all x ≥ 0, it holds that f(x) ≤ f(m) = M; thus, we have

  ∫_{F(t)}^1 [g_{r|n}(u)]² f(F^{−1}(u)) du ≤ M ∫_{F(t)}^1 [g_{r|n}(u)]² du.

The result is readily obtained from (12), and this completes the proof. □
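Under our reading of the bound in Theorem 4, J(T_{r|n:G}; t) ≥ f(m)·J(V_{r|n}; F(t)) with m the mode of f, the inequality can be checked numerically; the sketch below uses exponential components (mode at zero, so M = f(0) = lam) and our own helper names.

```python
import math

def g_pdf(v, n, r):
    return r * (n - r + 1) * (1 - v)**(r - 1) - (r + 1) * (n - r) * (1 - v)**r

def g_sf(v, n, r):
    return (n - r + 1) * (1 - v)**r - (n - r) * (1 - v)**(r + 1)

def trapezoid(fun, a, b, steps=20000):
    h = (b - a) / steps
    return h * (0.5 * (fun(a) + fun(b)) + sum(fun(a + i * h) for i in range(1, steps)))

def J_system(t, n, r, F, f, upper):
    """Residual extropy of the consecutive system, by quadrature."""
    pdf_T = lambda x: g_pdf(F(x), n, r) * f(x)
    return -trapezoid(lambda x: pdf_T(x)**2, t, upper) / (2.0 * g_sf(F(t), n, r)**2)

def J_uniform(u, n, r):
    """Residual extropy of the uniform-based system at truncation point u."""
    return -trapezoid(lambda v: g_pdf(v, n, r)**2, u, 1.0) / (2.0 * g_sf(u, n, r)**2)

lam, n, r = 2.0, 5, 3
F = lambda x: 1.0 - math.exp(-lam * x)
f = lambda x: lam * math.exp(-lam * x)
M = lam  # exponential density is maximal at its mode m = 0
```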
By applying the same reasoning and invoking Theorem 2, the lower bound for the series system can be obtained. In general, we establish a lower bound for the residual extropy of T_{r|n:G} in terms of the residual extropy of a uniform consecutive r-out-of-n:G system and the mode m of the original distribution.
Table 1 presents these lower bounds for several common distributions, as derived from Theorem 4.
In
Figure 2, we present the residual extropy bounds for various parameter values of the distributions listed in
Table 1.
The following theorem establishes that, for consecutive -out-of-:G systems with DFR components, the series system attains the minimum residual extropy. Several widely used families (e.g., Weibull, gamma, Pareto, exponential, log-logistic) admit IFR or DFR behavior under suitable parameter ranges; hence Theorem 5 applies in those cases.
Theorem 5. Let us assume that X1, …, Xn denote i.i.d. lifetimes of the components having the DFR property. Then, for all n ≤ 2r:
- (i)
it holds that J(T_{n|n:G}; t) ≤ J(T_{r|n:G}; t), for all t ≥ 0;
- (ii)
it holds that J(T_{n|n:G}; t) ≤ J(T^c_{r|n:G}; t), for all t ≥ 0, where T^c_{r|n:G} denotes the lifetime of the circular consecutive r-out-of-n:G system.
Proof. (i) The relation X_{1:n} ≤hr T_{r|n:G} is readily apparent because, recalling (1), the function

  S̄_{r|n:G}(t)/F̄^n(t) = (n − r + 1)F̄^{r−n}(t) − (n − r)F̄^{r+1−n}(t)

is increasing in t for all n ≤ 2r. Moreover, if X demonstrates the DFR property, then X_{1:n} also possesses this property. Since X_{1:n} ≤hr T_{r|n:G} and X_{1:n} is DFR, then, utilizing Theorem 5.2 of Toomaj et al. [13], we infer that J(T_{n|n:G}; t) ≤ J(T_{r|n:G}; t) for all t ≥ 0.
- (ii)
Referring to Proposition 3.2 by Navarro and Eryilmaz [28], it can be deduced that X_{1:n} ≤hr T^c_{r|n:G}. Consequently, employing similar arguments as in Part (i) yields comparable outcomes, and this completes the proof. □
3. Monotone Properties and Characterization Results
The residual extropy of a consecutive system measures the uncertainty in its remaining lifetime, conditional on survival up to time . Its monotonic behavior describes how this uncertainty evolves as the system ages. For consecutive -out-of-:G systems, these properties reveal how the predictability of failure changes over time and under different structures. A decreasing residual extropy indicates that failure becomes more predictable with age, thereby supporting more effective maintenance planning. Conversely, systems with higher residual extropy retain greater unpredictability, which may complicate scheduling but reflects more resilient behavior. Such monotonicity analysis also enables systematic comparisons across system structures and component arrangements, making it an important tool in both theoretical studies of reliability and practical extropy-based assessment. In this section, we investigate the monotonic properties of residual extropy in consecutive systems and derive related characterization results. The following theorem establishes a key monotonicity property of residual extropy in consecutive systems.
Theorem 6. For n ≤ 2r, if J(X_{1:n}; t) is decreasing in t, then J(T_{r|n:G}; t) is also decreasing in t.
Proof. Since the hazard rate function of is , where represents the hazard rate function of , Equations (1) and (2) yield the following:
where
As
, is strictly decreasing in
for
, thus
is strictly increasing in
. Consequently, the ratio
is also strictly increasing in
. Moreover, it holds that
. Thus, Theorem 3 of Kayid [
29] directly implies the decreasing monotonicity of J(T_{r|n:G}; t) in t, and this completes the proof. □
The following counterexample demonstrates that the result in Theorem 6 does not apply in the increasing case, meaning that even if J(X_{1:n}; t) increases with t, J(T_{r|n:G}; t) may not necessarily increase for all n ≤ 2r.
Example 2. Assume that T_{r|5:G}, r = 3, 4, 5, denotes the lifetime of the linear consecutive r-out-of-5:G system. This system consists of 5 components arranged in a linear order. The system functions if and only if at least r consecutive components are functioning. We assume that the lifetimes of these components are i.i.d., following the Pareto type II distribution with the parameters 1 and 3 with the cdf given by

  F(x) = 1 − (1 + x)^{−3},  x > 0.

Figure 3 summarizes the numerical analysis showing the relationship between J(T_{r|5:G}; t) and t for r = 3, 4, 5. The figure indicates that J(T_{5|5:G}; t) and J(T_{4|5:G}; t) increase as t increases, while this is not the case for J(T_{r|5:G}; t) when r = 3. This suggests that an increase in J(X_{1:n}; t) with respect to t does not necessarily imply an increase in J(T_{r|n:G}; t) for all n ≤ 2r.
Recently, Qiu and Jia [16] showed that the residual extropy of the first-order statistic decreases with t if the pdf f is a decreasing function on its support. However, the following counterexample demonstrates that this theorem may not always apply. Before presenting the example, we introduce a theorem that highlights the monotonicity properties of the residual extropy in series systems.
Theorem 7. If T_{n|n:G} denotes the lifetime of the series system having the i.i.d. components with DFR lifetimes, then J(T_{n|n:G}; t) increases with t.
Proof. It is well-known that if X is DFR, then X_{1:n} is also DFR, as the hazard rate function of X_{1:n} is given by nλ(t), where λ denotes the hazard rate function of X
. The proof is concluded by referencing Theorem 5.3 from Toomaj et al. [
13], and this completes the proof. □
We now present a counterexample demonstrating that the result in Theorem 7 cannot be generalized to the consecutive r-out-of-n:G systems when r < n. Furthermore, we indicate that the conclusion of Theorem 6 by Qiu and Jia [16] may not be valid.
Example 3. Let us consider a linear consecutive r-out-of-4:G system with lifetime T_{r|4:G} for r = 2, 3, 4. The lifetimes of the components are i.i.d. with the following pdf:

  f(x) = 1/(1 + x)²,  x > 0.

This follows a log-logistic distribution with both shape and scale parameters of unity, exhibiting the DFR property, and its pdf is decreasing in x. The numerical analysis, illustrated in Figure 4, examines the relationship between J(T_{r|4:G}; t) and t for r = 2, 3, 4. The figure indicates that J(T_{4|4:G}; t) increases with t, while this behavior does not apply to J(T_{2|4:G}; t) and J(T_{3|4:G}; t). This suggests that the findings in Theorem 7 do not extend to the consecutive r-out-of-n:G systems.
Characterizing the underlying distribution is an important theme in the literature. In this context, Baratpour et al. [30] showed that the Rényi entropy of the ith order statistic uniquely determines the underlying distribution. Related results were obtained by Baratpour [31] for the cumulative residual entropy of the first-order statistic. Qiu [14] further demonstrated that the extropy of the ith order statistic provides a similar characterization, and Qiu and Jia [16] extended this to residual extropy. Motivated by these findings, we now investigate the characterization of the underlying distribution through the residual extropy of consecutive systems. Specifically, the derivative of the residual extropy can be expressed as

  J′(T_{r|n:G}; t) = 2λ_{r|n}(t)J(T_{r|n:G}; t) + λ²_{r|n}(t)/2,  (17)

or equivalently

  J′(T_{r|n:G}; t) = λ_{r|n}(t)[2J(T_{r|n:G}; t) + λ_{r|n}(t)/2],

for all t ≥ 0, where λ_{r|n}(t) denotes the hazard rate function of T_{r|n:G}
. To achieve our aim, we recall the problem of establishing a sufficient condition for the existence of a unique solution to the initial value problem (IVP):

  y′(x) = h(x, y(x)),  y(x0) = y0,  (18)

where h is a function of two variables defined in a region D, and (x0, y0) is a point in D. Here, y is the unknown function. By a solution of (18), we mean a function φ which satisfies the following conditions:
- (i)
φ is differentiable on an interval containing x0,
- (ii)
the graph of φ lies in D,
- (iii)
φ(x0) = y0, and (iv) φ′(x) = h(x, φ(x)), for all x in that interval.
The next theorem and lemma, originally presented by Gupta and Kirmani [
32], will be utilized in deriving our characterization results.
Theorem 8. Let h(x, y) be a continuous function defined in a domain D, and let h satisfy the Lipschitz condition (with respect to y) in D, that is,

  |h(x, y1) − h(x, y2)| ≤ k |y1 − y2|,

for every point (x, y1) and (x, y2) in D. Then, the function φ satisfying the IVP φ′(x) = h(x, φ(x)) and φ(x0) = y0 is unique.
Lemma 3. Suppose that the function h(x, y) is continuous in a convex region D, and that ∂h/∂y exists and is continuous in D. Then h satisfies the Lipschitz condition in D.
We conclude this section with a characterization of the underlying distribution based on the residual extropy of consecutive r-out-of-n:G systems.
Theorem 9. Let J(T_{r|n:G}; t) and J(T*_{r|n:G}; t) be the residual extropies of the consecutive r-out-of-n:G systems having the i.i.d. component lifetimes X and Y with pdfs f and g, and cdfs F and G, respectively. Then F = G if and only if J(T_{r|n:G}; t) = J(T*_{r|n:G}; t) for all t ≥ 0 and for n ≤ 2r.
Proof. We just prove the sufficiency part where the necessity is trivial. From (17), we have
Taking the derivative of the above equation with respect to t, we have
Assume that
for all
, and
. Then, for all
, we get
where
It follows from Theorem 8 and Lemma 3 that for all
, which implies
In view of
and
for all
, where
is defined in (9), we have
for all
this completes the proof. □
4. Conditional Residual Extropy of Consecutive Systems
In this section, we aim to evaluate the residual lifetime T^t_{r|n:G} = [T_{r|n:G} − t | X_{1:n} > t] under the condition that all components of the system are alive at time t. Then, it can be seen that the survival function of T^t_{r|n:G} can be written as follows (see Salehi et al. [33]):

  P(T^t_{r|n:G} > x) = (n − r + 1)(F̄(t + x)/F̄(t))^r − (n − r)(F̄(t + x)/F̄(t))^{r+1},  x > 0,  (19)

where F̄ is the reliability function of the component lifetimes appearing in (1). It is worth mentioning that Salehi et al. [33] studied the stochastic and aging properties of the residual lifetime of consecutive r-out-of-n:G systems under the condition that, at time t, i components of the system, where i is less than or equal to n, are in working condition. Additionally, they proposed the mean residual lifetime for such systems and derived various properties. Assuming that F is absolutely continuous with the pdf f, the pdf of T^t_{r|n:G} is given by

  f_{T^t}(x) = g_{r|n}(1 − F̄(t + x)/F̄(t)) f(t + x)/F̄(t),  x > 0,  (20)

where g_{r|n} is given in (8). Given that F_t(x) = 1 − F̄(t + x)/F̄(t) is the cdf of the residual lifetime X_t, we have

  f_{T^t}(x) = g_{r|n}(F_t(x)) f_t(x),  x > 0,

where the quantile function F_t^{−1} of X_t
is defined in (6). In what follows, we focus on the study of the extropy of the random variable T^t_{r|n:G}, which measures the amount of uncertainty present in its density with regard to the predictability of the system’s residual lifetime in terms of the extropy. The probability integral transformation V_{r|n} = F(T_{r|n:G}) plays a crucial role in our investigation. The pdf of V_{r|n} is provided in (8). In the upcoming theorem, we express the extropy of T^t_{r|n:G} using the aforementioned transforms.
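The conditional survival function above can be validated by simulation: given that all n components are alive at time t, their residual lifetimes are i.i.d. with survival F̄(t + x)/F̄(t), and the system's residual lifetime is the consecutive structure applied to them. The sketch below (our own helper names; Rayleigh components as an assumed example) compares the formula with a Monte Carlo estimate.

```python
import math
import random

def conditional_sf(x, t, n, r, Sbar):
    """P(T^t > x): (n-r+1) q^r - (n-r) q^(r+1) with q = Sbar(t+x)/Sbar(t)."""
    q = Sbar(t + x) / Sbar(t)
    return (n - r + 1) * q**r - (n - r) * q**(r + 1)

def system_lifetime(lifetimes, r):
    """Lifetime of a linear consecutive r-out-of-n:G system:
    max over windows of the minimum lifetime within the window."""
    n = len(lifetimes)
    return max(min(lifetimes[j:j + r]) for j in range(n - r + 1))

def mc_conditional_sf(x, t, n, r, trials=100_000, seed=7):
    """Monte Carlo with Rayleigh components, Sbar(y) = exp(-y^2); residual
    lifetimes are drawn by inverting the conditional survival function."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        resid = [math.sqrt(t * t - math.log(rng.random())) - t for _ in range(n)]
        if system_lifetime(resid, r) > x:
            hits += 1
    return hits / trials
```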
Theorem 10. Let X1, …, Xn denote i.i.d. random variables representing the lifetimes of the components of the consecutive r-out-of-n:G system having the common pdf f and cdf F. Then, for all t > 0, the extropy of T^t_{r|n:G} can be expressed as follows:

  J(T^t_{r|n:G}) = −(1/(2F̄(t))) ∫_0^1 [g_{r|n}(w)]² f(F^{−1}(1 − (1 − w)F̄(t))) dw.  (21)
Proof. Applying the definition of extropy to the density in (20) and using the change of variable w = 1 − F̄(t + x)/F̄(t) yields

  J(T^t_{r|n:G}) = −(1/2) ∫_0^∞ f²_{T^t}(x) dx = −(1/(2F̄(t))) ∫_0^1 [g_{r|n}(w)]² f(F^{−1}(1 − (1 − w)F̄(t))) dw.

In Equation (21), g_{r|n} represents the pdf of V_{r|n} as given in (8); this completes the proof. □
The survival function provides the probability that the system survives beyond , given that all components are functioning at time . However, the conditional residual extropy offers a more comprehensive assessment of the uncertainty associated with the remaining lifetime. Traditional measures, such as the mean residual life, may not adequately capture the randomness inherent in the failure process. Extropy, as an information measure, quantifies the degree of disorder or uncertainty in the conditional residual lifetime distribution. This perspective is particularly important for consecutive systems, whose complex structure can generate non-intuitive aging patterns. The proposed measure therefore serves as a useful tool for characterizing and comparing dynamic uncertainty across different operational conditions and system configurations, providing insights beyond those offered by classical reliability metrics.
In the next theorem, we investigate how the residual extropy of the conditional consecutive system lifetime varies with the components’ aging properties.
Theorem 11. For all n ≤ 2r, if X is IFR (DFR), then J(T^t_{r|n:G}) is decreasing (increasing) in t.
Proof. The proof for the IFR case extends in the same manner to the DFR case. Accordingly, f(F^{−1}(1 − (1 − w)F̄(t))) = (1 − w)F̄(t) λ(F^{−1}(1 − (1 − w)F̄(t))), where λ(x) = f(x)/F̄(x) is the hazard rate function of X. On the other hand, one can conclude that F^{−1}(1 − (1 − w)F̄(t)) is increasing in t, for all 0 < w < 1, and hence, if X is IFR, then λ(F^{−1}(1 − (1 − w)F̄(t))) is increasing in t. This implies that Equation (21) can be represented as

  J(T^t_{r|n:G}) = −(1/2) ∫_0^1 [g_{r|n}(w)]² (1 − w) λ(F^{−1}(1 − (1 − w)F̄(t))) dw,

for all t > 0. This implies that J(T^t_{r|n:G}) is decreasing in t; this completes the proof. □
The following example illustrates the theoretical results established in Theorems 10 and 11.
Example 4. Let us assume a linear consecutive 3-out-of-n:G system with lifetime T^t_{3|n:G}. Figure 5 presents the configuration of this system, in which system reliability depends on the successful operation of at least three contiguous components.
- (i)
Assume that the component lifetimes are i.i.d. Pareto Type II with the survival function

  F̄(x) = (1 + x)^{−θ},  x > 0,  θ > 0.

Using this, it follows that

  J(T^t_{3|n:G}) = −(θ/(2(1 + t))) ∫_0^1 [g_{3|n}(w)]² (1 − w)^{1+1/θ} dw.

It follows from Figure 6 that J(T^t_{3|n:G}) is a monotonically increasing function with respect to both time t and the model parameter. This result suggests that as time t increases, the uncertainty about the remaining lifetime of the system, represented by J(T^t_{3|n:G}), increases. Additionally, we observe that the distribution of the system component lifetimes exhibits the DFR property.
- (ii)
Assume X follows a Weibull distribution with shape parameter α and the given survival function

  F̄(x) = e^{−x^α},  x > 0.

Upon algebraic simplification, we arrive at the following expression:

  J(T^t_{3|n:G}) = −(α/2) ∫_0^1 [g_{3|n}(w)]² (1 − w) (t^α − log(1 − w))^{(α−1)/α} dw.

An explicit expression for the aforementioned relation is intractable. Consequently, a numerical analysis is employed. Figure 7 shows J(T^t_{3|n:G}) with respect to time t for several values of α. In accordance with Theorem 11, the function J(T^t_{3|n:G}) exhibits an increasing trend for DFR cases (α < 1) and a decreasing trend for IFR cases (α > 1). These findings are graphically represented in Figure 7.
The next theorem shows the relationship between J(T^t_{r|n:G}) and J(T_{r|n:G}) under specific aging conditions.
Theorem 12. For all n ≤ 2r, if X is IFR (DFR), then J(T^t_{r|n:G}) ≤ (≥) J(T_{r|n:G}) for all t ≥ 0.
Proof. According to Theorem 11, if X exhibits an IFR (DFR) property, then J(T^t_{r|n:G}) is a monotonically decreasing (increasing) function of time t. This implies that J(T^t_{r|n:G}) is consistently less than or equal to (greater than or equal to) J(T^0_{r|n:G}) = J(T_{r|n:G}) for all non-negative values of t; this completes the proof. □
The next theorem establishes a comparison of the stochastic ordering properties of conditional residual extropy lifetimes in consecutive r-out-of-n:G systems, under the condition that all components operate beyond time t > 0.
Theorem 13. Let T^t_1 and T^t_2 denote two residual lifetimes of consecutive r-out-of-n:G systems having i.i.d. component lifetimes X1, …, Xn and Y1, …, Yn from cdfs F and G, respectively. If X ≤hr Y and X or Y is IFR, then J(T^t_1) ≤ J(T^t_2) for all t > 0.
Proof. Recalling Equation (21), it is sufficient to show that X is stochastically dominated by Y in the dispersive order. Leveraging the assumption that X ≤hr Y and that one of these random variables has the IFR property, it follows, by directly applying the proof technique outlined in Theorem 5 of [34], that J(T^t_1) ≤ J(T^t_2), which completes the proof. □
Example 5. Assume two residual lifetimes T^t_1 and T^t_2 based on n i.i.d. component lifetimes X1, …, Xn and Y1, …, Yn for t > 0, with the survival functions F̄ and Ḡ, respectively. Let X have a linear failure rate function with the survival function given by

  F̄(x) = e^{−x−x²/2},  x > 0.

Further assume that Y has an exponential distribution with the survival function Ḡ(x) = e^{−x}, x > 0. It can be seen that the hazard rate function of X is λX(x) = 1 + x and the hazard rate function of Y is λY(x) = 1. Since λY(x) ≤ λX(x) for all x ≥ 0, it follows that X ≤hr Y. Furthermore, both X and Y have the IFR property (the exponential distribution being both IFR and DFR). Consequently, Theorem 13 implies that J(T^t_1) ≤ J(T^t_2) for all t > 0.
Finally, we demonstrate that the extropy of the lifetime of consecutive (n − 1)-out-of-n:G systems, under the assumption that all components are operational at time t, uniquely determines the distribution function. To this end, our analysis focuses on a specific system configuration: the linear consecutive (n − 1)-out-of-n:G system, subject to the condition n ≤ 2(n − 1), where n ranges from 2 to ∞. For this purpose, we first recall the Müntz–Szász theorem, as presented in Higgins [35], which will be employed to establish the main results that follow.
Lemma 4. For an integrable function
on the finite interval if , then for almost all , where is a strictly increasing sequence of positive integers satisfying .
It is worth pointing out that Lemma 4 is a well-established result in functional analysis, stating that the set $\{x^{n_j},\, j \ge 1\}$ constitutes a complete sequence. Notably, Hwang and Lin [36] expanded the scope of the Müntz–Szász theorem to the functions $g^{n_j}(x)$, where $g(x)$ is both absolutely continuous and monotonic over the interval $(a, b)$. This lemma leads us to characterize the parent distribution uniquely using the residual extropy of the system lifetime.
Theorem 14. Let $T_1$ and $T_2$ denote two residual lifetimes of consecutive r-out-of-n:G systems with $n \le 2r$, having i.i.d. component lifetimes with cdfs $F$ and $G$, respectively. Then $F$ and $G$ belong to the same family of distributions if and only if, for fixed $t \ge 0$, the residual extropies of $T_1$ and $T_2$ coincide for every admissible $n$.
Proof. We prove only the sufficiency part, since the necessity is trivial. Relation (21) for the lifetime $T_1$ can be rewritten accordingly for all $t \ge 0$ and every admissible $n$; the same argument also holds for $T_2$. Given the assumption that the residual extropies of the two systems coincide, Equation (23) yields an equality of the corresponding integral representations. Thus, the difference of the two integrands, integrated against powers determined by $n$, vanishes. By a suitable change of variable, Equation (24) can be rewritten as an integral over $(0, 1)$ against the complete sequence of Lemma 4. Applying Lemma 4 to this integrand, one concludes that it vanishes almost everywhere, or equivalently that the transformed quantile functions of $F$ and $G$ agree on $(0, 1)$. Since this holds for all admissible $n$ and all $t \ge 0$ (the same argument applies to $T_2$), letting $t$ tend to the left endpoint of the support shows that $F$ and $G$ coincide, demonstrating that $T_1$ and $T_2$ have the same distribution functions, which completes the proof. □
5. Numerical Analysis
This section presents a simulation study of residual extropy in consecutive r-out-of-n:G systems, analyzed via the maximum likelihood estimator (MLE). To this aim, we consider random samples of size N from the exponential, Rayleigh, and Lomax distributions, and obtain the MLE of the model parameter. The results are shown in Table 2.
By applying Equations (1), (2), and (4), the corresponding formulation for the residual extropy of these systems is obtained for all t ≥ 0. To evaluate the efficacy of the proposed estimator when applied to simulated data from the three distributions, we calculate both its average bias and root mean squared error (RMSE). By exploiting the invariance property of the MLE, an estimate of the residual extropy under these three distributions can be obtained directly by plugging the MLE into this expression, with the pdf and cdf of the fitted distribution evaluated at the MLE of the parameter. To assess the performance of the estimator, we computed the bias and RMSE for different sample sizes (N = 20, 30, 40, 50) and various configurations of the system parameters n and r. Each case was evaluated using 5000 independent replications, and the results are summarized in Tables A1–A9, which are provided in Appendix A.
The simulation results presented in this study provide valuable insights into how the accuracy of residual extropy estimation, measured by bias and RMSE, is influenced by three key factors: the total sample size N, the number of components of the system n, and the number of working components r. Our findings consistently demonstrate that the estimators for the three distributions are asymptotically well-behaved: both bias and RMSE decline monotonically as the total sample size N increases across all configurations of n and r. This pattern supports the theoretical property of consistency, indicating that the estimator converges to the true value as more data become available. A second salient pattern is the strong influence of the number of working components r. When both N and n are fixed, the bias and RMSE of the estimators increase with r for the exponential and Rayleigh distributions, whereas for the Lomax distribution the pattern is reversed. This result underscores the role of the number of working components r in determining estimator performance, particularly in small-sample settings. Nonetheless, the present findings provide strong empirical support for the reliability of the proposed estimator under realistic finite-sample conditions and clarify the interplay between sample size and number of working components in determining estimation accuracy.
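The bias/RMSE pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact computation: it assumes exponential component lifetimes with rate λ = 1 and, since the system-level residual extropy expression is distribution-specific, uses the component-level residual extropy of the exponential, J(t) = −λ/4 (constant in t by memorylessness), as the target quantity; the plug-in step via MLE invariance is the same as in the text.

```python
import numpy as np

rng = np.random.default_rng(12345)

def residual_extropy_exp(lam):
    # Residual extropy of an exponential lifetime: by memorylessness the
    # residual life is again exponential, so J(t) = -(1/2) * int f^2 = -lam/4.
    return -lam / 4.0

true_lam = 1.0
true_J = residual_extropy_exp(true_lam)

reps = 5000
for N in (20, 30, 40, 50):
    est = np.empty(reps)
    for i in range(reps):
        sample = rng.exponential(scale=1.0 / true_lam, size=N)
        lam_hat = 1.0 / sample.mean()           # MLE of the rate parameter
        est[i] = residual_extropy_exp(lam_hat)  # plug-in via MLE invariance
    bias = est.mean() - true_J
    rmse = np.sqrt(np.mean((est - true_J) ** 2))
    print(f"N={N}: bias={bias:+.4f}, RMSE={rmse:.4f}")
```

As in the reported tables, both bias and RMSE shrink as N grows, reflecting consistency of the plug-in estimator.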
It is important to note that the maximum likelihood estimator (MLE) derived in this study relies on exponential lifetime assumptions, which ensure analytical tractability and closed-form expressions. However, in practice, deviations from exponentiality may affect both efficiency and small-sample performance. To address this, robustness checks can be carried out by applying the estimator under alternative lifetime models such as Weibull, gamma, or Lomax, and comparing the resulting bias and variance. In addition, bootstrap resampling offers a distribution-free approach to assess variability and construct confidence intervals, while Bayesian methods provide a flexible alternative that incorporates prior information and yields full posterior distributions for residual extropy. These considerations suggest that, although the exponential case serves as a useful benchmark, complementary approaches are available to extend the robustness and applicability of the proposed framework.
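As an illustration of the bootstrap option mentioned above, the following sketch builds a percentile confidence interval for a plug-in residual extropy estimate. The data, the exponential working model, and the component-level target J = −λ/4 are all assumptions made for this example, not the paper's computation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated lifetimes standing in for observed data (assumed exponential, rate 1).
data = rng.exponential(scale=1.0, size=40)

def extropy_hat(sample):
    # Plug-in estimate under the exponential working model: J = -lam/4,
    # with lam estimated by its MLE 1/mean (a sketch, not the paper's estimator).
    return -(1.0 / sample.mean()) / 4.0

B = 2000
boot = np.empty(B)
for b in range(B):
    resample = rng.choice(data, size=data.size, replace=True)
    boot[b] = extropy_hat(resample)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"point estimate: {extropy_hat(data):.4f}")
print(f"95% percentile bootstrap CI: [{lo:.4f}, {hi:.4f}]")
```

The same resampling loop applies unchanged to other working models (Weibull, gamma, Lomax): only `extropy_hat` needs to be swapped out.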
We now illustrate the performance of the estimator on real data, assuming an exponential distribution.
Example 6. The air conditioning system in commercial aircraft, such as the Boeing 720, plays a vital role in ensuring passenger comfort and cooling avionic components. Failures in this system can lead to operational disruptions and safety issues, making accurate reliability modeling crucial. The exponential distribution is often employed in these scenarios, based on the assumption of a constant hazard rate, which is typically applicable during the mid-life phase of complex systems. In this context, we examine the performance of the air conditioning equipment in the Boeing 720 using a dataset comprising 35 observed failure times, which supports the applicability of exponential modeling. Shanker et al. [37] investigated the suitability of the exponential distribution for analyzing this type of reliability data. The observed failure times are as follows: Dataset: 11, 35, 49, 170, 329, 381, 708, 958, 1062, 1167, 1594, 1925, 1990, 2223, 2327, 2400, 2451, 2471, 2551, 2565, 2568, 2694, 2702, 2761, 2831, 3034, 3059, 3112, 3214, 3478, 3504, 4329, 6367, 6976, 7846, 13403.
Jose and Sathar [38] analyzed this dataset and concluded that the exponential distribution provides an adequate fit, estimating the failure rate accordingly. In the present study, we evaluate the estimation of the residual extropy of the consecutive r-out-of-n:G system under the exponential model with the estimated failure rate.
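For reference, under the exponential model the MLE of the failure rate is the reciprocal of the sample mean, so it can be computed directly from the dataset listed in Example 6 (the estimate reported by Jose and Sathar [38] is elided above and not reproduced here):

```python
# Exponential MLE for the Boeing 720 air-conditioning failure-time data.
data = [11, 35, 49, 170, 329, 381, 708, 958, 1062, 1167, 1594, 1925, 1990,
        2223, 2327, 2400, 2451, 2471, 2551, 2565, 2568, 2694, 2702, 2761,
        2831, 3034, 3059, 3112, 3214, 3478, 3504, 4329, 6367, 6976, 7846,
        13403]

n = len(data)
total = sum(data)
lam_hat = n / total  # MLE of the exponential rate: n / sum of failure times
print(f"n = {n}, mean = {total / n:.1f}, lambda_hat = {lam_hat:.6f}")
```

By MLE invariance, plugging `lam_hat` into the residual extropy expression yields the empirical estimates compared in Table 3.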
Table 3 reports both the theoretical values and their empirical counterparts computed directly from the observed failure data. Several important observations emerge from Table 3: across all values of t, the empirical estimates closely track the theoretical predictions, supporting the adequacy of the exponential model for this dataset. As the required number of functioning components r increases (from 3 to 6), the residual extropy decreases monotonically for any fixed t. This is intuitive: demanding more components to survive simultaneously reduces system uncertainty. The empirical estimates reflect this trend consistently, showing close agreement between the theoretical and empirical estimators.