On Non-Occurrence of the Inspection Paradox

by Diana Rauwolf 1,* and Udo Kamps 2
1 Department of Mathematics, RWTH Aachen University, D-52056 Aachen, Germany
2 Institute of Statistics, RWTH Aachen University, D-52056 Aachen, Germany
* Author to whom correspondence should be addressed.
Stats 2024, 7(2), 389-401; https://doi.org/10.3390/stats7020024
Submission received: 18 March 2024 / Revised: 19 April 2024 / Accepted: 23 April 2024 / Published: 24 April 2024
(This article belongs to the Section Applied Stochastic Models)

Abstract: The well-known inspection paradox or waiting time paradox states that, in a renewal process, the inspection interval is stochastically larger than a common interarrival time having a distribution function F, where the inspection interval is given by the particular interarrival time containing the specified time point of process inspection. The inspection paradox may also be expressed in terms of expectations, where the order is strict, in general. A renewal process can be utilized to describe the arrivals of vehicles, customers, or claims, for example. As the inspection time may also be considered a random variable T with a left-continuous distribution function G independent of the renewal process, the question arises as to whether the inspection paradox inevitably occurs in this general situation, apart from in some marginal cases with respect to F and G. For a random inspection time T, it is seen that non-trivial choices lead to non-occurrence of the paradox. In this paper, a complete characterization of the non-occurrence of the inspection paradox is given with respect to G. Several examples and related assertions are shown, including the deterministic time situation.

1. Introduction

The inspection paradox, also known as the waiting time paradox or renewal paradox, describes a paradoxical effect where observing a running renewal process with events occurring at specific times leads to atypical findings, in the sense that the observed time interval between events may be longer than the other intervals. For example, this happens when the events in question are incoming claims of an insurance company and we arbitrarily select a time to observe the process (without knowledge of any claim arrival times). The time we select specifies an interval between two successive claims and we record the length of this time interval. It is stochastically larger than a regular (unobserved) interval between two successive incoming claims.
A renewal process can be used to model the times of incoming claims, where the waiting times between successive claims, called interarrival times, are modeled as realizations of independent and identically distributed non-negative random variables with cumulative distribution function F, for example. Thus, in a realized renewal process based on a non-degenerate distribution, we observe interarrival times of different lengths and, when inspecting the process at a certain time t, it will be very likely to observe a comparably larger time interval (cf. Feller [1], p. 13).
This paradoxical effect arises in various scenarios, such as waiting for a bus or a train (cf. Feller [1], p. 12–14, Masuda and Porter [2]), observing the lifetimes of identical batteries (cf. Ross [3], p. 460), in connection with sampling bias (cf. Stein and Dattero [4]), and in stochastic resetting (cf. Pal et al. [5]). In a medical context, Jenkins et al. [6] discussed how the perception of regularly occurring phase singularities, which are indicators for cardiac fibrillation, is influenced by the inspection paradox. They found that visual observation may systematically oversample phase singularities that last longer (potentially leading to errors) and that longer windows of observation can minimize the effect.
Much attention has been paid to the study of the inspection paradox, its properties, and implications (see, e.g., Gakis and Sivazlian [7], Angus [8], Ross [9]). In particular, considering a random variable for the time of inspection instead of a deterministic time leads to insights regarding the quantification of the effect (see, e.g., Kamps [10]). Herff et al. [11] derived an inequality for the length of the inspection interval with a random time and Rauwolf and Kamps [12] gave a general representation for the expected inspection interval length, which served as the basis for the main results in this work. Several explicit examples with random time and applications to earthquake and geyser data can be found in the literature (see, e.g., Liu and Peña [13], Rauwolf and Kamps [12]).
In the case of a deterministic inspection time t, the inspection paradox does not occur for a trivial choice of interarrival times having a degenerate distribution, i.e., for deterministic interval lengths. Moreover, it does not occur if the smallest possible interarrival time is larger than t; i.e., if the inspection is performed prior to the first event. However, for a random inspection time T with a left-continuous distribution function G, it is seen that there are examples with non-trivial choices of the distribution functions F and G where the paradox does not appear, meaning that the length of the inspection interval is also distributed as F.
In this general situation, we give a complete characterization of the non-occurrence of the inspection paradox with respect to the choice of G, as well as results for F in the classical case of degenerate G. The use of an additional random inspection time also leads to a conclusion for the classical case with deterministic time, where, apart from trivial cases, non-occurrence of the paradox only happens for degenerate interarrival distributions.
In Section 2, we briefly recap the classical inspection paradox. Renewal processes with a random inspection time T are discussed in Section 3, along with examples. Section 4 contains a complete characterization of non-occurrence of the inspection paradox with respect to the distribution function G of T. The case of a degenerate time t is studied in Section 5, where situations with non-occurrence of the inspection paradox are shown to lead to degenerate interarrival times.

2. The Classical Inspection Paradox Inequality

In order to formally introduce the inspection paradox, we first briefly recapitulate concepts of renewal processes. An introduction to renewal processes can be found in, e.g., Cox [14], Feller [1], Ross [3], Pinsky and Karlin [15], Mitov and Omey [16], and Kulkarni [17].
Let $X_1, X_2, \ldots$ be a sequence of non-negative, independent, and identically distributed (iid) random variables on some probability space with a common distribution function $F$, $F(0) < 1$. These random variables will be called interarrival times in the following. Then, the sequence of occurrence times $(S_n)_{n \in \mathbb{N}_0}$ given by
$$S_0 = 0 \quad \text{and} \quad S_n = \sum_{i=1}^{n} X_i, \quad n \in \mathbb{N},$$
defines a renewal process (see Figure 1). The corresponding renewal counting process is denoted by $(N(t))_{t \geq 0}$, where
$$N(t) := \sum_{n=1}^{\infty} \mathbf{1}_{[0,t]}(S_n), \quad t \geq 0,$$
counts the number of occurrences up to time $t$. In particular, $N(t) \geq n \Leftrightarrow S_n \leq t$ holds for all $n \in \mathbb{N}_0$ and for all $t \geq 0$.
When we inspect a renewal process at some fixed time $t > 0$, exactly $N(t)$ renewals have already taken place. The last renewal prior to $t$ was at time $S_{N(t)}$ and the subsequent renewal will occur at time $S_{N(t)+1}$. The renewal interval covering $t$ is referred to as the “inspection interval” and its length is given by $X_{N(t)+1} = S_{N(t)+1} - S_{N(t)}$ (see Figure 2). Representations for the survival function and the expected value of the inspection interval length $X_{N(t)+1}$ can be found in the literature (see, e.g., Gakis and Sivazlian [7], pp. 44–45).
The inspection paradox of renewal theory then states that
$$P(X_{N(t)+1} > x) \geq P(X_1 > x) \quad \text{for all } x \geq 0 \text{ and } t \geq 0 \qquad (1)$$
(cf. Angus [8], Ross [3]), which means that the inspection interval is stochastically larger than a common renewal interval, i.e., $X_{N(t)+1} \geq_{\mathrm{st}} X_1$. Consequently, in terms of expected values, the mean inspection interval length exceeds the mean length of any regular renewal interval, in the sense that
$$\mathrm{E}\, X_{N(t)+1} \geq \mathrm{E}\, X_1 \quad \text{for all } t \geq 0. \qquad (2)$$
No paradoxical effect occurs in the trivial case where the interarrival times have a degenerate distribution, i.e., $X_i \sim \epsilon_a$, $i \in \mathbb{N}$, for some $a > 0$, as equality holds in the inspection paradox, i.e., $P(X_{N(t)+1} > x) = \mathbf{1}_{[0,a)}(x) = P(X_1 > x)$ for all $x \geq 0$, and all events in the corresponding renewal process take place perfectly on time. In other words, all time intervals (including the inspected interval) have precisely the same length. Up to this point, it has remained open whether this is the only example in which equality holds for fixed $t$. The answer will be provided in the following sections by means of a generalization to a random inspection time $T$, for which the equality in (1) and (2) is characterized.
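For a concrete sense of the strict inequality in (1) and (2) in a non-degenerate case, the following Monte Carlo sketch (Python with NumPy; the Poisson process with rate 1 and all parameter values are assumptions chosen for illustration, not taken from the paper) estimates the expected inspection interval length at a fixed time; for exponential interarrival times with mean 1, the estimate should come out close to 2, i.e., roughly twice $\mathrm{E}\, X_1$.

```python
import numpy as np

rng = np.random.default_rng(0)

def inspection_interval_length(t, rate=1.0):
    """Length of the interarrival interval covering time t in a renewal
    process with Exp(rate) interarrival times (a Poisson process)."""
    s_prev, s = 0.0, 0.0
    while s <= t:                     # generate arrivals until one falls beyond t
        s_prev = s
        s += rng.exponential(1.0 / rate)
    return s - s_prev                 # S_{N(t)+1} - S_{N(t)}

t = 10.0
samples = np.array([inspection_interval_length(t) for _ in range(20000)])
print("estimated E[X_{N(t)+1}]:", samples.mean())   # close to 2 for moderately large t
print("E[X_1] = 1.0 for Exp(1) interarrival times")
```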

3. The Inspection Paradox with a Random Inspection Time

Instead of a fixed point in time $t \geq 0$, we can also consider a random variable to model the time of inspection. Let $T$ be such a random inspection time, i.e., a non-negative random variable that is independent of the renewal process and has a left-continuous distribution function $G$ given by $G(t) = P(T < t)$, $t \in \mathbb{R}$.
Then, (1) implies that the paradoxical effect occurs as in the classical inspection paradox with
$$P(X_{N(T)+1} > x) \geq P(X_1 > x) \quad \text{for all } x \geq 0, \qquad (3)$$
i.e., $X_{N(T)+1} \geq_{\mathrm{st}} X_1$, and from (2) we conclude
$$\mathrm{E}(X_{N(T)+1}) \geq \mathrm{E}(X_1) \qquad (4)$$
for the inspection paradox in terms of expectations.
In Section 2, we saw that equality in (1) in the fixed time case, i.e., for the choice $T \sim \epsilon_t$, happens when the interarrival times have a degenerate distribution. The following example shows that, in a trivial case but for non-degenerate distributions of $X_1$ and $T$, equality in (3) and (4) holds true. Thus, introducing a random inspection time can lead to other cases with equality in (3) and (4).
Example 1.
Consider a Binomial renewal process with interarrival times having a geometric distribution on $\mathbb{N}$, i.e.,
$$P(X_i = k) = p(1-p)^{k-1}, \quad k \in \mathbb{N}, \ i \in \mathbb{N},$$
and a random inspection time $T$ with a two-point distribution, i.e., with $P(T = 0) = P(T = 1/2) = 1/2$. Since $N(t) = 0$ for $t < 1$, we have $X_{N(t)+1} = X_1$, and thus $X_{N(T)+1} = X_1$ holds. The situation is trivial in the sense that the inspection is made prior to $X_1$. In particular, the distribution function $G$ is constant on the support of the interarrival times with $G(k) = P(T < k) = 1$ for all $k \in \mathbb{N}$.
On the other hand, in many well-known examples, Inequality (1) is strict for t > 0 and so is (3); e.g., this is the case for a Poisson process with exponentially distributed interarrival times. The same holds true in connection with random inspection times; we refer to Liu and Peña [13] who discussed the choice of an exponentially distributed random inspection time.
In fact, the gap between the expected inspection interval length and a common expected interval length can be quantified for any choice of distribution. Rauwolf and Kamps [12] derived the following representation
$$\mathrm{E}\,\varphi(X_{N(T)+1}) = \mathrm{E}\,\varphi(X_1) + \sum_{n=1}^{\infty} \mathrm{Cov}\big(\varphi(X_n), G(S_n)\big), \qquad (5)$$
where $\varphi : [0,\infty) \to [0,\infty)$ is a measurable function such that all expected values and integrals are well-defined and exist finitely. If $\varphi$ is monotone non-decreasing, then all covariance terms are non-negative. In particular, this leads to a representation for the expected inspection interval length by choosing $\varphi(x) = x$, $x \geq 0$,
$$\mathrm{E}\, X_{N(T)+1} = \mathrm{E}\, X_1 + \sum_{n=1}^{\infty} \mathrm{Cov}\big(X_n, G(S_n)\big), \qquad (6)$$
and to a representation for $P(X_{N(T)+1} > z)$ by choosing $\varphi(x) = \mathbf{1}_{(z,\infty)}(x)$, $x \geq 0$, $z \geq 0$. Thus, Inequalities (3) and (4) can be derived from (5), and choosing $T \sim \epsilon_t$ leads to the formulae in the classical case from Section 2.
As in Example 1, the introduction of a random inspection time $T$ leads to various other possible and non-degenerate cases with non-occurrence of the inspection paradox. Concerning the identification of these respective situations, Formula (5) offers the option to examine cases where all covariance terms are equal to zero. On the other hand, Formula (5) also facilitates the study of situations with possibly large gaps between $\mathrm{E}\, X_{N(T)+1}$ and $\mathrm{E}\, X_1$, say. For examples, we refer to [12].
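As a numerical illustration of Representation (6), the following sketch (Python with NumPy; exponential interarrival times and an exponentially distributed inspection time are an assumed example, not a choice made in the paper) compares a direct simulation of $\mathrm{E}\, X_{N(T)+1}$ with a truncated version of the right-hand side of (6); the two estimates should roughly agree.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim, n_terms = 100000, 40

# Assumed example: Exp(1) interarrival times and an Exp(1) inspection time T,
# so that G(t) = 1 - exp(-t) for t > 0 (left-continuous version).

# Left-hand side of (6): simulate the inspection interval length directly.
def simulate_inspection_length():
    T = rng.exponential(1.0)
    s_prev, s = 0.0, 0.0
    while s <= T:
        s_prev = s
        s += rng.exponential(1.0)
    return s - s_prev

lhs = np.mean([simulate_inspection_length() for _ in range(50000)])

# Right-hand side of (6): E[X_1] + sum of Cov(X_n, G(S_n)), truncated after n_terms
# terms (the covariances decay quickly because G(S_n) approaches 1).
X = rng.exponential(1.0, size=(n_sim, n_terms))
S = X.cumsum(axis=1)
G_S = 1.0 - np.exp(-S)
cov_terms = [np.cov(X[:, n], G_S[:, n])[0, 1] for n in range(n_terms)]
rhs = 1.0 + sum(cov_terms)            # E[X_1] = 1 for Exp(1) interarrival times

print("simulated E[X_{N(T)+1}]          :", lhs)
print("E[X_1] + sum of covariance terms :", rhs)
```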
In the following discussion, we will need the left and right endpoints of the support of the interarrival times, the formal introduction of which is given in Notation 1.
Notation 1.
Let $X$ be a random variable with right-continuous distribution function $F$. Let $F^{-1}$ be the quantile function defined by $F^{-1}(y) := \inf\{x : F(x) \geq y\}$, $y \in (0,1)$. Then, the left and right endpoints $\alpha$ and $\omega$ of the support of $X$ are denoted by
$$\alpha := \lim_{x \to 0+} F^{-1}(x) \quad \text{and} \quad \omega := \lim_{x \to 1-} F^{-1}(x).$$
The support of $X$ then lies in the interval $[\alpha, \omega]$ if $\omega < \infty$, or in $[\alpha, \infty)$ otherwise, and will be denoted by $\mathrm{supp}(X)$.
The following result by Behboodian [18] given in Lemma 1 can be utilized to determine whether a covariance of two functions of a random variable is zero. In particular, this will be applied to determine whether the covariance terms in (5) are positive or zero. An alternative proof can be found in Rauwolf [19].
Lemma 1
(cf. [18], Theorem 2). Let $X$ be a non-negative and non-degenerate random variable with support $\mathcal{S} := \mathrm{supp}(X)$ and probability distribution $P^X$. Let $h_1 : [0,\infty) \to \mathbb{R}$ and $h_2 : [0,\infty) \to \mathbb{R}$ be two monotone non-decreasing, measurable functions such that $\mathrm{Cov}(h_1(X), h_2(X))$ exists finitely. Then,
$$h_1|_{\mathcal{S}} \equiv c_1 \ \text{ or } \ h_2|_{\mathcal{S}} \equiv c_2 \ [P^X] \ \text{ for some } c_1, c_2 \in \mathbb{R} \quad \Longleftrightarrow \quad \mathrm{Cov}(h_1(X), h_2(X)) = 0.$$
If none of the functions $h_1$ and $h_2$ in Lemma 1 are constant on $\mathcal{S}$, then the covariance of $h_1(X)$ and $h_2(X)$ is strictly positive.
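A quick numerical illustration of Lemma 1 (a Python sketch; the uniform distribution and the particular step functions are assumed choices): a non-decreasing function that is constant on the support of $X$ has empirical covariance near zero with any other non-decreasing function of $X$, while a function that jumps inside the support produces a strictly positive covariance.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.uniform(1.0, 1.3, size=200000)         # support of X is [1, 1.3]

h1 = X                                         # not constant on the support
h2_const = np.where(X <= 2.0, 0.7, 1.0)        # constant (= 0.7) on the support
h2_jump  = np.where(X <= 1.15, 0.3, 0.8)       # jumps inside the support

print("Cov(h1(X), h2 constant on support):", np.cov(h1, h2_const)[0, 1])  # ~ 0
print("Cov(h1(X), h2 jumping on support) :", np.cov(h1, h2_jump)[0, 1])   # > 0
```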
Based on the representation with the covariance terms, we present an example of a renewal process with absolutely continuous interarrival times and a particular choice for the distribution function $G$ of the random inspection time such that $\mathrm{E}\, X_{N(T)+1} = \mathrm{E}\, X_1$ holds.
Example 2.
Let the interarrival times $X_i$, $i \in \mathbb{N}$, have a uniform distribution on the interval $[1, 1.3]$ with a density function $f$ given by $f(x) = \frac{10}{3}\,\mathbf{1}_{[1,1.3]}(x)$, $x \in \mathbb{R}$. Furthermore, let the random inspection time $T$ have the left-continuous distribution function $G$ given in Figure 3. The function $G$ is constant on the interval $[1, 1.3]$, i.e., on the support of $X_1$, and therefore, an application of Lemma 1 yields $\mathrm{Cov}(X_1, G(X_1)) = 0$. Similarly, $G$ is constant on the support of the occurrence times $S_2, S_3, \ldots$, from which equality in Equation (5) follows for any choice of $\varphi$. In between the supports of $S_n$ and $S_{n+1}$, $n \in \mathbb{N}$, the form of the distribution function $G$ (left-continuous version) is arbitrary.
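Since the specific $G$ of Figure 3 is not reproduced here, the following simulation sketch (Python; the three-point inspection time is an assumed choice consistent with the description above, not the $G$ of Figure 3) uses a $T$ that is uniform on $\{0.5, 1.5, 2.8\}$; its distribution function is constant on each support $[n, 1.3n]$ of $S_1, S_2, S_3$ and equals 1 on the supports of all later occurrence times, so the estimated expected inspection interval length should match $\mathrm{E}\, X_1 = 1.15$.

```python
import numpy as np

rng = np.random.default_rng(2)

def inspection_length(T):
    """Length of the interarrival interval covering the inspection time T."""
    s_prev, s = 0.0, 0.0
    while s <= T:
        s_prev = s
        s += rng.uniform(1.0, 1.3)     # interarrival times uniform on [1, 1.3]
    return s - s_prev

# Assumed inspection time: uniform on {0.5, 1.5, 2.8}; all three points lie in the
# gaps between the supports of the occurrence times S_1, S_2, S_3.
Ts = rng.choice([0.5, 1.5, 2.8], size=50000)
lengths = np.array([inspection_length(T) for T in Ts])
print("estimated E[X_{N(T)+1}]:", lengths.mean())   # should be close to E[X_1] = 1.15
```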
The equality in Example 1 can also be derived by applying Lemma 1. This approach facilitates finding new explicit examples with equality—especially in the case of random inspection times, which allow for other possibilities than in the classical inspection paradox with a degenerate time $T \sim \epsilon_t$.

4. Non-Occurrence of the Inspection Paradox

A general result regarding non-occurrence of the inspection paradox can be derived on the basis of Representation (5) with a random inspection time. This is realized in Theorem 1, and the result is applied to the special cases of equality in (3) and (4); see Section 4.2 and Remark 3, respectively. Either result can be used to decide whether a strict inequality holds for specific choices of the distribution functions $F$ and $G$. In particular, the condition for equality is easy to check and can therefore be utilized without calculating, e.g., the expected value of $X_{N(T)+1}$ explicitly.

4.1. General Results

This subsection is concerned with determining distributions for $X_1$ and $T$ with distribution functions $F$ and $G$, respectively, such that $\mathrm{E}\,\varphi(X_{N(T)+1}) = \mathrm{E}\,\varphi(X_1)$ holds, given that $\varphi$ is not a constant function. Lemma 2 serves as a key component in the discussion and states that, given the support endpoints of the interarrival times, we can explicitly calculate the smallest index $n$ for which the succeeding occurrence times $S_n$ have overlapping supports.
Throughout, $\alpha$ is called an atom of (the distribution of) $X_1$ if $P(X_1 = \alpha) > 0$.
Lemma 2.
Let the interarrival times have a left support endpoint $\alpha > 0$ and right support endpoint $\omega > \alpha$. Then, there exists a natural number $\kappa \in \mathbb{N}_0$ at which the supports of two consecutive occurrence times $S_n$ and $S_{n+1}$, $n \geq \kappa + 1$, overlap in the sense that $P([n\alpha, n\omega] \cap [(n+1)\alpha, (n+1)\omega]) > 0$. In particular, $\kappa$ is given by
$$\kappa = \begin{cases} \left\lfloor \dfrac{\alpha}{\omega - \alpha} \right\rfloor, & \text{if } \omega \neq \frac{k+1}{k}\,\alpha \text{ for all } k \in \mathbb{N}, \\[6pt] k, & \text{if } \omega = \frac{k+1}{k}\,\alpha \text{ for some } k \in \mathbb{N} \text{ and neither } \alpha \text{ nor } \omega \text{ is an atom of } X_1, \\[6pt] k - 1, & \text{if } \omega = \frac{k+1}{k}\,\alpha \text{ for some } k \in \mathbb{N} \text{ and } \alpha \text{ or } \omega \text{ is an atom of } X_1. \end{cases}$$
Proof. 
Assuming that there is no point of overlap for any $n \in \mathbb{N}$, i.e., that
$$n\omega < (n+1)\alpha \quad \text{for all } n \in \mathbb{N},$$
leads to $\omega \leq \lim_{n \to \infty} \frac{n+1}{n}\,\alpha = \alpha$, which is a contradiction to the assumption that $\alpha < \omega$. Therefore, there exists a natural number for which the supports of successive occurrence times with large enough indices overlap. If $\omega \neq \frac{k+1}{k}\,\alpha$ for all $k \in \mathbb{N}$, then
$$n\omega - (n+1)\alpha > 0 \quad \Longleftrightarrow \quad n \geq \left\lfloor \frac{\alpha}{\omega - \alpha} \right\rfloor + 1.$$
If the right support endpoint $\omega$ is of the form $\omega = \frac{k+1}{k}\,\alpha$ for some $k \in \mathbb{N}$ and $\alpha$ or $\omega$ is an atom of $X_1$, then the first point of overlap is at $(k+1)\alpha = k\omega$, i.e., the supports of $S_k$ and $S_{k+1}$ touch ($\kappa = k - 1$). If neither $\alpha$ nor $\omega$ is an atom of $X_1$, then the first overlap happens for the supports of $S_{k+1}$ and $S_{k+2}$ (i.e., $\kappa = k$). □
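A small helper implementing the case distinction of Lemma 2 may be useful when checking examples; this is a Python sketch (the function name, the atom flags, and the tolerance-based test for $\omega = \frac{k+1}{k}\alpha$ are my own choices, and only the case $0 < \alpha < \omega < \infty$ is covered).

```python
from math import floor, isclose

def kappa(alpha, omega, alpha_is_atom=False, omega_is_atom=False):
    """Index kappa from Lemma 2: for all n >= kappa + 1 the supports
    [n*alpha, n*omega] and [(n+1)*alpha, (n+1)*omega] of S_n and S_{n+1} overlap.
    Assumes 0 < alpha < omega < infinity."""
    ratio = alpha / (omega - alpha)
    k = round(ratio)
    # omega == (k+1)/k * alpha  is equivalent to  alpha / (omega - alpha) == k in N
    if k >= 1 and isclose(ratio, k):
        return k - 1 if (alpha_is_atom or omega_is_atom) else k
    return floor(ratio)

print(kappa(1.0, 1.3))                       # Example 2 (uniform on [1, 1.3]): 3
print(kappa(1.0, 2.0))                       # omega = 2*alpha, no atoms: 1
print(kappa(1.0, 2.0, alpha_is_atom=True))   # omega = 2*alpha, alpha an atom: 0
```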
The trivial case $\alpha = 0$ is excluded in Lemma 2, as the support of $X_1$, covered by the interval $[0, \omega]$ (or $[0, \infty)$), is a subset of the support of $S_n$, covered by $[0, n\omega]$ (or $[0, \infty)$), for all $n \in \mathbb{N}$. In this case, the supports of all occurrence times overlap.
We will now determine cases with equality of the expected values in (5), i.e., non-occurrence of the inspection paradox, under the general assumption that G is a left-continuous distribution function.
Theorem 1.
Let the interarrival times $(X_i)_{i \in \mathbb{N}}$ have left support endpoint $\alpha > 0$, finite right support endpoint $\omega > \alpha$, and let $\kappa$ be defined as in Lemma 2.
Let $\varphi : [0,\infty) \to [0,\infty)$ be a measurable, monotone non-decreasing function such that all expected values and integrals are well-defined and exist finitely. Furthermore, assume that $\varphi$ is not constant on $\mathcal{S}$ $P^{X_1}$-almost surely.
Let $T$ be a non-negative random variable with left-continuous distribution function $G$ that is independent of the interarrival times $(X_i)_{i \in \mathbb{N}}$.
Then, $\mathrm{E}\,\varphi(X_{N(T)+1}) = \mathrm{E}\,\varphi(X_1)$ is equivalent to
$$\text{(i)} \quad G(x) = \sum_{j=0}^{\kappa} g_j(x)\,\mathbf{1}_{(j\omega,\,(j+1)\alpha]}(x) + \sum_{j=1}^{\kappa} c_j\,\mathbf{1}_{(j\alpha,\,j\omega]}(x) + \mathbf{1}_{((\kappa+1)\alpha,\,\infty)}(x), \quad x \in \mathbb{R}, \ \text{ if } \alpha \text{ is not an atom of } X_1,$$
$$\text{(ii)} \quad G(x) = \sum_{j=0}^{\kappa} g_j(x)\,\mathbf{1}_{(j\omega,\,(j+1)\alpha]}(x) + \sum_{j=1}^{\kappa} c_j\,\mathbf{1}_{[j\alpha,\,j\omega]}(x) + \mathbf{1}_{[(\kappa+1)\alpha,\,\infty)}(x), \quad x \in \mathbb{R}, \ \text{ if } \alpha \text{ is an atom of } X_1,$$
where $0 \leq c_1 \leq \cdots \leq c_\kappa \leq 1$ are constants and $g_j : (j\omega, (j+1)\alpha] \to [0,1]$, $j = 0, \ldots, \kappa$, are functions such that $G$ is a left-continuous distribution function.
If $\omega = \infty$, then $\mathrm{E}\,\varphi(X_{N(T)+1}) = \mathrm{E}\,\varphi(X_1)$ is equivalent to $G(x) = g_0(x)\,\mathbf{1}_{(0,\alpha]}(x) + \mathbf{1}_{(\alpha,\infty)}(x)$, $x \in \mathbb{R}$, if $\alpha$ is not an atom of $X_1$, and to $G(x) = g_0(x)\,\mathbf{1}_{(0,\alpha]}(x) + \mathbf{1}_{[\alpha,\infty)}(x)$, $x \in \mathbb{R}$, with $g_0(\alpha) = 1$, if $\alpha$ is an atom of $X_1$.
Proof. 
With Equation (5), $\mathrm{E}\,\varphi(X_{N(T)+1}) = \mathrm{E}\,\varphi(X_1)$ is equivalent to
$$\sum_{n=1}^{\infty} \mathrm{Cov}\big(\varphi(X_n), G(S_n)\big) = 0 \quad \Longleftrightarrow \quad \mathrm{Cov}\big(\varphi(X_n), G(S_n)\big) = 0 \ \text{ for all } n \in \mathbb{N},$$
since $\varphi$ and $G$ are both monotone non-decreasing, and thus all covariances are non-negative. For $n = 1$, applying Lemma 1 yields
$$\mathrm{Cov}\big(\varphi(X_1), G(X_1)\big) = 0 \quad \Longleftrightarrow \quad G(x) = c_1 \ \text{ for } P^{X_1}\text{-almost all } x \in \mathcal{S}$$
for some constant $c_1 \in [0,1]$, due to the assumption that $\varphi$ is not constant on $\mathcal{S}$ $P^{X_1}$-almost surely. First, assume that $P(X_1 = \alpha) = 0$, i.e., $\alpha$ is not an atom of $X_1$. Since $G$ as a distribution function is monotone non-decreasing and assumed to be left-continuous, we have $G(x) = c_1$ for all $\alpha < x \leq \omega$. With the same arguments, we obtain for a general $n \geq 2$
$$\begin{aligned} \mathrm{Cov}\big(\varphi(X_n), G(S_n)\big) = 0 \quad &\Longleftrightarrow \quad \int_0^{\infty} \mathrm{Cov}\big(\varphi(X_n), G(x + X_n)\big)\, \mathrm{d}F^{*(n-1)}(x) = 0 \\ &\Longleftrightarrow \quad \mathrm{Cov}\big(\varphi(X_n), G(x + X_n)\big) = 0 \ \text{ for } P^{S_{n-1}}\text{-almost all } x \in \mathrm{supp}(S_{n-1}) \\ &\Longleftrightarrow \quad G(x + y) = c_n \ \text{ for } P^{X_n}\text{-almost all } y \in \mathcal{S} \text{ and for } P^{S_{n-1}}\text{-almost all } x \in \mathrm{supp}(S_{n-1}), \end{aligned}$$
for a constant $c_n \in [0,1]$, by using Lemma 1. This implies $G(x + y) = c_n$ for all $n\alpha < x + y \leq n\omega$, due to the monotonicity and continuity of the distribution function $G$. Furthermore, the sequence of constants $(c_n)_{n \in \mathbb{N}}$ needs to satisfy $c_1 \leq \cdots \leq c_n$ for all $n \in \mathbb{N}$ (since $G$ is monotone non-decreasing) and $\lim_{n \to \infty} c_n = 1$ (since $\lim_{x \to \infty} G(x) = 1$).
With Lemma 2, $\kappa + 1 \in \mathbb{N}$ is the index from which on the supports overlap, and thus $c_{n+1} = c_{n+2} = \cdots = 1$ for all $n \geq \kappa$. If $\omega = \infty$, then $\kappa = 0$, and thus the distribution function $G$ has to be of the form $G(x) = g_0(x)\,\mathbf{1}_{(0,\alpha]}(x) + \mathbf{1}_{(\alpha,\infty)}(x)$, $x \in \mathbb{R}$. Otherwise, the distribution function $G$ has to be of the form
$$G(x) = \begin{cases} 0, & x \leq 0, \\ g_0(x), & 0 < x \leq \alpha, \\ c_1, & \alpha < x \leq \omega, \\ g_1(x), & \omega < x \leq 2\alpha, \\ c_2, & 2\alpha < x \leq 2\omega, \\ \ \vdots & \\ c_\kappa, & \kappa\alpha < x \leq \kappa\omega, \\ g_\kappa(x), & \kappa\omega < x \leq (\kappa+1)\alpha, \\ 1, & x > (\kappa+1)\alpha, \end{cases} \qquad (7)$$
where $0 =: c_0 \leq c_1 \leq \cdots \leq c_\kappa \leq c_{\kappa+1} := 1$ and $g_j : (j\omega, (j+1)\alpha] \to [0,1]$, $j = 0, \ldots, \kappa$, are functions such that $c_j \leq g_j(x) \leq c_{j+1}$ for all $j\omega < x \leq (j+1)\alpha$, $j = 0, \ldots, \kappa$, and such that $g_0, \ldots, g_\kappa$ are monotone non-decreasing and left-continuous.
If $P(X_1 = \alpha) > 0$, then $G(x) = c_n$ necessarily has to hold for all $n\alpha \leq x \leq n\omega$, $n \in \mathbb{N}$. Thus, for $G$ as in (7) to be left-continuous at the points $\alpha, \ldots, (\kappa+1)\alpha$, the additional assumption $g_j((j+1)\alpha) = c_{j+1}$, $j = 0, \ldots, \kappa$, is needed.
For the opposite implication, noticing that $G$ as in (7) is constant on the support of all occurrence times and applying Lemma 1, we see that the covariances $\mathrm{Cov}(\varphi(X_n), G(S_n))$ are all equal to zero. This leads to equality of the expected values. □
We note that the constant $\kappa$ as introduced in Lemma 2 indicates the kind of support the interarrival times have. For example, $\kappa = 0$ can correspond to the case $0 < 2\alpha < \omega$, where $\omega$ may be infinite. In this case, the distribution of the random inspection time $T$ can be chosen as $T \sim \epsilon_0$, $T \sim \epsilon_\alpha$, or $T$ having a two-point distribution on $\{0, \alpha\}$, among others. On the other hand, $\omega$ must be finite for $\kappa \geq 1$.
Remark 1.
The case $X_1 \sim \epsilon_\alpha$ for $\alpha > 0$ (which would correspond to $\alpha = \omega$ in Theorem 1) is excluded, because it is already well known from the literature (cf. Section 2) that $X_{N(t)+1} =_{\mathrm{st}} X_1$ and that the expected values are always equal, i.e.,
$$\mathrm{E}\,\varphi(X_{N(t)+1}) = \mathrm{E}\,\varphi(X_1) \quad \text{for all } t \geq 0.$$
The equality remains when introducing a random variable $T$. This can also be derived with the following argument: If $X_1 \sim \epsilon_\alpha$, then $S_n \sim \epsilon_{n\alpha}$ for all $n \in \mathbb{N}$, which implies $\mathrm{Cov}(\varphi(X_n), G(S_n)) = 0$ for all $n \in \mathbb{N}$ and for any choice of the distribution function $G$, leading to equality for any $\varphi$.
Theorem 1 can be applied to any interarrival distribution. In particular, only the left support endpoint α and the right support endpoint ω of the interarrival times are of interest. Given a specific interarrival distribution and a distribution function G, Theorem 1 establishes whether or not the inspection paradox occurs.
Example 3.
The case $\omega = 2\alpha$ is a particular one (see Lemma 2 with $k = 1$). According to Theorem 1, where $\alpha$ is assumed not to be an atom of $X_1$, i.e., $\kappa = 1$, the inspection paradox does not occur if $G$ is given by
$$G(x) = g_0(x)\,\mathbf{1}_{(0,\alpha]}(x) + c_1\,\mathbf{1}_{(\alpha,2\alpha]}(x) + \mathbf{1}_{(2\alpha,\infty)}(x), \quad x \geq 0,$$
with $g_0$ and $c_1$ chosen such that $G$ is left-continuous.
Here, for $\alpha = 1$, $T$ may have a two-point distribution on $\{1, 2\}$ and $X_1$ may be uniformly distributed on the interval $(1, 2)$.
In the case of $\alpha$ being an atom of $X_1$, i.e., $\kappa = 0$, $G$ is given by
$$G(x) = g_0(x)\,\mathbf{1}_{(0,\alpha]}(x) + \mathbf{1}_{[\alpha,\infty)}(x), \quad x \geq 0,$$
and $g_0(\alpha) = 1$ has to be fulfilled.
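A quick empirical check of the first sub-example (a Python sketch; the equal weights $P(T = 1) = P(T = 2) = 1/2$ are an assumed concrete choice) compares the survival functions of $X_{N(T)+1}$ and $X_1$ at a few points; both should agree, anticipating the distributional equality stated in Corollary 1 below.

```python
import numpy as np

rng = np.random.default_rng(3)

def inspection_length(T):
    """Length of the interarrival interval covering the inspection time T."""
    s_prev, s = 0.0, 0.0
    while s <= T:
        s_prev = s
        s += rng.uniform(1.0, 2.0)     # X_i uniform on (1, 2): alpha = 1, omega = 2
    return s - s_prev

Ts = rng.choice([1.0, 2.0], size=50000)              # assumed two-point inspection time
insp = np.array([inspection_length(T) for T in Ts])
X1 = rng.uniform(1.0, 2.0, size=50000)

for z in [1.2, 1.5, 1.8]:
    print(f"z = {z}:  P(X_(N(T)+1) > z) ~ {(insp > z).mean():.3f}"
          f"   P(X_1 > z) ~ {(X1 > z).mean():.3f}")
```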
Furthermore, the following example derived from the results of Theorem 1 shows that, in the degenerate time case with $T \sim \epsilon_t$, equality in (1) or (2) can also take place for non-degenerate interarrival times. Nevertheless, this situation is irrelevant, since the inspection time coincides with the lower bound of the support of $X_1$.
Example 4.
For absolutely continuous interarrival times with a left endpoint $\alpha := t > 0$ (i.e., $\alpha$ is not an atom of $X_1$) and right endpoint $\omega > 2t$, which corresponds to the case $\kappa = 0$, we have equality $\mathrm{E}\,\varphi(X_{N(t)+1}) = \mathrm{E}\,\varphi(X_1)$ for any $\varphi$ satisfying the assumptions of Theorem 1, since $G(x) = \mathbf{1}_{(t,\infty)}(x)$, $x \geq 0$, is of the form (7) with $g_0 \equiv 0$.
Therefore, choosing $\varphi(x) = \mathbf{1}_{(z,\infty)}(x)$ and $\varphi(x) = x$ for $x \geq 0$ in Theorem 1, we obtain an example for which
$$P(X_{N(t)+1} > z) = P(X_1 > z), \ z \geq 0, \quad \text{and} \quad \mathrm{E}\, X_{N(t)+1} = \mathrm{E}\, X_1, \quad \text{respectively},$$
holds, even though the interarrival times have a distribution other than the degenerate distribution. Consequently, requiring equality for a single $z$ and $t$ is not sufficient in general to derive a characterization for the distribution of the interarrival times. This case will be considered in detail in Section 5.
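The following sketch (Python; interarrival times uniform on $(1, 3)$ are an assumed illustration with $\alpha = 1$ and $\omega = 3 > 2\alpha$) estimates $\mathrm{E}\, X_{N(t)+1}$ for several deterministic inspection times: for $t \leq \alpha = 1$ the estimate stays near $\mathrm{E}\, X_1 = 2$, as in Example 4, while for larger $t$ the inequality becomes strict, anticipating the discussion in Section 5.

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_inspection_length(t, n_sim=20000):
    """Monte Carlo estimate of E[X_{N(t)+1}] for uniform(1, 3) interarrival times."""
    total = 0.0
    for _ in range(n_sim):
        s_prev, s = 0.0, 0.0
        while s <= t:
            s_prev = s
            s += rng.uniform(1.0, 3.0)
        total += s - s_prev
    return total / n_sim

for t in [0.5, 1.0, 2.0, 5.0, 10.0]:
    print(f"t = {t:5.1f}   estimated E[X_(N(t)+1)] = {mean_inspection_length(t):.3f}")
# Near 2.0 for t <= 1 (Example 4); clearly above 2.0 for larger t.
```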
The case α = 0 that was not included in Theorem 1 is studied separately in the following theorem.
Theorem 2.
Let the non-degenerate interarrival times have a left support endpoint $\alpha = 0$, where $P(X_1 = 0) = 0$. Let $\varphi : [0,\infty) \to [0,\infty)$ be a measurable, monotone non-decreasing function such that all expected values and integrals are well-defined and exist finitely. Furthermore, assume that $\varphi$ is not constant on $\mathcal{S} := \mathrm{supp}(X_1)$ $P^{X_1}$-almost surely.
Let $T$ be a non-negative random variable with a left-continuous distribution function $G$ that is independent of the interarrival times $(X_i)_{i \in \mathbb{N}}$.
Then, $\mathrm{E}\,\varphi(X_{N(T)+1}) = \mathrm{E}\,\varphi(X_1)$ holds if and only if $T$ has a degenerate distribution in 0, for every $\omega > 0$.
Proof. 
Analogously to the proof of Theorem 1, $G(x) = c_n$ must hold for all $0 < x \leq n\omega$ and for all $n \in \mathbb{N}$, whence we obtain $c_1 = \cdots = c_n = 1$ for all $n \in \mathbb{N}$. Thus, $G$ is the distribution function of the degenerate distribution in 0. The opposite direction follows directly from an application of Lemma 1. □
Remark 2.
If $p := P(X_1 = 0) > 0$ in Theorem 2 and $G(x) = \mathbf{1}_{(0,\infty)}(x)$ is the left-continuous version of the distribution function of the degenerate distribution in 0, then the first covariance $\mathrm{Cov}(\varphi(X_1), G(X_1))$ is positive and an inspection paradox occurs. This is the case as $G$ is not constant $P^{X_1}$-almost surely on the support of the interarrival times.
In the above situation with $\varphi(x) = x$ and $P(X_1 = 1) = 1 - p$, we find
$$P(N(0) = k) = p^k (1 - p), \quad k \in \mathbb{N}, \quad \text{and} \quad P(N(0) = 0) = 1 - p,$$
and thus $\mathrm{E}\, X_{N(0)+1} = 1 > 1 - p = \mathrm{E}\, X_1$, since, given $N(0) = k$, the inspection interval length equals $X_{k+1} = 1$.
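A brief numerical check of this remark (a Python sketch; $p = 0.4$ is an assumed value, and the closed form $p^n(1-p)$ for the covariance terms is a direct side computation, not a formula quoted from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
p = 0.4                                    # assumed value of P(X_1 = 0); P(X_1 = 1) = 1 - p
n_sim, n_terms = 200000, 20

X = (rng.random((n_sim, n_terms)) >= p).astype(float)   # X_i in {0, 1} with P(X_i = 0) = p
S = X.cumsum(axis=1)                                     # occurrence times S_n
G_S = (S > 0).astype(float)                              # G(x) = 1_(0,inf)(x), i.e. T = 0

cov_terms = [np.cov(X[:, n], G_S[:, n])[0, 1] for n in range(n_terms)]
print("first covariance terms :", np.round(cov_terms[:4], 4))
print("p^n (1 - p), n = 1..4  :", np.round([p ** n * (1 - p) for n in range(1, 5)], 4))
print("E[X_1] + sum of covariances:", (1 - p) + sum(cov_terms))   # close to 1
```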

4.2. Equality of the Survival Functions

Choosing $\varphi(x) = \mathbf{1}_{(z,\infty)}(x)$, $x \geq 0$, $z \geq 0$, in Theorem 1 yields distributions with equality in the inspection paradox, in the sense that $P(X_{N(T)+1} > z) = P(X_1 > z)$. In general, a random inspection time $T$ having a distribution function of the particular form (7) is sufficient for equality in the inspection paradox. More precisely, the inspection paradox does not appear in this case and both random variables $X_{N(T)+1}$ and $X_1$ are identically distributed, as stated in the following corollary.
Corollary 1.
Let the interarrival times have a left support endpoint $\alpha > 0$ and right support endpoint $\omega > \alpha$. If $T$ has a distribution function of the form (7), then
$$P(X_{N(T)+1} > z) = P(X_1 > z) \quad \text{for all } z \geq 0,$$
i.e., $X_{N(T)+1} =_{\mathrm{st}} X_1$.
Proof. 
Since the $G$ given in (7) is constant on the supports of $X_n$ and of $S_n$ for all $n \in \mathbb{N}$, applying Lemma 1 yields $\mathrm{Cov}(\mathbf{1}_{(z,\infty)}(X_n), G(S_n)) = 0$ for all $n \in \mathbb{N}$ and for all $z \geq 0$, as in the proof of Theorem 1. Therefore, with
$$P(X_{N(T)+1} > z) = P(X_1 > z) + \sum_{n=1}^{\infty} \mathrm{Cov}\big(\mathbf{1}_{(z,\infty)}(X_n), G(S_n)\big) = P(X_1 > z) \quad \text{for all } z \geq 0,$$
$X_{N(T)+1}$ and $X_1$ are identically distributed. □
In the characterization of Theorem 1, the function $\varphi$ is assumed to be non-constant on $\mathcal{S}$ $P^{X_1}$-almost surely. Therefore, for the function $\mathbf{1}_{(z,\infty)}(\cdot)$ to take both values 0 and 1 on the support of $X_1$ with positive probability, we require $z \in \mathrm{supp}(X_1)$.
Corollary 2.
Let the interarrival times have a left support endpoint $\alpha > 0$, right support endpoint $\omega > \alpha$, and let $\mathcal{S} := \mathrm{supp}(X_1)$. Let $T$ be a non-negative random variable with left-continuous distribution function $G$ that is independent of the interarrival times $(X_i)_{i \in \mathbb{N}}$. If there is a $z \in \mathcal{S}$ such that $\mathbf{1}_{(z,\infty)}(\cdot)$ takes both values 0 and 1 on $\mathcal{S}$ with positive probability and
$$P(X_{N(T)+1} > z) = P(X_1 > z),$$
then $G$ is of the form (7).
Proof. 
This follows from Theorem 1 for the choice $\varphi(x) = \mathbf{1}_{(z,\infty)}(x)$, $x \geq 0$. □
Corollary 2 shows that equality for an appropriate value of $z$ is enough to determine the general form of the distribution function $G$ (on finite intervals). Thus, if we have equality of the survival functions of $X_{N(T)+1}$ and $X_1$ for this $z \in \mathcal{S}$, then $G$ is of the form (7) and Corollary 1 yields $X_{N(T)+1} =_{\mathrm{st}} X_1$.
Remark 3.
The inspection paradox is also discussed in terms of expected values, as we have seen in Section 2 and Section 3. With the choice $\varphi(x) = x$, $x \geq 0$, it follows under the assumptions of Theorem 1 that equality of the expected values $\mathrm{E}\, X_{N(T)+1} = \mathrm{E}\, X_1$ holds if and only if $G$ is of the form (7).
Similarly, equality of the moments, i.e., $\mathrm{E}(X_{N(T)+1}^m) = \mathrm{E}(X_1^m)$, also determines the distribution of $T$. This can be obtained from Theorem 1 via the choice $\varphi(x) = x^m$, $x \geq 0$, $m > 0$.

5. Equality in the Degenerate Time Case

As discussed in Section 2 and in Remark 1, no paradoxical effect appears for the inspection interval length given degenerate interarrival times, regardless of whether the inspection time is random or deterministic. On the other hand, degenerate interarrival times are not the only example with this property (cf. Example 4). In this section, we further study equality in the inspection paradox by means of a fixed sequence of inspection times.
The following Theorem 3 states that having such a sequence of fixed times for which equality holds is sufficient for the interarrival times to have a degenerate distribution.
Theorem 3.
Let the interarrival times $(X_i)_{i \in \mathbb{N}}$ have a distribution function $F$ with $F(0) < 1$. Let $(t_i)_{i \in \mathbb{N}} \subset (0, \infty)$ be a sequence of monotone increasing times ($t_i < t_{i+1}$, $i \in \mathbb{N}$) with $\lim_{i \to \infty} t_i = \infty$, such that
$$\mathrm{E}\, X_{N(t_i)+1} = \mathrm{E}\, X_1 \quad \text{for all } i \in \mathbb{N}.$$
Then, $X_1 = \alpha$ almost surely for some $\alpha > 0$, i.e., $F(x) = \mathbf{1}_{[\alpha,\infty)}(x)$, $x \geq 0$.
Proof. 
We assume that the interarrival times have a left support endpoint $\alpha \geq 0$ and right support endpoint $\omega > \alpha$ and define $\mathcal{S} := [\alpha, \omega]$ if $\omega < \infty$ and $\mathcal{S} := [\alpha, \infty)$ if $\omega = \infty$. Thus, it is assumed that the support contains at least two values, and this is shown to lead to a contradiction in the following. Due to Representation (6) with $T \sim \epsilon_{t_i}$, equality holds if and only if
$$\mathrm{Cov}\big(X_n, \mathbf{1}_{(t_i,\infty)}(S_n)\big) = 0 \quad \text{for all } n \in \mathbb{N} \text{ and for all } i \in \mathbb{N}.$$
In the case $n = 1$, an application of Lemma 1 yields
$$\mathrm{Cov}\big(X_1, \mathbf{1}_{(t_i,\infty)}(X_1)\big) = 0 \ \text{ for all } i \in \mathbb{N} \quad \Longleftrightarrow \quad \mathbf{1}_{(t_i,\infty)}(x) = \mathrm{const} \ \text{ for } P^{X_1}\text{-almost all } x \in \mathcal{S} \text{ and for all } i \in \mathbb{N}. \qquad (8)$$
From (8), we obtain that $\mathcal{S}$ necessarily has to lie in a bounded interval, i.e., $\omega < \infty$. Let $\mathcal{S}_n \subseteq [n\alpha, n\omega]$ denote the support of $S_n$, $n \in \mathbb{N}$. As in the proof of Theorem 1, the case $n \geq 2$ leads to
$$\mathbf{1}_{(t_i,\infty)}(x + y) = \mathrm{const} \ \text{ for } P^{X_n}\text{-almost all } x \in \mathcal{S}, \text{ for } P^{S_{n-1}}\text{-almost all } y \in \mathcal{S}_{n-1}, \text{ and for all } i \in \mathbb{N}.$$
Due to Lemma 2, there exists a $\kappa \in \mathbb{N}_0$ such that $P([n\alpha, n\omega] \cap [(n+1)\alpha, (n+1)\omega]) > 0$ for all $n \geq \kappa + 1$. This implies
$$\bigcup_{n \geq \kappa+1} [n\alpha, n\omega] = [(\kappa+1)\alpha, \infty),$$
i.e., the supports of $S_{\kappa+1}, S_{\kappa+2}, \ldots$ cover all real numbers greater than or equal to $(\kappa+1)\alpha$. Since the unbounded sequence $(t_i)_{i \in \mathbb{N}}$ consists of monotone increasing real numbers, there exists a smallest $k \in \mathbb{N}$ with $t_k > (\kappa+1)\alpha$ and an $n \in \mathbb{N}$ with $n \geq \kappa + 2$ such that $t_k \in [n\alpha, n\omega]$.
If $n\alpha < t_k < n\omega$, then the indicator function
$$\mathbf{1}_{(t_k,\infty)}(x + y) = \begin{cases} 0, & x \in \mathcal{S} \text{ and } y \in (n\alpha - x, t_k - x], \\ 1, & x \in \mathcal{S} \text{ and } y \in (t_k - x, n\omega - x], \end{cases} \quad \text{i.e., } x + y \in \mathcal{S}_n,$$
has a jump and is not constant for $P^{X_n}$-almost all $x \in \mathcal{S}$ and $P^{S_{n-1}}$-almost all $y \in \mathcal{S}_{n-1}$. Concerning the occurrence time $S_n$, we obtain $\mathrm{Cov}(X_n, \mathbf{1}_{(t_k,\infty)}(S_n)) > 0$, which is in contradiction to the assumed equality.
If $t_k = n\alpha$, then $t_k$ lies in the preceding interval with $(n-1)\alpha < t_k < (n-1)\omega$, since this interval overlaps with the interval $[n\alpha, n\omega]$ due to $n - 1 \geq \kappa + 1$. Again, the indicator function $\mathbf{1}_{(t_k,\infty)}(\cdot)$ is not constant for $P^{X_{n-1}}$-almost all $x \in \mathcal{S}$ and $P^{S_{n-2}}$-almost all $y \in \mathcal{S}_{n-2}$. In consequence, $\mathrm{Cov}(X_{n-1}, \mathbf{1}_{(t_k,\infty)}(S_{n-1})) > 0$ is a contradiction to the equality.
If $t_k = n\omega$, then $t_k$ lies in the subsequent interval $(n+1)\alpha < t_k < (n+1)\omega$, which implies $\mathrm{Cov}(X_{n+1}, \mathbf{1}_{(t_k,\infty)}(S_{n+1})) > 0$, leading to a contradiction.
In conclusion, $|\mathcal{S}| = 1$ must hold $P^{X_1}$-almost surely, and there exists an $\alpha > 0$ with $X_1 = \alpha$ almost surely and $F(x) = \mathbf{1}_{[\alpha,\infty)}(x)$, $x \geq 0$. The constant $\alpha$ has to be positive due to $F(0) < 1$. □
Note that having only a finite number of times $t_i$ with equality in Theorem 3 does not suffice to conclude that the interarrival times have a degenerate distribution. If there were a largest $t_N < \infty$, then $X_1$ could have a support lying to the right of $t_N$ and consisting of more than one number, and we would still have equality.
The same arguments as in Theorem 3 lead to an analogous result in case of the classical inspection paradox inequality for a sequence of time points. In a similar way to the main result in Section 4, Theorem 3 and the following Corollary 3 can be combined by using a function φ that is assumed to be not constant almost surely on the support of the interarrival times.
Corollary 3.
Let the interarrival times $(X_i)_{i \in \mathbb{N}}$ have the distribution function $F$ with $F(0) < 1$. Let $(t_i)_{i \in \mathbb{N}} \subset (0, \infty)$ be a sequence of monotone increasing times ($t_i < t_{i+1}$, $i \in \mathbb{N}$) with $\lim_{i \to \infty} t_i = \infty$. Let $z \in \mathcal{S} := \mathrm{supp}(X_1)$ be such that $\mathbf{1}_{(z,\infty)}(\cdot)$ takes the values 0 and 1 on $\mathcal{S}$ with positive probability and
$$P(X_{N(t_i)+1} > z) = P(X_1 > z) \quad \text{for all } i \in \mathbb{N}.$$
Then, $X_1 = \alpha$ almost surely for some $\alpha > 0$, i.e., $F(x) = \mathbf{1}_{[\alpha,\infty)}(x)$, $x \geq 0$.
We note that in both Theorem 3 and Corollary 3, we did not assume that any particular point $t_i$ lies in the support of an occurrence time, as this assumption alone does not suffice (in Example 4, the time $t$ lies in the support of $X_1$). In order to infer that the interarrival times have a degenerate distribution based on only one time point $t$, it is necessary to assume that $t$ lies in the support of an occurrence time whose support overlaps with both the support of the preceding and the succeeding occurrence time. This is formally stated in Corollary 4.
Corollary 4.
Let the interarrival times $(X_i)_{i \in \mathbb{N}}$ have a distribution function $F$ with $F(0) < 1$. Assume that there exist a $k \in \mathbb{N}$ with $k \geq \kappa + 2$ and a $t > 0$ such that $t \in \mathrm{supp}(S_k)$. Then,
$$\mathrm{E}\, X_{N(t)+1} = \mathrm{E}\, X_1$$
implies $X_1 = t/k$ almost surely, i.e., $F(x) = \mathbf{1}_{[t/k,\infty)}(x)$, $x \geq 0$.
Proof. 
Due to the equality of the expected values and to Representation (6), the covariances $\mathrm{Cov}(X_n, \mathbf{1}_{(t,\infty)}(S_n))$ must be equal to zero for all $n \in \mathbb{N}$. Assume that the support of $S_k$ contains at least two different points; then $P(\mathrm{supp}(S_{k-1}) \cap \mathrm{supp}(S_k)) > 0$ and $P(\mathrm{supp}(S_k) \cap \mathrm{supp}(S_{k+1})) > 0$ hold due to Lemma 2. As in the proof of Theorem 3, we can infer that $t$ lies in the interior of either the support of $S_k$, of $S_{k-1}$, or of $S_{k+1}$, respectively, which leads to a contradiction. Thus, $\mathrm{supp}(S_k) = \{t\}$ implies $P(S_k \leq x) = \mathbf{1}_{[t,\infty)}(x)$; i.e., $S_k$ has a degenerate distribution in $t$. Since $S_k$ is the sum of $k$ iid interarrival times, we then have $X_1 = t/k$ almost surely. □

6. Conclusions

By considering a random inspection time in a renewal counting process, interesting effects come into play and new insights are gained regarding the distribution of the random time. With respect to the well-known inspection paradox, non-trivial choices of this distribution and the distribution of the interarrival times lead to non-occurrence of the paradox in contrast to the situations for a deterministic time. In the general case, a complete characterization of the (non-)occurrence of the inspection paradox with respect to G is given.

Author Contributions

Conceptualization, D.R. and U.K.; methodology, D.R.; validation, D.R. and U.K.; formal analysis, D.R.; investigation, D.R.; writing—original draft preparation, D.R.; writing—review and editing, D.R. and U.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
iid   independent and identically distributed
[P]   P-almost surely

References

1. Feller, W. An Introduction to Probability Theory and Its Applications, 2nd ed.; John Wiley & Sons Inc.: New York, NY, USA, 1971; Volume II.
2. Masuda, N.; Porter, M.A. The waiting-time paradox. Front. Young Minds Math. 2021, 8, 582433.
3. Ross, S.M. Introduction to Probability Models, 10th ed.; Academic Press: London, UK, 2010.
4. Stein, W.E.; Dattero, R. Sampling bias and the inspection paradox. Math. Mag. 1985, 58, 96–99.
5. Pal, A.; Kostinski, S.; Reuveni, S. The inspection paradox in stochastic resetting. J. Phys. A Math. Theor. 2022, 55, 021001.
6. Jenkins, E.V.; Dharmaprani, D.; Schopp, M.; Quah, J.X.; Tiver, K.; Mitchell, L.; Xiong, F.; Aguilar, M.; Pope, K.; Akar, F.G.; et al. The inspection paradox: An important consideration in the evaluation of rotor lifetimes in cardiac fibrillation. Front. Physiol. 2022, 13, 920788.
7. Gakis, K.G.; Sivazlian, B.D. A generalization of the inspection paradox in an ordinary renewal process. Stoch. Anal. Appl. 1993, 11, 43–48.
8. Angus, J.E. The inspection paradox inequality. SIAM Rev. Soc. Ind. Appl. Math. 1997, 39, 95–97.
9. Ross, S.M. The inspection paradox. Probab. Eng. Inf. Sci. 2003, 17, 47–51.
10. Kamps, U. Inspection paradox. In Encyclopedia of Statistical Sciences, 3 (Update); Kotz, S., Read, C.B., Banks, D.L., Eds.; John Wiley & Sons Inc.: New York, NY, USA, 1999; pp. 364–366.
11. Herff, W.; Jochems, B.; Kamps, U. The inspection paradox with random time. Stat. Pap. 1997, 38, 103–110.
12. Rauwolf, D.; Kamps, U. Quantifying the inspection paradox with random time. Am. Stat. 2023, 77, 274–282.
13. Liu, P.; Peña, E.A. Sojourning with the homogeneous Poisson process. Am. Stat. 2016, 70, 413–423.
14. Cox, D.R. Renewal Theory; Methuen & Co.: London, UK, 1962.
15. Pinsky, M.A.; Karlin, S. An Introduction to Stochastic Modeling, 4th ed.; Academic Press: Burlington, VT, USA, 2011.
16. Mitov, K.V.; Omey, E. Renewal Processes; Springer: Cham, Switzerland, 2014.
17. Kulkarni, V.G. Modeling and Analysis of Stochastic Systems, 3rd ed.; CRC Press: Boca Raton, FL, USA, 2017.
18. Behboodian, J. Covariance inequality and its applications. Int. J. Math. Educ. Sci. Technol. 1994, 25, 643–647.
19. Rauwolf, D. Renewal Processes with Random Time. Ph.D. Thesis, RWTH Aachen University, Aachen, Germany, 2023.
Figure 1. Renewal process.
Figure 2. Inspection interval.
Figure 3. A distribution function G with non-occurrence of the inspection paradox.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
