Article

A Penalized Empirical Likelihood Approach for Estimating Population Sizes under the Negative Binomial Regression Model

School of Mathematical Sciences, Soochow University, Suzhou 215006, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(17), 2674; https://doi.org/10.3390/math12172674
Submission received: 23 July 2024 / Revised: 22 August 2024 / Accepted: 25 August 2024 / Published: 28 August 2024

Abstract
In capture–recapture experiments, the presence of overdispersion and heterogeneity necessitates the use of the negative binomial regression model for inferring population sizes. However, within this model, existing methods based on likelihood and ratio regression for estimating the dispersion parameter often face boundary and nonidentifiability issues. These problems can result in nonsensically large point estimates and unbounded upper limits of confidence intervals for the population size. We present a penalized empirical likelihood technique for solving these two problems by imposing a half-normal prior on the population size. Based on the proposed approach, a maximum penalized empirical likelihood estimator with asymptotic normality and a penalized empirical likelihood ratio statistic with asymptotic chi-square distribution are derived. To improve numerical performance, we present an effective expectation-maximization (EM) algorithm. In the M-step, optimization for the model parameters could be achieved by fitting a standard negative binomial regression model via the R basic function glm.nb(). This approach ensures the convergence and reliability of the numerical algorithm. Using simulations, we analyze several synthetic datasets to illustrate three advantages of our methods in finite-sample cases: complete mitigation of the boundary problem, more efficient maximum penalized empirical likelihood estimates, and more precise penalized empirical likelihood ratio interval estimates compared to the estimates obtained without penalty. These advantages are further demonstrated in a case study estimating the abundance of black bears (Ursus americanus) at the U.S. Army’s Fort Drum Military Installation in northern New York.

1. Introduction

The use of capture–recapture data to infer population sizes is widely sought after across multiple fields, such as ecology and epidemiology. Capture–recapture sampling dates back to fisheries [1,2] and is used to estimate the abundance of species in an ecosystem [3]. Additionally, the size of drug- or alcohol-addicted populations can be inferred by treating each visit to an institution as one capture [4,5]. Similarly, epidemiologists may leverage multiple incomplete lists of patients to estimate disease prevalence [6].
In traditional practice, the Poisson regression model is commonly used for modeling capture–recapture data. However, the literature has demonstrated that the Poisson regression model may perform inadequately when the number of captures is subject to overdispersion, which means that the variance is larger than the corresponding expectation. In capture–recapture experiments, overdispersion often arises from individual heterogeneity. For example, animal species may exhibit flocking and spatiotemporal aggregation behaviors [7]. As argued in ref. [8], the population size may be underestimated if the model used ignores heterogeneity and overdispersion. Without any adjustments, using the Poisson regression model can yield unreliable statistical inferences, leading to incorrect interpretations. In this situation, ref. [9] argued that the negative binomial regression model might be more suitable.
Various estimation methods have been investigated for capture–recapture data within the framework of the negative binomial regression model. Based on conditional likelihood, ref. [8] introduced a maximum likelihood estimator for the regression parameter. Additionally, they derived a Horvitz–Thompson type estimator and a corresponding Wald-type interval estimator for estimating the population size. Building on the recapture probability ratios, refs. [10,11,12] investigated issues related to model diagnostics and population size estimation. Their simulations and empirical investigations revealed that the negative binomial regression model outperforms the Poisson regression model in the presence of overdispersion. However, these methods face challenges, and at least two specific issues arise. First, the likelihood function may suffer from identification and boundary problems for the dispersion parameter, leading to nonsensically large values of the Horvitz–Thompson type estimator associated with large standard errors; see, for example, refs. [10,11,13]. Second, the Newton–Raphson procedure may not be reliable and even fail to converge for maximizing the conditional likelihood function, as illustrated in refs. [14,15].
Empirical likelihood offers an alternative approach to mitigating the two issues mentioned earlier. Using empirical likelihood techniques, ref. [16] investigated full semiparametric likelihood inference for population sizes in capture–recapture studies. This method was subsequently extended by refs. [17,18,19,20] to continuous-time captures, one-inflation, and missing covariates. Experience shows that the semiparametric empirical-likelihood-based method usually outperforms the conditional-likelihood-based method in numerical experiments.
However, our simulation studies indicate that the point and interval estimators derived using maximum empirical likelihood estimation are prone to boundary issues, particularly when a small proportion of individuals is captured and severe overdispersion is present. For instance, as shown in our simulation studies with $\beta_0 = (-0.5, 0.3)^\top$, $k_0 = 0.5$, and $N_0 = 250$, where the capture probability is as low as 40%, nearly 67% of simulated cases yield unreasonable empirical likelihood ratio interval estimates whose upper limits exceed the number of captured individuals by over 100 times. This boundary problem is similarly observed in the case study of Section 4. A potential cause is that, in such scenarios, the empirical log-likelihood ratio function flattens out as the population size increases.
The remaining sections are arranged as follows. Section 2 revisits the semiparametric empirical likelihood method within the negative binomial regression model and introduces a penalized empirical likelihood estimation approach to address the boundary problem. Furthermore, an EM algorithm is designed to enhance the reliability of the estimation process. Section 3 presents a number of simulations to illustrate how the proposed methods perform in finite-sample settings. These methods are put into practice in Section 4 to analyze the black bear data from New York. Finally, a discussion is presented in Section 5.

2. Methodology

2.1. Model and Data

Suppose there are $N$ individuals in a population. Each individual is characterized by the number of captures, denoted by $Y$, a nonnegative integer-valued variable. A naive approach to modeling $Y$ is to assume a Poisson distribution, $Y \sim \mathrm{Poisson}(\theta)$, where $\theta > 0$ represents the rate or expected number of captures. The Poisson model inherently assumes that all individuals in the population are homogeneous, meaning they share the same rate parameter $\theta$. In addition, as noted in the introduction, a key limitation of the Poisson model is its inability to handle overdispersed data, where the variance exceeds the mean.
To address the heterogeneity and overdispersion, one can assume that individuals have varying rates. In practice, a common approach is to model the distribution of these rates using a gamma distribution as a prior. Specifically, $\theta \sim \mathrm{Gamma}(r, p/(1-p))$, where $r > 0$ is the shape parameter and $p/(1-p)$ is the scale parameter with $p \in (0, 1)$. With this prior, the marginal probability mass function of $Y$ can be derived in closed form:
$$p_Y(y) = \int_0^\infty p(y \mid \theta)\, p(\theta)\, d\theta = \frac{\Gamma(y+r)}{\Gamma(y+1)\Gamma(r)}\, p^y (1-p)^r, \quad y = 0, 1, 2, \ldots,$$
which corresponds to the Poisson–Gamma or negative binomial distribution found in probability textbooks. This distribution models the number of successes before the $r$th failure occurs in a sequence of independent Bernoulli trials, with $p$ representing the probability of success in each trial.
As highlighted in ref. [9], a common reparameterization of $p_Y(y)$ is often used to interpret the counting processes in ecological and biodiversity studies. This reparameterization expresses $p_Y(y)$ in terms of its mean $\mu$ and a dispersion or aggregation parameter $k$, which controls the variation in counts. By setting $p = \mu/(k+\mu)$ and $k = r$, $p_Y(y)$ can be reformulated as:
$$p_Y(y) = \frac{\Gamma(y+k)}{\Gamma(y+1)\Gamma(k)} \left(\frac{\mu}{k+\mu}\right)^{y} \left(\frac{k}{k+\mu}\right)^{k}.$$
When individual covariates, denoted by $X$, are available, it becomes necessary to account for the heterogeneity induced by these covariates. To do so, a parametric model is used, specifically $\mu(x; \beta) = \exp(\beta^\top x)$, which relates the mean parameter to the individual covariates. Thus, given $X = x$, the conditional probability mass function of $Y$ is expressed by:
$$P(Y = y \mid X = x) = \frac{\Gamma(y+k)}{\Gamma(y+1)\Gamma(k)} \left\{\frac{\mu(x;\beta)}{k+\mu(x;\beta)}\right\}^{y} \left\{\frac{k}{k+\mu(x;\beta)}\right\}^{k} =: f(y, x; \beta, k), \tag{1}$$
where $\beta$ represents the unknown regression coefficients and $k > 0$ represents the dispersion parameter. This formulation is referred to as the negative binomial regression model. As a special case, Equation (1) reduces to $p_Y(y)$ when all coefficients, except the intercept, are zero. The negative binomial regression model also includes the geometric regression model when $k = 1$ and reduces to the Poisson regression model as $k \to \infty$.
Given $X = x$, the conditional expectation of $Y$ is equal to $\mu(x;\beta)$, while the conditional variance is $\mu(x;\beta) + \{\mu(x;\beta)\}^2/k$, indicating a quadratic mean–variance relationship. The parameter $k$ controls the degree of overdispersion: as $k$ decreases, the variance increases, leading to greater overdispersion. Overdispersion is commonly observed in capture–recapture studies, where the variance significantly exceeds the mean. Consequently, the negative binomial regression model is often more appropriate than the Poisson model for capture–recapture data under severe overdispersion.
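As a concrete check of this parameterization, the following minimal R sketch (with made-up values of $\beta$, $x$, and $k$; none of these values come from the paper) verifies that the probability mass function in Equation (1) coincides with R's dnbinom() in its (size, mu) parameterization and evaluates the conditional variance:

```r
## Minimal check that dnbinom(size = k, mu = mu) matches f(y, x; beta, k);
## beta, x, and k below are illustrative values only.
beta <- c(0.5, 0.3); x <- c(1, 0.7); k <- 0.5
mu <- exp(sum(beta * x))                      # mu(x; beta) = exp(beta' x)
y <- 0:5
f <- gamma(y + k) / (gamma(y + 1) * gamma(k)) *
  (mu / (k + mu))^y * (k / (k + mu))^k        # pmf in Equation (1)
all.equal(f, dnbinom(y, size = k, mu = mu))   # TRUE
mu + mu^2 / k                                 # conditional variance of Y
```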
Because the event $\{Y = 0\}$ is unobservable in capture–recapture studies, the zero-truncated version of model (1) is considered:
$$P(Y = y \mid Y > 0, X = x) = \frac{P(Y = y \mid X = x)}{P(Y > 0 \mid X = x)} = \frac{f(y, x; \beta, k)}{1 - \phi(x; \beta, k)}, \quad y = 1, 2, \ldots, \tag{2}$$
where $\phi(x; \beta, k) = [k/\{k + \mu(x;\beta)\}]^{k}$ represents the conditional probability that an individual with covariate $x$ is never captured.
Consider a study that captured $n$ distinct individuals, with $(x_1, \ldots, x_n)$ and $(y_1, \ldots, y_n)$ denoting their individual covariates and capture frequencies, respectively. Under model (2), ref. [8] proposed a maximum conditional likelihood estimator $(\widetilde\beta, \widetilde k)$ obtained by maximizing:
$$\prod_{i=1}^{n} \frac{f(y_i, x_i; \beta, k)}{1 - \phi(x_i; \beta, k)}.$$
According to the principle of inverse probability weighting, the Horvitz–Thompson type estimator of $N$ is defined as $\widetilde N = \sum_{i=1}^{n} \{1 - \phi(x_i; \widetilde\beta, \widetilde k)\}^{-1}$. However, $\widetilde N$ might be inflated due to small detection probabilities.
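For illustration, here is a minimal R sketch of the Horvitz–Thompson type estimator; beta.t, k.t, and the design matrix X are placeholder names for the maximum conditional likelihood estimates and the covariates of the captured individuals, not objects from the paper's code:

```r
## Horvitz-Thompson type estimator N~ under the zero-truncated NB model;
## beta.t, k.t, and X are assumed to be available (illustrative names).
phi <- function(X, beta, k) {           # phi(x; beta, k) = P(Y = 0 | x)
  mu <- exp(as.vector(X %*% beta))
  (k / (k + mu))^k
}
N.tilde <- sum(1 / (1 - phi(X, beta.t, k.t)))  # inverse probability weights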

2.2. Semiparametric Empirical Likelihood

The semiparametric empirical likelihood, first developed in ref. [16], is an appealing technique for implementing the full likelihood method when capture–recapture data contain individual covariates. Taking the negative binomial regression model as an example, we provide a brief introduction to this technique.
Considering that $n$ distinct individuals out of a total of $N$ individuals were captured, $n$ follows a binomial distribution with probability:
$$P(n) = \binom{N}{n} (1-\alpha)^{n} \alpha^{N-n},$$
where $\alpha = P(Y = 0)$ represents the probability that a generic individual was not captured at all. For the given $n$ individuals, the conditional probability of their covariates and capture counts is:
$$\prod_{i=1}^{n} \frac{P(Y = y_i, X = x_i)}{P(Y > 0)} = \prod_{i=1}^{n} \frac{f(y_i, x_i; \beta, k)\, P(X = x_i)}{1 - \alpha}.$$
Multiplying these two expressions yields the full likelihood function:
$$\binom{N}{n}\, \alpha^{N-n} \times \prod_{i=1}^{n} \{f(y_i, x_i; \beta, k)\, P(X = x_i)\}. \tag{3}$$
In Equation (3), the marginal probability $P(X = x_i)$ is unknown and shall be addressed by the empirical likelihood method; see refs. [21,22] for more details. Technically, we assume that $P(X = x_i) = p_i$ for $i = 1, 2, \ldots, n$, where $p_i \in (0, 1)$ is subject to the constraint $\sum_{i=1}^{n} p_i = 1$. With this substitution, we call the full likelihood the semiparametric empirical likelihood and refer to its logarithm as the empirical log-likelihood function, namely:
$$\widetilde\ell(N, \beta, k, \alpha, \{p_i\}) = \log\binom{N}{n} + (N-n)\log(\alpha) + \sum_{i=1}^{n} \log\{f(y_i, x_i; \beta, k)\} + \sum_{i=1}^{n} \log(p_i).$$
By the definition of $\alpha$ and the iterated expectation theorem, it follows that $\alpha = E\{P(Y = 0 \mid X)\}$, or equivalently:
$$\sum_{i=1}^{n} \{\phi(x_i; \beta, k) - \alpha\}\, p_i = 0.$$
With the constraints on the $p_i$'s, the profile empirical log-likelihood function can be derived by applying the Lagrange multiplier method to Equation (3):
$$\begin{aligned}
\ell(N, \beta, k, \alpha) ={}& \log\binom{N}{n} + (N-n)\log(\alpha) + \sum_{i=1}^{n} \log\binom{y_i + k - 1}{y_i} + \sum_{i=1}^{n}\left[ k \log\left\{\frac{k}{k + \mu(x_i;\beta)}\right\} + y_i \log\left\{\frac{\mu(x_i;\beta)}{k + \mu(x_i;\beta)}\right\} \right]\\
&- \sum_{i=1}^{n} \log[1 + \xi\{\phi(x_i;\beta,k) - \alpha\}] - n\log(n),
\end{aligned} \tag{4}$$
where $\xi$ is the Lagrange multiplier, satisfying:
$$\sum_{i=1}^{n} \frac{\phi(x_i; \beta, k) - \alpha}{1 + \xi\{\phi(x_i; \beta, k) - \alpha\}} = 0.$$
Notice that there are only a finite number of unknown parameters in the profile empirical log-likelihood function. By maximizing this function, we obtain the maximum empirical likelihood estimator, expressed as $(\widehat N, \widehat\beta, \widehat k, \widehat\alpha) = \arg\max\,\ell(N, \beta, k, \alpha)$.

2.3. Penalized Empirical Likelihood Inference

When the number of captures exhibits severe overdispersion, both the estimators $\widetilde N$ and $\widehat N$, proposed in Section 2.1 and Section 2.2, respectively, may exhibit spuriously large values, potentially leading to misleading conclusions. This issue has been addressed in ref. [13] (p. 84) for the Horvitz–Thompson type estimator. Our simulation studies further confirm that the empirical likelihood estimators may also suffer from the boundary problem. This issue may arise due to the limited information available about the population size, causing the profile empirical log-likelihood to fail to distinguish between different large values of $N$.
To mitigate this problem, we intuitively incorporate prior information on the population size to reduce the probability of large values. We achieve this by augmenting the likelihood functions with an appropriate penalty term. Correspondingly, the penalized empirical log-likelihood function and its profile version are defined as:
$$\widetilde\ell_p(N, \beta, k, \alpha, \{p_i\}) = \widetilde\ell(N, \beta, k, \alpha, \{p_i\}) + f_p(N), \qquad \ell_p(N, \beta, k, \alpha) = \ell(N, \beta, k, \alpha) + f_p(N),$$
where the penalty term $f_p(N)$ takes the form $-C(N-\nu)^2 I(N > \nu)$, in which $\nu$ is a lower bound on $N$, $C \ge 0$ is a tuning parameter, and $I(\cdot)$ is the indicator function. For specific values of $\nu$ and $C$, a maximum penalized empirical likelihood estimator is proposed, namely, $(\widehat N_p, \widehat\beta_p, \widehat k_p, \widehat\alpha_p) = \arg\max\,\ell_p(N, \beta, k, \alpha)$.
From a Bayesian perspective, adding the penalty term $f_p(N)$ to the log-likelihood is equivalent to imposing a prior for $N$ that is a mixture of the half-normal distribution $N(\nu, (2C)^{-1})$ for $N > \nu$ and a uniform distribution $U(n, \nu)$ for $n \le N \le \nu$. In other words, this penalty has no effect on the likelihood when $n \le N \le \nu$ and gradually decreases the likelihood when $N > \nu$; the larger the population size, the more pronounced the decrease. Consequently, the penalized method encourages large values of $\widehat N$ to shrink towards $\nu$. In practice, we recommend using the Chao estimator as the lower bound $\nu$; see ref. [23] for details about this estimator. Alternatively, the generalized Chao estimator proposed in ref. [24] can also be considered.
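The penalty and the Chao lower bound are straightforward to compute. The following R sketch (our own illustrative helpers, not the authors' implementation) encodes $f_p(N) = -C(N-\nu)^2 I(N>\nu)$ and the Chao estimator $n + f_1^2/(2f_2)$, where $f_1$ and $f_2$ denote the numbers of individuals captured exactly once and exactly twice:

```r
## Chao lower bound nu and the penalty f_p(N); y is the vector of capture
## counts of the n captured individuals (illustrative helper functions).
chao  <- function(y) length(y) + sum(y == 1)^2 / (2 * sum(y == 2))
f.pen <- function(N, nu, C) -C * (N - nu)^2 * (N > nu)
```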
To derive the large-sample properties of the estimator $(\widehat N_p, \widehat\beta_p, \widehat k_p, \widehat\alpha_p)$, we define some notation at the true parameter value $(N_0, \beta_0, k_0, \alpha_0)$. Define $\psi(x; \beta_0, k_0) = -\log\{1 + \mu(x;\beta_0)/k_0\} + \{1 + k_0/\mu(x;\beta_0)\}^{-1}$, $\varphi = E[\{1 - \phi(X; \beta_0, k_0)\}^{-1}]$, and:
$$S_2(y + k_0 - 1,\, y) = \begin{cases} -\displaystyle\sum_{k=k_0}^{y+k_0-1} \frac{1}{k^2}, & y = 1, 2, \ldots,\\[2pt] 0, & y = 0. \end{cases}$$
Let:
$$W = \begin{pmatrix}
V_{11} & 0_{1\times s} & 0 & V_{14} \\
0_{s\times 1} & V_{22} + V_{25}V_{55}^{-1}V_{52} & V_{23} + V_{25}V_{55}^{-1}V_{53} & V_{24} + V_{25}V_{55}^{-1}V_{54} \\
0 & V_{32} + V_{35}V_{55}^{-1}V_{52} & V_{33} + V_{35}V_{55}^{-1}V_{53} & V_{34} + V_{35}V_{55}^{-1}V_{54} \\
V_{41} & V_{42} + V_{45}V_{55}^{-1}V_{52} & V_{43} + V_{45}V_{55}^{-1}V_{53} & V_{44} + V_{45}V_{55}^{-1}V_{54}
\end{pmatrix},$$
where $s$ denotes the dimension of $\beta$,
$V_{11} = \alpha_0^{-1} - 1$, $V_{41} = V_{14} = -\alpha_0^{-1}$, $V_{44} = \varphi - \alpha_0^{-1}$, $V_{45} = V_{54} = (1-\alpha_0)^2\varphi$, $V_{55} = (1-\alpha_0)^4\varphi - (1-\alpha_0)^3$, and:
$$\begin{aligned}
V_{22} &= E\left[\left\{k_0 + \mu(X;\beta_0) - \frac{k_0\,\phi(X;\beta_0,k_0)\,\mu(X;\beta_0)}{1-\phi(X;\beta_0,k_0)}\right\}\frac{k_0\,\mu(X;\beta_0)\, XX^\top}{\{k_0+\mu(X;\beta_0)\}^2}\right],\\
V_{23} &= V_{32}^\top = E\left[\frac{k_0\,\phi(X;\beta_0,k_0)\,\psi(X;\beta_0,k_0)\,\mu(X;\beta_0)\, X}{\{1-\phi(X;\beta_0,k_0)\}\{k_0+\mu(X;\beta_0)\}}\right],\\
V_{33} &= E\left[\frac{\phi(X;\beta_0,k_0)\,\psi^2(X;\beta_0,k_0)}{1-\phi(X;\beta_0,k_0)} + \frac{k_0\,\mu(X;\beta_0)+\mu^2(X;\beta_0)}{k_0\{k_0+\mu(X;\beta_0)\}^2}\right] + E\{S_2(Y+k_0-1, Y)\},\\
V_{24} &= E\left[\frac{\phi(X;\beta_0,k_0)}{1-\phi(X;\beta_0,k_0)}\cdot\frac{k_0\,\mu(X;\beta_0)\, X}{k_0+\mu(X;\beta_0)}\right], \qquad V_{25} = V_{52}^\top = (1-\alpha_0)^2 V_{24},\\
V_{34} &= V_{43} = E\left[\frac{\phi(X;\beta_0,k_0)\,\psi(X;\beta_0,k_0)}{1-\phi(X;\beta_0,k_0)}\right], \qquad V_{35} = V_{53} = (1-\alpha_0)^2 V_{34}.
\end{aligned} \tag{5}$$
The following theorem presents the large-sample properties of the maximum penalized empirical likelihood estimator ( N ^ p , β ^ p , k ^ p , α ^ p ) associated with the penalized empirical likelihood ratio statistic of N.
Theorem 1.
Suppose that $k_0 > 0$, $0 < \alpha_0 < 1$, and the tuning parameter satisfies $C = O(N_0^{-2})$. If $W$ is nonsingular and $\varphi < \infty$, then as $N_0 \to \infty$:
(a) $\sqrt{N_0}\,\{\log(\widehat N_p) - \log(N_0),\ (\widehat\beta_p - \beta_0)^\top,\ \widehat k_p - k_0,\ \widehat\alpha_p - \alpha_0\}^\top \stackrel{d}{\longrightarrow} N(0, W^{-1})$;
(b) $\sqrt{N_0}\,(\widehat N_p/N_0 - 1) \stackrel{d}{\longrightarrow} N(0, \sigma^2)$, where
$$\sigma^2 = \varphi - 1 - (V_{42},\, V_{43}) \begin{pmatrix} V_{22} & V_{23} \\ V_{32} & V_{33} \end{pmatrix}^{-1} \begin{pmatrix} V_{24} \\ V_{34} \end{pmatrix};$$
(c) $2\{\ell_p(\widehat N_p, \widehat\beta_p, \widehat k_p, \widehat\alpha_p) - \max_{(\beta,k,\alpha)} \ell_p(N_0, \beta, k, \alpha)\} \stackrel{d}{\longrightarrow} \chi^2_1$, where $\chi^2_1$ denotes the chi-square distribution with one degree of freedom.
Proof. 
As the proposed semiparametric empirical likelihood approach can be seen as an extension of the EL method of ref. [16] to the negative binomial regression model, the proof of Theorem 1 is very similar to those of Theorem 1 and Corollary 1 in ref. [16]. Here, we only highlight the differences and the formulae of $V$ and $\Sigma$ in our framework.
We first argue that $\xi_0 = -(1-\alpha_0)^{-1}$ is the limit of $\widehat\xi$, where $\widehat\xi$ is the solution to:
$$\sum_{i=1}^{n} \frac{\phi(x_i; \widehat\beta_p, \widehat k_p) - \widehat\alpha_p}{1 + \widehat\xi\{\phi(x_i; \widehat\beta_p, \widehat k_p) - \widehat\alpha_p\}} = 0.$$
For this purpose, we define a function:
$$\begin{aligned}
\ell(N, \beta, k, \alpha, \xi) ={}& \log\binom{N}{n} + (N-n)\log(\alpha) - \sum_{i=1}^{n} \log[1 + \xi\{\phi(x_i;\beta,k) - \alpha\}]\\
&+ \sum_{i=1}^{n}\left[\log\binom{y_i+k-1}{y_i} + k\log\left\{\frac{k}{k+\mu(x_i;\beta)}\right\} + y_i\log\left\{\frac{\mu(x_i;\beta)}{k+\mu(x_i;\beta)}\right\}\right] - n\log(n) + f_p(N).
\end{aligned}$$
It can be seen that $\ell_p(N, \beta, k, \alpha) = \ell(N, \beta, k, \alpha, \xi^*)$, where $\xi^*$ is the solution to $\partial\ell/\partial\xi = 0$. The first partial derivatives of $\ell(N, \beta, k, \alpha, \xi)$ with respect to $\alpha$ and $\xi$ are:
$$\frac{\partial\ell}{\partial\alpha} = \frac{N-n}{\alpha} + \xi\sum_{i=1}^{n}\frac{1}{1+\xi\{\phi(x_i;\beta,k)-\alpha\}}, \qquad \frac{\partial\ell}{\partial\xi} = -\sum_{i=1}^{n}\frac{\phi(x_i;\beta,k)-\alpha}{1+\xi\{\phi(x_i;\beta,k)-\alpha\}}.$$
Setting the above equations to zero gives $\widehat\xi = -(\widehat N_p - n)/(n\,\widehat\alpha_p)$. Since $\alpha_0$ is the probability of never being captured, $n$ follows a binomial distribution $\mathrm{Bi}(N_0, 1-\alpha_0)$. When $(\widehat N_p, \widehat\alpha_p)$ is consistent, it follows from the strong law of large numbers that as $N_0 \to \infty$:
$$\widehat\xi = -\frac{\widehat N_p/N_0 - n/N_0}{(n/N_0)\,\widehat\alpha_p} \stackrel{p}{\longrightarrow} -\frac{1-(1-\alpha_0)}{(1-\alpha_0)\,\alpha_0} = -\frac{1}{1-\alpha_0} = \xi_0,$$
where $\stackrel{p}{\longrightarrow}$ denotes convergence in probability.
Below, we derive the formulae of $V$ and $\Sigma$. Let $\gamma = (\gamma_1, \gamma_2^\top, \gamma_3, \gamma_4, \gamma_5)^\top$, with $\gamma_1 = \sqrt{N_0}(N/N_0 - 1)$, $\gamma_2 = \sqrt{N_0}(\beta - \beta_0)$, $\gamma_3 = \sqrt{N_0}(k - k_0)$, $\gamma_4 = \sqrt{N_0}(\alpha - \alpha_0)$, and $\gamma_5 = \sqrt{N_0}(\xi - \xi_0)$. Define:
$$H(\gamma) = \ell(N_0 + N_0^{1/2}\gamma_1,\ \beta_0 + N_0^{-1/2}\gamma_2,\ k_0 + N_0^{-1/2}\gamma_3,\ \alpha_0 + N_0^{-1/2}\gamma_4,\ \xi_0 + N_0^{-1/2}\gamma_5).$$
According to Lemma 2 in the Supplementary Material of ref. [16], deriving the formula of $V$ is equivalent to calculating the first two partial derivatives of $H(\gamma)$ with respect to $\gamma$ at $0$. It follows from the law of large numbers and the central limit theorem that:
$$\begin{aligned}
\frac{\partial H(0)}{\partial\gamma_1} &= N_0^{1/2}\{S_1(N_0, n) + \log(\alpha_0) + f_p'(N_0)\} = N_0^{1/2}\left\{\frac{n/N_0 - 1}{\alpha_0} + 1\right\} + O_p(N_0^{-1/2}),\\
\frac{\partial H(0)}{\partial\gamma_2} &= N_0^{-1/2}\sum_{i=1}^{n}\left\{-\frac{\mu(x_i;\beta_0)}{1-\phi(x_i;\beta_0,k_0)} + y_i\right\}\frac{k_0\, x_i}{k_0+\mu(x_i;\beta_0)},\\
\frac{\partial H(0)}{\partial\gamma_3} &= N_0^{-1/2}\sum_{i=1}^{n}\left\{\frac{\psi(x_i;\beta_0,k_0)}{1-\phi(x_i;\beta_0,k_0)} - \frac{y_i}{k_0+\mu(x_i;\beta_0)} + S_1(y_i+k_0-1,\, y_i)\right\},\\
\frac{\partial H(0)}{\partial\gamma_4} &= N_0^{-1/2}\left\{\frac{N_0-n}{\alpha_0} - \sum_{i=1}^{n}\frac{1}{1-\phi(x_i;\beta_0,k_0)}\right\},\\
\frac{\partial H(0)}{\partial\gamma_5} &= -N_0^{-1/2}(1-\alpha_0)\sum_{i=1}^{n}\frac{\phi(x_i;\beta_0,k_0)-\alpha_0}{1-\phi(x_i;\beta_0,k_0)},
\end{aligned}$$
where the first equation uses result (a) of Lemma A1 and Equation (A1) of Lemma A2 in Appendix A. Using result (b) of Lemma A1 and Equation (A2) of Lemma A2, we have:
$$\frac{\partial^2 H(0)}{\partial\gamma_1^2} = N_0\{S_2(N_0, n) + f_p''(N_0)\} = -\frac{1-\alpha_0}{\alpha_0} + O_p(N_0^{-1/2}).$$
Similarly, it can be verified that $\partial^2 H(0)/(\partial\gamma_4\partial\gamma_1) = \partial^2 H(0)/(\partial\gamma_1\partial\gamma_4) = \alpha_0^{-1} + O_p(N_0^{-1/2})$ and:
$$\begin{aligned}
\frac{\partial^2 H(0)}{\partial\gamma_2\partial\gamma_2^\top} &= N_0^{-1}\sum_{i=1}^{n}\left\{\frac{k_0\,\phi(x_i;\beta_0,k_0)\,\mu(x_i;\beta_0)}{\{1-\phi(x_i;\beta_0,k_0)\}^2} - \frac{k_0}{1-\phi(x_i;\beta_0,k_0)} - y_i\right\}\frac{k_0\,\mu(x_i;\beta_0)\, x_i x_i^\top}{\{k_0+\mu(x_i;\beta_0)\}^2},\\
\frac{\partial^2 H(0)}{\partial\gamma_2\partial\gamma_3} &= \frac{\partial^2 H(0)}{\partial\gamma_3\partial\gamma_2} = N_0^{-1}\sum_{i=1}^{n}\left[-\frac{k_0\,\phi(x_i;\beta_0,k_0)\,\psi(x_i;\beta_0,k_0)}{\{1-\phi(x_i;\beta_0,k_0)\}^2} + \frac{y_i}{k_0+\mu(x_i;\beta_0)} - \frac{1}{1-\phi(x_i;\beta_0,k_0)}\cdot\frac{\mu(x_i;\beta_0)}{k_0+\mu(x_i;\beta_0)}\right]\frac{\mu(x_i;\beta_0)\, x_i}{k_0+\mu(x_i;\beta_0)},\\
\frac{\partial^2 H(0)}{\partial\gamma_2\partial\gamma_4} &= \frac{\partial^2 H(0)}{\partial\gamma_4\partial\gamma_2} = N_0^{-1}\sum_{i=1}^{n}\frac{\phi(x_i;\beta_0,k_0)}{\{1-\phi(x_i;\beta_0,k_0)\}^2}\cdot\frac{k_0\,\mu(x_i;\beta_0)\, x_i}{k_0+\mu(x_i;\beta_0)},\\
\frac{\partial^2 H(0)}{\partial\gamma_2\partial\gamma_5} &= \frac{\partial^2 H(0)}{\partial\gamma_5\partial\gamma_2} = (1-\alpha_0)^2 N_0^{-1}\sum_{i=1}^{n}\frac{\phi(x_i;\beta_0,k_0)}{\{1-\phi(x_i;\beta_0,k_0)\}^2}\cdot\frac{k_0\,\mu(x_i;\beta_0)\, x_i}{k_0+\mu(x_i;\beta_0)},\\
\frac{\partial^2 H(0)}{\partial\gamma_3^2} &= N_0^{-1}\sum_{i=1}^{n}\left[\frac{\phi(x_i;\beta_0,k_0)\,\psi^2(x_i;\beta_0,k_0)}{\{1-\phi(x_i;\beta_0,k_0)\}^2} + \frac{1}{1-\phi(x_i;\beta_0,k_0)}\cdot\frac{\mu^2(x_i;\beta_0)}{k_0\{k_0+\mu(x_i;\beta_0)\}^2} + \frac{y_i}{\{k_0+\mu(x_i;\beta_0)\}^2} + S_2(y_i+k_0-1,\, y_i)\right],\\
\frac{\partial^2 H(0)}{\partial\gamma_3\partial\gamma_4} &= \frac{\partial^2 H(0)}{\partial\gamma_4\partial\gamma_3} = -N_0^{-1}\sum_{i=1}^{n}\frac{\phi(x_i;\beta_0,k_0)\,\psi(x_i;\beta_0,k_0)}{\{1-\phi(x_i;\beta_0,k_0)\}^2},\\
\frac{\partial^2 H(0)}{\partial\gamma_3\partial\gamma_5} &= \frac{\partial^2 H(0)}{\partial\gamma_5\partial\gamma_3} = -(1-\alpha_0)^2 N_0^{-1}\sum_{i=1}^{n}\frac{\phi(x_i;\beta_0,k_0)\,\psi(x_i;\beta_0,k_0)}{\{1-\phi(x_i;\beta_0,k_0)\}^2},\\
\frac{\partial^2 H(0)}{\partial\gamma_4^2} &= N_0^{-1}\sum_{i=1}^{n}\frac{1}{\{1-\phi(x_i;\beta_0,k_0)\}^2} - \frac{1-n/N_0}{\alpha_0^2},\\
\frac{\partial^2 H(0)}{\partial\gamma_4\partial\gamma_5} &= \frac{\partial^2 H(0)}{\partial\gamma_5\partial\gamma_4} = (1-\alpha_0)^2 N_0^{-1}\sum_{i=1}^{n}\frac{1}{\{1-\phi(x_i;\beta_0,k_0)\}^2},\\
\frac{\partial^2 H(0)}{\partial\gamma_5^2} &= (1-\alpha_0)^4 N_0^{-1}\sum_{i=1}^{n}\frac{1}{\{1-\phi(x_i;\beta_0,k_0)\}^2} - (1-\alpha_0)^3.
\end{aligned}$$
With the arguments of Lemma A3 in Appendix A, the leading term of $\partial^2 H(0)/(\partial\gamma\,\partial\gamma^\top)$ is as follows:
$$\frac{\partial^2 H(0)}{\partial\gamma\,\partial\gamma^\top} = V + O_p(N_0^{-1/2}), \qquad V = \begin{pmatrix} V_{11} & 0 & 0 & V_{14} & 0\\ 0 & V_{22} & V_{23} & V_{24} & V_{25}\\ 0 & V_{32} & V_{33} & V_{34} & V_{35}\\ V_{41} & V_{42} & V_{43} & V_{44} & V_{45}\\ 0 & V_{52} & V_{53} & V_{54} & V_{55} \end{pmatrix},$$
where the $V_{ij}$'s are the same as those defined in Equation (5).
Next, we derive the formula of $\Sigma$. Define $u_n = (u_{n1}, u_{n2}^\top, u_{n3}, u_{n4}, u_{n5})^\top$, where:
$$u_{n1} = N_0^{1/2}\left\{\frac{n/N_0 - 1}{\alpha_0} + 1\right\}, \quad u_{n2} = \frac{\partial H(0)}{\partial\gamma_2}, \quad u_{n3} = \frac{\partial H(0)}{\partial\gamma_3}, \quad u_{n4} = \frac{\partial H(0)}{\partial\gamma_4}, \quad u_{n5} = \frac{\partial H(0)}{\partial\gamma_5}.$$
It can be verified that $\partial H(0)/\partial\gamma = u_n + O_p(N_0^{-1/2})$ and $E(u_n) = 0$. Here, we only verify that $E(u_{n2}) = 0$ and $E(u_{n3}) = 0$. In fact, it follows from Lemma A3 that:
$$\begin{aligned}
E(u_{n2}) &= N_0^{1/2} E\left[-\frac{k_0\, X\, \mu(X;\beta_0)}{k_0+\mu(X;\beta_0)} + \frac{k_0\, X\, E(Y \mid X)}{k_0+\mu(X;\beta_0)}\right] = 0,\\
E(u_{n3}) &= N_0^{1/2} E\left[\psi(X;\beta_0,k_0) - \frac{E(Y \mid X)}{k_0+\mu(X;\beta_0)} + E\{S_1(Y+k_0-1, Y) \mid X\}\right]\\
&= N_0^{1/2} E\left[\psi(X;\beta_0,k_0) - \frac{\mu(X;\beta_0)}{k_0+\mu(X;\beta_0)} - \log\left\{\frac{k_0}{k_0+\mu(X;\beta_0)}\right\}\right] = 0,
\end{aligned}$$
where the second-to-last equation uses Equation (A3) of Lemma A4 in Appendix A.
For the covariance matrix of $u_n$, it follows from Lemma A4 that:
$$\begin{aligned}
\mathrm{Var}(u_{n1}) &= \alpha_0^{-1} - 1, \qquad \mathrm{Cov}(u_{n4}, u_{n1}) = -\alpha_0^{-1},\\
\mathrm{Cov}(u_{n2}, u_{n1}) &= \mathrm{Cov}(u_{n2}, u_{n4}) = \mathrm{Cov}(u_{n2}, u_{n5}) = 0,\\
\mathrm{Cov}(u_{n3}, u_{n1}) &= \mathrm{Cov}(u_{n3}, u_{n4}) = \mathrm{Cov}(u_{n3}, u_{n5}) = \mathrm{Cov}(u_{n5}, u_{n1}) = 0,\\
\mathrm{Var}(u_{n2}) &= E\left[\frac{1}{N_0}\sum_{i=1}^{N_0}\left\{\left(-\frac{\mu(X_i;\beta_0)}{1-\phi(X_i;\beta_0,k_0)} + Y_i\right)\frac{k_0\, X_i\, I(Y_i>0)}{k_0+\mu(X_i;\beta_0)}\right\}^{\otimes 2}\right] \quad (a^{\otimes 2} = aa^\top)\\
&= E\left[\left\{\frac{\mu^2(X;\beta_0)}{\{1-\phi(X;\beta_0,k_0)\}^2} + Y^2 - \frac{2\,Y\mu(X;\beta_0)}{1-\phi(X;\beta_0,k_0)}\right\}\frac{k_0^2\, I(Y>0)\, XX^\top}{\{k_0+\mu(X;\beta_0)\}^2}\right]\\
&= E\left[\left\{-\frac{k_0\,\mu(X;\beta_0)\,\phi(X;\beta_0,k_0)}{1-\phi(X;\beta_0,k_0)} + k_0 + \mu(X;\beta_0)\right\}\frac{k_0\,\mu(X;\beta_0)\, XX^\top}{\{k_0+\mu(X;\beta_0)\}^2}\right] = V_{22},\\
\mathrm{Cov}(u_{n2}, u_{n3}) &= E\left[\left\{-\frac{\mu(X;\beta_0)}{1-\phi(X;\beta_0,k_0)} + Y\right\}\left\{\frac{\psi(X;\beta_0,k_0)}{1-\phi(X;\beta_0,k_0)} - \frac{Y}{k_0+\mu(X;\beta_0)} + S_1(Y+k_0-1, Y)\right\}\frac{k_0\, X\, I(Y>0)}{k_0+\mu(X;\beta_0)}\right]\\
&= E\left[\left\{\frac{\phi(X;\beta_0,k_0)\,\psi(X;\beta_0,k_0)}{1-\phi(X;\beta_0,k_0)} - \frac{1}{k_0} + \frac{E\{Y S_1(Y+k_0-1, Y) \mid X\}}{\mu(X;\beta_0)} + \log\frac{k_0}{k_0+\mu(X;\beta_0)}\right\}\frac{k_0\,\mu(X;\beta_0)\, X}{k_0+\mu(X;\beta_0)}\right] = V_{23},\\
\mathrm{Var}(u_{n3}) &= E\left[\left\{\frac{\psi(X;\beta_0,k_0)}{1-\phi(X;\beta_0,k_0)} - \frac{Y}{k_0+\mu(X;\beta_0)} + S_1(Y+k_0-1, Y)\right\}^2 I(Y>0)\right]\\
&= E\left[-\frac{\psi^2(X;\beta_0,k_0)}{1-\phi(X;\beta_0,k_0)} + \frac{(k_0+1)\,\mu^2(X;\beta_0) + k_0\,\mu(X;\beta_0)}{k_0\{k_0+\mu(X;\beta_0)\}^2} + E[\{S_1(Y+k_0-1, Y)\}^2 \mid X] - \frac{2\,E\{Y S_1(Y+k_0-1, Y) \mid X\}}{k_0+\mu(X;\beta_0)}\right] = V_{33},\\
\mathrm{Var}(u_{n4}) &= N_0^{-1} E\left[\left\{\sum_{i=1}^{N_0}\left(\frac{I(Y_i=0)}{\alpha_0} - \frac{I(Y_i>0)}{1-\phi(X_i;\beta_0,k_0)}\right)\right\}^2\right] = E\left[\left\{\frac{I(Y=0)}{\alpha_0} - \frac{I(Y>0)}{1-\phi(X;\beta_0,k_0)}\right\}^2\right] = \alpha_0^{-1} + E\left[\frac{1}{1-\phi(X;\beta_0,k_0)}\right],\\
\mathrm{Cov}(u_{n4}, u_{n5}) &= E\left[-\frac{1-\alpha_0}{N_0}\sum_{i=1}^{N_0}\left\{\frac{I(Y_i=0)}{\alpha_0} - \frac{I(Y_i>0)}{1-\phi(X_i;\beta_0,k_0)}\right\}\cdot\frac{\{\phi(X_i;\beta_0,k_0)-\alpha_0\}\, I(Y_i>0)}{1-\phi(X_i;\beta_0,k_0)}\right]\\
&= -(1-\alpha_0) + (1-\alpha_0)^2 E\left[\frac{1}{1-\phi(X;\beta_0,k_0)}\right],\\
\mathrm{Var}(u_{n5}) &= \frac{(1-\alpha_0)^2}{N_0} E\left[\sum_{i=1}^{N_0}\frac{\{\phi(X_i;\beta_0,k_0)-\alpha_0\}^2\, I(Y_i>0)}{\{1-\phi(X_i;\beta_0,k_0)\}^2}\right] = (1-\alpha_0)^2 E\left[\frac{\{\phi(X;\beta_0,k_0)-\alpha_0\}^2}{1-\phi(X;\beta_0,k_0)}\right] = V_{55}.
\end{aligned}$$
By the central limit theorem, as $N_0 \to \infty$ we have $u_n \stackrel{d}{\longrightarrow} N(0, \Sigma)$, where:
$$\Sigma = \begin{pmatrix}
V_{11} & 0 & 0 & V_{14} & 0\\
0 & V_{22} & V_{23} & 0 & 0\\
0 & V_{32} & V_{33} & 0 & 0\\
V_{41} & 0 & 0 & \dfrac{2V_{45}}{(1-\alpha_0)^2} - V_{44} & \dfrac{V_{55}}{(1-\alpha_0)^2}\\
0 & 0 & 0 & \dfrac{V_{55}}{(1-\alpha_0)^2} & V_{55}
\end{pmatrix}.$$
Since the covariance matrix $\Sigma$ has the same form as the $\Sigma$ in Lemma 3 of the Supplementary Material of ref. [16], so does the matrix $W$. The rest of the proof is similar and is omitted. This completes the proof of Theorem 1. □
When $C = 0$, there is no penalty term, and the likelihood functions with and without penalty coincide. This implies that the asymptotic results in Theorem 1 also hold for the empirical likelihood estimators without penalty. Utilizing result (c), a penalized empirical likelihood ratio interval estimator can be constructed, namely:
$$\mathcal{I}_p = \left\{N : 2\left[\ell_p(\widehat N_p, \widehat\beta_p, \widehat k_p, \widehat\alpha_p) - \max_{(\beta,k,\alpha)} \ell_p(N, \beta, k, \alpha)\right] \le \chi^2_1(1-a)\right\},$$
where $\chi^2_1(1-a)$ stands for the $(1-a)$th quantile of $\chi^2_1$. Correspondingly, the empirical likelihood ratio interval estimator derived without penalty is:
$$\mathcal{I} = \left\{N : 2\left[\ell(\widehat N, \widehat\beta, \widehat k, \widehat\alpha) - \max_{(\beta,k,\alpha)} \ell(N, \beta, k, \alpha)\right] \le \chi^2_1(1-a)\right\}.$$
Although both interval estimators asymptotically attain the correct coverage probability $(1-a)$, our simulation studies indicate that $\mathcal{I}_p$ generally outperforms $\mathcal{I}$ in terms of interval width.
Remark 1.
One might question whether overdispersion exists, or equivalently, whether the zero-truncated Poisson regression model adequately fits the data. Various methods have been proposed to address this question; see, for instance, refs. [8,11,25,26].

2.4. Numerical Implementation

In this section, we develop an EM algorithm to facilitate the proposed estimation method described in Section 2.3. For clarity of presentation, we begin by considering the special case where $N$ is fixed. Our primary objective is then to maximize the profile penalized empirical log-likelihood function for a given $N$, as specified in Equation (4). In other words, we design an EM algorithm to calculate the maximum penalized empirical likelihood estimator of $(\beta, k, \alpha)$ when $N$ is fixed.
In this case, the observed data can be represented as $O = \{(y_1, x_1), (y_2, x_2), \ldots, (y_n, x_n)\}$, where each $y_i$ is positive. The complete-data formulation additionally includes the counts $(y_{n+1}, y_{n+2}, \ldots, y_N)$ of the uncaptured individuals, all of which are zero. For these individuals, the covariate information is missing and is represented as $O^* = (x^*_{n+1}, x^*_{n+2}, \ldots, x^*_N)$. According to the principle of empirical likelihood, the potential values of the $x^*_i$'s are drawn from $(x_1, x_2, \ldots, x_n)$, with associated probabilities $(p_1, p_2, \ldots, p_n)$.
The observed and missing data together constitute the complete data, whose likelihood is:
$$\prod_{i=1}^{n} \{P(X = x_i)\, P(Y = y_i \mid X = x_i)\} \times \prod_{i=n+1}^{N} \{P(X = x_i^*)\, P(Y = 0 \mid X = x_i^*)\}.$$
Correspondingly, the log-likelihood of $\theta = (\beta, k, \alpha, \{p_i\})$ becomes:
$$\ell_c(\theta) = \sum_{i=1}^{n} [\log\{f(y_i, x_i; \beta, k)\} + \log(p_i)] + \sum_{j=n+1}^{N} \sum_{i=1}^{n} I(x_j^* = x_i)\left[\log\{f(0, x_i; \beta, k)\} + \log(p_i)\right].$$
The core of the EM algorithm is its iterative process, which consists of an expectation step (E-step) followed by a maximization step (M-step) in each iteration. Before these two steps, we use $\theta^{\mathrm{old}} = (\beta^{\mathrm{old}}, k^{\mathrm{old}}, \alpha^{\mathrm{old}}, \{p_i^{\mathrm{old}}\})$ to denote the current value of the parameters. In the E-step, we need to compute the expectation of $\ell_c(\theta)$ conditional on $O$ and $\theta^{\mathrm{old}}$. For this purpose, we calculate the conditional expectation of the indicator $I(x_j^* = x_i)$, which is equal to:
$$P(X = x_i \mid Y = 0, \theta = \theta^{\mathrm{old}}) = \frac{\phi(x_i; \beta^{\mathrm{old}}, k^{\mathrm{old}})\, p_i^{\mathrm{old}}}{\alpha^{\mathrm{old}}},$$
where $\alpha^{\mathrm{old}} = \sum_{i=1}^{n} \phi(x_i; \beta^{\mathrm{old}}, k^{\mathrm{old}})\, p_i^{\mathrm{old}}$ denotes the current value of $\alpha$. Correspondingly, the conditional expectation of the log-likelihood $\ell_c(\theta)$ is equal to:
$$Q(\theta \mid \theta^{\mathrm{old}}) = \sum_{i=1}^{n} \left[\log\{f(y_i, x_i; \beta, k)\} + u_i^{\mathrm{old}} \log\{f(0, x_i; \beta, k)\}\right] + \sum_{i=1}^{n} (1 + u_i^{\mathrm{old}}) \log(p_i) =: \ell_1(\beta, k) + \ell_2(\{p_i\}),$$
where $u_i^{\mathrm{old}} = (N-n)\,\phi(x_i; \beta^{\mathrm{old}}, k^{\mathrm{old}})\, p_i^{\mathrm{old}}/\alpha^{\mathrm{old}}$ represents the weight for $i = 1, \ldots, n$.
The M-step consists of maximizing the function $Q(\theta \mid \theta^{\mathrm{old}})$. The separation of the parameters $(\beta, k)$ and the $p_i$'s makes the maximization procedure much more tractable, and it can be implemented using the following steps.
Step 1. 
Update $(\beta, k)$ to $(\beta^{\mathrm{new}}, k^{\mathrm{new}})$ by maximizing $\ell_1(\beta, k)$. Given that $\ell_1(\beta, k)$ can be interpreted as a weighted log-likelihood function, maximizing it is equivalent to fitting a standard negative binomial regression model to the observed counts $(y_1, y_2, \ldots, y_n)$ together with an $n$-dimensional zero vector, using covariates $(x_1, x_2, \ldots, x_n, x_1, x_2, \ldots, x_n)$ and weights $(1, 1, \ldots, 1, u_1^{\mathrm{old}}, u_2^{\mathrm{old}}, \ldots, u_n^{\mathrm{old}})$. This step can be readily implemented through the glm.nb() function from the MASS package in R; see the sketch after these steps.
Step 2. 
Update the $p_i$ values by maximizing $\ell_2(p_1, \ldots, p_n)$ under the positivity and sum-to-one constraints. This step has a closed form, namely, $p_i^{\mathrm{new}} = (u_i^{\mathrm{old}} + 1)/\sum_{j=1}^{n} (u_j^{\mathrm{old}} + 1)$ for $i = 1, \ldots, n$.
Step 3. 
Update $\alpha$ by calculating $\alpha^{\mathrm{new}} = \sum_{i=1}^{n} \phi(x_i; \beta^{\mathrm{new}}, k^{\mathrm{new}})\, p_i^{\mathrm{new}}$.
The E- and M-steps are repeated until the sequence of $(\beta^{\mathrm{new}}, k^{\mathrm{new}}, \alpha^{\mathrm{new}}, \{p_i^{\mathrm{new}}\})$ or $\widetilde\ell_p(N, \beta^{\mathrm{new}}, k^{\mathrm{new}}, \alpha^{\mathrm{new}}, \{p_i^{\mathrm{new}}\})$ converges. The EM algorithm outlined above exhibits a desirable property under very general circumstances: the penalized empirical likelihood does not decrease across successive iterations. Given that the penalized empirical log-likelihood is bounded above by zero, convergence of the sequence $(\beta^{\mathrm{new}}, k^{\mathrm{new}}, \alpha^{\mathrm{new}}, \{p_i^{\mathrm{new}}\})$ to a local maximum of $\widetilde\ell_p(N, \beta, k, \alpha, \{p_i\})$ is always guaranteed.
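To make Step 1 concrete, here is a minimal R sketch of the M-step update of $(\beta, k)$, assuming the counts y, an $n \times p$ model matrix X, and the E-step weights u.old are in scope (names are illustrative; glm.nb() may warn about non-integer weights but still maximizes the weighted log-likelihood $\ell_1(\beta, k)$):

```r
## M-step update of (beta, k) via a weighted NB fit; y, X, u.old assumed.
library(MASS)
yy <- c(y, rep(0, length(y)))       # observed counts followed by n zeros
XX <- rbind(X, X)                   # covariates duplicated
ww <- c(rep(1, length(y)), u.old)   # unit weights, then E-step weights
fit <- glm.nb(yy ~ XX - 1, weights = ww)
beta.new <- coef(fit)               # updated regression coefficients
k.new <- fit$theta                  # updated dispersion parameter k
```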
To compute the maximum penalized empirical likelihood estimator $(\widehat N_p, \widehat\beta_p, \widehat k_p, \widehat\alpha_p)$, the aforementioned EM algorithm remains applicable after some modifications. In this scenario, the current parameter is denoted by $\theta^{\mathrm{old}} = (N^{\mathrm{old}}, \beta^{\mathrm{old}}, k^{\mathrm{old}}, \alpha^{\mathrm{old}}, \{p_i^{\mathrm{old}}\})$ and the weight in the E-step is $u_i^{\mathrm{old}} = (N^{\mathrm{old}} - n)\,\phi(x_i; \beta^{\mathrm{old}}, k^{\mathrm{old}})\, p_i^{\mathrm{old}}/\alpha^{\mathrm{old}}$. In addition, the M-step incorporates a maximization step for the population size parameter.
Step 4. 
Calculate the updated value $N^{\mathrm{new}}$ by maximizing the partial log-likelihood function relevant to $N$, expressed as $\log\binom{N}{n} + (N-n)\log(\alpha^{\mathrm{new}}) + f_p(N)$. This optimization can be efficiently performed using the optimize() function available in the R software (version 4.3.1, https://www.r-project.org/).
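A sketch of Step 4 in R, assuming n, alpha.new, nu, C, and the penalty helper f.pen() from the sketch in Section 2.3 are in scope (the search cap 100 * n is an arbitrary illustrative choice; lchoose() extends the log binomial coefficient to non-integer N):

```r
## Step 4: maximize the partial log-likelihood in N with optimize().
part.ll <- function(N)
  lchoose(N, n) + (N - n) * log(alpha.new) + f.pen(N, nu, C)
N.new <- optimize(part.ll, interval = c(n, 100 * n), maximum = TRUE)$maximum
```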
The penalized empirical likelihood ratio confidence interval for $N$ is computed by identifying the two zeros of the modified penalized likelihood ratio function:
$$m(N) = 2\left[\ell_p(\widehat N_p, \widehat\beta_p, \widehat k_p, \widehat\alpha_p) - \max_{(\beta,k,\alpha)} \ell_p(N, \beta, k, \alpha)\right] - \chi^2_1(1-a), \tag{6}$$
where the search for these zeros is conducted within the intervals $[n, \widehat N_p]$ and $[\widehat N_p, M]$, and $M$ is a sufficiently large user-specified value ensuring that $m(M) > 0$. This can be implemented via the uniroot() function available in the R software. In summary, the pseudocodes outlined in Algorithms A1–A3 (Appendix B) offer the procedures for calculating both the maximum penalized empirical likelihood estimator and the corresponding penalized empirical likelihood ratio confidence interval for $N$.
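The root search can be sketched in R as follows, assuming $m(N)$ of Equation (6) is implemented as m.fun() and N.hat.p holds the point estimate (illustrative names):

```r
## Confidence limits as the two zeros of m(N), found with uniroot().
lower <- uniroot(m.fun, interval = c(n, N.hat.p))$root
M <- 2 * N.hat.p
while (m.fun(M) <= 0) M <- 2 * M   # enlarge M until m(M) > 0
upper <- uniroot(m.fun, interval = c(N.hat.p, M))$root
```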

3. Simulations

To demonstrate the efficiency of the penalized empirical likelihood estimators, several simulations are conducted and multiple synthetic datasets are analyzed. In the simulation settings, we fix the abundance $N_0$ at 250, 500, and 1000. We consider two different scenarios for generating the covariate $X$:
(A)
A binary variable, $X \sim \mathrm{Bi}(1, 0.3)$, is used to represent a discrete-valued covariate, as in the case study presented in Section 4.
(B)
Alternatively, a continuous variable, $X \sim U(0, 1)$, is considered.
Given $X = (1, X)^\top$, we simulate the count response $Y$ from the negative binomial regression model (1), where the regression coefficient $\beta_0$ is set to $(-0.5, 0.3)^\top$ or $(-0.1, 0.3)^\top$ and the dispersion parameter $k_0$ is set to 0.5, 1, or 5.
Under each of the 18 ($3 \times 2 \times 3$) parameter combinations, we simulate 2000 random samples for each scenario. Subsequently, we calculate the maximum empirical likelihood estimators ($\widehat N$ and $\widehat N_p$) as well as the empirical likelihood ratio interval estimators ($\mathcal{I}$ and $\mathcal{I}_p$), where the estimators with the subscript $p$ are derived via the penalized method. When the penalty is applied, we recommend the adaptive tuning parameter value $C = \{2n(\nu - n)^2\}^{-1}$, which proves effective in our numerical studies.

3.1. Evaluation of Point Estimators

We evaluate the performance of the two point estimators, $\widehat N$ and $\widehat N_p$, by assessing their relative bias in percent (%Rbias) and relative mean squared error (RMSE). For a generic estimator $\check N$, the %Rbias and RMSE are:
$$\%\mathrm{Rbias} = \frac{1}{2000}\sum_{j=1}^{2000} \frac{\check N_j - N_0}{N_0} \times 100 \quad \text{and} \quad \mathrm{RMSE} = \frac{1}{2000}\sum_{j=1}^{2000} \frac{(\check N_j - N_0)^2}{N_0},$$
respectively, where N ˇ j represents the estimate derived from the jth random sample. Table 1 reports the simulated %Rbiases and RMSEs of both estimators N ^ and N ^ p .
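For a vector N.check of simulated estimates and the true abundance N0, these two criteria can be computed in R as follows (variable names are ours, for illustration):

```r
## %Rbias and relative MSE of a vector of simulated estimates.
rbias <- mean((N.check - N0) / N0) * 100
rmse  <- mean((N.check - N0)^2) / N0
```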
We first examine the simulation results when $\beta_0 = (-0.1, 0.3)^\top$, for which the average capture probability is 56%. From the last two columns of Table 1, we see that the %Rbiases and RMSEs of the two estimators are comparable when $k_0 = 5$, where the variance-to-mean ratio (VMR) is as low as 1.3. However, the results differ significantly when $k_0$ decreases to 1 and further to 0.5, with VMRs of 2.3 and 3.5, respectively. As the degree of overdispersion increases, the maximum empirical likelihood estimator $\widehat N$ uniformly exhibits larger biases and RMSEs than the maximum penalized empirical likelihood estimator $\widehat N_p$. For example, in Scenario A with $k_0 = 0.5$ and $N_0 = 250$, the relative bias of $\widehat N$ is as large as 29.8%, and the RMSE of $\widehat N$ is about 44 times (1768/40) the RMSE of $\widehat N_p$.
Next, we examine the results when $\beta_0 = (-0.5, 0.3)^\top$. In this scenario, the %Rbiases and RMSEs of both estimators are larger than those when $\beta_0 = (-0.1, 0.3)^\top$. This is expected because the average capture probability is reduced from 56% to 40%, so the sample provides less information about the model parameters. Conclusions similar to those in the previous scenario can be drawn, with the advantages of $\widehat N_p$ over $\widehat N$ being even clearer. All relative biases of $\widehat N_p$ are smaller than 10%. In contrast, $\widehat N$ generally overestimates the population size, with the largest relative bias approaching 83% in Scenario B when $k_0 = 0.5$ and $N_0 = 250$. In this case, its RMSE is about 58 times (4222/73) the RMSE of $\widehat N_p$.

3.2. Evaluation of Interval Estimators

We evaluate and contrast the performance of the two empirical likelihood ratio interval estimators: $\mathcal{I}_p$ (with penalty) and $\mathcal{I}$ (without penalty). This comparison focuses on their coverage probabilities and interval widths. As discussed in the introduction, the interval estimators may have unbounded upper limits. In the simulation, an interval estimate's upper limit was therefore set to the minimum of its right endpoint and 100 times the sample size.
Table 2 presents the simulation results at the 95% confidence level. Overall, the coverage accuracy of the two interval estimators is similar, with coverage probabilities either matching or slightly exceeding the nominal 95% level. However, the widths of I p are always shorter than those of I , indicating that the penalized empirical likelihood ratio interval estimator offers greater precision. Specifically, the width of I p is 12% (605/5112) of that of I in Scenario B when N 0 = 250 , k 0 = 0.5 , and β 0 = ( 0.1 , 0.3 ) .
Finally, we explore possible explanations for the significant differences in the widths of the confidence intervals. For this purpose, Table 2 also reports the proportion of bounded cases, in which the upper limits are less than 100 times the observed sample sizes; in the remaining cases, the upper limits can be regarded as unbounded. From Table 2, it is clear that there are many unbounded cases for $\mathcal{I}$. The smaller $N_0$ or $k_0$, the more likely this boundary problem occurs. In contrast, the upper limits of $\mathcal{I}_p$ are always bounded. This demonstrates that the penalized empirical likelihood method effectively reduces the occurrence of boundary problems, leading to more precise intervals.

4. Case Study

Among the various bear species found in North America, the black bear (Ursus americanus) stands out as the most widely distributed game species. However, their populations have become fragmented, owing to over-harvesting. Understanding black bear demography is crucial for informing effective management and conservation strategies [27].
This study aims to make statistical inferences for the abundance of black bears at the U.S. Army’s Fort Drum Military Installation in northern New York. For this purpose, the capture–recapture experiment was conducted using an array of 38 baited “hair snares” during June and July 2006. The study area and the locations of the 38 hair snares are illustrated in Figure 1.4 in ref. [28]. Each week, for eight consecutive weeks, barbed wire traps were baited and checked for hair samples. Over the 8-week period, a total of 47 black bears were captured at least once, with their sex recorded upon capture. The original data structure also includes the trap locations and whether each bear was captured at each location. Although capture status was recorded for each bear across all 38 hair snares, we treat this as a standard capture–recapture dataset. Specifically, a bear was considered captured if it was caught at least once in any given week. The analyzed data include the number of weeks that bears were captured and the sex of each bear; see Table 3 for the frequencies.
We first examine the estimation results for abundance in the framework of the Poisson regression model. With this model, we find that the maximum empirical likelihood estimate equals 52. This point estimate is smaller than the Chao lower bound estimate of 63, indicating that the Poisson regression model may not be adequate for this dataset. This inadequacy is further illustrated by the ratio plot (see Figure 1A), which indicates a nonconstant relationship between the ratios of frequencies and the number of captures.
Now, we examine the estimation results in the framework of the negative binomial regression model. Applying the maximum empirical likelihood methods with and without penalty, we find that both approaches yield a point estimate of 72 with a standard error of 8.9 for the abundance, significantly higher than the estimate of 52 derived from the Poisson regression model. This difference can be attributed to overdispersion present in the negative binomial regression model, as indicated by an overdispersion parameter estimate of 1.2 with a standard error of 0.2. For completeness, we also calculate the ratio regression estimate proposed in ref. [12], which is 74 with a standard error of 10.2. Our point estimate is close to the ratio regression estimate but more efficient, as indicated by the smaller standard error.
Applying the empirical likelihood method with and without penalty, we obtain significantly different interval estimates of the abundance. At the 95% confidence level, the empirical likelihood ratio interval estimate ($\mathcal{I}$) is [52, 1343], whereas the penalized empirical likelihood ratio interval estimate ($\mathcal{I}_p$) is [52, 201]. Clearly, $\mathcal{I}$ exhibits a significantly larger upper limit than $\mathcal{I}_p$. To understand this discrepancy, we plot the profile empirical log-likelihood ratio functions of the abundance with and without penalty (see Figure 1B). The two functions are very close when the abundance $N$ is less than 100 but diverge when $N > 100$. Without penalty, the empirical log-likelihood ratio function increases only gradually, flattening as $N$ increases. In contrast, the penalized log-likelihood ratio function exhibits a rapid ascent with increasing $N$, thereby enhancing the precision of the confidence interval. This demonstrates how the penalized empirical likelihood method effectively mitigates the boundary problem caused by overdispersion.
As suggested by one peer reviewer, we also applied the classical Lincoln–Petersen method [1,2] to the black bear data. To perform the Lincoln–Petersen method, the eight weeks of data were divided into two samples: the initial weeks constituted sample 1, while the remaining weeks formed sample 2. The Lincoln–Petersen estimator is calculated as $n_1 n_2/n_{12}$, where $n_1$, $n_2$, and $n_{12}$ represent the number of individuals in sample 1, sample 2, and the overlap between the two samples, respectively. The variance of this estimator is calculated as:
$$\frac{(n_1+1)(n_2+1)(n_1-n_{12})(n_2-n_{12})}{(n_{12}+1)^2(n_{12}+2)}.$$
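A small R helper (ours, for illustration) that returns the Lincoln–Petersen estimate and its standard error from $n_1$, $n_2$, and $n_{12}$:

```r
## Lincoln-Petersen estimate and standard error.
lincoln.petersen <- function(n1, n2, n12) {
  est <- n1 * n2 / n12
  v <- (n1 + 1) * (n2 + 1) * (n1 - n12) * (n2 - n12) /
    ((n12 + 1)^2 * (n12 + 2))
  c(estimate = est, se = sqrt(v))
}
```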
Table 4 presents the Lincoln–Petersen estimates along with their standard errors. It is clear that all estimates are consistently lower than the maximum penalized empirical likelihood estimate of 72. This discrepancy is likely due to the assumption of sample independence inherent in the Lincoln–Petersen method, which may not hold true in this context. Specifically, bears captured in the first sample may be more likely to be recaptured due to the lure, leading to a positive correlation between the two samples and an overestimation of capture probability. Additionally, the standard errors for the Lincoln–Petersen estimates are uniformly smaller than the standard error of 8.9 obtained from our proposed method. This difference might arise because the Lincoln–Petersen estimator assumes equal capture probability across all individuals, failing to account for heterogeneity, such as sex-based variation. In contrast, our proposed method explicitly incorporates sex as a factor, addressing this limitation.
Given that the true population size is unknown, we validate the reliability of our method's conclusions by assessing whether the negative binomial regression model accurately fits the observed frequency data. To this end, we conduct a goodness-of-fit test. Specifically, for the black bear data, we define the $\chi^2$ test statistic as:
$$\chi^2 = \frac{(e_1-m_1)^2}{e_1} + \frac{(e_2-m_2)^2}{e_2} + \frac{(e_3-m_3)^2}{e_3} + \frac{(e_4-m_4)^2}{e_4} + \frac{(e_5-m_5)^2}{e_5},$$
where $m_1 = 19$, $m_2 = 11$, $m_3 = 7$, $m_4 = 2$, and $m_5 = 8$ represent the observed frequencies of captures occurring once, twice, three times, four times, and at least five times, respectively. The expected frequencies are computed as:
$$e_k = n\, \frac{\sum_{i=1}^{n} \widehat p_i\, f(k, x_i; \widehat\beta_p, \widehat k_p)}{\sum_{i=1}^{n} \widehat p_i\, \{1 - \phi(x_i; \widehat\beta_p, \widehat k_p)\}}, \quad k = 1, 2, 3, 4,$$
with $e_5 = n - \sum_{k=1}^{4} e_k$, where the $\widehat p_i$'s are the converged values of the $p_i$'s obtained from the EM algorithm. The $\chi^2$ test statistic, calibrated against a $\chi^2_4$ distribution, yields a value of 1.66 with a $p$-value of 0.8. This result suggests that the negative binomial regression model fits the black bear data well, thereby supporting the reliability of our estimation results under this model.
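The computation of this statistic can be sketched in R, assuming p.hat, beta.hat.p, and k.hat.p are the EM outputs, X is the model matrix of the 47 captured bears, and n = 47 (all object names are illustrative):

```r
## Goodness-of-fit chi-square for the fitted zero-truncated NB model.
m <- c(19, 11, 7, 2, 8)                       # observed frequencies
f.nb <- function(y, X, beta, k) {             # f(y, x; beta, k)
  mu <- exp(as.vector(X %*% beta))
  dnbinom(y, size = k, mu = mu)
}
denom <- sum(p.hat * (1 - f.nb(0, X, beta.hat.p, k.hat.p)))
e <- sapply(1:4, function(j)
  n * sum(p.hat * f.nb(j, X, beta.hat.p, k.hat.p)) / denom)
e <- c(e, n - sum(e))                         # e5 by subtraction
chisq <- sum((e - m)^2 / e)
pchisq(chisq, df = 4, lower.tail = FALSE)     # p-value
```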

5. Conclusions and Discussion

The negative binomial regression model is widely used to address heterogeneity and overdispersion in capture–recapture frequency data. As noted in refs. [10,11], fitting this model often leads to identification and boundary problems for the dispersion parameter, which in turn yield unbounded population size estimates. To address this boundary problem, we proposed imposing a half-normal prior on the population size or, equivalently, decreasing the empirical log-likelihood function for large population sizes by adding a suitable penalty function. This penalized technique improves the robustness of the maximum penalized empirical likelihood estimator and ensures a consistently bounded interval estimator. The penalized empirical likelihood approach constitutes the main contribution of our paper. Additionally, we introduced an efficient EM algorithm to maximize the penalized empirical likelihood function. Unlike classical Newton-type optimization methods, the EM algorithm guarantees local convergence of the numerical procedure and yields stable estimates of the population size. Compared to the estimators without penalty, the proposed maximum penalized empirical likelihood estimator exhibits higher efficiency, and the penalized empirical likelihood ratio interval estimator is more precise.
This paper assumes that the capture–recapture experiment is conducted at a single site. However, in ecological studies, experiments are often conducted at multiple sites or follow a spatial pattern [29]; the case study in Section 4 is of this kind. To accommodate these more general application scenarios, the count data can be considered as copies of $(Y^{(1)}, Y^{(2)}, \ldots, Y^{(J)})$, where, for $j = 1, 2, \ldots, J$, $Y^{(j)}$ represents the number of times a generic individual was captured at the $j$th site. In such cases, the multivariate mixed negative binomial regression model used in ref. [30] can be adopted to fit the count data. Specifically, the conditional probability model (1) becomes:
$$P(Y^{(1)} = y^{(1)}, Y^{(2)} = y^{(2)}, \ldots, Y^{(J)} = y^{(J)} \mid X = x) = \int_0^\infty\!\!\cdots\!\int_0^\infty \prod_{j=1}^{J} \frac{\Gamma(y^{(j)}+k_j)}{\Gamma(y^{(j)}+1)\Gamma(k_j)} \left\{\frac{k_j}{k_j+\lambda_j\mu(x;\beta)}\right\}^{k_j} \left\{\frac{\lambda_j\mu(x;\beta)}{k_j+\lambda_j\mu(x;\beta)}\right\}^{y^{(j)}} h(\lambda_1, \ldots, \lambda_J; \gamma)\, d\lambda_1 \cdots d\lambda_J,$$
where $(\lambda_1, \ldots, \lambda_J)$ describes the sites' random effects and $h(\lambda_1, \ldots, \lambda_J; \gamma)$ describes the spatial variation of capture intensities across sites. Following the approach used in ref. [30], one can opt for a gamma distribution as the mixing distribution, with probability density function:
$$h(\lambda_1, \ldots, \lambda_J; \gamma) = \frac{\gamma^\gamma}{\Gamma(\gamma)} \exp(-\gamma\lambda)\, \lambda^{\gamma-1}\, I(\lambda_1 = \cdots = \lambda_J = \lambda),$$
where $\gamma > 0$. When spatial auto-correlation arises, indicating that random effects are partially correlated across different sites, $(\log(\lambda_1), \ldots, \log(\lambda_J))$ can be modeled by a Gaussian random field. The covariance matrix of this field can be determined by flexible spatial covariance functions [31].
Throughout the paper, the conditional mean of the count data is assumed to have a log-linear relationship with the individual covariates, as depicted in model (1). In practice, the relationship between the conditional mean and the covariates may be more complex. To adapt our approach to such application scenarios, one can employ the negative binomial additive model with flexible smoothing functions for each covariate [32]. In addition, applying the proposed methods to a more general Poisson mixture model, where the Tweedie distribution is used as the mixing distribution [33], is a straightforward yet challenging task.

Author Contributions

Conceptualization, Y.J. and Y.L.; methodology, Y.J. and Y.L.; software, Y.J. and Y.L.; validation, Y.L.; formal analysis, Y.J.; data curation, Y.J.; visualization, Y.J.; writing—original draft preparation, Y.J.; writing—review and editing, Y.L.; supervision, Y.L.; project administration, Y.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Key R&D Program of China (2021YFA1000100 and 2021YFA1000101) and the National Natural Science Foundation of China (12101239, 12171157).

Data Availability Statement

The code and data required to reproduce the analyses in the case study are available on GitHub at https://github.com/ecnuliuyang/AbunNB (accessed on 4 June 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Some Preparation for the Proof of Theorem 1

The following lemmas facilitate the proof of Theorem 1. Specifically, Lemma A1 establishes the magnitude bounds on the first two derivatives of the penalty function. Lemma A2 identifies the leading terms of the logarithmic derivatives of the gamma functions. Lemma A3 presents fundamental results regarding the expectations of sample-based functions. Finally, Lemma A4 provides key results for conditional expectations within the framework of the negative binomial regression model (1).
Lemma A1.
Suppose that $f_p(N) = -C(N-\nu)^2 I(N > \nu)$ with $C = O_p(N_0^{-2})$. Then (a) $f_p'(N_0) = O_p(N_0^{-1})$; and (b) $f_p''(N_0) = O_p(N_0^{-3/2})$.
Proof. 
It suffices to show that ν = O p ( N 0 ) , which holds because ν is the lower bound of N 0 . □
Lemma A2.
Let $\Gamma(x)$ be the gamma function. Define:
$$S_1(N, n) = \frac{\partial \log\{\Gamma(N+1)/\Gamma(N-n+1)\}}{\partial N}, \qquad S_2(N, n) = \frac{\partial^2 \log\{\Gamma(N+1)/\Gamma(N-n+1)\}}{\partial N^2}.$$
We have:
$$S_1(N_0, n) = -\log(\alpha_0) + \frac{(n/N_0) - 1 + \alpha_0}{\alpha_0} + O_p(N_0^{-1}), \tag{A1}$$
$$S_2(N_0, n) = -\frac{1-\alpha_0}{N_0\,\alpha_0} + O_p(N_0^{-3/2}). \tag{A2}$$
Proof. 
We refer readers to pages 6 and 9 of the Supplementary Material of ref. [16]. □
Lemma A3.
Suppose r ( x ) is a nonzero function of x and g ( y ) is a function of y. Then the following results hold.
(a) 
If $E[r(X)\{1-\phi(X;\beta_0,k_0)\}] < \infty$, we have:
$$E\left\{\frac{1}{N_0}\sum_{i=1}^{n} r(x_i)\right\} = E[r(X)\{1-\phi(X;\beta_0,k_0)\}].$$
(b) 
If $E[\{r(X)\}^2\{1-\phi(X;\beta_0,k_0)\}] < \infty$, we have:
$$\frac{1}{N_0}\sum_{i=1}^{n} r(x_i) - E[r(X)\{1-\phi(X;\beta_0,k_0)\}] = O_p(N_0^{-1/2}).$$
(c) 
If $E[r(X)E\{g(Y) \mid X\}] < \infty$ and $g(0) = 0$, we have:
$$E\left\{\frac{1}{N_0}\sum_{i=1}^{n} g(y_i)\, r(x_i)\right\} = E[r(X)\, E\{g(Y) \mid X\}].$$
Specifically, $E\{g(Y) \mid X\} = \mu(X;\beta_0)$ if $g(y) = y$.
Proof. 
Let $\{(X_i, Y_i): i = 1, \ldots, N_0\}$ denote independent and identically distributed (i.i.d.) copies of $(X, Y)$ for the individuals in the population. Note that:
$$\frac{1}{N_0}\sum_{i=1}^{n} r(x_i) = \frac{1}{N_0}\sum_{i=1}^{N_0} r(X_i)\, I(Y_i > 0),$$
the right-hand side of which is a sum of i.i.d. random variables. Hence, result (a) follows from the fact that:
$$E\{r(X_i)\, I(Y_i > 0)\} = E[r(X_i)\, E\{I(Y_i > 0) \mid X_i\}] = E[r(X)\{1 - \phi(X;\beta_0,k_0)\}],$$
where the last equality uses $\phi(x;\beta_0,k_0) = P(Y = 0 \mid X = x)$.
For result (b), we first write:
$$\frac{1}{N_0}\sum_{i=1}^{n} r(x_i) - E[r(X)\{1-\phi(X;\beta_0,k_0)\}] = \frac{1}{N_0}\sum_{i=1}^{N_0} r(X_i)\, I(Y_i > 0) - E[r(X)\{1-\phi(X;\beta_0,k_0)\}].$$
Because $E[\{r(X)\}^2\{1-\phi(X;\beta_0,k_0)\}] < \infty$ and $r(x)$ is nonzero, by the central limit theorem we have:
$$N_0^{1/2}\left\{\frac{1}{N_0}\sum_{i=1}^{n} r(x_i) - E[r(X)\{1-\phi(X;\beta_0,k_0)\}]\right\} \stackrel{d}{\longrightarrow} N[0, \mathrm{Var}\{r(X)\, I(Y>0)\}],$$
where
$$\mathrm{Var}\{r(X)\, I(Y>0)\} = E[\{r(X)\}^2\{1-\phi(X;\beta_0,k_0)\}] - \big(E[r(X)\{1-\phi(X;\beta_0,k_0)\}]\big)^2 < \infty$$
and $\stackrel{d}{\longrightarrow}$ stands for convergence in distribution. Hence, result (b) follows.
For result (c), it follows that:
$$E\left\{\frac{1}{N_0}\sum_{i=1}^{n} g(y_i)\, r(x_i)\right\} = E[r(X)\, E\{g(Y)\, I(Y>0) \mid X\}] = E[r(X)\, E\{g(Y) \mid X\}]. \qquad \square$$
Lemma A4.
Under the negative binomial regression model (1), we have:
$$E\{S_1(Y+k-1, Y) \mid X\} = -\log\left\{\frac{k}{k+\mu(X;\beta)}\right\}, \tag{A3}$$
$$E\{Y S_1(Y+k-1, Y) \mid X\} = \left[\frac{1}{k} - \log\left\{\frac{k}{k+\mu(X;\beta)}\right\}\right]\mu(X;\beta), \tag{A4}$$
$$E\{S_2(Y+k-1, Y) \mid X\} = \left[\log\left\{\frac{k}{k+\mu(X;\beta)}\right\}\right]^2 - E[\{S_1(Y+k-1, Y)\}^2 \mid X]. \tag{A5}$$
Proof. 
Consider the negative binomial distribution:
$$q(y; \mu, k) = \frac{\Gamma(y+k)}{\Gamma(y+1)\Gamma(k)} \left(\frac{k}{k+\mu}\right)^{k} \left(\frac{\mu}{k+\mu}\right)^{y}, \quad y = 0, 1, \ldots.$$
We use $E_q$ to denote the expectation of $Y$ with respect to the probability mass function $q(y; \mu, k)$. Under the regularity conditions that the expectation operation and the first two partial derivatives of $q(y;\mu,k)$ with respect to $(\mu, k)$ are exchangeable, we have:
$$E_q\left[\frac{\partial \log\{q(Y;\mu,k)\}}{\partial k}\right] = 0, \tag{A6}$$
$$E_q\left[\frac{\partial^2 \log\{q(Y;\mu,k)\}}{\partial\mu\,\partial k}\right] = -E_q\left[\frac{\partial \log\{q(Y;\mu,k)\}}{\partial\mu}\cdot\frac{\partial \log\{q(Y;\mu,k)\}}{\partial k}\right], \tag{A7}$$
$$E_q\left[\frac{\partial^2 \log\{q(Y;\mu,k)\}}{\partial k^2}\right] = -E_q\left[\left\{\frac{\partial \log\{q(Y;\mu,k)\}}{\partial k}\right\}^2\right]. \tag{A8}$$
Since:
$$\frac{\partial \log\{q(y;\mu,k)\}}{\partial k} = \log\left(\frac{k}{k+\mu}\right) - \frac{y-\mu}{k+\mu} + S_1(y+k-1, y),$$
it follows from Equation (A6) and $E_q(Y) = \mu$ that $E_q\{S_1(Y+k-1, Y)\} = -\log\{k/(k+\mu)\}$, and thus Equation (A3) holds. Since:
$$\begin{aligned}
E_q\left[\frac{\partial^2 \log\{q(Y;\mu,k)\}}{\partial\mu\,\partial k}\right] &= E_q\left\{\frac{Y-\mu}{(k+\mu)^2}\right\} = 0,\\
E_q\left[\frac{\partial \log\{q(Y;\mu,k)\}}{\partial\mu}\cdot\frac{\partial \log\{q(Y;\mu,k)\}}{\partial k}\right] &= E_q\left[\frac{k(Y-\mu)}{\mu(k+\mu)}\cdot\left\{\log\left(\frac{k}{k+\mu}\right) - \frac{Y-\mu}{k+\mu} + S_1(Y+k-1, Y)\right\}\right]\\
&= \frac{k}{\mu(k+\mu)} E_q\{Y S_1(Y+k-1, Y)\} + \frac{k}{k+\mu}\log\left(\frac{k}{k+\mu}\right) - \frac{1}{k+\mu},
\end{aligned}$$
where the last equation uses $E_q\{(Y-\mu)^2\} = \mu(k+\mu)/k$. It follows from Equation (A7) that $E_q\{Y S_1(Y+k-1, Y)\} = [k^{-1} - \log\{k/(k+\mu)\}]\mu$, and thus Equation (A4) holds.
It can be verified that:
$$\begin{aligned}
E_q\left[\frac{\partial^2 \log\{q(Y;\mu,k)\}}{\partial k^2}\right] &= \frac{1}{k} - \frac{1}{k+\mu} + E_q\{S_2(Y+k-1, Y)\},\\
E_q\left[\left\{\frac{\partial \log\{q(Y;\mu,k)\}}{\partial k}\right\}^2\right] &= -\left[\log\left(\frac{k}{k+\mu}\right)\right]^2 + E_q\left[\left(\frac{\mu-Y}{k+\mu}\right)^2\right] + E_q[\{S_1(Y+k-1, Y)\}^2] - \frac{2\mu}{k(k+\mu)}\\
&= -\left[\log\left(\frac{k}{k+\mu}\right)\right]^2 - \frac{\mu}{k(k+\mu)} + E_q[\{S_1(Y+k-1, Y)\}^2].
\end{aligned}$$
Following Equation (A8), we have:
$$E_q\{S_2(Y+k-1, Y)\} + E_q[\{S_1(Y+k-1, Y)\}^2] = \left[\log\left(\frac{k}{k+\mu}\right)\right]^2,$$
and thus, Equation (A5) holds.

Appendix B. Pseudocodes to Perform the Penalized Empirical Likelihood Method

Pseudocodes are provided for the implementation of the penalized empirical likelihood method. Algorithm A1 details the steps for maximizing $\ell_p(N, \beta, k, \alpha)$ with respect to $(\beta, k, \alpha)$ for a fixed $N$ using the EM algorithm. Algorithm A2 presents the process for obtaining the maximum penalized empirical likelihood estimator $(\widehat N_p, \widehat\beta_p, \widehat k_p, \widehat\alpha_p)$. Finally, Algorithm A3 describes the procedure for constructing the penalized empirical likelihood ratio confidence interval $\mathcal{I}_p$.
Algorithm A1 Pseudocode to calculate max_{(β, k, α)} ℓ_p(N, β, k, α) via the EM algorithm
1: Input: Observations {(y₁, x₁), (y₂, x₂), …, (yₙ, xₙ)}, fixed value N, initial parameter values (β^new, k^new, α^new, {p_i^new}), and convergence threshold ϵ = 10⁻⁵
2: Output: ℓ_p(N, β^old, k^old, α^old)
3: do
4:    Set (β^old, k^old, α^old, {p_i^old}) = (β^new, k^new, α^new, {p_i^new})
5:    for i = 1, 2, …, n do
6:       Compute u_i^old = (N − n) ϕ(x_i; β^old, k^old) p_i^old / α^old
7:    end for
8:    Update the parameters (β^new, k^new) using the R function glm.nb() by fitting a negative binomial regression model to the responses (y₁, y₂, …, yₙ, 0, 0, …, 0), where n zeros are appended, with covariates (x₁, x₂, …, xₙ, x₁, x₂, …, xₙ) and weights (1, 1, …, 1, u₁^old, u₂^old, …, uₙ^old)
9:    for i = 1, 2, …, n do
10:      Update p_i^new = (u_i^old + 1) / Σ_{j=1}^{n} (u_j^old + 1)
11:   end for
12:   Compute α^new = Σ_{i=1}^{n} ϕ(x_i; β^new, k^new) p_i^new
13:   Calculate the difference diff = ℓ_p(N, β^new, k^new, α^new) − ℓ_p(N, β^old, k^old, α^old)
14: while |diff| > ϵ
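To make steps 6–12 of Algorithm A1 concrete, the R sketch below implements one EM update under two assumptions that are ours rather than the paper's: x is a numeric n × p covariate matrix with the log link μ(x; β) = exp(β₀ + xᵀβ₁) (the glm.nb() default), and ϕ(x; β, k) = {k/(k + μ(x; β))}^k is the zero-capture probability implied by the probability mass function q(y; μ, k).

```r
## A minimal sketch of one EM iteration of Algorithm A1 (steps 6-12);
## requires the MASS package for glm.nb().
library(MASS)

phi <- function(mu, k) (k / (k + mu))^k  # P(Y = 0) under q(y; mu, k)

em_step <- function(y, x, u_old) {
  n   <- length(y)
  dat <- data.frame(y = c(y, rep(0, n)), rbind(x, x))  # append n zero responses
  w   <- c(rep(1, n), u_old)                           # unit and E-step weights
  fit <- glm.nb(y ~ ., data = dat, weights = w)        # step 8: update (beta, k)
  mu_new    <- exp(cbind(1, x) %*% coef(fit))          # fitted means mu(x; beta)
  p_new     <- (u_old + 1) / sum(u_old + 1)            # step 10
  alpha_new <- sum(phi(mu_new, fit$theta) * p_new)     # step 12; theta is the NB size k
  list(beta = coef(fit), k = fit$theta, p = p_new, alpha = alpha_new)
}
```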
Algorithm A2 Pseudocode for estimating (N̂_p, β̂_p, k̂_p, α̂_p) using the EM algorithm
1: Input: Observations {(y₁, x₁), (y₂, x₂), …, (yₙ, xₙ)}, initial parameter values (N^new, β^new, k^new, α^new, {p_i^new}), and convergence threshold ϵ = 10⁻⁵
2: Output: Estimate (N^old, β^old, k^old, α^old)
3: do
4:    Set (N^old, β^old, k^old, α^old, {p_i^old}) = (N^new, β^new, k^new, α^new, {p_i^new})
5:    for i = 1, 2, …, n do
6:       Compute u_i^old = (N^old − n) ϕ(x_i; β^old, k^old) p_i^old / α^old
7:    end for
8:    Update the parameters (β^new, k^new) by fitting a negative binomial regression model to the responses (y₁, y₂, …, yₙ, 0, 0, …, 0), where n zeros are appended, with covariates (x₁, x₂, …, xₙ, x₁, x₂, …, xₙ) and weights (1, 1, …, 1, u₁^old, u₂^old, …, uₙ^old) using the R function glm.nb()
9:    for i = 1, 2, …, n do
10:      Calculate p_i^new = (u_i^old + 1) / Σ_{j=1}^{n} (u_j^old + 1)
11:   end for
12:   Compute α^new = Σ_{i=1}^{n} ϕ(x_i; β^new, k^new) p_i^new
13:   Update N^new by maximizing log C(N, n) + (N − n) log(α^new) + f_p(N) with respect to N using the R function optimize(), where C(N, n) denotes the binomial coefficient
14:   Calculate the difference diff = ℓ_p(N^new, β^new, k^new, α^new) − ℓ_p(N^old, β^old, k^old, α^old)
15: while |diff| > ϵ
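Step 13 of Algorithm A2 is a one-dimensional search, sketched below in R. Here f_p denotes the penalty function from the main text (not restated here), lchoose() computes log C(N, n), and the finite upper endpoint Nmax is our assumption, since optimize() requires a bounded search interval.

```r
## Sketch of step 13 of Algorithm A2: profile update of N given alpha_new.
update_N <- function(n, alpha_new, f_p, Nmax = 100 * n) {
  obj <- function(N) lchoose(N, n) + (N - n) * log(alpha_new) + f_p(N)
  optimize(obj, interval = c(n, Nmax), maximum = TRUE)$maximum
}
```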
Algorithm A3 Pseudocode for calculating I_p at the (1 − a) confidence level
1: Input: Observations {(y₁, x₁), (y₂, x₂), …, (yₙ, xₙ)} and significance level a
2: Output: Confidence interval [N̂_p^l, N̂_p^u]
3: Apply Algorithm A2 to compute the maximum penalized empirical likelihood estimate (N̂_p, β̂_p, k̂_p, α̂_p)
4: Use the R function uniroot() to find the lower bound N̂_p^l by searching for the root of m(N) in the interval [n, N̂_p], where m(N) is defined in Equation (6)
5: for k = 1, 2, … do
6:    Set M = 2^k N̂_p
7:    if m(M) > 0 then
8:       break
9:    end if
10: end for
11: Use the R function uniroot() to determine the upper bound N̂_p^u by searching for the root of m(N) within [N̂_p, M], where m(N) is defined in Equation (6)
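The root searches and the doubling loop in Algorithm A3 translate directly into R. In the sketch below, m_fun stands for the function m(N) of Equation (6) (assumed to be negative at N̂_p and supplied by the user), and N_hat_p is the estimate returned by Algorithm A2.

```r
## Sketch of Algorithm A3: penalized empirical likelihood ratio interval.
pel_ci <- function(m_fun, n, N_hat_p) {
  lower <- uniroot(m_fun, interval = c(n, N_hat_p))$root  # step 4
  M <- 2 * N_hat_p                                        # steps 5-10: expand
  while (m_fun(M) <= 0) M <- 2 * M                        #   until m(M) > 0
  upper <- uniroot(m_fun, interval = c(N_hat_p, M))$root  # step 11
  c(lower = lower, upper = upper)
}
```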

Figure 1. (A) The ratio plot depicts the relationship between the frequency (f_y) and the capture count (y), where the dashed line represents the fitted linear model. (B) The log-EL ratio curve shows the profile empirical log-likelihood ratio functions of the abundance with penalty (red dashed line) and without penalty (black solid line).
Table 1. Relative biases in percent (%Rbiases) and relative mean squared errors (RMSEs) in simulation studies. The RMSE values are rounded to the nearest integer.

                      β₀ = (0.5, 0.3)                     β₀ = (0.1, 0.3)
              k₀ = 0.5     k₀ = 1      k₀ = 5      k₀ = 0.5     k₀ = 1      k₀ = 5
        N₀    N̂     N̂_p   N̂     N̂_p  N̂     N̂_p  N̂     N̂_p   N̂    N̂_p   N̂      N̂_p
Scenario A
%Rbias  250   82.6   −2.92  34.9  4.24  16.7  10.1  29.8  0.59  5.64  1.86  0.46   0.44
        500   37.8    3.81  14.6  5.53  4.31  4.02  6.65  2.08  1.36  1.14  0.24   0.24
        1000  19.8    7.90  3.27  2.73  1.44  1.44  1.97  1.62  0.55  0.53  −0.06  −0.06
RMSE    250   4006   73     1567  73    330   57    1768  40    136   24    5      4
        500   2558   130    881   114   51    41    271   53    25    23    4      4
        1000  2336   246    123   96    28    28    61    54    17    16    4      4
Scenario B
%Rbias  250   83.14  −1.75  30.38 4.17  15.08 9.02  24.23 1.00  6.07  2.20  0.28   0.25
        500   33.18   4.30  11.01 4.66  3.72  3.57  6.48  2.03  1.01  0.86  0.19   0.19
        1000  15.12   6.95  3.02  2.56  1.19  1.18  1.55  1.24  0.48  0.47  −0.07  −0.07
RMSE    250   4222   73     1228  68    297   56    1143  38    128   24    4      4
        500   2474   121    720   96    34    31    571   48    19    17    4      4
        1000  1168   210    115   84    24    24    55    48    15    15    3      3
Table 2. Simulated coverage probabilities (CPs, unit: %) and average widths (AWs) of interval estimators I and I_p at the 95% level, along with the proportion of bounded cases (PBC, unit: %) whose upper limits are less than 100 times the observed sample sizes. The AW and PBC values are rounded to the nearest integer.

                     β₀ = (0.5, 0.3)                       β₀ = (0.1, 0.3)
             k₀ = 0.5      k₀ = 1       k₀ = 5       k₀ = 0.5      k₀ = 1       k₀ = 5
      N₀     I      I_p    I     I_p    I     I_p    I      I_p    I     I_p    I     I_p
Scenario A
CP    250    94.00  93.90  93.05 93.10  97.75 97.80  93.45  93.60  94.35 94.35  95.60 95.60
      500    94.70  94.70  94.00 94.05  97.15 97.15  93.75  93.90  94.55 94.55  95.25 95.25
      1000   94.60  94.80  95.25 95.25  95.80 95.80  94.65  94.65  95.20 95.20  96.10 96.10
AW    250    742    290    836   363    647   410    776    317    577   330    185   171
      500    1532   745    1294  833    929   745    1104   696    576   498    200   200
      1000   2721   1763   2046  1584   971   909    1426   1152   591   582    258   258
PBC   250    33     100    46    100    67    100    58     100    86    100    100   100
      500    46     100    66    100    90    100    80     100    98    100    100   100
      1000   67     100    89    100    99    100    97     100    100   100    100   100
Scenario B
CP    250    93.85  93.60  93.50 93.55  97.15 97.20  93.95  93.65  93.90 93.95  95.15 95.15
      500    94.85  94.80  94.60 94.60  96.65 96.65  94.30  94.40  94.85 94.90  95.10 95.10
      1000   94.15  94.20  95.35 95.35  96.10 96.10  94.85  94.85  95.65 95.65  95.75 95.75
AW    250    6002   820    5646  973    3654  947    5112   605    2122  458    174   154
      500    9773   1804   6999  1744   2462  1075   4817   1010   887   504    182   181
      1000   12250  3343   4981  2157   896   820    2084   1195   545   536    237   237
PBC   250    34     100    48    100    73    100    60     100    88    100    100   100
      500    48     100    70    100    93    100    83     100    99    100    100   100
      1000   70     100    92    100    100   100    98     100    100   100    100   100
Table 3. Description of the black bear data.

Number of Captures    1    2    3    4    5    6    7
Male                  11   7    5    1    2    1    1
Female                8    4    2    1    0    1    3
Table 4. Lincoln–Petersen estimates and standard errors (SEs).

Sample 1     Sample 2     n₁   n₂   n₁₂   Estimate   SE
week 1       weeks 2–8    9    45   7     58         7.8
weeks 1–2    weeks 3–8    14   43   10    60         7.7
weeks 1–3    weeks 4–8    27   38   18    57         5.2
weeks 1–4    weeks 5–8    33   38   24    52         3.2
weeks 1–5    weeks 6–8    38   31   22    54         3.8
weeks 1–6    weeks 7–8    40   31   22    54         3.8
weeks 1–7    week 8       45   11   9     55         6.0
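For reference, the point estimates in Table 4 follow the classical Lincoln–Petersen formula N̂ = n₁n₂/n₁₂; for example, the final row gives 45 × 11/9 = 55. The R sketch below reproduces that row; the standard error uses a Chapman-type variance formula, which is our assumption, as the table does not restate its exact SE convention.

```r
## Lincoln-Petersen point estimate with a Chapman-type SE (assumed form).
lp <- function(n1, n2, n12) {
  est <- n1 * n2 / n12
  se  <- sqrt((n1 + 1) * (n2 + 1) * (n1 - n12) * (n2 - n12) /
              ((n12 + 1)^2 * (n12 + 2)))
  c(estimate = est, se = se)
}
lp(45, 11, 9)  # final row of Table 4: estimate 55, SE approximately 6.0
```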