Article

Optimizing Mean Estimators with Calibrated Minimum Covariance Determinant in Median Ranked Set Sampling

by Abdullah Mohammed Alomair 1,* and Usman Shahzad 2,3

1 Department of Quantitative Methods, School of Business, King Faisal University, Al-Ahsa 31982, Saudi Arabia
2 Department of Mathematics and Statistics, International Islamic University, Islamabad 44000, Pakistan
3 Department of Mathematics and Statistics, PMAS-Arid Agriculture University, Rawalpindi 46300, Pakistan
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(8), 1581; https://doi.org/10.3390/sym15081581
Submission received: 6 July 2023 / Revised: 20 July 2023 / Accepted: 9 August 2023 / Published: 13 August 2023
(This article belongs to the Section Mathematics)

Abstract

Calibration methods enhance estimates by modifying the initial design weights, for which supplementary information is exploited. This paper first proposes a generalized class of minimum-covariance-determinant (MCD)-based calibration estimators and then presents a novel class of MCD-based calibrated estimators under a stratified median-ranked-set-sampling (MRSS) design. Further, we also present double MRSS versions of the generalized and novel classes of estimators. To assess and compare the performance of the generalized and novel classes of estimators, both real and artificial datasets are utilized. In the presented practical scenarios and real-world applications, we utilize information from a dataset comprising 800 individuals in Turkey from 2014. These data include body mass index (BMI) as the primary variable of interest and age values as auxiliary variables. The BMI results show that the proposed estimators $(\bar{y}_{PM}^{I} = 581.1897,\ \bar{y}_{PaM}^{I} = 544.8397)$ attain the minimum and $(\bar{y}_{PM}^{II} = 669.1822,\ \bar{y}_{PaM}^{II} = 648.2363)$ the maximum PREs in the case of single and double MRSS for odd sample sizes. Similarly, $(\bar{y}_{PM}^{I} = 860.0099,\ \bar{y}_{PaM}^{I} = 844.7803)$ attain the minimum and $(\bar{y}_{PM}^{II} = 974.5859,\ \bar{y}_{PaM}^{II} = 953.7233)$ the maximum PREs in the case of single and double MRSS for even sample sizes. Additionally, we conduct a simulation study using a symmetric dataset.

1. Introduction

In numerous investigations conducted in real-world settings, particularly in the fields of ecology and environmental research, the main focus variable (referred to as Y) can be challenging to directly observe due to factors such as high costs, procedures that require a significant amount of labor, intrusiveness, or the possibility of harming the subjects under study. Despite the challenges and complexities involved in data collection, it is often relatively simple and cost-effective to rank the sampled units. To demonstrate this point, let us take the case of Calliphoridae flies as an example. These flies possess an inherent survival mechanism that enables them to rapidly detect and inhabit a source of food, such as a decaying body, soon after it has perished. In investigations conducted after death, forensic entomologists often depend on the larvae of these flies to estimate the post-mortem interval. The larvae cease feeding once they reach their maximum size. By observing the volume of their intestinal contents (as their anterior intestine remains empty during further development), forensic entomologists can accurately determine the post-mortem interval. However, using radiographic techniques to assess changes in the maggots’ intestinal contents presents challenges (Sharma et al. [1]). On the other hand, since the larvae appear to continuously grow in length, measuring and ranking their length is relatively straightforward. Another example can be found in a health-related research endeavor aiming to obtain an average estimate of the cholesterol level of a population. Rather than performing intrusive blood tests on every participant in the sample, it is possible to visually rank the subjects based on their weights, and blood samples can be collected from only a small number of individuals.
McIntyre [2] provided the initial proposal of RSS. The RSS procedure can be described as follows: A sample is selected from a population N using simple random sampling (SRS). Each unit in this sample undergoes evaluation based on subjective criteria. Only the smallest unit is measured, while the rest are disregarded. Similarly, a second sample is selected, and only the second-smallest unit is measured, while the rest are disregarded. The process of selecting a new sample and measuring the subsequent smallest unit is repeated until the desired sample size is achieved.
Since its inception, ranked set sampling (RSS) has garnered significant attention from researchers and remains an active area of study. While it originated in horticulture with McIntyre’s foundational work in 1952, RSS has expanded its applications and is now being utilized in commercial settings. To delve deeper into the intricacies of RSS, interested readers can consult the works of Chen et al. [3], Hassan et al. [4], Bouza [5], Nagy et al. [6], and Benchiha et al. [7]. Shahzad et al. [8] successively used the ranked and true observations of auxiliary variables for mean estimation in MRSS. The three-fold use of auxiliary variables was suggested by Shahzad et al. [9] for mean estimation in MRSS. Bhushan et al. [10] defined difference-type estimators in RSS. Muttlak [11] introduced a variation of RSS known as MRSS in order to estimate population means. Muttlak demonstrated that MRSS yields more precise estimates than RSS. In MRSS, rather than measuring the $k$th ($k = 1, 2, \ldots, \vartheta$) minimum observation, the median of each sample within a cycle is measured. Essentially, MRSS can be seen as an adapted form of RSS designed to improve the accuracy of estimation.
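To make the MRSS selection step concrete, the following sketch draws one cycle of an MRSS sample from a finite population, ranking each set on the cheap auxiliary variable instead of the costly study variable. This is a minimal illustration of the scheme described above; the toy population, the use of `numpy`, and the helper name `mrss_sample` are assumptions for illustration only, not part of the original proposal.

```python
import numpy as np

rng = np.random.default_rng(1)

def mrss_sample(y, x, set_size, rng):
    """One cycle of median ranked set sampling (MRSS).

    For each of `set_size` sets, draw `set_size` units by SRS, rank them on the
    (cheap) auxiliary variable x, and measure y only for the median-ranked unit.
    For even set sizes, the first half of the sets uses the lower median and the
    second half the upper median, mirroring the even-sample-size scheme."""
    measured = []
    m = set_size
    for i in range(m):
        idx = rng.choice(len(y), size=m, replace=False)
        order = idx[np.argsort(x[idx])]                 # rank the set on x
        if m % 2 == 1:                                  # odd set size: the median
            measured.append(order[m // 2])
        else:                                           # even set size
            k = m // 2 - 1 if i < m // 2 else m // 2    # lower / upper median
            measured.append(order[k])
    return np.array(measured)

# toy population: y is costly to measure, x is easy to rank
x = rng.normal(2, 1, 1000)
y = 4 + 0.9 * (x - 2) + rng.normal(0, 0.5, 1000)
chosen = mrss_sample(y, x, set_size=5, rng=rng)
print(y[chosen].mean())   # MRSS estimate of the population mean of y
```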
The classic ratio estimator is widely recognized and commonly used in sampling theory to estimate population means (Oral and Oral [12]). Expanding upon this estimator, Al-Omari [13] introduced novel ratio-type estimators that incorporate the MRSS scheme. Later, Koyuncu [14] extended the concepts introduced by Al-Omari [13] and developed different types of estimators. However, all these efforts focused on ratio and difference-type mean estimation within the framework of MRSS. It is worth noting that these estimators rely on traditional descriptive statistics measures. However, no studies have been conducted on calibrated mean estimation under MRSS using robust covariance matrices, such as the MCD matrix. Thus, this study represents an initial step toward developing robust calibrated mean estimators within the MRSS framework.
The structure of this document is as follows: Section 2 introduces the calibration technique and presents the modified estimators for stratified MRSS. In Section 3 and Section 4, a new class of estimators is introduced under single and double MRSS schemes, respectively. In Section 5, a comprehensive simulation analysis is carried out to compare the effectiveness of the suggested estimators with alternative methods. Section 6 offers concluding remarks that summarize the findings of this paper.

2. Generalized Class of Calibrated MCD-Based Estimators

MCD estimation was defined by Rousseeuw [15]. To estimate multivariate location and dispersion with a high breakdown point, it is necessary to assess the determinant of $\Sigma_m$ (the variance–covariance matrix). When $\Sigma_m$ is a positive semi-definite matrix of dimension $n_\varphi \times n_\varphi$ with $P_1$ positive eigenvalues, the determinant is the product of these eigenvalues. Hence, a small determinant value indicates the presence of linear patterns in the data. The MCD method involves considering all subsets of size $n_\varphi$ from a dataset and calculating the determinant of $\Sigma_m$ for each subset. The MCD estimators are obtained by selecting the subset with the lowest determinant, along with the corresponding $1 \times P_1$ mean vector and its $P_1 \times P_1$ $\Sigma_m$ matrix. These estimators are discussed in the study by Muthukrishnan and Mahesh [16]. Note that estimators of central tendency and dispersion can also be improved in the MCD framework by using auxiliary information.
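As a quick illustration of how an MCD location vector and covariance matrix can be obtained in practice, the sketch below uses scikit-learn's MinCovDet; the simulated data, the contamination level, and the chosen support fraction are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)

# clean bivariate data (x = auxiliary, y = study variable) plus a few outliers
clean = rng.multivariate_normal([2.0, 4.0], [[1.0, 0.8], [0.8, 1.0]], size=95)
outliers = rng.multivariate_normal([8.0, -3.0], [[0.2, 0.0], [0.0, 0.2]], size=5)
data = np.vstack([clean, outliers])

# MCD searches for the data subset whose covariance matrix has the smallest determinant
mcd = MinCovDet(support_fraction=0.75, random_state=0).fit(data)
print("robust mean vector :", mcd.location_)     # 1 x P1 robust location
print("robust covariance  :", mcd.covariance_)   # P1 x P1 robust scatter
print("determinant        :", np.linalg.det(mcd.covariance_))
```

Because the determinant is minimized over subsets, the few outliers barely move the robust location and scatter, which is exactly the property exploited later when MCD-based characteristics replace classical ones.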
Incorporating auxiliary information has the potential to greatly enhance the mean estimators. In various real-life scenarios, a linear correlation can be observed between a study variable Y and an auxiliary variable X (see Shahzad et al. [17] and Abbasi et al. [18]). As an example, consider the association between depression and suicide, where individuals with severe depression are more likely to commit suicide compared to those without depression (Johnson et al. [19]). Additionally, we can consider the established direct and positive correlation between body mass index (BMI) and total cholesterol levels (Schroder et al. [20]). These scenarios demonstrate how auxiliary variables can provide valuable information and contribute to more accurate mean estimation.
Zaman and Bulut [21] introduced the concept of MCD-based mean estimation using auxiliary information. Shahzad et al. [22] extended their work to handle missing observations. Zaman and Bulut [23] also introduced MCD-based variance estimators. To learn more about MCD-based mean and variance estimation within simple and stratified sampling designs, readers can refer to Zaman and Bulut [24,25]. However, to the best of our knowledge, no attention has been paid to MCD-based calibrated mean estimators in a stratified MRSS design.
Calibration is a technique used for the development of modified weights. Calibration estimation is a core methodology that seeks to refine initial weights by minimizing a designated measure of distance while incorporating auxiliary data. In the corresponding literature, scholars have investigated the application of calibration weighting in stratification to improve the precision of population parameter estimates. The generation of new calibration weights relies on two key components: (1) a distance function and (2) constraints. These two components serve as the basis for constructing enhanced calibration weights. Since the study variable and the auxiliary variables exhibit a strong correlation, weights that are effective for the auxiliary variable are expected to be effective for the study variable as well. Building on the pioneering work of Deville and Särndal [26], many researchers have explored calibration estimation using different calibration constraints in survey sampling (see [14,27,28,29,30]). Drawing inspiration from these significant studies, we propose mean estimators using MRSS under the MCD framework.
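For readers new to calibration, the following minimal sketch computes chi-square-distance calibrated weights under a single benchmark constraint (a known auxiliary total), which is the simplest instance of the distance-plus-constraints recipe just described. The closed form is the standard single-constraint solution; the design weights, auxiliary values, and benchmark total are illustrative assumptions.

```python
import numpy as np

# design weights d, auxiliary values x, known population total of x
d = np.array([10.0, 10.0, 10.0, 10.0])
x = np.array([2.0, 3.0, 5.0, 8.0])
X_total = 200.0                      # known benchmark (assumed for illustration)

# chi-square distance with the single constraint sum(w * x) = X_total gives
# w_i = d_i * (1 + lam * x_i), with lam chosen so the constraint holds exactly
lam = (X_total - np.sum(d * x)) / np.sum(d * x**2)
w = d * (1.0 + lam * x)

print(w)                             # calibrated weights
print(np.sum(w * x))                 # reproduces X_total exactly
```

The estimators in this paper follow the same pattern, but with stratum-level weights, the MRSS design, and two or three constraints built from MCD-based characteristics.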
In a stratified MRSS sampling design, we draw a random sample of size $n_\varphi$ without replacement from a population of size $N_\varphi$ in stratum $\varphi$ (where $\varphi = 1, 2, \ldots, L$). Let $(X_{i(1)}, Y_{i[1]}), (X_{i(2)}, Y_{i[2]}), \ldots, (X_{i(n_\varphi)}, Y_{i[n_\varphi]})$ represent the order statistics of $X_{i(1)}, X_{i(2)}, \ldots, X_{i(n_\varphi)}$ and the imperfectly ranked order of $Y_{i[1]}, Y_{i[2]}, \ldots, Y_{i[n_\varphi]}$ for the units in the $\varphi$-th stratum, where $(\cdot)$ and $[\cdot]$ indicate perfect ranking for $X$ and imperfect ranking for $Y$, respectively. We denote the units measured using MRSS as $m(O)$ for odd sample sizes and $m(E)$ for even sample sizes.

Define the observed units $m(O)$ for the case of an odd sample size in the $\varphi$-th stratum as $\left(X_{1\left(\frac{n_\varphi+1}{2}\right)}, Y_{1\left[\frac{n_\varphi+1}{2}\right]}\right), \left(X_{2\left(\frac{n_\varphi+1}{2}\right)}, Y_{2\left[\frac{n_\varphi+1}{2}\right]}\right), \ldots, \left(X_{n_\varphi\left(\frac{n_\varphi+1}{2}\right)}, Y_{n_\varphi\left[\frac{n_\varphi+1}{2}\right]}\right)$. Let

$$\bar{x}_{st(m(O))} = \sum_{\varphi=1}^{L} W_\varphi \bar{x}_{\varphi(m(O))} \quad \text{and} \quad \bar{y}_{st(m(O))} = \sum_{\varphi=1}^{L} W_\varphi \bar{y}_{\varphi(m(O))}, \qquad (1)$$

$$\bar{y}_{\varphi(m(O))} = \frac{1}{n_\varphi} \sum_{i=1}^{n_\varphi} Y_{i\left[\frac{n_\varphi+1}{2}\right]} \quad \text{and} \quad \bar{x}_{\varphi(m(O))} = \frac{1}{n_\varphi} \sum_{i=1}^{n_\varphi} X_{i\left(\frac{n_\varphi+1}{2}\right)}, \qquad (2)$$

which denote the overall averages across the strata in Equation (1) and the sample averages in the $\varphi$-th stratum in Equation (2). For an even sample size, the observed units $m(E)$ are

$$\left(X_{1\left(\frac{n_\varphi}{2}\right)}, Y_{1\left[\frac{n_\varphi}{2}\right]}\right), \ldots, \left(X_{\frac{n_\varphi}{2}\left(\frac{n_\varphi}{2}\right)}, Y_{\frac{n_\varphi}{2}\left[\frac{n_\varphi}{2}\right]}\right), \left(X_{\frac{n_\varphi+2}{2}\left(\frac{n_\varphi+2}{2}\right)}, Y_{\frac{n_\varphi+2}{2}\left[\frac{n_\varphi+2}{2}\right]}\right), \ldots, \left(X_{n_\varphi\left(\frac{n_\varphi+2}{2}\right)}, Y_{n_\varphi\left[\frac{n_\varphi+2}{2}\right]}\right),$$

with

$$\bar{x}_{st(m(E))} = \sum_{\varphi=1}^{L} W_\varphi \bar{x}_{\varphi(m(E))} \quad \text{and} \quad \bar{y}_{st(m(E))} = \sum_{\varphi=1}^{L} W_\varphi \bar{y}_{\varphi(m(E))}, \qquad (3)$$

$$\bar{x}_{\varphi(m(E))} = \frac{1}{n_\varphi}\left[\sum_{i=1}^{n_\varphi/2} X_{i\left(\frac{n_\varphi}{2}\right)} + \sum_{i=\frac{n_\varphi+2}{2}}^{n_\varphi} X_{i\left(\frac{n_\varphi+2}{2}\right)}\right] \quad \text{and} \quad \bar{y}_{\varphi(m(E))} = \frac{1}{n_\varphi}\left[\sum_{i=1}^{n_\varphi/2} Y_{i\left[\frac{n_\varphi}{2}\right]} + \sum_{i=\frac{n_\varphi+2}{2}}^{n_\varphi} Y_{i\left[\frac{n_\varphi+2}{2}\right]}\right], \qquad (4)$$

where Equation (3) gives the overall averages across the strata and Equation (4) gives the sample averages in the $\varphi$-th stratum. Note that $W_\varphi = N_\varphi / N$ is the stratum weight.
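A minimal sketch of Equations (1)-(4): given the MRSS-measured units per stratum, it computes the per-stratum means and the stratified means weighted by $W_\varphi = N_\varphi/N$. The stratum sizes and measured values are illustrative assumptions.

```python
import numpy as np

# MRSS-measured (x, y) values per stratum (illustrative numbers)
strata = {
    "stratum 1": {"N": 600, "x": np.array([1.9, 2.1, 2.0]), "y": np.array([3.8, 4.1, 4.0])},
    "stratum 2": {"N": 400, "x": np.array([2.3, 2.2, 2.4, 2.1]), "y": np.array([4.4, 4.2, 4.5, 4.1])},
}
N = sum(s["N"] for s in strata.values())

# per-stratum sample means combined with stratum weights W = N_phi / N
x_bar_st = sum((s["N"] / N) * s["x"].mean() for s in strata.values())
y_bar_st = sum((s["N"] / N) * s["y"].mean() for s in strata.values())
print(x_bar_st, y_bar_st)
```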
Now, we provide a generalized class of calibration estimators under an MRSS design, which is expressed as

$$\bar{y}_{TM(i)} = \sum_{\varphi=1}^{L} \Psi_\varphi \bar{y}_{\varphi(m(j))}, \quad \text{for } i = 1, 2,$$
and are subject to the following constraints:
$$\sum_{\varphi=1}^{L} \Psi_\varphi = \sum_{\varphi=1}^{L} W_\varphi, \qquad (6)$$

$$\sum_{\varphi=1}^{L} \Psi_\varphi \hat{\Delta}_{x\varphi(m(j))} = \sum_{\varphi=1}^{L} W_\varphi \Delta_{x\varphi(m)}, \qquad (7)$$

where $\Psi_\varphi$ is the calibrated weight, and $j$ denotes odd and even sample sizes for MRSS, i.e., $j = O, E$. Defining the Lagrange function with its multipliers $\eta_{1(m(j))}$ and $\eta_{2(m(j))}$ yields

$$\Delta_{(m(j))} = \sum_{\varphi=1}^{L} \frac{(\Psi_\varphi - W_\varphi)^2}{\tau_\varphi W_\varphi} - 2\eta_{1(m(j))}\left(\sum_{\varphi=1}^{L} \Psi_\varphi - \sum_{\varphi=1}^{L} W_\varphi\right) - 2\eta_{2(m(j))}\left(\sum_{\varphi=1}^{L} \Psi_\varphi \hat{\Delta}_{x\varphi(m(j))} - \sum_{\varphi=1}^{L} W_\varphi \Delta_{x\varphi(m)}\right).$$

Here, $\sum_{\varphi=1}^{L} \frac{(\Psi_\varphi - W_\varphi)^2}{\tau_\varphi W_\varphi}$ is the chi-square distance function, where $\tau_\varphi$ is 1 or the reciprocal of any known characteristic of the auxiliary information. Setting $\frac{\partial \Delta_{(m(j))}}{\partial \Psi_\varphi} = 0$, we obtain

$$\Psi_\varphi = W_\varphi + \tau_\varphi W_\varphi \left(\eta_{2(m(j))} \hat{\Delta}_{x\varphi(m(j))} + \eta_{1(m(j))}\right). \qquad (9)$$
By inserting (9) into (6) and (7), we obtain
$$\eta_{1(m(j))} = \frac{-\sum_{\varphi=1}^{L} W_\varphi \left(\Delta_{x\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right) \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}}{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}^{2} - \left(\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}\right)^{2}},$$

$$\eta_{2(m(j))} = \frac{\sum_{\varphi=1}^{L} W_\varphi \left(\Delta_{x\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right) \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi}{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}^{2} - \left(\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}\right)^{2}}.$$

By substituting $\eta_{1(m(j))}$ and $\eta_{2(m(j))}$ into (9), we obtain

$$\Psi_\varphi = W_\varphi + \tau_\varphi W_\varphi \frac{\hat{\Delta}_{x\varphi(m(j))} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi - \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}}{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}^{2} - \left(\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}\right)^{2}} \sum_{\varphi=1}^{L} W_\varphi \left(\Delta_{x\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right).$$

Inserting $\Psi_\varphi$ into $\bar{y}_{TM(i)}$ yields the calibrated mean estimator of the study variable

$$\bar{y}_{TM(i)} = \sum_{\varphi=1}^{L} W_\varphi \bar{y}_{\varphi(m(j))} + \frac{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{y}_{\varphi(m(j))} \hat{\Delta}_{x\varphi(m(j))} - \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{y}_{\varphi(m(j))}}{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}^{2} - \left(\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}\right)^{2}} \sum_{\varphi=1}^{L} W_\varphi \left(\Delta_{x\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right).$$
This estimator can be rewritten as
$$\bar{y}_{TM(i)} = \bar{y}_{st(m(j))} + \hat{b}_j \sum_{\varphi=1}^{L} W_\varphi \left(\Delta_{x\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right),$$
where
$$\hat{b}_j = \frac{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{y}_{\varphi(m(j))} \hat{\Delta}_{x\varphi(m(j))} - \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{y}_{\varphi(m(j))}}{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}^{2} - \left(\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}\right)^{2}}.$$

Hence,

$$\bar{y}_{TM(i)} = \begin{cases} \bar{y}_{st(m(O))} + \hat{b}_{(O)} \sum_{\varphi=1}^{L} W_\varphi \left(\Delta_{x\varphi(m)} - \hat{\Delta}_{x\varphi(m(O))}\right) & \text{when } n \text{ is odd}, \\[4pt] \bar{y}_{st(m(E))} + \hat{b}_{(E)} \sum_{\varphi=1}^{L} W_\varphi \left(\Delta_{x\varphi(m)} - \hat{\Delta}_{x\varphi(m(E))}\right) & \text{when } n \text{ is even}. \end{cases}$$
Note that in the generalized class, $\Delta_{x\varphi(m)}$ can be any known characteristic of an auxiliary variable. By replacing $\Delta_{x\varphi(m)}$ with known parameters, for instance, the arithmetic mean $\bar{X}_\varphi$ or the coefficient of variation $C_{x\varphi}$, we can obtain the estimators reported in [17,29,30]. The cited authors defined these estimators under simple and MRSS designs; we adapt their work under the MCD framework, as shown in Table 1. Further, many other estimators can be developed by replacing $\Delta_{x\varphi(m)}$ with other known population characteristics.
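The sketch below evaluates the generalized calibrated estimator in its regression form above, i.e., $\bar{y}_{st(m(j))} + \hat{b}_j\sum_{\varphi} W_\varphi(\Delta_{x\varphi(m)}-\hat{\Delta}_{x\varphi(m(j))})$, taking the stratum mean of $x$ as the auxiliary characteristic $\Delta$. All numerical inputs are illustrative assumptions, so this is a minimal sketch of the formula rather than a reproduction of the paper's computations.

```python
import numpy as np

# per-stratum quantities (illustrative): weights W, tau, MRSS sample means of y,
# estimated auxiliary characteristic (MRSS mean of x) and its known value
W         = np.array([0.6, 0.4])
tau       = np.ones(2)
y_bar     = np.array([4.05, 4.32])      # y-bar_phi(m(j))
delta_hat = np.array([2.02, 2.28])      # Delta-hat_{x phi(m(j))}
delta     = np.array([2.00, 2.30])      # known Delta_{x phi(m)}

tw = tau * W
D  = tw.sum() * (tw * delta_hat**2).sum() - (tw * delta_hat).sum() ** 2
b  = (tw.sum() * (tw * y_bar * delta_hat).sum()
      - (tw * delta_hat).sum() * (tw * y_bar).sum()) / D           # b-hat_j
y_TM = (W * y_bar).sum() + b * (W * (delta - delta_hat)).sum()     # calibrated estimate
print(y_TM)
```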

3. Novel Class of Calibrated MCD-Based Estimators

Taking inspiration from Koyuncu [14,28], we define the following MCD-based mean estimator under stratified MRSS:
$$\bar{y}_{PM} = \sum_{\varphi=1}^{L} \Psi_\varphi \bar{y}_{\varphi(m(j))}, \qquad (17)$$
which is subject to the following constraints:
$$\sum_{\varphi=1}^{L} \Psi_\varphi \bar{x}_{\varphi(m(j))} = \sum_{\varphi=1}^{L} W_\varphi \bar{X}_{\varphi(m)}, \qquad (18)$$

$$\sum_{\varphi=1}^{L} \Psi_\varphi \hat{\Delta}_{x\varphi(m(j))} = \sum_{\varphi=1}^{L} W_\varphi \Delta_{x\varphi(m)}, \qquad (19)$$

$$\sum_{\varphi=1}^{L} \Psi_\varphi = \sum_{\varphi=1}^{L} W_\varphi. \qquad (20)$$

The Lagrange function, defined with its multipliers $\eta_{1(m(j))}$, $\eta_{2(m(j))}$, and $\eta_{3(m(j))}$, is

$$\Delta_{(m(j))} = \sum_{\varphi=1}^{L} \frac{(\Psi_\varphi - W_\varphi)^2}{\tau_\varphi W_\varphi} - 2\eta_{1(m(j))}\left(\sum_{\varphi=1}^{L} \Psi_\varphi \bar{x}_{\varphi(m(j))} - \sum_{\varphi=1}^{L} W_\varphi \bar{X}_{\varphi(m)}\right) - 2\eta_{2(m(j))}\left(\sum_{\varphi=1}^{L} \Psi_\varphi \hat{\Delta}_{x\varphi(m(j))} - \sum_{\varphi=1}^{L} W_\varphi \Delta_{x\varphi(m)}\right) - 2\eta_{3(m(j))}\left(\sum_{\varphi=1}^{L} \Psi_\varphi - \sum_{\varphi=1}^{L} W_\varphi\right).$$

Setting $\frac{\partial \Delta_{(m(j))}}{\partial \Psi_\varphi} = 0$, we obtain

$$\Psi_\varphi = W_\varphi + \tau_\varphi W_\varphi \left(\eta_{1(m(j))} \bar{x}_{\varphi(m(j))} + \eta_{2(m(j))} \hat{\Delta}_{x\varphi(m(j))} + \eta_{3(m(j))}\right). \qquad (22)$$
By substituting (22) into (18)–(20), we obtain a system of equations containing three equations:
$$G_{(3\times3)}\,\eta_{(3\times1)} = F_{(3\times1)}, \qquad (23)$$

where

$$\eta_{(3\times1)} = \begin{pmatrix} \eta_{1(m(j))} \\ \eta_{2(m(j))} \\ \eta_{3(m(j))} \end{pmatrix}, \qquad F_{(3\times1)} = \begin{pmatrix} \sum_{\varphi=1}^{L} W_\varphi \left(\bar{X}_{\varphi(m)} - \bar{x}_{\varphi(m(j))}\right) \\ \sum_{\varphi=1}^{L} W_\varphi \left(\Delta_{x\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right) \\ 0 \end{pmatrix},$$

$$G_{(3\times3)} = \begin{pmatrix} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{x}_{\varphi(m(j))}^{2} & \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{x}_{\varphi(m(j))} \hat{\Delta}_{x\varphi(m(j))} & \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{x}_{\varphi(m(j))} \\ \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} \bar{x}_{\varphi(m(j))} & \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}^{2} & \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} \\ \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{x}_{\varphi(m(j))} & \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} & \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \end{pmatrix}.$$

By solving Equation (23) via Cramer's rule, we obtain

$$\eta_{1(m(j))} = \frac{D_{1(m(j))}}{H}, \quad \eta_{2(m(j))} = \frac{D_{2(m(j))}}{H}, \quad \eta_{3(m(j))} = \frac{D_{3(m(j))}}{H},$$

where $D_{1(m(j))}$, $D_{2(m(j))}$, and $D_{3(m(j))}$ denote the determinants obtained from $G_{(3\times3)}$ by replacing its first, second, and third columns, respectively, with $F_{(3\times1)}$, and $H = \det\!\left(G_{(3\times3)}\right)$.
Substituting these values into (22) and (17) yields
$$\bar{y}_{PM} = \bar{y}_{st(m(j))} + \eta_{1(m(j))} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{x}_{\varphi(m(j))} \bar{y}_{\varphi(m(j))} + \eta_{2(m(j))} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} \bar{y}_{\varphi(m(j))} + \eta_{3(m(j))} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{y}_{\varphi(m(j))}$$
$$= \sum_{\varphi=1}^{L} W_\varphi \bar{y}_{\varphi(m(j))} + \hat{b}_1 \sum_{\varphi=1}^{L} W_\varphi \left(\bar{X}_{\varphi(m)} - \bar{x}_{\varphi(m(j))}\right) + \hat{b}_2 \sum_{\varphi=1}^{L} W_\varphi \left(\Delta_{x\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right),$$

where

$$\hat{b}_{1(j)} = \frac{D_{4(m(j))}}{H}, \quad \hat{b}_{2(j)} = \frac{D_{5(m(j))}}{H},$$

and $D_{4(m(j))}$ and $D_{5(m(j))}$ denote the determinants obtained from $G_{(3\times3)}$ by replacing its first and second rows, respectively, with $\left(\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{x}_{\varphi(m(j))} \bar{y}_{\varphi(m(j))},\ \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} \bar{y}_{\varphi(m(j))},\ \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{y}_{\varphi(m(j))}\right)$. Hence,

$$\bar{y}_{PM} = \begin{cases} \bar{y}_{st(m(O))} + \hat{b}_{1(O)} \sum_{\varphi=1}^{L} W_\varphi \left(\bar{X}_{\varphi(m)} - \bar{x}_{\varphi(m(O))}\right) + \hat{b}_{2(O)} \sum_{\varphi=1}^{L} W_\varphi \left(\Delta_{x\varphi(m)} - \hat{\Delta}_{x\varphi(m(O))}\right) & \text{when } n \text{ is odd}, \\[4pt] \bar{y}_{st(m(E))} + \hat{b}_{1(E)} \sum_{\varphi=1}^{L} W_\varphi \left(\bar{X}_{\varphi(m)} - \bar{x}_{\varphi(m(E))}\right) + \hat{b}_{2(E)} \sum_{\varphi=1}^{L} W_\varphi \left(\Delta_{x\varphi(m)} - \hat{\Delta}_{x\varphi(m(E))}\right) & \text{when } n \text{ is even}. \end{cases}$$
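In practice, rather than evaluating the $D_1,\dots,D_5$ determinants symbolically, the three calibration equations can be solved numerically as the linear system $G\eta = F$ above and the calibrated weights formed from Equation (22). The sketch below does exactly that with numpy; all per-stratum inputs are illustrative assumptions (note that at least three strata are needed for $G$ to be invertible).

```python
import numpy as np

W     = np.array([0.3, 0.3, 0.2, 0.2])   # stratum weights
tau   = np.ones(4)
x_bar = np.array([2.1, 1.9, 2.3, 1.7])   # x-bar_phi(m(j))
y_bar = np.array([4.2, 3.8, 4.5, 3.6])   # y-bar_phi(m(j))
d_hat = np.array([0.34, 0.37, 0.30, 0.40])  # Delta-hat, e.g. MCD-based C-hat_x
X_bar = np.array([2.0, 2.0, 2.2, 1.8])      # known X-bar_phi(m)
d_pop = np.array([0.35, 0.36, 0.31, 0.39])  # known Delta_x phi(m)

tw = tau * W
G = np.array([
    [(tw * x_bar**2).sum(),      (tw * x_bar * d_hat).sum(), (tw * x_bar).sum()],
    [(tw * d_hat * x_bar).sum(), (tw * d_hat**2).sum(),      (tw * d_hat).sum()],
    [(tw * x_bar).sum(),         (tw * d_hat).sum(),         tw.sum()],
])
F = np.array([(W * (X_bar - x_bar)).sum(), (W * (d_pop - d_hat)).sum(), 0.0])

eta = np.linalg.solve(G, F)                                   # (eta_1, eta_2, eta_3)
psi = W + tw * (eta[0] * x_bar + eta[1] * d_hat + eta[2])     # calibrated weights, Eq. (22)
y_PM = (psi * y_bar).sum()                                    # novel calibrated estimate
print(psi, y_PM)
```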

4. Double-Stratified MRSS

In practical situations where the population mean of an auxiliary variable is not known, the double sampling method can be utilized to estimate it. In this section, we focus on this scenario, in which the mean of the auxiliary variable is unavailable. Our approach builds upon the framework established by Al-Omari [13] and Koyuncu [14] for the double MRSS design, adapted to a two-stage (double) MRSS design in which simple random sampling is employed in the first stage, followed by MRSS in the second stage. It is important to note that in the $\varphi$-th stratum, $n_{a\varphi} = n_\varphi^2$ represents the sample size for the first phase, and $n_\varphi$ represents the sample size for the second phase. Let $\bar{x}_{a\varphi(m)}$ and $\hat{\Delta}_{xa\varphi(m)}$ denote the sample characteristics of the auxiliary variable in the first phase. In contrast, $\bar{x}_{\varphi(m(j))}$, $\bar{y}_{\varphi(m(j))}$, and $\hat{\Delta}_{x\varphi(m(j))}$ represent the sample characteristics of the auxiliary variable and the study variable in the second phase within the $\varphi$-th stratum.
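A brief sketch of the two-phase idea within one stratum: a large first-phase SRS of size $n_{a\varphi}=n_\varphi^2$ supplies the auxiliary benchmarks, and a second-phase MRSS of size $n_\varphi$ supplies the measured pairs. The population and sizes below are illustrative assumptions, and the median selection is shown for an odd second-phase size only.

```python
import numpy as np

rng = np.random.default_rng(3)

# population in one stratum (illustrative): x cheap to observe, y costly
x = rng.normal(2, 1, 1000)
y = 4 + 0.9 * (x - 2) + rng.normal(0, 0.5, 1000)

n_phi = 5                                   # second-phase MRSS size (odd here)
n_a   = n_phi ** 2                          # first-phase SRS size

# phase 1: large SRS, only x is observed -> first-phase auxiliary benchmark
phase1 = rng.choice(len(x), size=n_a, replace=False)
x_bar_a = x[phase1].mean()

# phase 2: MRSS of size n_phi from the phase-1 sample (median of each x-ranked set)
sets = rng.permutation(phase1).reshape(n_phi, n_phi)
medians = [s[np.argsort(x[s])][n_phi // 2] for s in sets]
x_bar_m, y_bar_m = x[medians].mean(), y[medians].mean()

print(x_bar_a, x_bar_m, y_bar_m)   # first-phase benchmark vs. second-phase MRSS means
```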

4.1. Generalized Class of Calibrated MCD-Based Estimators

The generalized class of estimators under two-stage MRSS is given below
$$\bar{y}_{TaM(i)} = \sum_{\varphi=1}^{L} \Psi_{a\varphi} \bar{y}_{\varphi(m(j))},$$
which is subject to the following constraints:
$$\sum_{\varphi=1}^{L} \Psi_{a\varphi} = \sum_{\varphi=1}^{L} W_\varphi, \qquad (28)$$

$$\sum_{\varphi=1}^{L} \Psi_{a\varphi} \hat{\Delta}_{x\varphi(m(j))} = \sum_{\varphi=1}^{L} W_\varphi \hat{\Delta}_{xa\varphi(m)}. \qquad (29)$$
Defining the Lagrange function with its multipliers $\eta_{1(m(j))}$ and $\eta_{2(m(j))}$ yields

$$\Delta_{(m(j))} = \sum_{\varphi=1}^{L} \frac{(\Psi_{a\varphi} - W_\varphi)^2}{\tau_\varphi W_\varphi} - 2\eta_{1(m(j))}\left(\sum_{\varphi=1}^{L} \Psi_{a\varphi} - \sum_{\varphi=1}^{L} W_\varphi\right) - 2\eta_{2(m(j))}\left(\sum_{\varphi=1}^{L} \Psi_{a\varphi} \hat{\Delta}_{x\varphi(m(j))} - \sum_{\varphi=1}^{L} W_\varphi \hat{\Delta}_{xa\varphi(m)}\right).$$

Setting $\frac{\partial \Delta_{(m(j))}}{\partial \Psi_{a\varphi}} = 0$, we obtain

$$\Psi_{a\varphi} = W_\varphi + \tau_\varphi W_\varphi \left(\eta_{2(m(j))} \hat{\Delta}_{x\varphi(m(j))} + \eta_{1(m(j))}\right). \qquad (31)$$
By inserting (31) into (28) and (29), we obtain
$$\eta_{1(m(j))} = \frac{-\sum_{\varphi=1}^{L} W_\varphi \left(\hat{\Delta}_{xa\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right) \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}}{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}^{2} - \left(\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}\right)^{2}},$$

$$\eta_{2(m(j))} = \frac{\sum_{\varphi=1}^{L} W_\varphi \left(\hat{\Delta}_{xa\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right) \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi}{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}^{2} - \left(\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}\right)^{2}}.$$

By substituting $\eta_{1(m(j))}$ and $\eta_{2(m(j))}$ into (31), we obtain

$$\Psi_{a\varphi} = W_\varphi + \tau_\varphi W_\varphi \frac{\hat{\Delta}_{x\varphi(m(j))} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi - \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}}{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}^{2} - \left(\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}\right)^{2}} \sum_{\varphi=1}^{L} W_\varphi \left(\hat{\Delta}_{xa\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right).$$

Inserting $\Psi_{a\varphi}$ into $\bar{y}_{TaM(i)}$ yields

$$\bar{y}_{TaM(i)} = \sum_{\varphi=1}^{L} W_\varphi \bar{y}_{\varphi(m(j))} + \frac{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{y}_{\varphi(m(j))} \hat{\Delta}_{x\varphi(m(j))} - \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{y}_{\varphi(m(j))}}{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}^{2} - \left(\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}\right)^{2}} \sum_{\varphi=1}^{L} W_\varphi \left(\hat{\Delta}_{xa\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right).$$
This estimator can be rewritten as
$$\bar{y}_{TaM(i)} = \bar{y}_{st(m(j))} + \hat{b}_j \sum_{\varphi=1}^{L} W_\varphi \left(\hat{\Delta}_{xa\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right),$$
where
$$\hat{b}_j = \frac{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{y}_{\varphi(m(j))} \hat{\Delta}_{x\varphi(m(j))} - \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{y}_{\varphi(m(j))}}{\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}^{2} - \left(\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}\right)^{2}}.$$

Hence,

$$\bar{y}_{TaM(i)} = \begin{cases} \bar{y}_{st(m(O))} + \hat{b}_{(O)} \sum_{\varphi=1}^{L} W_\varphi \left(\hat{\Delta}_{xa\varphi(m)} - \hat{\Delta}_{x\varphi(m(O))}\right) & \text{when } n \text{ is odd}, \\[4pt] \bar{y}_{st(m(E))} + \hat{b}_{(E)} \sum_{\varphi=1}^{L} W_\varphi \left(\hat{\Delta}_{xa\varphi(m)} - \hat{\Delta}_{x\varphi(m(E))}\right) & \text{when } n \text{ is even}. \end{cases}$$

4.2. Novel Class of Calibrated MCD-Based Estimators

The proposed estimator under double-stratified MRSS is given below:

$$\bar{y}_{PaM} = \sum_{\varphi=1}^{L} \Psi_{a\varphi} \bar{y}_{\varphi(m(j))}, \qquad (39)$$
which is subject to the following constraints:
$$\sum_{\varphi=1}^{L} \Psi_{a\varphi} \bar{x}_{\varphi(m(j))} = \sum_{\varphi=1}^{L} W_\varphi \bar{x}_{a\varphi(m)}, \qquad (40)$$

$$\sum_{\varphi=1}^{L} \Psi_{a\varphi} \hat{\Delta}_{x\varphi(m(j))} = \sum_{\varphi=1}^{L} W_\varphi \hat{\Delta}_{xa\varphi(m)}, \qquad (41)$$

$$\sum_{\varphi=1}^{L} \Psi_{a\varphi} = \sum_{\varphi=1}^{L} W_\varphi. \qquad (42)$$

Defining the Lagrange function with its multipliers $\eta_{1(m(j))}$, $\eta_{2(m(j))}$, and $\eta_{3(m(j))}$ yields

$$\Delta_{(m(j))} = \sum_{\varphi=1}^{L} \frac{(\Psi_{a\varphi} - W_\varphi)^2}{\tau_\varphi W_\varphi} - 2\eta_{1(m(j))}\left(\sum_{\varphi=1}^{L} \Psi_{a\varphi} \bar{x}_{\varphi(m(j))} - \sum_{\varphi=1}^{L} W_\varphi \bar{x}_{a\varphi(m)}\right) - 2\eta_{2(m(j))}\left(\sum_{\varphi=1}^{L} \Psi_{a\varphi} \hat{\Delta}_{x\varphi(m(j))} - \sum_{\varphi=1}^{L} W_\varphi \hat{\Delta}_{xa\varphi(m)}\right) - 2\eta_{3(m(j))}\left(\sum_{\varphi=1}^{L} \Psi_{a\varphi} - \sum_{\varphi=1}^{L} W_\varphi\right).$$

Setting $\frac{\partial \Delta_{(m(j))}}{\partial \Psi_{a\varphi}} = 0$, we obtain

$$\Psi_{a\varphi} = W_\varphi + \tau_\varphi W_\varphi \left(\eta_{1(m(j))} \bar{x}_{\varphi(m(j))} + \eta_{2(m(j))} \hat{\Delta}_{x\varphi(m(j))} + \eta_{3(m(j))}\right). \qquad (44)$$
Substituting (44) into (40)–(42) yields a system of equations containing three equations. We can present this system of equations in matrix form, as follows
$$G_{(3\times3)}\,\eta_{(3\times1)} = F_{(3\times1)}, \qquad (45)$$

where

$$\eta_{(3\times1)} = \begin{pmatrix} \eta_{1(m(j))} \\ \eta_{2(m(j))} \\ \eta_{3(m(j))} \end{pmatrix}, \qquad F_{(3\times1)} = \begin{pmatrix} \sum_{\varphi=1}^{L} W_\varphi \left(\bar{x}_{a\varphi(m)} - \bar{x}_{\varphi(m(j))}\right) \\ \sum_{\varphi=1}^{L} W_\varphi \left(\hat{\Delta}_{xa\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right) \\ 0 \end{pmatrix},$$

$$G_{(3\times3)} = \begin{pmatrix} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{x}_{\varphi(m(j))}^{2} & \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{x}_{\varphi(m(j))} \hat{\Delta}_{x\varphi(m(j))} & \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{x}_{\varphi(m(j))} \\ \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} \bar{x}_{\varphi(m(j))} & \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))}^{2} & \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} \\ \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{x}_{\varphi(m(j))} & \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} & \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \end{pmatrix}.$$
By solving Equation (45), we obtain
$$\eta_{1(m(j))} = \frac{D_{1(m(j))}}{H}, \quad \eta_{2(m(j))} = \frac{D_{2(m(j))}}{H}, \quad \eta_{3(m(j))} = \frac{D_{3(m(j))}}{H},$$

where $D_{1(m(j))}$, $D_{2(m(j))}$, and $D_{3(m(j))}$ denote the determinants obtained from $G_{(3\times3)}$ by replacing its first, second, and third columns, respectively, with $F_{(3\times1)}$, and $H = \det\!\left(G_{(3\times3)}\right)$.
By substituting these values into (44) and (39), we obtain
$$\bar{y}_{PaM} = \bar{y}_{st(m(j))} + \eta_{1(m(j))} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{x}_{\varphi(m(j))} \bar{y}_{\varphi(m(j))} + \eta_{2(m(j))} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} \bar{y}_{\varphi(m(j))} + \eta_{3(m(j))} \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{y}_{\varphi(m(j))}$$
$$= \sum_{\varphi=1}^{L} W_\varphi \bar{y}_{\varphi(m(j))} + \hat{b}_1 \sum_{\varphi=1}^{L} W_\varphi \left(\bar{x}_{a\varphi(m)} - \bar{x}_{\varphi(m(j))}\right) + \hat{b}_2 \sum_{\varphi=1}^{L} W_\varphi \left(\hat{\Delta}_{xa\varphi(m)} - \hat{\Delta}_{x\varphi(m(j))}\right),$$

where

$$\hat{b}_{1(j)} = \frac{D_{4(m(j))}}{H}, \quad \hat{b}_{2(j)} = \frac{D_{5(m(j))}}{H},$$

and $D_{4(m(j))}$ and $D_{5(m(j))}$ denote the determinants obtained from $G_{(3\times3)}$ by replacing its first and second rows, respectively, with $\left(\sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{x}_{\varphi(m(j))} \bar{y}_{\varphi(m(j))},\ \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \hat{\Delta}_{x\varphi(m(j))} \bar{y}_{\varphi(m(j))},\ \sum_{\varphi=1}^{L} \tau_\varphi W_\varphi \bar{y}_{\varphi(m(j))}\right)$. Hence,

$$\bar{y}_{PaM} = \begin{cases} \bar{y}_{st(m(O))} + \hat{b}_{1(O)} \sum_{\varphi=1}^{L} W_\varphi \left(\bar{x}_{a\varphi(m)} - \bar{x}_{\varphi(m(O))}\right) + \hat{b}_{2(O)} \sum_{\varphi=1}^{L} W_\varphi \left(\hat{\Delta}_{xa\varphi(m)} - \hat{\Delta}_{x\varphi(m(O))}\right) & \text{when } n \text{ is odd}, \\[4pt] \bar{y}_{st(m(E))} + \hat{b}_{1(E)} \sum_{\varphi=1}^{L} W_\varphi \left(\bar{x}_{a\varphi(m)} - \bar{x}_{\varphi(m(E))}\right) + \hat{b}_{2(E)} \sum_{\varphi=1}^{L} W_\varphi \left(\hat{\Delta}_{xa\varphi(m)} - \hat{\Delta}_{x\varphi(m(E))}\right) & \text{when } n \text{ is even}. \end{cases}$$
Note that, just as in the generalized class, we generate different members of the proposed family by replacing $\Delta_{x\varphi(m)}$ with some known characteristic of an auxiliary variable and by using different values of $\tau_\varphi$, as shown in Table 1.

5. Numerical Illustration

5.1. Simulation Design

The objective of the simulation experiments in this section is to gain valuable insights into the effectiveness and efficiency of the estimators $\bar{y}_{PM}^{I}$, $\bar{y}_{PM}^{II}$, $\bar{y}_{PM}^{III}$, $\bar{y}_{PaM}^{I}$, $\bar{y}_{PaM}^{II}$, and $\bar{y}_{PaM}^{III}$ compared with the estimators $\bar{y}_{TM(1)}^{I}$, $\bar{y}_{TM(1)}^{II}$, $\bar{y}_{TM(1)}^{III}$, $\bar{y}_{TaM(1)}^{I}$, $\bar{y}_{TaM(1)}^{II}$, $\bar{y}_{TaM(1)}^{III}$, $\bar{y}_{TM(2)}^{I}$, $\bar{y}_{TM(2)}^{II}$, $\bar{y}_{TM(2)}^{III}$, $\bar{y}_{TaM(2)}^{I}$, $\bar{y}_{TaM(2)}^{II}$, and $\bar{y}_{TaM(2)}^{III}$. For each stratum of size $N_\varphi = 1000$, four separate bivariate symmetric (Gaussian) distributions were employed, one per stratum, with common means $(\mu_x = 1.99, \mu_y = 3.99)$ and variance–covariance matrices specified as follows (a data-generation sketch is given after the list):
  • Stratum 1: $\Sigma_{xy} = \begin{pmatrix} 1.00 & 0.88 \\ 0.88 & 1.00 \end{pmatrix}$
  • Stratum 2: $\Sigma_{xy} = \begin{pmatrix} 1.00 & 0.77 \\ 0.77 & 1.00 \end{pmatrix}$
  • Stratum 3: $\Sigma_{xy} = \begin{pmatrix} 1.00 & 0.66 \\ 0.66 & 1.00 \end{pmatrix}$
  • Stratum 4: $\Sigma_{xy} = \begin{pmatrix} 1.00 & 0.33 \\ 0.33 & 1.00 \end{pmatrix}$
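A sketch of the data-generation step just described: each stratum of size $N_\varphi = 1000$ is drawn from a bivariate normal distribution with means $(\mu_x, \mu_y) = (1.99, 3.99)$ and the stratum correlation listed above. Only the generation itself is shown; the sampling and estimator loops described next are assumed to follow.

```python
import numpy as np

rng = np.random.default_rng(2023)
N_phi = 1000
mu = [1.99, 3.99]                       # (mu_x, mu_y), common to all strata
rhos = [0.88, 0.77, 0.66, 0.33]         # stratum correlations from the matrices above

strata = []
for rho in rhos:
    cov = [[1.0, rho], [rho, 1.0]]
    xy = rng.multivariate_normal(mu, cov, size=N_phi)
    strata.append({"x": xy[:, 0], "y": xy[:, 1]})

# sanity check: empirical correlations per stratum
print([np.corrcoef(s["x"], s["y"])[0, 1].round(2) for s in strata])
```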
Taking inspiration from the research conducted by Koyuncu [14,28], we extract samples from the aforementioned stratified population. While Koyuncu utilized stratified simple random sampling (SRS), we modify that framework to suit the stratified MRSS design. To ensure a fair comparison between the adapted estimators and the proposed ones, we obtained diverse samples under the MRSS scheme. The sizes of these samples are listed in Table 2, where $D_{a1}$, $D_{a2}$, $D_{a3}$, $D_{a4}$, $F_{a1}$, $F_{a2}$, $F_{a3}$, and $F_{a4}$ correspond to the total selected sample sizes for each respective stratum at the first and second stages.
We selected $R_1 = 5000$ samples of sizes $n_\varphi = D_{a1}, D_{a2}, D_{a3}, D_{a4}$ under MRSS within a single-stage framework. For each $k$th sample, we calculated the estimate $(\hat{q}_{(g1)}, \hat{q}_{(g2)})$ of the population mean $\mu_y$, where

$$\hat{q}_{(g1)} = \bar{y}_{TM(1)}^{I}, \bar{y}_{TM(1)}^{II}, \bar{y}_{TM(1)}^{III}, \bar{y}_{TM(2)}^{I}, \bar{y}_{TM(2)}^{II}, \bar{y}_{TM(2)}^{III},$$
$$\hat{q}_{(g2)} = \bar{y}_{PM}^{I}, \bar{y}_{PM}^{II}, \bar{y}_{PM}^{III}.$$

Adapting the work of Al-Omari [13] and Koyuncu [14], in the first stage we selected $R_1 = 5000$ samples of sizes $n_{a\varphi} = n_\varphi \times n_\varphi = D_{a1}, D_{a2}, D_{a3}, D_{a4}$. Then, MRSS samples of sizes $n_\varphi = F_{a1}, F_{a2}, F_{a3}, F_{a4}$ were chosen from the $n_\varphi \times n_\varphi$ samples in the second (double) stage. Using the drawn samples, we calculated the estimate $(\hat{q}_{(g1)}, \hat{q}_{(g2)})$ of the population mean $\mu_y$, where

$$\hat{q}_{(g1)} = \bar{y}_{TaM(1)}^{I}, \bar{y}_{TaM(1)}^{II}, \bar{y}_{TaM(1)}^{III}, \bar{y}_{TaM(2)}^{I}, \bar{y}_{TaM(2)}^{II}, \bar{y}_{TaM(2)}^{III},$$
$$\hat{q}_{(g2)} = \bar{y}_{PaM}^{I}, \bar{y}_{PaM}^{II}, \bar{y}_{PaM}^{III}.$$
The expressions for the MSEs and PREs are as follows:

$$\mathrm{MSE}(\hat{q}_{(g1)}) = \frac{1}{R_1}\sum_{k_1=1}^{R_1} \left(\hat{q}_{(g1)} - \mu_y\right)^2,$$
$$\mathrm{MSE}(\hat{q}_{(g2)}) = \frac{1}{R_1}\sum_{k_1=1}^{R_1} \left(\hat{q}_{(g2)} - \mu_y\right)^2,$$
$$\mathrm{PRE}(\hat{q}_{(g1)}, \hat{q}_{(g2)}) = \frac{\mathrm{MSE}(\hat{q}_{(g1)})}{\mathrm{MSE}(\hat{q}_{(g2)})} \times 100.$$
The different sample sizes of the symmetric population are provided in Table 2, and the PREs are provided in Table 3, Table 4, Table 5 and Table 6.
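The MSE and PRE formulas above reduce to the following Monte Carlo bookkeeping. In the sketch, `estimates_g1` and `estimates_g2` stand for the $R_1 = 5000$ replicated values of an adapted and a proposed estimator; the toy replicates below are placeholders, not the paper's simulation output.

```python
import numpy as np

def pre(estimates_g1, estimates_g2, mu_y):
    """Percentage relative efficiency of g2 with respect to g1 (values > 100 favour g2)."""
    mse_g1 = np.mean((np.asarray(estimates_g1) - mu_y) ** 2)
    mse_g2 = np.mean((np.asarray(estimates_g2) - mu_y) ** 2)
    return 100.0 * mse_g1 / mse_g2

# toy replicates standing in for 5000 simulated estimates of mu_y = 3.99
rng = np.random.default_rng(7)
g1 = 3.99 + rng.normal(0, 0.08, 5000)   # adapted estimator: larger spread
g2 = 3.99 + rng.normal(0, 0.03, 5000)   # proposed estimator: smaller spread
print(round(pre(g1, g2, 3.99), 2))      # PRE well above 100
```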

5.2. BMI Application

The performance of the $\hat{q}_{(g2)}$ estimators was further evaluated using real data. In this case, we utilized data from Turkey in 2014 concerning 800 individuals, where body mass index (BMI) served as the study variable and age acted as the auxiliary variable. The dataset consisted of $N = 800$ observations, with a correlation coefficient of $\rho_{xy} = 0.60$ and mean values of $\mu_y = 23.77$ for the study variable (BMI) and $\mu_x = 30.12$ for the auxiliary variable (age). The coefficient of variation was $C_x = 0.36$ for the auxiliary variable and $C_y = 0.17$ for the study variable. To stratify the dataset, we divided it into two strata based on gender. For details regarding the BMI data, refer to the work conducted by Shahzad et al. [17]. Here are some key properties:
  • Stratum-I: $N_{h_1} = 477$, $\rho_{yxh_1} = 0.62$, $\mu_{yh_1} = 22.36$, $\mu_{xh_1} = 27.68$, $C_{xh_1} = 0.36$, $C_{yh_1} = 0.17$.
  • Stratum-II: $N_{h_2} = 323$, $\rho_{yxh_2} = 0.49$, $\mu_{yh_2} = 25.85$, $\mu_{xh_2} = 33.73$, $C_{xh_2} = 0.33$, $C_{yh_2} = 0.13$.
The BMI data results are given in Table 7 and Table 8. It is evident from Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 that the PRE (percentage relative efficiency) values are greater than 100 for all the proposed estimators, indicating their superior performance compared with the adapted estimators. It is important to note that this conclusion is drawn from our simulation study and the BMI application; however, we are confident that these results hold in various other settings as well.

6. Conclusions

In this paper, our focus was on adapting the estimators proposed by Sinha et al. [29] and Garg and Pachori [30], specifically for the stratified MRSS design. MRSS is a well-established sampling technique that is widely recognized in this field. Additionally, this study introduces new calibrated estimators, based on the MCD, for estimating the population mean in both single- and double-stratified MRSS setups. It is worth noting that novel estimators with three constraints exhibit greater effectiveness compared to generalized estimators with two constraints. To support our findings, a simulation study was conducted. We also have plans to further expand on our present work using diverse RSS designs, for which the insights provided by Koyuncu [14,28] will be taken into consideration.

Author Contributions

Conceptualization, U.S.; Data curation, U.S.; Formal analysis, A.M.A. and U.S.; Funding acquisition, A.M.A.; Investigation, U.S.; Methodology, A.M.A.; Project administration, U.S.; Resources, U.S.; Software, U.S.; Supervision, U.S.; Validation, U.S.; Visualization, U.S.; Writing—original draft, A.M.A. and U.S.; Writing—review and editing, A.M.A. and U.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Grant No. 3802].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All the data used in this study are already available in [17].

Conflicts of Interest

The authors have no conflict of interest to declare.

References

  1. Sharma, R.; Garg, R.K.; Gaur, J.R. Various methods for the estimation of the post mortem interval from Calliphoridae: A review. Egypt. J. Forensic Sci. 2015, 5, 1–12.
  2. McIntyre, G. A method for unbiased selective sampling, using ranked sets. Aust. J. Agric. Res. 1952, 3, 385–390.
  3. Chen, Z.; Bai, Z.; Sinha, B. Ranked Set Sampling: Theory and Applications; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2003; Volume 176.
  4. Hassan, A.S.; Almanjahie, I.M.; Al-Omari, A.I.; Alzoubi, L.; Nagy, H.F. Stress–strength modeling using median-ranked set sampling: Estimation, simulation, and application. Mathematics 2023, 11, 318.
  5. Bouza, C.N. Ranked set sampling for the product estimator. Investig. Oper. 2013, 29, 201–206.
  6. Nagy, H.F.; Al-Omari, A.I.; Hassan, A.S.; Alomani, G.A. Improved estimation of the inverted Kumaraswamy distribution parameters based on ranked set sampling with an application to real data. Mathematics 2022, 10, 4102.
  7. Benchiha, S.; Al-Omari, A.I.; Alomani, G. Goodness-of-fit tests for weighted generalized quasi-Lindley distribution using SRS and RSS with applications to real data. Axioms 2022, 11, 490.
  8. Shahzad, U.; Ahmad, I.; Oral, E.; Hanif, M.; Almanjahie, I.M. Estimation of the population mean by successive use of an auxiliary variable in median ranked set sampling. Math. Popul. Stud. 2021, 3, 176–199.
  9. Shahzad, U.; Ahmad, I.; Almanjahie, I.M.; Al-Omari, A.I. Three-fold utilization of supplementary information for mean estimation under median ranked set sampling scheme. PLoS ONE 2022, 10, e0276514.
  10. Bhushan, S.; Kumar, A.; Zaman, T.; Al Mutairi, A. Efficient difference and ratio-type imputation methods under ranked set sampling. Axioms 2023, 12, 558.
  11. Muttlak, H.A. Median ranked set sampling. J. Appl. Stat. Sci. 1997, 6, 245–255.
  12. Oral, E.; Oral, E. A robust alternative to the ratio estimator under non-normality. Stat. Probab. Lett. 2011, 81, 930–936.
  13. Al-Omari, A.I. Ratio estimation of the population mean using auxiliary information in simple random sampling and median ranked set sampling. Stat. Probab. Lett. 2012, 82, 1883–1890.
  14. Koyuncu, N. New difference-cum-ratio and exponential type estimators in median ranked set sampling. Hacet. J. Math. Stat. 2016, 45, 207–225.
  15. Rousseeuw, P.J. Multivariate estimation with high breakdown point. Math. Stat. Appl. 1985, 8, 37.
  16. Muthukrishnan, R.; Mahesh, K. Robust procedure for estimating multivariate location and scatter. Am. Int. J. Res. Sci. Technol. Eng. Math. 2014, 6, 189–195.
  17. Shahzad, U.; Ahmad, I.; Alshahrani, F.; Almanjahie, I.; Iftikhar, S. Calibration-based mean estimators under stratified median ranked set sampling. Mathematics 2023, 11, 1825.
  18. Abbasi, H.; Hanif, M.; Shahzad, U.; Emam, W.; Tashkandy, Y.; Iftikhar, S.; Shahzadi, S. Calibration estimation of cumulative distribution function using robust measures. Symmetry 2023, 15, 1157.
  19. Johnson, D.; Dupuis, G.; Piche, J.; Clayborne, Z.; Colman, I. Adult mental health outcomes of adolescent depression: A systematic review. Depress. Anxiety 2018, 35, 700–716.
  20. Schroder, H.; Marrugat, J.; Elosua, R.; Covas, M.I. Relationship between body mass index, serum cholesterol, leisure-time physical activity, and diet in a Mediterranean Southern-Europe population. Br. J. Nutr. 2003, 90, 431–440.
  21. Zaman, T.; Bulut, H. Modified regression estimators using robust regression methods and covariance matrices in stratified random sampling. Commun. Stat. Theory Methods 2020, 49, 3407–3420.
  22. Shahzad, U.; Al-Noor, N.H.; Hanif, M.; Sajjad, I.; Anas, M.M. Imputation based mean estimators in case of missing data utilizing robust regression and variance–covariance matrices. Commun. Stat. Simul. Comput. 2022, 51, 4276–4295.
  23. Zaman, T.; Bulut, H. An efficient family of robust-type estimators for the population variance in simple and stratified random sampling. Commun. Stat. Theory Methods 2021, 52, 2610–2624.
  24. Bulut, H.; Zaman, T. An improved class of robust ratio estimators by using the minimum covariance determinant estimation. Commun. Stat. Simul. Comput. 2022, 51, 2457–2463.
  25. Zaman, T.; Bulut, H. A new class of robust ratio estimators for finite population variance. Sci. Iran. 2022.
  26. Deville, J.C.; Särndal, C.E. Calibration estimators in survey sampling. J. Am. Stat. Assoc. 1992, 87, 376–382.
  27. Singh, S.; Horn, S.; Yu, F. Estimation of variance of general regression estimator: Higher level calibration approach. Surv. Methodol. 1998, 48, 41–50.
  28. Koyuncu, N. Calibration estimator of population mean under stratified ranked set sampling design. Commun. Stat. Theory Methods 2018, 47, 5845–5853.
  29. Sinha, N.; Sisodia, B.V.S.; Singh, S.; Singh, S.K. Calibration approach estimation of the mean in stratified sampling and stratified double sampling. Commun. Stat. Theory Methods 2017, 46, 4932–4942.
  30. Garg, N.; Pachori, M. Use of coefficient of variation in calibration estimation of population mean in stratified sampling. Commun. Stat. Theory Methods 2019, 49, 5842–5852.
Table 1. Family members of all classes.

| MRSS | | Double MRSS | |
| Estimator | $\tau_\varphi$ | Estimator | $\left(\hat{\Delta}_{x\varphi(m(j))}, \Delta_{x\varphi(m)}\right)$ |
| $\bar{y}_{TM(1)}^{I}$ | $1$ | $\bar{y}_{TaM(1)}^{I}$ | $\left(\bar{x}_{\varphi(m(j))}, \bar{X}_{\varphi(m)}\right)$ |
| $\bar{y}_{TM(1)}^{II}$ | $1/\hat{C}_{x\varphi(m(j))}$ | $\bar{y}_{TaM(1)}^{II}$ | $\left(\bar{x}_{\varphi(m(j))}, \bar{X}_{\varphi(m)}\right)$ |
| $\bar{y}_{TM(1)}^{III}$ | $1/\bar{x}_{\varphi(m(j))}$ | $\bar{y}_{TaM(1)}^{III}$ | $\left(\bar{x}_{\varphi(m(j))}, \bar{X}_{\varphi(m)}\right)$ |
| $\bar{y}_{TM(2)}^{I}$ | $1$ | $\bar{y}_{TaM(2)}^{I}$ | $\left(\hat{C}_{x\varphi(m(j))}, C_{x\varphi(m)}\right)$ |
| $\bar{y}_{TM(2)}^{II}$ | $1/\hat{C}_{x\varphi(m(j))}$ | $\bar{y}_{TaM(2)}^{II}$ | $\left(\hat{C}_{x\varphi(m(j))}, C_{x\varphi(m)}\right)$ |
| $\bar{y}_{TM(2)}^{III}$ | $1/\bar{x}_{\varphi(m(j))}$ | $\bar{y}_{TaM(2)}^{III}$ | $\left(\hat{C}_{x\varphi(m(j))}, C_{x\varphi(m)}\right)$ |
| $\bar{y}_{PM}^{I}$ | $1$ | $\bar{y}_{PaM}^{I}$ | $\left(\hat{C}_{x\varphi(m(j))}, C_{x\varphi(m)}\right)$ |
| $\bar{y}_{PM}^{II}$ | $1/\hat{C}_{x\varphi(m(j))}$ | $\bar{y}_{PaM}^{II}$ | $\left(\hat{C}_{x\varphi(m(j))}, C_{x\varphi(m)}\right)$ |
| $\bar{y}_{PM}^{III}$ | $1/\bar{x}_{\varphi(m(j))}$ | $\bar{y}_{PaM}^{III}$ | $\left(\hat{C}_{x\varphi(m(j))}, C_{x\varphi(m)}\right)$ |
Table 2. Details of different sample sizes.

| MRSS | | | Double MRSS | | |
| Sample | $(n_{\varphi1}, n_{\varphi2}, n_{\varphi3}, n_{\varphi4})$ | Results | Sample | $(n_{a\varphi1}, n_{a\varphi2}, n_{a\varphi3}, n_{a\varphi4})$ | $(n_{\varphi1}, n_{\varphi2}, n_{\varphi3}, n_{\varphi4})$ |
| $D_{a1}$ | (3, 5, 5, 3) | Table 3 | $F_{a1}$ | (9, 25, 25, 9) | (3, 5, 5, 3) |
| $D_{a2}$ | (4, 6, 6, 4) | Table 4 | $F_{a2}$ | (16, 36, 36, 16) | (4, 6, 6, 4) |
| $D_{a3}$ | (5, 7, 7, 5) | Table 5 | $F_{a3}$ | (25, 49, 49, 25) | (5, 7, 7, 5) |
| $D_{a4}$ | (6, 8, 8, 6) | Table 6 | $F_{a4}$ | (36, 64, 64, 36) | (6, 8, 8, 6) |
Table 3. PRE using $(D_{a1}, F_{a1})$.

| PRE (MRSS) | | | | PRE (double MRSS) | | | |
| $\hat{\phi}$ | $\bar{y}_{PM}^{I}$ | $\bar{y}_{PM}^{II}$ | $\bar{y}_{PM}^{III}$ | $\hat{\phi}$ | $\bar{y}_{PaM}^{I}$ | $\bar{y}_{PaM}^{II}$ | $\bar{y}_{PaM}^{III}$ |
| $\bar{y}_{TM(1)}^{I}$ | 234.1993 | 233.9116 | 234.1762 | $\bar{y}_{TaM(1)}^{I}$ | 219.1993 | 218.9116 | 217.6762 |
| $\bar{y}_{TM(1)}^{II}$ | 231.9974 | 233.8001 | 233.9743 | $\bar{y}_{TaM(1)}^{II}$ | 228.9974 | 227.8002 | 227.4743 |
| $\bar{y}_{TM(1)}^{III}$ | 234.2525 | 233.9648 | 234.2294 | $\bar{y}_{TaM(1)}^{III}$ | 228.2525 | 228.9648 | 239.7294 |
| $\bar{y}_{TM(2)}^{I}$ | 637.2576 | 636.4615 | 637.1646 | $\bar{y}_{TaM(2)}^{I}$ | 635.3576 | 633.4625 | 633.8646 |
| $\bar{y}_{TM(2)}^{II}$ | 631.7327 | 629.9447 | 631.6406 | $\bar{y}_{TaM(2)}^{II}$ | 623.7327 | 623.9447 | 621.4416 |
| $\bar{y}_{TM(2)}^{III}$ | 637.7392 | 636.9424 | 637.6461 | $\bar{y}_{TaM(2)}^{III}$ | 626.7392 | 626.9424 | 623.2461 |
Table 4. PRE using $(D_{a2}, F_{a2})$.

| PRE (MRSS) | | | | PRE (double MRSS) | | | |
| $\hat{\phi}$ | $\bar{y}_{PM}^{I}$ | $\bar{y}_{PM}^{II}$ | $\bar{y}_{PM}^{III}$ | $\hat{\phi}$ | $\bar{y}_{PaM}^{I}$ | $\bar{y}_{PaM}^{II}$ | $\bar{y}_{PaM}^{III}$ |
| $\bar{y}_{TM(1)}^{I}$ | 1118.201 | 1118.629 | 1118.467 | $\bar{y}_{TaM(1)}^{I}$ | 997.1999 | 995.6283 | 1118.467 |
| $\bar{y}_{TM(1)}^{II}$ | 1119.995 | 1119.514 | 1116.352 | $\bar{y}_{TaM(1)}^{II}$ | 1103.9943 | 1103.5135 | 1109.852 |
| $\bar{y}_{TM(1)}^{III}$ | 1117.985 | 1115.413 | 1116.259 | $\bar{y}_{TaM(1)}^{III}$ | 1105.9837 | 1103.4129 | 1014.769 |
| $\bar{y}_{TM(2)}^{I}$ | 1661.371 | 1654.876 | 1664.519 | $\bar{y}_{TaM(2)}^{I}$ | 1658.3702 | 1569.9992 | 1662.219 |
| $\bar{y}_{TM(2)}^{II}$ | 1748.992 | 1751.889 | 1749.367 | $\bar{y}_{TaM(2)}^{II}$ | 1746.9908 | 1759.9853 | 1749.867 |
| $\bar{y}_{TM(2)}^{III}$ | 1662.919 | 1665.554 | 1664.981 | $\bar{y}_{TaM(2)}^{III}$ | 1659.9181 | 1668.5528 | 1663.471 |
Table 5. PRE using $(D_{a3}, F_{a3})$.

| PRE (MRSS) | | | | PRE (double MRSS) | | | |
| $\hat{\phi}$ | $\bar{y}_{PM}^{I}$ | $\bar{y}_{PM}^{II}$ | $\bar{y}_{PM}^{III}$ | $\hat{\phi}$ | $\bar{y}_{PaM}^{I}$ | $\bar{y}_{PaM}^{II}$ | $\bar{y}_{PaM}^{III}$ |
| $\bar{y}_{TM(1)}^{I}$ | 223.9373 | 225.4532 | 223.9359 | $\bar{y}_{TaM(1)}^{I}$ | 201.93719 | 202.4532 | 215.9359 |
| $\bar{y}_{TM(1)}^{II}$ | 227.9226 | 229.4848 | 227.9183 | $\bar{y}_{TaM(1)}^{II}$ | 212.92255 | 213.4848 | 218.4183 |
| $\bar{y}_{TM(1)}^{III}$ | 224.2589 | 225.6875 | 224.1674 | $\bar{y}_{TaM(1)}^{III}$ | 219.26885 | 209.6875 | 217.6674 |
| $\bar{y}_{TM(2)}^{I}$ | 1728.5356 | 1805.4129 | 1723.6128 | $\bar{y}_{TaM(2)}^{I}$ | 1716.53561 | 1791.4129 | 1718.3119 |
| $\bar{y}_{TM(2)}^{II}$ | 1821.4715 | 1899.4269 | 1816.4787 | $\bar{y}_{TaM(2)}^{II}$ | 1803.47145 | 1892.4269 | 1806.9787 |
| $\bar{y}_{TM(2)}^{III}$ | 1759.3319 | 1836.5667 | 1754.4752 | $\bar{y}_{TaM(2)}^{III}$ | 1738.42179 | 1816.6568 | 1738.8752 |
Table 6. PRE using $(D_{a4}, F_{a4})$.

| PRE (MRSS) | | | | PRE (double MRSS) | | | |
| $\hat{\phi}$ | $\bar{y}_{PM}^{I}$ | $\bar{y}_{PM}^{II}$ | $\bar{y}_{PM}^{III}$ | $\hat{\phi}$ | $\bar{y}_{PaM}^{I}$ | $\bar{y}_{PaM}^{II}$ | $\bar{y}_{PaM}^{III}$ |
| $\bar{y}_{TM(1)}^{I}$ | 883.2887 | 901.9381 | 873.6702 | $\bar{y}_{TaM(1)}^{I}$ | 856.2887 | 878.9381 | 865.6702 |
| $\bar{y}_{TM(1)}^{II}$ | 889.8739 | 908.7711 | 879.1745 | $\bar{y}_{TaM(1)}^{II}$ | 874.8739 | 892.7711 | 869.6745 |
| $\bar{y}_{TM(1)}^{III}$ | 882.8997 | 901.6299 | 873.2869 | $\bar{y}_{TaM(1)}^{III}$ | 868.8997 | 886.6299 | 866.7869 |
| $\bar{y}_{TM(2)}^{I}$ | 1586.7028 | 1622.6578 | 1568.9113 | $\bar{y}_{TaM(2)}^{I}$ | 1585.7028 | 1621.6578 | 1567.7013 |
| $\bar{y}_{TM(2)}^{II}$ | 1585.6265 | 1621.2682 | 1568.9958 | $\bar{y}_{TaM(2)}^{II}$ | 1583.6265 | 1629.2682 | 1566.5858 |
| $\bar{y}_{TM(2)}^{III}$ | 1593.4205 | 1629.9797 | 1575.8494 | $\bar{y}_{TaM(2)}^{III}$ | 1591.3305 | 1627.9797 | 1573.2494 |
Table 7. BMI PRE for odd samples.

| PRE (MRSS) | | | | PRE (double MRSS) | | | |
| $\hat{\phi}$ | $\bar{y}_{PM}^{I}$ | $\bar{y}_{PM}^{II}$ | $\bar{y}_{PM}^{III}$ | $\hat{\phi}$ | $\bar{y}_{PaM}^{I}$ | $\bar{y}_{PaM}^{II}$ | $\bar{y}_{PaM}^{III}$ |
| $\bar{y}_{TM(1)}^{I}$ | 581.1897 | 625.0372 | 592.5631 | $\bar{y}_{TaM(1)}^{I}$ | 563.1721 | 602.1433 | 573.5220 |
| $\bar{y}_{TM(1)}^{II}$ | 560.0198 | 605.8929 | 580.8859 | $\bar{y}_{TaM(1)}^{II}$ | 544.8397 | 580.5889 | 563.7429 |
| $\bar{y}_{TM(1)}^{III}$ | 602.3596 | 644.1815 | 604.2403 | $\bar{y}_{TaM(1)}^{III}$ | 581.5045 | 623.6977 | 583.3011 |
| $\bar{y}_{TM(2)}^{I}$ | 605.2976 | 650.0379 | 616.9057 | $\bar{y}_{TaM(2)}^{I}$ | 586.9033 | 626.6819 | 597.4703 |
| $\bar{y}_{TM(2)}^{II}$ | 584.1277 | 630.8936 | 605.2285 | $\bar{y}_{TaM(2)}^{II}$ | 568.5709 | 605.1275 | 587.6912 |
| $\bar{y}_{TM(2)}^{III}$ | 626.4675 | 669.1822 | 628.5829 | $\bar{y}_{TaM(2)}^{III}$ | 605.2357 | 648.2363 | 607.2494 |
Table 8. BMI PRE for even samples.

| PRE (MRSS) | | | | PRE (double MRSS) | | | |
| $\hat{\phi}$ | $\bar{y}_{PM}^{I}$ | $\bar{y}_{PM}^{II}$ | $\bar{y}_{PM}^{III}$ | $\hat{\phi}$ | $\bar{y}_{PaM}^{I}$ | $\bar{y}_{PaM}^{II}$ | $\bar{y}_{PaM}^{III}$ |
| $\bar{y}_{TM(1)}^{I}$ | 881.1798 | 925.0273 | 892.5136 | $\bar{y}_{TaM(1)}^{I}$ | 863.1127 | 902.1334 | 873.5022 |
| $\bar{y}_{TM(1)}^{II}$ | 860.0099 | 905.8830 | 880.8364 | $\bar{y}_{TaM(1)}^{II}$ | 844.7803 | 880.5790 | 863.7231 |
| $\bar{y}_{TM(1)}^{III}$ | 902.3497 | 944.1716 | 904.1908 | $\bar{y}_{TaM(1)}^{III}$ | 881.4451 | 923.6878 | 883.2813 |
| $\bar{y}_{TM(2)}^{I}$ | 910.8645 | 955.4416 | 922.3886 | $\bar{y}_{TaM(2)}^{I}$ | 892.4915 | 932.1689 | 903.0573 |
| $\bar{y}_{TM(2)}^{II}$ | 889.6946 | 936.2973 | 910.7114 | $\bar{y}_{TaM(2)}^{II}$ | 874.1591 | 910.6145 | 893.2782 |
| $\bar{y}_{TM(2)}^{III}$ | 932.0344 | 974.5859 | 934.0658 | $\bar{y}_{TaM(2)}^{III}$ | 910.8239 | 953.7233 | 912.8364 |
Back to TopTop