Article

Strong Consistency of Incomplete Functional Percentile Regression

by Mohammed B. Alamari 1, Fatimah A. Almulhim 2,*, Ouahiba Litimein 3 and Boubaker Mechab 3

1 Department of Mathematics, College of Science, King Khalid University, Abha 62529, Saudi Arabia
2 Department of Mathematical Sciences, College of Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3 Laboratory of Statistics and Stochastic Processes, University of Djillali Liabes, BP 89, Sidi Bel Abbes 22000, Algeria
* Author to whom correspondence should be addressed.
Axioms 2024, 13(7), 444; https://doi.org/10.3390/axioms13070444
Submission received: 24 May 2024 / Revised: 18 June 2024 / Accepted: 22 June 2024 / Published: 30 June 2024
(This article belongs to the Special Issue Stochastic Modeling and Its Analysis)

Abstract: This paper analyzes the co-fluctuation between a scalar response variable and a curve regressor using quantile regression. We focus on the situation wherein the output variable is subject to random missingness. For this incomplete functional data setting, we estimate the quantile regression by combining two principal nonparametric methods: the local linear approach (LLA) and the kernel nearest neighbor (KNN) algorithm. We study the asymptotic properties of the constructed estimator by establishing, under general assumptions, its uniform consistency over the number of neighbors. This asymptotic result provides solid mathematical support for the selection of the optimal neighborhood. We examine the feasibility of the constructed estimator using artificially generated data. Moreover, we apply the quantile regression technique to a food quality problem: predicting the riboflavin quantity in yogurt using spectrometry data.
MSC:
62G08; 62G10; 62G35; 62G07; 62G32; 62G30; 62H12

1. Introduction

Often, the impact of a functional explanatory variable on a scalar response variable is modeled using classical regression. The latter can be inappropriate in some situations, especially when the conditional distribution is multi-modal or strongly asymmetric. Alternatively, in this paper, we propose to handle this issue using the conditional quantile, which is commonly used as a risk measure.
Recently, quantile regression has become one of the most explored models in nonparametric statistics. Statistical fitting with the quantile function has been examined in various studies. In particular, when the explanatory variable takes values in a finite-dimensional space, we can refer to [1,2,3] for the α-mixing case. The authors of the last work demonstrated the asymptotic law of the conditional median.
In the functional setting, ref. [4] constructed a quantile estimator as a linear regression using B-splines. The contribution in [5] considered the kernel estimator of the conditional quantile in a nonparametric framework. Under less restrictive conditions, ref. [6] established $L^p$-norm convergence, using a small-ball probability function and the regularity of the conditional density to determine the convergence rate. A new framework was proposed by [7] for the estimation of extreme conditional quantiles with a functional covariate, employing nonparametric modeling techniques. Another estimation method for conditional quantiles, based on the $L^1$ norm, was proposed by [8]; the authors studied the convergence and asymptotic normality of this estimator. For spatially dependent functional variables, ref. [9] established the almost complete convergence (a.co. convergence) of a kernel estimator of the conditional quantile. The authors of [10] studied the LLA for estimating the conditional quantile function. The LLA in functional data analysis has attracted remarkable interest from researchers over the last decade. For instance, ref. [11] introduced the initial version of this method in the Hilbert-space setting and demonstrated the $L^2$ consistency of the LLA regression estimator by analyzing its bias and variance. In the same context, ref. [12] proposed another estimator of the regression function by inverting the covariance operator and derived its asymptotic mean square error. A faster LLA was developed by [13] in a semi-metric space. It is worth mentioning that the LLA improves the efficiency of the kernel approach through bias reduction. In the i.i.d. case, the authors of [14] studied the conditional distribution function, established the a.co. convergence of its estimator, and examined its mean square error. Other studies have focused on the conditional density using the LLA, as presented by [15], who established the a.co. convergence of the estimator and determined its asymptotic properties, including those of the conditional mode. All of these results concern independent data. For spatial functional data, ref. [16] extended the results of [13] by demonstrating the a.co. consistency of the corresponding estimator. In the dependent case, ref. [17] studied the asymptotic normality of the LLA estimator of the conditional density when the observations are α-mixing. For the case in which both the response and the explanatory variable are functional, ref. [18] studied the regression function using the LLA. For independent and identically distributed observations, ref. [19] established the uniform a.co. convergence of the regression estimator.
In parallel, the presence of missing data poses a significant challenge in various applications [20,21]. Missingness arises when some values of a variable are not recorded, for example, when data are gathered from diverse sources, as in medicine, where patients' data are collected differently and routinely between clinics and hospitals. The authors of [22] assigned a value to unobserved points and analyzed the resulting data as if they were complete. Efromovich [23] estimated nonparametric regression functions when the predictors are missing at random (MAR). In the functional data case, the first studies on MAR were presented by [24], who studied the asymptotic consistency of the regression operator when the response is MAR and the curve regressor is fully observed. Ref. [25] constructed a kernel estimator of the regression operator under ergodic data when the response is MAR. Conditional mode estimation was studied for functional ergodic data with responses missing at random in [26]. Furthermore, ref. [27] constructed a kernel estimator of the conditional hazard function when the response variable is MAR. Recently, ref. [28] studied the conditional density, when the regressors are infinite-dimensional and the response is MAR, using the LLA and KNN methods.
The main feature of the present work is a new estimation approach for quantile regression that combines two principal nonparametric methods: the local linear approach (LLA) and the kernel nearest neighbor (KNN) algorithm. Both the LLA and KNN have been widely employed for quantile regression. For instance, the LLA was considered by [29] to perform quantile regression; they constructed two local linear estimators, the first defined through the scoring function of the quantile and the second obtained by inverting a local linear estimator of the conditional distribution. We refer to [30] for the LLA estimation of spatial quantile regression, where the asymptotic normality of the constructed estimator is proven. The KNN estimation of quantile regression was popularized by [31], and we refer to [32] for a survey of KNN estimation of quantile regression. In parallel, the kernel estimation of quantile regression with missing observations was introduced by [33], who proved the asymptotic normality of the constructed estimator when the covariate is real. The incomplete functional case was studied by [34]. While in all of the previous works the quantile regression is estimated by the LLA or by KNN separately, in the present work we combine the LLA with KNN weighting to construct an estimator of the quantile function of a scalar response variable conditioned on a functional explanatory variable. We assume that the response variable is subject to missingness, and we establish the uniform consistency of the constructed estimator over the number of neighbors. KNN weighting offers an adaptive estimator that integrates the vicinity structure around the estimation point, which is very beneficial for functional data. However, the KNN algorithm yields an estimator with a random bandwidth, unlike the kernel estimator with its scalar smoothing parameter; consequently, obtaining the asymptotic properties of the KNN estimator is more challenging than in the traditional case. Moreover, the uniform consistency over the number of neighbors provides an important mathematical basis for selecting the best estimator. The computational ability of the KNN–LLA estimator is demonstrated using artificial and real data.
The remainder of the paper is organized as follows. In Section 2, we define the LLA quantile regression estimator. In Section 3, we state the conditions and the main results. In Section 4, we discuss the applicability of the constructed estimator. The proofs of the intermediate results are given in Appendix A.

2. The Quantile with Regressor and Its Estimators

We assume that $(X_i, Y_i)_{i=1,\dots,n}$ are $n$ pairs of random elements sampled from the pair $(X, Y)$ with values in $\mathcal{F} \times \mathbb{R}$, where $\mathcal{F}$ is a semi-metric space equipped with a semi-metric $d$. Next, for a real number $\alpha \in ]0,1[$ and a point $x$ in $\mathcal{F}$, we define the conditional quantile of order $\alpha$, $Q_\alpha(x)$, as the root of
$$ \min_{t} \mathbb{E}\left[\Xi_\alpha(Y - t) \mid X = x\right], \qquad \Xi_\alpha(y) = y\left(\alpha - \mathbb{1}_{\{y < 0\}}\right), $$
where $\mathbb{1}_A$ is the indicator function of the set $A$.
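As a quick sanity check of this definition, the following minimal R sketch (our own illustration, not the authors' code; the data and function names are hypothetical) verifies numerically that minimizing the empirical check loss $\Xi_\alpha$ recovers the usual $\alpha$-quantile.

```r
# Check (pinball) loss: Xi_alpha(y) = y * (alpha - 1{y < 0})
xi <- function(y, alpha) y * (alpha - (y < 0))

set.seed(1)
y <- rnorm(1e4)       # toy sample standing in for Y given X = x
alpha <- 0.25
# Minimize the empirical check loss over t
fit <- optimize(function(t) mean(xi(y - t, alpha)), interval = range(y))
c(minimizer = fit$minimum, empirical_quantile = unname(quantile(y, alpha)))
```

Both values agree up to sampling error, illustrating that the root of the minimization above is indeed the $\alpha$-quantile.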
It is well known that local linear modeling is based on approximating the statistical parameter to be estimated by a linear function. In the multivariate case, we use a Taylor expansion of this parameter. However, in FDA, there are several ways to extend local linear ideas. In this contribution, we adopt the fast functional local modeling introduced by [13]. In this case, for $z \in \mathcal{N}_x$, a neighborhood of $x$, the operator $Q_\alpha(z)$ is approximated by
$$ Q_\alpha(z) = a + b\,\beta(z, x), $$
where $a$ and $b$ are estimated by $\hat{a}$ and $\hat{b}$, the solution of
$$ \min_{(a,b) \in \mathbb{R}^2} \sum_{i=1}^{n} \Xi_\alpha\left(Y_i - a - b\,\beta(X_i, x)\right) K_e\left(h^{-1} \kappa(x, X_i)\right), $$
where $\beta(\cdot,\cdot)$ is a known operator from $\mathcal{F} \times \mathcal{F}$ into $\mathbb{R}$ such that, for all $\varepsilon \in \mathcal{F}$, $\beta(\varepsilon, \varepsilon) = 0$; $K_e$ is a kernel function; $h := h_{K,n}$ is a positive bandwidth sequence; and $\kappa(\cdot,\cdot)$ is an operator over $\mathcal{F} \times \mathcal{F}$ such that $d(\cdot,\cdot) = |\kappa(\cdot,\cdot)|$. Clearly, we can write
$$ Q_\alpha(x) = a, \qquad \text{so} \qquad \widehat{Q}_\alpha(x) = \hat{a}. $$
Our aim in this work is to study the asymptotic behavior of the KNN estimator of the conditional quantile, obtained by replacing $h$ with the random bandwidth
$$ \lambda_k = \min\left\{ \lambda \in \mathbb{R}^+ \ \text{such that} \ \sum_{i=1}^{n} \mathbb{1}_{B(x,\lambda)}(X_i) = k \right\}, \qquad k \in \mathbb{N}. $$
In practice, the KNN approach is particularly appropriate in FDA because the smoothing parameter is a random variable that strongly adapts to the data: it is defined according to the vicinity of the conditioning point, which allows us to explore both the topological structure and the random component of the data.
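Concretely, $\lambda_k$ is simply the $k$th smallest semi-metric distance to the conditioning point. The following R sketch (an assumption-level illustration; the distances are stand-ins) makes this explicit.

```r
# lambda_k: smallest radius whose ball around x contains exactly k curves.
# d_xi holds the semi-metric distances d(x, X_i), i = 1..n (assumed precomputed).
knn_bandwidth <- function(d_xi, k) sort(d_xi)[k]

set.seed(2)
d_xi <- runif(200)                       # stand-in distances
lambda_k <- knn_bandwidth(d_xi, k = 15)
sum(d_xi <= lambda_k)                    # exactly 15 curves in B(x, lambda_k)
```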
In the missing-response setting, we observe a sample of $(X, Y, D)$, denoted by $\{(X_i, Y_i, D_i) : 1 \le i \le n\}$, where $D_i = 1$ if $Y_i$ is observed and $D_i = 0$ otherwise. Assume that $D$ is a Bernoulli variable such that
$$ P(x) = \mathbb{P}(D = 1 \mid X = x) = \mathbb{P}(D = 1 \mid X = x, Y = y), $$
where $P(\cdot)$ is an unknown operator, namely the probability of observing $Y$ given $X$.
Thus, from the sample $(X_i, Y_i, D_i)_{i=1,\dots,n}$ of $(X, Y, D)$, we estimate $a$ and $b$ by minimizing
$$ \min_{(a,b) \in \mathbb{R}^2} \sum_{i=1}^{n} \Xi_\alpha\left(Y_i - a - b\,\beta(X_i, x)\right) K_e\left(\lambda_k^{-1} \kappa(x, X_i)\right) D_i. $$
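To make the estimator concrete, here is a minimal R sketch of the above minimization at a fixed point $x$. It assumes the locating values $\beta(X_i, x)$ and $\kappa(x, X_i)$ have already been computed from a chosen semi-metric; the toy data are ours, and the nonsmooth check loss is minimized with a generic Nelder–Mead search rather than the authors' routine.

```r
# KNN-LLA quantile fit at a fixed x: minimize the D_i-weighted check loss over (a, b)
xi <- function(y, alpha) y * (alpha - (y < 0))
Ke <- function(t) 1.5 * (1 - t^2) * (t >= 0 & t < 1)   # quadratic kernel on [0, 1)

knn_lla_quantile <- function(Y, beta_i, kappa_i, D, k, alpha) {
  lambda_k <- sort(abs(kappa_i))[k]                 # kNN bandwidth
  w <- Ke(abs(kappa_i) / lambda_k) * D              # kernel weight x missing indicator
  loss <- function(p) sum(w * xi(Y - p[1] - p[2] * beta_i, alpha))
  optim(c(median(Y[D == 1]), 0), loss)$par[1]       # a-hat = estimated Q_alpha(x)
}

set.seed(3)
n <- 300
beta_i <- rnorm(n); kappa_i <- beta_i               # toy locating values at x
Y <- 1 + 2 * beta_i + rnorm(n)                      # so Q_0.5(x) should be near 1
D <- rbinom(n, 1, 0.8)                              # about 20% of responses missing
knn_lla_quantile(Y, beta_i, kappa_i, D, k = 40, alpha = 0.5)
```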

3. Main Results

Our main objective is to establish the almost complete (a.co.) convergence of $\widehat{Q}_\alpha(x)$, uniformly over the number of neighbors $k \in (\lambda_{1,n}, \lambda_{2,n})$.
To accomplish this, we set $\psi_x(r_1, r_2) = \mathbb{P}\left(r_1 \le \kappa(x, X) \le r_2\right)$ and we assume the following hypotheses.
Hypothesis 1 (H1).
For any $r > 0$, $\phi_x(r) := \psi_x(-r, r) > 0$, and there exists a function $\varphi_x(\cdot)$ such that, for all $t \in (-1, 1)$,
$$ \lim_{h \to 0} \frac{\psi_x(th, h)}{\phi_x(h)} = \varphi_x(t). $$
Hypothesis 2 (H2).
There exists $\rho > 0$ such that, for all $(Q_1, Q_2) \in [Q_\alpha(x) - \rho, Q_\alpha(x) + \rho]^2$ and all $z \in \mathcal{N}_x$, we have
$$ \left| F^x(Q_1) - F^z(Q_2) \right| \le C_1 \left( d^{k_2}(x, z) + |Q_1 - Q_2|^{k_1} \right), \quad \text{with } C_1, k_1, k_2 > 0, $$
where $F^x$ denotes the conditional distribution function of $Y$ given $X = x$.
Hypothesis 3 (H3).
The function $\beta(\cdot,\cdot)$ is such that
$$ \text{(i)}\ \forall z \in \mathcal{F},\quad C_2\,|\kappa(x, z)| \le |\beta(z, x)| \le C_3\,|\kappa(x, z)|, \ \text{where } C_2 > 0,\ C_3 > 0; \qquad \text{(ii)}\ \sup_{z \in B(x, r)} \left| \beta(z, x) - \kappa(x, z) \right| = o(r). $$
Hypothesis 4 (H4).
The kernel $K_e(\cdot)$ is a differentiable positive function supported within $(-1, 1)$, such that the matrix
$$ \begin{pmatrix} K_e(1) - \int_{-1}^{1} K_e'(t)\,\varphi_x(t)\,dt & K_e(1) - \int_{-1}^{1} \left(t K_e(t)\right)'\varphi_x(t)\,dt \\ K_e(1) - \int_{-1}^{1} \left(t K_e(t)\right)'\varphi_x(t)\,dt & K_e(1) - \int_{-1}^{1} \left(t^2 K_e(t)\right)'\varphi_x(t)\,dt \end{pmatrix} $$
is positive definite.
Hypothesis 5 (H5).
The class of functions
$$ \mathcal{K} = \left\{ \cdot \mapsto K_e\left(\gamma^{-1} d(x, \cdot)\right),\ \gamma > 0 \right\} $$
is a pointwise measurable class such that
$$ \sup_{P} \int_{0}^{1} \sqrt{1 + \log N\left(\epsilon \|F\|_{P,2},\ \mathcal{K},\ d_P\right)}\, d\epsilon < \infty, $$
where the supremum is taken over all probability measures $P$ with $P(F^2) < \infty$, and $F(\cdot)$ is the envelope function of $\mathcal{K}$. Here, $d_P$ is the $L_2(P)$-metric, $N(\epsilon, \mathcal{K}, d_P)$ is the smallest number of open balls of radius $\epsilon$, with respect to $d_P$, needed to cover $\mathcal{K}$, and $\|\cdot\|_{P,2}$ denotes the $L_2(P)$-norm.
Hypothesis 6 (H6).
The sequences $(\lambda_{1,n})$ and $(\lambda_{2,n})$ satisfy
$$ \phi_x^{-1}\left(\frac{\lambda_{2,n}}{n}\right) \to 0 \quad \text{and} \quad \frac{\log n}{\min\left(n\,\phi_x^{-1}\left(\frac{\lambda_{1,n}}{n}\right),\ \lambda_{1,n}\right)} \to 0. $$
Theorem 1.
Under Hypotheses (H1)–(H6), and if $f^x(Q_\alpha(x)) > 0$, then
$$ \sup_{\lambda_{1,n} \le k \le \lambda_{2,n}} \left| \widehat{Q}_\alpha(x) - Q_\alpha(x) \right| = O\left( \left( \phi_x^{-1}\left( \frac{\lambda_{2,n}}{n} \right) \right)^{\min(k_1, k_2)} \right) + O_{a.co.}\left( \sqrt{\frac{\log n}{\lambda_{1,n}}} \right). $$
Proof of Theorem 1.
We apply the same approach as that of [35]. Specifically, we fix an arbitrary $\alpha_0 \in ]0,1[$ and write
$$ \mathbb{P}\left( \sup_{\lambda_{1,n} \le k \le \lambda_{2,n}} \left| \widehat{Q}_\alpha(x) - Q_\alpha(x) \right| \ge Z_n \right) \le \mathbb{P}\left( \sup_{\lambda_{1,n} \le k \le \lambda_{2,n}} \left| \widehat{Q}_\alpha(x) - Q_\alpha(x) \right| \mathbb{1}_{\left\{ \phi_x^{-1}\left( \frac{\alpha_0 \lambda_{1,n}}{n} \right) \le h_k \le \phi_x^{-1}\left( \frac{\lambda_{2,n}}{n \alpha_0} \right) \right\}} \ge \frac{Z_n}{2} \right) + \mathbb{P}\left( h_k \notin \left[ \phi_x^{-1}\left( \frac{\alpha_0 \lambda_{1,n}}{n} \right),\ \phi_x^{-1}\left( \frac{\lambda_{2,n}}{n \alpha_0} \right) \right] \right), $$
where
$$ Z_n = \epsilon_0 \left( \left( \phi_x^{-1}\left( \frac{\lambda_{2,n}}{n} \right) \right)^{\min(k_1, k_2)} + \sqrt{\frac{\log n}{\lambda_{1,n}}} \right) \quad \text{for some } \epsilon_0 > 0. $$
Using the result of [35], we have
$$ \sum_{n} \mathbb{P}\left( h_k \notin \left[ \phi_x^{-1}\left( \frac{\alpha_0 \lambda_{1,n}}{n} \right),\ \phi_x^{-1}\left( \frac{\lambda_{2,n}}{n \alpha_0} \right) \right] \right) < \infty. $$
Thus, it suffices to show that
$$ \sup_{\phi_x^{-1}\left( \frac{\alpha_0 \lambda_{1,n}}{n} \right) \le h \le \phi_x^{-1}\left( \frac{\lambda_{2,n}}{n \alpha_0} \right)} \left| \widetilde{Q}_\alpha(x) - Q_\alpha(x) \right| = O_{a.co.}(Z_n), $$
where $\widetilde{Q}_\alpha(x)$ is the scalar-bandwidth estimator, obtained by replacing the random bandwidth $\lambda_k$ with a deterministic bandwidth $h$ in the definition of the estimator. For this, we define the operator
$$ \Psi_i(\delta) = \alpha - \mathbb{1}_{\left[ Y_i \le (c + a) + (h^{-1} d + b)\,\beta(X_i, x) \right]}, \qquad \delta = (c, d) \in \mathbb{R}^2. $$
Finally, the claim follows by applying Lemma 1 in [10] to the sequence
$$ V_n(\delta, h) = \frac{1}{n\,\phi_x(h)} \sum_{i=1}^{n} \Psi_i(\delta) \begin{pmatrix} 1 \\ h^{-1} \beta(X_i, x) \end{pmatrix} K_{e_i} D_i, \qquad K_{e_i} := K_e\left(h^{-1} \kappa(x, X_i)\right), \quad \beta_i := \beta(X_i, x), $$
on the bandwidth interval $[S_n, G_n] = \left[ \phi_x^{-1}\left( \frac{\alpha_0 \lambda_{1,n}}{n} \right),\ \phi_x^{-1}\left( \frac{\lambda_{2,n}}{n \alpha_0} \right) \right]$. □

4. Computational Aspects

4.1. Simulation Result

In this section, we evaluate the performance of the constructed estimator over a finite sample. Specifically, we aim to show the ease of application of the KNN estimator, and then we compare it to its competitors: the scalar-bandwidth estimator $\widetilde{Q}_\alpha$ and the KNN local constant estimator
$$ \bar{Q}_\alpha(x) = \inf\left\{ t \in \mathbb{R} : \widehat{F}^x(t) \ge \alpha \right\}, \quad \text{where} \quad \widehat{F}^x(t) = \frac{\sum_{i=1}^{n} D_i\, K_e\left(\lambda_k^{-1} d(x, X_i)\right) \mathbb{1}_{\{Y_i \le t\}}}{\sum_{i=1}^{n} D_i\, K_e\left(\lambda_k^{-1} d(x, X_i)\right)}. $$
For this comparative study, we proceed with the following steps.
Step 1.
We generate a sequence of functional explanatory variables from
$$ X(t) = \sin(W_1 t) + \cos(W_1 t) + 2 W_2, $$
where $W_1$ and $W_2$ both follow $\mathcal{N}(1, 1)$. Each curve $X_i$ is discretized on 100 equispaced grid points in $(-2\pi, 2\pi)$. The functional curves are presented in Figure 1.
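A possible R implementation of this step (a sketch under our own seed and sample-size choices) is as follows.

```r
# Step 1: simulate n curves X_i(t) = sin(W1*t) + cos(W1*t) + 2*W2 on 100 grid
# points in (-2*pi, 2*pi)
set.seed(4)
n  <- 150
tt <- seq(-2 * pi, 2 * pi, length.out = 100)
W1 <- rnorm(n, mean = 1, sd = 1)
W2 <- rnorm(n, mean = 1, sd = 1)
X  <- t(sapply(1:n, function(i) sin(W1[i] * tt) + cos(W1[i] * tt) + 2 * W2[i]))
matplot(tt, t(X), type = "l", lty = 1, xlab = "t", ylab = "X(t)")  # cf. Figure 1
```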
Step 2.
We generate the output variable $Y$ using the functional regression model
$$ Y = \frac{1}{2\pi} \int_{-2\pi}^{2\pi} \frac{X^2(t)}{1 + X^2(t)}\, dt + \epsilon, $$
where $\epsilon$ is independent of $X$ and follows a Laplace distribution. This sampling scheme shows that the law of $Y$ given $X$ is obtained by shifting the distribution of $\epsilon$.
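Continuing the sketch above (it reuses the objects `X`, `tt`, and `n`), the regression integral can be approximated by a Riemann sum on the discretization grid; base R has no Laplace sampler, so we draw the noise by inverting the Laplace distribution function.

```r
# Laplace sampler via the inverse CDF (scale b)
rlaplace <- function(n, b = 1) {
  u <- runif(n) - 0.5
  -b * sign(u) * log(1 - 2 * abs(u))
}

# Step 2: Y = (1/(2*pi)) * int_{-2pi}^{2pi} X^2(t) / (1 + X^2(t)) dt + eps
dt <- diff(tt)[1]                                   # grid spacing
m  <- rowSums((X^2 / (1 + X^2)) * dt) / (2 * pi)    # Riemann approximation
Y  <- m + rlaplace(n)
```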
Next, to connect with the theoretical part of this work, we analyze the behavior of the estimator under various missingness levels. Specifically, we compare the resistance of the estimators $\widehat{Q}_\alpha$, $\widetilde{Q}_\alpha$, and $\bar{Q}_\alpha$ to different missing rates. The missingness is modeled as
$$ P(X) = \operatorname{expit}\left( \frac{2\lambda}{\pi} \int_{-2\pi}^{2\pi} \frac{1}{1 + X^2(t)}\, dt \right), $$
where $\operatorname{expit}(u) = e^u / (1 + e^u)$. This allows us to control the missing rate through the parameter $\lambda$. Typically, we examine three missingness levels (strong, medium, and weak), corresponding to $\lambda = 0.05$, $\lambda = 0.5$, and $\lambda = 5$, respectively. Empirically, the observation rate is evaluated by
$$ \bar{D} = \frac{1}{n} \sum_{i=1}^{n} D_i. $$
For the above values of $\lambda$, we observe about 5% missing observations when $\lambda = 5$, 30% missing observations for $\lambda = 0.5$, and more than 57% missing observations when $\lambda = 0.05$.
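The MAR mechanism and its empirical missing rate can be reproduced along the following lines (again continuing the simulation sketch; the observed rates depend on the simulated curves, so they only roughly match the percentages quoted above).

```r
# P(D = 1 | X) = expit( (2*lambda/pi) * int 1 / (1 + X^2(t)) dt )
expit <- function(u) exp(u) / (1 + exp(u))
mar_prob <- function(X, lambda) expit((2 * lambda / pi) * rowSums((1 / (1 + X^2)) * dt))

for (lambda in c(5, 0.5, 0.05)) {
  D <- rbinom(n, 1, mar_prob(X, lambda))
  cat("lambda =", lambda, "-> empirical missing rate:", mean(1 - D), "\n")
}
```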
Step 3.
We calculate the three estimators $\widehat{Q}_\alpha$, $\widetilde{Q}_\alpha$, and $\bar{Q}_\alpha$. For the practical computation of these estimators, we take $K_e$ to be the quadratic kernel
$$ K_e(t) = \frac{3}{2}\left(1 - t^2\right)\mathbb{1}_{[0,1)}(t), $$
and we choose the locating functions $\delta$ and $\beta$ defined by
$$ \delta(z, x) = \left( \int \left( z^{(2)}(t) - x^{(2)}(t) \right)^2 dt \right)^{1/2} \quad \text{and} \quad \beta(z, x) = \int \theta(t) \left( z^{(2)}(t) - x^{(2)}(t) \right) dt, $$
where $x^{(i)}$ denotes the $i$th derivative of the curve $x$, and $\theta$ denotes the eigenfunction associated with the greatest eigenvalue of the empirical covariance operator
$$ \frac{1}{n} \sum_{j=1}^{n} \left( X_j^{(i)} - \overline{X^{(i)}} \right)^t \left( X_j^{(i)} - \overline{X^{(i)}} \right). $$
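In code, the two locating functions can be approximated on the grid by differencing the curves twice and projecting on the leading eigenvector of the empirical covariance of the second derivatives; this is our own discretized reading of the definitions above (it reuses `X`, `tt`, and `dt` from the simulation sketch).

```r
# Numeric second derivatives of the curves (one row per curve)
d2 <- t(apply(X, 1, function(curve) diff(curve, differences = 2) / dt^2))
# theta: eigenvector of the empirical covariance with the largest eigenvalue
theta <- eigen(cov(d2))$vectors[, 1]

# delta: L2 distance between second derivatives; beta: projection on theta
delta_fn <- function(i, j) sqrt(sum((d2[i, ] - d2[j, ])^2) * dt)
beta_fn  <- function(i, j) sum(theta * (d2[i, ] - d2[j, ])) * dt
```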
The estimators are computed via a straightforward modification of the routine cond.quantile in the R package fda.usc. To highlight the flexibility of our estimation approach, we compare two procedures for selecting the optimal $k$ and the optimal $h$. The first is based on the cross-validated check loss
$$ CV_1 = \frac{1}{n} \sum_{i=1}^{n} \Xi_\alpha\left( Y_i - \ddot{T}_\alpha(X_i) \right) \qquad \text{(rule (1))}, $$
where $\ddot{T}_\alpha$ refers to the leave-one-out version of $\widehat{Q}_\alpha$, $\widetilde{Q}_\alpha$, or $\bar{Q}_\alpha$. The second is based on the mean-square-error cross-validation rule
$$ CV_2 = \frac{1}{n} \sum_{i=1}^{n} \left( Y_i - \ddot{T}_{0.5}(X_i) \right)^2 \qquad \text{(rule (2))}. $$
These rules are optimized over $k \in \{5, 10, 15, \dots, 60\}$ and over $h$ among the $q$-quantiles of the vector of distances between the curves ($q = 0.1, 0.2, 0.3, 0.4, 0.5$).
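A leave-one-out implementation of rule (1) might look as follows; it reuses the check function `xi` from the first sketch, and `q_hat(i, k, alpha)` is a hypothetical helper returning the chosen estimator at $X_i$ computed without the $i$th observation (for instance, built from `knn_lla_quantile` above). Rule (2) is obtained by replacing the check loss with the squared error at $\alpha = 0.5$.

```r
# Rule (1): select k minimizing the leave-one-out check-loss criterion
select_k <- function(Y, k_grid, alpha, q_hat) {
  cv <- sapply(k_grid, function(k)
    mean(sapply(seq_along(Y), function(i) xi(Y[i] - q_hat(i, k, alpha), alpha))))
  k_grid[which.min(cv)]
}

k_grid <- seq(5, 60, by = 5)   # the grid used in the paper
```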
The finite-sample performance of the above estimators is evaluated through the mean absolute error (MAE),
$$ MAE = \frac{1}{n} \sum_{i=1}^{n} \left| Q_\alpha(X_i) - \ddot{T}_\alpha(X_i) \right|. $$
We report the MAE errors of the three estimators in Table 1. This table summarizes the results for different missing levels, different sample sizes, different selectors, and three quartiles, Q 1 , Q 2 , and Q 3 .
Unsurprisingly, KNN–LLA appears to be preferred over the other estimators. In particular, across the various situations, the MAE of KNN–LLA lies within (0.06, 1.12), against (0.09, 3.22) for the fixed-bandwidth LLA. Both estimators are significantly more efficient than the estimator $\bar{Q}_\alpha$, whose MAE ranges within (0.29, 4.88). This result confirms the superiority of the LLA over the local constant method reported in the statistical literature.

4.2. A Real Data Application

In this section, we illustrate the KNN–LLA quantile regression estimator on a real data example. More precisely, we use quantile regression to predict the riboflavin quantity in yogurt from infrared spectral curves. Riboflavin is one of the B vitamins, which are crucial to yogurt's quality. This vitamin is indispensable for the metabolism of proteins, carbohydrates, and fats, and it is one of the most important coenzymes for antibody production, energy metabolism, cell respiration, growth, and development. The riboflavin prediction problem has been studied in many past works (see [36] for some algorithms). The novelty of the present work is the implementation of the functional KNN–LLA model in this forecasting problem; it is well documented that nonparametric tools are well adapted to this functional context. For this illustration, we employ 115 functions X_i(t), which represent light absorption at 120 wavelengths selected between 270 and 550 nm. The data are available at http://www.models.life.ku.dk/Yogurt (accessed on 31 December 2023). The functional data are plotted in Figure 2.
Furthermore, the output variable $Y_i$ is the quantity of riboflavin, giving 115 observations of $(X_i, Y_i)_{i=1,\dots,115}$. We remove some observations from these data and compare two strategies to handle the missingness. The first is based on the ideas developed in the theoretical part of this work, where $Y_i$ is predicted by $\widehat{Q}_\alpha(X_i)$. In the second strategy, we replace a missing observation $Y_{i_0}$ with
$$ \overline{Y_{i_0}} = \frac{1}{k} \sum_{i=1}^{k} Y_i^{i_0}, $$
where $(Y_i^{i_0})_{i=1,\dots,k}$ are the $k$ observed response variables corresponding to the KNN explanatory observations at $X_{i_0}$. Furthermore, we use the same kernel and keep the same selectors to choose the smoothing parameter. However, the smoothness of the spectrometry curves depends on the choice of the metric $d$, as well as the locating functions $\delta$ and $\beta$. From Figure 2, we see that the curves have a discontinuous form; thus, the PCA metric is more suitable for this situation, and we proceed with $d = \delta = \beta$ built from the PCA semi-metric. Finally, for this comparative study, we randomly split the data 70 times into a learning sample (90%) and a testing sample (10%), and we compute
$$ MSE = \frac{1}{n} \sum_{i=1}^{n} \left( Y_i - \widehat{Q}_{0.5}(X_i) \right)^2 $$
for both strategies. In the following figures, we give the boxplots for two cases: a moderate missingness case (only 5% of observations removed) and a strong missingness case (45% of observations removed).
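Before discussing the results, the imputation step of strategy 2 can be sketched in a few lines of R (a hypothetical helper of ours; `d_i0` is the vector of semi-metric distances from the incomplete curve $X_{i_0}$ to all curves).

```r
# Strategy 2: impute a missing Y_{i0} by the mean of the k observed responses
# whose curves are closest to X_{i0}
impute_knn_mean <- function(Y, D, d_i0, k) {
  obs <- which(D == 1)                    # indices with observed responses
  nearest <- obs[order(d_i0[obs])[1:k]]   # k nearest observed neighbors
  mean(Y[nearest])
}
```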
Clearly, strategy 1 has significant advantages over strategy 2 (see Figure 3 and Figure 4). This conclusion reflects the statement made by [23], namely that the best way to handle missingness is to ignore the missing observations rather than impute them. Indeed, the second strategy is based on double estimation, which can introduce additional errors. Finally, we point out that the KNN estimation method is very easy to compute and fast to execute.
In the second illustration, we compare quantile regression to classical regression, both estimated via the KNN–LLA approach in the functional MAR case using the same ideas. In this sense, the KNN–LLA estimator $\widehat{R}(x)$ of the classical regression $R(x) = \mathbb{E}[Y \mid X = x]$ is defined by
$$ (\hat{a}, \hat{b}) = \arg\min_{(a,b) \in \mathbb{R}^2} \sum_{i=1}^{n} \left( Y_i - a - b\,\beta(X_i, x) \right)^2 K_e\left( h_k^{-1} \kappa(x, X_i) \right), \qquad \widehat{R}(x) = \hat{a}. $$
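Because the loss here is quadratic, this estimator has a closed-form weighted-least-squares solution; the following R sketch (our illustration, mirroring `knn_lla_quantile` above) computes $\widehat{R}(x)$ as the intercept of a kernel-weighted linear fit.

```r
Ke <- function(t) 1.5 * (1 - t^2) * (t >= 0 & t < 1)  # same quadratic kernel

# KNN-LLA classical regression at x: R-hat(x) is the weighted LS intercept
knn_lla_mean <- function(Y, beta_i, kappa_i, k) {
  lambda_k <- sort(abs(kappa_i))[k]          # kNN bandwidth
  w <- Ke(abs(kappa_i) / lambda_k)           # kernel weights
  keep <- w > 0
  unname(coef(lm(Y[keep] ~ beta_i[keep], weights = w[keep]))[1])  # a-hat
}
```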
In particular, we compare it to median regression, which corresponds to $\widehat{Q}_{0.5}(x)$. We follow the same algorithm as in the first part, where strategy 1 is employed to handle the missing observations. We use the same kernel, the same locating functions, and the same selector for the optimal number of neighbors. Now, in order to highlight the importance of median regression over classical regression, we compare their resistance to outliers in the response. To do this, we randomly multiply some observations by a varying constant $C$ and compare the mean absolute errors
$$ MAE_{quantile} = \frac{1}{n} \sum_{i=1}^{n} \left| Y_i - \widehat{Q}_{0.5}(X_i) \right| \quad \text{and} \quad MAE_{regression} = \frac{1}{n} \sum_{i=1}^{n} \left| Y_i - \widehat{R}(X_i) \right|. $$
We obtain the following result.
The results in Table 2 confirm the superiority of quantile regression over classical regression, namely through its robustness. Median regression is clearly more robust than classical regression: the variability of the MAE with respect to the multiplicative constant C is much greater for classical regression than for the median.

5. Conclusions

This contribution focuses on the distribution-free estimation of the quantile function. Based on the KNN–LLA approach, we construct a new estimator, and its asymptotic properties are established under a missing-data structure. The applicability of this approach is checked through artificial data. This analysis corroborates the theoretical part: the performance of the estimator is affected by the missingness level. One of the main features of the KNN–LLA approach is the possibility of deriving the consistency of the estimator when the smoothing parameter is a random variable, which allows us to diversify the selection procedures for the best number of neighbors. The present work also opens a number of important directions for the future. The first is the establishment of the limit law of the KNN–LLA estimator in the independent or dependent case. Generalizing the present results to functional time series or spatial curve data constitutes an interesting direction too; this would enhance the practical relevance of the KNN–LLA estimator in quantile regression and increase the applicability of this model to important real-world problems, such as economic forecasting, climate modeling, or medical statistics. In addition, alternative functional models and/or other smoothing methods could be considered, such as the single index model, the polynomial LLA, or the recursive algorithm, among others.

Author Contributions

The authors contributed approximately equally to this work. Formal analysis, O.L. and B.M.; validation, F.A.A.; writing—review and editing, M.B.A. All authors have read and agreed to the final version of the manuscript.

Funding

This work was supported by: (1) Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R515), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; (2) the Deanship of Scientific Research at King Khalid University through the Research Groups Program under grant number R.G.P. 1/128/45.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study are available through the link http://www.models.life.ku.dk/Yogurt (accessed on 31 December 2023).

Acknowledgments

The authors thank and extend their appreciation to the funders of this project: (1) Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R515), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; (2) the Deanship of Scientific Research at King Khalid University through the Research Groups Program under grant number R.G.P. 1/128/45.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Lemma A1.
Under Hypotheses (H1)–(H6), we have
$$ \sup_{S_n \le h \le G_n} \left\| A_n(h) \right\| = O\left( \left( \phi_x^{-1}\left( \frac{\lambda_{2,n}}{n} \right) \right)^{\min(k_1, k_2)} \right) + O_{a.co.}\left( \sqrt{\frac{\log n}{\lambda_{1,n}}} \right), $$
where $A_n(h) = V_n(\mathbf{0}, h)$, with $\mathbf{0} = (0, 0)$.
Proof of Lemma A1.
For each $\iota = 0, 1$, we decompose $A_n^\iota(\cdot)$ into dispersion and bias terms as
$$ A_n^\iota(h) = \left( A_n^\iota(h) - \mathbb{E}A_n^\iota(h) \right) + \mathbb{E}A_n^\iota(h). $$
The dispersion part is assessed using Bernstein's inequality for the empirical process. For this, we set $h_{K,j} = 2^j S_n$ and $B(n) = \max\{ j : h_{K,j} \le 2 G_n \}$, for which
$$ \sup_{S_n \le h \le G_n} \sqrt{\frac{\alpha_0 \lambda_{1,n}}{\log n}} \left| A_n^\iota(h) - \mathbb{E}A_n^\iota(h) \right| \le \max_{1 \le j \le B(n)} \ \sup_{h_{K,j-1} \le h \le h_{K,j}} \sqrt{\frac{n\,\phi_x(h)}{\log n}} \left| A_n^\iota(h) - \mathbb{E}A_n^\iota(h) \right|. $$
We define
$$ \varpi_n(K, \iota) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left[ \Psi_i(0, 0)\, h^{-\iota} \beta_i^\iota\, K_{e_i} D_i - \mathbb{E}\left( \Psi_i(0, 0)\, h^{-\iota} \beta_i^\iota\, K_{e_i} D_i \right) \right] $$
and we write
$$ A_n^\iota(h) - \mathbb{E}A_n^\iota(h) = \frac{1}{\sqrt{n}\,\phi_x(h)}\, \varpi_n(K, \iota). $$
Now, we evaluate the class of functions
$$ \mathcal{M}_{K,j,\iota} = \left\{ (z, y) \mapsto \left( \alpha - \mathbb{1}_{\left[ y \le a + b\,\beta(z, x) \right]} \right) \gamma^{-\iota} \beta^\iota(z, x)\, K_e\left( \gamma^{-1} \kappa(x, z) \right),\ h_{K,j-1} \le \gamma \le h_{K,j} \right\}. $$
Evidently, this class of functions admits an envelope function $D(\cdot,\cdot)$ with
$$ D(z, y) \le C\, \mathbb{1}_{B(x,\, h_{K,j}/2)}(z). $$
From the first part of (A5), we obtain
$$ \mathbb{E}\left[ D^p(X, Y) \right] \le C_p\, \phi_x(h_{K,j}/2). $$
Therefore,
$$ \sigma^2 = O\left( \phi_x(h_{K,j}/2) \right). $$
Concerning the moment term, we have
$$ \mathbb{E}\left\| \sqrt{n}\, \varpi_n(K, \iota) \right\|_{\mathcal{M}_{K,j,\iota}} \le C \sqrt{n\, \phi_x(h_{K,j}/2)}. $$
Thus,
$$ \mathbb{P}\left( \sup_{S_n \le h \le G_n} \sqrt{\frac{n\,\phi_x(h)}{\log n}} \left| A_n^\iota(h) - \mathbb{E}A_n^\iota(h) \right| \ge \eta_0 \right) \le \sum_{j=1}^{B(n)} \mathbb{P}\left( \frac{1}{\sqrt{n\,\phi_x(h_{K,j}/2)\,\log n}} \left\| \sqrt{n}\,\varpi_n(K, \iota) \right\|_{\mathcal{M}_{K,j,\iota}} \ge \eta_0 \right) \le B(n)\, n^{-C \eta_0^2}. $$
Therefore,
$$ \sup_{S_n \le h \le G_n} \left| A_n^\iota(h) - \mathbb{E}A_n^\iota(h) \right| = O_{a.co.}\left( \sqrt{\frac{\log n}{\lambda_{1,n}}} \right). $$
To establish the bias part, we first note that
$$ A_n^1(h) - \mathbb{E}A_n^1(h) = \frac{1}{n\,\phi_x(h)} \sum_{i=1}^{n} \Omega_i^1, \qquad A_n^2(h) - \mathbb{E}A_n^2(h) = \frac{1}{n h\,\phi_x(h)} \sum_{i=1}^{n} \Omega_i^2, $$
with
$$ \Omega_i^1 = \left( \alpha - \mathbb{1}_{\{ Y_i \le a + b \beta_i \}} \right) K_{e_i} D_i - \mathbb{E}\left[ \left( \alpha - \mathbb{1}_{\{ Y_i \le a + b \beta_i \}} \right) K_{e_i} D_i \right] $$
and
$$ \Omega_i^2 = \left( \alpha - \mathbb{1}_{\{ Y_i \le a + b \beta_i \}} \right) \beta_i K_{e_i} D_i - \mathbb{E}\left[ \left( \alpha - \mathbb{1}_{\{ Y_i \le a + b \beta_i \}} \right) \beta_i K_{e_i} D_i \right]. $$
On the other hand, we have
$$ \mathbb{E}A_n^1 = \frac{1}{\phi_x(h)} \mathbb{E}\left[ \left( \alpha - \mathbb{1}_{\{ Y_1 \le a + b \beta_1 \}} \right) K_{e_1} D_1 \right] = \frac{P(x)}{\phi_x(h)} \mathbb{E}\left[ \left( F^x(Q_\alpha(x)) - F^{X_1}(a + b \beta_1) \right) K_{e_1} \right] \le C\, h^{\min(k_1, k_2)}. $$
Hence, uniformly in $h$, we have
$$ \sup_{S_n \le h \le G_n} \left| \mathbb{E}A_n^1(h) \right| \le C \left( \phi_x^{-1}\left( \frac{\lambda_{2,n}}{n\,\alpha_0} \right) \right)^{\min(k_1, k_2)}. $$
Similarly,
$$ \mathbb{E}A_n^2 = \frac{1}{h\,\phi_x(h)} \mathbb{E}\left[ \left( \alpha - \mathbb{1}_{\{ Y_1 \le a + b \beta_1 \}} \right) \beta_1 K_{e_1} D_1 \right] = \frac{P(x)}{h\,\phi_x(h)} \mathbb{E}\left[ \left( F^x(Q_\alpha(x)) - F^{X_1}(a + b \beta_1) \right) \beta_1 K_{e_1} \right] \le C\, h^{\min(k_1, k_2)} $$
and
$$ \sup_{S_n \le h \le G_n} \left| \mathbb{E}A_n^2(h) \right| \le C \left( \phi_x^{-1}\left( \frac{\lambda_{2,n}}{n\,\alpha_0} \right) \right)^{\min(k_1, k_2)}. \qquad \square $$
Lemma A2.
Under Hypotheses (H1)–(H6), we have
$$ \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| V_n(\delta, h) - A_n(h) + f^x(Q_\alpha(x))\, \mathbf{D}\, \delta \right\| = o_{a.co.}(1), $$
where $\mathbf{D}$ is the positive definite matrix defined in (H4) and
$$ f^x(Q_\alpha(x)) = \partial_t F_1(x, Q_\alpha(x)) + \partial_t F_2(x, Q_\alpha(x)). $$
Proof of Lemma A2.
Under Hypotheses (H1)–(H6), we prove that
$$ \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| \mathbb{E}\left[ V_n(\delta, h) - A_n(h) \right] + f^x(Q_\alpha(x))\, \mathbf{D}\, \delta \right\| = O\left( \left( \phi_x^{-1}\left( \frac{\lambda_{2,n}}{n} \right) \right)^{\min(k_1, k_2)} \right) \tag{A7} $$
and
$$ \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| V_n(\delta, h) - A_n(h) - \mathbb{E}\left[ V_n(\delta, h) - A_n(h) \right] \right\| = O_{a.co.}\left( \sqrt{\frac{\log n}{\lambda_{1,n}}} \right). \tag{A8} $$
Concerning the term (A7), we write
$$ V_n(\delta, h) - A_n(h) = \begin{pmatrix} Z_n^1(\delta, h) \\ Z_n^2(\delta, h) \end{pmatrix}, $$
where
$$ Z_n^1(\delta, h) = \frac{1}{n\,\phi_x(h)} \sum_{i=1}^{n} \left( \Psi_i(\delta) - \Psi_i(\mathbf{0}) \right) K_{e_i} D_i $$
and
$$ Z_n^2(\delta, h) = \frac{1}{n h\,\phi_x(h)} \sum_{i=1}^{n} \left( \Psi_i(\delta) - \Psi_i(\mathbf{0}) \right) \beta_i K_{e_i} D_i. $$
On the other hand, we have
$$ \mathbb{E}\left[ Z_n^1(\delta, h) \right] = \frac{1}{\phi_x(h)} \mathbb{E}\left[ \left( \mathbb{1}_{\left\{ Y_1 \le (c + a) + (h^{-1} d + b) \beta_1 \right\}} - \mathbb{1}_{\left\{ Y_1 \le a + b \beta_1 \right\}} \right) K_{e_1} D_1 \right] = \frac{P(x)}{\phi_x(h)} \mathbb{E}\left[ \left( F^{X_1}\left( (c + a) + (h^{-1} d + b) \beta_1 \right) - F^{X_1}(a + b \beta_1) \right) K_{e_1} \right] = \frac{P(x)}{\phi_x(h)} \mathbb{E}\left[ f^x(a + b \beta_1) \left( 1,\ h^{-1} \beta_1 \right) \delta\, K_{e_1} \right] + O\left( G_n^{\min(k_1, k_2)} \right) + o(\|\delta\|) = f^x(Q_\alpha(x))\, \frac{P(x)}{\phi_x(h)} \left( \mathbb{E}\left[ K_{e_1} \right],\ h^{-1} \mathbb{E}\left[ \beta_1 K_{e_1} \right] \right) \delta + O\left( G_n^{\min(k_1, k_2)} \right) + o(\|\delta\|). $$
Similarly,
$$ \mathbb{E}\left[ Z_n^2(\delta, h) \right] = f^x(Q_\alpha(x))\, \frac{P(x)}{h\,\phi_x(h)} \left( \mathbb{E}\left[ \beta_1 K_{e_1} \right],\ h^{-1} \mathbb{E}\left[ \beta_1^2 K_{e_1} \right] \right) \delta + O\left( G_n^{\min(k_1, k_2)} \right) + o(\|\delta\|). $$
It follows that, for $S_n \le h \le G_n$,
$$ \mathbb{E}\left[ V_n(\delta, h) - A_n(h) \right] = -f^x(Q_\alpha(x))\, \frac{P(x)}{\phi_x(h)} \begin{pmatrix} \mathbb{E}\left[ K_{e_1} \right] & \mathbb{E}\left[ h^{-1} \beta_1 K_{e_1} \right] \\ \mathbb{E}\left[ h^{-1} \beta_1 K_{e_1} \right] & \mathbb{E}\left[ h^{-2} \beta_1^2 K_{e_1} \right] \end{pmatrix} \delta + O\left( G_n^{\min(k_1, k_2)} \right) + o(\|\delta\|). $$
Using the same ideas as in [15], we show that, for $a, b \in \{0, 1, 2\}$,
$$ h^{-a}\, \mathbb{E}\left[ \beta_1^a K_{e_1}^b \right] = \phi_x(h) \left( K_e^b(1) - \int_{-1}^{1} \left( u^a K_e^b(u) \right)' \varphi_x(u)\, du \right) + o(\phi_x(h)). $$
Hence,
$$ \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| \mathbb{E}\left[ V_n(\delta, h) - A_n(h) \right] + f^x(Q_\alpha(x))\, \mathbf{D}\, \delta \right\| = O\left( \left( \phi_x^{-1}\left( \frac{\lambda_{2,n}}{n} \right) \right)^{\min(k_1, k_2)} \right) + o(\|\delta\|), $$
which implies the result (A7).
Concerning (A8), we use the covering
$$ \mathcal{B}(0, M) \subset \bigcup_{j=1}^{d_n} \mathcal{B}(\delta_j, l_n), \qquad \delta_j = (c_j, d_j) \quad \text{and} \quad l_n = d_n^{-1} = 1/\sqrt{n}. $$
Then, we take $j(\delta) = \arg\min_j \|\delta - \delta_j\|$ and we use the decomposition
$$ \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| V_n(\delta, h) - A_n(h) - \mathbb{E}\left[ V_n(\delta, h) - A_n(h) \right] \right\| \le \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| V_n(\delta, h) - V_n(\delta_{j(\delta)}, h) \right\| + \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| V_n(\delta_{j(\delta)}, h) - A_n(h) - \mathbb{E}\left[ V_n(\delta_{j(\delta)}, h) - A_n(h) \right] \right\| + \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| \mathbb{E}\left[ V_n(\delta, h) - V_n(\delta_{j(\delta)}, h) \right] \right\|. $$
Since
$$ \left| \mathbb{1}_{\{ Y < a \}} - \mathbb{1}_{\{ Y < b \}} \right| \le \mathbb{1}_{\{ |Y - b| \le |a - b| \}}, $$
we have
$$ \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| V_n(\delta, h) - V_n(\delta_{j(\delta)}, h) \right\| \le \frac{1}{n\,\phi_x(h)} \sum_{i=1}^{n} \xi_i D_i, $$
where $\xi_i$ is controlled by the following empirical terms,
$$ \varkappa_n^\iota = \frac{1}{n} \sum_{i=1}^{n} \xi_{\iota i} D_i, \qquad \iota = 1, 2, 3, $$
with
$$ \xi_{1i} = \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \mathbb{1}_{\left\{ \left| Y_i - (c_j + a) - (h^{-1} d_j + b)\,\beta_i \right| \le C\, l_n \right\}} \left| h^{-1} \beta_i \right| K_{e_i}, $$
$$ \xi_{2i} = \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \mathbb{1}_{\left\{ \left| Y_i - (c_j + a) - (h^{-1} d_j + b)\,\beta_i \right| \le C\, l_n \right\}} \left| h^{-1} \beta_i \right| K_{e_i}\, |Y_i|, $$
$$ \xi_{3i} = l_n \left| h^{-1} \beta_i \right| K_{e_i}. $$
For the three empirical processes $\varkappa_n^1$, $\varkappa_n^2$, and $\varkappa_n^3$, we define, respectively, the following classes of functions:
$$ \mathcal{W}_{K,j,1} = \left\{ (z, y) \mapsto \mathbb{1}_{\left\{ \left| y - (c_j + a) - (h^{-1} d_j + b)\,\beta(z, x) \right| \le C\, l_n \right\}} \left| h^{-1} \beta(z, x) \right| K_e\left( \gamma^{-1} \kappa(x, z) \right),\ h_{K,j-1} \le \gamma \le h_{K,j} \right\}, $$
$$ \mathcal{W}_{K,j,2} = \left\{ (z, y) \mapsto \mathbb{1}_{\left\{ \left| y - (c_j + a) - (h^{-1} d_j + b)\,\beta(z, x) \right| \le C\, l_n \right\}} \left| h^{-1} \beta(z, x) \right| K_e\left( \gamma^{-1} \kappa(x, z) \right) |y|,\ h_{K,j-1} \le \gamma \le h_{K,j} \right\}, $$
and
$$ \mathcal{W}_{K,j,3} = \left\{ z \mapsto l_n \left| h^{-1} \beta(z, x) \right| K_e\left( \gamma^{-1} \kappa(x, z) \right),\ h_{K,j-1} \le \gamma \le h_{K,j} \right\}. $$
These classes admit, respectively, the envelope functions $P_1(\cdot,\cdot)$, $P_2(\cdot,\cdot)$, and $P_3(\cdot)$, such that
$$ P_1(z, y) \le C \sup_{\delta \in \mathcal{B}(0, M)} \mathbb{1}_{\left\{ \left| y - (c_j + a) - (h^{-1} d_j + b)\,\beta(z, x) \right| \le C\, l_n \right\}} \mathbb{1}_{B(x,\, h_{K,j}/2)}(z), $$
$$ P_2(z, y) \le C \sup_{\delta \in \mathcal{B}(0, M)} \mathbb{1}_{\left\{ \left| y - (c_j + a) - (h^{-1} d_j + b)\,\beta(z, x) \right| \le C\, l_n \right\}} |y|\, \mathbb{1}_{B(x,\, h_{K,j}/2)}(z), $$
and
$$ P_3(z) \le C\, l_n\, \mathbb{1}_{B(x,\, h_{K,j}/2)}(z). $$
For the rest of the proof, we apply Bernstein's inequality to $\varkappa_n^\iota$, $\iota = 1, 2, 3$, using
$$ \sigma^2 = O\left( \phi_x(h_{K,j}/2) \right) \quad \text{and} \quad \mathbb{E}\left\| \sqrt{n}\,\varkappa_n^\iota \right\| = O\left( \sqrt{n\,\phi_x(h_{K,j}/2)} \right); $$
then
$$ \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| V_n(\delta, h) - V_n(\delta_{j(\delta)}, h) \right\| = O_{a.co.}\left( \sqrt{\frac{\log n}{\lambda_{1,n}}} \right). $$
On the other hand, since
$$ \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| \mathbb{E}\left[ V_n(\delta, h) - V_n(\delta_{j(\delta)}, h) \right] \right\| \le \mathbb{E}\left[ \frac{1}{n\,\phi_x(h)}\, \xi_n(K) \right] \le C\, l_n $$
and
$$ l_n = o\left( \sqrt{\frac{\log n}{n\,\phi_x(h)}} \right), $$
we obtain
$$ \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| \mathbb{E}\left[ V_n(\delta, h) - V_n(\delta_{j(\delta)}, h) \right] \right\| = O\left( \sqrt{\frac{\log n}{n\,\phi_x(h)}} \right). $$
Now, all that remains is to examine
$$ I_2 = \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \left\| V_n(\delta_{j(\delta)}, h) - A_n(h) - \mathbb{E}\left[ V_n(\delta_{j(\delta)}, h) - A_n(h) \right] \right\|. $$
To do this, we write
$$ V_n(\delta_j, h) - A_n(h) - \mathbb{E}\left[ V_n(\delta_j, h) - A_n(h) \right] = \frac{1}{\sqrt{n}\,\phi_x(h)}\, \xi_n(K, j, \iota), $$
with
$$ \xi_n(K, j, \iota) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \Theta_{i,j}^\iota, $$
where
$$ \Theta_{i,j}^\iota := \left( \Psi_i(c_j, d_j) - \Psi_i(0, 0) \right) h^{-\iota} \beta_i^\iota\, K_{e_i} D_i - \mathbb{E}\left[ \left( \Psi_i(c_j, d_j) - \Psi_i(0, 0) \right) h^{-\iota} \beta_i^\iota\, K_{e_i} D_i \right]. $$
To apply Bernstein's inequality to $\xi_n(K, j, \iota)$, we use the class of functions
$$ \mathcal{W}_{K,j,\iota} = \left\{ (z, y) \mapsto \left( \Psi(c_j, d_j; z, y) - \Psi(0, 0; z, y) \right) \gamma^{-\iota} \beta^\iota(z, x)\, K_e\left( \gamma^{-1} \kappa(x, z) \right),\ h_{K,j-1} \le \gamma \le h_{K,j} \right\}. $$
Evidently, this class of functions admits an envelope function $R(\cdot,\cdot)$ such that
$$ R(z, y) \le C\, \mathbb{1}_{B(x,\, h_{K,j}/2)}(z). $$
Thus, for
$$ \sigma^2 = O\left( \phi_x(h_{K,j}/2) \right) \quad \text{and} \quad \mathbb{E}\left\| \xi_n(K, j, \iota) \right\| = O\left( \sqrt{n\,\phi_x(h_{K,j}/2)} \right), $$
we show that
$$ \mathbb{P}\left( \sup_{S_n \le h \le G_n} \ \sup_{\delta \in \mathcal{B}(0, M)} \sqrt{\frac{n\,\phi_x(h)}{\log n}} \left\| V_n(\delta_j, h) - A_n(h) - \mathbb{E}\left[ V_n(\delta_j, h) - A_n(h) \right] \right\| \ge \eta \right) \le d_n \max_{j} \mathbb{P}\left( \sqrt{\frac{n\,\phi_x(h)}{\log n}} \left\| V_n(\delta_j, h) - A_n(h) - \mathbb{E}\left[ V_n(\delta_j, h) - A_n(h) \right] \right\| \ge \eta \right) \le C\, n^{-C \eta^2 + 1/2}. $$
Finally, a suitable choice of $\eta > 0$ completes the proof of Lemma A2. □

References

  1. Samanta, M. Nonparametric estimation of conditional quantiles. Statist. Probab. Lett. 1989, 7, 407–412. [Google Scholar] [CrossRef]
  2. Gannoun, A.; Saracco, J.; Yu, K. Nonparametric prediction by conditional median and quantiles. J. Statist. Plann. Inference 2003, 117, 207–223. [Google Scholar] [CrossRef]
  3. Zhou, Y.; Liang, H. Asymptotic normality for L1 norm kernel estimator of conditional median under α-mixing dependence. J. Multivar. Anal. 2000, 73, 136–154. [Google Scholar] [CrossRef]
  4. Cardot, H.; Crambes, C.; Sarda, P. Estimation spline de quantiles conditionnels pour variables explicatives fonctionnelles. C. R. Math. Acad. Sci. Paris 2004, 339, 141–144. [Google Scholar] [CrossRef]
  5. Ferraty, F.; Laksaci, A.; Vieu, P. Estimating some characteristics of the conditional distribution in nonparametric functional models. Stat. Inference Stoch. Process. 2006, 9, 47–76. [Google Scholar] [CrossRef]
  6. Dabo-Niang, S.; Laksaci, A. Estimation non paramétrique de mode conditionnel pour variable explicative fonctionnelle. C. R. Math. Acad. Sci. Paris 2007, 344, 49–52. [Google Scholar] [CrossRef]
  7. He, F.Y.; Cheng, Y.B.; Tong, T.J. Nonparametric estimation of extreme conditional quantiles with functional covariate. Acta Math. Sin. (Engl. Ser.) 2018, 34, 1589–1610. [Google Scholar] [CrossRef]
  8. Laksaci, A.; Lemdani, M.; Ould Saïd, E. A generalized l1-approach for a Kernel estimator of conditional quantile with functional regressors: Consistency and asymptotic normality. Stat. Probab. Lett. 2009, 79, 1065–1073. [Google Scholar] [CrossRef]
  9. Laksaci, A.; Maref, F. Estimation non paramétrique de quantiles conditionnels pour des variables fonctionnelles spatialement dépendantes. Comptes Rendus Math. 2009, 347, 1075–1080. [Google Scholar] [CrossRef]
  10. Al-Awadhi, F.A.; Kaid, Z.; Laksaci, A.; Ouassou, I.; Rachdi, M. Functional data analysis: Local linear estimation of the L1-conditional quantiles. Stat. Methods Appl. 2019, 28, 217–240. [Google Scholar] [CrossRef]
  11. Baíllo, A.; Grané, A. Local linear regression for functional predictor and scalar response. J. Multivar. Anal. 2009, 100, 102–111. [Google Scholar] [CrossRef]
  12. Berlinet, A.; Elamine, A.; Mas, A. Local linear regression for functional data. Ann. Inst. Stat. Math. 2011, 63, 1047–1075. [Google Scholar] [CrossRef]
  13. Barrientos, J.; Ferraty, F.; Vieu, P. Locally Modelled Regression and Functional Data. J. Nonparametr. Statist. 2010, 22, 617–632. [Google Scholar] [CrossRef]
  14. Demongeot, J.; Laksaci, A.; Madani, F.; Rachdi, M. Functional data: Local linear estimation of the conditional density and its application. Statistics 2013, 47, 26–44. [Google Scholar] [CrossRef]
  15. Demongeot, J.; Laksaci, A.; Rachdi, M.; Rahmani, S. On the local linear modelization of the conditional distribution for functional data. Sankhya A 2014, 76, 328–355. [Google Scholar] [CrossRef]
  16. Chouaf, A.; Laksaci, A. On the functional local linear estimate for spatial regression. Stat. Risk Model. 2012, 29, 189–214. [Google Scholar] [CrossRef]
  17. Xiong, X.; Zhou, P.; Ailian, C. Asymptotic normality of the local linear estimation of the conditional density for functional time-series data. Commun. Stat. Theory Methods 2018, 47, 3418–3440. [Google Scholar] [CrossRef]
  18. Demongeot, J.; Naceri, A.; Laksaci, A.; Rachdi, M. Local linear regression modelization when all variables are curves. Stat. Probab. Lett. 2017, 121, 37–44. [Google Scholar] [CrossRef]
  19. Chahad, A.; Ait-Hennani, L.; Laksaci, A. Functional local linear estimate for functional relative-error regression. J. Stat. Theory Pract. 2017, 11, 771–789. [Google Scholar] [CrossRef]
  20. Rubin, D.B. Inference and missing data. Biometrika 1976, 63, 581–592. [Google Scholar] [CrossRef]
  21. Josse, J.; Reiter, J.P. Introduction to the special section on missing data. Stat. Sci. 2018, 33, 139–141. [Google Scholar] [CrossRef]
  22. Little, R.; Rubin, D. Statistical Analysis with Missing Data, 2nd ed.; John Wiley: New York, NY, USA, 2002. [Google Scholar]
  23. Efromovich, S. Nonparametric Regression with Predictors Missing at Random. J. Am. Stat. Assoc. 2011, 106, 306–319. [Google Scholar] [CrossRef]
  24. Ferraty, F.; Sued, M.; Vieu, P. Mean estimation with data missing at random for functional covariables. Statistics 2013, 47, 688–706. [Google Scholar] [CrossRef]
  25. Ling, N.; Liang, L.; Vieu, P. Nonparametric regression estimation for functional stationary ergodic data with missing at random. J. Statist. Plann. Inference 2015, 162, 75–87. [Google Scholar] [CrossRef]
  26. Ling, N.; Liu, Y.; Vieu, P. Conditional mode estimation for functional stationary ergodic data with responses missing at random. Statistics 2016, 50, 991–1013. [Google Scholar] [CrossRef]
  27. Bachir Bouiadjra, H. Conditional hazard function estimate for functional data with missing at random. Int. J. Stat. Econ. 2017, 18, 45–58. [Google Scholar]
  28. Almanjahie, I.M.; Mesfer, W.; Laksaci, A. The K nearest neighbors local linear estimator of functional conditional density when there are missing data. Hacet. J. Math. Stat. 2022, 51, 914–931. [Google Scholar] [CrossRef]
  29. Yu, K.; Jones, M. Local linear quantile regression. J. Am. Stat. Assoc. 1998, 93, 228–237. [Google Scholar] [CrossRef]
  30. Hallin, M.; Lu, Z.; Yu, K. Local linear spatial quantile regression. Bernoulli 2009, 15, 659–686. [Google Scholar] [CrossRef]
  31. Bhattacharya, P.K.; Gangopadhyay, A.K. Kernel and nearest-neighbor estimation of a conditional quantile. Ann. Stat. 1990, 18, 1400–1415. [Google Scholar] [CrossRef]
  32. Ma, X.; He, X.; Shi, X. A variant of K nearest neighbor quantile regression. J. Appl. Stat. 2016, 43, 526–537. [Google Scholar] [CrossRef]
  33. Cheng, P.E.; Chu, C.K. Kernel estimation of distribution functions and quantiles with missing data. Stat. Sin. 1996, 6, 63–78. [Google Scholar]
  34. Xu, D.; Du, J. Nonparametric quantile regression estimation for functional data with responses missing at random. Metrika 2020, 83, 977–990. [Google Scholar] [CrossRef]
  35. Kara-Zaitri, L.; Laksaci, A.; Rachdi, M.; Vieu, P. Data-driven kNN estimation for various problems involving functional data. J. Multivar. Anal. 2017, 153, 176–188. [Google Scholar] [CrossRef]
  36. Becker, E.M.; Christensen, J.; Frederiksen, C.S.; Haugaard, V.K. Front-face fluorescence spectroscopy and chemometrics in analysis of yogurt: Rapid analysis of riboflavin. J. Dairy Sci. 2003, 86, 2508–2515. [Google Scholar] [CrossRef]
Figure 1. A sample of functional regressors.
Figure 2. The spectral curves.
Figure 3. Comparison of the MSE between the two strategies in the moderate missing case.
Figure 4. Comparison of the MSE between the two strategies in the strong missing case.
Table 1. Comparison of the MAE between different quantile estimations.
| Estimator | Quartile | Rule | n | λ = 5 | λ = 0.5 | λ = 0.05 |
|---|---|---|---|---|---|---|
| Q̃_α | Q1 | 1 | 50 | 0.2358 | 0.3997 | 0.4672 |
| Q̃_α | Q1 | 2 | 50 | 0.2067 | 0.2676 | 0.3879 |
| Q̃_α | Q1 | 1 | 250 | 0.1407 | 0.2340 | 0.3717 |
| Q̃_α | Q1 | 2 | 250 | 0.1835 | 0.1980 | 0.2140 |
| Q̂_α | Q1 | 1 | 50 | 0.3672 | 0.4426 | 1.0578 |
| Q̂_α | Q1 | 2 | 50 | 0.7697 | 1.1725 | 1.7811 |
| Q̂_α | Q1 | 1 | 250 | 0.3086 | 0.3508 | 0.682 |
| Q̂_α | Q1 | 2 | 250 | 0.4162 | 0.5117 | 0.8106 |
| Q̄_α | Q1 | 1 | 50 | 1.3672 | 1.6426 | 1.0578 |
| Q̄_α | Q1 | 2 | 50 | 1.7697 | 1.9725 | 2.7811 |
| Q̄_α | Q1 | 1 | 250 | 0.9086 | 1.3508 | 1.4282 |
| Q̄_α | Q1 | 2 | 250 | 1.0462 | 1.5117 | 1.9106 |
| Q̃_α | Q2 | 1 | 50 | 0.1840 | 0.2253 | 0.4102 |
| Q̃_α | Q2 | 2 | 50 | 0.2005 | 0.2339 | 0.3650 |
| Q̃_α | Q2 | 1 | 250 | 0.0912 | 0.1873 | 0.1494 |
| Q̃_α | Q2 | 2 | 250 | 0.1101 | 0.1991 | 0.1616 |
| Q̂_α | Q2 | 1 | 50 | 0.0657 | 0.1650 | 0.2107 |
| Q̂_α | Q2 | 2 | 50 | 0.0984 | 0.1922 | 0.188 |
| Q̂_α | Q2 | 1 | 250 | 0.0677 | 0.0971 | 0.1478 |
| Q̂_α | Q2 | 2 | 250 | 0.0327 | 0.0303 | 0.0643 |
| Q̄_α | Q2 | 1 | 50 | 0.6967 | 0.9824 | 1.0788 |
| Q̄_α | Q2 | 2 | 50 | 0.4176 | 0.4521 | 0.6588 |
| Q̄_α | Q2 | 1 | 250 | 0.2952 | 0.3880 | 0.4950 |
| Q̄_α | Q2 | 2 | 250 | 0.2965 | 0.3718 | 0.4301 |
| Q̃_α | Q3 | 1 | 50 | 1.851 | 2.5103 | 3.2202 |
| Q̃_α | Q3 | 2 | 50 | 1.2035 | 3.3339 | 3.5650 |
| Q̃_α | Q3 | 1 | 250 | 1.7182 | 1.4723 | 2.5494 |
| Q̃_α | Q3 | 2 | 250 | 1.3111 | 1.3991 | 2.18616 |
| Q̂_α | Q3 | 1 | 50 | 0.9257 | 1.0350 | 1.1317 |
| Q̂_α | Q3 | 2 | 50 | 1.0804 | 1.1232 | 1.188 |
| Q̂_α | Q3 | 1 | 250 | 0.7017 | 0.9751 | 1.0278 |
| Q̂_α | Q3 | 2 | 250 | 0.6127 | 0.8303 | 0.9613 |
| Q̄_α | Q3 | 1 | 50 | 2.6907 | 2.7824 | 4.8898 |
| Q̄_α | Q3 | 2 | 50 | 1.8176 | 1.8251 | 1.9688 |
| Q̄_α | Q3 | 1 | 250 | 1.1479 | 1.1780 | 1.18950 |
| Q̄_α | Q3 | 2 | 250 | 1.0985 | 2.0898 | 2.1201 |
Table 2. Comparison of the resistance to outliers between conditional median and classical regression.
| Constant C | Median Regression | Classical Regression |
|---|---|---|
| C = 1 | 1.37 | 1.72 |
| C = 2 | 1.66 | 4.79 |
| C = 5 | 2.24 | 10.17 |
| C = 10 | 4.18 | 45.24 |