Article

Estimation of Multiresponse Multipredictor Nonparametric Regression Model Using Mixed Estimator

1 Department of Mathematics, Faculty of Science and Technology, Airlangga University, Surabaya 60115, Indonesia
2 Research Group of Statistical Modeling in Life Sciences, Faculty of Science and Technology, Airlangga University, Surabaya 60115, Indonesia
3 Department of Mathematics, Faculty of Mathematics and Natural Sciences, The University of Jember, Jember 68121, Indonesia
4 Department of Statistics, Faculty of Sciences and Data Analytics, Sepuluh Nopember Institute of Technology, Surabaya 60111, Indonesia
5 Department of Statistics, Faculty of Science, Muğla Sıtkı Koçman University, Muğla 48000, Turkey
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(4), 386; https://doi.org/10.3390/sym16040386
Submission received: 30 December 2023 / Revised: 5 March 2024 / Accepted: 11 March 2024 / Published: 25 March 2024

Abstract:
In data analysis using a nonparametric regression approach, we are often faced with the problem of analyzing a set of data that has mixed patterns, namely, some of the data have a certain pattern and the rest of the data have a different pattern. To handle this kind of data, we propose the use of a mixed estimator. In this study, we theoretically discuss a developed estimation method for a nonparametric regression model with two or more response variables and predictor variables, where there is a correlation between the response variables, using a mixed estimator. The model is called the multiresponse multipredictor nonparametric regression (MMNR) model. The mixed estimator used for estimating the MMNR model is a mixed estimator of smoothing spline and Fourier series that is suitable for analyzing data with patterns that partly change at certain subintervals, while others follow a recurring pattern in a certain trend. Since in the MMNR model there is a correlation between responses, a symmetric weight matrix is involved in the estimation process of the MMNR model. To estimate the MMNR model, we apply the reproducing kernel Hilbert space (RKHS) method to penalized weighted least square (PWLS) optimization for estimating the regression function of the MMNR model, which consists of a smoothing spline component and a Fourier series component. A simulation study to show the performance of the proposed method is also given. The obtained results are estimations of the smoothing spline component, the Fourier series component, the MMNR model, and the weight matrix, as well as the consistency of the estimated regression function. In conclusion, the estimation of the MMNR model using the mixed estimator is a combination of smoothing spline component and Fourier series component estimators; it depends on smoothing and oscillation parameters, is linear in the observations, and is consistent.

1. Introduction

Often, we have to carry out statistical analyses that involve relationships between several variables, in the sense of functional relationships between response variables and predictor variables. The appropriate statistical analysis to use in cases like this is regression analysis. In regression analysis, we basically build a mathematical model, usually called a regression model, in which the functional relationship between variables is represented by a regression function; for analysis purposes, we must estimate this regression function. In general, there are two basic types of regression model, namely, the parametric regression model and the nonparametric regression model; combining these two basic types yields a third type, called the semiparametric regression model. The parametric regression model is suitable for cases where the pattern of the relationship between the response variables and predictor variables is known, in the sense of indicating a certain curve shape, such as linear, quadratic, or cubic, either from initial investigations based on scatter diagrams or from past experience regarding the relationship between these variables. In contrast, the nonparametric regression model is suitable for cases where the pattern of the relationship between the response variables and predictor variables is unknown, in the sense that it does not indicate the shape of a particular curve. The shape of the curve is only assumed to be smooth, that is, contained in a Sobolev space [1].
To estimate the regression functions of parametric, nonparametric, and semiparametric regression models, several smoothing techniques, represented by estimators, are used. To date, several estimators have been discussed by previous researchers, both theoretically and in application. Among them is the local linear estimator, used to determine the boundary correction of the regression function of a nonparametric regression model [2], to reduce bias in estimating the regression function [3], and to design a standard growth chart for assessing toddlers' nutritional status [4]; the local polynomial estimator, used to estimate regression functions in the errors-in-variables case [5] and the correlated-errors case [6], and to estimate the regression functions of a functional data regression model [7] and a finite population regression model [8]; and the kernel estimator, used for estimating a nonparametric regression function [9], for estimating and investigating the consistency property of a regression function [10], and for estimating a regression function in the case of correlated errors [11]. However, these estimators (i.e., local linear, local polynomial, and kernel) are less flexible because they depend strongly on the neighborhood of the target point, called the bandwidth: a small bandwidth is needed to estimate a model of fluctuating data, and this produces an estimated curve that is too rough. Also, these estimators do not include a penalty function as a smoothness factor; they only consider the goodness of fit factor. This means that these estimators are not well suited to estimating models of data that fluctuate within subintervals, because they return estimation results with large values of the mean squared error (MSE).
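The bandwidth sensitivity described above can be illustrated with a minimal Nadaraya–Watson kernel smoother (a sketch on simulated data; the function and variable names are illustrative, not taken from the cited works):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Kernel (Nadaraya-Watson) regression estimate with a Gaussian kernel."""
    # Kernel weights between every evaluation point and every training point.
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 100))
y = np.sin(3 * x) + 0.2 * rng.standard_normal(100)   # fluctuating signal

grid = np.linspace(0, 2 * np.pi, 200)
smooth = nadaraya_watson(x, y, grid, bandwidth=1.0)   # large bandwidth: oversmooths
rough = nadaraya_watson(x, y, grid, bandwidth=0.05)   # small bandwidth: rough curve

# The large bandwidth flattens the fluctuations toward the overall mean level.
print(np.ptp(smooth) < np.ptp(rough))  # True
```

A single bandwidth must trade off these two failure modes, which is the inflexibility the text refers to; penalized estimators such as smoothing splines add an explicit smoothness term instead.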
On the other hand, spline estimators, especially the smoothing spline estimator, can handle these problems because splines consider not only the goodness of fit factor but also the smoothness factor [1,12]. Further, spline estimators such as the smoothing spline, M-type spline, truncated spline, penalized spline, least square spline, linear spline, and B-spline estimators are more flexible than other estimators for estimating nonparametric regression functions, especially for prediction and interpretation purposes [12,13]. These splines have been used and developed widely by many researchers. For example, Liu et al. [14] and Gao and Shi [15] used M-type splines for analyzing the variance in correlated data and for estimating regression functions of nonparametric and semiparametric regression models, respectively; Chamidah et al. [16] used truncated splines to estimate mean arterial pressure for prediction purposes; Chamidah et al. [17] and Lestari et al. [18] developed truncated spline and smoothing spline estimators, respectively, for estimating semiparametric regression models and determining the asymptotic properties of the estimator; Tirosh et al. [19], Irizarry [20], Adams et al. [21,22], Lee [23], and Maharani and Saputro [24] discussed smoothing splines for analyzing fractal-like signals, minimizing a risk estimate, modeling ARMA observations and estimating the smoothing parameter, selecting the smoothing parameter using simulation data, and determining the GCV criterion, respectively; Wang [13], Wang and Ke [25], Gu [26], and Sun et al. [27] discussed smoothing splines in ANOVA models; Wang et al. [28] applied a bivariate smoothing spline to data on the cortisol and ACTH hormones; Lu et al. [29] used a penalized spline for analyzing current status data; Berry and Helwig [30] compared tuning methods for penalized splines; Islamiyati et al. [31,32] developed a least square spline for estimating two-response nonparametric regression models and discussed linear splines in the modeling of blood sugar; and Kirkby et al. [33] estimated nonparametric densities using B-splines. Additionally, Osmani et al. [34] estimated the coefficients of a rates model using kernel and spline estimators.
In statistical modeling such as regression modeling, we often have to analyze the functional relationship between response variables and predictor variables where there are two or more response variables and the responses are correlated with each other. The regression models that describe a functional relationship between response variables and predictor variables with correlated responses are the multiresponse nonparametric regression (MNR) model and the multiresponse semiparametric regression (MSR) model. Because there is correlation between responses, we need to include a matrix called a weight matrix in the estimation process of the regression function. The inclusion of a weight matrix in the estimation process is what differentiates the MNR model from a uniresponse nonparametric regression (UNR) model, in which there is no correlation between responses. So, the process of estimating the regression functions of the MNR and MSR models requires a symmetric weight matrix, namely, a diagonal matrix. Several estimators can be used to estimate the regression functions of these MNR and MSR models; one of them is the smoothing spline estimator. Currently, many researchers are paying attention to developing and applying the smoothing spline estimator in many areas of research. For example, Chamidah et al. [17] and Lestari et al. [18] discussed smoothing splines in MSR models; Lestari et al. [35] and Wang et al. [28] discussed RKHS in an MNR model and applied an MNR model to determine associations between hormones, respectively. Because of its powerful and flexible properties, the smoothing spline estimator is one of the most popular estimators for estimating the regression functions of UNR and MNR models.
The previous description shows that the smoothing spline estimator has been used in statistical analyses based on regression model approaches such as the UNR and MNR models. The smoothing spline estimator provides good fitting results because it can handle data whose curve shows sharp decrease and increase patterns, yielding a relatively smooth estimated curve. Using the smoothing spline also provides several advantages: it has unique statistical properties, enables visual interpretation, can handle smooth data and functions, and can readily handle data that change at certain subintervals [1,12,13]. Meanwhile, the Fourier series estimator is a popular smoothing technique in nonparametric regression modeling. The Fourier series also provides good statistical and visual interpretation among nonparametric regression models. Using the Fourier series estimator provides advantages such as the ability to handle data that have a repeating pattern at a certain trend interval and to provide good statistical interpretation [36]. Suparti et al. [37], Mardianto et al. [38], and Amato et al. [39] proposed Fourier series methods for modeling inflation in Indonesia, for modeling longitudinal data, and for approximating separable models, respectively.
The research discussed above on the estimation of nonparametric regression models is still limited to the use of only one type of estimator, whereas in data analysis using a nonparametric regression approach, we are often faced with the problem of analyzing a set of data that has mixed patterns, namely, some of the data have a certain pattern and the rest have a different pattern. To handle these kinds of data, we should use a mixed estimator, namely, a combination of more than one estimator. Mariati et al. [40] discussed a uniresponse multivariable nonparametric regression model using a mixed estimator comprising a smoothing spline and a Fourier series, applied to data on poor households in Bali Province. However, Mariati et al. [40] only discussed estimating a UNR model, in the sense that there is no correlation between the response variables. In this study, therefore, we theoretically discuss a new estimation method using a mixed estimator for a nonparametric regression model with two or more response variables and predictor variables, where there is a correlation between the response variables. The model is called the multiresponse multipredictor nonparametric regression (MMNR) model. The mixed estimator used for estimating the MMNR model is a combination of a smoothing spline and a Fourier series, suitable for analyzing data wherein some patterns partly change at certain subintervals and others follow a recurring pattern in a certain trend. Since there is a correlation between responses in the MMNR model, a weight matrix is involved in its estimation process. Also, we apply the reproducing kernel Hilbert space (RKHS) method to penalized weighted least square (PWLS) optimization for estimating the regression function of our MMNR model.
In this study, therefore, we theoretically discuss determining the smoothing spline component and the Fourier series component of the MMNR model, determining the goodness of fit and penalty components of the PWLS optimization, determining the MMNR model estimate, determining the weight matrix, selecting the optimal smoothing and oscillation parameters in the MMNR model, and investigating the consistency of the regression function estimator of the MMNR model. Additionally, the results of this study contribute to the development of theoretical statistics, namely, statistical inference theory, regarding estimation and hypothesis testing in multiresponse multipredictor nonparametric regression based on a mixed smoothing spline and Fourier series estimator.

2. Materials and Methods

To achieve the objectives of this research, in this section, we briefly present materials and methods such as the MMNR model, mixed smoothing spline and Fourier series estimator, reproducing kernel Hilbert space (RKHS), and penalized weighted least square (PWLS) optimization.

2.1. The MMNR Model

We first consider a paired dataset $(x_{r1i}, x_{r2i}, \ldots, x_{rpi}, t_{r1i}, t_{r2i}, \ldots, t_{rqi}, y_{ri})$ for $r = 1, 2, \ldots, R$ and $i = 1, 2, \ldots, n$. We say that the paired dataset follows an MMNR model if the relationship between $y_{ri}$ and $(x_{r1i}, x_{r2i}, \ldots, x_{rpi}, t_{r1i}, t_{r2i}, \ldots, t_{rqi})$ satisfies the following functional equation:
$$y_{ri} = m_r(x_{r1i}, x_{r2i}, \ldots, x_{rpi}, t_{r1i}, t_{r2i}, \ldots, t_{rqi}) + \varepsilon_{ri} \tag{1}$$
where $r = 1, 2, \ldots, R$ represents responses; $i = 1, 2, \ldots, n$ represents observations; $y_{ri}$ is the value of the response variable at the $i$th observation and $r$th response; $m_r$ is the unknown regression function of the $r$th response; $x_{r1i}, x_{r2i}, \ldots, x_{rpi}$ are the values of the $p$ smoothing spline predictor variables at the $i$th observation and $r$th response; $t_{r1i}, t_{r2i}, \ldots, t_{rqi}$ are the values of the $q$ Fourier series predictor variables at the $i$th observation and $r$th response; and $\varepsilon_{ri}$ is the random error at the $i$th observation and $r$th response, which has mean zero and variance $\sigma_{ri}^2$. Here, we assume that the correlation between responses is $\rho_{\ell s} = \rho$ for $\ell = s$ and $\rho_{\ell s} = 0$ for $\ell \neq s$.
Since the shape of the regression function $m_r$ in the MMNR model is assumed to be unknown and additive in nature, the regression function $m_r$ in Equation (1) can be presented as follows:
$$m_r(x_{r1i}, x_{r2i}, \ldots, x_{rpi}, t_{r1i}, t_{r2i}, \ldots, t_{rqi}) = \sum_{j=1}^{p} f_{rj}(x_{rji}) + \sum_{k=1}^{q} g_{rk}(t_{rki}) \tag{2}$$
where $r = 1, 2, \ldots, R$; $i = 1, 2, \ldots, n$; $\sum_{j=1}^{p} f_{rj}(x_{rji})$ is the smoothing spline component of the regression function $m_r$, in which $f_{rj}(x_{rji})$, $j = 1, 2, \ldots, p$, are contained in a Sobolev space $W_2^m[a_j, b_j]$, namely, $f_{rj} \in W_2^m[a_j, b_j]$; and $\sum_{k=1}^{q} g_{rk}(t_{rki})$ is the Fourier series component of the regression function $m_r$, which is approximated by a Fourier series function. Hereinafter, based on Equations (1) and (2), we have the following MMNR model:
$$y_{ri} = \sum_{j=1}^{p} f_{rj}(x_{rji}) + \sum_{k=1}^{q} g_{rk}(t_{rki}) + \varepsilon_{ri}; \quad r = 1, 2, \ldots, R; \; i = 1, 2, \ldots, n. \tag{3}$$
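To fix ideas, the following sketch simulates data from a two-response model of the form of Equation (3), with hypothetical component functions and an assumed between-response error correlation (all specific choices here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, R = 200, 2                    # n observations, R responses

x = rng.uniform(0, 1, n)         # a smoothing spline predictor
t = rng.uniform(0, np.pi, n)     # a Fourier series predictor

# Hypothetical components: f_r changes shape over subintervals, while
# g_r combines a linear trend with a recurring (cosine) pattern.
f = [np.where(x < 0.5, np.sin(4 * np.pi * x), 2 * x - 1), x ** 2]
g = [0.5 * t + np.cos(2 * t), 0.3 * t + np.cos(3 * t)]

# Errors correlated across responses at each observation (assumed rho = 0.6).
cov = 0.1 * np.array([[1.0, 0.6], [0.6, 1.0]])
eps = rng.multivariate_normal(np.zeros(R), cov, size=n).T

y = np.array([f[r] + g[r] + eps[r] for r in range(R)])
print(y.shape)  # (2, 200): R responses with n observations each
```

Data of this mixed-pattern type is exactly what the mixed smoothing spline and Fourier series estimator targets.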
Furthermore, the regression function of the MMNR model presented in (3) is estimated using a mixed smoothing spline and Fourier series estimator by applying the reproducing kernel Hilbert space (RKHS) method to the developed penalized weighted least square.

2.2. Mixed Smoothing Spline and Fourier Series Estimator

To estimate the regression function of the MMNR model presented in (3), we use a mixed smoothing spline and Fourier series estimator, which is suitable for analyzing data with some patterns that partly change at certain subintervals and others that follow a recurring pattern in a certain trend. The regression function $m_r$ in the MMNR model in (3) consists of the component $\sum_{j=1}^{p} f_{rj}(x_{rji})$, which is approached by a smoothing spline, and the component $\sum_{k=1}^{q} g_{rk}(t_{rki})$, which is approximated by the following Fourier series function:
$$g_{rk}(t_{rki}) = b_{rk} t_{rki} + \frac{1}{2}\alpha_{0rk} + \sum_{v=1}^{K} \alpha_{vrk} \cos(v\, t_{rki}) \tag{4}$$
where $r = 1, 2, \ldots, R$; $k = 1, 2, \ldots, q$; and $i = 1, 2, \ldots, n$.
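Equation (4) can be evaluated directly. The following sketch assumes hypothetical coefficient values (`b`, `alpha0`, and `alphas` are illustrative names for $b_{rk}$, $\alpha_{0rk}$, and the $\alpha_{vrk}$):

```python
import numpy as np

def fourier_component(t, b, alpha0, alphas):
    """Evaluate g(t) = b*t + alpha0/2 + sum_{v=1}^{K} alpha_v cos(v t), Equation (4)."""
    t = np.asarray(t, dtype=float)
    v = np.arange(1, len(alphas) + 1)            # oscillation indices 1..K
    return b * t + 0.5 * alpha0 + np.cos(np.outer(t, v)) @ np.asarray(alphas)

t = np.linspace(0, np.pi, 5)
g = fourier_component(t, b=0.5, alpha0=1.0, alphas=[0.8, -0.3])  # K = 2
print(g[0])  # at t = 0: 0.5*1.0 + 0.8 - 0.3 = 1.0
```

The number of cosine terms `K` is the oscillation parameter whose selection is discussed later in the paper.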
Since we involve a smoothing spline estimator in the process of estimating the regression function, according to Wahba [12] and Wang [13], we should use the reproducing kernel Hilbert space (RKHS) method, applied to the developed penalized weighted least square (PWLS) optimization.

2.3. Reproducing Kernel Hilbert Space (RKHS)

The reproducing kernel Hilbert space (RKHS) method was first introduced by Aronszajn [41]. The RKHS method was developed for estimating nonparametric and semiparametric regression models by several researchers, such as Wang [13], Chamidah et al. [17], Lestari et al. [18], and Kimeldorf and Wahba [42]. We call a Hilbert space $\mathcal{H}$ an RKHS on a set $X$ over a field $\mathbb{F}$ if it meets the following conditions, as shown by Aronszajn [41], Berlinet and Thomas-Agnan [43], Paulsen [44], and Yuan and Cai [45]:
(i) $\mathcal{H}$ is a vector subspace of $\mathcal{F}(X, \mathbb{F})$, the vector space of all functions from $X$ to $\mathbb{F}$;
(ii) $\mathcal{H}$ is equipped with an inner product $\langle \cdot, \cdot \rangle$ under which it is a Hilbert space;
(iii) for every $y \in X$, the linear evaluation functional $E_y : \mathcal{H} \to \mathbb{F}$ defined by $E_y(f) = f(y)$ is bounded.
Furthermore, if $\mathcal{H}$ is an RKHS on $X$, then for every $y \in X$ there exists a unique vector $k_y \in \mathcal{H}$ such that $f(y) = \langle f, k_y \rangle$ for every $f \in \mathcal{H}$. Note that every bounded linear functional is given by the inner product with a unique vector in $\mathcal{H}$, and under this condition, we call the function $k_y$ the reproducing kernel (RK) for the point $y$. This means that the reproducing kernel for $\mathcal{H}$ is a two-variable function defined by $K(x, y) = k_y(x)$. It implies that $K(x, y) = k_y(x) = \langle k_y, k_x \rangle$ and $\|E_y\|^2 = \|k_y\|^2 = \langle k_y, k_y \rangle = K(y, y)$ [41,42,43,44,45].
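On a finite set, these defining properties can be verified numerically. The sketch below uses a Gaussian kernel (an assumed choice for illustration) and checks the reproducing identity $\langle f, k_y \rangle = f(y)$:

```python
import numpy as np

# On a finite set X = {x_1, ..., x_6}, functions on X are vectors, the RKHS
# inner product induced by a positive-definite kernel matrix K is
# <f, g>_H = f' K^{-1} g, and the representer of evaluation at x_i is the
# kernel column k_i = K[:, i].
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 6)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.1) ** 2)  # Gaussian kernel

Kinv = np.linalg.inv(K)
f = rng.standard_normal(6)        # an arbitrary function on the six points

for i in range(6):
    k_i = K[:, i]                 # representer k_{x_i} of the evaluation E_i
    assert np.isclose(f @ Kinv @ k_i, f[i])  # reproducing: <f, k_i>_H = f(x_i)
print("reproducing property verified")
```

The same identity, with $\mathcal{H}$ infinite-dimensional, is what the representer $\xi_{rji}$ used below provides.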
In this study, the RKHS method is applied to the developed penalized weighted least square (PWLS) optimization to obtain the estimate of the regression function of the MMNR model presented in (3). In the following section, we provide the PWLS optimization that we developed for the case of interest.

2.4. Penalized Weighted Least Square (PWLS) Optimization

Since the regression function of the multiresponse multipredictor nonparametric regression (MMNR) model in (3) consists of two components, namely, the smoothing spline component $\sum_{j=1}^{p} f_{rj}(x_{rji})$ and the Fourier series component $\sum_{k=1}^{q} g_{rk}(t_{rki})$, we extend the PWLS optimization, previously developed for the single-estimator (smoothing spline only) case by Wang et al. [28], Lestari et al. [18], and Islamiyati et al. [31], to the case of the two mixed estimators, namely, the smoothing spline and Fourier series estimators. In the process of estimating the regression function of the MMNR model using the developed PWLS optimization, we assume that $g_{rk}(t_{rki})$ is a fixed function. Hence, we can estimate the smoothing spline component of the MMNR model presented in (3) for every response $r = 1, 2, \ldots, R$ by using the following developed PWLS optimization:
$$\min_{f_{rj} \in W_2^m[a_j, b_j]} \left\{ n^{-1} \sum_{i=1}^{n} w_{ri} \left( y_{ri} - \sum_{j=1}^{p} f_{rj}(x_{rji}) - \sum_{k=1}^{q} \left[ b_{rk} t_{rki} + \frac{1}{2}\alpha_{0rk} + \sum_{v=1}^{K} \alpha_{vrk} \cos(v\, t_{rki}) \right] \right)^2 + \sum_{j=1}^{p} \lambda_{rj} \int_{a_j}^{b_j} \left( f_{rj}^{(m)}(x_{rj}) \right)^2 dx_{rj} \right\} \tag{5}$$
where $0 < \lambda_{rj} < \infty$; $r = 1, 2, \ldots, R$; $j = 1, 2, \ldots, p$; and $w_{ri}$ represents a weight, taken as the inverse of the variance. Note that we include the weight $w_{ri}$ in the developed PWLS presented in (5) because in this case there is a correlation between responses.
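In generic matrix form, a PWLS criterion of this ridge type has a closed-form minimizer. The sketch below is a simplified single-design version with an identity penalty (all names and the penalty choice are illustrative assumptions, not the paper's mixed system):

```python
import numpy as np

def pwls(B, y, W, P, lam):
    """Generic penalized weighted least squares:
    minimize (y - B th)' W (y - B th) + lam * th' P th,
    whose minimizer is th = (B'WB + lam*P)^{-1} B'W y."""
    A = B.T @ W @ B + lam * P
    return np.linalg.solve(A, B.T @ W @ y)

rng = np.random.default_rng(3)
n, p = 50, 8
B = rng.standard_normal((n, p))                  # a generic design/basis matrix
theta_true = rng.standard_normal(p)
y = B @ theta_true + 0.1 * rng.standard_normal(n)

W = np.diag(np.full(n, 1 / 0.1 ** 2))            # weights = inverse error variances
P = np.eye(p)                                    # placeholder penalty matrix

theta0 = pwls(B, y, W, P, lam=0.0)               # reduces to weighted least squares
theta1 = pwls(B, y, W, P, lam=1e3)               # the penalty shrinks the estimate
print(np.linalg.norm(theta1) < np.linalg.norm(theta0))  # True
```

In the paper's setting, `B`, `W`, and `P` are replaced by the mixed spline/Fourier design, the between-response weight matrix, and the roughness penalty of (5), respectively.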

3. Results and Discussions

In this section, we theoretically discuss the estimation process of the multiresponse multipredictor nonparametric regression (MMNR) model. Firstly, suppose that we have a paired dataset $(x_{r1i}, x_{r2i}, \ldots, x_{rpi}, t_{r1i}, t_{r2i}, \ldots, t_{rqi}, y_{ri})$ for $r = 1, 2, \ldots, R$ and $i = 1, 2, \ldots, n$, where the functional relationship between the predictor variables $(x_{r1i}, \ldots, x_{rpi}, t_{r1i}, \ldots, t_{rqi})$ and the response variables $y_{ri}$ meets the MMNR model presented in Equation (3), namely:
$$y_{ri} = m_r(x_{r1i}, x_{r2i}, \ldots, x_{rpi}, t_{r1i}, t_{r2i}, \ldots, t_{rqi}) + \varepsilon_{ri} = \sum_{j=1}^{p} f_{rj}(x_{rji}) + \sum_{k=1}^{q} g_{rk}(t_{rki}) + \varepsilon_{ri}; \quad r = 1, 2, \ldots, R; \; i = 1, 2, \ldots, n$$
where information and assumptions about the multiresponse multipredictor nonparametric regression (MMNR) model are presented in Section 2.1.
Since this MMNR model has more than one response variable and predictor variable, and its regression function is composed of two components, namely, a smoothing spline component and a Fourier series component, we should express the MMNR model presented in Equation (3) in matrix notation; this simplifies the process of estimating the MMNR model using the RKHS method applied to the developed PWLS optimization. The matrix form is as follows:
$$\begin{pmatrix} y_{11}\\ y_{12}\\ \vdots\\ y_{1n}\\ y_{21}\\ \vdots\\ y_{Rn} \end{pmatrix}_{Rn \times 1} = \begin{pmatrix} \sum_{j=1}^{p} f_{1j}(x_{1j1})\\ \sum_{j=1}^{p} f_{1j}(x_{1j2})\\ \vdots\\ \sum_{j=1}^{p} f_{1j}(x_{1jn})\\ \sum_{j=1}^{p} f_{2j}(x_{2j1})\\ \vdots\\ \sum_{j=1}^{p} f_{Rj}(x_{Rjn}) \end{pmatrix}_{Rn \times 1} + \begin{pmatrix} \sum_{k=1}^{q} g_{1k}(t_{1k1})\\ \sum_{k=1}^{q} g_{1k}(t_{1k2})\\ \vdots\\ \sum_{k=1}^{q} g_{1k}(t_{1kn})\\ \sum_{k=1}^{q} g_{2k}(t_{2k1})\\ \vdots\\ \sum_{k=1}^{q} g_{Rk}(t_{Rkn}) \end{pmatrix}_{Rn \times 1} + \begin{pmatrix} \varepsilon_{11}\\ \varepsilon_{12}\\ \vdots\\ \varepsilon_{1n}\\ \varepsilon_{21}\\ \vdots\\ \varepsilon_{Rn} \end{pmatrix}_{Rn \times 1}. \tag{6}$$
Furthermore, we may write the MMNR model presented in (6) as follows:
$$\begin{pmatrix} y_{11}\\ y_{12}\\ \vdots\\ y_{Rn} \end{pmatrix}_{Rn \times 1} = \begin{pmatrix} f_1(x_{11})\\ f_1(x_{12})\\ \vdots\\ f_R(x_{Rn}) \end{pmatrix}_{Rn \times 1} + \begin{pmatrix} g_1(t_{11})\\ g_1(t_{12})\\ \vdots\\ g_R(t_{Rn}) \end{pmatrix}_{Rn \times 1} + \begin{pmatrix} \varepsilon_{11}\\ \varepsilon_{12}\\ \vdots\\ \varepsilon_{Rn} \end{pmatrix}_{Rn \times 1}. \tag{7}$$
Hence, the MMNR model given in (7) can be written as follows:
$$\mathbf{y} = \mathbf{f} + \mathbf{g} + \boldsymbol{\varepsilon} \tag{8}$$
where $\mathbf{y} = (\mathbf{y}_1^T, \mathbf{y}_2^T, \ldots, \mathbf{y}_R^T)^T$; $\mathbf{f} = (\mathbf{f}_1(\mathbf{x}_1)^T, \mathbf{f}_2(\mathbf{x}_2)^T, \ldots, \mathbf{f}_R(\mathbf{x}_R)^T)^T$; $\mathbf{g} = (\mathbf{g}_1(\mathbf{t}_1)^T, \mathbf{g}_2(\mathbf{t}_2)^T, \ldots, \mathbf{g}_R(\mathbf{t}_R)^T)^T$; $\boldsymbol{\varepsilon} = (\boldsymbol{\varepsilon}_1^T, \boldsymbol{\varepsilon}_2^T, \ldots, \boldsymbol{\varepsilon}_R^T)^T$; and, for $r = 1, 2, \ldots, R$: $\mathbf{y}_r = (y_{r1}, y_{r2}, \ldots, y_{rn})^T$; $\mathbf{f}_r(\mathbf{x}_r) = (f_r(x_{r1}), f_r(x_{r2}), \ldots, f_r(x_{rn}))^T$; $\mathbf{g}_r(\mathbf{t}_r) = (g_r(t_{r1}), g_r(t_{r2}), \ldots, g_r(t_{rn}))^T$; $\boldsymbol{\varepsilon}_r = (\varepsilon_{r1}, \varepsilon_{r2}, \ldots, \varepsilon_{rn})^T$.
Next, by letting $\mathbf{z} = \mathbf{y} - \mathbf{g}$, we may write the MMNR model given in (8) as follows:
$$\mathbf{z} = \mathbf{f} + \boldsymbol{\varepsilon}. \tag{9}$$
In the following sections, we discuss determining the smoothing spline component and the Fourier series component of the MMNR model, determining the goodness of fit and penalty components of the PWLS optimization, estimating the MMNR model, estimating the weight matrix, selecting the optimal smoothing and oscillation parameters in the MMNR model, and investigating the consistency of the regression function estimator of the MMNR model.

3.1. Determining Smoothing Spline Component of MMNR Model

To determine the smoothing spline component of the regression function of the MMNR model presented in Equation (1), we use the RKHS method applied to PWLS optimization. Since $\sum_{j=1}^{p} f_{rj}(x_{rji})$ is the smoothing spline component of the MMNR model presented in (3), we use the RKHS method to estimate it in the following steps. Firstly, suppose that we decompose a Hilbert space $\mathcal{H}$ into a direct sum of the Hilbert subspace $\mathcal{H}_0$, with basis $\{\gamma_{rj1}, \gamma_{rj2}, \ldots, \gamma_{rjm}\}$, where $m$ represents the order of the polynomial spline, and the Hilbert subspace $\mathcal{H}_1$, with basis $\{\beta_{rj1}, \beta_{rj2}, \ldots, \beta_{rjn}\}$, where $n$ represents the number of observations, and with $\mathcal{H}_0 \perp \mathcal{H}_1$ (i.e., $\mathcal{H}_0$ is perpendicular to $\mathcal{H}_1$), as follows:
$$\mathcal{H} = \mathcal{H}_0 \oplus \mathcal{H}_1.$$
Hence, we can express every function $f_{rj} \in \mathcal{H}$ as follows:
$$f_{rj} = u_{rj} + v_{rj} = \sum_{k=1}^{m} c_{rjk}\gamma_{rjk} + \sum_{s=1}^{n} d_{rjs}\beta_{rjs} = \boldsymbol{\gamma}_{rj}^T \mathbf{c}_{rj} + \boldsymbol{\beta}_{rj}^T \mathbf{d}_{rj} \tag{10}$$
where $u_{rj} \in \mathcal{H}_0$; $v_{rj} \in \mathcal{H}_1$; $\mathbf{c}_{rj}$ and $\mathbf{d}_{rj}$ are vectors of constants; $\boldsymbol{\gamma}_{rj} = (\gamma_{rj1}, \gamma_{rj2}, \ldots, \gamma_{rjm})^T$; $\boldsymbol{\beta}_{rj} = (\beta_{rj1}, \beta_{rj2}, \ldots, \beta_{rjn})^T$; $\mathbf{c}_{rj} = (c_{rj1}, c_{rj2}, \ldots, c_{rjm})^T$; and $\mathbf{d}_{rj} = (d_{rj1}, d_{rj2}, \ldots, d_{rjn})^T$.
Secondly, suppose that $L_x$ is a bounded linear functional on $\mathcal{H}$ and $f_{rj} \in \mathcal{H}$; then, we have the following relationship:
$$L_x f_{rj} = L_x(u_{rj} + v_{rj}) = u_{rj}(x_{ri}) + v_{rj}(x_{ri}) = f_{rj}(x_{ri}). \tag{11}$$
Since $L_x$ is a bounded linear functional on $\mathcal{H}$, according to Yuan and Cai [45] and Lestari et al. [35], there exists a representer $\xi_{rji} \in \mathcal{H}$ that satisfies the following equation:
$$L_x f_{rj} = \langle \xi_{rji}, f_{rj} \rangle = f_{rj}(x_{ri}). \tag{12}$$
Based on Equations (10) and (11), we may write Equation (12) as follows:
$$f_{rj}(x_{ri}) = \langle \xi_{rji}, f_{rj} \rangle = \langle \xi_{rji}, \boldsymbol{\gamma}_{rj}^T \mathbf{c}_{rj} \rangle + \langle \xi_{rji}, \boldsymbol{\beta}_{rj}^T \mathbf{d}_{rj} \rangle. \tag{13}$$
Hence, for $j = 1$, we may write Equation (13) as follows:
$$f_{r1}(x_{ri}) = \langle \xi_{r1i}, \boldsymbol{\gamma}_{r1}^T \mathbf{c}_{r1} \rangle + \langle \xi_{r1i}, \boldsymbol{\beta}_{r1}^T \mathbf{d}_{r1} \rangle, \quad i = 1, 2, \ldots, n. \tag{14}$$
Next, based on Equation (14) for $j = 1$ and $i = 1$, we have $f_{r1}(x_{r1})$ as follows:
$$f_{r1}(x_{r1}) = \langle \xi_{r11}, \boldsymbol{\gamma}_{r1}^T \mathbf{c}_{r1} \rangle + \langle \xi_{r11}, \boldsymbol{\beta}_{r1}^T \mathbf{d}_{r1} \rangle = c_{r11}\langle \xi_{r11}, \gamma_{r11} \rangle + c_{r12}\langle \xi_{r11}, \gamma_{r12} \rangle + \cdots + c_{r1m}\langle \xi_{r11}, \gamma_{r1m} \rangle + d_{r11}\langle \xi_{r11}, \beta_{r11} \rangle + d_{r12}\langle \xi_{r11}, \beta_{r12} \rangle + \cdots + d_{r1n}\langle \xi_{r11}, \beta_{r1n} \rangle. \tag{15}$$
Similarly, based on Equation (14) for $j = 1$ and $i = 2$, we have $f_{r1}(x_{r2})$ as follows:
$$f_{r1}(x_{r2}) = \langle \xi_{r12}, \boldsymbol{\gamma}_{r1}^T \mathbf{c}_{r1} \rangle + \langle \xi_{r12}, \boldsymbol{\beta}_{r1}^T \mathbf{d}_{r1} \rangle = c_{r11}\langle \xi_{r12}, \gamma_{r11} \rangle + c_{r12}\langle \xi_{r12}, \gamma_{r12} \rangle + \cdots + c_{r1m}\langle \xi_{r12}, \gamma_{r1m} \rangle + d_{r11}\langle \xi_{r12}, \beta_{r11} \rangle + d_{r12}\langle \xi_{r12}, \beta_{r12} \rangle + \cdots + d_{r1n}\langle \xi_{r12}, \beta_{r1n} \rangle. \tag{16}$$
If we continue this process for $j = 1$ and $i = 3, 4, \ldots, n$ in a similar way, then for $i = n$ we have $f_{r1}(x_{rn})$ as follows:
$$f_{r1}(x_{rn}) = \langle \xi_{r1n}, \boldsymbol{\gamma}_{r1}^T \mathbf{c}_{r1} \rangle + \langle \xi_{r1n}, \boldsymbol{\beta}_{r1}^T \mathbf{d}_{r1} \rangle = c_{r11}\langle \xi_{r1n}, \gamma_{r11} \rangle + c_{r12}\langle \xi_{r1n}, \gamma_{r12} \rangle + \cdots + c_{r1m}\langle \xi_{r1n}, \gamma_{r1m} \rangle + d_{r11}\langle \xi_{r1n}, \beta_{r11} \rangle + d_{r12}\langle \xi_{r1n}, \beta_{r12} \rangle + \cdots + d_{r1n}\langle \xi_{r1n}, \beta_{r1n} \rangle. \tag{17}$$
Furthermore, based on Equation (13) for $j = 2$, we have $f_{r2}(x_{ri})$ as follows:
$$f_{r2}(x_{ri}) = \langle \xi_{r2i}, \boldsymbol{\gamma}_{r2}^T \mathbf{c}_{r2} \rangle + \langle \xi_{r2i}, \boldsymbol{\beta}_{r2}^T \mathbf{d}_{r2} \rangle, \quad i = 1, 2, \ldots, n. \tag{18}$$
Hence, based on Equation (18) for $j = 2$ and $i = 1$, we have $f_{r2}(x_{r1})$ as follows:
$$f_{r2}(x_{r1}) = \langle \xi_{r21}, \boldsymbol{\gamma}_{r2}^T \mathbf{c}_{r2} \rangle + \langle \xi_{r21}, \boldsymbol{\beta}_{r2}^T \mathbf{d}_{r2} \rangle = c_{r21}\langle \xi_{r21}, \gamma_{r21} \rangle + c_{r22}\langle \xi_{r21}, \gamma_{r22} \rangle + \cdots + c_{r2m}\langle \xi_{r21}, \gamma_{r2m} \rangle + d_{r21}\langle \xi_{r21}, \beta_{r21} \rangle + d_{r22}\langle \xi_{r21}, \beta_{r22} \rangle + \cdots + d_{r2n}\langle \xi_{r21}, \beta_{r2n} \rangle. \tag{19}$$
In a similar way, based on Equation (18) for $j = 2$ and $i = 2$, we have $f_{r2}(x_{r2})$ as follows:
$$f_{r2}(x_{r2}) = \langle \xi_{r22}, \boldsymbol{\gamma}_{r2}^T \mathbf{c}_{r2} \rangle + \langle \xi_{r22}, \boldsymbol{\beta}_{r2}^T \mathbf{d}_{r2} \rangle = c_{r21}\langle \xi_{r22}, \gamma_{r21} \rangle + c_{r22}\langle \xi_{r22}, \gamma_{r22} \rangle + \cdots + c_{r2m}\langle \xi_{r22}, \gamma_{r2m} \rangle + d_{r21}\langle \xi_{r22}, \beta_{r21} \rangle + d_{r22}\langle \xi_{r22}, \beta_{r22} \rangle + \cdots + d_{r2n}\langle \xi_{r22}, \beta_{r2n} \rangle. \tag{20}$$
We continue this process for $j = 2$ and $i = 3, 4, \ldots, n$ in a similar way, such that for $i = n$ we have $f_{r2}(x_{rn})$ as follows:
$$f_{r2}(x_{rn}) = \langle \xi_{r2n}, \boldsymbol{\gamma}_{r2}^T \mathbf{c}_{r2} \rangle + \langle \xi_{r2n}, \boldsymbol{\beta}_{r2}^T \mathbf{d}_{r2} \rangle = c_{r21}\langle \xi_{r2n}, \gamma_{r21} \rangle + c_{r22}\langle \xi_{r2n}, \gamma_{r22} \rangle + \cdots + c_{r2m}\langle \xi_{r2n}, \gamma_{r2m} \rangle + d_{r21}\langle \xi_{r2n}, \beta_{r21} \rangle + d_{r22}\langle \xi_{r2n}, \beta_{r22} \rangle + \cdots + d_{r2n}\langle \xi_{r2n}, \beta_{r2n} \rangle. \tag{21}$$
Based on Equations (15)–(17), we obtain the smoothing spline component of the regression function $m_r$ of the MMNR model for $j = 1$ and $i = 1, 2, \ldots, n$ as follows:
$$\mathbf{f}_{r1} = \begin{pmatrix} f_{r1}(x_{r1})\\ f_{r1}(x_{r2})\\ \vdots\\ f_{r1}(x_{rn}) \end{pmatrix} = \begin{pmatrix} \langle \xi_{r11}, \gamma_{r11} \rangle & \langle \xi_{r11}, \gamma_{r12} \rangle & \cdots & \langle \xi_{r11}, \gamma_{r1m} \rangle\\ \langle \xi_{r12}, \gamma_{r11} \rangle & \langle \xi_{r12}, \gamma_{r12} \rangle & \cdots & \langle \xi_{r12}, \gamma_{r1m} \rangle\\ \vdots & \vdots & \ddots & \vdots\\ \langle \xi_{r1n}, \gamma_{r11} \rangle & \langle \xi_{r1n}, \gamma_{r12} \rangle & \cdots & \langle \xi_{r1n}, \gamma_{r1m} \rangle \end{pmatrix} \begin{pmatrix} c_{r11}\\ c_{r12}\\ \vdots\\ c_{r1m} \end{pmatrix} + \begin{pmatrix} \langle \xi_{r11}, \beta_{r11} \rangle & \langle \xi_{r11}, \beta_{r12} \rangle & \cdots & \langle \xi_{r11}, \beta_{r1n} \rangle\\ \langle \xi_{r12}, \beta_{r11} \rangle & \langle \xi_{r12}, \beta_{r12} \rangle & \cdots & \langle \xi_{r12}, \beta_{r1n} \rangle\\ \vdots & \vdots & \ddots & \vdots\\ \langle \xi_{r1n}, \beta_{r11} \rangle & \langle \xi_{r1n}, \beta_{r12} \rangle & \cdots & \langle \xi_{r1n}, \beta_{r1n} \rangle \end{pmatrix} \begin{pmatrix} d_{r11}\\ d_{r12}\\ \vdots\\ d_{r1n} \end{pmatrix} = \mathbf{U}_{r1}\mathbf{c}_{r1} + \mathbf{V}_{r1}\mathbf{d}_{r1}. \tag{22}$$
Next, in the same way as in the process of obtaining Equation (22), we obtain $\mathbf{f}_{rj}$ for $j = 2, 3, \ldots, p$ as follows:
$$\mathbf{f}_{r2} = \mathbf{U}_{r2}\mathbf{c}_{r2} + \mathbf{V}_{r2}\mathbf{d}_{r2}, \quad \mathbf{f}_{r3} = \mathbf{U}_{r3}\mathbf{c}_{r3} + \mathbf{V}_{r3}\mathbf{d}_{r3}, \quad \ldots, \quad \mathbf{f}_{rp} = \mathbf{U}_{rp}\mathbf{c}_{rp} + \mathbf{V}_{rp}\mathbf{d}_{rp}. \tag{23}$$
Since $\langle \xi_{rji}, \beta_{rjs} \rangle = \langle \beta_{rji}, \beta_{rjs} \rangle$, we have the matrix $\mathbf{V}_{rj}$ as follows:
$$\mathbf{V}_{rj} = \begin{pmatrix} \langle \beta_{rj1}, \beta_{rj1} \rangle & \langle \beta_{rj1}, \beta_{rj2} \rangle & \cdots & \langle \beta_{rj1}, \beta_{rjn} \rangle\\ \langle \beta_{rj2}, \beta_{rj1} \rangle & \langle \beta_{rj2}, \beta_{rj2} \rangle & \cdots & \langle \beta_{rj2}, \beta_{rjn} \rangle\\ \vdots & \vdots & \ddots & \vdots\\ \langle \beta_{rjn}, \beta_{rj1} \rangle & \langle \beta_{rjn}, \beta_{rj2} \rangle & \cdots & \langle \beta_{rjn}, \beta_{rjn} \rangle \end{pmatrix}. \tag{24}$$
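The entries of $\mathbf{V}_{rj}$ form a Gram matrix of inner products of kernel sections. As a concrete instance (an assumed choice for illustration, since the paper leaves the basis $\beta_{rji}$ abstract), the sketch below builds such a Gram matrix from one standard reproducing kernel for the cubic ($m = 2$) case on $[0, 1]$:

```python
import numpy as np

def cubic_spline_kernel(s, t):
    """One common reproducing kernel for the penalized part of W_2^2[0,1]:
    k(s, t) = integral_0^1 (s-u)_+ (t-u)_+ du, evaluated in closed form."""
    m = np.minimum(s, t)
    return s * t * m - (s + t) * m ** 2 / 2 + m ** 3 / 3

x = np.linspace(0.1, 0.9, 5)                       # five design points
V = cubic_spline_kernel(x[:, None], x[None, :])    # Gram matrix <beta_i, beta_j>

assert np.allclose(V, V.T)                         # symmetric, as in (24)
assert np.all(np.linalg.eigvalsh(V) > 0)           # positive definite here
print(V.shape)  # (5, 5)
```

Symmetry and positive definiteness of this Gram matrix are what make the later PWLS system solvable.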
Based on Equations (22) and (23), we obtain the smoothing spline component for the $r$th response, namely, $\mathbf{f}_r$, as follows:
$$\mathbf{f}_r = \begin{pmatrix} \mathbf{f}_{r1}\\ \mathbf{f}_{r2}\\ \vdots\\ \mathbf{f}_{rp} \end{pmatrix} = \begin{pmatrix} \mathbf{U}_{r1}\mathbf{c}_{r1} + \mathbf{V}_{r1}\mathbf{d}_{r1}\\ \mathbf{U}_{r2}\mathbf{c}_{r2} + \mathbf{V}_{r2}\mathbf{d}_{r2}\\ \vdots\\ \mathbf{U}_{rp}\mathbf{c}_{rp} + \mathbf{V}_{rp}\mathbf{d}_{rp} \end{pmatrix} = \begin{pmatrix} \mathbf{U}_{r1} & 0 & \cdots & 0\\ 0 & \mathbf{U}_{r2} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \mathbf{U}_{rp} \end{pmatrix} \begin{pmatrix} \mathbf{c}_{r1}\\ \mathbf{c}_{r2}\\ \vdots\\ \mathbf{c}_{rp} \end{pmatrix} + \begin{pmatrix} \mathbf{V}_{r1} & 0 & \cdots & 0\\ 0 & \mathbf{V}_{r2} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \mathbf{V}_{rp} \end{pmatrix} \begin{pmatrix} \mathbf{d}_{r1}\\ \mathbf{d}_{r2}\\ \vdots\\ \mathbf{d}_{rp} \end{pmatrix} = \mathbf{U}_r \mathbf{c}_r + \mathbf{V}_r \mathbf{d}_r. \tag{25}$$
Finally, based on Equation (25), we obtain the smoothing spline component for all responses $r = 1, 2, \ldots, R$, namely, $\mathbf{f}$, as follows:
$$\mathbf{f} = (\mathbf{f}_1^T, \mathbf{f}_2^T, \ldots, \mathbf{f}_R^T)^T = \mathrm{diag}(\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_R)\,(\mathbf{c}_1^T, \mathbf{c}_2^T, \ldots, \mathbf{c}_R^T)^T + \mathrm{diag}(\mathbf{V}_1, \mathbf{V}_2, \ldots, \mathbf{V}_R)\,(\mathbf{d}_1^T, \mathbf{d}_2^T, \ldots, \mathbf{d}_R^T)^T = \mathbf{U}\mathbf{c} + \mathbf{V}\mathbf{d} \tag{26}$$
where $\mathbf{U} = \mathrm{diag}(\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_R)$; $\mathbf{c} = (\mathbf{c}_1^T, \mathbf{c}_2^T, \ldots, \mathbf{c}_R^T)^T$; $\mathbf{V} = \mathrm{diag}(\mathbf{V}_1, \mathbf{V}_2, \ldots, \mathbf{V}_R)$; and $\mathbf{d} = (\mathbf{d}_1^T, \mathbf{d}_2^T, \ldots, \mathbf{d}_R^T)^T$.
Thus, the smoothing spline component of the MMNR model can be presented in matrix notation as given in Equation (26), that is, f = U c + V d .
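As an illustration, the block-diagonal assembly $f=Uc+Vd$ of Equation (26) can be sketched numerically as follows (a minimal sketch in Python with NumPy; the helper name, toy dimensions, and random inputs are ours, not part of the model):

```python
import numpy as np

def block_diag(*blocks):
    """Place each block on the diagonal of a zero matrix, mirroring diag(.)."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out = np.zeros((rows, cols))
    r = c = 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]
        c += b.shape[1]
    return out

def spline_component(U_blocks, V_blocks, c, d):
    """Evaluate f = U c + V d with U = diag(U_11,...,U_Rp), V = diag(V_11,...,V_Rp)."""
    U = block_diag(*U_blocks)   # per-predictor basis matrices on the diagonal
    V = block_diag(*V_blocks)   # per-predictor Gram matrices on the diagonal
    return U @ c + V @ d

# toy check with R = 1 response, p = 2 predictors, n = 4 observations, m = 2
rng = np.random.default_rng(0)
U_blocks = [rng.normal(size=(4, 2)) for _ in range(2)]
V_blocks = [np.eye(4) for _ in range(2)]   # identity Gram matrices as stand-ins
c = rng.normal(size=4)   # 2 blocks x m = 2 coefficients each
d = rng.normal(size=8)   # 2 blocks x n = 4 coefficients each
f = spline_component(U_blocks, V_blocks, c, d)
```

With identity Gram blocks in this toy setting, the result reduces to $Uc+d$, which makes the assembly easy to verify by hand.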

3.2. Determining Fourier Series Component of MMNR Model

We consider the Fourier series component of the regression function of the MMNR model presented in Equation (3), namely, $\sum_{k=1}^{q}g_{rk}(t_{rki})$. Note that the shape of the curve $g_{rk}(t_{rki})$ is unknown and is only assumed to be contained in the space of continuous functions $C[0,\pi]$. Based on Equation (4), this component is approximated by a Fourier series function as follows:
$$g_{rk}(t_{rki})=b_{rk}t_{rki}+\tfrac12\alpha_{0rk}+\sum_{v=1}^{K}\alpha_{vrk}\cos(vt_{rki})=b_{rk}t_{rki}+\tfrac12\alpha_{0rk}+\alpha_{1rk}\cos t_{rki}+\alpha_{2rk}\cos 2t_{rki}+\cdots+\alpha_{Krk}\cos Kt_{rki},\quad k=1,2,\ldots,q.$$
Next, for i = 1,2 , , n , we may express the Fourier series function given in Equation (27) as follows:
$$g_{rk}=\begin{pmatrix}t_{rk1} & 1/2 & \cos t_{rk1} & \cos 2t_{rk1} & \cdots & \cos Kt_{rk1}\\ t_{rk2} & 1/2 & \cos t_{rk2} & \cos 2t_{rk2} & \cdots & \cos Kt_{rk2}\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ t_{rkn} & 1/2 & \cos t_{rkn} & \cos 2t_{rkn} & \cdots & \cos Kt_{rkn}\end{pmatrix}\begin{pmatrix}b_{rk}\\ \alpha_{0rk}\\ \alpha_{1rk}\\ \vdots\\ \alpha_{Krk}\end{pmatrix}=T_{rk}\alpha_{rk},\quad k=1,2,\ldots,q.$$
Hence, based on Equation (28), we can write the Fourier series component of the regression function of the MMNR model for i = 1,2 , , n as follows:
$$g_{r}=\sum_{k=1}^{q}g_{rk}=\sum_{k=1}^{q}T_{rk}\alpha_{rk}=T_{r1}\alpha_{r1}+T_{r2}\alpha_{r2}+\cdots+T_{rq}\alpha_{rq}=T_{r}\alpha_{r}$$
where $\alpha_{r}=(b_{r1},\alpha_{0r1},\alpha_{1r1},\ldots,\alpha_{Kr1},\ldots,b_{rq},\alpha_{0rq},\alpha_{1rq},\ldots,\alpha_{Krq})^{T}$ for $r=1,2,\ldots,R$, and $T_{r}=(T_{r1}\mid T_{r2}\mid\cdots\mid T_{rq})$ is the $n\times q(K+2)$ matrix whose $k$th block is
$$T_{rk}=\begin{pmatrix}t_{rk1} & 1/2 & \cos t_{rk1} & \cdots & \cos Kt_{rk1}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ t_{rkn} & 1/2 & \cos t_{rkn} & \cdots & \cos Kt_{rkn}\end{pmatrix}.$$
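For instance, the design matrix $T_{rk}$ of Equation (28) can be built column by column (a sketch in Python with NumPy; the function name and sample points are ours):

```python
import numpy as np

def fourier_design(t, K):
    """Columns [t, 1/2, cos(t), cos(2t), ..., cos(Kt)] of Equation (28);
    multiplying by (b, a_0, a_1, ..., a_K) gives the series approximation."""
    t = np.asarray(t, dtype=float)
    cols = [t, np.full_like(t, 0.5)] + [np.cos(v * t) for v in range(1, K + 1)]
    return np.column_stack(cols)

# n = 5 sample points on [0, pi], oscillation parameter K = 2
T_rk = fourier_design(np.linspace(0.0, np.pi, 5), K=2)
# T_rk has shape (5, K + 2): one row per observation
```

The full $T_r$ of Equation (29) is then the horizontal concatenation of the $q$ blocks $T_{r1},\ldots,T_{rq}$.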
Furthermore, based on Equation (29) and in the same way as in the process of obtaining Equation (26), we obtain the Fourier series component for all responses r = 1,2 , ,   R , namely, g , as follows:
$$g=\left(g_{11},g_{12},\ldots,g_{1n},g_{21},g_{22},\ldots,g_{2n},\ldots,g_{R1},g_{R2},\ldots,g_{Rn}\right)^{T}=\operatorname{diag}(T_{1},T_{2},\ldots,T_{R})\left(\alpha_{1}^{T},\alpha_{2}^{T},\ldots,\alpha_{R}^{T}\right)^{T}=T\alpha$$
where
  • $g=(g_{11},g_{12},\ldots,g_{1n},g_{21},g_{22},\ldots,g_{2n},\ldots,g_{R1},g_{R2},\ldots,g_{Rn})^{T}$;
  • $T=\operatorname{diag}(T_{1},T_{2},\ldots,T_{R})$, with $T_{r}=(T_{r1}\mid T_{r2}\mid\cdots\mid T_{rq})$ as in Equation (29);
  • $\alpha=(\alpha_{1}^{T},\alpha_{2}^{T},\ldots,\alpha_{R}^{T})^{T}$, with $\alpha_{r}=(b_{r1},\alpha_{0r1},\ldots,\alpha_{Kr1},\ldots,b_{rq},\alpha_{0rq},\ldots,\alpha_{Krq})^{T}$.

3.3. Determining Goodness of Fit and Penalty Components of PWLS Optimization

To estimate the regression function of the MMNR model, we use the PWLS as presented in Equation (5), namely, by taking the solution to the following PWLS optimization:
$$\min_{f_{rj}\in W_{2}^{m}[a_{j},b_{j}]}\left\{n^{-1}\sum_{i=1}^{n}w_{ri}\left(y_{ri}-\sum_{j=1}^{p}f_{rj}(x_{rji})-\sum_{k=1}^{q}\left(b_{rk}t_{rki}+\tfrac12\alpha_{0rk}+\sum_{v=1}^{K}\alpha_{vrk}\cos(vt_{rki})\right)\right)^{2}+\sum_{j=1}^{p}\lambda_{rj}\int_{a_{j}}^{b_{j}}\left(f_{rj}^{(m)}(x_{rj})\right)^{2}dx_{rj}\right\}$$
where $0<\lambda_{rj}<\infty$; $r=1,2,\ldots,R$; $j=1,2,\ldots,p$; and $w_{ri}$ represents a weight given by the inverse of the random-error variance.
Based on Equations (8) and (9), the MMNR model can be expressed as $y=f+g+\varepsilon$, or $z=f+\varepsilon$ where $z=y-g$. Therefore, by considering Equation (26), the PWLS optimization can be written in matrix notation as follows:
$$Q(c,d)=n^{-1}(z-f)^{T}W(z-f)+\sum_{j=1}^{p}\lambda_{rj}\int_{a_{j}}^{b_{j}}\left(f_{rj}^{(m)}(x_{rj})\right)^{2}dx_{rj}$$
where W is a symmetrical weight matrix that is the inverse of a covariance matrix of random errors. Details of the weight matrix can be found in Lestari et al. [18,35]. In addition, the PWLS optimization function presented in Equation (31) shows that the goodness of fit component of PWLS optimization is n 1 z f T W z f .
Next, we determine the penalty component of the PWLS optimization. In the PWLS optimization function presented in Equation (31), $\sum_{j=1}^{p}\lambda_{rj}\int_{a_{j}}^{b_{j}}\left(f_{rj}^{(m)}(x_{rj})\right)^{2}dx_{rj}$ is the penalty component. In this step, we also express $\int_{a_{j}}^{b_{j}}\left(f_{rj}^{(m)}(x_{rj})\right)^{2}dx_{rj}$ in matrix notation as follows:
$$\int_{a_{j}}^{b_{j}}\left(f_{rj}^{(m)}(x_{rj})\right)^{2}dx_{rj}=\left\|P_{1}f_{rj}\right\|^{2}=\left\langle P_{1}f_{rj},P_{1}f_{rj}\right\rangle=\left\langle P_{1}\!\left(\gamma_{rj}^{T}c_{rj}+\beta_{rj}^{T}d_{rj}\right),P_{1}\!\left(\gamma_{rj}^{T}c_{rj}+\beta_{rj}^{T}d_{rj}\right)\right\rangle=d_{rj}^{T}\begin{pmatrix}\langle\beta_{rj1},\beta_{rj1}\rangle & \cdots & \langle\beta_{rj1},\beta_{rjn}\rangle\\ \vdots & \ddots & \vdots\\ \langle\beta_{rjn},\beta_{rj1}\rangle & \cdots & \langle\beta_{rjn},\beta_{rjn}\rangle\end{pmatrix}d_{rj}=d_{rj}^{T}V_{rj}\,d_{rj}.$$
Hence, based on Equation (32), we obtain the penalty component of PWLS optimization, which can be expressed in matrix notation for the r th response as follows:
$$\sum_{j=1}^{p}\lambda_{rj}\int_{a_{j}}^{b_{j}}\left(f_{rj}^{(m)}(x_{rj})\right)^{2}dx_{rj}=\sum_{j=1}^{p}\lambda_{rj}\,d_{rj}^{T}V_{rj}\,d_{rj}=d_{r}^{T}\operatorname{diag}(\lambda_{r1}I,\lambda_{r2}I,\ldots,\lambda_{rp}I)\operatorname{diag}(V_{r1},V_{r2},\ldots,V_{rp})\,d_{r}=d_{r}^{T}L_{r}V_{r}\,d_{r}.$$
Furthermore, based on Equation (33) and in the same way as in the process of obtaining Equation (26), the penalty component for all responses r = 1,2 , , R can be obtained as follows:
Let $P_{r}=\sum_{j=1}^{p}\lambda_{rj}\int_{a_{j}}^{b_{j}}\left(f_{rj}^{(m)}(x_{rj})\right)^{2}dx_{rj}=\sum_{j=1}^{p}\lambda_{rj}\,d_{rj}^{T}V_{rj}\,d_{rj}$ be the $r$th response penalty. The penalty component for all responses $r=1,2,\ldots,R$ is then presented by $P$ as follows:
$$P=d^{T}LVd$$
where
  • $d=(d_{11}^{T},\ldots,d_{1p}^{T},d_{21}^{T},\ldots,d_{2p}^{T},\ldots,d_{R1}^{T},\ldots,d_{Rp}^{T})^{T}$;
  • $L=\operatorname{diag}(\lambda_{11}I,\ldots,\lambda_{1p}I,\lambda_{21}I,\ldots,\lambda_{2p}I,\ldots,\lambda_{R1}I,\ldots,\lambda_{Rp}I)$;
  • $V=\operatorname{diag}(V_{11},\ldots,V_{1p},V_{21},\ldots,V_{2p},\ldots,V_{R1},\ldots,V_{Rp})$,
with the blocks ordered response by response, as in Equation (26), so that the same vector $d$ appears in both the goodness of fit and penalty components.
Thus, the goodness of fit component of the PWLS optimization is presented as follows:
$$n^{-1}\sum_{r=1}^{R}\sum_{i=1}^{n}w_{ri}\left(y_{ri}-\sum_{j=1}^{p}f_{rj}(x_{rji})-\sum_{k=1}^{q}\left(b_{rk}t_{rki}+\tfrac12\alpha_{0rk}+\sum_{v=1}^{K}\alpha_{vrk}\cos(vt_{rki})\right)\right)^{2}=n^{-1}(z-f)^{T}W(z-f)$$
and the penalty component of the PWLS optimization is presented as follows:
$$\sum_{r=1}^{R}\sum_{j=1}^{p}\lambda_{rj}\int_{a_{j}}^{b_{j}}\left(f_{rj}^{(m)}(x_{rj})\right)^{2}dx_{rj}=d^{T}LVd.$$

3.4. Estimating the MMNR Model

The PWLS optimization presented in Equation (31) is used to obtain the estimation of the MMNR model, together with Equations (26), (30), and (34). In this section, we provide a complete explanation of the estimation process of the MMNR model. From Equation (31), we have the PWLS as follows:
$$\begin{aligned}Q(c,d)&=n^{-1}(z-f)^{T}W(z-f)+\sum_{j=1}^{p}\lambda_{rj}\int_{a_{j}}^{b_{j}}\left(f_{rj}^{(m)}(x_{rj})\right)^{2}dx_{rj}=n^{-1}(z-f)^{T}W(z-f)+d^{T}LVd\\&=n^{-1}(z-Uc-Vd)^{T}W(z-Uc-Vd)+d^{T}LVd=n^{-1}\left\{(z-Uc-Vd)^{T}W(z-Uc-Vd)+n\,d^{T}LVd\right\}\\&=n^{-1}\left\{z^{T}Wz-z^{T}WUc-z^{T}WVd-c^{T}U^{T}Wz+c^{T}U^{T}WUc+c^{T}U^{T}WVd-d^{T}V^{T}Wz+d^{T}V^{T}WUc+d^{T}V^{T}WVd+n\,d^{T}LVd\right\}.\end{aligned}$$
In this step, we first determine the estimator for d , namely, d ^ , as follows:
$$\frac{\partial Q(c,d)}{\partial d}\bigg|_{d=\hat{d}}=0\;\Rightarrow\;-2V^{T}Wz+2V^{T}WUc+2V^{T}WV\hat{d}+2nLV\hat{d}=0\;\Rightarrow\;-Wz+WUc+(WV+nL)\hat{d}=0,$$
using the symmetry and block-diagonal structure of $V$ and $L$ to factor out $V^{T}$. Let $H=WV+nL$, so that Equation (35) can be written as follows:
$$-Wz+WUc+H\hat{d}=0\;\Rightarrow\;\hat{d}=H^{-1}\left(Wz-WUc\right).$$
Next, we determine the estimator for c , namely, c ^ , as follows:
$$\frac{\partial Q(c,d)}{\partial c}\bigg|_{c=\hat{c}}=0\;\Rightarrow\;-2U^{T}Wz+2U^{T}WU\hat{c}+2U^{T}WVd=0.$$
By substituting Equation (36) into Equation (37), we obtain the following equation:
$$-2U^{T}Wz+2U^{T}WU\hat{c}+2U^{T}WVH^{-1}\left(Wz-WU\hat{c}\right)=0.$$
Since $H=WV+nL$, we have $WVH^{-1}=I-nLH^{-1}$, and we may therefore write Equation (38) as follows:
$$-2U^{T}Wz+2U^{T}WU\hat{c}+2U^{T}\left(I-nLH^{-1}\right)\left(Wz-WU\hat{c}\right)=0.$$
Equation (39) returns the following result:
$$\hat{c}=\left(U^{T}H^{-1}WU\right)^{-1}U^{T}H^{-1}Wz.$$
By substituting Equation (40) into Equation (36), we obtain the following result:
$$\hat{d}=H^{-1}W\left[I-U\left(U^{T}H^{-1}WU\right)^{-1}U^{T}H^{-1}W\right]z.$$
Hence, based on Equations (26), (40) and (41), we obtain the following estimated smoothing spline component:
$$\hat{f}=U\hat{c}+V\hat{d}=\left\{U\left(U^{T}H^{-1}WU\right)^{-1}U^{T}H^{-1}W+VH^{-1}W\left[I-U\left(U^{T}H^{-1}WU\right)^{-1}U^{T}H^{-1}W\right]\right\}z=A(\lambda,K)\left(y-g\right).$$
Thus, the estimation of the smoothing spline component of the MMNR model notated as f ^ λ , K ( x , t ) is given by
$$\hat{f}_{\lambda,K}(x,t)=A(\lambda,K)\left(y-g\right)=A(\lambda,K)\,y-A(\lambda,K)\,g,$$
where $A(\lambda,K)=U\left(U^{T}H^{-1}WU\right)^{-1}U^{T}H^{-1}W+VH^{-1}W\left[I-U\left(U^{T}H^{-1}WU\right)^{-1}U^{T}H^{-1}W\right]$, and $W$ is a symmetric weight matrix, namely a block-diagonal matrix given by the inverse of the covariance matrix of the random errors. In this case, based on the assumption on the random errors of the MMNR model, the weight matrix $W$ is given as follows [18,35]:
$$W=\left[\operatorname{diag}(W_{1},W_{2},\ldots,W_{R})\right]^{-1}$$
where
$$W_{r}=\begin{pmatrix}\sigma_{r1}^{2} & \sigma_{r(1,2)} & \cdots & \sigma_{r(1,n)}\\ \sigma_{r(2,1)} & \sigma_{r2}^{2} & \cdots & \sigma_{r(2,n)}\\ \vdots & \vdots & \ddots & \vdots\\ \sigma_{r(n,1)} & \sigma_{r(n,2)} & \cdots & \sigma_{rn}^{2}\end{pmatrix},\quad r=1,2,\ldots,R.$$
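Numerically, the weight matrix $W=[\operatorname{diag}(W_{1},\ldots,W_{R})]^{-1}$ can be formed by inverting the block-diagonal error covariance matrix (a sketch in Python with NumPy; the AR(1)-style covariance blocks are illustrative stand-ins, not the paper's):

```python
import numpy as np

def weight_matrix(cov_blocks):
    """W = [diag(W_1, ..., W_R)]^{-1}: inverse of the block-diagonal
    error covariance matrix, one n x n block per response."""
    n_tot = sum(B.shape[0] for B in cov_blocks)
    Sigma = np.zeros((n_tot, n_tot))
    i = 0
    for B in cov_blocks:
        k = B.shape[0]
        Sigma[i:i + k, i:i + k] = B   # place the r-th covariance block
        i += k
    return np.linalg.inv(Sigma)

# two responses, n = 3, with AR(1)-style within-response covariances
rho = 0.95
Sig = np.array([[1.0, rho, rho**2],
                [rho, 1.0, rho],
                [rho**2, rho, 1.0]])
W = weight_matrix([Sig, 0.25 * Sig])
```

Because the covariance is block diagonal, its inverse is too, so off-diagonal blocks of $W$ remain zero.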
In the previous explanation, we obtained f ^ λ , K ( x , t ) , which is presented in (42). The next step is determining the Fourier series component of the regression function of the MMNR model. Let us look again at the MMNR model presented in Equation (8):
y = f + g + ε
We substitute Equation (42) into Equation (8), so that we obtain the following equation:
$$y=f+g+\varepsilon=A(\lambda,K)(y-g)+g+\varepsilon\;\Rightarrow\;\varepsilon=y-\left[A(\lambda,K)(y-g)+g\right]=y-A(\lambda,K)\,y+A(\lambda,K)\,g-g.$$
Next, we substitute Equation (30), namely, g = T α , into Equation (43) such that we obtain the equation for random error as follows:
$$\varepsilon=y-A(\lambda,K)\,y+A(\lambda,K)\,T\alpha-T\alpha=\left[I-A(\lambda,K)\right]\left(y-T\alpha\right).$$
Hence, we have
$$\varepsilon^{T}\varepsilon=\left[(I-A(\lambda,K))(y-T\alpha)\right]^{T}\left[(I-A(\lambda,K))(y-T\alpha)\right]=y^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))\,y-2\alpha^{T}T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))\,y+\alpha^{T}T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))\,T\alpha.$$
Furthermore, based on Equation (44), we determine the estimator for α , namely, α ^ , as follows:
$$\frac{\partial(\varepsilon^{T}\varepsilon)}{\partial\alpha}\bigg|_{\alpha=\hat{\alpha}}=0\;\Rightarrow\;-2T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))\,y+2T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))\,T\hat{\alpha}=0\;\Rightarrow\;\hat{\alpha}=\left[T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))\,T\right]^{-1}T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))\,y.$$
Additionally, by substituting Equation (45) into Equation (30), we obtain the estimator for the Fourier series component of the regression function of the MMNR model, namely, g ^ λ , K ( x , t ) , as follows:
$$\hat{g}_{\lambda,K}(x,t)=T\hat{\alpha}=T\left[T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))\,T\right]^{-1}T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))\,y=G(\lambda,K)\,y$$
where $G(\lambda,K)=T\left[T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))\,T\right]^{-1}T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))$.
Hereinafter, by substituting Equation (46) into Equation (42) we obtain the estimator for the smoothing spline component of the regression function of the MMNR model as follows:
$$\hat{f}_{\lambda,K}(x,t)=A(\lambda,K)\left[I-G(\lambda,K)\right]y=F(\lambda,K)\,y$$
where $F(\lambda,K)=A(\lambda,K)\left[I-G(\lambda,K)\right]$ and $G(\lambda,K)$ is given in Equation (46).
Finally, based on Equations (2), (8), (46) and (47), we obtain the estimation of the regression function of the MMNR model, which consists of estimations of the smoothing spline component and Fourier series component as follows:
$$\hat{m}_{\lambda,K}(x,t)=\hat{f}_{\lambda,K}(x,t)+\hat{g}_{\lambda,K}(x,t).$$
Thus, the estimation result of the MMNR model by using the mixed smoothing spline and Fourier series estimator is as follows:
$$\hat{y}=\hat{m}_{\lambda,K}(x,t)=\hat{f}_{\lambda,K}(x,t)+\hat{g}_{\lambda,K}(x,t)=F(\lambda,K)\,y+G(\lambda,K)\,y=\left[F(\lambda,K)+G(\lambda,K)\right]y$$
where λ represents the smoothing parameter of the smoothing spline estimator, K represents the oscillation parameter of the Fourier series estimator,
$F(\lambda,K)=A(\lambda,K)\left[I-G(\lambda,K)\right]$, $G(\lambda,K)=T\left[T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))T\right]^{-1}T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))$, and $A(\lambda,K)=U\left(U^{T}H^{-1}WU\right)^{-1}U^{T}H^{-1}W+VH^{-1}W\left[I-U\left(U^{T}H^{-1}WU\right)^{-1}U^{T}H^{-1}W\right]$.
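The closed forms above can be sketched numerically as follows (Python with NumPy; this follows our reading of the reconstructed Equations (42)–(48), and all inputs here are randomly generated stand-ins rather than the RKHS matrices of a real fit):

```python
import numpy as np

def mixed_estimator(y, U, V, T, W, L, n):
    """Sketch: A(lambda,K) from Eq. (42), then G(lambda,K), F(lambda,K),
    and y_hat = [F(lambda,K) + G(lambda,K)] y from Eqs. (46)-(48)."""
    H = W @ V + n * L
    Hinv = np.linalg.inv(H)
    S = U @ np.linalg.inv(U.T @ Hinv @ W @ U) @ U.T @ Hinv @ W
    A = S + V @ Hinv @ W @ (np.eye(len(y)) - S)
    R = np.eye(len(y)) - A          # residual operator I - A(lambda,K)
    G = T @ np.linalg.solve(T.T @ R.T @ R @ T, T.T @ R.T @ R)
    F = A @ (np.eye(len(y)) - G)
    return A, F, G, (F + G) @ y

# toy stand-ins: n = 12 observations, 3 spline basis and 4 Fourier columns
rng = np.random.default_rng(1)
n = 12
y = rng.normal(size=n)
U = rng.normal(size=(n, 3))
B = rng.normal(size=(n, n))
V = B @ B.T + n * np.eye(n)        # symmetric positive definite Gram stand-in
T = rng.normal(size=(n, 4))
W = np.eye(n)                      # identity weights for the sketch
L = 0.01 * np.eye(n)               # stand-in for diag(lambda_rj I)
A, F, G, y_hat = mixed_estimator(y, U, V, T, W, L, n)
```

Because $G(\lambda,K)$ is a (generally oblique) projection onto the column space of $T$, it satisfies $G^{2}=G$ and $GT=T$, which provides a convenient numerical self-check of the implementation.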
Next, in the following section, we discuss the estimation of weight matrix W.

3.5. Estimating Weight Matrix W

Suppose that we have a paired dataset $(x_{r1i},x_{r2i},\ldots,x_{rpi},t_{r1i},t_{r2i},\ldots,t_{rqi},y_{ri})$, for $r=1,2,\ldots,R$ and $i=1,2,\ldots,n$, that follows the MMNR model, and assume that $y=(y_{1}^{T},y_{2}^{T},\ldots,y_{R}^{T})^{T}$ is a normally distributed random sample with mean $m$ and covariance matrix $W^{-1}$, so that we have the likelihood function
$$L(m,W\mid y)=\prod_{r=1}^{R}(2\pi)^{-n/2}\left|W_{r}^{-1}\right|^{-1/2}\exp\!\left\{-\tfrac12\,(y_{r}-m_{r})^{T}W_{r}\,(y_{r}-m_{r})\right\}.$$
Since $W=\operatorname{diag}(W_{1},W_{2},\ldots,W_{R})$, the likelihood function factorizes response by response:
$$L(m,W\mid y)=\left[(2\pi)^{-n/2}\left|W_{1}^{-1}\right|^{-1/2}\exp\!\left\{-\tfrac12(y_{1}-m_{1})^{T}W_{1}(y_{1}-m_{1})\right\}\right]\times\cdots\times\left[(2\pi)^{-n/2}\left|W_{R}^{-1}\right|^{-1/2}\exp\!\left\{-\tfrac12(y_{R}-m_{R})^{T}W_{R}(y_{R}-m_{R})\right\}\right].$$
Hence, the estimated weight matrix can be obtained by determining the solution to the following optimization:
$$Q=\max_{W}L(m,W\mid y)=\prod_{r=1}^{R}\max_{W_{r}}\left[(2\pi)^{-n/2}\left|W_{r}^{-1}\right|^{-1/2}\exp\!\left\{-\tfrac12(y_{r}-m_{r})^{T}W_{r}(y_{r}-m_{r})\right\}\right].$$
Hereinafter, according to Johnson and Wichern [46], we can determine the maximum value of each component of the likelihood function by taking the following equations:
$$\hat{W}_{r}=\frac{\hat{\varepsilon}_{r}\hat{\varepsilon}_{r}^{T}}{Rn}=\frac{\left(y_{r}-\hat{m}_{r}\right)\left(y_{r}-\hat{m}_{r}\right)^{T}}{Rn},\quad r=1,2,\ldots,R.$$
Furthermore, based on Equation (48), we have
$$\hat{y}=\hat{m}_{\lambda,K}(x,t)=M(\lambda,K)\,y$$
where $M(\lambda,K)=F(\lambda,K)+G(\lambda,K)$. Hence, we obtain the maximum likelihood estimator for the weight matrix $W$ as follows:
$$\hat{W}=\operatorname{diag}(\hat{W}_{1},\hat{W}_{2},\ldots,\hat{W}_{R})$$
where $\hat{W}_{r}=\dfrac{\left[I-M_{r}(\lambda,K,\hat{\sigma}_{r}^{2})\right]y_{r}\,y_{r}^{T}\left[I-M_{r}(\lambda,K,\hat{\sigma}_{r}^{2})\right]^{T}}{Rn}$ for $r=1,2,\ldots,R$.
This shows that the estimated weight matrix obtained is a symmetric matrix, especially a diagonal matrix whose main diagonal components are the estimated weight matrices of the first response, second response, and so on, up to the R-th response.
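One diagonal block of $\hat{W}$ can be computed from the residuals of the corresponding response (a sketch in Python with NumPy; the function name and the zero hat matrix used in the demonstration are ours):

```python
import numpy as np

def estimate_weight_block(y_r, M_r, R, n):
    """Sketch of W_hat_r = (I - M_r) y_r y_r^T (I - M_r)^T / (R n):
    the r-th diagonal block of the estimated weight matrix W_hat."""
    e = (np.eye(len(y_r)) - M_r) @ y_r   # residuals of the r-th response
    return np.outer(e, e) / (R * n)

# demonstration with a zero hat matrix, so the residuals equal y_r itself
y_r = np.array([1.0, 2.0, 3.0])
W1_hat = estimate_weight_block(y_r, np.zeros((3, 3)), R=2, n=3)
```

By construction the block is symmetric, so $\hat{W}=\operatorname{diag}(\hat{W}_{1},\ldots,\hat{W}_{R})$ is a symmetric block-diagonal matrix, as stated above.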
Next, in the following section, we provide discussion on selecting optimal smoothing and oscillation parameters in the MMNR model.

3.6. Selecting Optimal Smoothing and Oscillation Parameters in the MMNR Model

In statistical analysis using an MMNR model approach based on the mixed smoothing spline and Fourier series estimator, selecting the optimal smoothing and oscillation parameters, $(\lambda_{Opt},K_{opt})$, is crucial for good regression function fitting, because the mixed estimator depends strongly on both parameters. Several criteria are available for selecting the smoothing parameter, including minimizing CV (cross-validation), GCV (generalized cross-validation), Mallows' $C_{p}$, and the AIC (Akaike's information criterion) [1,12]. According to Ruppert and Carroll [47], Mallows' $C_{p}$ and GCV perform very satisfactorily for fitting regression functions based on spline estimators.
In this study, we use the GCV criterion, developed here for the multiresponse case, to determine the optimal smoothing and oscillation parameter values for good regression function fitting of the multiresponse multipredictor nonparametric regression (MMNR) model. Firstly, we determine the mean squared error (MSE) of the regression function presented in Equation (48) as follows:
$$\mathrm{MSE}(\lambda,K)=n^{-1}(y-\hat{y})^{T}W(y-\hat{y})=n^{-1}\left[y-\hat{m}_{\lambda,K}(x,t)\right]^{T}W\left[y-\hat{m}_{\lambda,K}(x,t)\right]=n^{-1}\left\|W^{1/2}\left\{I-\left[F(\lambda,K)+G(\lambda,K)\right]\right\}y\right\|^{2}$$
where λ represents the smoothing parameter of the smoothing spline estimator, K represents the oscillation parameter of the Fourier series estimator,
  • $F(\lambda,K)=A(\lambda,K)\left[I-G(\lambda,K)\right]$,
  • $G(\lambda,K)=T\left[T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))T\right]^{-1}T^{T}(I-A(\lambda,K))^{T}(I-A(\lambda,K))$, and
  • $A(\lambda,K)=U\left(U^{T}H^{-1}WU\right)^{-1}U^{T}H^{-1}W+VH^{-1}W\left[I-U\left(U^{T}H^{-1}WU\right)^{-1}U^{T}H^{-1}W\right]$.
Secondly, based on the M S E λ , K given in Equation (49), we have the GCV (generalized cross-validation) for the MMNR model as follows:
$$\mathrm{GCV}(\lambda,K)=\frac{\mathrm{MSE}(\lambda,K)}{\left(n^{-1}\operatorname{trace}\left\{I-\left[F(\lambda,K)+G(\lambda,K)\right]\right\}\right)^{2}}=\frac{n^{-1}\left\|W^{1/2}\left\{I-\left[F(\lambda,K)+G(\lambda,K)\right]\right\}y\right\|^{2}}{\left(n^{-1}\operatorname{trace}\left\{I-\left[F(\lambda,K)+G(\lambda,K)\right]\right\}\right)^{2}}.$$
Hence, based on the GCV function presented in (50), we can obtain the optimal smoothing parameter ( λ ) and optimal oscillation parameter (K) values, namely, ( λ O p t , K o p t ), by taking the solution to the following optimization:
$$\mathrm{GCV}\left(\lambda_{Opt},K_{opt}\right)=\min_{\lambda\in\mathbb{R}^{+},\,K\in\mathbb{Z}^{+}}\mathrm{GCV}(\lambda,K)=\min_{\lambda\in\mathbb{R}^{+},\,K\in\mathbb{Z}^{+}}\frac{n^{-1}\left\|W^{1/2}\left\{I-\left[F(\lambda,K)+G(\lambda,K)\right]\right\}y\right\|^{2}}{\left(n^{-1}\operatorname{trace}\left\{I-\left[F(\lambda,K)+G(\lambda,K)\right]\right\}\right)^{2}}$$
where $\mathbb{R}^{+}$ denotes the set of positive real numbers and $\mathbb{Z}^{+}$ the set of positive integers. Thus, the optimal values of the smoothing and oscillation parameters, $(\lambda_{Opt},K_{opt})$, are obtained by solving the optimization in Equation (51).
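The grid search behind Equation (51) can be sketched as follows. Since building the full $F(\lambda,K)+G(\lambda,K)$ requires the RKHS matrices, we use a simplified ridge-type hat matrix built from the Fourier design of Equation (28) as a stand-in for $M(\lambda,K)$ (a sketch in Python with NumPy; the function names and grids are ours):

```python
import numpy as np

def gcv_select(y, t, lambdas, Ks):
    """Grid search for (lambda_Opt, K_opt) minimizing a GCV of the form
    MSE / (trace(I - M)/n)^2, with a ridge-type hat matrix as stand-in."""
    n = len(y)
    best = (np.inf, None, None)
    for K in Ks:
        # Fourier design of Equation (28): [t, 1/2, cos(vt), v = 1..K]
        X = np.column_stack([t, np.full(n, 0.5)] +
                            [np.cos(v * t) for v in range(1, K + 1)])
        for lam in lambdas:
            M = X @ np.linalg.solve(X.T @ X + n * lam * np.eye(X.shape[1]), X.T)
            r = (np.eye(n) - M) @ y
            gcv = (r @ r / n) / (np.trace(np.eye(n) - M) / n) ** 2
            if gcv < best[0]:
                best = (gcv, lam, K)
    return best  # (minimum GCV, lambda_Opt, K_opt)

t = np.linspace(0.0, np.pi, 40)
best_gcv, lam_opt, K_opt = gcv_select(np.cos(2.0 * t), t,
                                      [1e-4, 1e-2, 1.0], [1, 2, 3])
```

For this noiseless cos(2t) signal, the search should prefer an oscillation parameter of at least 2, since the $K=1$ design cannot represent the second harmonic.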
Furthermore, in the following section, we determine one of the statistically sound estimator properties, namely, the consistency property.

3.7. Consistency of Regression Function Estimator of MMNR Model

To investigate the consistency of the regression function estimator of the MMNR model, $\hat{m}_{\lambda,K}(x,t)$, we first investigate the asymptotic properties of the mixed smoothing spline and Fourier series estimator based on an integrated mean squared error (IMSE) criterion. In this study, we develop the IMSE from the uniresponse, one-predictor case proposed by Wand and Jones [48] and the multiresponse, one-predictor case proposed by Lestari et al. [35] into the multiresponse, multipredictor case. By considering Equation (48), the IMSE can be decomposed into a $\mathrm{Bias}^{2}(\delta)$ component and a $\mathrm{Var}(\delta)$ component as follows:
$$\mathrm{IMSE}(\delta)=E\int_{a}^{b}\left[m_{\lambda,K}(x,t)-\hat{m}_{\lambda,K}(x,t)\right]^{T}W\left[m_{\lambda,K}(x,t)-\hat{m}_{\lambda,K}(x,t)\right]dt=\mathrm{Bias}^{2}(\delta)+\mathrm{Var}(\delta)$$
where
$$\mathrm{Bias}^{2}(\delta)=\int_{a}^{b}E\left[m_{\lambda,K}(x,t)-E\hat{m}_{\lambda,K}(x,t)\right]^{T}W\left[m_{\lambda,K}(x,t)-E\hat{m}_{\lambda,K}(x,t)\right]dt,$$
$$\mathrm{Var}(\delta)=\int_{a}^{b}E\left[E\hat{m}_{\lambda,K}(x,t)-\hat{m}_{\lambda,K}(x,t)\right]^{T}W\left[E\hat{m}_{\lambda,K}(x,t)-\hat{m}_{\lambda,K}(x,t)\right]dt,$$
$\delta=(\lambda,K)$, $W$ is a weight matrix, and $\hat{m}_{\lambda,K}(x,t)$ is the regression function estimator of the MMNR model given in Equation (48).
Next, by using Theorem 2 in Lestari et al. [35], we obtain $\mathrm{Bias}^{2}(\delta)\leq O(\delta)$ as $n\to\infty$, where $O(\cdot)$ represents the "big oh"; details of the "big oh" can be found in Wand and Jones [48] and Sen and Singer [49]. Also, by using Theorem 3 in Lestari et al. [35], we obtain $\mathrm{Var}(\delta_{r})\leq O\!\left(\frac{1}{n\,\delta_{r}^{1/(2m)}}\right)$ as $n\to\infty$ for every response $r=1,2,\ldots,R$. Hence, we obtain the asymptotic property of the mixed regression function estimator of the MMNR model based on the IMSE criterion as follows:
$$\mathrm{IMSE}(\delta)=\mathrm{Bias}^{2}(\delta)+\mathrm{Var}(\delta)\leq O(\delta)+O(\theta)\quad\text{as }n\to\infty$$
where $\delta=(\delta_{1},\delta_{2},\ldots,\delta_{R})^{T}$, $\theta=\left(\frac{1}{n\delta_{1}^{1/(2m)}},\frac{1}{n\delta_{2}^{1/(2m)}},\ldots,\frac{1}{n\delta_{R}^{1/(2m)}}\right)^{T}$, and $\delta_{r}=(\lambda_{r},K_{r})$ for $r=1,2,\ldots,R$.
Furthermore, based on Equation (53), we obtain an inequality as follows:
$$\mathrm{IMSE}(\delta)=\mathrm{Bias}^{2}(\delta)+\mathrm{Var}(\delta)\leq O(\delta)+O(\theta)\leq O(n\delta)\quad\text{as }n\to\infty.$$
Hence, according to Sen and Singer [49] and Serfling [50], for any small positive number, ϑ > 0 , we obtain the following relationship:
$$\lim_{n\to\infty}P\!\left(\frac{\mathrm{IMSE}(\delta)}{n\delta}>\vartheta\right)\leq\lim_{n\to\infty}P\!\left(\mathrm{IMSE}(\delta)>\vartheta\right)=0.$$
Since P I M S E ( δ ) > ϑ = 1 P ( I M S E δ ϑ ) then by considering Equation (54) and by applying the properties of probability, we obtain the following equation:
$$\lim_{n\to\infty}\left[1-P\!\left(\mathrm{IMSE}(\delta)\leq\vartheta\right)\right]=0\;\Rightarrow\;1-\lim_{n\to\infty}P\!\left(\mathrm{IMSE}(\delta)\leq\vartheta\right)=0\;\Rightarrow\;\lim_{n\to\infty}P\!\left(\mathrm{IMSE}(\delta)\leq\vartheta\right)=1.$$
Equation (55) shows that the regression function estimator, m ^ λ , K ( x , t ) , of the multiresponse multipredictor nonparametric regression (MMNR) model obtained by using the mixed smoothing spline and Fourier series estimator is a consistent estimator based on the integrated mean squared error (IMSE) criterion.
Next, in the following section, we provide a simulation study.

3.8. Simulation Study

In this section, we provide a simulation study to show the performance of the mixed smoothing spline and Fourier series estimator in estimating the MMNR model; in other words, to show the sensitivity of parameter selection and its impact on model performance. In this simulation study, we generated data of size $n=50$ with a cross-response correlation of 0.95 and an error variance of 0.25. Here, we use the following three-response MMNR model:
$$\begin{aligned}y_{1i}&=17(x_{1i}-0.8)^{3}-1.5\,t_{1i}\cos(6\pi t_{1i})+\varepsilon_{1i},\\ y_{2i}&=18(x_{2i}-0.8)^{3}-1.75\,t_{2i}^{1.2}\cos(6\pi t_{2i})+\varepsilon_{2i},\\ y_{3i}&=19(x_{3i}-0.8)^{3}-2\,t_{3i}^{1.5}\cos(6\pi t_{3i})+\varepsilon_{3i},\end{aligned}\qquad i=1,2,\ldots,50.$$
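Data following our reading of Equation (56) can be generated as follows (a sketch in Python with NumPy; the predictor domains $[0,1]$ and $[0,\pi]$ and the equicorrelated error structure are our assumptions, not stated in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
# assumed predictor domains for the sketch
x = rng.uniform(0.0, 1.0, size=(3, n))
t = rng.uniform(0.0, np.pi, size=(3, n))

# errors: variance 0.25, cross-response correlation 0.95 (equicorrelated)
cov = 0.25 * np.array([[1.0, 0.95, 0.95],
                       [0.95, 1.0, 0.95],
                       [0.95, 0.95, 1.0]])
eps = rng.multivariate_normal(np.zeros(3), cov, size=n).T   # shape (3, n)

# the three responses of Equation (56), as reconstructed here
y1 = 17 * (x[0] - 0.8) ** 3 - 1.5 * t[0] * np.cos(6 * np.pi * t[0]) + eps[0]
y2 = 18 * (x[1] - 0.8) ** 3 - 1.75 * t[1] ** 1.2 * np.cos(6 * np.pi * t[1]) + eps[1]
y3 = 19 * (x[2] - 0.8) ** 3 - 2.0 * t[2] ** 1.5 * np.cos(6 * np.pi * t[2]) + eps[2]
```

Each response combines a polynomial-type trend in $x$ (the smoothing spline part) with an oscillating term in $t$ (the Fourier series part), matching the mixed-pattern setting the estimator targets.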
Next, to show the influence of the smoothing parameters ($\lambda$) and oscillation parameters ($K$), we generated the smoothing parameters randomly and varied $K$ from 1 to 4. We repeated this process 15 times until the optimal values of the smoothing parameters ($\lambda$) and oscillation parameters ($K$) were obtained.
Simulation results including MSE (mean squared error) values, minimum GCV (generalized cross-validation) values, coefficient of determination ( R 2 ) values, and lambda (λ) values for every oscillation parameter value ( K = 1,2 , 3,4 ) are given in Table 1.
Table 1 presents the optimal lambda values based on the minimum GCV values from 15 repetitions of the MMNR model in Equation (56) for every oscillation parameter $K=1,2,3,4$. From Table 1, we can observe that each combination of smoothing parameter ($\lambda$) and oscillation parameter ($K$) values yields a different estimation performance, as indicated by the corresponding changes in the coefficient of determination ($R^{2}$) and MSE values.
Next, based on all the minimum GCV values presented in Table 1, we chose the minimum GCV value from the four oscillation parameter (K) values. In Table 1, the minimum GCV value is produced by K = 1 , namely, 0.5786323, with an MSE value of 1.02363794, R 2 value of 0.95311712, and λ 1 = 0.84886039 ; λ 2 = 0.0021173 ; λ 3 = 0.001 . In Table 1, these values are shown in bold. These values are the optimal values and the best MMNR estimate is obtained based on these optimal values. Therefore, these values are then used to determine the estimated values of the MMNR model and to produce the plots of the estimated values of the MMNR model presented in Equation (56).
Hereinafter, we obtained the plots of estimated model values versus predictor (observation) values for every response, together with the plots of their actual values: Figure 1 for the first response, Figure 2 for the second response, and Figure 3 for the third response.
From Figure 1, Figure 2 and Figure 3, we can observe that the estimated values of the MMNR model obtained using the mixed smoothing spline and Fourier series estimator are close to the actual values: the estimated curves closely follow the actual-value curves. Hence, the results of the simulation study presented in Table 1 and Figure 1, Figure 2 and Figure 3 show that the estimate of the MMNR model in Equation (56) depends on both the smoothing parameter and the oscillation parameter; that is, the selection of these parameters is sensitive and impacts the MMNR model's performance.
Hereinafter, based on the results of the simulation study, we obtained the surface plots of the estimation results of the MMNR model presented in Equation (56). These surface plots of the estimated MMNR model are presented in three-dimensional plots (see Figure 4, Figure 5 and Figure 6).

4. Conclusions

The estimation result of the MMNR model obtained using the mixed smoothing spline and Fourier series estimator, $\hat{y}=\hat{m}_{\lambda,K}(x,t)$, is a combination of the smoothing spline component estimator $\hat{f}_{\lambda,K}(x,t)$ and the Fourier series component estimator $\hat{g}_{\lambda,K}(x,t)$. The estimator of the MMNR model depends strongly on the smoothing parameter ($\lambda$) and the oscillation parameter ($K$); their optimal values, $(\lambda_{Opt},K_{opt})$, are determined by minimizing the function $\mathrm{GCV}(\lambda,K)$. The estimator is linear in the observations, as Equation (48) shows, and the regression function estimator $\hat{m}_{\lambda,K}(x,t)$ is a consistent estimator based on the IMSE criterion. The mixed smoothing spline and Fourier series estimator is therefore statistically a good estimator, satisfying the consistency criterion, and is suitable for analyzing data whose pattern partly changes at certain subintervals while the rest follows a recurring pattern in a certain trend. In addition, the estimated weight matrix is a block-diagonal matrix whose main diagonal blocks are the estimated weight matrices of the first response, the second response, and so on, up to the $R$-th response. The results of this study contribute to the development of statistical inference theories, such as estimation and hypothesis testing, especially the statistical inference theory of nonparametric regression.

Author Contributions

All authors contributed to this research article, namely, conceptualization, N.C., B.L. and I.N.B.; methodology, N.C., B.L. and I.N.B.; software, B.L., N.C. and D.A.; validation, B.L., N.C., I.N.B. and D.A.; formal analysis, B.L., N.C., I.N.B. and D.A.; investigation, resource and data curation, B.L., N.C., D.A. and I.N.B.; writing—original draft preparation, B.L. and N.C.; writing—review and editing, B.L. and N.C.; visualization, B.L. and N.C.; supervision, B.L., N.C., I.N.B. and D.A.; project administration, N.C. and B.L.; funding acquisition, N.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Airlangga Research Fund, Universitas Airlangga, Indonesia through a research scheme of the Airlangga’s Flagship Research (Penelitian Unggulan Airlangga—PUA) with contract number 312/UN3.15/PT/2023, dated 6 February 2023.

Data Availability Statement

All data are contained within this article.

Acknowledgments

The authors thank Airlangga University for technical support. The authors are also grateful to the editors and anonymous peer-reviewers of the Symmetry journal for providing comments, corrections, criticisms, and suggestions that were useful for improving the quality of this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Eubank, R.L. Nonparametric Regression and Spline Smoothing, 2nd ed.; Marcel Dekker: New York, NY, USA, 1999. [Google Scholar]
  2. Cheruiyot, L.R. Local linear regression estimator on the boundary correction in nonparametric regression estimation. J. Stat. Theory Appl. 2020, 19, 460–471. [Google Scholar] [CrossRef]
  3. Cheng, M.-Y.; Huang, T.; Liu, P.; Peng, H. Bias reduction for nonparametric and semiparametric regression models. Stat. Sin. 2018, 28, 2749–2770. [Google Scholar] [CrossRef]
  4. Chamidah, N.; Zaman, B.; Muniroh, L.; Lestari, B. Designing local standard growth charts of children in East Java province using a local linear estimator. Int. J. Innov. Creat. Change 2020, 13, 45–67. [Google Scholar]
  5. Delaigle, A.; Fan, J.; Carroll, R.J. A design-adaptive local polynomial estimator for the errors-in-variables problem. J. Am. Stat. Assoc. 2009, 104, 348–359. [Google Scholar] [CrossRef]
  6. Francisco-Fernandez, M.; Vilar-Fernandez, J.M. Local polynomial regression estimation with correlated errors. Comm. Stat. Theory Methods 2001, 30, 1271–1293. [Google Scholar] [CrossRef]
  7. Benhenni, K.; Degras, D. Local polynomial estimation of the mean function and its derivatives based on functional data and regular designs. ESAIM Probab. Stat. 2014, 18, 881–899. [Google Scholar] [CrossRef]
  8. Kikechi, C.B. On local polynomial regression estimators in finite populations. Int. J. Stats. Appl. Math. 2020, 5, 58–63. [Google Scholar]
  9. Wand, M.P.; Jones, M.C. Kernel Smoothing, 1st ed.; Chapman and Hall/CRC: New York, NY, USA, 1995. [Google Scholar]
  10. Cui, W.; Wei, M. Strong consistency of kernel regression estimate. Open J. Stats. 2013, 3, 179–182. [Google Scholar] [CrossRef]
  11. De Brabanter, K.; De Brabanter, J.; Suykens, J.A.K.; De Moor, B. Kernel regression in the presence of correlated errors. J. Mach. Learn. Res. 2011, 12, 1955–1976. [Google Scholar]
  12. Wahba, G. Spline Models for Observational Data; SIAM: Philadelphia, PA, USA, 1990. [Google Scholar]
  13. Wang, Y. Smoothing Splines: Methods and Applications; Taylor & Francis Group: Boca Raton, FL, USA, 2011. [Google Scholar]
  14. Liu, A.; Qin, L.; Staudenmayer, J. M-type smoothing spline ANOVA for correlated data. J. Multivar. Anal. 2010, 101, 2282–2296. [Google Scholar] [CrossRef]
  15. Gao, J.; Shi, P. M-Type smoothing splines in nonparametric and semiparametric regression models. Stat. Sin. 1997, 7, 1155–1169. [Google Scholar]
  16. Chamidah, N.; Lestari, B.; Massaid, A.; Saifudin, T. Estimating mean arterial pressure affected by stress scores using spline nonparametric regression model approach. Commun. Math. Biol. Neurosci. 2020, 2020, 72. [Google Scholar]
  17. Chamidah, N.; Lestari, B.; Budiantara, I.N.; Saifudin, T.; Rulaningtyas, R.; Aryati, A.; Wardani, P.; Aydin, D. Consistency and asymptotic normality of estimator for parameters in multiresponse multipredictor semiparametric regression model. Symmetry 2022, 14, 336. [Google Scholar] [CrossRef]
  18. Lestari, B.; Chamidah, N.; Budiantara, I.N.; Aydin, D. Determining confidence interval and asymptotic distribution for parameters of multiresponse semiparametric regression model using smoothing spline estimator. J. King Saud Univ.-Sci. 2023, 35, 102664. [Google Scholar] [CrossRef]
  19. Tirosh, S.; De Ville, D.V.; Unser, M. Polyharmonic smoothing splines and the multidimensional Wiener filtering of fractal-like signals. IEEE Trans. Image Process. 2006, 15, 2616–2630. [Google Scholar] [CrossRef]
  20. Irizarry, R.A. Choosing Smoothness Parameters for Smoothing Splines by Minimizing an Estimate of Risk. Available online: https://www.biostat.jhsph.edu/~ririzarr/papers/react-splines.pdf (accessed on 3 February 2024).
21. Adams, S.O.; Ipinyomi, R.A.; Yahaya, H.U. Smoothing spline of ARMA observations in the presence of autocorrelation error. Eur. J. Stat. Prob. 2017, 5, 1–8. [Google Scholar]
  22. Adams, S.O.; Yahaya, H.U.; Nasiru, O.M. Smoothing parameter estimation of the generalized cross-validation and generalized maximum likelihood. IOSR J. Math. 2017, 13, 41–44. [Google Scholar]
23. Lee, T.C.M. Smoothing parameter selection for smoothing splines: A simulation study. Comput. Stat. Data Anal. 2003, 42, 139–148. [Google Scholar] [CrossRef]
  24. Maharani, M.; Saputro, D.R.S. Generalized cross-validation (GCV) in smoothing spline nonparametric regression models. J. Phys. Conf. Ser. 2021, 1808, 12053. [Google Scholar] [CrossRef]
25. Wang, Y.; Ke, C. Smoothing spline semiparametric nonlinear regression models. J. Comput. Graph. Stat. 2009, 18, 165–183. [Google Scholar] [CrossRef]
  26. Gu, C. Smoothing Spline ANOVA Models; Springer: New York, NY, USA, 2002. [Google Scholar]
  27. Sun, X.; Zhong, W.; Ma, P. An asymptotic and empirical smoothing parameters selection method for smoothing spline ANOVA models in large samples. Biometrika 2021, 108, 149–166. [Google Scholar] [CrossRef] [PubMed]
  28. Wang, Y.; Guo, W.; Brown, M.B. Spline smoothing for bivariate data with applications to association between hormones. Stat. Sin. 2000, 10, 377–397. [Google Scholar]
  29. Lu, M.; Liu, Y.; Li, C.-S. Efficient estimation of a linear transformation model for current status data via penalized splines. Stat. Methods Med. Res. 2020, 29, 3–14. [Google Scholar] [CrossRef] [PubMed]
  30. Berry, L.N.; Helwig, N.E. Cross-validation, information theory, or maximum likelihood? A comparison of tuning methods for penalized splines. Stats 2021, 4, 701–724. [Google Scholar] [CrossRef]
  31. Islamiyati, A.; Zakir, M.; Sirajang, N.; Sari, U.; Affan, F.; Usrah, M.J. The use of penalized weighted least square to overcome correlations between two responses. BAREKENG J. Ilmu Mat. Dan Terap. 2022, 16, 1497–1504. [Google Scholar] [CrossRef]
  32. Islamiyati, A.; Raupong; Kalondeng, A.; Sari, U. Estimating the confidence interval of the regression coefficient of the blood sugar model through a multivariable linear spline with known variance. Stat. Transit. New Ser. 2022, 23, 201–212. [Google Scholar] [CrossRef]
33. Kirkby, J.L.; Leitao, A.; Nguyen, D. Nonparametric density estimation and bandwidth selection with B-spline basis: A novel Galerkin method. Comput. Stat. Data Anal. 2021, 159, 107202. [Google Scholar] [CrossRef]
  34. Osmani, F.; Hajizadeh, E.; Mansouri, P. Kernel and regression spline smoothing techniques to estimate coefficient in rates model and its application in psoriasis. Med. J. Islam. Repub. Iran 2019, 33, 90. [Google Scholar] [CrossRef] [PubMed]
  35. Lestari, B.; Chamidah, N.; Aydin, D.; Yilmaz, E. Reproducing kernel Hilbert space approach to multiresponse smoothing spline regression function. Symmetry 2022, 14, 2227. [Google Scholar] [CrossRef]
  36. Bilodeau, M. Fourier smoother and additive models. Can. J. Stat. 1992, 20, 257–269. [Google Scholar] [CrossRef]
  37. Suparti, S.; Prahutama, A.; Santoso, R.; Devi, A.R. Spline-Fourier’s Method for Modelling Inflation in Indonesia. E3S Web Conf. 2018, 73, 13003. [Google Scholar] [CrossRef]
38. Mardianto, M.F.F.; Gunardi; Utami, H. An analysis about Fourier series estimator in nonparametric regression for longitudinal data. Math. Stat. 2021, 9, 501–510. [Google Scholar] [CrossRef]
  39. Amato, U.; Antoniadis, A.; De Feis, I. Fourier series approximation of separable models. J. Comput. Appl. Math. 2002, 146, 459–479. [Google Scholar] [CrossRef]
  40. Mariati, M.P.A.M.; Budiantara, I.N.; Ratnasari, V. The application of mixed smoothing spline and Fourier series model in nonparametric regression. Symmetry 2021, 13, 2094. [Google Scholar] [CrossRef]
  41. Aronszajn, N. Theory of reproducing kernels. Trans. Am. Math. Soc. 1950, 68, 337–404. [Google Scholar] [CrossRef]
  42. Kimeldorf, G.; Wahba, G. Some results on Tchebycheffian spline functions. J. Math. Anal. Appl. 1971, 33, 82–95. [Google Scholar] [CrossRef]
  43. Berlinet, A.; Thomas-Agnan, C. Reproducing Kernel Hilbert Spaces in Probability and Statistics; Kluwer Academic: Norwell, MA, USA, 2004. [Google Scholar]
  44. Paulsen, V.I. An Introduction to the Theory of Reproducing Kernel Hilbert Space. Research Report. 2009. Available online: https://www.researchgate.net/publication/255635687_AN_INTRODUCTION_TO_THE_THEORY_OF_REPRODUCING_KERNEL_HILBERT_SPACES (accessed on 24 March 2022).
  45. Yuan, M.; Cai, T.T. A reproducing kernel Hilbert space approach to functional linear regression. Ann. Stat. 2010, 38, 3412–3444. [Google Scholar] [CrossRef]
  46. Johnson, R.A.; Wichern, D.W. Applied Multivariate Statistical Analysis; Prentice Hall: New York, NY, USA, 1982. [Google Scholar]
47. Ruppert, D.; Carroll, R. Penalized Regression Splines, Working Paper; School of Operations Research and Industrial Engineering, Cornell University: Ithaca, NY, USA, 1997. [Google Scholar]
  48. Wand, M.P.; Jones, M.C. Kernel Smoothing; Chapman & Hall: London, UK, 1995. [Google Scholar]
49. Sen, P.K.; Singer, J.M. Large Sample Methods in Statistics: An Introduction with Applications; Chapman & Hall: London, UK, 1993. [Google Scholar]
  50. Serfling, R.J. Approximation Theorems of Mathematical Statistics; John Wiley: New York, NY, USA, 1980. [Google Scholar]
Figure 1. Plots of estimated model values versus predictor (observation) values for the first response (y1), together with plots of the actual values of the first response.
Figure 2. Plots of estimated model values versus predictor (observation) values for the second response (y2), together with plots of the actual values of the second response.
Figure 3. Plots of estimated model values versus predictor (observation) values for the third response (y3), together with plots of the actual values of the third response.
Figure 4. Surface plot of the estimated model versus predictors for the first response (y1).
Figure 5. Surface plot of the estimated model versus predictors for the second response (y2).
Figure 6. Surface plot of the estimated model versus predictors for the third response (y3).
Table 1. Values of MSE, minimum GCV, R², and λ.

K | MSE        | Minimum GCV | R²         | λ
1 | 1.02363794 | 0.5786323   | 0.95311712 | λ1 = 0.84886039; λ2 = 0.00211737; λ3 = 0.001
2 | 2.20132482 | 2.0945904   | 0.90018631 | λ1 = 0.55180324; λ2 = 0.89946915; λ3 = 0.54313339
3 | 2.09512311 | 2.17049788  | 0.90527677 | λ1 = 0.74451376; λ2 = 0.62104441; λ3 = 0.27310874
4 | 2.03215324 | 2.10132858  | 0.90769321 | λ1 = 0.56110414; λ2 = 0.26679346; λ3 = 0.4535796
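As a rough illustration of how minimum-GCV values such as those in Table 1 are typically obtained, the sketch below grid-searches a smoothing parameter λ by minimizing the generalized cross-validation score of a linear smoother. This is a generic sketch, not the paper's estimator: the ridge-type hat matrix stands in for the mixed smoothing spline and Fourier series hat matrix, and the names (`gcv_score`, the toy data) are illustrative assumptions.

```python
import numpy as np

def gcv_score(y, hat_matrix):
    """GCV score for a linear smoother y_hat = H y:
    (RSS / n) / (1 - tr(H) / n)^2."""
    n = len(y)
    resid = y - hat_matrix @ y
    trace = np.trace(hat_matrix)
    return (np.sum(resid**2) / n) / (1.0 - trace / n) ** 2

# Toy data: a smooth signal plus noise.
rng = np.random.default_rng(0)
n = 50
t = np.linspace(0.0, 1.0, n)
y = np.sin(2.0 * np.pi * t) + 0.1 * rng.standard_normal(n)

# Ridge-type smoother H(lam) = X (X'X + lam I)^{-1} X' as a simple
# stand-in for a penalized-smoother hat matrix.
X = np.column_stack([np.ones(n), t, t**2])

lambdas = np.logspace(-3, 1, 30)
scores = []
for lam in lambdas:
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    scores.append(gcv_score(y, H))

best_lambda = lambdas[int(np.argmin(scores))]
```

In the multiresponse setting of the paper, the same idea is applied jointly over the smoothing parameters (λ1, λ2, λ3) and the oscillation parameter K, selecting the combination with minimum GCV.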
Share and Cite

MDPI and ACS Style

Chamidah, N.; Lestari, B.; Budiantara, I.N.; Aydin, D. Estimation of Multiresponse Multipredictor Nonparametric Regression Model Using Mixed Estimator. Symmetry 2024, 16, 386. https://doi.org/10.3390/sym16040386