Article

Total Problem of Constructing Linear Regression Using Matrix Correction Methods with Minimax Criterion

by Victor Gorelik 1,2 and Tatiana Zolotova 3,*
1 Department of Simulation Systems and Operations Research, Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 119333 Moscow, Russia
2 Department of Theoretical Informatics and Discrete Mathematics, Moscow Pedagogical State University, 119435 Moscow, Russia
3 Department of Data Analysis and Machine Learning, Financial University under the Government of RF, 125167 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(3), 546; https://doi.org/10.3390/math11030546
Submission received: 19 November 2022 / Revised: 5 January 2023 / Accepted: 17 January 2023 / Published: 19 January 2023
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

A linear problem of regression analysis is considered under the assumption of the presence of noise in the output and input variables. This approximation problem may be interpreted as an improper interpolation problem, for which it is required to optimally correct the positions of the original points in the data space so that they all lie on the same hyperplane. The use of the quadratic approximation criterion for such a problem gave rise to the total least squares method. In this paper, we use the minimax criterion to estimate the measure of correction of the initial data, which leads to a nonlinear mathematical programming problem. It is shown that this problem can be reduced to solving a finite number of linear programming problems; however, this number grows exponentially with the number of parameters. Some methods for overcoming this complexity are proposed.

1. Introduction

Methods for correcting improper and unstable problems are now widespread. Matrix correction (correction of the totality of all initial data) is applied to inconsistent systems of linear algebraic equations and inequalities and to improper linear programming problems [1,2,3,4,5].
The problem of regression analysis can be considered as an improper interpolation problem, which consists of constructing a function $f: X \to Y$ from some fixed class of functions $\Phi$, such that the surface it describes passes exactly through the initial data points $(x^1, y_1), \dots, (x^m, y_m)$, i.e.,
$$y_i = f(x^i), \quad i = 1, \dots, m, \quad f \in \Phi.$$
Since data are often obtained experimentally, the interpolation problem is usually improper (it has no solution). In this case, the problem of optimal correction (approximation) is considered. It is necessary to find a function that, together with some data set $[X_H \mid y_h]$, is a solution to the interpolation problem, where this data set is the “closest” to the initial data $[X \mid y]$ among all admissible data for which the interpolation problem has a solution. This approximation problem is formalized by introducing a certain matrix norm and finding the correction matrix $[X_H \mid y_h] - [X \mid y]$ of minimal norm.
When $l_2$ was used as the matrix norm, the matrix correction methods in problems of processing experimental data with noise in both the input and output data gave rise to the total least squares method [6].
It has a probabilistic justification as a maximum likelihood method under the hypothesis of a normal distribution of the errors both in the measurements of the vector argument $x$ and in the values of the function $y$. This direction has been developed intensively in recent years: various modifications of the method have been proposed, related to structural equations, regularization, etc. (see, for example, [7,8,9,10,11,12,13,14,15,16,17,18,19]). At the same time, taking structural constraints into account renders analytical methods inapplicable, and the emphasis shifts to numerical optimization methods.
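To fix ideas, the classical total least squares fit can be computed from a single SVD of the augmented data matrix. The following minimal Python sketch (the function name tls_affine is ours, not from the paper or from [6]) illustrates the idea; note that this simplified version also perturbs the constant column $e$, whereas structured variants (e.g., [8]) would keep it exact.

```python
import numpy as np

def tls_affine(X, y):
    """Classical TLS fit of y ~ <a, x> + a0 via the SVD of [X | e | y]
    (cf. Golub and Van Loan [6]); a minimal, unstructured sketch."""
    m, n = X.shape
    Z = np.column_stack([X, np.ones(m), y])  # augmented matrix [X | e | y]
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]                               # singular vector of the smallest singular value
    if abs(v[-1]) < 1e-12:
        raise ValueError("degenerate TLS problem: fitted hyperplane is vertical")
    coef = -v[:-1] / v[-1]                   # solve v . (x, 1, y) = 0 for y
    return coef[:n], coef[n]                 # slopes a, intercept a0
```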
The normal distribution law is often used when simulating random processes, both because it is convenient to work with and because of its useful properties (for example, stability). However, in some cases, the distributions of random indicators differ from normal. The hypothesis of “normality” is rejected because the kurtosis of statistical distributions fitted to real data exceeds that of the normal law.
Such distributions of random variables have “heavy tails”; that is, the corresponding distribution density decreases slowly as $|x| \to \infty$ compared to the normal density. Deviations from the normal (Gaussian) distribution of random variables are observed in the financial and economic fields and are typical, for example, of exchange rates, prices, and stock returns. This is confirmed both by the shape of empirical densities (histograms) and by standard statistical techniques for detecting deviations from the normal distribution [20].
If the distribution of noise in statistical data differs from the normal distribution, then the least squares method loses its probabilistic justification. Various assumptions about the laws of distribution of random errors lead to the use of other matrix norms. In [5], it was shown that the maximum likelihood method under the hypothesis of an exponential noise distribution leads to the polyhedral norm $l_1$, and the corresponding method of constructing linear regression reduces to solving $2n$ linear programming problems, where $n$ is the number of linear regression parameters. In [21], these results were generalized to the problem of joint data transformation and approximation, which leads to a new class of parametric correction problems.
In this paper, we propose the use of a minimax approximation criterion, i.e., the $l_\infty$ norm of the correction matrix. Since the minimax criterion is usually associated with the name of P.L. Chebyshev, it is natural to call the proposed approach the total Chebyshev approximation method.
Our goal is to show that, in this case, the approximation problem can also be reduced to solving a number of linear programming problems. However, it reduces to solving $2^n$ of them, i.e., their number grows exponentially with $n$. Therefore, it is additionally necessary to propose methods to overcome this difficulty.
To the best of the authors’ knowledge, these results are new. In the future, we intend to generalize them to cases of structural restrictions on the correction matrix. Preliminary studies show that under certain conditions it is possible to reduce the corresponding minimax approximation problems to linear programming.

2. The Total Problem of Constructing Linear Regression in the $l_\infty$ Metric

The mathematical formulation of the problem of constructing linear regression is as follows. The initial data describing the dependence of the value $y$ on the vector of variables $x$ are the points $(x_1^1, \dots, x_n^1, y_1), \dots, (x_1^m, \dots, x_n^m, y_m)$. We represent these data in the form of an information matrix
$$[y \mid X] = \begin{pmatrix} y_1 & x_1^1 & \cdots & x_n^1 \\ y_2 & x_1^2 & \cdots & x_n^2 \\ \vdots & \vdots & & \vdots \\ y_m & x_1^m & \cdots & x_n^m \end{pmatrix}.$$
We consider the problem of constructing, from the given $m$ points, an affine function of $n$ variables $f: R^n \to R$ of the form
$$f(x) = a_1 x_1 + a_2 x_2 + \dots + a_n x_n + a_0 = \langle a, x \rangle + a_0 \tag{1}$$
such that the maximum of the moduli of the deviations, in all coordinates, of all points from the hyperplane it defines is minimal.
Let us formulate the corresponding problem of correcting a system of linear equations. The condition for the points $(x^1, y_1), \dots, (x^m, y_m)$ to belong to some hyperplane $L$ can be written as
$$\langle a, x^i \rangle + a_0 = y_i, \quad i = 1, \dots, m,$$
or in matrix form
$$[X \mid e] \cdot \bar{a} = y, \tag{2}$$
where $y = (y_1, y_2, \dots, y_m)^T$, $e = (1, 1, \dots, 1)^T \in R^m$, $\bar{a} = (a, a_0)^T \in R^{n+1}$, and $X$ is the $m \times n$ matrix whose rows are the vectors $x^i$.
If a hyperplane cannot be drawn through the given points, then the resulting system of linear equations (2) is inconsistent. It is usually overdetermined, i.e., $m > n$. We introduce a matrix $H$ and a vector $h$ of the corresponding dimensions so that the system $[X + H \mid e]\,\bar{a} = y + h$ becomes consistent.
The problem of the minimal correction of the given system in the $l_\infty$ norm then has the following form:
$$h_0 = \inf_{H, h, \bar{a}} \left\{ \|[h \mid H]\|_{l_\infty} : [X + H \mid e]\,\bar{a} = y + h \right\}. \tag{3}$$
Let us formulate a theorem giving a criterion for the minimum of the polyhedral norm $l_\infty$ of the correction matrix, which makes it possible to solve the problem of constructing a regression of this type, i.e., to find the optimal values of the coefficients $\bar{a}^0$.
Theorem 1.
Let $m$ points $(x_1^1, x_2^1, \dots, x_n^1), \dots, (x_1^m, x_2^m, \dots, x_n^m)$ be given in the feature space $R^n$, let the set of answers $y_1, \dots, y_m$ be given in the space $R$, and suppose that there is no affine function (1) such that $y_i = f(x^i)$, $i = 1, \dots, m$.
Then, the problem of finding the minimal change of the information matrix of parameters $[y \mid X]$ in the sense of the minimum of the norm $l_\infty$, as a result of which an interpolating affine function exists, is equivalent to the mathematical programming problem
$$\begin{aligned} & u \to \min_{u, v, q, q_0}, \\ & u \ge v y_i - \langle x^i, q \rangle - q_0, \quad i = 1, \dots, m, \\ & u \ge -v y_i + \langle x^i, q \rangle + q_0, \quad i = 1, \dots, m, \\ & v + \sum_{j=1}^{n} |q_j| = 1, \quad u \ge 0, \quad v \ge 0. \end{aligned} \tag{4}$$
If there exists a solution $(u^0, v^0, q^0, q_0^0)$ of problem (4) such that $v^0 \ne 0$, then
$$h_0 = u^0, \quad a^0 = \frac{q^0}{v^0}, \quad a_0^0 = \frac{q_0^0}{v^0}. \tag{5}$$
Proof. 
Let us consider the problem of correcting the inconsistent system of linear equations (2). The problem of minimal correction (3) is the problem of minimizing the $l_\infty$-norm of the extended matrix $\bar{H} = [h \mid H]$ under the condition that the system of equations $[X + H \mid e]\,\bar{a} = y + h$ is consistent. The matrix $\bar{H}$ has dimension $m \times (n + 1)$. We denote its entries by $h_{ij}$, $i = 1, \dots, m$, $j = 0, 1, \dots, n$, and the rows of the matrix $\bar{H}$ by $h^i$.
To obtain a solution to problem (3) in the given norm, we express this matrix norm in terms of vector norms. For this, the following definition will be useful. The generalized $(\varphi, \psi)$-norm of an arbitrary matrix $A$ is
$$\|A\|_{\varphi, \psi} = \max_{z \ne 0} \frac{\psi(Az)}{\varphi(z)},$$
where $\varphi, \psi$ are some vector norms.
Let us show that the norm of the matrix $A$ in the metric $l_\infty$ is a special case of the generalized matrix norm, namely,
$$\|A\|_{1, \infty} = \|A\|_{l_\infty} = \max_{i, j} |a_{ij}|.$$
Indeed, by definition,
$$\|A\|_{1, \infty} = \max_{z \ne 0} \frac{\|Az\|_\infty}{\|z\|_1} = \max_{z \ne 0} \frac{\max_i \left| \sum_j a_{ij} z_j \right|}{\sum_j |z_j|}$$
(for brevity, the corresponding vector norms are denoted here by the subscripts $1$ and $\infty$).
Obviously, this maximization of a ratio can be replaced by a conditional extremum problem:
$$\max_i \left| \sum_j a_{ij} z_j \right| \to \max_z, \quad \sum_j |z_j| = 1.$$
The maximum of a linear function is achieved at the vertices of the admissible set; therefore, for a fixed $i$, the maximum over $z$ of the expression $\left| \sum_j a_{ij} z_j \right|$ is equal to $\max_j |a_{ij}| = |a_{ij_0}|$ and is achieved at the vector $z$ for which $z_{j_0} = \operatorname{sign} a_{ij_0}$ and all other components are equal to zero. However, since it is the modulus of the sum that is being maximized, we may simply put $z_{j_0} = 1$. Further, since the operations of taking the maximum commute, the equality $\|A\|_{1, \infty} = \|A\|_{l_\infty}$ is proved.
It was proven in [4] that the minimum of the norm $\|A\|_{\varphi, \psi}$ over the matrices $A$ that, for a fixed $z$, satisfy the system of equations $Az = b$ is equal to
$$\min \|A\|_{\varphi, \psi} = \frac{\psi(b)}{\varphi(z)}.$$
We apply this result to the equation $[X + H \mid e]\,\bar{a} = y + h$ of problem (3). We transform this equation into the form
$$[h \mid H] \begin{pmatrix} -1 \\ a \end{pmatrix} = y - Xa - a_0 e.$$
The minimal value of the generalized norm of the extended correction matrix for which a fixed $\bar{a}$ satisfies the condition of problem (3) is
$$\|[h \mid H]\|_{\varphi, \psi} = \frac{\psi(y - Xa - a_0 e)}{\varphi((-1, a))}.$$
Thus, for a fixed $\bar{a}$, the minimal $l_\infty$-norm of the extended matrix is equal to
$$\max_{i, j} |h_{ij}| = \max_{1 \le i \le m} \frac{|y_i - \langle x^i, a \rangle - a_0|}{1 + \sum_{j=1}^{n} |a_j|}. \tag{6}$$
The problem of minimal correction (3) in this case reduces to minimizing the ratio on the right-hand side of (6) with respect to the variable $\bar{a}$:
$$h_0 = \min_{a, a_0} \max_{1 \le i \le m} \frac{|y_i - \langle x^i, a \rangle - a_0|}{1 + \sum_{j=1}^{n} |a_j|}.$$
Let us introduce the scalar variables $v = \frac{1}{1 + \sum_{j=1}^{n} |a_j|}$ and $q_0 = v a_0$, the vector $q = v a$, and a scalar variable $u$ satisfying the conditions
$$u \ge v y_i - \langle x^i, q \rangle - q_0, \quad u \ge -v y_i + \langle x^i, q \rangle + q_0, \quad i = 1, \dots, m.$$
In this case, the conditions $u \ge 0$ and $v \ge 0$ are satisfied, but the variables $q$ and $q_0$ may have any sign. In accordance with Formula (6), it is necessary to minimize in the new variables the value
$$\max_{1 \le i \le m} |v y_i - \langle x^i, q \rangle - q_0|,$$
which is equivalent to minimizing the variable $u$ under the given constraints. Thus, we obtain the mathematical programming problem (4) and Formula (5), taking into account the introduced change of variables. The theorem is proved. □
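As a quick numerical aside, for any fixed coefficient vector $\bar{a}$ the right-hand side of (6) is straightforward to evaluate. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def linf_correction_norm(X, y, a, a0):
    """Right-hand side of (6): the minimal l_inf norm of a correction [h | H]
    that makes [X+H | e] a_bar = y + h consistent, for this fixed (a, a0)."""
    residuals = np.abs(y - X @ a - a0)   # |y_i - <x^i, a> - a0|
    return residuals.max() / (1.0 + np.abs(a).sum())
```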
The complexity of solving the obtained mathematical programming problem (4) is associated with the presence of the moduli of some variables. In the case of the norm $l_1$, this complexity was overcome relatively easily and ultimately led to solving $2n$ linear programming problems [5].
Problem (4) can also be reduced to solving a finite number of linear programming problems, as follows. Suppose that we know a priori the signs of the regression coefficients, i.e., the signs of the components of the vector $\bar{a}$. Then we can introduce new variables that coincide with the original ones for the positive components and have the opposite sign for the negative components; this is equivalent to changing the signs of the elements of the corresponding columns of the matrix $X$. All regression coefficients may then be considered non-negative; in problem (4), the elements of the columns of the information matrix change sign accordingly, and the moduli simply disappear. A sketch of the resulting linear program is given below.
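To make this concrete, here is one way the fixed-sign linear program could be set up with SciPy's linprog. This is our sketch, not code from the paper: for a sign pattern $s \in \{-1, +1\}^n$ we substitute $q_j = s_j t_j$ with $t_j \ge 0$, so the modulus constraint becomes linear.

```python
import numpy as np
from scipy.optimize import linprog

def solve_fixed_signs(X, y, s):
    """Problem (4) with the signs of q fixed by s in {-1, +1}^n.
    Decision vector: z = [u, v, t_1, ..., t_n, q0], where q = s * t, t >= 0."""
    m, n = X.shape
    Xs = X * s                                   # bake the fixed signs into the columns
    c = np.zeros(n + 3)
    c[0] = 1.0                                   # objective: minimize u
    ones = np.ones(m)
    # u >= v*y_i - <x^i, q> - q0   and   u >= -v*y_i + <x^i, q> + q0
    A_ub = np.vstack([
        np.column_stack([-ones,  y, -Xs, -ones]),
        np.column_stack([-ones, -y,  Xs,  ones]),
    ])
    b_ub = np.zeros(2 * m)
    A_eq = np.zeros((1, n + 3))
    A_eq[0, 1:n + 2] = 1.0                       # v + sum_j t_j = 1
    bounds = [(0, None)] * (n + 2) + [(None, None)]   # u, v, t >= 0; q0 free
    return linprog(c, A_ub, b_ub, A_eq, [1.0], bounds=bounds)
```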
If a priori considerations allow us to judge the signs of the linear regression coefficients, then problem (4) simply reduces to solving one linear programming problem. However, in the general case, we must enumerate all possible variants of the signs of the coefficients and, from all the resulting linear programming problems, choose the one that gives the smallest residual value $u^0$.
Thus, in the general case, problem (4) is reduced to solving $2^n$ linear programming problems, i.e., their number grows exponentially. How serious is this? Generally speaking, the number of parameters (features) is usually much smaller than the number of data points ($n \ll m$). With 10–20 parameters, there is no difficulty in enumerating the positive and negative values of the coefficients. In addition, the meaning of the parameters often suggests at least some of the signs of the coefficients; every sign known in advance halves the search. We discuss this issue below when considering practical examples. In the general case, one can use the branch and bound method.
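The full enumeration, with the coefficients recovered by Formula (5), might then be wrapped as follows (again our sketch, reusing solve_fixed_signs from above):

```python
import itertools
import numpy as np

def total_chebyshev_regression(X, y):
    """Enumerate all 2^n sign patterns and keep the smallest u^0 (Theorem 1)."""
    n = X.shape[1]
    best, best_s = None, None
    for s in itertools.product([1.0, -1.0], repeat=n):
        res = solve_fixed_signs(X, y, np.array(s))
        if res.success and (best is None or res.fun < best.fun):
            best, best_s = res, np.array(s)
    u, v, t, q0 = best.x[0], best.x[1], best.x[2:-1], best.x[-1]
    if v <= 0:
        raise ValueError("v^0 = 0: Formula (5) is not applicable")
    q = best_s * t
    return q / v, q0 / v, u   # coefficients a^0, intercept a0^0, residual h_0
```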

3. Building a Demographic Trend in the Russian Federation

Consider practical examples based on real data taken from the Rosstat website [22]. We analyze the birth rate in the Federal Districts as a function of factors such as the share of the urban population, income, and investment.
Table 1 shows the data for 2019 for the Federal Districts of the Russian Federation: the birth rate, the share of the urban population, the income of the population, and investments in health care and social services.
Let us introduce the variables $x_1, x_2, x_3$: the share of the urban population, incomes, and investments, respectively. Based on the initial data presented in Table 1, we first construct a regression dependence in the metric $l_\infty$ on two variables: the urban population $x_1$ and incomes $x_2$. The information matrix has the form
$$[y \mid X] = \begin{pmatrix} y_1 & x_1^1 & x_2^1 \\ y_2 & x_1^2 & x_2^2 \\ \vdots & \vdots & \vdots \\ y_8 & x_1^8 & x_2^8 \end{pmatrix} = \begin{pmatrix} 9.3 & 82.3 & 46.9 \\ 9.6 & 84.9 & 37.9 \\ 9.8 & 62.8 & 29.9 \\ 13.7 & 50.3 & 24.4 \\ 9.6 & 72.2 & 28.3 \\ 10.9 & 81.6 & 36.9 \\ 10.4 & 74.3 & 27.2 \\ 11.1 & 72.9 & 37.9 \end{pmatrix}.$$
For this information matrix, problem (4) has the form:
$$\begin{aligned} & u \to \min_{u, v, q_0, q_1, q_2}, \\ & u \ge v y_i - (x_1^i q_1 + x_2^i q_2) - q_0, \quad i = 1, \dots, 8, \\ & u \ge -v y_i + x_1^i q_1 + x_2^i q_2 + q_0, \quad i = 1, \dots, 8, \\ & v + |q_1| + |q_2| = 1, \quad u \ge 0, \quad v \ge 0. \end{aligned}$$
Solving this problem with the moduli of the components of $q$ requires solving $2^2 = 4$ linear programming problems, obtained by enumerating the signs of the components of the vector $q$. The results of solving these linear programming problems are presented in Table 2.
We determine the minimum of the four values of the objective function and the corresponding set of optimal values of the variables:
$$u^0 = 1.443, \quad v^0 = 1.009, \quad q^0 = (-0.103, 0.095), \quad q_0^0 = 16.554.$$
Calculating the regression coefficients by Formula (5),
$$a^0 = \frac{q^0}{v^0} = \frac{(-0.103, 0.095)}{1.009} = (-0.103, 0.094), \quad a_0^0 = \frac{q_0^0}{v^0} = \frac{16.554}{1.009} = 16.415,$$
we obtain the regression equation
$$y = -0.103 x_1 + 0.094 x_2 + 16.415.$$
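For readers who wish to reproduce this computation, the two-variable data can be fed to the total_chebyshev_regression sketch from Section 2. The exact optimal values depend on the LP solver and on data rounding, so the output need not coincide digit-for-digit with Table 2:

```python
import numpy as np

# Birth rate y and the columns x1 (urban share), x2 (income) of Table 1.
y = np.array([9.3, 9.6, 9.8, 13.7, 9.6, 10.9, 10.4, 11.1])
X2 = np.array([[82.3, 46.9], [84.9, 37.9], [62.8, 29.9], [50.3, 24.4],
               [72.2, 28.3], [81.6, 36.9], [74.3, 27.2], [72.9, 37.9]])

a, a0, h0 = total_chebyshev_regression(X2, y)
print(a, a0, h0)   # compare with y = -0.103*x1 + 0.094*x2 + 16.415 above
```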
Let us now construct a regression dependence in the metric $l_\infty$ on the three variables whose values are presented in Table 1.
The information matrix takes the form
$$[y \mid X] = \begin{pmatrix} y_1 & x_1^1 & x_2^1 & x_3^1 \\ y_2 & x_1^2 & x_2^2 & x_3^2 \\ \vdots & \vdots & \vdots & \vdots \\ y_8 & x_1^8 & x_2^8 & x_3^8 \end{pmatrix} = \begin{pmatrix} 9.3 & 82.3 & 46.9 & 91.3 \\ 9.6 & 84.9 & 37.9 & 26.6 \\ 9.8 & 62.8 & 29.9 & 31.6 \\ 13.7 & 50.3 & 24.4 & 14.7 \\ 9.6 & 72.2 & 28.3 & 41.9 \\ 10.9 & 81.6 & 36.9 & 28.8 \\ 10.4 & 74.3 & 27.2 & 33.6 \\ 11.1 & 72.9 & 37.9 & 24.3 \end{pmatrix}.$$
For this information matrix, we obtain the problem (4):
$$\begin{aligned} & u \to \min_{u, v, q_0, q_1, q_2, q_3}, \\ & u \ge v y_i - (x_1^i q_1 + x_2^i q_2 + x_3^i q_3) - q_0, \quad i = 1, \dots, 8, \\ & u \ge -v y_i + x_1^i q_1 + x_2^i q_2 + x_3^i q_3 + q_0, \quad i = 1, \dots, 8, \\ & v + |q_1| + |q_2| + |q_3| = 1, \quad u \ge 0, \quad v \ge 0. \end{aligned}$$
Solving this problem with the moduli of the components of $q$ requires solving $2^3 = 8$ linear programming problems, obtained by enumerating the signs of the components of the vector $q$. The results of solving these linear programming problems are presented in Table 3.
We determine the minimum of the eight values of the objective function and the corresponding set of optimal values of the variables:
$$u^0 = 1.207, \quad v^0 = 0.97, \quad q^0 = (-0.099, 0.202, -0.074), \quad q_0^0 = 13.202.$$
Calculating the regression coefficients by Formula (5),
$$a^0 = \frac{q^0}{v^0} = \frac{(-0.099, 0.202, -0.074)}{0.97} = (-0.102, 0.209, -0.076), \quad a_0^0 = \frac{q_0^0}{v^0} = \frac{13.202}{0.97} = 13.607,$$
we obtain the regression equation
$$y = -0.102 x_1 + 0.209 x_2 - 0.076 x_3 + 13.607.$$
When comparing the regression equations for two and three variables (factors), it is notable that the coefficients of the corresponding variables are close in absolute value and, most importantly, coincide in sign. This fact is not surprising, since the positive or negative influence of each factor should be stable, although this statement is not rigorous. This empirical observation can be used to shorten the computational procedure. If the number of parameters is increased one at a time, then, under the assumption that the sign of the coefficient of each factor remains constant, at the next step it suffices to solve two problems (for the positive and the negative value of the coefficient of the added factor). With such a sequential procedure for constructing a regression in $n$ variables, it suffices to solve $2n$ linear programming problems instead of $2^n$. In the given example of constructing the regression equation with three variables, it is enough to solve the problems with the coefficient signs $(-, +, +)$ and $(-, +, -)$; in total, counting the example with two variables, six problems. A sketch of this sequential procedure is given below.
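The sequential procedure might be sketched as follows (our illustration, reusing solve_fixed_signs; it relies on the empirical sign-stability assumption just described and is therefore a heuristic, not an exact method):

```python
import numpy as np

def sequential_sign_search(X, y):
    """Heuristic 2n-LP procedure: add factors one at a time, freeze the signs
    already chosen, and branch only on the sign of the newly added coefficient."""
    n = X.shape[1]
    signs, res = [], None
    for k in range(1, n + 1):
        best, best_sk = None, None
        for sk in (1.0, -1.0):                   # two LPs per added factor
            cand = solve_fixed_signs(X[:, :k], y, np.array(signs + [sk]))
            if cand.success and (best is None or cand.fun < best.fun):
                best, best_sk = cand, sk
        signs.append(best_sk)
        res = best                               # solution for the first k factors
    u, v, t, q0 = res.x[0], res.x[1], res.x[2:-1], res.x[-1]
    q = np.array(signs) * t
    return q / v, q0 / v, u
```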

4. Conclusions

The paper considers the problem of constructing a linear regression assuming the presence of noise not only in the output but also in the input data.
An approach to the construction of linear regression as an improper interpolation problem was considered, based on the matrix correction of a system of linear equations expressing the condition that all points in the initial data space belong to one hyperplane. Unlike most works in this area, instead of the quadratic approximation criterion leading to the total least squares method, it is proposed to use the $l_\infty$ norm of the matrix as the correction (approximation) measure. In geometric interpretation, this means minimizing the maximum modulus of the deviations of all points from the hyperplane in all coordinates.
Computationally, this approach leads to the solution of a set of linear programming problems; however, in the general case, their number grows exponentially with the number of parameters. Some methods to overcome this problem are suggested. An example of constructing demographic trends based on real data is given.
Thus, the main result of the work is the reduction of the total minimax approximation method to a set of linear programming problems and the construction of procedures for their rational enumeration. In the future, the authors plan to develop this direction by considering structural restrictions on the correction matrix of the initial data.

Author Contributions

Conceptualization, V.G. and T.Z.; methodology, V.G.; software, T.Z.; validation, T.Z.; formal analysis, V.G. and T.Z.; investigation, V.G. and T.Z.; resources, T.Z.; writing—original draft preparation, T.Z.; writing—review and editing, V.G.; visualization, T.Z.; supervision, V.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Regions of Russia. Socio-Economic Indicators 2019. Available online: https://rosstat.gov.ru/storage/mediabank/LkooETqG/Region_Pokaz_2020.pdf (accessed on 30 October 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Eremin, I.I. Theory of Linear Optimization (Inverse and Ill-Posed Problems); VSP: Utrecht, The Netherlands, 2002.
2. Gorelik, V.A. Matrix correction of a linear programming problem with inconsistent constraints. Comput. Math. Math. Phys. 2001, 41, 1615–1622.
3. Gorelik, V.A.; Erohin, V.I. Optimal Matrix Correction of Inconsistent Systems of Linear Algebraic Equations by Minimal Euclidean Norm; CC RAS: Moscow, Russia, 2004.
4. Gorelik, V.A.; Erohin, V.I.; Pechenkin, R.V. Numerical Methods for Correcting Improper Linear Programming Problems and Structural Systems of Equations; CC RAS: Moscow, Russia, 2006.
5. Gorelik, V.A.; Trembacheva, O.S. Solution of the linear regression problem using matrix correction methods in the $l_1$ metric. Comput. Math. Math. Phys. 2016, 56, 200–205.
6. Golub, G.H.; Van Loan, C.F. An analysis of the total least squares problem. SIAM J. Numer. Anal. 1980, 17, 883–893.
7. Beck, A. The matrix-restricted total least squares problem. Signal Process. 2007, 87, 2303–2312.
8. Rosen, J.B.; Park, H.; Glick, J. Total least norm formulation and solution for structured problems. SIAM J. Matrix Anal. Appl. 1996, 17, 110–128.
9. Markovsky, I.; Van Huffel, S. Overview of total least-squares methods. Signal Process. 2007, 87, 2283–2302.
10. Ahmadi, A.A.; Majumdar, A. DSOS and SDSOS optimization: More tractable alternatives to sum of squares and semidefinite optimization. SIAM J. Appl. Algebra Geom. 2019, 3, 193–230.
11. Benitez, J.; Henseler, J.; Castillo, A.; Schuberth, F. How to perform and report an impactful analysis using partial least squares: Guidelines for confirmatory and explanatory IS research. Inf. Manag. 2020, 57, 103168.
12. Boyd, S.; Vandenberghe, L. Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares; Cambridge University Press: Cambridge, UK, 2018.
13. Caponnetto, A.; De Vito, E. Optimal rates for the regularized least-squares algorithm. Found. Comput. Math. 2007, 7, 331–368.
14. Farebrother, R.W. Linear Least Squares Computations; Routledge: London, UK, 2018.
15. Hair, J.F., Jr.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage Publications: Thousand Oaks, CA, USA, 2021.
16. Hair, J.F.; Sarstedt, M.; Pieper, T.M.; Ringle, C.M. The use of partial least squares structural equation modeling in strategic management research: A review of past practices and recommendations for future applications. Long Range Plan. 2012, 45, 320–340.
17. Lee, L.C.; Liong, C.Y.; Jemain, A.A. Partial least squares-discriminant analysis (PLS-DA) for classification of high-dimensional (HD) data: A review of contemporary practice strategies and knowledge gaps. Analyst 2018, 143, 3526–3539.
18. Ringle, C.M.; Sarstedt, M.; Mitchell, R.; Gudergan, S.P. Partial least squares structural equation modeling in HRM research. Int. J. Hum. Resour. Manag. 2020, 31, 1617–1643.
19. Sarstedt, M.; Ringle, C.M.; Hair, J.F. Partial least squares structural equation modeling. In Handbook of Market Research; Springer International Publishing: Cham, Switzerland, 2021; pp. 587–632.
20. Shiryaev, A.N. Essentials of Stochastic Finance: Facts, Models, Theory; World Scientific Publishing: Singapore, 1999.
21. Gorelik, V.A.; Zolotova, T.V. Method of parametric correction in data transformation and approximation problems. Lect. Notes Comput. Sci. 2020, 12422, 122–133.
22. Regions of Russia. Socio-Economic Indicators 2019. Available online: https://rosstat.gov.ru/storage/mediabank/LkooETqG/Region_Pokaz_2020.pdf (accessed on 30 October 2021).
Table 1. Analyzed socio-economic indicators for the Russian regions for 2019.
| Federal District | Birth Rate (Births per 1000 Population) | Share of Urban Population (%) | Per Capita Income (Thousand Rubles per Month) | Investments in Healthcare and Social Services (Billion Rubles) |
|---|---|---|---|---|
| Central | 9.3 | 82.3 | 46.9 | 91.3 |
| Northwestern | 9.6 | 84.9 | 37.9 | 26.6 |
| Southern | 9.8 | 62.8 | 29.9 | 31.6 |
| North Caucasian | 13.7 | 50.3 | 24.4 | 14.7 |
| Volga | 9.6 | 72.2 | 28.3 | 41.9 |
| Ural | 10.9 | 81.6 | 36.9 | 28.8 |
| Siberian | 10.4 | 74.3 | 27.2 | 33.6 |
| Far Eastern | 11.1 | 72.9 | 37.9 | 24.3 |
Table 2. The results of calculations when finding a solution to the problem (4) for two variables.
| $u^0$ | $v^0$ | Signs of the components of $q^0$ | Values of the components of $q^0$ | $q_0^0$ |
|---|---|---|---|---|
| 2.3 | 1 | $(+, +)$ | $(0, 0)$ | 12.7 |
| 1.443 | 1.009 | $(-, +)$ | $(-0.103, 0.095)$ | 16.554 |
| 2 | 1.049 | $(+, -)$ | $(0, -0.049)$ | 14.913 |
| 1.584 | 1.082 | $(-, -)$ | $(-0.082, 0)$ | 18.742 |
Table 3. The results of calculations when finding a solution to the problem (4) for three variables.
| $u^0$ | $v^0$ | Signs of the components of $q^0$ | Values of the components of $q^0$ | $q_0^0$ |
|---|---|---|---|---|
| 2.2 | 1 | $(+, +, +)$ | $(0, 0, 0)$ | 11.5 |
| 1.437 | 1.008 | $(-, +, +)$ | $(-0.145, 0.136, 0)$ | 16.328 |
| 2.052 | 1.016 | $(+, -, +)$ | $(0, -0.016, 0)$ | 12.273 |
| 1.812 | 1.061 | $(+, +, -)$ | $(0, 0, -1.061)$ | 13.618 |
| 1.601 | 1.103 | $(+, -, -)$ | $(0, -0.057, -0.047)$ | 15.589 |
| 1.207 | 0.97 | $(-, +, -)$ | $(-0.099, 0.202, -0.074)$ | 13.202 |
| 1.528 | 1.198 | $(-, -, +)$ | $(-0.098, 0, 0)$ | 18.460 |
| 1.395 | 1.115 | $(-, -, -)$ | $(-0.087, 0, -0.028)$ | 18.676 |