#### *3.4. Polynomial Regression*

A polynomial regression model is defined as a weighted sum of polynomial basis functions of the inputs. The number of basis functions depends on the number of inputs and on the degree of the polynomial.

For a first-degree polynomial with two inputs, the model can be written as Equation (3). As the degree increases, the model becomes more complex; a second-degree polynomial is shown in Equation (4).

$$f(\mathbf{x}) = c_0 + c_1 x_1 + c_2 x_2. \tag{3}$$

$$f(\mathbf{x}) = c_0 + c_1 x_1 + c_2 x_2 + c_3 x_1 x_2 + c_4 x_1^2 + c_5 x_2^2. \tag{4}$$
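Fitting the coefficients of Equation (4) reduces to ordinary least squares on an expanded design matrix. The sketch below is illustrative only: the synthetic data, the coefficient values, and the helper name `design_matrix` are assumptions, not part of the original work.

```python
import numpy as np

# Illustrative second-degree polynomial regression for two inputs,
# following Equation (4). Data and coefficient values are synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
x1, x2 = X[:, 0], X[:, 1]

# "True" coefficients c0..c5 used only to generate targets.
c_true = np.array([1.0, 2.0, -1.0, 0.5, 3.0, -2.0])

def design_matrix(x1, x2):
    """Basis functions of Equation (4): 1, x1, x2, x1*x2, x1^2, x2^2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

Phi = design_matrix(x1, x2)
y = Phi @ c_true

# Least-squares estimate of the coefficients c0..c5.
c_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

With noiseless synthetic targets, the least-squares solution recovers the generating coefficients; with real data the fit minimizes the squared residuals instead.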

#### *3.5. Support Vector Machines for Regression*

The Support Vector Machine (SVM) is a supervised machine learning algorithm commonly used for classification. Only a few changes to the original SVM algorithm are needed to apply it to regression problems; the resulting technique is called Support Vector Regression (SVR). SVR performs a nonlinear transformation of the original data into a high-dimensional space and applies linear regression to the mapped data to compute the desired output.

In this research, Least Squares SVR (LS-SVR) is used [39]. It is a modification of SVR that minimizes a least-squares objective function [39], and it provides generalization performance comparable to that of standard SVR [40].

The LS-SVR algorithm replaces the ε-insensitive loss function with a classical squared loss function. The resulting linear Karush–Kuhn–Tucker system is solved using Equation (5).

$$
\begin{bmatrix} 0 & I_n^T \\ I_n & K + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b_0 \\ b \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix} \tag{5}
$$

where:

*In* is a vector of *n* ones;

*T* denotes the transpose of a matrix or vector;

*K* is the kernel matrix;

*γ* is the regularization (weight) parameter;

*b* is the regression coefficient vector;

*b*0 is the model offset;

*y* is the vector of target outputs.

LS-SVR requires tuning only two parameters: the regularization weight (*γ*) and the kernel width (*σ*) [39].
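A minimal numerical sketch of LS-SVR, assembling and solving the linear system of Equation (5) with an RBF kernel, could look as follows. The function names, the 1-D sine data, and the chosen values of *γ* and *σ* are illustrative assumptions, not the parameters used in this research.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """RBF kernel: K[i, j] = exp(-||a_i - b_j||^2 / (2 * sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvr_fit(X, y, gamma, sigma):
    """Solve the linear KKT system of Equation (5) for (b0, b)."""
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    ones = np.ones(n)
    # Block matrix [[0, 1_n^T], [1_n, K + gamma^{-1} I]].
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = ones
    A[1:, 0] = ones
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # model offset b0, regression vector b

def lssvr_predict(X_train, b0, b, X_new, sigma):
    """Predict via the kernel expansion f(x) = sum_i b_i k(x, x_i) + b0."""
    return rbf_kernel(X_new, X_train, sigma) @ b + b0

# Illustrative use: fit a smooth 1-D function.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0])
b0, b = lssvr_fit(X, y, gamma=100.0, sigma=1.0)
y_hat = lssvr_predict(X, b0, b, X, sigma=1.0)
```

The first row of Equation (5) enforces the constraint that the entries of *b* sum to zero, and the *γ*⁻¹ term on the diagonal acts as ridge-style regularization of the squared loss.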
