*3.1. SVM*

The SVM analytical method, proposed by Vapnik in 1992, is a widely used ML tool for classification and regression [34]. SVM regression is considered a nonparametric technique because it is based on kernel functions; here, linear epsilon-insensitive SVM (ε-SVM) regression, also known as L1-loss regression, is implemented. In ε-SVM regression, the training data set contains the values of both the predictor variables and the observed response variables. The goal is to find a function f(x) that deviates from yn (the observed response value) by no more than ε for each training point xn, as shown in Figure 4, while simultaneously being as flat as possible, as expressed in Equation (5):

$$f(\mathbf{x}) = \mathbf{x}'\boldsymbol{\beta} + b \tag{5}$$

where β is the vector of linear predictor coefficients, and *b* is the bias. In this study, the SVM regression model was implemented in the MATLAB R2019b (MathWorks, Natick, MA, USA) environment using the "fitrsvm" function with a linear kernel.
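The study itself uses MATLAB's "fitrsvm"; as a hedged illustration only, the same linear ε-insensitive regression can be sketched with scikit-learn's `SVR` in Python. The toy data, the ε value of 0.5, and the regularization constant `C` below are assumptions for demonstration, not values from the study:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical toy data standing in for the (x_n, y_n) training pairs:
# a noisy linear relationship y ≈ 2x + 1.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 2.0 * X.ravel() + 1.0 + rng.normal(0.0, 0.3, size=50)

# Linear epsilon-insensitive SVR: deviations within ±epsilon of f(x)
# incur no loss, and the flattest f(x) = x'β + b is sought.
model = SVR(kernel="linear", epsilon=0.5, C=10.0)
model.fit(X, y)

beta = model.coef_[0][0]   # linear predictor coefficient β
b = model.intercept_[0]    # bias b
print(beta, b)
```

The fitted `beta` and `b` should approximate the slope and intercept of the underlying linear trend, since most residuals fall inside the ε-tube and incur no loss.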

**Figure 4.** Procedure of support vector regression from an input sample.
