## 2.5.1. Support Vector Machine (SVM)

SVM is a nonparametric learning algorithm used for regression, classification, and hyperspectral data mining [56–58]. In the SVM procedure, the n-dimensional input vectors are mapped into a high-dimensional feature space, in which the optimal separating hyperplanes are constructed [59]. Here, the SVM regression algorithm was applied in multiple scenarios and designs to obtain the best performance in modelling the relationship between the in-field hyperspectral data and the measured heavy metal concentrations in grapevine leaves. To this end, the input vectors were linked to the outputs with a kernel function [12]. Regression SVM type 1 with different kernel functions (i.e., radial basis function (RBF), polynomial, and sigmoid) was applied. To obtain an optimal training constant, V-fold cross-validation was used, and the kernel function parameters (coefficient, gamma, and degree) were tuned to maximize the performance score [60]. More details about the assumptions and structure of SVM are provided by Stitson et al. [59] and Cristianini and Shawe-Taylor [61].
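As a rough illustration of the procedure described above, the kernel and parameter search might be sketched with scikit-learn's `SVR` and a V-fold grid search. This is a minimal sketch under assumed tooling: the arrays below are synthetic placeholders, not the study's hyperspectral measurements, and the parameter grid is illustrative rather than the values used in the paper.

```python
# Sketch of SVM regression with RBF/polynomial/sigmoid kernels and
# V-fold cross-validation, assuming scikit-learn is available.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((60, 5))                               # stand-in for n-dimensional spectral vectors
y = X @ rng.random(5) + 0.1 * rng.standard_normal(60)  # stand-in for metal concentration

param_grid = {
    "kernel": ["rbf", "poly", "sigmoid"],
    "C": [0.1, 1.0, 10.0],        # training constant
    "gamma": ["scale", 0.1, 1.0],  # kernel coefficient
    "degree": [2, 3],              # used only by the polynomial kernel
}
# cv=5 corresponds to 5-fold (V-fold) cross-validation
search = GridSearchCV(SVR(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```

The grid search simply refits the model for every kernel/parameter combination and keeps the one with the best cross-validated score, which mirrors the tuning of coefficient, gamma, and degree mentioned above.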

## 2.5.2. Multiple Linear Regressions (MLR)

MLR is a parametric regression algorithm that models the relationship between two or more independent variables and a response variable with a linear fit. It also allows appropriate input variables to be selected. In this study, the forward selection method of MLR was applied, adding at each step the independent variable that most increases the R2 value [40]. The Durbin–Watson statistic was used to test for autocorrelation in the residuals of the regression analysis; values close to 2 (1.5–2.5) indicate that no autocorrelation is detected in the samples. Additionally, to detect multicollinearity in the regression analysis, the variance inflation factor (VIF) was considered (VIFs exceeding 10 are signs of serious multicollinearity) [62,63]. The general form of the MLR equation is as follows:

$$\text{HMC} = a_0 + a_1 X_1 + a_2 X_2 + \dots + a_n X_n \tag{2}$$

where HMC is the heavy metal concentration in grapevine leaves, $a_i$ ($i = 0, 1, \dots, n$) are the parameters, generally estimated by least squares, and $X_i$ ($i = 1, 2, \dots, n$) are the independent variables (i.e., wavelengths and spectral indices).
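The forward-selection and diagnostic steps described above can be sketched in plain NumPy. This is a minimal illustration under stated assumptions: the predictors are synthetic stand-ins for wavelengths and spectral indices, the stopping threshold on the R2 gain is arbitrary, and the code is not the study's actual implementation.

```python
# Sketch of forward-selection MLR with Durbin-Watson and VIF diagnostics.
import numpy as np

def r2(y, X):
    # Least-squares fit with an intercept; return the coefficient of determination.
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

rng = np.random.default_rng(1)
X = rng.random((80, 6))                                   # stand-in predictors
y = 2.0 * X[:, 0] + 1.5 * X[:, 2] + 0.1 * rng.standard_normal(80)

# Forward selection: at each step add the predictor that most increases R^2.
selected, remaining, best = [], list(range(X.shape[1])), 0.0
while remaining:
    r2_by_j = {j: r2(y, X[:, selected + [j]]) for j in remaining}
    j = max(r2_by_j, key=r2_by_j.get)
    if r2_by_j[j] - best < 1e-3:   # stop when the gain is negligible (arbitrary threshold)
        break
    selected.append(j)
    remaining.remove(j)
    best = r2_by_j[j]

# Durbin-Watson statistic on the final residuals (~2, i.e. 1.5-2.5, means no autocorrelation).
A = np.column_stack([np.ones(len(y)), X[:, selected]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
e = y - A @ coef
dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# VIF_j = 1 / (1 - R^2 of regressing predictor j on the other selected predictors);
# values above 10 flag serious multicollinearity.
vifs = [1.0 / (1.0 - r2(X[:, j], np.delete(X[:, selected], k, axis=1)))
        for k, j in enumerate(selected)]
print(selected, round(dw, 2), [round(v, 2) for v in vifs])
```

Because the synthetic response depends only on the first and third predictors, the selection loop keeps those and stops once further variables no longer improve R2 meaningfully.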

# 3. Results and Discussion
