#### 2.6.3. Coefficient of Determination (R<sup>2</sup>)

The coefficient of determination is a statistical metric used in predictive modeling. It quantifies the proportion of variance in the dependent variable that is predictable from the independent variable(s) in a statistical model, and is calculated as the ratio of the explained variation to the total variation of the dependent variable [90]. Equation (3) shows the R<sup>2</sup> formula:

$$R^2 = 1 - \frac{\sigma_r^2}{\sigma^2} \tag{3}$$

where *σ<sub>r</sub>*<sup>2</sup> is the sum of the squared differences between the predicted values (from the model) and the actual values (the residual sum of squares), and *σ*<sup>2</sup> is the sum of the squared differences between the actual values and the mean of the actual values (the total sum of squares).
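As a minimal NumPy sketch of Equation (3) (the function name is illustrative, not from the original study):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination per Equation (3)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # sigma_r^2: residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # sigma^2: total sum of squares
    return 1.0 - ss_res / ss_tot
```

A perfect model yields R<sup>2</sup> = 1, while a model no better than predicting the mean yields R<sup>2</sup> = 0.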

#### 2.6.4. Percentage of Mean Absolute Error (%MAE)

This statistical metric quantifies the magnitude of the difference between two continuous variables. It is commonly used to evaluate the accuracy of a predictive model by comparing the predicted values to the actual values of the dataset: the absolute differences between predicted and actual values are averaged, and this mean absolute error is then expressed as a percentage of the mean observed value. Its mathematical formulation is represented in Equation (4):

$$\% \text{MAE} = \left(\frac{\frac{1}{n}\sum_{i=1}^{n}|y_i - x_i|}{P}\right) \times 100 \tag{4}$$

where *y<sub>i</sub>* is the predicted value, *x<sub>i</sub>* represents the observed value, *n* the total number of observations, and *P* the mean observed yield of each plot.
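Equation (4) can be sketched in NumPy as follows (function and parameter names are illustrative):

```python
import numpy as np

def pct_mae(y_pred, x_obs, p_mean):
    """%MAE per Equation (4): mean absolute error as a percentage
    of the mean observed yield P."""
    y_pred = np.asarray(y_pred, dtype=float)
    x_obs = np.asarray(x_obs, dtype=float)
    mae = np.mean(np.abs(y_pred - x_obs))  # (1/n) * sum |y_i - x_i|
    return (mae / p_mean) * 100.0
```

For example, predictions of 11 and 9 against observations of 10 and 10, with a mean observed yield of 10, give a MAE of 1 and hence a %MAE of 10%.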

#### 2.6.5. Accuracy

Accuracy is a metric used to evaluate the performance of a model on categorical data. It is calculated as the ratio of the number of correct predictions made by the model to the total number of predictions. The accuracy was expressed as a percentage, with values closer to 100% indicating a higher degree of accuracy:

$$\text{Accuracy} = \left(\frac{Cp}{Tp}\right) \times 100\tag{5}$$

where *Cp* is the number of correct predictions and *Tp* is the total number of predictions.
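A minimal sketch of Equation (5), comparing predicted and observed class labels element-wise (function name is illustrative):

```python
import numpy as np

def accuracy_pct(y_pred, y_true):
    """Accuracy per Equation (5): correct predictions / total predictions, as %."""
    y_pred = np.asarray(y_pred)
    y_true = np.asarray(y_true)
    correct = np.sum(y_pred == y_true)  # Cp: count of matching labels
    total = y_true.size                 # Tp: total number of predictions
    return (correct / total) * 100.0
```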

#### 2.6.6. Kappa Index (KI)

The Kappa index (KI) is a measure of accuracy when comparing actual and predicted yield maps. KI is a widely used statistical metric that quantifies the agreement between two categorical classifications while accounting for the possibility of agreement occurring by chance. The KI was calculated using the formula:

$$\text{KI} = \frac{(Oa - Ea)}{(1 - Ea)} \tag{6}$$

where *Oa* is the observed agreement and *Ea* is the expected agreement.
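Equation (6) can be sketched as follows, with the expected agreement *Ea* estimated from the marginal class frequencies of the two classifications, as in Cohen's kappa (an assumption; the original text does not state how *Ea* was obtained):

```python
import numpy as np

def kappa_index(y_true, y_pred):
    """Kappa index per Equation (6): (Oa - Ea) / (1 - Ea)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    classes = np.union1d(y_true, y_pred)
    oa = np.mean(y_true == y_pred)  # Oa: observed agreement
    # Ea: expected chance agreement from the marginal class frequencies
    ea = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return (oa - ea) / (1.0 - ea)
```

KI = 1 indicates perfect agreement, KI = 0 agreement no better than chance.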
