*2.7. Evaluation Metrics of the Model Performance*

Several indicators [27] were used to evaluate the performance of the proposed model: accuracy (A), pixel accuracy (PA), mean pixel accuracy (MPA), mean intersection over union (MIoU), recall (R), precision (P) and the F1 score, where TP, FP, TN and FN denote true positives, false positives, true negatives and false negatives, respectively.

$$\text{A} = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}} \tag{11}$$

$$\text{PA} = \frac{\sum\_{i=0}^{k} p\_{ii}}{\sum\_{i=0}^{k} \sum\_{j=0}^{k} p\_{ij}} \tag{12}$$

$$\text{MPA} = \frac{1}{k+1} \sum\_{i=0}^{k} \frac{p\_{ii}}{\sum\_{j=0}^{k} p\_{ij}} \tag{13}$$

$$\text{MIoU} = \frac{1}{k+1} \sum\_{i=0}^{k} \frac{p\_{ii}}{\sum\_{j=0}^{k} p\_{ij} + \sum\_{j=0}^{k} p\_{ji} - p\_{ii}} \tag{14}$$

$$\text{R} = \frac{\text{TP}}{\text{TP} + \text{FN}} \tag{15}$$

$$\text{P} = \frac{\text{TP}}{\text{TP} + \text{FP}} \tag{16}$$

$$\text{F1} = \frac{2\text{P} \cdot \text{R}}{\text{P} + \text{R}} \tag{17}$$

Assume there are *k* + 1 classes (0, ..., *k*) in the dataset, where class 0 usually represents the background. $p\_{ij}$ denotes the number of pixels that belong to class *i* but are predicted as class *j*, and $p\_{ji}$ denotes the number of pixels that belong to class *j* but are predicted as class *i*. Pixel accuracy (PA) is the proportion of correctly predicted pixels among all pixels. Mean pixel accuracy (MPA) improves on PA: it computes PA for each class separately and then averages the per-class values.
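As a minimal sketch, Equations (11)–(17) can be computed directly from a confusion matrix built over pixel labels. The function and variable names below are illustrative, not from the original work; the code assumes flattened integer label maps with classes 0...*k* and that every class appears at least once in the ground truth (so no division by zero occurs).

```python
import numpy as np

def segmentation_metrics(y_true, y_pred, k):
    """PA (Eq. 12), MPA (Eq. 13) and MIoU (Eq. 14) from flattened label maps."""
    n = k + 1
    # cm[i, j] counts pixels of true class i predicted as class j, i.e. p_ij.
    cm = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    diag = np.diag(cm)                         # p_ii, correctly classified pixels
    pa = diag.sum() / cm.sum()                                       # Eq. (12)
    mpa = np.mean(diag / cm.sum(axis=1))                             # Eq. (13)
    miou = np.mean(diag / (cm.sum(axis=1) + cm.sum(axis=0) - diag))  # Eq. (14)
    return pa, mpa, miou

def binary_metrics(tp, fp, tn, fn):
    """A (Eq. 11), P (Eq. 16), R (Eq. 15) and F1 (Eq. 17) from raw counts."""
    a = (tp + tn) / (tp + tn + fp + fn)   # Eq. (11)
    r = tp / (tp + fn)                    # Eq. (15)
    p = tp / (tp + fp)                    # Eq. (16)
    f1 = 2 * p * r / (p + r)              # Eq. (17)
    return a, p, r, f1
```

For example, with true labels `[0, 0, 1, 1]` and predictions `[0, 1, 1, 1]` (*k* = 1), three of four pixels are correct, so PA = 0.75, while MIoU averages the per-class intersection-over-union ratios 1/2 and 2/3.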
