## 3.2.2. Ground Truth

As shown in Figure 11, the NGLs can be visually interpreted because they appear bright, light green, and aggregated over the tree crowns. Based on several years of inventory records, the ground truth of the NGLs in the images was visually interpreted and also validated in situ. To quantify and compare the effects of the different target detection methods, a reference NGL detection map is required as the standard of measurement, i.e., the ground truth of Region 1 and Region 2, as shown in Figure 12. Table 2 tabulates the numbers of NGL and non-NGL pixels for Regions 1 and 2; the NGL pixels account for only about 3–4% of each image.

**Figure 11.** 1000 × 1300 actual images of a forest in central Taiwan: (**a**) actual image of Region 1; and (**b**) actual image of Region 2.

**Figure 12.** 1000 × 1300 ground truth of the forest in central Taiwan: (**a**) ground truth of Region 1 overlaid on the original image; (**b**) ground truth of Region 2 overlaid on the original image.


**Table 2.** Rates of NGL (sprout) and non-NGL (non-sprout) pixels in the ground truth.

#### *3.3. Evaluation of Detection Results*

The research methods used in this study were introduced in the previous sections. To validate whether the three methods proposed herein improve on the original global CEM, two methods are used to evaluate detection accuracy. The first is the ROC curve [51–53], which quantifies the detection performance of a hyperspectral algorithm. The second is Cohen's kappa [54], an evaluation method widely used in biology to measure model accuracy. For quantitative analysis, we further calculated the area under the curve (AUC) for each ROC curve and the overall accuracy (ACC).
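The AUC mentioned above can be computed from sampled (PF, PD) points by trapezoidal integration. The following is a minimal sketch, not the authors' implementation; the function name and argument order are illustrative:

```python
def auc_trapezoid(pf, pd):
    """Area under the ROC curve via trapezoidal integration.

    pf, pd : sequences of false-alarm and detection probabilities
             sampled at the same set of thresholds.
    """
    # Sort the (PF, PD) points by PF so segments run left to right.
    pts = sorted(zip(pf, pd))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0  # trapezoid on each segment
    return area
```

A perfect detector traces the point (PF, PD) = (0, 1) and yields AUC = 1, while a random detector follows the diagonal and yields AUC = 0.5.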

## 3.3.1. ROC Curve

The main concept of ROC analysis [51–53] is a binary classification model: there are only two output classes, such as positive/negative, pass/fail, or animal/non-animal. Classification requires a threshold that separates the two classes. The probability of detection (PD) and the probability of false alarm (PF) differ under different thresholds. If the threshold is too high, too many NGL pixels are classified as non-NGL; if it is too low, there are more false alarms. To characterize this trade-off, PD and PF are computed at each threshold (τ), and the resulting (PF, PD) pairs over all thresholds are plotted to obtain the ROC curve, as shown in Figure 13.

**Figure 13.** ROC Curve.
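The threshold sweep described above can be sketched as follows. This is an illustrative implementation under the assumption that the detector produces one score per pixel (higher meaning more NGL-like); the function and parameter names are not from the original work:

```python
import numpy as np

def roc_points(scores, truth, n_thresholds=100):
    """Sweep thresholds over detector scores and return PF, PD per threshold.

    scores : detector outputs, one per pixel (higher = more NGL-like)
    truth  : binary ground-truth labels (1 = NGL, 0 = background)
    """
    scores = np.asarray(scores, dtype=float)
    truth = np.asarray(truth, dtype=bool)
    thresholds = np.linspace(scores.min(), scores.max(), n_thresholds)
    pd_list, pf_list = [], []
    for tau in thresholds:
        detected = scores >= tau
        pd_list.append(detected[truth].mean())    # PD: fraction of NGL pixels detected
        pf_list.append(detected[~truth].mean())   # PF: fraction of background flagged
    return np.array(pf_list), np.array(pd_list), thresholds
```

Plotting PD against PF over the sweep gives the ROC curve of Figure 13; the lowest threshold yields (PF, PD) = (1, 1) and a sufficiently high threshold approaches (0, 0).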

The optimum threshold (*τ*), which separates the NGLs from the background, depends on PD and PF and is defined as:

$$
\tau = \arg\max_{\tau}\left(P_D(\tau) + \left(1 - P_F(\tau)\right)\right) \tag{11}
$$

Since a larger PD and a smaller PF are both desirable, the optimum threshold (*τ*) is the one that maximizes this criterion. From the detection result and the actual situation, an error matrix can then be constructed.

Generally, the weights of PD and (1 − PF(τ)) in Equation (11) are both 0.5; since the weights are identical, they are often omitted. The weights can be adjusted depending on the application, as in Equation (12).

$$\tau = \arg\max_{\tau}\left(a \cdot P_D(\tau) + b \cdot \left(1 - P_F(\tau)\right)\right) \tag{12}$$
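The weighted threshold selection of Equation (12) can be sketched as below, assuming PD and PF have already been sampled over a common set of thresholds; the function name and the returned objective value are illustrative additions:

```python
import numpy as np

def optimal_threshold(pd, pf, thresholds, a=0.5, b=0.5):
    """Choose tau maximizing a*PD(tau) + b*(1 - PF(tau)), as in Equation (12).

    With a = b = 0.5 this reduces to the unweighted criterion of
    Equation (11) up to a constant factor.
    """
    objective = a * np.asarray(pd) + b * (1.0 - np.asarray(pf))
    best = int(np.argmax(objective))           # index of the best threshold
    return thresholds[best], objective[best]
```

Raising `a` relative to `b` favors detection power at the cost of more false alarms, and vice versa.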

## 3.3.2. Cohen's Kappa

Cohen's kappa coefficient is a statistical method for measuring the agreement between two classifications. In image processing, the performance of a detector is generally measured by the ROC curve, whereas Cohen's kappa is widely used in biology for the same purpose. Cohen's kappa evaluates agreement from the binarized detection result, using the same error matrix as the ROC curve to calculate the kappa value.

According to Table 3, Cohen's kappa can be defined as

$$K = \frac{P_o - P_\varepsilon}{1 - P_\varepsilon} = 1 - \frac{1 - P_o}{1 - P_\varepsilon} \tag{13}$$

$$P_o = \frac{P_a + P_d}{P_a + P_b + P_c + P_d} = \frac{P_a + P_d}{N} \tag{14}$$

$$P_\varepsilon = P_{\text{Yes}} + P_{\text{No}} = \frac{P_a + P_b}{N} \cdot \frac{P_a + P_c}{N} + \frac{P_c + P_d}{N} \cdot \frac{P_b + P_d}{N} \tag{15}$$

where $P_o$ represents the observed agreement (observed proportionate agreement) and $P_\varepsilon$ represents the expected agreement (probability of random agreement). The *K* value ranges from −1 to 1; if *K* is smaller than 0, the detection result is worse than a random prediction.
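Equations (13)–(15) can be computed directly from the error-matrix counts. The sketch below assumes the cell labeling of Table 3 (a = NGL detected as NGL, b = missed NGL, c = background detected as NGL, d = background detected as background); this layout is an assumption, not confirmed by the table itself:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa from error-matrix counts (Equations (13)-(15)).

    Assumed Table 3 layout: a = true positives, b = false negatives,
    c = false positives, d = true negatives.
    """
    n = a + b + c + d
    p_o = (a + d) / n                                   # observed agreement, Eq. (14)
    p_e = ((a + b) / n) * ((a + c) / n) \
        + ((c + d) / n) * ((b + d) / n)                 # chance agreement, Eq. (15)
    return (p_o - p_e) / (1.0 - p_e)                    # Eq. (13)
```

Perfect agreement (b = c = 0) gives K = 1, while a detector no better than chance gives K = 0.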


**Table 3.** Error Matrix of Cohen's kappa.
