*3.3. Classification & Accuracy Assessment*

Here, the results are presented in the order of the decision processes used for classification and validation. In the object-oriented classification approach, after the initial segmentation step (see Methodological Framework Step 3: Classification), the feature space optimization tool of eCognition defined the combination of object features that best differentiates the classes, i.e., the combination giving the highest possible separability between classes, as illustrated by the features chosen to classify a QuickBird-2 image (Figure 7). In this particular case, the feature space optimization tool chose the standard deviation of G-NDVI, the mean G-NDVI, and the maximum difference between all bands and band indices to differentiate floating kelp canopy from the submerged seaweed/kelp, glint/waves, and water sample classes. Mean G-NDVI alone can differentiate kelp from all other classes present in the image; however, the feature space optimization also chose the standard deviation of G-NDVI and the maximum difference between all bands and band indices because, when combined, they can differentiate among all classes (Figure 7). For the majority of the dataset, the feature space optimization tool selected between three and ten features depending on the image; generally, these included the mean of the red-edge band (when available), the mean of the near-infrared band, and/or the mean of the band indices.
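The logic of such a feature-selection step can be sketched as an exhaustive search over feature combinations, scoring each combination by its worst-case (minimum) pairwise class separation. This is only an illustrative sketch under stated assumptions: the sample values are invented, and the separation metric (distance between class means scaled by the summed standard deviations) stands in for eCognition's internal separability measure, which may differ.

```python
# Sketch of feature-space optimization: search all feature combinations and
# keep the one with the largest minimum class-to-class separation.
# Sample values and the separation metric are hypothetical.
from itertools import combinations
import math

# Hypothetical training objects: class -> list of per-object feature values
samples = {
    "kelp":      [{"mean_gndvi": 0.62, "sd_gndvi": 0.08, "max_diff": 0.9},
                  {"mean_gndvi": 0.58, "sd_gndvi": 0.10, "max_diff": 0.8}],
    "submerged": [{"mean_gndvi": 0.21, "sd_gndvi": 0.03, "max_diff": 0.4},
                  {"mean_gndvi": 0.25, "sd_gndvi": 0.04, "max_diff": 0.5}],
    "water":     [{"mean_gndvi": 0.02, "sd_gndvi": 0.01, "max_diff": 0.1},
                  {"mean_gndvi": 0.05, "sd_gndvi": 0.02, "max_diff": 0.2}],
}

def class_stats(cls, feats):
    """Per-feature mean and standard deviation for one class."""
    cols = [[obj[f] for obj in samples[cls]] for f in feats]
    means = [sum(c) / len(c) for c in cols]
    sds = [math.sqrt(sum((x - m) ** 2 for x in c) / len(c))
           for c, m in zip(cols, means)]
    return means, sds

def separation(c1, c2, feats):
    """Distance between class means, scaled by the pooled spread."""
    m1, s1 = class_stats(c1, feats)
    m2, s2 = class_stats(c2, feats)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(m1, m2)))
    return dist / ((sum(s1) + sum(s2)) or 1e-9)

features = ["mean_gndvi", "sd_gndvi", "max_diff"]
classes = list(samples)

# Best combination = the one maximizing the *minimum* pairwise separation,
# so every pair of classes remains distinguishable.
best = max(
    (combo for n in range(1, len(features) + 1)
           for combo in combinations(features, n)),
    key=lambda combo: min(separation(a, b, combo)
                          for a, b in combinations(classes, 2)),
)
print(best)
```

Optimizing the minimum pairwise separation (rather than the average) mirrors the behaviour described above, where a second and third feature are retained because only the combination separates every class pair.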

**Figure 7.** A three-axis scatter plot of the top three features chosen by the Feature Space Optimization tool showing the separability of classes: the standard deviation (St. Dev) of G-NDVI; mean G-NDVI; and the maximum difference between all input bands (G, R, NIR, and G-NDVI). The example is from a QuickBird-2 image.

After selecting the optimal features, we ran the classification using the nearest neighbor algorithm, followed by an evaluation of the classification results considering users', producers', and overall accuracies (Table 5). The overall accuracy for all sensors ranged from 88% to 94% (Table 5). Generally, producers' and users' accuracies for kelp were high (from 83% to 96% and from 90% to 100%, respectively; Figure 8C,D). Producers' accuracy for non-kelp classes was also high (from 89% to 100%). The lowest scores occurred within the non-kelp users' accuracy (from 64% to 100%), with errors occurring where floating kelp was misclassified as water (see example in Figure 8A,B). Lastly, we found no apparent differences when comparing the accuracy assessments that used concurrent versus non-concurrent validation data for QuickBird-2, GeoEye-1, and WorldView-2 (Table 5).
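The three accuracy measures reported above follow directly from a confusion matrix of map labels against reference (validation) labels. The sketch below uses an invented two-class matrix, not the study's actual validation counts, to show how each measure is derived.

```python
# Accuracy metrics from a hypothetical confusion matrix:
# rows = classified map labels, columns = reference (in situ) labels.
# Counts are invented for illustration only.
matrix = {
    #            reference:  kelp  non-kelp
    "kelp":     {"kelp": 80, "non-kelp": 3},
    "non-kelp": {"kelp": 5,  "non-kelp": 36},
}
classes = list(matrix)
total = sum(v for row in matrix.values() for v in row.values())

# Overall accuracy: correctly classified points over all validation points.
overall = sum(matrix[c][c] for c in classes) / total

# Users' accuracy (per map class): diagonal / row total.
# Measures how often a class shown on the map is actually present in situ.
users = {c: matrix[c][c] / sum(matrix[c].values()) for c in classes}

# Producers' accuracy (per reference class): diagonal / column total.
# Measures how often a real feature on the ground is correctly mapped.
producers = {c: matrix[c][c] / sum(matrix[r][c] for r in classes)
             for c in classes}

print(f"overall accuracy     = {overall:.2f}")            # 116/124 -> 0.94
print(f"users' kelp accuracy = {users['kelp']:.2f}")      # 80/83  -> 0.96
print(f"producers' kelp acc. = {producers['kelp']:.2f}")  # 80/85  -> 0.94
```

Note that the row totals give the mapped points per class and the column totals give the reference points per class, so users' accuracy reflects commission error and producers' accuracy reflects omission error.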

We used, on average, 124 validation points (85 kelp points and 39 non-kelp points), except for the RapidEye and aerial imagery classifications, for which only nine validation points each were available; thus, even though high accuracy was achieved, caution is recommended when interpreting those results. Landsat-5, the lowest-resolution satellite included in the validation (30.0 m), had accuracy similar to that of the higher-resolution satellites (Table 5); however, it produced the lowest users' accuracy for non-kelp targets (64%). Upon inspection, small, thin fringing forests in steep nearshore areas were misclassified as water or omitted due to the coarse resolution of the lowest-tide land mask.

**Table 5.** A summary of the accuracy assessment where users' accuracy (%) refers to how often classes (non-kelp and kelp) on the map are present in situ, and producers' accuracy (%) refers to how often real features (non-kelp and kelp) on the ground are correctly classified on the maps.


**Figure 8.** Examples of (**A**,**B**) kelp that was misclassified as water, and examples of (**C**,**D**) kelp that was correctly classified during the accuracy assessment. Image sources: (**A**,**D**) Environment and Climate Change Canada; (**B**,**C**) Gendall, L.
