#### 2.2.1. Step 1: Imagery Dataset

Archived high- to medium-resolution (0.5–60.0 m) multispectral satellite imagery was compiled from 20 satellite sensors spanning 1973 to 2021 through diverse sources, including open data, private data-sharing agreements and commercial acquisition (Table 1). Archived imagery is not necessarily collected under optimal conditions for mapping, so special consideration should be given to any factors that may lead to inaccurate maps of floating kelp forests, such as clouds, tidal height, glint, shadow, haze, water turbidity, waves, algal blooms and the time of imagery acquisition [19,25,43,45,47]. To minimize possible inaccuracies, a set of assessment criteria was developed, and each image was visually scored on six categories: glint, waves, shadow, cloud, month of acquisition and tidal height. Each category was scored from 0 to 3, with lower scores indicating better quality. For instance, ideal conditions (a score of 0) consisted of an acquisition time between June and October, a low tidal height (<3 m above the lower low-water large tide chart datum) and minimal presence (<5%) of glint, waves, shadow and cloud within the nearshore areas of the imagery where floating kelp forests are found [25,38]. The category scores were then summed, and images showing suboptimal conditions for detection (overall score ≥ 7) were removed from the dataset. Importantly, the quality scores should be interpreted in the context that more recent imagery is intrinsically more reliable for mapping floating kelp forests because of its higher spatial resolution.
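The screening logic above can be sketched as a simple scoring filter. This is an illustrative sketch, not the authors' exact implementation; the category names mirror the text, and the example values are hypothetical.

```python
SUBOPTIMAL_THRESHOLD = 7  # images scoring >= 7 overall were removed

def screen_image(scores):
    """Return True if an image passes the quality screening.

    `scores` maps each criterion (glint, waves, shadow, cloud,
    month, tide) to a 0-3 score, where 0 is ideal.
    """
    if any(not 0 <= s <= 3 for s in scores.values()):
        raise ValueError("each criterion is scored from 0 to 3")
    # The six category scores are summed; images at or above the
    # suboptimal threshold are dropped from the dataset.
    return sum(scores.values()) < SUBOPTIMAL_THRESHOLD

# Example: a July image at low tide with slight glint passes screening
image = {"glint": 1, "waves": 0, "shadow": 0, "cloud": 0, "month": 0, "tide": 0}
print(screen_image(image))  # True: overall score of 1
```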

**Table 1.** Medium- to high-resolution satellite imagery used to develop the methodological framework (NDVI: the normalized difference vegetation index. G-NDVI: NDVI with green instead of red. RE/G: band ratio between red-edge and green. R/Y: band ratio between red and yellow. R/G: band ratio between red and green. NIR/G: band ratio between near-infrared and green).




#### 2.2.2. Step 2: Preprocessing

After selecting the optimal images for mapping, the following techniques were applied to reduce geometric and radiometric uncertainties and to enhance the spectral signal of floating kelp, improving classification accuracies [25]. Geometric distortions occur in imagery due to errors during acquisition, such as variations in the altitude, attitude and velocity of the satellite, earth curvature, atmospheric refraction and nonlinearities in the satellite path [44,67]. All selected images were checked for geometric distortions against the ESRI satellite base map in the WGS 1984 coordinate system, and those with distortions were georectified in ArcGIS using ground control points and nearest neighbor interpolation [44,68]. Where images overlapped, later images were georectified to the previously rectified images to ensure the best match. The root mean squared error (RMSE) was calculated to evaluate the quality of each georectification, with a threshold of less than two pixels deemed acceptable, except for the Landsat imagery, which was held to one pixel because of its coarser spatial resolution.
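The georectification check reduces to an RMSE over ground control point residuals, expressed in pixel units. A minimal sketch follows; the coordinate arrays and pixel size are hypothetical, not values from the study.

```python
import numpy as np

def gcp_rmse_pixels(rectified_xy, reference_xy, pixel_size_m):
    """RMSE of ground-control-point residuals, converted to pixel units."""
    residuals = rectified_xy - reference_xy  # offsets in metres, per GCP
    rmse_m = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
    return rmse_m / pixel_size_m

# Hypothetical GCP coordinates (metres) for a 2.0-m resolution image
rectified = np.array([[100.0, 200.0], [150.0, 250.0]])
reference = np.array([[101.0, 200.0], [150.0, 249.0]])
rmse = gcp_rmse_pixels(rectified, reference, pixel_size_m=2.0)
print(rmse < 2.0)  # True: within the two-pixel acceptance threshold
```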

Following the georectification, images were evaluated for radiometric/atmospheric issues, which may impact the band indices used and, consequently, the imagery classification outputs [69,70]. When possible, we obtained atmospherically corrected images from suppliers (Table 1). For the other imagery, a simple approach considering a histogram shift determined by the Rayleigh scattering factor was applied [71], hereafter referred to as the Rayleigh correction. This approach considers that the scattering intensity is inversely proportional to the fourth power of the wavelength (*λ*−4), and assumes that the darkest pixels in an image, corresponding to shadowed or offshore deep-water areas, should have null reflectance; however, because of Rayleigh scattering, nonzero values are recorded [71]. To account for the Rayleigh scattering, these nonzero values are subtracted from the spectral signal of each specific satellite band, considering the spectral relationship between bands according to the Rayleigh factor (*λ*−4) [71]. The initial step is to define the lowest reflectance value in the blue band acquired from the darkest (lowest reflectance) pixels within the image (*Bc*), which is consequently subtracted from each pixel in the blue band. If no blue band was available (i.e., SPOT 4 and 5, Landsat-1 to 3), the value from the darkest pixels in the green band was divided in half to account for the lower proportion of Rayleigh scattering occurring and used in place of the blue *Bc*. Then, for each remaining visible band, the following equation is used to calculate the Rayleigh correction value (*Rc*):

$$Rc = \frac{1/\lambda_{vis}^{4}}{1/\lambda_{b}^{4}} \times Bc$$

where *λ<sub>b</sub>* represents the mean wavelength (nm) of the blue band, and *λ<sub>vis</sub>* represents the mean wavelength (nm) of whichever visible band the equation is being used to calculate the correction for (e.g., 560 nm when correcting the green band of Geoeye-1). To ensure images were properly corrected, we evaluated the corrected reflectance of water and floating kelp for a subset of imagery from each sensor and compared them to the known reflectance for floating kelp and water from the literature [25,44,62].
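Under these definitions, the correction can be sketched in a few lines. The band-dictionary layout and the nominal band-centre wavelengths below are assumptions for illustration, and the fallback for sensors without a blue band (half of the darkest green value) is omitted for brevity.

```python
import numpy as np

def rayleigh_correct(bands, wavelengths, blue="blue"):
    """Dark-pixel Rayleigh correction sketch.

    bands: band name -> 2-D reflectance array
    wavelengths: band name -> mean wavelength (nm)
    """
    bc = bands[blue].min()  # darkest pixel in the blue band
    corrected = {}
    for name, reflectance in bands.items():
        # Scattering scales with wavelength^-4, so the subtracted
        # value shrinks for longer-wavelength bands.
        rc = bc * (wavelengths[blue] / wavelengths[name]) ** 4
        corrected[name] = reflectance - rc
    return corrected

# Nominal band centres (nm), chosen here for illustration only
out = rayleigh_correct(
    {"blue": np.array([[0.05, 0.10]]), "green": np.array([[0.08, 0.12]])},
    {"blue": 480.0, "green": 560.0},
)
print(round(out["blue"].min(), 3))  # 0.0: darkest blue pixel zeroed
```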

Following the required corrections, images were subjected to: (i) a lowest-tide land mask; (ii) a deep-water mask; and (iii) a soft-substrate mask to eliminate areas where floating kelp forests are not found, and to minimize processing time and false positives. Vegetation and intertidal seaweed on land have a high near-infrared (NIR) reflectance compared to kelp [72,73], and therefore removing these features enhances the ability to digitally differentiate floating kelp from water through contrast enhancement [25]. We created the land mask using an object-based segmentation (Trimble eCognition Developer V8.64) on the imagery with the lowest tide. For each resolution of imagery used in this study, we added a buffer of one pixel to minimize land-reflectance adjacency effects along the shoreline. To eliminate any areas where floating kelp forests are unable to grow [74,75], a 20-m deep-water mask was created using a bathymetry dataset from the Canadian Hydrographic Service [38,46,76]. Lastly, we masked shallow soft-sediment bottom, which is uninhabitable for kelp [77], using overlapping areas defined as soft sediment in the BC Marine Conservation Analysis (BCMCA) benthic classes dataset [78] and the DFO bottom patch model [77].
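The three masks combine into a single per-pixel candidate area. The boolean sketch below assumes the rasters have already been co-registered to a common grid; the variable names and example pixels are hypothetical.

```python
import numpy as np

def kelp_candidate_mask(is_land, depth_m, is_soft_sediment):
    """Keep only pixels that are water, at most 20 m deep and not
    soft sediment -- the areas where floating kelp forests can occur."""
    return ~is_land & (depth_m <= 20.0) & ~is_soft_sediment

# Four hypothetical pixels: land, rocky shallows, deep water, soft bottom
is_land = np.array([True, False, False, False])
depth_m = np.array([0.0, 5.0, 30.0, 10.0])
is_soft = np.array([False, False, False, True])
print(kelp_candidate_mask(is_land, depth_m, is_soft))  # only the rocky shallows pixel remains True
```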

The final task in the preprocessing workflow was to select the combination of bands, band indices and/or band ratios that performs best in the classification step. The normalized difference vegetation index (NDVI, i.e., the normalized difference between the reflectance of the near-infrared and red bands [79]) was selected because it is commonly used to enhance floating kelp canopies in imagery from the Sentinel-2, Landsat and SPOT satellites [19,25,46,48,50,62–66] (Table 1). Additionally, for any imagery acquired by a sensor that had not previously been used for floating kelp forest detection in the literature, the M-statistic, a measure of class separability [80], was calculated to define other possible band indices and ratios. A high M-statistic represents high separability between two classes, with significant separation when M is larger than 1.0 [80,81]. For each sensor, we combined the two to three highest-scoring band indices, or ratios, with the visible bands and visually assessed the combinations to choose the one that provided the best overall classification results (Table 1). The selected bands, band indices and ratios were then linearly enhanced to maximize the spectral signal of floating kelp for the final input into the classification.
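For reference, the M-statistic is the absolute difference in class means normalized by the sum of the class standard deviations. The NDVI sample values below are hypothetical, chosen only to illustrate a well-separated pair of classes.

```python
import numpy as np

def m_statistic(class_a, class_b):
    """Class separability: M = |mean_a - mean_b| / (std_a + std_b).
    M > 1.0 indicates significant separation between the two classes."""
    return abs(class_a.mean() - class_b.mean()) / (class_a.std() + class_b.std())

# Hypothetical NDVI samples for floating kelp versus water
kelp_ndvi = np.array([0.35, 0.42, 0.38, 0.40])
water_ndvi = np.array([-0.10, -0.05, -0.08, -0.12])
print(m_statistic(kelp_ndvi, water_ndvi) > 1.0)  # True: well-separated classes
```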

#### 2.2.3. Step 3: Classification

An object-based image analysis (OBIA) was used, following the recommendation for classifying dense floating kelp forests [25]. The OBIA approach combined a multiresolution segmentation with a supervised nearest neighbor classification in the Trimble eCognition Developer (V8.64) software to classify floating kelp forests within the imagery. OBIA offers several advantages over pixel-based classification methods: it has shown better accuracy than pixel-based methods across a range of spatial resolutions and ecosystems [82–86] and allows the size of objects to be scaled so that object size remains relatively constant across different resolutions [84]. Within the eCognition software, OBIA also allows features to be defined beyond the pixel values of the input data, including the mean and standard deviation of object radiometry, object size and shape, and the spatial relationships of objects. These features are not considered in pixel-based classification; using them increases separability among classes and reduces the contribution of noise to the classification relative to pixel-based methods [87]. In the OBIA, the final enhanced bands, band indices and/or band ratios for each image were subjected to a multiresolution segmentation (Scale: Table 2; Shape: 0.3; Compactness: 0.5) to group similar pixels into objects. From those objects, training classes corresponding to floating kelp, submerged kelp/understory seaweed, water, glint/waves, cloud, shadow and shallow water were defined using expert knowledge. Figure 3 shows examples of the most common classes used in the OBIA. Of note, not all classes were present in all imagery; in particular, some classes, such as glint/waves and submerged kelp/understory seaweed, were indistinguishable in the medium-resolution satellite imagery.
Because classes varied by image, the feature space optimization tool in the eCognition software was used to mathematically calculate the best number and combination of object features, such as spatial, spectral and contextual information (e.g., the mean of bands/band indices, the standard deviation of bands/band indices, the maximum difference across all values of all bands/band indices), to separate classes based on training samples [88]. This tool compares features of different sample classes to find the optimal combination producing the largest average minimum distance between samples, which is then used when categorizing the remaining image objects into those classes [88]. The ability of the features chosen by the feature space optimization tool to separate classes was evaluated using boxplots and three-axis scatter plots of the top three selected features for a subset of images. Following this evaluation, we performed a nearest neighbor classification, using the optimal features defined by the feature space optimization tool, to categorize the remaining image objects into their respective classes. Before validation, the outputs were visually subjected to a quality assessment using a knowledge-based approach in which erroneous classifications were manually reclassified in ArcGIS. Lastly, for the validation step, the outputs of the classification were converted into two binary classes: floating kelp forests (1) and all other classes (0).
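The nearest neighbor step can be illustrated without eCognition: each segmented object, described by a feature vector, takes the class of its closest training sample in feature space. The two-feature description and the numeric values below are assumptions for illustration, not features from the study.

```python
import numpy as np

def classify_objects(object_feats, train_feats, train_labels):
    """Assign each image object the label of its nearest training
    sample (Euclidean distance in feature space)."""
    # Pairwise distances: objects along axis 0, training samples along axis 1
    dists = np.linalg.norm(object_feats[:, None, :] - train_feats[None, :, :], axis=2)
    return train_labels[dists.argmin(axis=1)]

# Objects described by (mean NDVI, NDVI standard deviation)
train = np.array([[0.40, 0.05],    # floating kelp training sample
                  [-0.10, 0.02]])  # water training sample
labels = np.array([1, 0])          # 1 = floating kelp, 0 = all other classes
objects = np.array([[0.38, 0.04], [-0.08, 0.01], [0.05, 0.03]])
print(classify_objects(objects, train, labels))  # [1 0 0]
```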

**Figure 3.** The most common class types used in the object-based image classification for high-resolution (QuickBird-2 images from 2013 and 2017 at 2.6 m resolution) and medium-resolution (Landsat image from 1990 at 30.0 m resolution) satellite imagery.


**Table 2.** The different scale factors used to determine object size in the segmentation step of the classification.
