#### *2.3. Remote Sensing Acquisition*

Acquisitions were made by Hytech-Imaging (Plouzané, Brittany, France) using a NEO HySpex Mjolnir V-1240 sensor (Oslo, Norway) (Table 1). The sensor was mounted on an octocopter UAV based on the Gryphon Dynamics X8 architecture (Figure 2), with a gStabi H16 stabilizer, and contained an Applanix APX15 inertial unit with an L1/L2 GPS receiver and a Tallysman L1/L2 GPS antenna enabling geolocation. The UAV and the central acquisition unit of the sensor were remotely controlled via a radio link.

**Table 1.** Characteristics of the hyperspectral visible near infrared (VNIR) Mjolnir V-1240 sensor. FOV = field of view.


**Figure 2.** UAV octocopter used for the acquisitions.

Acquisitions were performed on 24 June 2021 at a flight height of 64 m to obtain a spatial resolution of 2 cm (Table 2). Two technicians were involved in the image acquisition: one to pilot the UAV and one to operate the hyperspectral sensor. The acquisition lasted about 30 min. The flight plan was designed to cover a subsection of the site of Porspoder, including all of the field sampling spots (Figure 1).


**Table 2.** Parameters of the aerial survey.

Images (Figure 3) were collected between 09h28 and 09h47 UTC at low tide (tidal coefficient 92, corresponding to a tidal range of 6.1 m). During the acquisitions, light was diffuse due to cloud cover.

**Figure 3.** Porspoder orthophoto (RGB) obtained during the flight on 24 June 2021. Detailed sections of the color image illustrating the different bathymetric levels on the shore are represented: Pc-Fspi (red square), An (yellow square), Fser (black square) and He-Ld (blue square).

#### *2.4. Pre-Processing*

To obtain a georeferenced image in spectral radiance (W·m<sup>−2</sup>·sr<sup>−1</sup>·µm<sup>−1</sup>), the hyperspectral image was processed from raw data (level 0) to a radiometrically and geometrically calibrated image (level 1c) using the HYPIP (HYPerspectral Image Preprocessing) chain of Hytech-Imaging, which includes the ATCOR/PARGE software applications (ReSe Applications, Wil, Switzerland). To calculate the surface reflectance, atmospheric corrections were performed in a two-step process: first using the ATCOR-4 software, and then empirically adjusting each spectrum. For this adjustment, gain and bias coefficients were calculated per spectral band by linear regression between the surface reflectance data and a reference reflectance signature. This reflectance signature was obtained from pre-calibrated targets (tarps) positioned near the area of interest and overflown during the survey.
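The per-band empirical adjustment described above can be sketched as follows. This is a minimal NumPy illustration under our own naming, not the actual HYPIP/ATCOR workflow; the function names and array layout are assumptions for clarity.

```python
import numpy as np

def empirical_line_coefficients(atcor_reflectance, target_reflectance):
    """Per-band gain/bias by linear regression between ATCOR-derived
    reflectance and the known reflectance of pre-calibrated tarps.

    atcor_reflectance : (n_samples, n_bands) reflectance after ATCOR-4
    target_reflectance: (n_samples, n_bands) reference tarp reflectance
    Returns (gain, bias), each of shape (n_bands,).
    """
    n_bands = atcor_reflectance.shape[1]
    gain = np.empty(n_bands)
    bias = np.empty(n_bands)
    for b in range(n_bands):
        # degree-1 polyfit: slope = gain, intercept = bias for this band
        gain[b], bias[b] = np.polyfit(atcor_reflectance[:, b],
                                      target_reflectance[:, b], 1)
    return gain, bias

def apply_correction(spectrum, gain, bias):
    """Apply the per-band linear adjustment to one spectrum."""
    return gain * spectrum + bias
```

The regression is fitted once per band over the tarp pixels, then applied to every pixel spectrum of the scene.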

#### *2.5. Data Classification*

For this study, supervised classifications were performed, in which categories (classes) correspond to spectral signatures defined by the user. Each class contains a characteristic spectral signature of a dominating fucoid species, macroalgal group or abiotic component, and corresponds to homogeneous regions delineated on the UAV image. The software then assigns each pixel of the image to the cover type whose signature it most closely resembles [60]. The supervised classifications were performed after defining regions of interest (ROIs), which served as training data. ROIs were created for each class using the 'ROI tool' in ENVI version 5.6.1 (Exelis Visual Information Solutions, Boulder, CO, USA) by manually circling pixel areas on the image. More than one training ROI was usually used to represent a particular class (ROIs = multiple polygons) (Table 3). The number of polygons and pixels per class depended on the surface occupied by each species. For example, covers of *P. canaliculata* and *F. spiralis* are low compared to those of *F. serratus* or *H. elongata*, which represent more homogeneous and larger classes. Classes were selected in agreement with the hyperspectral image and pictures taken during field sampling.

**Table 3.** Number of ROIs and pixels for each class.


Nine classes were thus defined for the site of Porspoder (Figure 4): five classes of dominating Fucales ('*Pelvetia canaliculata*', '*Fucus spiralis*', '*Ascophyllum nodosum*', '*Fucus serratus*' and '*Himanthalia elongata*'), two classes of green and red seaweeds ('Green' and 'Red', respectively), a 'Substratum' class grouping bedrock, boulders, gravel and sand, and a 'Water' class gathering the immersed parts of the shore (pools or subtidal zone). The 'Substratum' and 'Water' classes were classified in the same way as the other classes and were subsequently removed from the maps to improve their clarity and interpretation. Because benthic fauna could not be accurately identified on the UAV image, no dedicated class was created for it, and the corresponding areas were grouped into the 'Substratum' class.

Training data (i.e., ROI mean spectra) were checked for class separability using the Jeffries–Matusita distance [61]. The resulting values between each pair of classes range from 0 to 2, with values greater than 1.9 indicating almost perfect separability [46]. A large class separability indicates that accurate training areas have been selected, whereas values approaching zero suggest either the need for more training areas or classes that are inherently similar in their spectral properties.
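Under the usual assumption that each class is modelled as a multivariate Gaussian, the pairwise Jeffries–Matusita distance can be sketched as below. This is an illustrative NumPy implementation, not the one used by ENVI; the function name is ours.

```python
import numpy as np

def jeffries_matusita(m1, cov1, m2, cov2):
    """Jeffries-Matusita distance between two Gaussian classes.

    m1, m2   : class mean vectors, shape (n_bands,)
    cov1, cov2: class covariance matrices, shape (n_bands, n_bands)
    Returns a value in [0, 2]; > 1.9 indicates near-perfect separability.
    """
    cov = (cov1 + cov2) / 2.0
    dm = m1 - m2
    # Bhattacharyya distance between the two Gaussian distributions
    term1 = 0.125 * dm @ np.linalg.inv(cov) @ dm
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    b = term1 + term2
    return 2.0 * (1.0 - np.exp(-b))
```

Identical class statistics yield 0, while widely separated means drive the value toward the saturation limit of 2.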

**Figure 4.** Hierarchical decision tree used to define the classes, inspired by Congalton et al. (1999) [62].

Two supervised classification methods were performed to test the representativeness of the spectral classes using ENVI version 5.6.1 (Exelis Visual Information Solutions, Boulder, CO, USA): maximum likelihood classification (MLC) and spectral angle mapper (SAM).

MLC calculates the probability that an individual pixel belongs to a specific class, based on a probability density function estimated from the defined reference classes [63]. MLC is a popular classifier [64]. The use of spectral profiles by this method requires ROIs based on multiple pixels: the classification relies on the selection of the most representative spectral profiles among ROIs of the same class across different flight lines. The MLC classifier assumes a Gaussian distribution for each input training class [65] and can be expressed by the following equation:

$$g\_i(\mathbf{x}) = \ln p(\omega\_i) - \frac{1}{2} \ln \left|\Sigma\_i\right| - \frac{1}{2}(\mathbf{x} - \mathbf{m}\_i)^{\mathrm{T}} \Sigma\_i^{-1} (\mathbf{x} - \mathbf{m}\_i) \tag{1}$$

where *i* is a given spectral class, **x** is the n-dimensional pixel vector, *p*(*ω<sup>i</sup>*) is the probability that class *ω<sup>i</sup>* occurs in the image (assumed equal for all classes), |Σ*<sup>i</sup>*| is the determinant of the covariance matrix of the data in class *ω<sup>i</sup>*, Σ*<sup>i</sup>*<sup>−1</sup> is its inverse matrix and **m***<sup>i</sup>* is the mean vector. The advantage of MLC as a parametric classifier is that it accounts for the variance–covariance within the class distributions and, for normally distributed data, MLC performs better than the other known parametric classifiers [66]. However, for data with a non-normal distribution, the results may be unsatisfactory.
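The Gaussian discriminant in Equation (1) can be sketched as follows. This is a minimal NumPy illustration of the principle, not the ENVI implementation; function names and the equal-prior assumption follow the text above.

```python
import numpy as np

def mlc_discriminant(x, mean, cov, prior):
    """Discriminant g_i(x) of Eq. (1) for one Gaussian class."""
    # slogdet is numerically safer than log(det(cov))
    _, logdet = np.linalg.slogdet(cov)
    d = x - mean
    return (np.log(prior)
            - 0.5 * logdet
            - 0.5 * d @ np.linalg.inv(cov) @ d)

def mlc_classify(x, means, covs, priors):
    """Assign pixel x to the class with the largest discriminant."""
    scores = [mlc_discriminant(x, m, c, p)
              for m, c, p in zip(means, covs, priors)]
    return int(np.argmax(scores))
```

In practice, the means and covariances would be estimated from the ROI training pixels of each class, and every image pixel would be passed through `mlc_classify`.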

SAM identifies the spectral similarity between two spectra collected from an image or drawn from a spectral library [67]. The classification is based on the angular orientation of spectral vectors [29]. Pairs of spectra (reference and classified) can be compared regardless of differences in brightness, as they are treated as vectors in an n-dimensional space [68]. SAM is expressed by the following equation, taken from Kruse et al. (1993) [67]:

$$\alpha = \cos^{-1}\left[\frac{\sum\_{i=1}^{nb} t\_i r\_i}{\left(\sum\_{i=1}^{nb} t\_i^2\right)^{\frac{1}{2}} \left(\sum\_{i=1}^{nb} r\_i^2\right)^{\frac{1}{2}}}\right] \tag{2}$$

where *t* is the spectrum of a pixel, *r* is the reference spectrum, α is the spectral angle between *t* and *r* (measured in radians or degrees) and *nb* is the number of bands.
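Equation (2) reduces to the angle between two vectors, which makes the brightness invariance explicit: scaling a spectrum leaves the angle unchanged. A minimal sketch (illustrative only, not the ENVI implementation):

```python
import numpy as np

def spectral_angle(t, r):
    """Spectral angle (radians) between pixel spectrum t and reference r,
    following Eq. (2): alpha = arccos(t.r / (|t| |r|))."""
    cos_a = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r))
    # clip guards against tiny floating-point excursions outside [-1, 1]
    return np.arccos(np.clip(cos_a, -1.0, 1.0))
```

A pixel is assigned to the reference (endmember) spectrum giving the smallest angle; multiplying `t` by any positive constant leaves the result unchanged, which is why SAM is insensitive to illumination differences.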

The average spectral reflectance curves of the ROIs were extracted, since SAM requires endmember spectra. When several ROIs represented the same class, their spectra were averaged to obtain one curve based on the maximum possible amount of data. Using spectra derived directly from the image is usually better than using ground or library spectra, because it better accounts for errors related to atmospheric corrections, calibration and sensor response effects [29].

For both classifications (SAM and MLC), no detection threshold was applied, so that every pixel was assigned to a class.
