## 2.6.2. Spectral Indices (SI)

New spectral indices (SI) were constructed from the most prominent absorption and reflectance features (peaks and valleys of the 2nd derivative, respectively) of the transformed spectrum. These indices emphasize the distribution of the different vegetation species in the salt marsh. Each SI is calculated as a normalized difference, according to Equation (2):

$$\mathrm{SI}_{B2-B1} = \frac{B2 - B1}{B2 + B1} \tag{2}$$

where $\mathrm{SI}_{B2-B1}$ is the calculated spectral index, B1 is the wavelength presenting the absorption feature, and B2 is the wavelength presenting the reflectance feature. This normalization highlights contrasts between the two bands that are not readily visible in the original reflectance values.
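Equation (2) can be sketched as a small helper; the band values below are hypothetical reflectances chosen only for illustration:

```python
import numpy as np

def normalized_difference(b2, b1):
    """Normalized-difference spectral index, Equation (2): (B2 - B1) / (B2 + B1).

    b2: reflectance at the wavelength of the reflectance feature.
    b1: reflectance at the wavelength of the absorption feature.
    """
    b1 = np.asarray(b1, dtype=float)
    b2 = np.asarray(b2, dtype=float)
    denom = b2 + b1
    # Guard against division by zero on pixels where both bands are zero.
    return np.where(denom != 0, (b2 - b1) / denom, 0.0)

# Hypothetical pixel: strong absorption at B1, strong reflectance at B2.
si = normalized_difference(b2=0.45, b1=0.05)
```

Because the difference is divided by the sum, the index is bounded in [-1, 1] and partially insensitive to overall brightness differences between pixels.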

## 2.7. Validation

## 2.7.1. Spectral Signatures

The spectral responses of our study area were previously studied in 2014–2015 (FAST project, [75]). In the FAST project, each sampling point was a 1 × 1 m area where five reflectance measurements were made using a field hyperspectral radiometer for the VNIR range (400–1000 nm).

The FAST spectral measurements are utilised here as a reference spectral library to identify species based on their spectral features. All spectra from our classification results are compared to the library using the Spectral Analyst tool from ENVI, which assigns a similarity score to each spectrum in the library; the library spectrum with the highest score is taken as the closest match (i.e., the most confident spectral identification). This analysis considers only the wavelength range available in the FAST library (400–1000 nm).
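The library-matching step can be illustrated with a minimal sketch. ENVI's Spectral Analyst combines several measures internally; the version below uses only the spectral angle, rescaled so that identical spectra score 1.0. The species names and 4-band spectra are hypothetical placeholders, not values from the FAST library:

```python
import numpy as np

def spectral_angle(s, r):
    """Spectral angle (radians) between a test spectrum s and a reference r."""
    s, r = np.asarray(s, float), np.asarray(r, float)
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def best_match(spectrum, library):
    """Score every library entry; a smaller angle gives a higher score.

    score = 1 - angle / (pi/2), so parallel spectra score 1.0 and
    orthogonal spectra score 0.0. Returns the (name, score) best pair.
    """
    scores = {name: 1.0 - spectral_angle(spectrum, ref) / (np.pi / 2)
              for name, ref in library.items()}
    return max(scores.items(), key=lambda kv: kv[1])

# Hypothetical reference spectra restricted to four bands in 400-1000 nm.
library = {
    "Spartina": [0.05, 0.08, 0.40, 0.45],
    "Salicornia": [0.10, 0.12, 0.25, 0.30],
}
name, score = best_match([0.06, 0.09, 0.38, 0.44], library)
```

The spectral angle depends only on spectral shape, not magnitude, which is why it is a common choice when field and image spectra differ in illumination.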

## 2.7.2. Classification

Two methods are used to determine the SVM classification accuracy. The first is the comparison of the classification results with the composition of species observed at 60 randomly sampled points in the study area. To buffer small geolocation errors at such precise locations, the classified species at each point was determined as the prevalent class within a 15 cm diameter buffer around it.
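The buffer step above can be sketched as a majority vote over a circular pixel neighbourhood. The label image and the 1-pixel radius below are illustrative only; in practice the radius would be derived from the 15 cm buffer and the image ground resolution:

```python
import numpy as np
from collections import Counter

def prevalent_class(class_map, row, col, radius_px):
    """Most frequent class label within a circular buffer around (row, col).

    class_map: 2-D array of integer class labels.
    radius_px: buffer radius in pixels (e.g., 15 cm diameter converted
    using the image's ground sampling distance).
    """
    rr, cc = np.ogrid[:class_map.shape[0], :class_map.shape[1]]
    mask = (rr - row) ** 2 + (cc - col) ** 2 <= radius_px ** 2
    return Counter(class_map[mask].tolist()).most_common(1)[0][0]

# Hypothetical 5x5 label image: class 2 dominates around the centre pixel.
labels = np.array([[1, 1, 1, 1, 1],
                   [1, 2, 2, 2, 1],
                   [1, 2, 2, 2, 1],
                   [1, 2, 2, 2, 1],
                   [1, 1, 1, 1, 1]])
cls = prevalent_class(labels, 2, 2, 1)
```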

The second method is the comparison with random pixels from other sources. In total, seven comparisons are performed on seven sets of random pixels. One set is obtained from the training ROIs. The other six sets are generated from the unsupervised classified image using different sampling methods: (1) two sets of stratified proportionate samples (SP), whose sizes are directly related to the class sizes; (2) two sets of equalized samples (Eq), with a fixed size regardless of the class size; and (3) two sets of random samples (R), using 10% and 20% of the total pixels.
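The difference between the SP and Eq sampling schemes can be sketched as follows; the label vector, sampling fraction, and per-class size are hypothetical values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for a reproducible sketch

def stratified_proportionate(labels, fraction):
    """SP: per-class sample size proportional to the class size."""
    idx = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        n = max(1, round(fraction * members.size))
        idx.append(rng.choice(members, size=n, replace=False))
    return np.concatenate(idx)

def equalized(labels, n_per_class):
    """Eq: fixed per-class sample size regardless of the class size."""
    idx = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        n = min(n_per_class, members.size)
        idx.append(rng.choice(members, size=n, replace=False))
    return np.concatenate(idx)

# Hypothetical classified image flattened to a label vector:
# class 0 covers 80 pixels, class 1 covers 20.
labels = np.array([0] * 80 + [1] * 20)
sp = stratified_proportionate(labels, 0.10)  # 8 pixels of class 0, 2 of class 1
eq = equalized(labels, 10)                   # 10 pixels of each class
```

SP preserves the class proportions of the map, while Eq gives rare classes the same weight as dominant ones in the accuracy assessment.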

For each comparison, the accuracy is determined from (1) the overall accuracy, calculated by counting the correctly classified values and dividing by the total number of values; (2) the producer accuracy, the probability that a value of a given reference class is classified correctly, calculated by dividing the number of correctly classified values in each class by the total number of reference values in that class; and (3) the user accuracy, the probability that a value predicted as a given class actually belongs to it, calculated by dividing the number of correctly classified values by the total number of values predicted as that class.
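The three measures can be computed from a confusion matrix; the 2-class matrix below is hypothetical, and the convention that rows hold the reference classes and columns the predicted classes is an assumption of this sketch:

```python
import numpy as np

def accuracy_metrics(conf):
    """Overall, producer, and user accuracy from a confusion matrix.

    conf[i, j] = number of pixels of reference class i predicted as class j
    (rows = reference, columns = prediction; an assumed convention).
    """
    conf = np.asarray(conf, dtype=float)
    diag = np.diag(conf)                  # correctly classified per class
    overall = diag.sum() / conf.sum()     # trace / grand total
    producer = diag / conf.sum(axis=1)    # correct / reference totals
    user = diag / conf.sum(axis=0)        # correct / predicted totals
    return overall, producer, user

# Hypothetical confusion matrix for two classes.
conf = [[40, 10],
        [5, 45]]
overall, producer, user = accuracy_metrics(conf)
```

Producer accuracy reads along the reference totals (omission errors), while user accuracy reads along the predicted totals (commission errors), which is why the two can differ markedly for the same class.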
