*2.6. Habitat Classification*

The habitat mapping relied on the superspectral capabilities of the WorldView-3 sensor. In addition to the pansharpened optical (VIS+NIR) reflectance, the contribution of the eight MIR bands was tested for landscape mapping (but not seascape mapping, given the strong absorption of MIR radiation by water); this required radiometric and geometric corrections tailored to the pansharpening enhancement (see [19] for WorldView-3 pansharpening). The resulting 16 spectral bands at 0.3 m were used as input predictors for a supervised classification based on the commonly used probabilistic maximum likelihood (ML) algorithm. This learner assumes that the statistics for each class in each spectral band are normally distributed, enabling the probability that a given pixel belongs to a specific class to be estimated.
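The per-class Gaussian assumption above can be sketched as follows: fit a multivariate normal to the calibration pixels of each class, then assign each pixel to the class with the highest log-likelihood. This is a minimal illustration, not the study's implementation; the function names and the small covariance regularisation term are assumptions added for numerical stability.

```python
import numpy as np

def fit_ml(X, y):
    """Estimate per-class mean and covariance from calibration pixels.
    X: (n_pixels, n_bands) spectra; y: (n_pixels,) class labels."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        # Small ridge on the diagonal (illustrative choice) to keep the
        # covariance invertible with correlated bands.
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        sign, logdet = np.linalg.slogdet(cov)
        params[c] = (mu, np.linalg.inv(cov), logdet)
    return params

def predict_ml(X, params):
    """Assign each pixel to the class maximising the Gaussian log-likelihood."""
    classes = list(params)
    scores = np.empty((X.shape[0], len(classes)))
    for j, c in enumerate(classes):
        mu, icov, logdet = params[c]
        d = X - mu
        # Log-likelihood up to a constant shared by all classes.
        scores[:, j] = -0.5 * (logdet + np.einsum('ij,jk,ik->i', d, icov, d))
    return np.array(classes)[scores.argmax(axis=1)]
```

In practice the 16 pansharpened bands would form the columns of `X`, with one row per training pixel drawn from the calibration regions of interest.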

The 3000 calibration pixels per class were used to build the ML model, while the 3000 validation pixels per class were reserved for computing the confusion matrix (CM), from which the omission error, commission error, and overall accuracy (OA) were derived. These accuracy metrics were based only on the multi-colour rectangular regions of interest (see Figure 1b), not on the whole scene. The omission error corresponded to the rate at which ground-truth sites were erroneously omitted from their correct class in the classified map, while the commission error corresponded to the rate at which sites were erroneously included in a class to which they did not belong. The OA and CM were used to analyze gain patterns at the scene and class scales, respectively. First, the VIS, NIR and MIR spectral contributions were assessed by adding, respectively, the Coastal and Yellow bands, the RE-NIR1-NIR2 bands, and the MIR1-MIR2-MIR3-MIR4-MIR5-MIR6-MIR7-MIR8 bands to the basic BGR combination. Second, the spatial contribution of the land-sea DSM was evaluated for the three spectral combinations. Third, the best predictions were further analyzed at the class level.
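The accuracy metrics above follow directly from the confusion matrix: with reference classes on the rows and predicted classes on the columns, omission error is one minus the row-wise proportion on the diagonal, commission error is one minus the column-wise proportion, and OA is the diagonal sum over the total. A minimal sketch (function name and layout are illustrative, not from the study):

```python
import numpy as np

def confusion_metrics(y_true, y_pred, classes):
    """Confusion matrix (rows = reference, cols = predicted) plus
    per-class omission error, commission error, and overall accuracy."""
    idx = {c: i for i, c in enumerate(classes)}
    k = len(classes)
    cm = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1
    diag = np.diag(cm)
    omission = 1 - diag / cm.sum(axis=1)    # missed from the reference class
    commission = 1 - diag / cm.sum(axis=0)  # wrongly assigned to the class
    oa = diag.sum() / cm.sum()
    return cm, omission, commission, oa
```

Running this once per band combination (BGR alone, then with the VIS, NIR, and MIR additions, each with and without the land-sea DSM) yields the scene-scale OA comparisons and the class-scale CM analysis described above.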
