Article

Flying Robots Teach Floating Robots—A Machine Learning Approach for Marine Habitat Mapping Based on Combined Datasets

1 Institute of Marine Biological Resources and Inland Waters (IMBRIW), Hellenic Centre for Marine Research (HCMR), 16452 Athens, Greece
2 Institute of Marine Biology, Biotechnology and Aquaculture (IMBBC), Hellenic Centre for Marine Research (HCMR), 71500 Heraklion, Greece
3 Institute of Oceanography (IO), Hellenic Centre for Marine Research (HCMR), 71500 Heraklion, Greece
4 Cretaquarium, Hellenic Centre for Marine Research (HCMR), 71500 Heraklion, Greece
5 CESAM—Centre for Environmental and Marine Studies, Geosciences Department, University of Aveiro (UA), Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
6 Centre of Engineering and Product Development (CEiiA), 4450-017 Matosinhos, Portugal
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(3), 611; https://doi.org/10.3390/jmse13030611
Submission received: 10 February 2025 / Revised: 12 March 2025 / Accepted: 14 March 2025 / Published: 19 March 2025

Abstract

Unmanned aerial and autonomous surface vehicles (UAVs and ASVs, respectively) are two emerging technologies for the mapping of coastal and marine environments. Using UAV photogrammetry, the sea-bottom composition can be resolved with very high fidelity in shallow waters. At greater depths, acoustic methodologies have far better propagation properties compared to optics; therefore, ASVs equipped with multibeam echosounders (MBES) are better suited for mapping applications in deeper waters. In this work, a sea-bottom classification methodology is presented for mapping the protected habitat of the Mediterranean seagrass Posidonia oceanica (habitat code 1120) in a coastal subregion of Heraklion (Crete, Greece). The methodology implements a machine learning scheme, in which knowledge obtained from UAV imagery is embedded (through training) into a classifier that utilizes acoustic backscatter intensity and features derived from the MBES data provided by an ASV. Accuracy and precision scores greater than 85% against visual census ground-truth data, for both the optical and acoustic classifiers, indicate that this hybrid mapping approach is a promising way to mitigate the depth-induced bias of UAV-only models. The latter is especially interesting in cases where the studied habitat boundaries extend beyond the depths that can be studied via aerial devices’ optics, as is the case with P. oceanica meadows.

1. Introduction

Posidonia oceanica is an endemic seagrass species of the Mediterranean Sea that forms complex marine meadows and provides significant ecosystem services for ocean life, such as a suitable habitat and nursery grounds, water purification and oxygenation, carbon sequestration, and coastal protection [1,2]. Posidonia meadows were identified as a priority habitat type for conservation and management, aiming to reduce the negative impacts of increasing coastal urbanization, the uncontrolled use of destructive fishing methods, climate change, and the invasion of alien species (Habitats Directive 92/43/EEC, 1992). Given the documented lack of relevant information in parts of the Mediterranean Sea and an estimated regression of meadows in the order of 34% over the last 50 years [3], the effectiveness of protection and restoration measures has to be regularly monitored using cost-, time-, and effort-effective tools in order to detect changes in the extent and distribution of meadows in a timely manner. At the same time, monitoring Posidonia meadows in shallow coastal areas is very important as an indicator of the habitat’s health, but is often subject to technical and operational constraints. The formation of dead Posidonia matte (i.e., dead rhizomes and roots, dead leaves, and drift epibionts) is also an important habitat component, functioning as an additional carbon sink, a food source for detritus feeders, and a natural banquette reducing the impact of wave energy [2]. Therefore, the extent and variability of areas where dead Posidonia matte aggregates is also a useful parameter worth monitoring with habitat mapping tools.
Habitat mapping is defined as the identification, delineation, and documentation of the spatial distribution of habitats in a specific geographical area [4]. Considering benthic habitat mapping, a variety of methods exist for data acquisition and analysis, including remote sensing via optical or acoustic means and field surveys by scientific divers [5], statistical methods for data integration/classification, and Geographic Information Systems (GIS) for spatial analysis and map production [4]. Methodologies based on the use of remote sensing for mapping the habitat of Posidonia meadows have been implemented using both optical [6,7,8] and acoustic [9,10,11,12] means, often accompanied by field surveys by scientific divers or some other ground-truthing (GT) approach. As expected, each family of methods has some advantages and some limitations [5,13], and as a result, methods combining both acoustic and optical techniques have emerged [14,15,16,17].
Advances in robotics, aerial drones, autonomous vehicles, and AI-driven analytics are providing marine science with powerful tools for more efficient, accurate, and large-scale data collection [18,19]. These technologies improve ocean monitoring and analysis, minimize the need for human intervention in remote or challenging environments, and significantly reduce operational costs. Robotics and autonomous systems contribute to high-resolution data collection [20], while AI facilitates data integration, pattern recognition, and predictive modeling, making complex ocean data more accessible and actionable [21]. The combined use of these technologies presents both opportunities and challenges for the development of novel sampling and analysis strategies.
Methodologies appropriate for coastal applications are preferable, if not necessary, for mapping the habitat of Posidonia meadows. Small vessels, such as autonomous surface vehicles (ASVs), allow for better maneuverability in the coastal topography and can more easily traverse shallow waters [8,22]. In an alternative approach, the use of unmanned aerial vehicles (UAVs) with high-performance optics provides a low-cost method to retrieve habitat information at very high fidelity [23]. However, this method is constrained by weather conditions [23] and, more importantly, has a limited range of application with respect to bottom depth [24,25,26], restricting its use to depths shallower than the full potential depth range of Posidonia meadows.
The current study, performed as part of the project NAUTILOS (Horizon 2020; GA 101000825), aims to enhance temporal regularity and spatial resolution in marine monitoring and takes into account the Intergovernmental Oceanographic Commission Criteria and Guidelines on the Transfer of Marine Technology (UN SDG 14). More specifically, the study aims to develop an integrative method for mapping Posidonia meadows by combining optical data collected by a UAV, acoustic data collected by a portable MBES mounted on an ASV, and visual census (VC) datasets from scientific divers. Following data acquisition, the study employs a machine learning approach for the habitat mapping of Posidonia meadows. Independent classifiers for the optical and acoustic data are used in a transfer-learning scheme, where the high-fidelity photogrammetry dataset is utilized to improve the performance of a habitat classifier based solely on acoustic information. The advantages of this approach are twofold: (a) the methodology produces a model based on portable-MBES data that performs on par with a photogrammetric classifier, and (b) following an initial training process, the acoustic classifier can operate based only on MBES data, in conditions beyond the capability of UAV optics, considering mainly the bottom depth but also the dynamic optical characteristics of coastal waters. The study focuses on maximizing the benefits of each component, while critically evaluating the limitations of this hybrid approach.

2. Materials and Methods

2.1. Study Area

The survey operations were carried out in the Gournes area (35.33549° N, 25.28038° E), near the HCMR infrastructure in Heraklion, Crete, Greece (Figure 1).

2.2. Data Acquisition and Ground-Truthing

The field operations were carried out between the 20th and 26th of October 2023 and included (a) a UAV-based photogrammetry survey, (b) an ASV-mounted MBES survey, and (c) a VC conducted by scientific divers for GT. Due to operational challenges caused by interference between the radio communication systems used by the UAV, the ASV, and their respective ground control stations (GCS), the UAV survey had to be terminated early. As a result, the original dataset presented an incomplete overlap between the UAV photogrammetry and MBES survey polygons. To address this limitation, an additional aerial survey was conducted on 4 July 2024. A detailed manual comparison of the photogrammetry datasets from the two survey dates revealed no significant variations in habitat conditions within the overlapping mosaic areas. Consequently, the more recent dataset was selected for analysis to maximize the spatial intersection between the MBES and photogrammetry mosaics. This approach also allowed for the optimal integration of all GT data collected during the study.
The UAV utilized in this study was the DJI Mavic 2 Pro quadcopter, which has a takeoff weight of 900 g. The UAV was equipped with a high-performance Hasselblad L1D20c camera mounted on a fully stabilized three-axis gimbal, ensuring superior image stability and quality. The camera features a 1-inch CMOS RGB image sensor and 35 mm lens with a maximum aperture of f/2.8, enabling excellent light sensitivity and image clarity. To conduct the aerial survey, the DJI Pilot GCS software ver. 1.14 was employed, allowing for the precise execution of flight paths along 22 parallel transects. These transects were evenly spaced at 20 m intervals to ensure comprehensive area coverage. The UAV maintained a consistent flight height of 50 m above mean sea level and operated at a speed of 5 m/s. These parameters were selected to achieve a 70% image overlap between adjacent samples, which is critical for robust image stitching and analysis. Images were captured at 5 m intervals along the transects using a time-interval shot mode. The camera settings included an auto white balance configuration and shutter priority at 1/400 s, chosen to minimize motion blur and optimize image sharpness under varying lighting conditions. The survey yielded a total of 520 high-resolution RGB images, each with a resolution of 5472 × 3648 pixels (20 MP). These images formed the basis of the final dataset for subsequent analysis, providing detailed spatial and spectral information for the study area.
For the MBES survey, CEiiA’s ORCA ASV, designed for inland and coastal applications, was utilized. ORCA is a 3.4 m twin-hull ASV equipped with a 240 kHz Imagenex DT101Xi MBES, which integrates a motion reference unit and a surface sound velocity sensor. The DT101Xi was interfaced to a Differential Global Positioning System (D-GPS) module that supported Real-Time Kinematic corrections via a mobile network connection. For data acquisition, the DT101Xi was configured to utilize all 480 beams (effective beam width 0.75 degrees) with a sector size of 120 × 3 degrees, and the transmission range was set manually to 10 m. To ensure accurate depth measurements, a Valeport SVP sound velocity sensor was also deployed manually at 21 locations within the survey area from HCMR’s RHIB “IOLKOS”. The acquired sound speed profiles were subsequently applied during the refraction correction of the MBES data.
The HCMR Scientific Diving Team performed GT operations in the study area, utilizing a combination of geotagged underwater photographs and video recordings. This methodology provided high-resolution spatial and visual data to complement the other survey techniques. The divers were equipped with a surface D-GPS unit, an underwater camera (Sony RX100V), and an underwater video system (GoPro, ver. 11), ensuring precise documentation of the surveyed habitats. At the test site, the team performed both horizontal and vertical VC transects to systematically document the underwater environment. One transect was strategically selected for detailed analysis. Along this transect, photographs were captured at predetermined intervals, targeting a diverse range of locations and habitats to ensure comprehensive coverage. Over the course of a 65 min dive, the divers successfully captured a total of 104 high-quality photographs, offering a robust dataset for subsequent analysis (Figure 2). These photographs not only provide valuable insights into the biodiversity and habitat complexity of the area but also serve as a crucial validation tool for the remote sensing and mapping efforts.
Through VC, three primary habitat types were identified: sand (soft sediment), dead matte and P. oceanica meadows. Additionally, at very shallow depths (<1 m), scattered coastal reefs were observed. However, this habitat type was excluded from the analysis as it did not overlap with the ASV survey area. To establish GT points from the VC data, all photographs were geotagged using the free software GEOSetter ver. 3.4.16 (https://geosetter.de; accessed on 1 March 2023). To ensure precise geotagging, the internal clock of the underwater camera was synchronized with the portable D-GPS time. GEOSetter then assigned geotags to the photographs by matching their timestamps with the corresponding D-GPS tracking data.

2.3. Data Processing

The UAV-based imagery was processed using Pix4Dmapper ver. 4.4.12. With this SfM (Structure from Motion) software package, 3D models and 2D raster products can be generated in a fully automated five-step process, comprising (i) alignment of the photographs, (ii) calculation of a sparse point cloud, (iii) calculation of a dense point cloud, (iv) polygonal mesh generation and texture mapping, and (v) generation of Digital Surface Models (DSMs) and ortho-rectification of the imagery [27]. Firstly, images were aligned with the accuracy parameter set to ‘high’. After the photo alignment, this initial bundle adjustment created sparse point clouds from the overlapping digital images. The sparse point clouds included the position and orientation of each camera and the 3D coordinates of all image features. The internal camera geometry was modeled through self-calibration during the bundle adjustment [28]. Subsequently, dense point clouds were built based on multi-view stereopsis algorithms with high quality and mild depth filtering. After filtering the dense point clouds by confidence (points seen in fewer than three views were removed), these were used to produce polygonal meshes and DSMs using Inverse Distance Weighting interpolation. Finally, the DSMs were used to generate ortho-rectified 8-bit RGB photomosaics of the submerged habitats.
Raw MBES data collected with the onboard DT101Xi software ver. 1.02.08 were reprocessed with BeamWorx Autoclean ver. 2023 3.1.0 in order to incorporate the sound velocity profile information (improving the positioning of individual acoustic bottom samples) and to apply filtering techniques that reduce noise and remove potential artifacts and outliers from the dataset. The resulting georeferenced mosaics, with a resolution of 1 m × 1 m, for the bottom acoustic backscatter (BS) intensity (provided as an 8-bit integer) and the depth were exported in GeoTIFF format. Each mosaic extends over a total area of 74,650 m² within an enclosing rectangle of 332 m by 329 m along the X (longitude) and Y (latitude) axes, respectively. Bathymetry exhibits small variations throughout the study area, with an average depth of 5.3 m and a st. dev. of 1.5 m. Regardless, the depth mosaic was not used in the rest of the analysis, to avoid producing a depth-locked model.
For a given acoustic pulse frequency and incidence angle, the level of BS returned by the bottom’s surface mainly depends on its physical properties, i.e., its acoustic hardness and spatial roughness, and the heterogeneity of the sediment [29]. Apart from geological formations, bottom-associated flora and fauna can also be a source of heterogeneity, typically increasing the variability in the observed scattering [30]. For this reason, in addition to the MBES acoustic BS mosaic, secondary features based on the spatial structure of the BS and associated statistics were also extracted. In particular, (a) the spatial gradient, (b) the roughness (degree of irregularity, calculated as the largest difference between a central pixel and its surrounding cells), and (c) the spatial mean, variance, minimum, and maximum within 2, 5, 10, and 20 m radii were produced directly from the original BS mosaic and exported as separate GeoTIFF files. The BS gradient and roughness were computed in QGIS version 3.28.15 [31], while the other BS statistics were calculated in Python v 3.11.7 [32].
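As an illustration of the statistics step, a minimal Python sketch using SciPy is given below (the gradient and roughness layers were produced in QGIS and are not reproduced here); the array bs and the helper names are placeholders, not part of the original processing code:

```python
import numpy as np
from scipy import ndimage

bs = np.random.rand(200, 200)  # stand-in for the 1 m x 1 m BS intensity mosaic

def disk_footprint(radius_px):
    """Boolean disk used as the circular moving window."""
    y, x = np.ogrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    return (x ** 2 + y ** 2) <= radius_px ** 2

def windowed_stats(bs, radius_px):
    """Per-pixel mean, variance, min, and max of the BS mosaic within a
    circular window of the given pixel radius."""
    fp = disk_footprint(radius_px)
    return {
        "mean": ndimage.generic_filter(bs, np.nanmean, footprint=fp),
        "var": ndimage.generic_filter(bs, np.nanvar, footprint=fp),
        "min": ndimage.minimum_filter(bs, footprint=fp),
        "max": ndimage.maximum_filter(bs, footprint=fp),
    }

# With the 1 m x 1 m mosaic, the 2, 5, 10, and 20 m radii map directly to
# pixel radii; each statistic at each radius becomes one feature layer.
layers = {r: windowed_stats(bs, r) for r in (2, 5, 10, 20)}
```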
Initial testing with the VC GT data and the photogrammetry mosaics produced models with unimpressive accuracy scores. This was attributed to the spatial distribution of GT points, which were close to each other along the dive transect (Figure 2 and Figure 3), providing a poor and inhomogeneous spatial coverage of the surveyed area and an unbalanced representation of the three habitat classes. However, due to the clarity of the photogrammetry mosaic and the uncomplicated habitat structure of the studied area, habitat identification at the required class level was also possible via the direct expert annotation of the image, as verified by comparisons at the GT data locations. Therefore, it was decided to enrich the VC GT dataset with additional points (Figure 3) to provide a more uniform coverage of the surveyed area and a balanced representation of all classes. This work, carried out by a member of the HCMR Scientific Diving Team, resulted in an augmented GT dataset including the manually annotated habitat characterizations and a subsample of the original GT VC data. Unless specified, for the rest of the manuscript “GT” will be used to refer to the augmented GT dataset.

2.4. Habitat Classification Models

The main habitat classification models implemented in this work include (a) a supervised classifier based on the georeferenced photogrammetry (UAV/optical) dataset and trained with the GT data, here called the “teacher” model, (b) a supervised classifier based only on the BS-derived (ASV/acoustic) dataset and trained with predictions from the teacher model, here called the “student” model, (c) a supervised classifier based on the BS-derived (ASV/acoustic) dataset and trained with the GT data, here called the “direct” model, implemented for method validation purposes, and (d) an unsupervised classifier based on the BS-derived (ASV/acoustic) dataset, also used for method validation.
The teacher and direct models can be considered single-source models, the former optical and the latter acoustic, both trained with the GT data. The student model is a hybrid model in the sense that, although it only uses acoustic data for prediction, it was produced via training with the teacher model’s predictions. At the same time, the model was developed so that, following the training procedure, it no longer requires information from the teacher model (or other UAV data), and can be used for surveys implemented solely with the ASV-mounted MBES system. Finally, as already stated, the MBES depth information (and any corresponding secondary features) were intentionally discarded, since otherwise the produced model could only be used for predictions in the depth range of the train dataset, which is dictated by the depth-limited UAV optics, and of course the topography of the train area.
All data preparation, classifier training, cross-validation, projections, and plotting were performed with the Python scientific stack [33,34,35], and especially the scikit-learn library [36].

2.4.1. Assembly of the Feature and Label Arrays

In the case of the teacher model, feature values were produced by interpolating the R, G, and B channels of the photogrammetry mosaic, at the locations of the 242 GT points, and the corresponding habitat types were used as labels.
For the student model, the feature value set consisted of all valid values from the BS and BS secondary feature mosaics (spatial gradient, roughness, and spatial statistics), excluding any data within a one (1) meter distance from any GT point. This operation resulted in a total of 69,080 23-element feature vectors. To train the student model, teacher model predictions were utilized. To achieve this, the XY coordinate pairs corresponding to each feature vector were used to interpolate the R, G, and B channels of the photogrammetry mosaic. The interpolated RGB values were, in turn, used as input to the teacher model to predict the corresponding labels, thus producing the label array for training the student model.
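A sketch of this knowledge-transfer step is shown below, assuming the teacher classifier has already been fitted; the function name, the mosaic axes, and xy_bs (the (x, y) centres of the valid BS cells) are illustrative, not the authors' actual code:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def teacher_labels(teacher, rgb, x_axis, y_axis, xy_bs):
    """Interpolate the R, G, B mosaic channels at the BS-grid cell centres
    and let the trained teacher model predict a habitat label per cell;
    the result is the label array used to train the student model."""
    interp = RegularGridInterpolator((y_axis, x_axis), rgb)  # rgb: (ny, nx, 3)
    rgb_at_cells = interp(xy_bs[:, ::-1])  # query points ordered as (y, x)
    return teacher.predict(rgb_at_cells)   # one label per BS feature vector
```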
For the direct model, feature values were produced by interpolating the BS and BS secondary feature mosaics at the GT point locations, and the corresponding habitat types were used as labels. Due to the BS layers being smaller than the final photogrammetry mosaic, some GT points were outside the BS data and were not included, leading to a total of 148 23-element vectors for the features and corresponding labels for the direct model.
In the case of the unsupervised model, all valid values from the BS and BS secondary feature mosaics (spatial gradient, roughness, and spatial statistics) were used in the analysis, leading to a total of 73,282 23-element vectors. This is the same as the feature set of the student model, without the step of excluding data in the vicinity of the GT points.

2.4.2. Train/Test Splitting

Each feature/label set was split in a 0.8/0.2 fashion to produce the corresponding training and testing datasets. As a result of this procedure, for the teacher model, 193 points were used for training and 49 for testing, while for the student model, 55,264 points were used for training and 13,816 for testing, and for the direct model, 118 points were used for training and 30 for testing. The training datasets were then used for model parameter estimation and training, and testing datasets were withheld to assess each model’s performance against the “new” data. For the student and direct models, splitting was performed in a stratified manner to ensure a similar representation of the three habitat classes in training and testing, while for the teacher model, simple shuffling was used, since the GT data that were used were nearly perfectly balanced across the three classes.
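In scikit-learn terms, the split is a one-liner; the following sketch uses synthetic stand-ins for the feature/label arrays, and the stratify argument is dropped for the teacher model, where simple shuffling was used:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-ins for any of the feature/label sets described above.
X = np.random.rand(148, 23)
y = np.random.choice(["sand", "dead matte", "P. oceanica"], size=148)

# 80/20 split, stratified so that the three habitat classes keep similar
# proportions in the training and testing subsets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
```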

2.4.3. Optimization of Model Parameters

For each of the supervised models, a Random Forest (RF) classifier was fitted with the appropriate training dataset. RF is an ensemble learning method that fits a number of randomized decision trees (hence, “forest”) to perform a specific classification task and uses averaging across the different estimators to improve the predictive accuracy and control overfitting [37]. RF classifiers have been previously utilized for marine habitat mapping in a variety of studies [16,38,39,40,41]. Before fitting the RF models, the feature vectors were standardized by subtracting the mean and scaling to unit variance. RF hyperparameters were optimized in each case to maximize prediction power while reducing the risk of overfitting and selection bias. This was achieved via k-fold cross-validation [42], i.e., using randomized fit/validate subsets of the training dataset while performing an exhaustive grid search over a specified range of the hyperparameter space to find the parameters that, on average, produced the best model. In particular, (a) [1, 2, 3, 4, 5, 7, 10] was searched for the parameter ‘min_samples_leaf’, (b) [1, 2, 3, 4] was searched for the parameter ‘max_features’, and (c) [100, 200, 300, 500] was searched for the parameter ‘n_estimators’, over five folds of the training dataset of each classifier; for the parameter names, refer to the RandomForestClassifier of the Python scikit-learn library. The best model that was produced was re-trained on the full training dataset and used as the final classifier.
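A minimal sketch of this tuning step follows, continuing the naming of the split sketch above. Coupling the standardization to the RF in a pipeline (an assumed layout; the searched grids are those listed in the text) ensures the scaler is re-fitted inside every cross-validation fold, and with the default refit=True the best parameter combination is automatically re-trained on the full training set:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipe = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
param_grid = {
    "randomforestclassifier__min_samples_leaf": [1, 2, 3, 4, 5, 7, 10],
    "randomforestclassifier__max_features": [1, 2, 3, 4],
    "randomforestclassifier__n_estimators": [100, 200, 300, 500],
}
# Exhaustive search over the grid with 5-fold cross-validation on the
# training dataset, scoring each candidate by mean accuracy.
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)
best_model = search.best_estimator_  # re-trained on all of X_train
```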
For the unsupervised classifier, a Principal Component Analysis was applied to reduce the 23 dimensions of the feature array to three (3) principal components (PC), and a k-means clustering algorithm was then applied to the PC to separate the data into three (3) classes. The labeled output was compared with the teacher model prediction at the same locations, and each unsupervised label was associated with the habitat type with which it showed the highest percentage of overlap and renamed accordingly.
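A corresponding sketch is given below; X_all stands for the full 73,282 × 23 feature array, and standardizing the features before the PCA is an assumption carried over from the supervised models:

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Reduce the 23 features to 3 principal components, then split the cells
# into 3 clusters; cluster ids are afterwards mapped to habitat types by
# their percentage of overlap with the teacher model's predictions.
X_std = StandardScaler().fit_transform(X_all)  # X_all: (73282, 23)
pcs = PCA(n_components=3).fit_transform(X_std)
cluster_ids = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
```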

2.4.4. Model Assessment and Projections

For the supervised classifiers’ assessment, each model’s average score and its st. dev. were calculated over five individual folds of the corresponding training dataset, the score on the model-specific test dataset was computed, and the corresponding confusion matrix [43] was calculated. Accuracy, i.e., the fraction of correct predictions, was used as the score function for all RF models. The evaluation metrics used in the study include (a) the accuracy score, defined as the proportion of all correct classifications, and (b) the precision, defined as the proportion of all positive classifications that are actually positive. In particular, since this is a multi-class problem, the so-called macro-weighted precision was computed, i.e., the precision metric was calculated for each label separately in the classic binary sense, and the per-label results were averaged with weights equal to the number of instances of each class.
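These metrics map directly onto scikit-learn calls (a sketch, continuing the naming above, with best_model as the fitted classifier and y_test the withheld labels):

```python
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score

y_pred = best_model.predict(X_test)
acc = accuracy_score(y_test, y_pred)            # fraction of correct predictions
# Per-class precision averaged with weights equal to the number of true
# instances of each class ("macro-weighted" precision in the text).
prec = precision_score(y_test, y_pred, average="weighted")
cm = confusion_matrix(y_test, y_pred, normalize="true")  # rows sum to one (cf. Figure 4)
```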
In addition to the above, specifically for the teacher model, replicates of the classifier were produced for different sizes of the training dataset, in the context of a sensitivity analysis with respect to the size of the GT dataset. In particular, the GT dataset was split 25 times in a stratified manner, with the training set ratio ranging from 5% (24 GT points) to 90% (229 GT points); in each case, the remaining GT data were used for testing. The average score and its st. dev. based on five folds of the training dataset were computed for each of the 25 instances of the model, and the corresponding score against the test dataset was also evaluated. For the student and unsupervised models, the total accuracy scores and confusion matrices were also calculated based on the GT data intersecting the BS layers.
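A sketch of the sensitivity loop (X_gt and y_gt stand for the GT features/labels; each replicate clones an untrained copy of the tuned classifier):

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_score, train_test_split

records = []
for ratio in np.linspace(0.05, 0.90, 25):
    # Stratified split at the given training ratio; the remainder is the test set.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_gt, y_gt, train_size=ratio, stratify=y_gt, random_state=0)
    model = clone(best_model)                      # fresh, untrained copy
    cv = cross_val_score(model, X_tr, y_tr, cv=5)  # 5-fold score on the train subset
    test_score = model.fit(X_tr, y_tr).score(X_te, y_te)
    records.append((len(y_tr), cv.mean(), cv.std(), test_score))
# Note: at the smallest ratios, 5-fold stratified CV requires at least
# five training samples per class.
```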
The final trained models were used to produce habitat maps for the study area and the total area per habitat type was calculated. For the student, direct, and unsupervised models, the degree of overlap for each class with respect to the corresponding prediction of the teacher model was also estimated.
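Because the prediction grids use 1 m × 1 m cells, both quantities reduce to cell counting (a sketch; pred_grid, teacher_grid, and valid_mask are hypothetical label grids and a validity mask):

```python
import numpy as np

# Each valid 1 m x 1 m cell contributes 1 m^2, so the area per habitat
# class is simply the per-class cell count.
labels, counts = np.unique(pred_grid[valid_mask], return_counts=True)
areas_m2 = dict(zip(labels, counts.astype(float)))

# Overlap per class: fraction of the teacher's cells of a class that the
# other model assigns to the same class (the percentages in Table 1).
overlap = {
    lab: float(np.mean(pred_grid[(teacher_grid == lab) & valid_mask] == lab))
    for lab in labels
}
```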

3. Results

3.1. Teacher Model

For the teacher model, the training dataset accuracy score was 87.0% with a 4.5% st. dev. estimated over five folds, and the test dataset’s total accuracy score was 85.7% with a precision of 85.6%. Focusing on the test dataset (Figure 4), the model is best at classifying Posidonia meadows, with 100% classification accuracy and a 12% false-positive rate (dead matte misclassified as meadows). The sand habitat was also successfully detected, with 88% identified correctly and the rest misidentified as dead matte. Dead matte is the habitat where the model has the weakest prediction skill, with 31% of this habitat misidentified as either Posidonia meadows or sand.
The teacher model performance with respect to the training dataset size is shown in Figure 5. Although the test scores are favorable even for relatively small training datasets (<100 points), the training score and its st. dev. reveal that the model can be unstable when trained with very few data. For this specific case, training sets of roughly 130 or more GT points seem to yield models with relatively stable scores and variance.

3.2. Student Model

The student model’s training dataset accuracy score was 81.1% with a 0.2% st. dev. estimated over five folds. The test dataset total accuracy score was 82.1% and the corresponding precision was 82.0%. The classification capability to some extent mirrors that of the teacher model, albeit with different accuracy scores (Figure 6). In particular, on the test dataset, the student model was also best at classifying Posidonia meadows, with 91% accuracy and most failures misidentified as dead matte rather than sand. The model’s performance was also high for the sand habitat class, with an accuracy score of 82% and misidentifications spread between dead matte and Posidonia meadows, again favoring the former. Dead matte was the most difficult habitat for the student model to identify, with 68% accuracy and failures spread evenly between Posidonia meadows and sand.
In addition to its training/test dataset, i.e., the teacher model prediction grid, the student model’s performance was also assessed against the GT dataset. As a reminder, the GT data locations and their respective predictions were not included in the training dataset of the student model. When considering the GT dataset, the student model achieves an accuracy score of 85.1% and precision of 85.4%, with some variations compared to the previously observed prediction patterns, as revealed by the corresponding confusion matrix (Figure 4). The main difference is a much-improved accuracy in the prediction for sand (100%) and a small reduction in the accuracy of Posidonia meadows and dead matte predictions, with respective scores of 89% and 64%.

3.3. Direct and Unsupervised Models

For the direct model, the training dataset accuracy score was 60.1% with a 4.8% st. dev. estimated over five folds, and the test dataset’s total accuracy score was 63.3% with a corresponding precision of 61.8%. Posidonia meadows remained the best-predicted class, with 79% accuracy and misidentifications as sand and dead matte at a 2:1 ratio. Sand was predicted with a 75% accuracy score, with misidentifications evenly distributed between the other two classes. Dead matte identification was poor, with a score of 25% and the remaining 75% always classified as Posidonia meadows.
The unsupervised model achieved a low total accuracy score of 41% against the GT dataset, with a precision of 47.2%. The best-classified habitat type for this model was sand, although there was also a very high percentage of false positives and negatives for this class. With a total score of 23% for the positive detection of Posidonia meadows and 45% for dead matte, the unsupervised model failed to perform an accurate classification of the GT data.

3.4. Predictions (Maps/Area Calculations)

Following the performance assessment of the habitat classifiers, predictions were produced for the entire study area (Figure 7). The teacher and student model predictions were nearly identical, and both were able to reproduce the fine habitat structure seen in the aerial images (Figure 2 and Figure 3). The direct model was less accurate in this regard, the most prominent difference being the more extended area classified as Posidonia meadows and the smaller areas classified as dead matte and sand. The habitat structure produced by the unsupervised model more closely resembles that of the direct classifier, with the unsupervised model producing significantly larger areas classified as dead matte and more opaque patches in the sand class.
The aforementioned observations are also supported by the habitat spatial calculations from the produced maps (Table 1), where the teacher and student models yielded very similar results for the total area and showed a high degree of overlap for all habitat types. The direct and unsupervised classifiers deviate significantly from the teacher and student model predictions, with very different total area values for most habitat types and a very low degree of overlap.

4. Discussion

Effective habitat mapping of P. oceanica meadows can be performed by both optical (satellite or aerial) and acoustic means (most commonly single-beam echosounders, MBES, or sidescan SONAR systems). Each approach has different advantages and disadvantages, a common issue being the limited applicability of optical methods at greater depths or in cases of reduced visibility, and the arguably more demanding sampling requirements of acoustics. The latter often result in lower data quality in areas where both methods are applicable, which is especially true for the more affordable and portable MBES systems, such as those that are easier to integrate into autonomous vehicles. To overcome such issues, various approaches have been developed based on hybrid optical/acoustic techniques [14,15,16,17].
This work explores a new method for the habitat mapping of P. oceanica meadows, combining datasets collected by unmanned systems (UAVs and ASVs) capable of operating within the region of interest for this habitat, as well as VC performed by scientific divers. The purpose of this integrative method is to make the best use of the advantages of each methodology while producing a habitat classifier that is more efficient than any single approach alone, retaining a balance between absolute performance and range of application, which are equally important in an operational mapping context. The method is based on two classifiers that are decoupled with respect to input: a teacher model based on photogrammetry, and a student model based solely on acoustics. A transfer learning scheme is used to train the student model with dense projections from the teacher model (which, in turn, is trained with GT data), producing an acoustic classifier with far better performance than direct training with the GT data yields.
In the study area, the student model reached a total accuracy of 85.1% against the GT data, and 89% when considering P. oceanica meadows alone, a respective improvement of 24.9% and 25.7% compared with the direct acoustic model. Notwithstanding the initial training stage, the independence of the student model from optical data makes it applicable at any depth and, in an operational context, mapping can be supported solely by a single surveying device (the ASV). In addition, although the acoustic depth measurement was intentionally kept out of the student classifier’s feature list to produce a depth-independent model, the MBES depth information remains available and allows for the 3D mapping of the entire study area and thus the bathymetric distribution of the P. oceanica meadows, information which cannot yet be reliably retrieved by optics over the entire depth range of this habitat [24,25,26].
From a different perspective, this method makes feasible the use of a compact, low-cost MBES appropriate for autonomous use to reconstruct habitat information with similar fidelity to a highly accurate camera system. It is important to note that without the knowledge transfer from the optical dataset (teacher model), the MBES BS- and BS-derived mosaics were not adequate to produce an effective classifier from the GT data alone. Furthermore, the secondary features based on the BS mosaic at different scales proved to be necessary to obtain a high-performing classifier with the student model, which is similar to findings in other studies [15,44,45].
Due to the study area topography, it was impossible to verify the method’s applicability beyond the training depth or the validity of the predictions over a wide depth range. Although, as already discussed, depth and depth-derived features are not explicitly included in the model, range, slope, and other topography-dependent effects on MBES BS measurements remain important [46,47,48,49] and could come into play and affect the model’s performance. The model’s secondary features, based on wide spatial-scale statistics, could perhaps be more robust to such effects, especially within the depth ranges of interest for P. oceanica meadows, but the method’s applicability remains to be verified.
Considering the operational applicability of the method, one aspect is the model’s transferability to other areas. Habitat types other than those found in the training dataset of this study (rock, clay, sand of a different consistency, etc.) are expected to pose an issue for the current model instance. It would also be especially interesting to see how the method performs in the presence of different seagrass meadows, e.g., patches of Cymodocea nodosa. Each time a new habitat type is introduced to the model, retraining with a new photogrammetry dataset will be necessary. However, this additional training stage is expected to be required less and less often as the model is expanded to incorporate more habitat types. Ultimately, a training library generated by pooling data from a wide range of habitat types is expected to produce a universal model, completely obviating the need for UAV measurements.
In addition to the above, as it uses an MBES that was not calibrated for BS, the trained model is an empirical model adapted to the specific instrument used during the field survey. Using different equipment will most likely require the training stage to be repeated; the alternative would be to perform an MBES calibration procedure to standardize the model. However, BS calibration of MBES is still a subject of active research [49,50], and, depending on the task, an instrument-specific empirical model could be an acceptable compromise.
Another future step for the presented method could be improving the teacher model, e.g., by using secondary features similar to the BS spatial statistics utilized for the student classifier, or by applying image segmentation/feature extraction before the learning stage. Any improvement in the teacher model will likely benefit the student model as well and improve the methodology in general.

5. Conclusions

Following the appropriate adaptations, the presented method provides a final model which could be the workhorse in an operational mapping application. This approach makes it possible to utilize an ASV equipped with a portable MBES to produce results of similar quality to those of a UAV with an optical system. Moreover, by operating only with acoustic data, the model does not have the depth limitations of the UAV optics and also has the benefit of providing bathymetric information, which can be utilized for 3D habitat mapping. An ASV MBES survey designed to cover a region of interest can thus provide the necessary dataset for the model to produce a geo- and depth-referenced predicted habitat map. Finally, the method can be adapted to incorporate additional habitat types through ad hoc UAV surveys, allowing the model to integrate the new classes upon first encounter.
Considering the empirical nature of the produced model and the fact that all of the presented analysis can be automated, the integration of the algorithm into the ASV is also possible, providing opportunities for further automation, e.g., habitat map generation or the flagging of areas with low classification probabilities.

Author Contributions

Conceptualization: E.C., M.N., Z.K. and G.C.; formal analysis and methodology: Z.K. and G.C.; investigation: G.C., P.G., M.P., S.M., R.C., C.R.L., L.M.P., C.L., J.F., R.L. and A.S.; project administration: E.C. and C.R.L.; supervision: E.C.; writing, review, and editing: Z.K., G.C., E.C. and M.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101000825 (NAUTILOS).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to thank the three anonymous reviewers for their constructive comments which helped improve the original manuscript.

Conflicts of Interest

Authors Catarina Lemos, Caio Lomba, João Fortuna, Rui Loureiro and André Santos were employed by the company Centre of Engineering and Product Development (CEiiA). The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BS	(acoustic) Backscatter
GT	Ground-truthing
VC	Visual census
ASV	Autonomous Surface Vehicle
UAV	Unmanned Aerial Vehicle
GCS	Ground Control Station
RF	Random Forest
MBES	Multibeam Echosounder
D-GPS	Differential Global Positioning System
HCMR	Hellenic Centre for Marine Research
CEiiA	Centre of Engineering and Product Development
DSM	Digital Surface Model
PC	Principal Components

References

  1. Vizzini, S.; Sarà, G.; Michener, R.H.; Mazzola, A. The Role and Contribution of the Seagrass Posidonia oceanica (L.) Delile Organic Matter for Secondary Consumers as Revealed by Carbon and Nitrogen Stable Isotope Analysis. Acta Oecologica 2002, 23, 277–285. [Google Scholar] [CrossRef]
  2. Boudouresque, C.F.; Pergent, G.; Pergent-Martini, C.; Ruitton, S.; Thibaut, T.; Verlaque, M. The Necromass of the Posidonia oceanica Seagrass Meadow: Fate, Role, Ecosystem Services and Vulnerability. Hydrobiologia 2016, 781, 25–42. [Google Scholar] [CrossRef]
  3. Telesca, L.; Belluscio, A.; Criscoli, A.; Ardizzone, G.; Apostolaki, E.T.; Fraschetti, S.; Gristina, M.; Knittweis, L.; Martin, C.S.; Pergent, G.; et al. Seagrass Meadows (Posidonia oceanica) Distribution and Trajectories of Change. Sci. Rep. 2015, 5, 12505. [Google Scholar] [CrossRef] [PubMed]
  4. Baiocchi, V.; Cianfanelli, F.; Nocerino, E. Satellite Images and Bathymetric LiDAR for Mapping Seagrass Meadows: An Overview. In Earth Observation: Current Challenges and Opportunities for Environmental Monitoring; AIT Series. Trends in Earth Observation; Associazione Italiana di Telerilevamento: Firenze, Italy, 2024; Volume 3, pp. 87–92. [Google Scholar] [CrossRef]
  5. Lecours, V.; Devillers, R.; Schneider, D.; Lucieer, V.; Brown, C.; Edinger, E. Spatial Scale and Geographic Context in Benthic Habitat Mapping: Review and Future Directions. Mar. Ecol. Prog. Ser. 2015, 535, 259–284. [Google Scholar] [CrossRef]
  6. Topouzelis, K.; Makri, D.; Stoupas, N.; Papakonstantinou, A.; Katsanevakis, S. Seagrass Mapping in Greek Territorial Waters Using Landsat-8 Satellite Images. Int. J. Appl. Earth Obs. Geoinf. 2018, 67, 98–113. [Google Scholar] [CrossRef]
  7. Tomasello, A.; Bosman, A.; Signa, G.; Rende, S.F.; Andolina, C.; Cilluffo, G.; Cassetti, F.P.; Mazzola, A.; Calvo, S.; Randazzo, G.; et al. 3D-Reconstruction of a Giant Posidonia oceanica Beach Wrack (Banquette): Sizing Biomass, Carbon and Nutrient Stocks by Combining Field Data With High-Resolution UAV Photogrammetry. Front. Mar. Sci. 2022, 9, 903138. [Google Scholar] [CrossRef]
  8. Ventura, D.; Grosso, L.; Pensa, D.; Casoli, E.; Mancini, G.; Valente, T.; Scardi, M.; Rakaj, A. Coastal Benthic Habitat Mapping and Monitoring by Integrating Aerial and Water Surface Low-Cost Drones. Front. Mar. Sci. 2023, 9, 1096594. [Google Scholar] [CrossRef]
  9. Lo Iacono, C.; Mateo, M.A.; Gràcia, E.; Guasch, L.; Carbonell, R.; Serrano, L.; Serrano, O.; Dañobeitia, J. Very High-resolution Seismo-acoustic Imaging of Seagrass Meadows (Mediterranean Sea): Implications for Carbon Sink Estimates. Geophys. Res. Lett. 2008, 35, 2008GL034773. [Google Scholar] [CrossRef]
  10. De Falco, G.; Tonielli, R.; Di Martino, G.; Innangi, S.; Simeone, S.; Michael Parnum, I. Relationships between Multibeam Backscatter, Sediment Grain Size and Posidonia oceanica Seagrass Distribution. Cont. Shelf Res. 2010, 30, 1941–1950. [Google Scholar] [CrossRef]
  11. Di Maida, G.; Tomasello, A.; Luzzu, F.; Scannavino, A.; Pirrotta, M.; Orestano, C.; Calvo, S. Discriminating between Posidonia oceanica Meadows and Sand Substratum Using Multibeam Sonar. ICES J. Mar. Sci. 2011, 68, 12–19. [Google Scholar] [CrossRef]
  12. Tonielli, R.; Innangi, S.; Budillon, F.; Di Martino, G.; Felsani, M.; Giardina, F.; Innangi, M.; Filiciotto, F. Distribution of Posidonia oceanica (L.) Delile Meadows around Lampedusa Island (Strait of Sicily, Italy). J. Maps 2016, 12, 249–260. [Google Scholar] [CrossRef]
  13. Kenny, A.J.; Cato, I.; Desprez, M.; Fader, G.; Schüttenhelm, R.T.E.; Side, J. An Overview of Seabed-Mapping Technologies in the Context of Marine Habitat Classification. ICES J. Mar. Sci. 2003, 60, 411–418. [Google Scholar] [CrossRef]
  14. Pasqualini, V.; Pergent-Martini, C.; Clabaut, P.; Pergent, G. Mapping of Posidonia oceanica Using Aerial Photographs and Side Scan Sonar: Application off the Island of Corsica (France). Estuar. Coast. Shelf Sci. 1998, 47, 359–367. [Google Scholar] [CrossRef]
  15. Rende, S.F.; Bosman, A.; Di Mento, R.; Bruno, F.; Lagudi, A.; Irving, A.D.; Dattola, L.; Giambattista, L.D.; Lanera, P.; Proietti, R.; et al. Ultra-High-Resolution Mapping of Posidonia oceanica (L.) Delile Meadows through Acoustic, Optical Data and Object-Based Image Classification. J. Mar. Sci. Eng. 2020, 8, 647. [Google Scholar] [CrossRef]
  16. Price, D.M.; Felgate, S.L.; Huvenne, V.A.I.; Strong, J.; Carpenter, S.; Barry, C.; Lichtschlag, A.; Sanders, R.; Carrias, A.; Young, A.; et al. Quantifying the Intra-Habitat Variation of Seagrass Beds with Unoccupied Aerial Vehicles (UAVs). Remote Sens. 2022, 14, 480. [Google Scholar] [CrossRef]
  17. Panayotidis, P.; Papathanasiou, V.; Gerakaris, V.; Fakiris, E.; Orfanidis, S.; Papatheodorou, G.; Kosmidou, M.; Georgiou, N.; Drakopoulou, V.; Loukaidi, V. Seagrass Meadows in the Greek Seas: Presence, Abundance and Spatial Distribution. Bot. Mar. 2022, 65, 289–299. [Google Scholar] [CrossRef]
  18. Di Ciaccio, F.; Troisi, S. Monitoring Marine Environments with Autonomous Underwater Vehicles: A Bibliometric Analysis. Results Eng. 2021, 9, 100205. [Google Scholar] [CrossRef]
  19. Whitt, C.; Pearlman, J.; Polagye, B.; Caimi, F.; Muller-Karger, F.; Copping, A.; Spence, H.; Madhusudhana, S.; Kirkwood, W.; Grosjean, L.; et al. Future Vision for Autonomous Ocean Observations. Front. Mar. Sci. 2020, 7, 697. [Google Scholar] [CrossRef]
  20. Zereik, E.; Bibuli, M.; Mišković, N.; Ridao, P.; Pascoal, A. Challenges and Future Trends in Marine Robotics. Annu. Rev. Control 2018, 46, 350–368. [Google Scholar] [CrossRef]
  21. Ditria, E.M.; Buelow, C.A.; Gonzalez-Rivero, M.; Connolly, R.M. Artificial Intelligence and Automated Monitoring for Assisting Conservation of Marine Ecosystems: A Perspective. Front. Mar. Sci. 2022, 9, 918104. [Google Scholar] [CrossRef]
  22. Sánchez-Carnero, N.; Rodríguez-Pérez, D.; Llorens, S.; Orenes-Salazar, V.; Ortolano, A.; García-Charton, J.A. An Expeditious Low-Cost Method for the Acoustic Characterization of Seabeds in a Mediterranean Coastal Protected Area. Estuar. Coast. Shelf Sci. 2023, 281, 108204. [Google Scholar] [CrossRef]
  23. Doukari, M.; Batsaris, M.; Papakonstantinou, A.; Topouzelis, K. A Protocol for Aerial Survey in Coastal Areas Using UAS. Remote Sens. 2019, 11, 1913. [Google Scholar] [CrossRef]
  24. Del Savio, A.A.; Luna Torres, A.; Vergara Olivera, M.A.; Llimpe Rojas, S.R.; Urday Ibarra, G.T.; Neckel, A. Using UAVs and Photogrammetry in Bathymetric Surveys in Shallow Waters. Appl. Sci. 2023, 13, 3420. [Google Scholar] [CrossRef]
  25. Wang, D.; Xing, S.; He, Y.; Yu, J.; Xu, Q.; Li, P. Evaluation of a New Lightweight UAV-Borne Topo-Bathymetric LiDAR for Shallow Water Bathymetry and Object Detection. Sensors 2022, 22, 1379. [Google Scholar] [CrossRef] [PubMed]
  26. Szafarczyk, A.; Toś, C. The Use of Green Laser in LiDAR Bathymetry: State of the Art and Recent Advancements. Sensors 2022, 23, 292. [Google Scholar] [CrossRef]
  27. De Reu, J.; Bourgeois, J.; Bats, M.; Zwertvaegher, A.; Gelorini, V.; De Smedt, P.; Chu, W.; Antrop, M.; De Maeyer, P.; Finke, P.; et al. Application of the Topographic Position Index to Heterogeneous Landscapes. Geomorphology 2013, 186, 39–49. [Google Scholar] [CrossRef]
  28. Price, D.M.; Robert, K.; Callaway, A.; Lo Iacono, C.; Hall, R.A.; Huvenne, V.A.I. Using 3D Photogrammetry from ROV Video to Quantify Cold-Water Coral Reef Structural Complexity and Investigate Its Influence on Biodiversity and Community Assemblage. Coral Reefs 2019, 38, 1007–1021. [Google Scholar] [CrossRef]
  29. Huang, Z.; Siwabessy, J.; Cheng, H.; Nichol, S. Using Multibeam Backscatter Data to Investigate Sediment-Acoustic Relationships. J. Geophys. Res. Oceans 2018, 123, 4649–4665. [Google Scholar] [CrossRef]
  30. Anderson, T.; Holliday, V.; Kloser, J.; Reid, D.; Simard, Y. Acoustic Seabed Classification of Marine Physical and Biological Landscapes; ICES Cooperative Research Report No. 286; International Council for the Exploration of the Sea: Copenhagen, Denmark, 2007; 183p. [Google Scholar]
  31. QGIS Development Team QGIS Geographic Information System 2023. Available online: https://www.qgis.org (accessed on 10 December 2024).
  32. van Rossum, G.; Drake, F.L. The Python Language Reference. In Python Documentation Manual/Guido Van Rossum; Fred, L.D., Ed.; Release 3.0.1 [Repr.]; Python Software Foundation: Hampton, NH, USA, 2010; ISBN 978-1-4414-1269-0. [Google Scholar]
  33. Harris, C.R.; Millman, K.J.; Van Der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array Programming with NumPy. Nature 2020, 585, 357–362. [Google Scholar] [CrossRef]
  34. Virtanen, P.; Gommers, R.; Oliphant, T.E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nat. Methods 2020, 17, 261–272. [Google Scholar] [CrossRef]
  35. Hunter, J.D. Matplotlib: A 2D Graphics Environment. Comput. Sci. Eng. 2007, 9, 90–95. [Google Scholar] [CrossRef]
  36. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  37. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  38. Muhamad, M.A.H.; Che Hasan, R. Seagrass Habitat Suitability Models Using Multibeam Echosounder Data and Multiple Machine Learning Techniques. IOP Conf. Ser. Earth Environ. Sci. 2022, 1064, 012049. [Google Scholar] [CrossRef]
  39. Mahdavi, S.; Amani, M.; Parsian, S.; MacDonald, C.; Teasdale, M.; So, J.; Zhang, F.; Gullage, M. A Combination of Remote Sensing Datasets for Coastal Marine Habitat Mapping Using Random Forest Algorithm in Pistolet Bay, Canada. Remote Sens. 2024, 16, 2654. [Google Scholar] [CrossRef]
  40. Li, J.; Tran, M.; Siwabessy, J. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness. PLoS ONE 2016, 11, e0149089. [Google Scholar] [CrossRef]
  41. Wicaksono, P.; Aryaguna, P.A.; Lazuardi, W. Benthic Habitat Mapping Model and Cross Validation Using Machine-Learning Classification Algorithms. Remote Sens. 2019, 11, 1279. [Google Scholar] [CrossRef]
  42. Marcot, B.G.; Hanea, A.M. What Is an Optimal Value of k in K-Fold Cross-Validation in Discrete Bayesian Network Analysis? Comput. Stat. 2021, 36, 2009–2031. [Google Scholar] [CrossRef]
  43. Ting, K.M. Confusion Matrix. In Encyclopedia of Machine Learning; Sammut, C., Webb, G.I., Eds.; Springer: Boston, MA, USA, 2011; p. 209. ISBN 978-0-387-30768-8. [Google Scholar]
  44. Ahmed, K.I.; Demšar, U. Improving Seabed Classification from Multi-Beam Echo Sounder (MBES) Backscatter Data with Visual Data Mining. J. Coast. Conserv. 2013, 17, 559–577. [Google Scholar] [CrossRef]
  45. Janowski, L.; Trzcinska, K.; Tegowski, J.; Kruss, A.; Rucinska-Zjadacz, M.; Pocwiardowski, P. Nearshore Benthic Habitat Mapping Based on Multi-Frequency, Multibeam Echosounder Data Using a Combined Object-Based Approach: A Case Study from the Rowy Site in the Southern Baltic Sea. Remote Sens. 2018, 10, 1983. [Google Scholar] [CrossRef]
  46. Lurton, X.; Eleftherakis, D.; Augustin, J.-M. Analysis of Seafloor Backscatter Strength Dependence on the Survey Azimuth Using Multibeam Echosounder Data. Mar. Geophys. Res. 2018, 39, 183–203. [Google Scholar] [CrossRef]
  47. Schimel, A.C.G.; Beaudoin, J.; Parnum, I.M.; Le Bas, T.; Schmidt, V.; Keith, G.; Ierodiaconou, D. Multibeam Sonar Backscatter Data Processing. Mar. Geophys. Res. 2018, 39, 121–137. [Google Scholar] [CrossRef]
  48. Malik, M. Sources and Impacts of Bottom Slope Uncertainty on Estimation of Seafloor Backscatter from Swath Sonars. Geosciences 2019, 9, 183. [Google Scholar] [CrossRef]
  49. Trzcinska, K.; Tegowski, J.; Pocwiardowski, P.; Janowski, L.; Zdroik, J.; Kruss, A.; Rucinska, M.; Lubniewski, Z.; Schneider Von Deimling, J. Measurement of Seafloor Acoustic Backscatter Angular Dependence at 150 kHz Using a Multibeam Echosounder. Remote Sens. 2021, 13, 4771. [Google Scholar] [CrossRef]
  50. Eleftherakis, D.; Berger, L.; Le Bouffant, N.; Pacault, A.; Augustin, J.-M.; Lurton, X. Backscatter Calibration of High-Frequency Multibeam Echosounder Using a Reference Single-Beam System, on Natural Seafloor. Mar. Geophys. Res. 2018, 39, 55–73. [Google Scholar] [CrossRef]
Figure 1. Map of the coastal operations area in Heraklion (Crete) outlined by the red polygon.
Figure 2. Map of the ground-truthing diving track (dashed line) inside the area of interest (red polygon).
Figure 3. The augmented ground-truth dataset (colored markers) and the visual census positions overlaid on (a) the green photogrammetry channel and (b) the MBES backscatter intensity mosaics (unitless 8-bit integer).
Figure 4. Confusion matrices for the different classifiers against the appropriate GT data subset; counts are normalized with respect to the true values for each class (i.e., each matrix row sums to one).
Figure 5. Training (error bars equal to +/− one st. dev.) and test accuracy scores of the teacher model vs. the size of the training dataset.
Figure 6. Confusion matrix of predictions from the student classifier against the corresponding classes in the test dataset. Darker colors indicate higher prediction rates.
Figure 7. Habitat classification of the study area as predicted by the four models.
Table 1. Total area (m²) of each habitat class in the study area for the different model predictions; percentages correspond to the ratio of the overlap with the corresponding class of the teacher model to the total area of the teacher model’s predictions for the same class.

Habitat Type	Teacher	Student	Direct	Unsupervised
Dead matte	1.87 × 10⁴	1.84 × 10⁴ (93.6%)	0.97 × 10⁴ (19.3%)	3.49 × 10⁴ (56.6%)
P. oceanica	2.96 × 10⁴	3.01 × 10⁴ (98.2%)	4.58 × 10⁴ (73.4%)	1.44 × 10⁴ (22.3%)
Sand	2.25 × 10⁴	2.25 × 10⁴ (96.4%)	1.54 × 10⁴ (40.0%)	2.16 × 10⁴ (48.7%)
