Article

A Multicriteria Evaluation of Single Underwater Image Improvement Algorithms

by Iracema del P. Angulo-Fernández 1, Javier Bello-Pineda 2, J. Alejandro Vásquez-Santacruz 1,3, Rogelio de J. Portillo-Vélez 1,3, Pedro J. García-Ramírez 4 and Luis F. Marín-Urías 1,3,*
1 Faculty of Civil and Environmental Engineering, Veracruzana University, Boca del Río 94294, Veracruz, Mexico
2 Institute of Marine and Fishery Sciences Studies, Veracruzana University, Boca del Río 94294, Veracruz, Mexico
3 Faculty of Electrical and Electronics Engineering, Veracruzana University, Boca del Río 94294, Veracruz, Mexico
4 Institute of Engineering, Veracruzana University, Boca del Río 94294, Veracruz, Mexico
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(7), 1308; https://doi.org/10.3390/jmse13071308
Submission received: 29 May 2025 / Revised: 29 June 2025 / Accepted: 1 July 2025 / Published: 6 July 2025
(This article belongs to the Section Ocean Engineering)

Abstract

Enhancement and restoration algorithms are widely used to improve underwater images in the exploration of coral reefs. However, when an improvement algorithm is selected on the basis of image quality metrics alone, key image-processing factors such as execution time are not considered. In response to this issue, a novel method built on multicriteria decision analysis is presented herein that evaluates the processing time and the keypoint increase with respect to the original image. To set the Decision Matrix (DM), both the processing time and keypoint increase criteria of the evaluated algorithms are normalized. The criteria weights in the DM are set in accordance with the application, and the quantitative metric used to select the best alternative is the highest Weighted Sum Method (WSuM) score. In this work, the DMs of six scenarios are shown, since the setting of weights could completely change the decision. For this research’s target application of generating underwater photomosaics, the Dark Channel Prior (DCP) algorithm emerged as the most suitable under a weighting scheme of 75% for processing time and 25% for keypoint increase. This proposal represents a solution for evaluating improvement algorithms in applications where computational efficiency is critical.

1. Introduction

The Gulf of Mexico is an extremely complex, large, resilient marine ecosystem [1] where important coral reef areas have been reported. These ecosystems are considered critical habitats, as they provide an important range of ecosystem services to Mexico, Cuba, and the United States [2]. Recently, some authors have considered three regions as a general division of the Mexican Atlantic territory: the Reef Corridor of the Southwest Gulf of Mexico (RCSGM), the Yucatan and Campeche Bank, and the Mexican Caribbean [3]. In the RCSGM, two natural, protected areas are found: the Sistema Arrecifal Lobos-Tuxpan Flora and Fauna Protection Zone (SALT) and the Veracruz Reef System National Park (Spanish: PNSAV) [4]. There are records of fish species, stony corals, macroalgae, crustaceans, echinoderms, gastropods, native species, and also invasive species in the PNSAV; however, the health status of 50% of the coral reefs is poorly known [5]. The Census of Marine Life conducted between 2000 and 2010 collected data indicating “that at least 50% and potentially > 90 % of marine species remain undescribed by science”; however, Snelgrove [6] points out that emerging technologies are accelerating the detection of new habitats and taxonomic categorization. The ecological monitoring of coral reefs is the collection of data and information, both physical and biological, from the natural environment to detect changes in the processes or attributes measured [7,8]. This process begins by sampling a representative area of the reef that can be resampled over time to detect changes in reef conditions; hence, it is important to use standardized methods with standard sample sizes [8]. One of the most common monitoring methods to assess coral reef conditions is the Point Intercept Transect (PIT) method [9], and, as a variation of this method for research purposes, the video-transect method is applied [8]. 
Considering not every variable of the reef can be measured, indicators are used to report changes over time [8]. Indicators are metrics used to explain the causal relationship between an ecosystem attribute or process and ecosystem degradation [7]. The species–area relationship, cited in ecology as the power law, was observed in 1778 by Forster and mathematically modeled in 1921 by Arrhenius [10], increasing the interest of researchers in complexity metrics. Through pioneering research that included underwater imaging and the monitoring of ecological indicators in the PNSAV [11], a variation was found in abundance metric results, obtained from the comparison between data recorded in situ by diving and those recorded offline from videos filmed by a Remotely Operated Vehicle (ROV), and it was assumed that the image quality was affected by the visibility conditions in the water column. The quality of underwater images is affected by the presence of one or more factors such as water density, light attenuation, and the dispersion effect [12]. When working with large numbers of data, their quality is subject to issues at three points: the data source, generation, and processing. These issues compromise reliability, veracity, consistency, completeness, structure, and transmission; therefore, they require the use of data quality indicators [13] in the stage identified in our research as the underwater image preprocessing.
Underwater photomosaics are a resource of interest due to their applicability in different areas, such as inspection, detection of objects, control of underwater vehicles, studies in marine biology, and archaeological exploration [14]. One of the earlier records of underwater photomosaics is the composition assembled from the series of images captured from the Thresher submarine between 1963 and 1964, which was used to determine the possible causes of its sinking [15]. In 1987, the Woods Hole Oceanographic Institution (WHOI) built a Titanic photomosaic with approximately 100 images taken in 1985 by the Acoustically Navigated Geological Undersea Surveyor (ANGUS), requiring about 700 h to accomplish [16]. Another implementation of this technique involved the wreck site of the lumber schooner Rouse Simmons, where, in 2006, a digital video was recorded with a video camera mounted on a Diver Propulsion Vehicle (DPV), capturing 242 images that were overlaid and hand-assembled in Adobe Photoshop 7.0 to obtain plan and profile views of the boat [17]. In marine biology, photomosaics have been used to detect changes in the abundance, cover, and size of benthic organisms, using an algorithm for image registration and the estimation of image motion and camera trajectory, achieving a two-dimensional high-resolution photomosaic of the reef benthos which is ideally useful in survey areas <500 m² [18]. Image stitching, also known as image mosaicing, is a computer vision application where small-scale images are aligned to find overlaps with the aim of creating a single large-scale composition using an algorithm, known as the feature detector, to detect features or interest points (keypoints) [19]. Ancuti et al. demonstrated that a previous improvement of an underwater image increased the number of detected keypoints [14].
Many works have contributed to the improvement of a single underwater image, and can be mainly classified into two groups: image enhancement and image restoration [20,21,22]. The enhancement of underwater images incorporates image data without considering underwater image formation models [20,21,22]; meanwhile, the restoration of underwater images is governed by the criteria and prior knowledge of an underwater image formation model [20,21,22]. According to Alsakar et al., image enhancement methods can be categorized as spatial domain-based, frequency domain-based, color constancy-based, and deep learning-based [20]. Miao et al. suggest the following classifications of image restoration methods based on their underlying underwater imaging model: point spread function-based, Jaffe–McGlamery model-based, turbulence degradation model-based, and image dehazing-based [22]. Dark Channel Prior (DCP) is considered the most studied single-image dehazing-based method in the literature [23]. This approach, proposed in 2009 by He et al., recovers a scene radiance from the estimation of atmospheric light and a transmission map based on the observation “that at least one color channel has some pixels whose intensities are very low and close to zero” in haze-free outdoor images [24]. Sea-Thru [25] is an image restoration algorithm based on a revised underwater image formation model proposed by Akkaynak and Treibitz in 2018 [26] and was developed to estimate the backscatter and attenuation coefficients of an RGBD (Red, Green, Blue, Depth) image with the aim of recovering lost colors and achieving water removal [14]. Sea-Thru also works with monocular images. 
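He et al.’s dark channel observation translates into a short computation: take the per-pixel minimum across the color channels, then a local minimum over a square patch. The NumPy sketch below is only an illustration of that prior (the patch size and the brute-force minimum loop are simplifications; the full DCP method additionally estimates atmospheric light and a transmission map, which is not shown here):

```python
import numpy as np

def dark_channel(image, patch_size=15):
    """Dark channel of an H x W x 3 image: per-pixel minimum over the
    color channels, followed by a local minimum over a square patch.
    (Brute-force sketch; fast implementations use a minimum filter.)"""
    min_channel = image.min(axis=2)              # min across R, G, B
    pad = patch_size // 2
    padded = np.pad(min_channel, pad, mode="edge")
    dark = np.empty_like(min_channel)
    h, w = min_channel.shape
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch_size, j:j + patch_size].min()
    return dark
```

In a haze-free outdoor image the dark channel is close to zero almost everywhere, which is the observation DCP exploits to estimate the transmission map.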
The Color Balance and Fusion Based on White Balancing (CBFWB) algorithm [14], belonging to the group of image restoration algorithms, includes a fusion with color-correction methods [22], stands out for “better exposedness of the dark regions, improved global contrast, and edges sharpness”, and is applicable to image processing with the particularity to work in a manner that is “reasonably independent of the camera settings” [14]. For the three algorithms selected for this work, we can give the following information: (a) DCP is the basis of 22 underwater restoration methods developed from 2010 to 2019 [22]; (b) Sea-Thru is the most recently proposed underwater image formation model [26]; and (c) an image improvement method comparison conducted by Ancuti et al. in 2018 found that CBFWB outperforms other methods in terms of the means of the average values of the PCQI, UCIQE, and UIQM metrics [14].
The image quality evaluation metrics applied for quantitative analysis are the general-purpose Patch-Based Contrast Quality Index (PCQI) [27], the metric proposed by Yang and Sowmya in 2015 known as Underwater Color Image Quality Evaluation (UCIQE) [28], and the metric presented in 2016 by Panetta et al. called the Underwater Image Quality Measure (UIQM) [29], all set to perform the underwater imaging appraisal [14,22]. However, image quality evaluation metrics do not consider computational complexity in the assessment of improvement algorithms. To select an underwater image improvement algorithm in this evaluation, as a preprocessing stage in the composition of underwater photomosaics, two key factors are considered: the processing time and the keypoint increase with respect to the original image, which suggests a multicriteria decision analysis. Jadhav and Sonar carried out a comparative study of multicriteria decision-making methods applied to software selection, evaluating the Analytic Hierarchy Process (AHP), Weighted Scoring Method (WScM), and Hybrid Knowledge-Based Systems (HKBS) [30]. Since no application similar to this project has been documented heretofore, and taking as references the comparison of software-evaluation methods and the low complexity of the Decision Matrix, the score of Peter C. Fishburn’s well-known Weighted Sum Method (WSuM) is used as the decision metric; this model can be tuned to the needs of the problem. The techniques used in this work to perform feature detection using improved images are Scale-Invariant Feature Transform (SIFT), presented by Lowe in 2004; and Oriented FAST and Rotated BRIEF (ORB), proposed by Rublee et al. in 2011 [19].
SIFT is suitable for object recognition and image registration implementations because of its robustness under conditions of changes in scale, rotation, and illumination, while ORB manages to be a very useful method in the context of computational efficiency, which is why it is used in augmented-reality applications [31].
The objective of this evaluation is to select a suitable method to improve the monocular underwater image database provided by the Institute of Marine and Fishery Sciences Studies (Spanish: ICIMAP) of Veracruzana University [32]. Since this work is focused on applying computer vision techniques such as photomosaics to monitor coral reefs, the evaluation criteria are the local interest points detected and described in an image and the processing time of the improvement algorithm. Therefore, the contributions of this research are the following:
  • Application of three enhancement and restoration algorithms to the PNSAV monocular underwater image database with minimum hardware requirements;
  • Introduction of a novel multicriteria decision framework to evaluate the performance of image improvement algorithms built on two factors: detection of keypoints and processing time;
  • Incorporation of processing time as a decision criterion to address computational efficiency challenges, which are relevant for coral reef exploration applications where fast image processing is critical.
In this paper, an overall introduction is included in Section 1. Section 2 describes the materials and methods, and the results and discussion are described in Section 3. Finally, Section 4 closes this work with some conclusions and further work recommendations.

2. Materials and Methods

To implement digital image processing, commercial-grade computing equipment is used, with an AMD Ryzen 5 5500U processor (Radeon Graphics, 2.10 GHz) and 16.0 GB of RAM. Regarding the material evaluated, the images and videos were captured with GoPro HERO4 Black and Ambarella MK-1 cameras and are provided by the ICIMAP [32]. The programming language requirements and algorithm download links for reproducibility purposes are described in Table 1.
An overview diagram of the method for obtaining the keypoints and processing times, which begins by improving the original monocular images with Sea-Thru, is presented in Figure 1. The available code for Sea-Thru reduces the original size of images to optimize computational resources. For comparative purposes, the original images are resized to the Sea-Thru output dimensions prior to the implementation of the DCP and CBFWB algorithms. The processing time of each image is recorded once the improvement finishes, and the outcome image is saved to apply a feature detector. Keypoints are then detected using the SIFT and ORB techniques and the results are recorded.
A schematic diagram of the multicriteria evaluation method is sketched in Figure 2, where the increase in keypoints (obtained by the difference between the keypoints of the improved image and the keypoints of the original image resized) is the first criterion and processing time is the second criterion. Afterward, these criteria are normalized using the Min–Max method and weighted according to the research application. In the next step, a Weighted Sum Method score is quantified for each alternative, and, finally, the best choice corresponds to the highest score.
In this research, the underwater images processed with the Sea-Thru, DCP, and CBFWB algorithms comprise three experimental cases: (a) AMBA0150.jpg, an image captured with the Ambarella MK-1 camera; (b) a frame extracted from GOPR0223.mp4, a video taken with the GoPro 4 Black 4K; and (c) a sample of ten consecutive frames of the AMBA0064.mp4 video, captured with the Ambarella MK-1 camera. In the end, six different scenarios are shown to visualize the impact of weighting the criteria on the decision.

3. Results and Discussion

This section is divided into six parts to outline the experimental cases, the processing times of the improvement algorithms, the detection of feature points, and the decision matrices of the weighted normalized factors for different scenarios.

3.1. Experimental Case: AMBA0150.jpg—22/05/2018—Ambarella MK-1 Camera

The first experiment corresponds to the image AMBA0150.jpg, taken on the 22nd of May 2018. Figure 3a shows the original image, while Figure 3b shows the improved image obtained using the Sea-Thru algorithm, revealing the bright colors in the corals. Figure 3c presents the result of the application of the DCP algorithm, where there is an apparent improvement in contrast. The outcome in Figure 3d shows that the CBFWB method improves colors in a very similar way to Sea-Thru but with less contrast.

3.2. Experimental Case: Frame from GOPR0223.mp4—15/07/2018—GoPro 4 Black 4K Camera

The second case studied is the image obtained from the video GOPR0223.mp4 captured on the 15th of July 2018. Figure 4a corresponds to the frame extracted from the original video, while Figure 4b is the result of improving the image using the Sea-Thru algorithm: an image with red saturation in some pixels. Figure 4c shows that the DCP algorithm does not produce any visually perceptible change in the image. Finally, in Figure 4d, it is shown that the CBFWB method improves the contrast and improves colors uniformly.

3.3. Processing Time of Improvement Algorithms

To evaluate the performance of the image improvement algorithms, the processing time of each improvement method, applied to the first ten frames extracted as .png files from the AMBA0064.mp4 video, is considered.
It is pertinent to explain that publicly shared code is used in this research. For this reason, and for comparison purposes, a resized version of the original images is used to generate the DCP-improved and CBFWB-improved images, since the shared code for the Sea-Thru algorithm resizes the image as part of its process.
In Table 2, the processing time results can be observed, with a mean for the Sea-Thru algorithm of 3.5 s, a mean for the DCP model of 0.2 s, and a mean for the CBFWB approach of 1 s. It is noticeable that the DCP model is the fastest.
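The timing procedure described above can be sketched as follows; the `improve` callable is a placeholder standing in for any of the three algorithms, not code from the cited repositories:

```python
import time

def mean_processing_time(improve, frames):
    """Mean wall-clock time of one improvement algorithm over a frame list."""
    elapsed = []
    for frame in frames:
        t0 = time.perf_counter()
        improve(frame)                       # e.g., DCP, CBFWB, or Sea-Thru
        elapsed.append(time.perf_counter() - t0)
    return sum(elapsed) / len(elapsed)
```

`time.perf_counter()` is used rather than `time.time()` because it is a monotonic, high-resolution clock intended for interval measurement.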

3.4. Feature Point Detection with SIFT Technique

To evaluate the image improvement algorithms’ performance in terms of feature points, the first ten frames extracted as .png files from the AMBA0064.mp4 video are processed with the SIFT feature detector. In Figure 5, an example of the keypoints detected on fr2.png (fr2_rs.png for the resized version) is shown.
The SIFT technique obtains 732 keypoints for the original image shown in Figure 5a, 1002 keypoints for the image improved with the Sea-Thru algorithm displayed in Figure 5b, 858 keypoints for the frame restored with the DCP method shown in Figure 5c, and 856 keypoints for the frame improved with the CBFWB algorithm presented in Figure 5d.
A comparison of the detected feature points obtained using the SIFT technique is presented in Table 3, validating that the use of improvement algorithms increases the detection of keypoints.
The feature points obtained by the SIFT technique for each frame are plotted in Figure 6, showing that the Sea-Thru algorithm achieves an average increase of 210.8 keypoints, the DCP model an average increase of 94.3 keypoints, and, lastly, the CBFWB approach an average increase of 117.8 keypoints.

3.5. Feature Point Detection with ORB Technique

In this section, the same first ten frames extracted in .png format from the AMBA0064.mp4 video are used. An example of keypoint detection on fr2.png (fr2_rs.png for the resized version) obtained by the ORB technique is presented in Figure 7, where Figure 7a corresponds to the original frame with 397 keypoints, and Figure 7b corresponds to the frame improved using Sea-Thru with 411 keypoints. In Figure 7c, 409 keypoints can be observed for the DCP-improved image, and the 410 keypoints for the frame improved with CBFWB are displayed in Figure 7d.
In Table 4, the comparative results of the detected keypoints obtained by means of the ORB technique are presented, showing an estimated average increase of 10.2 keypoints achieved using the Sea-Thru algorithm, 8.3 keypoints achieved with the DCP model, and 9.8 keypoints achieved by applying the CBFWB approach, which indicates that Sea-Thru is the algorithm with the highest average increase in feature points. This result is validated with the chart presented in Figure 8, where Sea-Thru has the highest number of detected keypoints in 50% of the frames.

3.6. Weighted Assessment of the Processing Time and the Keypoint Increase Normalized Factors

To select an underwater image improvement algorithm to create underwater photomosaics at the preprocessing stage, two key factors are identified in this assessment: the processing time of the method and the increase in keypoints compared to the original image. Hence, the decision is approached as a multicriteria decision.
To model the Decision Matrix (DM), the first step is the normalization of the criteria using the Min–Max method presented in Equation (1):
$x'_i = \dfrac{x_i - x_{\min}}{x_{\max} - x_{\min}}$,

where $x'_i$ is the normalized value, $x_i$ is the original value, $x_{\min}$ is the minimum value of $x$, and $x_{\max}$ is the maximum value of $x$. This step is carried out for both the keypoint increase obtained by the SIFT technique and the keypoint increase recorded by the ORB technique, along with the processing times. Thereafter, the average of the normalized values is estimated for each improvement algorithm, discriminating between feature detectors. Lastly, the WSuM is applied as shown in Equation (2):
$A_i^{WSuM} = \sum_{j=1}^{n} w_j x_{ij}$,

where $A_i^{WSuM}$ is the weighted sum of the $i$-th alternative, $n$ is the total number of criteria, $w_j$ is the weight of the $j$-th criterion, and $x_{ij}$ is the normalized value of the $i$-th alternative in terms of the $j$-th criterion. In Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10, six different scenarios are shown as a means to visualize the importance of setting the weights of criteria and feature detectors according to the particularities of the application.
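Equations (1) and (2) translate directly into code. The sketch below scores the three alternatives using the mean processing times and mean SIFT keypoint increases reported in this section; note one assumption not spelled out in the equations: processing time is treated as a cost criterion, so its normalized value is inverted before weighting (lower time yields a higher score), and the per-detector averaging of the full method is omitted for brevity:

```python
def min_max(values):
    """Min-Max normalization of a list of criterion values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def wsum_scores(times, kp_increases, w_time, w_kp):
    """WSuM score per alternative (Eq. 2) over two criteria (Eq. 1).
    Assumption: time is a cost criterion, so it is inverted."""
    t_norm = [1.0 - v for v in min_max(times)]   # faster -> higher score
    k_norm = min_max(kp_increases)               # more keypoints -> higher score
    return [w_time * t + w_kp * k for t, k in zip(t_norm, k_norm)]

# Alternatives ordered as Sea-Thru, DCP, CBFWB:
# mean times 3.5 s, 0.2 s, 1.0 s; mean SIFT increases 210.8, 94.3, 117.8
scores = wsum_scores([3.5, 0.2, 1.0], [210.8, 94.3, 117.8],
                     w_time=0.75, w_kp=0.25)
```

Under this simplified scheme with the 75%/25% weighting, DCP obtains the highest score, in line with the final scenarios discussed below.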
In the DM in Table 5, the normalized average values of the alternatives with respect to the set criteria can be observed when the SIFT feature detector is used and the criteria are weighted equally. These DM results indicate that DCP achieves the best underwater image improvement performance.
The DM shown in Table 6 corresponds to the normalized average values of the alternatives regarding the set criteria, with the criteria weighted equally and the ORB feature detector used. The outcome of this DM shows that DCP has the best performance as an improvement method.
If the application has no time restriction and the weights are assigned as in the DMs in Table 7 and Table 8 (5% for processing time and 95% for keypoint increase), the results would lead to the Sea-Thru and CBFWB alternatives being chosen as the best-performing improvement algorithms when the SIFT and ORB techniques are used, respectively.
For the final scenarios, the processing time is set as the highest-weighted criterion, 75% versus 25% for the increase in keypoints, which is relevant for applications such as image preprocessing to create photomosaics, where fast processing is preferred. The DM results obtained with the SIFT technique are displayed in Table 9, where the DCP algorithm outperforms the other models. The DM of Table 10 contains the WSuM scores of the alternatives considering the two criteria, obtained using the ORB feature detector, again placing DCP as the best-performing model. In Figure 9, the global results of the DMs are presented, showing the weighted sum of each improvement method for every scenario included in the assessment.
Taking into account the processing time of the algorithms, DCP is the fastest model, with a mean of 0.2 s, followed by the CBFWB approach with an average of 1.0 s. In the end, the Sea-Thru algorithm is found to have a mean of 3.5 s. When implementing the three improvement algorithms for the AMBA0150.jpg image taken on the 22nd of May 2018, the improvements in contrast and color are visually perceptible. In the case of the frame obtained from the video GOPR0223.mp4 captured on the 15th of July 2018, the Sea-Thru algorithm produces an image with red saturation in some pixels, while the DCP algorithm does not make any visually perceptible changes in the image. The CBFWB method optically improves the contrast and the colors uniformly.
From a basic analysis of the feature detector outcomes, the Sea-Thru model has 1030 keypoints, the highest average number of features detected using the SIFT technique. With 418 keypoints, Sea-Thru and CBFWB obtain the highest mean number of features detected through the ORB method. In this research, to select the best underwater improvement algorithm under the performance criteria, the DMs of the six scenarios are presented from a multicriteria decision perspective, confirming the relevance of having clear application requirements; the setting of weights could completely change the decision. In our target application of photomosaics, where computational efficiency is critical, DCP provides the best performance. It is worth highlighting that the algorithm based on the DCP model does not make visually perceptible changes to images whose red-channel intensity values are close to 0. To verify this result, the image in Figure 4 is split into RGB channels, revealing a nearly black red channel. According to [40], a normalized value of (0, 0, 0) in the RGB color cube corresponds to black; therefore, in this experimental case, the red channel shows low intensities without variation, causing a failed transmission estimation with the DCP method, as Drews also pointed out in [41]. This phenomenon usually occurs at depths of about 5 m in turbid water and 20 m in clear water, where long-wavelength (red) light is attenuated [20].

4. Conclusions and Further Work

Applying enhancement or restoration methods to the PNSAV database increased the feature points in relation to the original images or frames extracted from a video, a benefit for image-processing applications such as keypoint matching. Recent research efforts have focused on matching the image colors to those that actually correspond to the object of study, under the premise that the colors in the image change as the depth in the ocean increases, following the electromagnetic model of light. Nonetheless, this evaluation addresses the improvement of images with another purpose: preparing the image for the identification and classification of species in a photomosaic, without making the estimation of “real” color a priority.
Based on the results generated from this evaluation, the powerful scope of the DCP algorithm is recognized. Even so, researchers are encouraged to pursue the assessment of this method for underwater monocular images taken at depths greater than 5 m, the threshold where the red color begins to attenuate. Furthermore, a general classification of improvement methods is needed, which until now has not been agreed upon by experts in this area. Additionally, it is recommended to resume research on underwater image improvement methods that do not rely on Deep Learning (due to its high hardware requirements) for preprocessing purposes with techniques such as image stitching. In further work, a third criterion built on keypoint validation could strengthen the multicriteria evaluation.

Author Contributions

Conceptualization, I.d.P.A.-F. and L.F.M.-U.; methodology, I.d.P.A.-F. and L.F.M.-U.; software, L.F.M.-U.; validation, J.B.-P.; formal analysis, P.J.G.-R.; investigation, I.d.P.A.-F.; resources, J.B.-P.; data curation, I.d.P.A.-F.; writing—original draft preparation, I.d.P.A.-F.; writing—review and editing, J.B.-P., J.A.V.-S., R.d.J.P.-V., P.J.G.-R. and L.F.M.-U.; visualization, J.A.V.-S., R.d.J.P.-V. and P.J.G.-R.; supervision, L.F.M.-U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The algorithms used in this work are mentioned in Table 1.

Acknowledgments

We gratefully acknowledge the support of the Secretaría de Ciencia, Humanidades, Tecnología e Innovación (SECIHTI), through scholarship 248463.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DCP: Dark Channel Prior
CBFWB: Color Balance and Fusion Based on White Balancing
PNSAV (Spanish): Veracruz Reef System National Park
SIFT: Scale-Invariant Feature Transform
ORB: Oriented FAST and Rotated BRIEF
DM: Decision Matrix
RCSGM: Reef Corridor of the Southwest Gulf of Mexico
SALT: Sistema Arrecifal Lobos-Tuxpan Flora and Fauna Protection Zone
PIT: Point Intercept Transect
ROV: Remotely Operated Vehicle
PCQI: Patch-Based Contrast Quality Index
UCIQE: Underwater Color Image Quality Evaluation
UIQM: Underwater Image Quality Measure
ICIMAP (Spanish): Institute of Marine and Fishery Sciences Studies
WHOI: Woods Hole Oceanographic Institution
ANGUS: Acoustically Navigated Geological Undersea Surveyor
DPV: Diver Propulsion Vehicle
RGBD: Red, Green, Blue, Depth
SURF: Speeded-Up Robust Features
AHP: Analytic Hierarchy Process
WScM: Weighted Scoring Method
HKBS: Hybrid Knowledge-Based Systems
WSuM: Weighted Sum Method

References

  1. McKinney, L.D.; Shepherd, J.G.; Wilson, C.A.; Hogarth, W.T.; Chanton, J.; Murawski, S.A.; Sandifer, P.A.; Sutton, T.; Yoskowitz, D.; Wowk, K.; et al. The Gulf of Mexico. Oceanography 2021, 34, 30–43. [Google Scholar] [CrossRef]
  2. Gil-Agudelo, D.L.; Cintra-Buenrostro, C.E.; Brenner, J.; González-Díaz, P.; Kiene, W.; Lustic, C.; Pérez-España, H. Coral reefs in the Gulf of Mexico large marine ecosystem: Conservation status, challenges, and opportunities. Front. Mar. Sci. 2020, 6, 807. [Google Scholar] [CrossRef]
  3. Santander-Monsalvo, J.; Espejel, I.; Ortiz-Lozano, L. Distribution, uses, and anthropic pressures on reef ecosystems of Mexico. Ocean. Coast. Manag. 2018, 165, 39–51. [Google Scholar] [CrossRef]
  4. Ortiz-Lozano, L.; Gutiérrez-Velázquez, A.; Aja-Arteaga, A.; Argüelles-Jiménez, J.; Ramos-Castillo, V. Distribution, threats, and management of submerged reefs in the north of the reef corridor of the Southwest Gulf of Mexico. Ocean. Coast. Manag. 2021, 201, 105428. [Google Scholar] [CrossRef]
  5. Pérez-España, H.; Vargas-Hernández, J.M.; Horta-Puga, G.; Miranda-Zacarías, J.; Vázquez-Machorro, A.; Tello-Musi, J.L.; Sánchez-Castro, J.L.; González-Baca, C.A. Reporte del estado de salud de los arrecifes: Parque Nacional Sistema Arrecifal Veracruzano; Technical report; Sea&Reef A.C.: Veracruz, Mexico, 2021. [Google Scholar]
  6. Snelgrove, P.V. An ocean of discovery: Biodiversity beyond the census of marine life. Planta Medica 2016, 82, 790–799. [Google Scholar] [CrossRef] [PubMed]
  7. Carrillo-García, D.M. Indicadores para monitorear la integridad ecológica de los arrecifes de coral:el caso del caribe mexicano. Bachelor’s thesis, Universidad Nacional Autónoma de México, Mexico City, Mexico, 2018. [Google Scholar]
  8. Hill, J.; Wilkinson, C. Methods for Ecological Monitoring of Coral Reefs; Australian Institute of Marine Science, Townsville: Cape Cleveland, QLD, Australia, 2004; Volume 117. [Google Scholar]
  9. Botello-López, F.J.; Vázquez-Camacho, C.; Mayani-Parás, F.; Vega-Orihuela, M.E.; Morales-Díaz, S.P. Protocolo para el Monitoreo Ecosistémico de Arrecifes de Coral en Áreas Naturales Protegidas; Technical Report; Comisión Nacional de Áreas Naturales Protegidas, Fondo Mexicano para la Conservación de la Naturaleza, Conservación Biológica y Desarrollo Social. A. C.: Mexico City, Mexico, 2022. [Google Scholar]
  10. Carey, M.; Boland, J.; Keppel, G. Generalized Logarithmic Species-Area Relationship Resolves the Arrhenius-Gleason Debate. Environ. Model. Assess. 2023, 28, 491–499. [Google Scholar] [CrossRef]
  11. Contreras-Juárez, M. Cambios en la estructura de la comunidad íctica de profundidades someras a mesofóticas y su relación con la complejidad topográfica en el Sistema Arrecifal Veracruzano. Master’s thesis, Instituto de Ciencias Marinas y Pesquerías, Universidad Veracruzana, Veracruz, Mexico, 2020. [Google Scholar]
  12. Cai, C.; Zhang, Y.; Liu, T. Underwater image processing system for image enhancement and restoration. In Proceedings of the 2019 IEEE 11th International Conference on Communication Software and Networks (ICCSN), Chongqing, China, 12–15 June 2019; pp. 381–387. [Google Scholar] [CrossRef]
  13. Taleb, I.; Serhani, M.A.; Bouhaddioui, C.; Dssouli, R. Big data quality framework: A holistic approach to continuous quality management. J. Big Data 2021, 8, 76. [Google Scholar] [CrossRef]
  14. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color balance and fusion for underwater image enhancement. IEEE Trans. Image Process. 2018, 27, 379–393. [Google Scholar] [CrossRef] [PubMed]
  15. Bryant, J.B. Thresher Freedom of Information Act Request Yielding Results. Proceedings. May 2019. Available online: https://www.usni.org/magazines/proceedings/2019/may/thresher-freedom-information-act-request-yielding-results (accessed on 25 May 2024).
  16. WHOI. Piecing Together Titanic. 2010. Available online: https://www.whoi.edu/multimedia/piecing-together-titanic/ (accessed on 25 May 2024).
17. Meverden, K.N.; Thomsen, T.L. Myths and Mysteries: Underwater Archaeological Investigation of the Lumber Schooner Rouse Simmons, Christmas Tree Ship; Technical Report; National Oceanic and Atmospheric Administration: Silver Spring, MD, USA, 2008. [Google Scholar]
  18. Lirman, D.; Gracias, N.R.; Gintert, B.E.; Gleason, A.C.R.; Reid, R.P.; Negahdaripour, S.; Kramer, P. Development and application of a video-mosaic survey technology to document the status of coral reef communities. Environ. Monit. Assess. 2007, 125, 59–73. [Google Scholar] [CrossRef] [PubMed]
19. Tareen, S.A.K.; Saleem, Z. A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 3–4 March 2018; pp. 1–10. [Google Scholar] [CrossRef]
  20. Alsakar, Y.M.; Sakr, N.A.; El-Sappagh, S.; Abuhmed, T.; Elmogy, M. Underwater image restoration and enhancement: A comprehensive review of recent trends, challenges, and applications. Vis. Comput. 2025, 41, 3735–3783. [Google Scholar] [CrossRef]
  21. Wang, Y.; Song, W.; Fortino, G.; Qi, L.Z.; Zhang, W.; Liotta, A. An experimental-based review of image enhancement and image restoration methods for underwater imaging. IEEE Access 2019, 7, 140233–140251. [Google Scholar] [CrossRef]
  22. Yang, M.; Hu, J.; Li, C.; Rohde, G.; Du, Y.; Hu, K. An in-depth survey of underwater image enhancement and restoration. IEEE Access 2019, 7, 123638–123657. [Google Scholar] [CrossRef]
  23. Salazar-Colores, S.; Ramos-Arreguín, J.M.; Pedraza-Ortega, J.C.; Rodríguez-Reséndiz, J. Efficient single image dehazing by modifying the dark channel prior. EURASIP J. Image Video Process. 2019, 2019, 1–8. [Google Scholar] [CrossRef]
  24. Liu, S.; Rahman, M.; Wong, C.; Lin, S.; Jiang, G.; Kwok, N. Dark channel prior based image de-hazing: A review. In Proceedings of the 2015 5th International Conference on Information Science and Technology (ICIST), Changsha, China, 24–26 April 2015; pp. 345–350. [Google Scholar] [CrossRef]
  25. Akkaynak, D.; Treibitz, T. Sea-thru: A method for removing water from underwater images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1682–1691. [Google Scholar]
  26. Akkaynak, D.; Treibitz, T. A revised underwater image formation model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6723–6732. [Google Scholar]
  27. Wang, S.; Ma, K.; Yeganeh, H.; Wang, Z.; Lin, W. A patch-structure representation method for quality assessment of contrast changed images. IEEE Signal Process. Lett. 2015, 22, 2387–2390. [Google Scholar] [CrossRef]
  28. Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071. [Google Scholar] [CrossRef] [PubMed]
  29. Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2015, 41, 541–551. [Google Scholar] [CrossRef]
  30. Jadhav, A.; Sonar, R. Analytic hierarchy process (AHP), weighted scoring method (WSM), and hybrid knowledge based system (HKBS) for software selection: A comparative study. In Proceedings of the 2009 Second International Conference on Emerging Trends in Engineering & Technology, Nagpur, India, 16–18 December 2009; pp. 991–997. [Google Scholar] [CrossRef]
31. Rodrigues, D. Computer Vision: Introduction, Fundamentals, and Practical Applications; Diego Rodrigues: Porto Alegre, Rio Grande do Sul, Brazil, 2024. [Google Scholar]
32. ICIMAP. Santiaguillo and Anegadilla Image Database. 2018. Available upon request from the author. Available online: https://www.uv.mx/veracruz/icmp/ (accessed on 23 October 2023).
  33. Python Software Foundation. Python version: 3.10.6. Available online: http://www.opensource.org (accessed on 1 August 2022).
  34. Zhang, H. Single Image Haze Removal Using Dark Channel Prior. 2016. Available online: https://github.com/He-Zhang/image_dehaze (accessed on 9 March 2023).
  35. Gibson, J. Implementation of Sea-thru by Derya Akkaynak and Tali Treibitz. 2020. Available online: https://github.com/hainh/sea-thru (accessed on 9 March 2023).
  36. The MathWorks Inc. MATLAB version: 25.1.0.2852912 (R2025a) Prerelease Update 3, Natick, Massachusetts: The MathWorks Inc. Available online: https://www.mathworks.com (accessed on 12 February 2025).
  37. fergaletto. A Matlab Implementation of: Color Balance and Fusion for Underwater Image Enhancement. 2020. Available online: https://github.com/fergaletto/Color-Balance-and-fusion-for-underwater-image-enhancement (accessed on 24 April 2024).
  38. Marín-Urías, L.F. SIFT. 2025. Available online: https://github.com/lfmarin/Underwater_Image_Improvement/blob/main/benchmark/SIFT_benchmark.py (accessed on 15 March 2025).
  39. Marín-Urías, L.F. ORB. 2025. Available online: https://github.com/lfmarin/Underwater_Image_Improvement/blob/main/benchmark/ORB_benchmark.py (accessed on 15 March 2025).
  40. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 4th ed.; Pearson India Education Services: Chennai, Tamil Nadu, 2022; p. 420. [Google Scholar]
  41. Drews, P.; Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission estimation in underwater single images. In Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 2–8 December 2013; pp. 825–830. [Google Scholar]
Figure 1. Overview diagram of the method: improvement algorithm implementation, processing time recording, and keypoint detection.
Figure 2. Schematic diagram of the proposed multicriteria evaluation method.
Figure 3. Comparative results of applying improvement algorithms to Ambarella MK-1 Camera image: (a) Original image. (b) Image improved with Sea-Thru. (c) Image improved with DCP. (d) Image improved with CBFWB.
Figure 4. Comparative results of applying improvement algorithms to frame from GoPro Hero4 Black video: (a) Original image. (b) Image improved with Sea-Thru. (c) Image improved with DCP. (d) Image improved with CBFWB.
Figure 5. Detection of feature points using SIFT: (a) The original image: 732 keypoints. (b) Image improved with Sea-Thru: 1002 keypoints. (c) Image improved with DCP: 858 keypoints. (d) Image improved with CBFWB: 856 keypoints.
Figure 6. Detection of feature points using SIFT: original image (blue), image improved with Sea-Thru (green), image improved with DCP (olive), image improved with CBFWB (pink).
Figure 7. Detection of feature points using ORB: (a) Original image: 397 keypoints. (b) Image improved with Sea-Thru: 411 keypoints. (c) Image improved with DCP: 409 keypoints. (d) Image improved with CBFWB: 410 keypoints.
Figure 8. Detection of feature points using ORB: original image (blue), image improved with Sea-Thru (green), image improved with DCP (olive), and image improved with CBFWB (pink).
Figure 9. Global results of the DMs, showing the weighted sum of each improvement method for the six evaluated scenarios, where t = processing time and k = keypoint increase.
Table 1. Programming language requirements and download links for codes.

| Algorithm | Programming Language | Download Link |
|-----------|----------------------|---------------|
| DCP       | Python [33]          | [34]          |
| Sea-Thru  | Python [33]          | [35]          |
| CBFWB     | Matlab [36]          | [37]          |
| SIFT      | Python [33]          | [38]          |
| ORB       | Python [33]          | [39]          |
Table 2. Comparative results for the processing time of the improvement algorithms.

Processing time (seconds):

| Algorithm | Fr0 | Fr1 | Fr2 | Fr3 | Fr4 | Fr5 | Fr6 | Fr7 | Fr8 | Fr9 |
|-----------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| Sea-Thru  | 3.7 | 3.6 | 3.2 | 3.2 | 3.2 | 2.6 | 4.1 | 3.3 | 5.1 | 3.1 |
| DCP       | 0.4 | 0.2 | 0.2 | 0.2 | 0.2 | 0.3 | 0.2 | 0.2 | 0.2 | 0.2 |
| CBFWB     | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |

Fr = frame.
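The per-frame timings in Table 2 come from wall-clock measurement of each algorithm. A minimal sketch of such a benchmark is shown below; `improve` and `benchmark` are hypothetical names, and the identity lambda is a placeholder for a real improvement algorithm (e.g., the DCP implementation [34]) — this is an illustration, not the authors' benchmark code.

```python
import time

def benchmark(improve, frames):
    """Record the wall-clock processing time (seconds) of an
    improvement algorithm for each frame, as reported in Table 2."""
    timings = []
    for frame in frames:
        start = time.perf_counter()
        improve(frame)  # run the improvement algorithm on one frame
        timings.append(time.perf_counter() - start)
    return timings

# Hypothetical placeholder: an "algorithm" that returns the frame unchanged.
frames = [f"Fr{i}" for i in range(10)]
timings = benchmark(lambda frame: frame, frames)
print(len(timings))  # → 10, one measurement per frame
```

`time.perf_counter()` is used rather than `time.time()` because it is monotonic and has the highest available resolution for interval timing.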
Table 3. Comparative results of feature point detection with SIFT technique.

Feature points:

| Algorithm | Fr0  | Fr1 | Fr2  | Fr3  | Fr4 | Fr5  | Fr6  | Fr7  | Fr8  | Fr9  |
|-----------|------|-----|------|------|-----|------|------|------|------|------|
| Original  | 805  | 834 | 732  | 750  | 784 | 886  | 879  | 828  | 853  | 843  |
| Sea-Thru  | 1034 | 993 | 1002 | 1004 | 973 | 1043 | 1102 | 1046 | 1098 | 1007 |
| DCP       | 857  | 962 | 858  | 814  | 875 | 956  | 1005 | 915  | 985  | 910  |
| CBFWB     | 911  | 995 | 856  | 831  | 898 | 980  | 1048 | 953  | 955  | 945  |

Fr = frame.
Table 4. Comparative results of feature point detection with ORB technique.

Feature points:

| Algorithm | Fr0 | Fr1 | Fr2 | Fr3 | Fr4 | Fr5 | Fr6 | Fr7 | Fr8 | Fr9 |
|-----------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| Original  | 419 | 409 | 397 | 407 | 408 | 405 | 403 | 422 | 407 | 406 |
| Sea-Thru  | 427 | 420 | 411 | 410 | 419 | 413 | 420 | 428 | 415 | 422 |
| DCP       | 422 | 417 | 409 | 412 | 421 | 409 | 420 | 427 | 412 | 417 |
| CBFWB     | 420 | 420 | 410 | 415 | 423 | 412 | 421 | 425 | 416 | 419 |

Fr = frame.
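The keypoint-increase criterion compares each improved frame against the original. As a worked example, the raw relative increase for one frame can be computed from the ORB counts for Fr2 in Table 4 (which match Figure 7); note that the Decision Matrices below use such increases only after normalization, so these values are illustrative, not the normalized scores.

```python
def keypoint_increase(improved, original):
    """Relative increase in detected keypoints over the original frame."""
    return (improved - original) / original

# ORB keypoint counts for frame Fr2 (Table 4); the original frame has 397.
orb_fr2 = {"Sea-Thru": 411, "DCP": 409, "CBFWB": 410}
for name, count in orb_fr2.items():
    print(f"{name}: {keypoint_increase(count, 397):+.1%}")
```

All three algorithms gain roughly 3% more ORB keypoints on this frame, which is consistent with the narrow spread of the normalized keypoint scores in Table 6.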
Table 5. Decision Matrix to select the improvement algorithm with the best performance (SIFT feature detector, t = 50%, k = 50%).

| Criterion         | Weight | Sea-Thru | DCP   | CBFWB |
|-------------------|--------|----------|-------|-------|
| Processing time   | 50%    | 0.324    | 0.994 | 0.837 |
| Keypoint increase | 50%    | 0.728    | 0.194 | 0.302 |
| WSuM              |        | 0.526    | 0.594 | 0.569 |
Table 6. Decision Matrix to select the improvement algorithm with the best performance (ORB feature detector, t = 50%, k = 50%).

| Criterion         | Weight | Sea-Thru | DCP   | CBFWB |
|-------------------|--------|----------|-------|-------|
| Processing time   | 50%    | 0.324    | 0.994 | 0.837 |
| Keypoint increase | 50%    | 0.541    | 0.429 | 0.518 |
| WSuM              |        | 0.433    | 0.712 | 0.677 |
Table 7. Decision Matrix to select the improvement algorithm with the best performance (SIFT feature detector, t = 5%, k = 95%).

| Criterion         | Weight | Sea-Thru | DCP   | CBFWB |
|-------------------|--------|----------|-------|-------|
| Processing time   | 5%     | 0.324    | 0.994 | 0.837 |
| Keypoint increase | 95%    | 0.728    | 0.194 | 0.302 |
| WSuM              |        | 0.708    | 0.234 | 0.329 |
Table 8. Decision Matrix to select the improvement algorithm with the best performance (ORB feature detector, t = 5%, k = 95%).

| Criterion         | Weight | Sea-Thru | DCP   | CBFWB |
|-------------------|--------|----------|-------|-------|
| Processing time   | 5%     | 0.324    | 0.994 | 0.837 |
| Keypoint increase | 95%    | 0.541    | 0.429 | 0.518 |
| WSuM              |        | 0.530    | 0.458 | 0.534 |
Table 9. Decision Matrix to select the improvement algorithm with the best performance (SIFT feature detector, t = 75%, k = 25%).

| Criterion         | Weight | Sea-Thru | DCP   | CBFWB |
|-------------------|--------|----------|-------|-------|
| Processing time   | 75%    | 0.324    | 0.994 | 0.837 |
| Keypoint increase | 25%    | 0.728    | 0.194 | 0.302 |
| WSuM              |        | 0.425    | 0.794 | 0.703 |
Table 10. Decision Matrix to select the improvement algorithm with the best performance (ORB feature detector, t = 75%, k = 25%).

| Criterion         | Weight | Sea-Thru | DCP   | CBFWB |
|-------------------|--------|----------|-------|-------|
| Processing time   | 75%    | 0.324    | 0.994 | 0.837 |
| Keypoint increase | 25%    | 0.541    | 0.429 | 0.518 |
| WSuM              |        | 0.379    | 0.853 | 0.757 |
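The WSuM score of each alternative is the sum, over the criteria, of each criterion weight multiplied by its normalized score. The sketch below reproduces the Table 9 scenario (SIFT feature detector, t = 75%, k = 25%); the normalized values are copied from that table, and `wsum` is a straightforward illustration of the Weighted Sum Method, not the authors' code.

```python
# Normalized criterion scores from Table 9 (SIFT feature detector).
scores = {
    "Sea-Thru": {"time": 0.324, "keypoints": 0.728},
    "DCP":      {"time": 0.994, "keypoints": 0.194},
    "CBFWB":    {"time": 0.837, "keypoints": 0.302},
}
weights = {"time": 0.75, "keypoints": 0.25}  # t = 75%, k = 25%

def wsum(algorithm_scores, weights):
    """Weighted Sum Method: weighted total of the normalized criteria."""
    return sum(weights[c] * algorithm_scores[c] for c in weights)

ranking = sorted(scores, key=lambda a: wsum(scores[a], weights), reverse=True)
print(ranking[0])  # → DCP, the preferred algorithm under this weighting
```

Changing `weights` to `{"time": 0.05, "keypoints": 0.95}` reverses the decision in favor of Sea-Thru, matching Table 7 — which is why the weights must be chosen per application.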
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Angulo-Fernández, I.d.P.; Bello-Pineda, J.; Vásquez-Santacruz, J.A.; Portillo-Vélez, R.d.J.; García-Ramírez, P.J.; Marín-Urías, L.F. A Multicriteria Evaluation of Single Underwater Image Improvement Algorithms. J. Mar. Sci. Eng. 2025, 13, 1308. https://doi.org/10.3390/jmse13071308


