
Marine Imaging and Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 26453

Special Issue Editors


Prof. Dr. Pedro J. Sanz
Guest Editor
IRS-Lab, Computer Science and Engineering Department, Jaume I University, Avd. Sos Baynat s/n, 12071 Castellón de la Plana, Spain
Interests: visually guided grasping; multisensory-based underwater manipulation; underwater intervention systems; telerobotics; human–robot interaction

Dr. Nuno Gracias
Guest Editor
Department of Computer Architecture and Technology, Universitat de Girona, Edifici P-IV, Campus de Montilivi, 17071 Girona, Spain
Interests: computer vision; underwater robotics

Dr. Mitch Bryson
Guest Editor
School of Aerospace, Mechanical and Mechatronic Engineering; Australian Centre for Field Robotics; University of Sydney, Sydney, NSW 2006, Australia

Dr. Tali Treibitz
Guest Editor
Marine Imaging Lab, Charney School of Marine Sciences, University of Haifa, Haifa, Israel
Interests: underwater computer vision; marine imaging; vision for marine autonomous vehicles

Special Issue Information

Dear Colleagues,

The acquisition and processing of underwater images is strongly affected by the way the aquatic medium alters light propagation. The combined effect of absorption and scattering degrades the imagery far more severely than in-air imaging, leading to loss of color and contrast and to image blurriness. Capturing images underwater is essentially a short-range process, so special attention must be paid to positioning the camera close to the scene or phenomenon being observed. Further challenges arise when artificial illumination is needed, which amplifies backscattering and creates strongly varying illumination conditions. Nonetheless, compared to other forms of underwater sensing, optical imagery provides unmatched levels of information, not only in terms of resolution but also in terms of texture and color.
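
For readers less familiar with the optics, these effects are often summarized by a simplified image formation model in which each color channel is attenuated exponentially with range and mixed with veiling (backscattered) light. The following Python sketch illustrates that model only; the attenuation coefficients and veiling-light values are arbitrary placeholders, not values endorsed by this Special Issue.

    import numpy as np

    def degrade(J, depth, beta, B):
        """Simplified underwater image formation model (illustrative only).

        J     : clean scene radiance, H x W x 3, values in [0, 1]
        depth : camera-to-scene range in metres, H x W
        beta  : per-channel attenuation coefficients (1/m), length 3
        B     : per-channel veiling light (backscatter at infinity), length 3
        """
        t = np.exp(-np.asarray(beta)[None, None, :] * depth[..., None])  # transmission per channel
        return J * t + np.asarray(B)[None, None, :] * (1.0 - t)

    # Example: red attenuates fastest, so distant pixels drift towards the blue-green veiling light.
    J = np.random.rand(480, 640, 3)
    depth = np.full((480, 640), 5.0)                    # 5 m to the scene everywhere
    I = degrade(J, depth, beta=[0.60, 0.25, 0.20], B=[0.05, 0.35, 0.45])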

This Special Issue aims to bring together new solutions for sensing and for extracting useful information from underwater imagery. We are particularly interested in manuscripts that thoroughly describe new vision-based systems for highly unstructured and dynamic environments, as well as innovative and efficient methods to process the data gathered in these scenarios. Both original research articles and reviews are welcome.

Original research papers should preferably not rely solely on processing information from public datasets; instead, they should describe complete solutions to specific underwater applications, including sensor systems, fundamental methods, and experimental results. Manuscripts may alternatively present new (annotated) datasets gathered by novel vision-based sensors and used in marine science and related applications, thus contributing to future benchmarking.
Reviews presenting an analytical, up-to-date overview of the state of the art are also appropriate, provided they include some quantitative and qualitative scoring of the surveyed solutions on publicly available data.

We invite submissions from all areas of computer vision and image analysis relevant for, or applied to, underwater image sensing and analysis. Topics of interest include but are not limited to:

  • Marine imaging from drones;
  • New underwater optical sensor designs;
  • Underwater image enhancement;
  • Multispectral and hyperspectral sensing and calibration;
  • Underwater mapping in 3D and over time;
  • Physical models of reflectance and light transport;
  • Autonomous underwater navigation;
  • Optical sensing for autonomous underwater manipulation;
  • Underwater laser scanning and lidar;
  • Detection and monitoring of marine life;
  • Object tracking;
  • Automatic video annotation and summarization;
  • Classification, detection, and segmentation using CNNs and deep learning;
  • Other forms of context-aware machine learning and image understanding.

If you have suggestions that you would like to discuss beforehand, please feel free to contact us. We look forward to your participation in this Special Issue.

Prof. Dr. Pedro J. Sanz
Dr. Nuno Gracias
Dr. Mitch Bryson
Dr. Tali Treibitz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords


  • New underwater optical sensor designs
  • Underwater image enhancement
  • Multispectral and hyperspectral sensing and calibration
  • Underwater mapping in 3D and over time
  • Physical models of reflectance and light transport
  • Autonomous underwater navigation
  • Optical sensing for autonomous underwater manipulation
  • Underwater laser scanning and lidar
  • Detection and monitoring of marine life
  • Object tracking
  • Automatic video annotation and summarization
  • Classification, detection, and segmentation using CNNs and deep learning
  • Other forms of context-aware machine learning and image understanding

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research


17 pages, 4200 KiB  
Article
Underwater Object Detection and Reconstruction Based on Active Single-Pixel Imaging and Super-Resolution Convolutional Neural Network
by Mengdi Li, Anumol Mathai, Stephen L. H. Lau, Jian Wei Yam, Xiping Xu and Xin Wang
Sensors 2021, 21(1), 313; https://doi.org/10.3390/s21010313 - 5 Jan 2021
Cited by 21 | Viewed by 5820
Abstract
Due to medium scattering, absorption, and complex light interactions, capturing objects in the underwater environment has always been a difficult task. Single-pixel imaging (SPI) is an efficient imaging approach that can obtain spatial object information under low-light conditions. In this paper, we propose a single-pixel object inspection system for the underwater environment based on a compressive sensing super-resolution convolutional neural network (CS-SRCNN). With the CS-SRCNN algorithm, image reconstruction can be achieved with 30% of the total pixels in the image. We also investigate the impact of the compression ratio on underwater object SPI reconstruction performance, and we use the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) to assess the quality of the reconstructed images. Our work is compared with the SPI system and the SRCNN method to demonstrate its efficiency in capturing objects in an underwater environment. The PSNR and SSIM of the proposed method increase to 35.44% and 73.07%, respectively. This work provides new insight into SPI applications and offers a better alternative for achieving good-quality underwater optical object imaging.
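
PSNR and SSIM are standard full-reference image quality metrics. As a minimal sketch of how such an evaluation is typically run (not the authors' own pipeline), the following assumes single-channel reconstructions, as is common in SPI, and uses scikit-image:

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate(reference, reconstruction):
        """Full-reference quality of a reconstructed image against ground truth.

        Both inputs are 2D (grayscale) arrays of the same shape.
        """
        ref = np.asarray(reference, dtype=np.float64)
        rec = np.asarray(reconstruction, dtype=np.float64)
        rng = ref.max() - ref.min()                     # dynamic range of the ground truth
        psnr = peak_signal_noise_ratio(ref, rec, data_range=rng)
        ssim = structural_similarity(ref, rec, data_range=rng)
        return psnr, ssim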

17 pages, 4767 KiB  
Article
Automatic Bluefin Tuna Sizing with a Combined Acoustic and Optical Sensor
by Pau Muñoz-Benavent, Vicente Puig-Pons, Gabriela Andreu-García, Víctor Espinosa, Vicente Atienza-Vanacloig and Isabel Pérez-Arjona
Sensors 2020, 20(18), 5294; https://doi.org/10.3390/s20185294 - 16 Sep 2020
Cited by 6 | Viewed by 2947
Abstract
A proposal is described for an underwater sensor combining an acoustic device with an optical one to automatically size juvenile bluefin tuna from a ventral perspective. Acoustic and optical information is acquired while the tuna swim freely and cross the combined sensor's field of view. Image processing techniques are used to identify and classify fish traces in the acoustic data (echogram), while the video frames are processed by fitting a deformable model of the fish's ventral silhouette. Finally, the fish are sized by combining the processed acoustic and optical data, once the correspondence between the two kinds of data is verified. The proposed system is able to automatically provide accurate measurements of the tuna's Snout–Fork Length (SFL) and width. In comparison with our previously validated automatic sizing procedure based on stereoscopic vision, this proposal improves throughput (samples per hour of computing time) by a factor of 7.2 in a tank with 77 juvenile Atlantic bluefin tuna (Thunnus thynnus), without compromising the accuracy of the measurements. This work validates the procedure of combining acoustic and optical data for fish sizing and is a first step towards an embedded sensor, whose electronics and processing capabilities should be optimized to be autonomous in terms of power supply and to enable real-time processing.
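
The abstract does not give the measurement geometry, but under a simple pinhole-camera assumption a silhouette length in pixels can be converted to metres once the fish-to-camera range (here supplied by the acoustic device) is known. The sketch below is a hypothetical illustration of that conversion, not the authors' deformable-model fitting; silhouette, range_m, and focal_px are placeholder inputs.

    import cv2
    import numpy as np

    def snout_fork_length(silhouette, range_m, focal_px):
        """Rough metric length of a fish from its binary ventral silhouette.

        silhouette : uint8 binary mask (fish = 255), one fish per image
        range_m    : camera-to-fish range in metres (e.g. from the echosounder)
        focal_px   : camera focal length in pixels
        """
        contours, _ = cv2.findContours(silhouette, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if not contours:
            return None
        pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)
        # Principal axis of the silhouette approximates the snout-to-fork direction.
        mean = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - mean)
        proj = (pts - mean) @ vt[0]
        length_px = proj.max() - proj.min()
        # Pinhole model: metres per pixel at the given range.
        return length_px * range_m / focal_px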

16 pages, 41668 KiB  
Article
Recovering Depth from Still Images for Underwater Dehazing Using Deep Learning
by Javier Pérez, Mitch Bryson, Stefan B. Williams and Pedro J. Sanz
Sensors 2020, 20(16), 4580; https://doi.org/10.3390/s20164580 - 15 Aug 2020
Cited by 12 | Viewed by 2641
Abstract
Estimating depth from a single image is a challenging problem, but it is also of wide interest due to its many applications, such as underwater image dehazing. In this paper, a new perspective is provided: by taking advantage of the underwater haze, which may provide a strong cue to the depth of the scene, a neural network can be used to estimate it. The estimated depth map can then be used in a dehazing method to enhance the image and recover the original colors, offering a better input to image recognition algorithms and thus improving robot performance during vision-based tasks such as object detection and characterization of the seafloor. Experiments are conducted on different datasets that cover a wide variety of textures and conditions, using a dense stereo depth map as ground truth for training, validation, and testing. The results show that the neural network outperforms alternatives such as dark channel prior methods and is able to accurately estimate depth from a single image after a training stage with depth information.
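
As a rough illustration of how an estimated depth map feeds into dehazing, the following sketch inverts the simplified attenuation-plus-veiling-light model shown earlier; it is not the authors' network or restoration step, and the beta and B parameters are placeholders that would in practice be estimated from the data.

    import numpy as np

    def dehaze(I, depth, beta, B, t_min=0.05):
        """Recover scene radiance from a hazy underwater image and a depth map.

        I     : observed image, H x W x 3, values in [0, 1]
        depth : estimated per-pixel range in metres (e.g. from a depth network)
        beta  : per-channel attenuation coefficients (1/m)
        B     : per-channel veiling light
        """
        t = np.exp(-np.asarray(beta)[None, None, :] * depth[..., None])
        t = np.clip(t, t_min, 1.0)                      # avoid amplifying noise at long ranges
        J = (I - np.asarray(B)[None, None, :] * (1.0 - t)) / t
        return np.clip(J, 0.0, 1.0)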

43 pages, 9869 KiB  
Article
3D Object Recognition Based on Point Clouds in Underwater Environment with Global Descriptors: A Survey
by Khadidja Himri, Pere Ridao and Nuno Gracias
Sensors 2019, 19(20), 4451; https://doi.org/10.3390/s19204451 - 14 Oct 2019
Cited by 11 | Viewed by 4456
Abstract
This paper addresses the problem of object recognition from colorless 3D point clouds in underwater environments. It presents a performance comparison of state-of-the-art global descriptors that are readily available as open-source code. The studied methods are intended to assist Autonomous Underwater Vehicles (AUVs) in performing autonomous interventions in underwater Inspection, Maintenance and Repair (IMR) applications. A set of test objects was chosen as representative of IMR applications, whose shapes are typically known a priori. As such, CAD models were used to create virtual views of the objects under realistic conditions of added noise and varying resolution. Extensive experiments were conducted on both virtual scans and real data collected with an AUV equipped with a fast laser sensor developed in our research centre. The underwater testing was conducted from a moving platform, which can create deformations in the perceived shape of the objects. These effects are considerably more difficult to correct than in above-water settings and may therefore affect descriptor performance. Among other conclusions, our testing illustrated the importance of matching the resolution of the database scans and the test scans, as this significantly impacted the performance of all descriptors except one. This paper contributes to the state of the art as the first work on the comparison and performance evaluation of methods for underwater object recognition. It is also the first such comparison on data acquired with a free-floating underwater platform.
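
Whatever specific descriptors the paper compares, the general idea of a global descriptor is a single fixed-length vector summarizing the whole shape of a colorless point cloud, which can then be matched by nearest neighbour against a database of known objects. The sketch below uses a simple stand-in, a normalized histogram of pairwise point distances (the classic D2 shape distribution); it is illustrative only and is not one of the surveyed methods.

    import numpy as np

    def d2_descriptor(points, bins=64, pairs=20000, seed=0):
        """Global shape descriptor: histogram of pairwise point distances (D2).

        points : N x 3 array of XYZ coordinates (no color needed)
        Returns a normalized, scale-invariant feature vector of length `bins`.
        """
        rng = np.random.default_rng(seed)
        i = rng.integers(0, len(points), size=pairs)
        j = rng.integers(0, len(points), size=pairs)
        d = np.linalg.norm(points[i] - points[j], axis=1)
        hist, _ = np.histogram(d / (d.max() + 1e-9), bins=bins, range=(0.0, 1.0))
        return hist / hist.sum()

    def recognise(query, database):
        """Return the database key whose descriptor is closest to the query cloud."""
        q = d2_descriptor(query)
        return min(database, key=lambda k: np.linalg.norm(q - d2_descriptor(database[k])))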

Review


35 pages, 2767 KiB  
Review
State of the Art of Underwater Active Optical 3D Scanners
by Miguel Castillón, Albert Palomer, Josep Forest and Pere Ridao
Sensors 2019, 19(23), 5161; https://doi.org/10.3390/s19235161 - 25 Nov 2019
Cited by 51 | Viewed by 9691
Abstract
Underwater inspection, maintenance and repair (IMR) operations are being increasingly robotized in order to reduce safety risks and costs. These robotic systems rely on vision sensors to perform fundamental tasks such as navigation and object recognition and manipulation. In particular, active optical 3D scanners are commonly used because of the domain-specific challenges of underwater imaging. This paper presents an exhaustive survey of the state of the art in underwater optical 3D scanners. A literature review of light-projection and light-sensing technologies is presented. Moreover, quantitative performance comparisons of underwater 3D scanners reported in the literature and of commercial products are carried out.
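
Many underwater active scanners are laser line (sheet-of-light) triangulation systems. As a hedged illustration of the underlying geometry only, the sketch below intersects the camera ray through a detected laser pixel with a calibrated laser plane; the intrinsics and plane parameters are placeholders, and refraction at the housing port, a key underwater complication, is deliberately ignored.

    import numpy as np

    def triangulate(u, v, K, plane_n, plane_d):
        """3D point of a laser pixel by ray-plane intersection (in-air pinhole model).

        (u, v)  : pixel coordinates of the detected laser line
        K       : 3x3 camera intrinsic matrix
        plane_n : unit normal of the calibrated laser plane (camera frame)
        plane_d : plane offset, so that plane_n . X + plane_d = 0
        """
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])      # viewing ray direction
        s = -plane_d / (plane_n @ ray)                       # scale along the ray
        return s * ray                                       # 3D point in the camera frame

    # Hypothetical example: generic HD intrinsics and a tilted laser plane about 0.5 m ahead.
    K = np.array([[1600.0, 0.0, 960.0], [0.0, 1600.0, 540.0], [0.0, 0.0, 1.0]])
    X = triangulate(1020.0, 700.0, K, plane_n=np.array([0.0, -0.5, 0.866]), plane_d=-0.4)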
