Imaging Sensor Systems for Analyzing Subsea Environment and Life

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 December 2019) | Viewed by 31126

Special Issue Editors


Guest Editor
Department de Matemàtiques i Informàtica, Universitat de les Illes Balears, 07122 Palma, Spain
Interests: marine robotics; real-time vision; control architectures for autonomous mobile robots

Guest Editor
Conway Institute of Biomolecular and Biomedical Science, University College Dublin, Belfield, Dublin 4, Ireland
Interests: environmental imaging; light sheet fluorescence microscopy; photogrammetry; corals; kite aerial imaging; underwater imaging systems and applications

Guest Editor
Systems, Robotics and Vision Group, Universitat de les Illes Balears, 07122 Palma, Spain
Interests: computer vision; convolutional neural networks; 3D visual inspection; semantic segmentation using deep learning

Special Issue Information

Dear Colleagues,

Recent technological advances are enabling the use of innovative imaging systems in underwater scenarios. Such sensing systems, whether operated by humans at permanent stations or installed on board various types of unmanned vehicles, allow environmental and biological studies to be extended in time, coverage, depth, precision, and data richness compared to traditional surveys. As the amount of information generated by these new systems grows massively, efficient and precise methods to process it are needed. This Special Issue aims to publish high-quality manuscripts presenting the development and application of original image-based solutions for the study of subsea life and environmental assessment, from macroscopic to microscopic scales, including sensor systems, fundamental methods, and experimental results.

Provided they address a submarine ecosystem or environmental issue, topics could include, but are not limited to:

  • Imaging systems based on vision, laser, or sonar
  • Multi-modal and/or multi-session data integration
  • Underwater image enhancement and restoration
  • Target detection, tracking, and identification
  • Automatic image/video annotation
  • Context-based marine robot guidance
  • 2D and 3D reconstruction of underwater scenarios
  • AI-based solutions

If you have suggestions that you would like to discuss beforehand, please feel free to contact us. We look forward to your participation in this Special Issue.

Dr. Gabriel Oliver-Codina
Dr. Emmanuel G. Reynaud
Dr. Yolanda González-Cid
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Underwater life imaging sensors
  • Benthic ecosystems observation
  • Marine environmental monitoring
  • Conservational harm sensing
  • Wildlife quality assessment

Published Papers (6 papers)


Research


21 pages, 4794 KiB  
Article
Jellytoring: Real-Time Jellyfish Monitoring Based on Deep Learning Object Detection
by Miguel Martin-Abadal, Ana Ruiz-Frau, Hilmar Hinz and Yolanda Gonzalez-Cid
Sensors 2020, 20(6), 1708; https://doi.org/10.3390/s20061708 - 19 Mar 2020
Cited by 31 | Viewed by 7913
Abstract
During the past decades, the composition and distribution of marine species have changed due to multiple anthropogenic pressures. Monitoring these changes in a cost-effective manner is highly relevant for assessing the environmental status and evaluating the effectiveness of management measures. In particular, recent studies point to a rise of jellyfish populations on a global scale, negatively affecting diverse marine sectors such as commercial fishing and the tourism industry. Past monitoring efforts using underwater video observations tended to be time-consuming and costly due to human-based data processing. In this paper, we present Jellytoring, a system that automatically detects and quantifies different species of jellyfish based on a deep object detection neural network, allowing jellyfish presence to be recorded automatically over long periods of time. Jellytoring demonstrates outstanding performance on the jellyfish detection task, reaching an F1 score of 95.2%, and on the jellyfish quantification task, correctly quantifying the number and class of jellyfish in a real-time processed video sequence for up to 93.8% of its duration. The results of this study are encouraging and provide an efficient means of monitoring jellyfish, which can be used for the development of a jellyfish early-warning system, providing highly valuable information for marine biologists and contributing to the reduction of jellyfish impacts on humans.
(This article belongs to the Special Issue Imaging Sensor Systems for Analyzing Subsea Environment and Life)
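The 93.8% quantification figure above can be read as a per-frame agreement rate between the detector's output and ground truth. A minimal sketch of such a metric, assuming (hypothetically, this is not the authors' code) that each frame's detections are reduced to per-species counts and compared exactly:

```python
from collections import Counter

def quantification_agreement(pred_frames, true_frames):
    """Fraction of frames whose predicted species counts exactly match
    the ground truth. Each frame record is a list of detected species
    labels, e.g. ["Pelagia", "Pelagia", "Cotylorhiza"]."""
    matches = sum(Counter(pred) == Counter(true)
                  for pred, true in zip(pred_frames, true_frames))
    return matches / len(pred_frames)
```

The species names and the exact-match criterion are illustrative assumptions; the paper's own evaluation protocol may differ in detail.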

25 pages, 11187 KiB  
Article
Video Image Enhancement and Machine Learning Pipeline for Underwater Animal Detection and Classification at Cabled Observatories
by Vanesa Lopez-Vazquez, Jose Manuel Lopez-Guede, Simone Marini, Emanuela Fanelli, Espen Johnsen and Jacopo Aguzzi
Sensors 2020, 20(3), 726; https://doi.org/10.3390/s20030726 - 28 Jan 2020
Cited by 43 | Viewed by 7671 | Correction
Abstract
An understanding of marine ecosystems and their biodiversity is relevant to the sustainable use of the goods and services they offer. Since marine areas host complex ecosystems, it is important to develop spatially widespread monitoring networks capable of providing large amounts of multiparametric information, encompassing both biotic and abiotic variables and describing the ecological dynamics of the observed species. In this context, imaging devices are valuable tools that complement other biological and oceanographic monitoring devices. Nevertheless, the large volumes of images or movies collected cannot all be processed manually, and autonomous routines for recognizing relevant content, classification, and tagging are urgently needed. In this work, we propose a pipeline for the analysis of visual data that integrates video/image annotation tools for defining, training, and validating datasets with video/image enhancement and machine and deep learning approaches. Such a pipeline is required to achieve good performance in the recognition and classification of mobile and sessile megafauna, in order to obtain integrated information on spatial distribution and temporal dynamics. A prototype implementation of the analysis pipeline is provided in the context of deep-sea videos taken by one of the fixed cameras of the LoVe Ocean Observatory network at the Lofoten Islands (Norway), at 260 m depth in the Barents Sea; it achieved good classification results on an independent test dataset, with an accuracy of 76.18% and an area under the curve (AUC) of 87.59%.

15 pages, 7830 KiB  
Article
Automatic Annotation of Subsea Pipelines Using Deep Learning
by Anastasios Stamoulakatos, Javier Cardona, Chris McCaig, David Murray, Hein Filius, Robert Atkinson, Xavier Bellekens, Craig Michie, Ivan Andonovic, Pavlos Lazaridis, Andrew Hamilton, Md Moinul Hossain, Gaetano Di Caterina and Christos Tachtatzis
Sensors 2020, 20(3), 674; https://doi.org/10.3390/s20030674 - 26 Jan 2020
Cited by 14 | Viewed by 5266
Abstract
Regulatory requirements for sub-sea oil and gas operators mandate the frequent inspection of pipeline assets to ensure that their degradation and damage are maintained at acceptable levels. The inspection process is usually sub-contracted to surveyors who utilize sub-sea Remotely Operated Vehicles (ROVs), launched from a surface vessel and piloted over the pipeline. ROVs capture data from various sensors/instruments, which are subsequently reviewed and interpreted by human operators, creating a log of event annotations; a slow, labor-intensive, and costly process. This paper presents an automatic image annotation framework that identifies and classifies key events of interest in the video footage, viz. exposure, burial, field joints, anodes, and free spans. The reported methodology utilizes transfer learning with a Deep Convolutional Neural Network (ResNet-50), fine-tuned on real-life, representative data from challenging sub-sea environments with low lighting, sand agitation, sea-life, and vegetation. The network outputs are configured to perform multi-label image classification for the critical events. Annotation performance varies between 95.1% and 99.7% in accuracy and between 90.4% and 99.4% in F1-score, depending on event type. These per-frame results corroborate the potential of the algorithm to form the foundation of an intelligent decision support framework that automates the annotation process. The solution can execute annotations in real time and is significantly more cost-effective than human-only approaches.
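As a rough illustration of the multi-label output stage described in the abstract (not the authors' implementation), each event class can be scored independently with a sigmoid and thresholded, so a single frame may carry several annotations at once. The event names come from the abstract; the logits and the 0.5 threshold are placeholders:

```python
import numpy as np

EVENTS = ["exposure", "burial", "field joint", "anode", "free span"]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def annotate_frame(logits, threshold=0.5):
    """Multi-label decision: each event is judged independently, so a
    frame can be annotated with, e.g., both an exposure and a field joint."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    return [event for event, p in zip(EVENTS, probs) if p >= threshold]
```

For example, `annotate_frame([3.0, -4.0, 2.0, -1.0, -2.0])` returns `["exposure", "field joint"]`, since only those two classes score above the threshold.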

20 pages, 1846 KiB  
Article
Automatic Hierarchical Classification of Kelps Using Deep Residual Features
by Ammar Mahmood, Ana Giraldo Ospina, Mohammed Bennamoun, Senjian An, Ferdous Sohel, Farid Boussaid, Renae Hovey, Robert B. Fisher and Gary A. Kendrick
Sensors 2020, 20(2), 447; https://doi.org/10.3390/s20020447 - 13 Jan 2020
Cited by 40 | Viewed by 3990
Abstract
Across the globe, remote image data is rapidly being collected for the assessment of benthic communities, from shallow waters to extremely deep waters on continental slopes and the abyssal seas. Exploiting this data is presently limited by the time it takes for experts to identify the organisms found in these images. With this limitation in mind, a large effort has been made globally to introduce automation and machine learning algorithms to accelerate both the classification and assessment of marine benthic biota. One major issue lies with organisms that move with swell and currents, such as kelps. This paper presents an automatic hierarchical classification method (local binary classification, as opposed to conventional flat classification) to classify kelps in images collected by autonomous underwater vehicles. The proposed kelp classification approach exploits learned feature representations extracted from deep residual networks. We show that these generic features outperform traditional off-the-shelf CNN features and conventional hand-crafted features. Experiments also demonstrate that the hierarchical classification method outperforms traditional parallel multi-class classification by a significant margin (90.0% vs. 57.6% and 77.2% vs. 59.0% on the Benthoz15 and Rottnest datasets, respectively). Furthermore, we compare different hierarchical classification approaches and experimentally show that the sibling hierarchical training approach outperforms the inclusive hierarchical approach by a significant margin. We also report an application of the proposed method to study the change in kelp cover over time across annually repeated AUV surveys.
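The hierarchical scheme in this abstract descends a label tree, applying a local binary classifier at each internal node instead of one flat multi-class classifier over all labels. A toy sketch of that descent (the tree, the classifiers, and the feature vectors here are hypothetical stand-ins for the learned models over deep residual features):

```python
def classify_hierarchical(feature, tree, classifiers):
    """Walk from the root to a leaf label; at each internal node a local
    binary classifier picks the left or right child branch."""
    node = "root"
    while node in tree:  # still at an internal node
        left, right = tree[node]
        node = left if classifiers[node](feature) else right
    return node

# Toy two-level hierarchy: root -> {biota, substrate}, biota -> {kelp, other}.
tree = {"root": ("biota", "substrate"), "biota": ("kelp", "other biota")}
classifiers = {"root": lambda f: f[0] > 0, "biota": lambda f: f[1] > 0}
```

Under sibling training, each node's classifier is fit on the examples of its own children only (positives vs. sibling negatives), which is the variant the paper reports as stronger than inclusive training.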

22 pages, 4805 KiB  
Article
An Underwater Image Enhancement Method for Different Illumination Conditions Based on Color Tone Correction and Fusion-Based Descattering
by Yidan Liu, Huiping Xu, Dinghui Shang, Chen Li and Xiangqian Quan
Sensors 2019, 19(24), 5567; https://doi.org/10.3390/s19245567 - 16 Dec 2019
Cited by 18 | Viewed by 4773
Abstract
In the shallow-water environment, underwater images often present problems such as color deviation and low contrast due to light absorption and scattering in the water body; deep-sea images can additionally suffer from uneven brightness and regional color shift due to the use of chromatic and inhomogeneous artificial lighting devices. Since the latter situation is rarely studied in the field of underwater image enhancement, we propose a new model that includes it in the analysis of underwater image degradation. Based on a theoretical study of the new model, this paper proposes a comprehensive method for enhancing underwater images under different illumination conditions. The proposed method is composed of two modules: color-tone correction and fusion-based descattering. In the first module, the regional or full-extent color deviation caused by different types of incident light is corrected via frequency-based color-tone estimation. In the second module, the residual low-contrast and pixel-wise color shift problems are handled by combining the descattering results obtained under different assumed states of the image. The proposed method is evaluated on laboratory and open-water images at different depths and illumination states. Qualitative and quantitative evaluation results demonstrate that it outperforms many other methods in enhancing the quality of different types of underwater images, and it is especially effective in improving the color accuracy and information content in badly illuminated regions of non-uniformly illuminated underwater images, such as deep-sea images.
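The color-tone correction module estimates and removes a color cast before descattering. As a simplified stand-in for the paper's frequency-based color-tone estimation (the paper's method also handles regional casts; this sketch handles only a global one), a gray-world correction rescales each channel so its mean matches the overall mean:

```python
import numpy as np

def correct_color_tone(img):
    """Gray-world gains: scale R, G, B so their channel means agree,
    removing a global color cast (a crude proxy for the paper's
    frequency-based color-tone estimation)."""
    img = np.asarray(img, dtype=float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0.0, 255.0)
```

An image with a uniform blue-green cast, e.g. constant RGB (100, 150, 200), maps to neutral gray (150, 150, 150) under this correction; per-region estimation, as in the paper, would be needed for inhomogeneous artificial lighting.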

Other


1 page, 173 KiB
Correction
Correction: Lopez-Vazquez et al. Video Image Enhancement and Machine Learning Pipeline for Underwater Animal Detection and Classification at Cabled Observatories. Sensors 2020, 20, 726
by Vanesa Lopez-Vazquez, Jose Manuel Lopez-Guede, Simone Marini, Emanuela Fanelli, Espen Johnsen and Jacopo Aguzzi
Sensors 2023, 23(1), 16; https://doi.org/10.3390/s23010016 - 20 Dec 2022
Viewed by 850
Abstract
The authors wish to correct the following error in the original paper [...]