Article

A Novel Detection Refinement Technique for Accurate Identification of Nephrops norvegicus Burrows in Underwater Imagery

1 ETSI Telecomunicación, Universidad de Málaga, 29071 Malaga, Spain
2 Science and Technology Unit, Umm al Qura University, Makkah 21955, Saudi Arabia
3 Department of Computer Science, National University of Technology, Islamabad 44000, Pakistan
4 Centro Oceanográfico de Cádiz (IEO-CSIC), Instituto Español de Oceanografía, 11006 Cádiz, Spain
* Author to whom correspondence should be addressed.
Sensors 2022, 22(12), 4441; https://doi.org/10.3390/s22124441
Submission received: 23 April 2022 / Revised: 7 June 2022 / Accepted: 7 June 2022 / Published: 12 June 2022
(This article belongs to the Special Issue Advanced Sensor Applications in Marine Objects Recognition)

Abstract

With the evolution of convolutional neural networks (CNNs), object detection in the underwater environment has gained a lot of attention. However, due to the complex nature of the underwater environment, generic CNN-based object detectors still face challenges such as image blurring, texture distortion, color shift, and scale variation, which result in low precision and recall rates. To tackle these challenges, we propose a detection refinement algorithm based on spatial–temporal analysis that improves the performance of generic detectors by suppressing false positives and recovering missed detections in underwater videos. In the proposed work, we use state-of-the-art deep neural networks such as Inception, ResNet50, and ResNet101 to automatically classify and detect burrows of the Norway lobster Nephrops norvegicus in underwater videos. Nephrops is one of the most important commercial species in Northeast Atlantic waters, and it lives in burrow systems that it builds itself on muddy bottoms. To evaluate the performance of the proposed framework, we collected data from the Gulf of Cadiz. The experimental results demonstrate that the proposed framework effectively suppresses false positives and recovers missed detections obtained from generic detectors, and the mean average precision (mAP) increases by 10% with the proposed refinement technique.

1. Introduction

Research in underwater image analysis has gained popularity in many applications of marine science. There are various research directions in underwater image analysis, for instance, underwater species classification and detection [1], seafloor image recognition [2], coral reef classification [3], and flora and fauna recognition [4]. Underwater image analysis requires a set of image processing tasks including underwater object detection, classification, visual content recognition, and image annotation of large-scale marine species [5]. Challenges such as turbidity, color variation, and illumination changes make it very difficult for models to detect and classify objects automatically in underwater environments.
There are thousands of species in the oceans all over the world. One of the most important commercial species in Europe is the Norway lobster Nephrops norvegicus. Figure 1 shows the Nephrops norvegicus species (hereafter referred to as Nephrops). This species is distributed from 10 m to 800 m depth in Northeast Atlantic waters and the Mediterranean Sea [6], where the sediment is suitable for constructing their burrows. This species excavates and inhabits burrow systems mainly in muddy seabed sediments with more than 40 percent silt and clay [7]. These burrow systems have one or multiple openings, or holes, with characteristic features that distinguish them from the burrows built by other burrowing species [8,9]. At least one opening has a crescent moon shape and a shallowly descending tunnel. There is often evidence of expelled sediment forming a wide delta-like tunnel opening, and signals such as scratches and tracks are frequently observed. If a burrow system has more than one entrance, the area at the center of all the openings is raised. It is assumed that each burrow system is occupied by a single individual. Figure 2 shows the features of the Nephrops burrow system.
Nephrops spend most of their time inside their burrows, and their emergence behavior is influenced by several factors, such as time of year, light intensity, and tidal strength [10]. For this reason, abundance indices obtained from commercial catches or traditional bottom trawl surveys are thought to be poorly representative of the Nephrops population and are not considered appropriate [11,12].
The abundance of Nephrops populations is currently monitored by underwater television (UWTV) surveys on many European grounds. The methodology used in UWTV surveys was developed in Scotland in the 1990s and is based on the identification and quantification of burrow systems over the known area of Nephrops distribution [13]. Nephrops abundance from UWTV surveys is the basis of the assessment and advice for managing these stocks [14].
Videos are recorded using a camera system mounted on a sledge at an angle to the bottom ranging between 37° and 60°, depending on the country [15]. They are reviewed manually by trained experts and quantified following the protocol established by ICES [8,16].
With the recent advancement of artificial intelligence and computer vision technology, many researchers employ AI-based tools to analyze marine species. Some use feature extraction mechanisms to count and identify the species, while others use more advanced techniques [17] such as neural networks. Convolutional neural networks (CNNs) have brought a revolution in object detection. Deep convolutional neural networks have achieved tremendous success in object detection [18,19], classification [20,21], and segmentation [22,23]. These networks are data-driven and require a large amount of labeled data for training.
In our previous work [24], we developed a deep learning model based on the state-of-the-art Faster R-CNN [19] models Inceptionv2 [25] and MobileNetv2 [26] for the detection of Nephrops openings. Those models were trained on Gulf of Cadiz and Irish datasets and achieved good results in detecting burrows in the image test data. However, when the trained models were tested on a video from the Gulf of Cadiz, the accuracy of the detectors degraded. We found many false positive (FP) and missed true positive (TP) detections that adversely affected the accuracy of these models.
In this work, we propose a detection refinement mechanism based on spatial–temporal information to recover missed true positive detections and suppress false positive detections. The work presented in [27] used temporal information to track faces and suppress false positive detections. That approach used low-level tracking to detect faces in natural images and does not recover missed detections. In our case, low-level tracking cannot be applied because we are working with underwater videos and the objects we detect are not animals but burrows on the seabed, whose characteristics are very different from those of natural images. In contrast to [27], our approach uses both spatial and temporal information to suppress false positives and recover missed detections. Our work is divided into two parts. First, we trained models using the state-of-the-art Faster R-CNN [19] architectures Inceptionv2 [25], ResNet50 [28], and ResNet101 [29] for the detection of Nephrops burrows, and we built the dataset for training and testing the models. In the second part of our work, we present a spatial–temporal detection refinement algorithm. We detect the burrows in each frame of a video sequence and then use the spatial and temporal information across multiple frames to refine the Nephrops burrow detections. The spatial–temporal mechanism helps suppress FP burrows and recover missed TP detections, which leads to better accuracy as well as tracking and counting of burrows in a video sequence. Figure 3 shows the result of the detector that we trained using the Inception model. The blue bounding boxes show the ground truth, while the red bounding boxes show the detections from the Inception model. Due to variation in camera direction and burrow appearance, the detector accumulates FPs and missed detections in some frames. The figure clearly shows the missed detections in the intermediate frames.
To address these challenges, we propose a detection refinement approach based on spatial–temporal analysis that enhances the mAP of a generic detector by identifying and recovering missed detections and suppressing false positives. Our approach makes the following contributions:
i.
We propose a spatial–temporal filtering (STF) model that exploits the spatial and temporal information of all detections across the consecutive frames of an input video to suppress false positives and recover missed detections. The proposed method improves the performance of generic detectors (such as Inception and ResNet, in our case).
ii.
We evaluate the performance of the proposed framework on a novel dataset. The experimental results demonstrate the effectiveness of the proposed approach.
The rest of the paper is organized as follows: the related work is presented in Section 2. The Materials and Methods section given in Section 3 presents the data collection method and proposed methodology to refine the detections. The achieved results with the proposed methodology are discussed in Section 4. Finally, Section 5 concludes the article.

2. Related Work

Object detection and classification are challenging computer vision problems, and researchers have developed many methods for them. Existing object detection approaches use handcrafted-feature models [30,31,32,33] and deep-feature models [34]. Handcrafted-feature models use basic features such as shape [35], texture [36,37,38], and edges [35,38] to train the classifier. Convolutional neural networks, on the other hand, automatically learn hierarchical features from the training set. Deep learning replaces handcrafted features and introduces efficient algorithms for object detection and classification. Over the last few years, deep learning models have enjoyed tremendous success in various object detection and classification tasks. For this reason, deep learning models are also employed in the detection and classification of underwater species. Although the underwater environment is harder and more challenging than terrestrial scenes, deep learning algorithms perform much better than conventional handcrafted-feature approaches. State-of-the-art deep learning-based object detectors include the region-based convolutional network (R-CNN) [39], Fast R-CNN [40], and Faster R-CNN [19]. R-CNN uses a deep ConvNet to classify object proposals. The R-CNN algorithm is computationally expensive, as it uses a selective search [41] strategy to generate a large number of object proposals followed by an object proposal classification step. Fast R-CNN improves on R-CNN by achieving a faster training process: it uses multi-task learning to update all network layers and handle the loss, which improves the speed and accuracy of the network. Compared to both methods, Faster R-CNN introduces a region proposal network (RPN) and combines it with Fast R-CNN into a single network.
Li et al. [42] developed a deep learning model for the detection of marine objects. The model detects and recognizes fish using a deep convolutional network. They applied the Fast R-CNN algorithm to classify twelve different classes of underwater fish and introduced a dataset of 24,272 images covering these classes, achieving more than 90% detection accuracy. Similarly, Villon et al. [43] applied deep learning algorithms to the Fish4Knowledge project dataset to detect and classify fish. Rathi et al. [44] combined Faster R-CNN with three classification networks (ZF Net, CNN-M, and VGG16) to detect 50 fish and crustacean species from Queensland beaches and estuaries; their region proposal method consists of a region proposal network coupled with a classifier network. Xu et al. [45] applied the YOLO deep learning model to recognize fish in underwater videos, using three different datasets recorded at real-world water power sites and achieving an mAP of up to 53.92%. Mandal et al. [46] presented a Faster R-CNN approach to identify fish and their species using deep neural networks. Gundam et al. [47] proposed a Kalman-filter-based fish classification technique that provides partial automation of fish classification from underwater videos. Jalal et al. [1] proposed a hybrid approach that combines YOLO-based object detection with optical flow and Gaussian mixture models to detect and classify fish in underwater videos. A similar YOLO-based method to detect and classify fish was proposed by Sung et al. [48]; they used 892 images and achieved a fish classification accuracy of up to 93%. Jager et al. [49] proposed a deep CNN approach based on the AlexNet architecture for the classification of fish species, using the LifeCLEF 2015 dataset. Zhuang et al. [50] proposed a deep learning model based on the SSD detector to automatically identify fish and their species, using ResNet-10 as the classifier for species identification. Zhao et al. [51] proposed an automatic detection and classification method for fish and other underwater species. The proposed method, called Composited FishNet, is based on a composite backbone and an enhanced path aggregation network; the composite backbone is an improvement of ResNet, and the enhanced path aggregation network is designed to improve the semantic information affected by upsampling. Their results show an average precision (AP) of 75.2%. Labao et al. [52] proposed a multilevel object detection network that uses R-CNN as the network framework; it contains two region proposal networks and seven CNNs connected by long short-term memory (LSTM) and shows improved performance over simple one-stage detection networks. Salman et al. [53] proposed an R-CNN-based two-stage automatic fish detection and localization method. They combined fish motion information with background and optical flow information to generate candidate fish regions. Their model requires a fixed-size input image, and the candidate region extraction also needs substantial disk space.
Deep learning models have also been employed to detect marine objects other than fish, such as plankton and corals, which are major components of the underwater marine ecosystem. Plankton form the basis of the aquatic food chain. Dieleman et al. [54] used a deep neural network with inception modules for image feature extraction to classify plankton. Lee et al. [55] also proposed a deep neural network for plankton classification on a large dataset; their convolutional neural network used three convolutional layers and two fully connected layers. The main challenges in coral classification are variations in color, size, texture, and shape. Shiela et al. [56] introduced a local binary pattern for texture and color description and used a neural network with three backpropagation layers for classification. Elawady et al. [57] used a supervised CNN for the classification of corals. Table A1 in Appendix B summarizes the key findings of the papers discussed in this section.

3. Materials and Methods

In this section, we discuss the proposed methodology for improving the detection of Nephrops burrows. Figure 4 shows the pipeline of the proposed framework. This section also presents the equipment and method used in data collection in detail. The proposed framework has two sequential stages: object detection in the first stage and detection refinement in the second. During the first stage, we use state-of-the-art generic detectors, namely Faster R-CNN with Inception, ResNet50, and ResNet101 backbones, to detect the Nephrops burrows. For this purpose, we first divide the input video sequence into temporal segments, each consisting of N frames. We then apply the detectors to each temporal segment to detect Nephrops burrows. The obtained results are passed to the refinement module, which employs spatial–temporal filtering (STF) to recover missed detections and suppress false positive detections. This process improves the mean average precision (mAP) of the results obtained from the detectors.
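The two-stage pipeline can be summarized with the following minimal Python sketch. The function and variable names are illustrative rather than the authors' code; OpenCV is assumed for frame extraction, and the detector and STF refinement calls are placeholders for the components described in the rest of this section.

# Minimal sketch of the two-stage pipeline (illustrative names only).
import cv2  # OpenCV assumed for frame extraction

def read_frames(video_path):
    """Yield frames from the input video one by one."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield frame
    cap.release()

def temporal_segments(frames, segment_size):
    """Group the frame stream into segments of segment_size frames."""
    segment = []
    for frame in frames:
        segment.append(frame)
        if len(segment) == segment_size:
            yield segment
            segment = []
    if segment:
        yield segment

def run_pipeline(video_path, detector, refine_stf, segment_size=1500):
    """Stage 1: per-frame detection; stage 2: spatial-temporal refinement."""
    refined = []
    for segment in temporal_segments(read_frames(video_path), segment_size):
        detections = [detector(frame) for frame in segment]   # stage 1
        refined.extend(refine_stf(detections))                # stage 2
    return refined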

3.1. Nephrops Burrows Detections

To detect and classify the Nephrops burrows, state-of-the-art Faster R-CNN deep learning algorithms, Inceptionv2 [25], ResNet50 [28], and ResNet101 [29], were used to train the models. Figure 5 shows the pipeline of the proposed detection framework.

3.1.1. Data Collection

High-resolution footage was collected using a sledge during the 2018 underwater TV (UWTV) survey in the Gulf of Cadiz by marine scientists from the IEO (Instituto Español de Oceanografía), a Spanish research institution devoted to promoting ocean research and knowledge, including government assessment for sustainable fisheries. The sledge is a stainless-steel underwater vehicle equipped with multiple cameras, sensors, lasers, and lights to record the footage. Figure 6 shows the setup of the instruments mounted on the sledge and a sample image, and a complete description is presented in Table 1.
Sampling at 70 stations was conducted in the 2018 UWTV survey. A station is a geostatistical location where the Nephrops burrow density is estimated in order to obtain the Nephrops abundance index over the known survey area using geostatistical analysis. At each station, the sledge was deployed and towed at a constant speed of 0.6–0.7 knots to obtain the best possible conditions for counting Nephrops burrows. Once the sledge is stable on the seabed, video footage of 10–12 min at 25 frames per second is recorded, corresponding to approximately 200 m swept. The vessel position (dGPS) and the position of the sledge, obtained with a HiPAP transponder, are recorded every 1 to 2 s. The distance over ground (DOG) is estimated from the position of the sledge at all stations, and the field of view (FOV) of the video footage is 75 cm, which was confirmed using two line lasers. Out of these 70 stations, we selected seven based on good lighting conditions, high contrast, high density of Nephrops burrows, and good visibility of the burrows. The recorded footage was saved to hard disks for further analysis of Nephrops density.

3.1.2. Image Annotation

The obtained frames were annotated using the Microsoft VoTT [58] tool. The burrows were annotated manually in VoTT and the annotations were saved in Pascal VOC format. The saved XML annotation file contains the image name, class name (Nephrops), and bounding box details of each object of interest in the image. The annotated frames constitute the ground truth (GT) for model training. To create the datasets for training and testing, from the set of annotated frames (more than 100,000) we selected those which contained Nephrops burrows, using only one frame per individual object, selected to increase the diversity of its appearance, with the aim of creating a small dataset containing most of the typical cases of Nephrops burrows.
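For reference, the content of such a Pascal VOC annotation file can be read back with only the Python standard library, as in the following minimal sketch. The tag names follow the standard Pascal VOC layout produced by VoTT; the file name in the usage example is hypothetical.

# Minimal sketch: reading one Pascal VOC XML annotation file.
import xml.etree.ElementTree as ET

def load_voc_annotation(xml_path):
    """Return the image filename and a list of (class_name, xmin, ymin, xmax, ymax)."""
    root = ET.parse(xml_path).getroot()
    filename = root.findtext("filename")
    boxes = []
    for obj in root.findall("object"):
        name = obj.findtext("name")  # e.g., "Nephrops"
        bb = obj.find("bndbox")
        boxes.append((
            name,
            int(float(bb.findtext("xmin"))),
            int(float(bb.findtext("ymin"))),
            int(float(bb.findtext("xmax"))),
            int(float(bb.findtext("ymax"))),
        ))
    return filename, boxes

# Example usage (hypothetical file name):
# filename, boxes = load_voc_annotation("frame_000123.xml")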

3.1.3. Annotation Validation

Annotating Nephrops burrows is a tedious job that requires a lot of experience, because other species build burrows with a similar appearance on the seabed. Once all the burrows were annotated, it was very important to validate each one of them with the advice of marine experts from the IEO center in the Gulf of Cadiz. Only the validated annotations were used in model training.

3.1.4. Prepare Dataset

After validating all the annotations, the dataset was divided into two independent groups, the first for training and the second for testing. Details are given in Table 2.

3.1.5. Model Training

We utilized transfer learning [59] to fine-tune the models in TensorFlow [60]. Inceptionv2 [25] is an architecture that achieves a high degree of accuracy while reducing the complexity of the CNN: it factorizes larger convolutions into 3 × 3 convolution layers, which improves the computational speed and efficiency of the network.
ResNet50 [28] is a variant of the ResNet model. It has 48 convolutional layers, one max pooling layer, and one average pooling layer, making it a 50-layer-deep convolutional network. The first convolution stage uses one layer with 64 kernels of size 7 × 7 and stride 2, followed by a 3 × 3 max pooling with stride 2. The second stage uses nine layers, with each block consisting of 1 × 1, 64; 3 × 3, 64; and 1 × 1, 256 kernels. The third stage uses 12 layers with blocks of 1 × 1, 128; 3 × 3, 128; and 1 × 1, 512 kernels. The fourth stage uses 18 layers with blocks of 1 × 1, 256; 3 × 3, 256; and 1 × 1, 1024 kernels. The fifth stage uses nine layers with blocks of 1 × 1, 512; 3 × 3, 512; and 1 × 1, 2048 kernels. Finally, the last layer performs average pooling followed by a softmax function. ResNet50 is a widely used ResNet model.
ResNet101 [29] is a dense convolutional neural network that is 101 layers deep. The first convolution stage has 64 kernels of size 7 × 7 with stride 2 and a 3 × 3 max pooling with stride 2. The second stage uses nine layers, with each block consisting of 1 × 1, 64; 3 × 3, 64; and 1 × 1, 256 kernels. The third stage uses 12 layers with blocks of 1 × 1, 128; 3 × 3, 128; and 1 × 1, 512 kernels. The fourth stage uses 69 layers with blocks of 1 × 1, 256; 3 × 3, 256; and 1 × 1, 1024 kernels. The fifth stage uses nine layers with blocks of 1 × 1, 512; 3 × 3, 512; and 1 × 1, 2048 kernels. Finally, the last layer performs average pooling followed by a softmax function. ResNet50 and ResNet101 have better accuracy than the other models for our problem.
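The following sketch illustrates the general transfer-learning idea with an ImageNet-pretrained ResNet50 backbone in TensorFlow/Keras. It is not the exact Faster R-CNN training used here, which relies on a detection framework with a region proposal network; it only shows how pretrained convolutional weights are reused while new layers are fine-tuned.

# Illustrative transfer-learning sketch with a ResNet50 backbone
# (not the paper's Faster R-CNN training pipeline).
import tensorflow as tf

def build_transfer_model(num_classes=2, input_shape=(224, 224, 3)):
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False  # freeze the pretrained convolutional layers
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model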

3.1.6. Testing

To test our algorithm, we selected another station from the Gulf of Cadiz whose frames were not used in the training dataset. The test video, which is five minutes long and contains 7500 frames, was divided into temporal segments and then passed to our trained models to obtain the Nephrops burrows detections.

3.2. Detection Refinements

After detecting the Nephrops burrows, we performed a post-analysis of the obtained results and found that the detectors produce many FPs and miss many TPs, which degrades accuracy. To recover missed detections and suppress FPs, we propose a detection refinement algorithm that exploits the spatial–temporal information among consecutive frames of a given temporal segment. The Inception, ResNet50, and ResNet101 models are tested on a video of five minutes in length. The proposed detection refinement algorithm takes V, λ, and W as inputs, where V is the video, λ is a threshold on the displacement vector, i.e., the IoU (intersection over union) value that is later compared with the average IoU of a detected Nephrops burrow, and W is the temporal window size, which determines the number of frames in the temporal window. These models provide a set of TP, FP, and missed detections. The criteria defining TP, FP, and missed detections and the working of the proposed refinement algorithm are discussed in the next sections.
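For reference, the IoU between two bounding boxes in the (x, y, w, h) convention used in Algorithm A1 can be computed as in the short sketch below (an illustrative helper, not the authors' code).

# IoU between two axis-aligned boxes given as (x, y, w, h).
def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0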

3.2.1. True Positives (TP)

The algorithm considers a detection to be a TP if it is continuously detected by the detector within the temporal window and its average IoU over all frames in the temporal window is greater than or equal to the threshold value λ. As a consequence, if the detector produces an FP detection that continues to occur in all consecutive frames, our algorithm will also consider it a TP detection.

3.2.2. False Positives (FP)

FP detections are those which are not detected in the consecutive frames and whose combined IoU is less than the threshold value λ. These detections are also labeled as FP with respect to the ground truth dataset. The detectors report them as positives because of the camera angle (45°) and the position and orientation of the burrow.

3.2.3. Missed Detections

Missed detections are TP detections that are found by the detector in some frames but missed in intermediate frames due to the position or visibility of the burrow. Identifying missed detections is very important, because without them a burrow cannot be tracked. Recovering the missed detections increases the performance of the models.

3.3. Working of Detection Refinement Algorithm

The proposed algorithm is presented in Appendix A and shows the refinement mechanism based on spatial–temporal analysis of the data. The algorithm is divided into two stages, i.e., suppression of false positives and identification of missed detections. Figure 7 shows the basic processing steps of false positive suppression and missed detection identification and recovery.

3.3.1. Suppression of False Positives

The first step towards the refinement of detections is to suppress the FPs. Let Fi = {B1, B2,…, Bn} be frame i with n detections obtained from a deep learning model, and let sF be the set of consecutive frames within a temporal window of size W. The algorithm takes each detection Bj of frame Fi as input for refinement and provides a refined output FR. To suppress the FPs in the current frame i, we compute the overlap between each detection Bj of the current frame and the corresponding detections in the subsequent frames of sF.
The algorithm receives three inputs: an input video with detections V, the threshold value λ, and the temporal window size W. For each detection Bj in the current frame Fi, we first identify the location of the current detection in the following frames of sF and then compute δk, the IoU of the current detection with the detection in the k-th consecutive frame of sF (k = 1,…, W), using the Compare_Displacement_Vector(fb_Index, fcb_Index) method. Then, δavg = (1/W) Σ δk is the estimated average within the temporal window. We mark the detection as FP if δavg < λ, and as TP otherwise, thereby suppressing the FPs. The detections of the whole video V are processed in the same way.
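The suppression step can be sketched in Python as follows. This is a minimal illustration of the average-IoU rule, assuming the iou helper given above; the function name and data layout are illustrative, not the authors' implementation.

# Sketch of false-positive suppression: keep a detection only if its average
# best-match IoU over the next W frames is at least the threshold λ.
def suppress_false_positives(frames, window, threshold):
    """frames: list of lists of boxes (x, y, w, h); returns refined frames."""
    refined = []
    for i, boxes in enumerate(frames):
        next_frames = frames[i + 1: i + 1 + window]
        kept = []
        for box in boxes:
            # best IoU of this detection in each of the following frames
            deltas = [max((iou(box, other) for other in future), default=0.0)
                      for future in next_frames]
            avg_delta = sum(deltas) / window if window else 0.0
            if avg_delta >= threshold:   # marked TP: keep it
                kept.append(box)
            # otherwise the average overlap is too low: treated as FP and dropped
        refined.append(kept)
    return refined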

3.3.2. Identification of Missed Detections

After suppressing the FPs in the previous step, the next step is to identify the detections that were missed by the detector. For this purpose, we track each detection Bj of Fi. If the detection is found in frame i + 1, we continue to track it up to the temporal window size W. If the current detection is not found in some frame, we mark it as a missed detection and store its location in the set indexSet. To calculate the value of the missed detection, we define the Set_BoundingBox_Value() method. We first obtain the location of the missed detection from indexSet. Letting Bj be the current detection and indexSetj the missed detection, we accumulate the detection values from the current frame up to the indexSet location and then compute their average, called bBValue_missing. Since we maintain the number of frames N between the current detection and the missed detection, we calculate the missed detection value by adding N to bBValue_missing. The missed detection information is then filled in, and the refined output FR is updated.
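A simplified sketch of this recovery step is given below. It fills a gap with the average of the surrounding boxes when a tracked burrow disappears and then reappears within the temporal window; it mirrors the idea of Set_BoundingBox_Value() in Algorithm A1 but is an illustration under that simplifying assumption, not the authors' exact implementation (it again assumes the iou helper above).

# Sketch of missed-detection recovery: if a detection from frame i vanishes
# in frame j but reappears by frame k within the window, fill frames j..k-1
# with the average of the two known boxes.
def recover_missed(frames, window, threshold):
    """frames: list of lists of boxes (x, y, w, h); modified in place."""
    for i, boxes in enumerate(frames):
        for box in boxes:
            end = min(i + window, len(frames))
            for j in range(i + 1, end):
                if any(iou(box, other) >= threshold for other in frames[j]):
                    continue                      # still detected, keep tracking
                # frame j misses the object: look for its reappearance
                for k in range(j + 1, end):
                    match = next((other for other in frames[k]
                                  if iou(box, other) >= threshold), None)
                    if match is not None:
                        filled = tuple((a + b) / 2.0 for a, b in zip(box, match))
                        for m in range(j, k):     # fill the whole gap
                            frames[m].append(filled)
                        break
                break                             # handle one gap per detection
    return frames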

4. Experiments and Results

In this section, we evaluate the results of the different experiments performed with the proposed detection refinement algorithm. We use three different models (Inception, ResNet50, and ResNet101) trained on the Gulf of Cadiz dataset. Each model is trained for up to 100k iterations, and a log is kept every 10k iterations for evaluation.

4.1. Quantitative Analysis

In the quantitative analysis, an annotated video with a frame rate of 25 fps is used for testing the Inception, ResNet50, and ResNet101 models. The video is divided into five temporal segments of one minute each, so each temporal segment has 1500 frames.
We record the number of detections in each temporal segment for all three models. The detections are then processed by the proposed detection refinement algorithm to identify the TP, FP, and missed detections. Table A2, Table A3, Table A4, Table A5 and Table A6 in Appendix B show the results obtained in each temporal segment by each model and their corresponding improvement with the proposed detection refinement algorithm. The algorithm is run with W = 8, 12, and 16. For each temporal window, the algorithm is tested with λ = 0.3 and 0.4, and the number of TP, FP, and missed detections and the F1-score (harmonic mean of precision and recall) are computed for each minute of the video.
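As a small, self-contained reference for how these figures relate, the sketch below computes precision, recall, and F1-score from TP, FP, and GT counts. The usage example reads the first Inception row of Table A2 as TP = 166, FP = 9, Miss = 13 (our parsing of the flattened appendix table), which reproduces the reported 94.9% precision, 65.1% recall, and 77.2% F1-score before refinement and the corresponding values after refinement when the recovered missed detections are counted as TPs.

# Precision, recall, and F1-score as reported in Tables A2-A6.
def detection_metrics(tp, fp, gt):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / gt if gt else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Before refinement (Table A2, Inception, W = 8, λ = 0.3, GT = 255):
# detection_metrics(166, 9, 255)       -> (0.949, 0.651, 0.772)
# After refinement, recovered missed detections counted as TPs:
# detection_metrics(166 + 13, 9, 255)  -> (0.952, 0.702, 0.808)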
Table 3 shows the accumulated ground truth (GT), TP, FP, and missed (Miss) detections along with the mean values of precision, recall, and F1-score for each temporal segment. The %Before column reports the results obtained before applying the STF, while %After reports the results obtained after applying the refinement algorithm. Table 3 shows that ResNet101 gives the best F1-score in each of the five temporal segments, followed by ResNet50 and Inception. A smaller IoU threshold of 0.3 is clearly better than 0.4 in terms of precision, recall, and F1-score for all three models, because the area surrounding the burrows is sometimes not well defined. The effect of the window size W shows a trend of better results for smaller values (mostly, W = 8 is better than W = 12 and W = 16).
We performed experiments to measure the accuracy, using mean average precision (mAP), after applying the detection refinement algorithm. We selected two different image sets from the third (image set 1) and fifth (image set 2) temporal segments, each consisting of almost 200 images. Table 4 defines the experiments performed.
Figure 8 and Figure 9 show the results of the experiments performed on image sets 1 and 2, respectively. The graphs show the detection results with and without the detection refinement algorithm, evaluated every 10k iterations. The results clearly show that the mAP increases after applying the refinement algorithm for all three models (Inception (a), ResNet50 (b), and ResNet101 (c)) and all iteration numbers. Figure 8 shows a larger improvement in mAP after applying the proposed refinement algorithm than Figure 9, where some improvement is also achieved, partly because image set 1 had a lower mAP before refinement. Image set 2 has better quality than image set 1 in terms of burrow appearance and camera movement artifacts. This suggests that mAP is quite sensitive to video quality and that the proposed refinement algorithm compensates for this to some degree.

4.2. Qualitative Analysis

In this section, we qualitatively analyze the performance of the proposed detection refinement algorithm by applying it to the results obtained from Inception, ResNet50, and ResNet101 models. The red bounding boxes on the images shown in this section are the original detections obtained from the models; green bounding boxes are the recovered missed detections after applying the refinement algorithm, and ground truth data are marked with blue bounding boxes.
Figure 10 shows a typical example of FP suppression applied to the detections obtained from the Inception model. Figure 10a–c shows three frames where all burrow entrances are detected correctly but some FP detections are also obtained; these are suppressed by our proposed algorithm, resulting in the correct detections shown in Figure 10d–f.
A second rectification performed by the proposed detection refinement algorithm is the identification of missed detections. Figure 11 shows an example of six consecutive frames, before (a–f) and after (g–l) the application of the algorithm. Figure 11a shows two Nephrops burrow detections, but one detection is missed in (b–e); this is correctly rectified by the algorithm, as shown in the corresponding images (h–k). It can also be seen that the ground truth annotations contain a third object in Figure 10d,f, which is correctly detected by the models but is not shown in Figure 10a–c,e, possibly due to the viewing angle of some frames. Overall, the identification of missed detections has a positive impact on the accuracy and precision of the results. A similar approach is followed to rectify the detections from the ResNet50 and ResNet101 models.

5. Conclusions

Deep learning algorithms performed very well on the Gulf of Cadiz dataset in identifying the burrows of Nephrops norvegicus. We applied the Faster R-CNN detectors with Inception, ResNet50, and ResNet101 backbones for detection. To increase the accuracy of the results, a spatial–temporal detection refinement algorithm was proposed and tested. The proposed algorithm suppresses false positive detections and recovers missed true positive detections. When integrated with the tested detectors, the proposed method consistently increased performance, measured by mAP. This mechanism can help marine science experts in assessing the abundance of this species.
In future work, we plan to use diverse datasets from UWTV surveys conducted in other Nephrops stocks by other countries. We will train the YOLO detectors with more and diverse datasets. In addition, we plan to track the burrows to estimate the abundance of Nephrops. We also plan to correlate the spatial and morphological distribution of burrow holes to estimate the number of burrow systems that are present and compare with human inter-observer variability studies.

Author Contributions

Data collection, Y.V., A.N. and E.N.B.; images annotation, research methodology, and implementation, A.N., E.N.B. and S.D.K.; validation, S.D.K. and Y.V.; writing—original draft preparation, A.N.; writing—review and editing, E.N.B., S.D.K. and Y.V. All authors have read and agreed to the published version of the manuscript.

Funding

The Open Access Article Processing Charges were funded by the University of Malaga.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank the CN—Spanish Oceanographic Institute, Cadiz, Spain, for providing the dataset for research.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Algorithm A1: Detection Refinement
Input Data V, λ, W, where V is an input video, λ is a threshold value for displacement vector, W is a size of temporal window
Results FR = {F1, F2,…, Fn}, where FR is a list of refined frames
begin
        F = Extract_Frames_With_BoundingBox(V) // F = {F1, F2,…, Fn} where F is the list of frames and each one Fi = {B1, B2,…, Bn} has n bounding boxes Bj = {xj, yj, wj, hj}, where (xj, yj) are coordinates of initial pixel of the bounding box j and wj, hj are width and height.
        T = Extract_Duration(V) // T = {T1, T2, …, Tn} where T is total time of the video
        Foreach frame f ∈ F do
                   FR = Add_Frame(f)
                   sF = Create_Subset_Frame_W_Range(F) // sF is the list of frames to be compared with the current frame within the temporal window of size W
           deleteFlag = Set (FALSE)
            Foreach boundingbox b ∈ f do
                  b_Index = Get_Bounding_Box_Index(b)
                  Foreach frame fc ∈ sF do //where fc = f+1
                     delta += Compare_Displacement_Vector (fb_Index, fcb_Index)
                  endFor
                  avgDelta = delta/W
                  if avgDelta < λ then
                              deleteFlag = Set (TRUE)
                  endif
                  if deleteFlag is FALSE then
                               FR = Add_Bounding_Box_in_Frame(f, fb_Index)
                  endif
            endFor
             Foreach boundingbox b ∈ f do
                 indexSet = Identify_Missing_Detection(b, FR)
          endFor
          lastIndex = 0
          for index in FR do
               if index is in indexSet
                     j = index
                     for lastIndex to j
                                 bBValue += Get_BoundingBox_Value(b, flastIndex)
                    endfor
                    bBValue_missing = bBValue/j
                    Set_BoundingBox_Value(b, fj, bBValue_missing)
                    lastIndex = j;
               endif
          endFor
        endFor
       return FR
end

Appendix B

Table A1. Underwater object detection with key findings.
Author | Year | Approach | Object Detection | Dataset | Performance Parameters
Li et al. | 2015 | Deep convolutional network | Marine objects | ImageCLEF_Fish_TS dataset, 24,272 images | mAP
Villon et al. | 2016 | HOG, SVM and deep learning | Fish detection | Fish4Knowledge, 13,000 fish thumbnails | Precision, Recall, F-Score
Rathi et al. | 2018 | Faster R-CNN (ZF Net, CNN-M, VGG16) | Fish and crustacean species | Fish4Knowledge, 27,142 images | AP
Xu et al. | 2018 | YOLO | Fishes | 3 datasets | mAP
Mandal et al. | 2018 | Faster R-CNN | Fishes | Uni of Sunshine Coast, 12,365 images | mAP
Jalal et al. | 2020 | YOLO-based hybrid approach | Fish classification | LifeCLEF 2015, 93 videos | F-Score
Sung et al. | 2017 | YOLO | Fish detection | 892 images | Precision, Recall, FPS
Jager et al. | 2016 | CNN AlexNet | Fish classification | LifeCLEF 2015 | AP, Precision, Recall
Zhuang et al. | 2017 | ResNet-10 | Underwater species | SEACLEF 2017 | AP
Zhao et al. | 2021 | Composed FishNet | Fish and underwater species detection | SeaCLEF 2017, 200,000 images | AP, F-Measure
Labao et al. | 2019 | Multilevel R-CNN | Fish detection | 300 underwater images | Precision, Recall, F-Score
Salman et al. | 2019 | Two-stage R-CNN | Fish detection | Fish4Knowledge, LCF-15 | Precision, Recall, F-Score
Lee et al. | 2016 | Three-layer CNN | Plankton detection | WHOI-Plankton database, 3.2 million images | F1-Score
Table A2. Detections and refinement results of 1st temporal segment.
1st Temporal Segment
GT = 255
Model | W | λ | TP | FP | Miss | Recall % Before | Recall % After | Precision % Before | Precision % After | F1-Score % Before | F1-Score % After
Inception80.316691365.170.294.995.277.280.8
80.4149261258.463.185.186.169.372.9
120.3165101564.770.694.394.776.780.9
120.468107926.730.238.941.831.635.1
160.3163124163.980.093.194.475.886.6
160.4661091925.933.337.743.830.737.9
ResNet5080.3188203173.785.990.491.681.288.7
80.4177312069.477.385.186.476.581.6
120.3186224372.989.889.491.280.390.5
120.4110981943.150.652.956.847.553.5
160.3175334168.684.784.186.775.685.7
160.4931151236.541.244.747.740.244.2
ResNet10180.3217262485.194.589.390.387.192.3
80.4164792064.372.267.570.065.971.0
120.3188552873.784.777.479.775.582.1
120.41001431839.246.341.245.240.245.7
160.3181622171.079.274.576.572.777.8
160.4961471337.642.739.542.638.642.7
Table A3. Detections and refinement results of 2nd temporal segment.
2nd Temporal Segment
GT = 585
Model | W | λ | TP | FP | Miss | Recall % Before | Recall % After | Precision % Before | Precision % After | F1-Score % Before | F1-Score % After
Inception80.3398336168.078.592.393.378.385.2
80.43241074655.463.275.277.663.869.7
120.3393387367.279.791.292.577.485.6
120.42711604146.353.362.966.153.359.0
160.33933811567.286.891.293.077.489.8
160.42691626846.057.662.467.553.062.2
ResNet5080.34204510571.889.790.392.180.090.9
80.43061598552.366.865.871.158.368.9
120.34046111469.188.586.989.577.089.0
120.42412247841.254.551.858.745.956.6
160.336310216862.190.878.183.969.187.2
160.423223310439.757.449.959.144.258.2
ResNet10180.34413110375.493.093.494.683.493.8
80.44331398974.089.275.779.074.883.8
120.34684910380.097.690.592.184.994.8
120.43092636852.864.454.058.953.461.6
160.34155714570.995.787.990.878.593.2
160.43002728951.366.552.458.951.962.4
Table A4. Detections and refinement results of 3rd temporal segment.
3rd Temporal Segment
GT = 480
Model | W | λ | TP | FP | Miss | Recall % Before | Recall % After | Precision % Before | Precision % After | F1-Score % Before | F1-Score % After
Inception80.3163234534.043.387.690.048.958.5
80.4132543727.535.271.075.839.648.1
120.3160264733.343.186.088.848.058.1
120.4106803022.128.357.063.031.839.1
160.3159274633.142.785.588.447.757.6
160.4641222813.319.234.443.019.226.5
ResNet5080.3291438760.678.887.189.871.583.9
80.4269656956.070.480.583.966.176.6
120.32805410658.380.483.887.768.883.9
120.42031315942.354.660.866.749.960.0
160.32746011457.180.882.086.667.383.6
160.41811535537.749.254.260.744.554.3
ResNet10180.33544010573.895.689.892.081.093.8
80.4335598869.888.185.087.876.787.9
120.33684611176.799.888.991.282.395.3
120.4302926462.976.376.679.969.178.0
160.33254513667.796.087.891.176.593.5
160.42681267955.872.368.073.461.372.8
Table A5. Detections and refinement results of 4th temporal segment.
4th Temporal Segment
GT = 468
Model | W | λ | TP | FP | Miss | Recall % Before | Recall % After | Precision % Before | Precision % After | F1-Score % Before | F1-Score % After
Inception80.3304246465.078.692.793.976.485.6
80.4280485159.870.785.487.370.478.2
120.3296326763.277.690.291.974.484.1
120.4235934850.260.571.675.359.067.1
160.3293357262.678.089.391.373.684.1
160.42061224344.053.262.867.151.859.4
ResNet5080.3330286670.584.692.293.479.988.8
80.4284745060.771.479.381.968.876.3
120.3327318169.987.291.392.979.290.0
120.42471115052.863.569.072.859.867.8
160.3325339869.490.490.892.878.791.6
160.42321264949.660.064.869.056.264.2
ResNet10180.3388425082.993.690.291.386.492.4
80.4352783775.283.181.983.378.483.2
120.3387435782.794.990.091.286.293.0
120.42471833852.860.957.460.955.060.9
160.3380506181.294.288.489.884.692.0
160.42321983149.656.254.057.051.756.6
Table A6. Detections and refinement results of 5th temporal segment.
5th Temporal Segment
GT = 571
Model | W | λ | TP | FP | Miss | Recall % Before | Recall % After | Precision % Before | Precision % After | F1-Score % Before | F1-Score % After
Inception80.3349267361.173.993.194.273.882.8
80.42651105846.456.670.774.656.064.3
120.3302737552.966.080.583.863.873.8
120.42191564238.445.758.462.646.352.8
160.33007510052.570.180.084.263.476.5
160.41991765134.943.853.158.742.150.2
ResNet5080.3390276768.380.093.594.478.986.6
80.4353645061.870.684.786.371.577.6
120.3360575663.072.986.387.972.979.7
120.42681493346.952.764.366.954.359.0
160.3358598562.777.685.988.272.582.6
160.42241934039.246.253.757.845.351.4
ResNet10180.3494415486.596.092.393.089.394.5
80.4436992876.481.381.582.478.881.8
120.3463724181.188.386.587.583.787.9
120.43092262154.157.857.859.455.958.6
160.3453825879.389.584.786.281.987.8
160.42582771645.248.048.249.746.748.8

References

  1. Jalal, A.; Salman, A.; Mian, A.; Shortis, M.; Shafait, F. Fish detection and species classification in underwater environments using deep learning with temporal information. In Ecological Informatics; Elsevier: Amsterdam, The Netherlands, 2020; Volume 57, p. 101088. ISSN 1574-9541. [Google Scholar] [CrossRef]
  2. Diesing, M.; Mitchell, P.; Stephens, D. Image-based seabed classification: What can we learn from terrestrial remote sensing? ICES J. Mar. Sci. 2016, 73, 2425–2441. [Google Scholar] [CrossRef]
  3. Kennedy, E.V.; Roelfsema, C.M.; Lyons, M.B.; Kovacs, E.M.; Borrego-Acevedo, R.; Roe, M.; Phinn, S.R.; Larsen, K.; Murray, N.J.; Yuwono, D.; et al. Reef Cover, a coral reef classification for global habitat mapping from remote sensing. Sci. Data 2021, 8, 196. [Google Scholar] [CrossRef] [PubMed]
  4. Ishibashi, S.; Akasaka, M.; Koyanagi, T.F.; Yoshida, K.T.; Soga, M. Recognition of local flora and fauna by urban park users: Who notices which species? Urban For. Urban Green. 2020, 56, 126867. [Google Scholar] [CrossRef]
  5. Daniel, L.; Martin, Z.; Timm, S. BIIGLE 2.0–Browsing and Annotating Large Marine Image Collections. Front. Mar. Sci. 2017, 4, 83. [Google Scholar] [CrossRef]
  6. Rice, A.L.; Chapman, C.J. Observations on the burrows and burrowing behaviour of two mud-dwelling decapod crustaceans, Nephrops norvegicus and Goneplax rhomboides. Mar. Biol. 1971, 10, 330–342. [Google Scholar] [CrossRef]
  7. Campbell, N.; Allan, L.; Weetman, A.; Dobby, H. Investigating the link between Nephrops norvegicus burrow density and sediment composition in Scottish waters. ICES J. Mar. Sci. 2009, 66, 2052–2059. [Google Scholar] [CrossRef]
  8. Workshop on the Use of UWTV Surveys for Determining Abundance in Nephrops Stocks throughout European Waters. 2007, p. 198. Available online: https://www.ices.dk/sites/pub/CM%20Doccuments/CM-2007/ACFM/ACFM1407.pdf (accessed on 20 April 2022).
  9. Report of the Workshop and training course on Nephrops burrow identification (WKNEPHBID). Available online: https://archimer.ifremer.fr/doc/00586/69782/67673.pdf (accessed on 20 April 2022).
  10. Aguzzi, J.; Sardá, F. A history of recent advancements on Nephrops norvegicus behavioral and physiological rhythms. Rev. Fish Biol. Fish. 2008, 18, 235–248. [Google Scholar] [CrossRef]
  11. Maynou, F.; Sardá, F. Influence of environmental factors on commercial trawl catches of Nephrops norvegicus (L.). ICES J. Mar. Sci. 2001, 58, 1318–1325. [Google Scholar] [CrossRef]
  12. Aguzzi, J.; Sardá, F.; Abelló, P.; Company, J.B.; Rotllant, G. Diel and seasonal patterns of Nephrops norvegicus (Decapoda: Nephropidae) catchability in the western Mediterranean. Mar. Ecol. Prog. Ser. 2003, 258, 201–211. [Google Scholar] [CrossRef]
  13. ICES CM 2016/SSGIEOM:34; Report of the Workshop on Nephrops Burrow Counting, WKNEPS 2016 Report 9–11 November 2016. ICES: Reykjavík, Iceland, 2016; p. 62.
  14. Leocadio, L.; Weetman, A.; Wieland, K. Using UWTV Surveys to Assess and Advise on Nephrops Stocks; ICES Cooperative Research Report no. 340; ICES: Lorient, France, 2018; p. 49. [Google Scholar] [CrossRef]
  15. ICES. Report of the Working Group on Nephrops Surveys (WGNEPS); 6–8 November; ICES CM 2018/EOSG:18; ICES Cooperative Research Report: Lorient, France, 2018; p. 226. [Google Scholar]
  16. Mike, C.; Bell, F.R.; Tuck, I. Nephrops Species. In Lobsters: Biology, Management, Aquaculture and Fisheries; Phillips, B.F., Ed.; Wiley-Blackwell: Hoboken, NJ, USA, 2006; pp. 412–461. ISBN 978-1-4051-2657-1. [Google Scholar]
  17. Naseer, A.; Baro, E.N.; Khan, S.D.; Gordillo, Y.V. Automatic Detection of Nephrops norvegicus Burrows in Underwater Images Using Deep Learning. In Proceedings of the 2020 Global Conference on Wireless and Optical Technologies (GCWOT), Malaga, Spain, 6–8 October 2020; pp. 1–6. [Google Scholar] [CrossRef]
  18. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Region-Based Convolutional Networks for Accurate Object Detection and Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 142–158. [Google Scholar] [CrossRef]
  19. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  20. Shima, R.; Yunan, H.; Fukuda, O.; Okumura, H.; Arai, K.; Bu, N. Object classification with deep convolutional neural network using spatial information. In Proceedings of the 2017 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Okinawa, Japan, 24–26 November 2017; pp. 135–139. [Google Scholar] [CrossRef]
  21. Soltan, S.; Oleinikov, A.; Demirci, M.F.; Shintemirov, A. Deep Learning-Based Object Classification and Position Estimation Pipeline for Potential Use in Robotized Pick-and-Place Operations. Robotics 2020, 9, 63. [Google Scholar] [CrossRef]
  22. Masubuchi, S.; Watanabe, E.; Seo, Y.; Sasagawa, T.; Watanabe, K.; Taniguchi, T.; Machida, T. Deep-learning-based image segmentation integrated with optical microscopy for automatically searching for two-dimensional materials. Npj 2D Mater. Appl. 2020, 4, 3. [Google Scholar] [CrossRef]
  23. Haque, I.R.I.; Neubert, J. Deep learning approaches to biomedical image segmentation. Inform. Med. Unlocked 2020, 18, 100297. [Google Scholar] [CrossRef]
  24. Naseer, A.; Baro, E.N.; Khan, S.D.; Vila, Y.; Doyle, J. Automatic Detection of Nephrops Norvegicus Burrows from Underwater Imagery Using Deep Learning. CMC-Comput. Mater. Contin. 2022, 70, 5321–5344. [Google Scholar] [CrossRef]
  25. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  26. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520. [Google Scholar]
  27. Ren, X. Finding people in archive films through tracking. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar] [CrossRef]
  28. Understanding and Coding a ResNet in Keras. Available online: https://towardsdatascience.com/understanding-and-coding-a-resnet-in-keras-446d7ff84d33 (accessed on 20 March 2022).
  29. TensorFlow Core v2.8.0. Available online: https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet/ResNet101 (accessed on 20 March 2022).
  30. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893. [Google Scholar]
  31. Dollár, P.; Appel, R.; Belongie, S.; Perona, P. Fast feature pyramids for object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1532–1545. [Google Scholar] [CrossRef] [PubMed]
  32. Dollár, P.; Tu, Z.; Perona, P.; Belongie, S. Integral Channel Features; BMVC Press: Sussex, UK, 2009. [Google Scholar]
  33. Felzenszwalb, P.F.; Girshick, R.B.; McAllester, D.; Ramanan, D. Object Detection with Discriminatively Trained Part-Based Models. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1627–1645. [Google Scholar] [CrossRef]
  34. Song, H.A.; Lee, S.-Y. Hierarchical representation using NMF. In International Conference on Neural Information Processing; Lee, M., Hirose, A., Hou, Z.-G., Kil, R.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8226, pp. 466–473. [Google Scholar] [CrossRef]
  35. Chan, A.B.; Morrow, M.; Vasconcelos, N. Analysis of crowded scenes using holistic properties. In Proceedings of the Performance Evaluation of Tracking and Surveillance workshop at CVPR, Miami, FL, USA, 2009; Available online: http://visal.cs.cityu.edu.hk/static/pubs/workshop/pets09-crowds.pdf (accessed on 20 March 2022).
  36. Saqib, M.; Khan, S.D.; Blumenstein, M. Texture-based feature mining for crowd density estimation: A study. In Proceedings of the 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), Palmerston North, New Zealand, 21–22 November 2016; pp. 1–6. [Google Scholar]
  37. Zhang, C.; Li, H.; Wang, X.; Yang, X. Cross-scene crowd counting via deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 833–841. [Google Scholar]
  38. Chan, A.B.; Vasconcelos, N. Modeling, Clustering, and Segmenting Video with Mixtures of Dynamic Textures. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 909–926. [Google Scholar] [CrossRef]
  39. Girshick, R.B.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 23–28 June 2014. [Google Scholar]
  40. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–15 December 2015; pp. 1440–1448. [Google Scholar]
  41. Uijlings, J.R.R.; van de Sande, K.E.A.; Gevers, T.; Smeulders, A.W.M. Selective Search for Object Recognition. Int. J. Comput. Vis. 2013, 104, 154–171. [Google Scholar] [CrossRef]
  42. Li, X.; Shang, M.; Qin, H.; Chen, L. Fast accurate fish detection and recognition of underwater images with fast R-CNN. In Proceedings of the OCEANS 2015—MTS/IEEE, Washington, DC, USA, 19–22 October 2015; pp. 1–5. [Google Scholar]
  43. Villon, S.; Chaumont, M.; Subsol, G.; Villéger, S.; Claverie, T.; Mouillot, D. Coral reef fish detection and recognition in underwater videos by supervised machine learning: Comparison between deep learning and HOG+SVM methods. In Advanced Concepts for Intelligent Vision Systems; Blanc-Talon, J., Distante, C., Philips, W., Popescu, D., Scheunders, P., Eds.; Springer: Cham, Switzerland, 2016; Volume 10016, pp. 160–171. [Google Scholar] [CrossRef]
  44. Rathi, D.; Jain, S.; Indu, S. Underwater Fish Species Classification using Convolutional Neural Network and Deep Learning. arXiv 2018, arXiv:1805.10106. [Google Scholar]
  45. Xu, W.; Matzner, S. Underwater Fish Detection Using Deep Learning for Water Power Applications. In Proceedings of the 2018 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 13–15 December 2018; pp. 313–318. [Google Scholar]
  46. Mandal, R.; Connolly, R.M.; Schlacher, T.A.; Stantic, B. Assessing fish abundance from underwater video using deep neural networks. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–6. [Google Scholar] [CrossRef]
  47. Gundam, M.; Charalampidis, D.; Ioup, G.; Ioup, J.; Thompson, C. Automatic fish classification in underwater video. Proc. Gulf. Caribb. Fish. Inst. 2015, 66, 276–282. [Google Scholar]
  48. Sung, M.; Yu, S.C.; Girdhar, Y. Vision based real-time fish detection using convolutional neural network. In Proceedings of the OCEANS 2017-Aberdeen, Aberdeen, UK, 19–22 June 2017; pp. 1–6. [Google Scholar]
  49. Jäger, J.; Rodner, E.; Denzler, J.; Wolff, V.; Fricke-Neuderth, K. Seaclef 2016: Object proposal classification for fish detection in underwater videos. In Proceedings of the Conference and Labs of the Evaluation Forum (CLEF), Évora, Portugal, 5–8 September 2016; Volume 1609, pp. 481–489. [Google Scholar]
  50. Zhuang, P.; Xing, L.; Liu, Y.; Guo, S.; Qiao, Y. Marine Animal Detection and Recognition with Advanced Deep Learning Models. In Proceedings of the Conference and Labs of the Evaluation Forum (CLEF), Dublin, Ireland, 11–14 September 2017. [Google Scholar]
  51. Zhao, Z.; Liu, Y.; Sun, X.; Liu, J.; Yang, X.; Zhou, C. Composited FishNet: Fish Detection and Species Recognition From Low-Quality Underwater Videos. IEEE Trans. Image Process. 2021, 30, 4719–4734. [Google Scholar] [CrossRef]
  52. Labao, A.B.; Naval, P.C., Jr. Cascaded deep network systems with linked ensemble components for underwater fish detection in the wild. Ecol. Inform. 2019, 52, 103–112. [Google Scholar] [CrossRef]
  53. Salman, A.; Siddiqui, S.A.; Shafait, F.; Mian, A.; Shortis, M.R.; Khurshid, K.; Ulges, A.; Schwanecke, U. Automatic fish detection in underwater videos by a deep neural network-based hybrid motion learning system. ICES J. Mar. Sci. 2019, 77, 1295–1307. [Google Scholar] [CrossRef]
  54. Dieleman, S. Classifying Planktons with Deep Neural Networks. Available online: http://benanne.github.io/.2015/03/17/plankton.html (accessed on 22 March 2022).
  55. Lee, H.; Park, M.; Kim, J. Plankton classification on imbalanced large scale database via convolutional neural networks with transfer learning. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3713–3717. [Google Scholar]
  56. Shiela, M.M.A.; Soriano, M.; Saloma, C. Classification of coral reef images from underwater video using neural networks. Opt. Express 2005, 13, 8766–8771. [Google Scholar]
  57. Elawady, M. Sparsem: Coral Classification Using Deep Convolutional Neural Networks. Master’s Thesis, Hariot-Watt University, Edinburgh, UK, 2014. [Google Scholar]
  58. CSE Group. Visual Object Tagging Tool (VOTT). Available online: https://github.com/microsoft/VoTT/ (accessed on 1 January 2022).
  59. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  60. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. Available online: https://arxiv.org/abs/1603.04467 (accessed on 20 April 2022).
Figure 1. Some individuals of Nephrops norvegicus.
Figure 2. Nephrops burrow system.
Figure 3. Ground truth (blue bounding boxes) and detections from the Inception model (red bounding boxes). Due to camera angle variation and changes in burrow appearance, the detector misses detections in consecutive frames.
Figure 4. Detection refinement framework based on spatial–temporal filtering.
Figure 5. Nephrops burrows detection framework.
Figure 6. Sledge and equipment used in the 2018 UWTV survey in the Gulf of Cadiz.
Figure 7. Detection refinement algorithm.
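The exact refinement procedure is the one given in Figure 7 and is not reproduced here. As a reading aid, the following is a minimal sketch of a temporal-consistency filter of the kind the framework describes, assuming detections are axis-aligned boxes matched across frames by IoU and that a detection is kept only when it is supported in at least λ·W of the W frames around it (false-positive suppression); recovery of missed detections would be the complementary step. All names (iou, refine_detections) and the IoU-matching rule are our assumptions, not the authors' code.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0


def refine_detections(frames, W=8, lambda_ratio=0.3, iou_thr=0.3):
    """frames: list of per-frame box lists. Returns per-frame refined boxes.

    A box survives only if overlapping boxes are found in at least
    lambda_ratio * W of the W neighbouring frames.
    """
    half = W // 2
    refined = []
    for t, boxes in enumerate(frames):
        lo, hi = max(0, t - half), min(len(frames), t + half + 1)
        kept = []
        for box in boxes:
            # Count neighbouring frames that contain a spatially consistent box.
            support = sum(
                1
                for u in range(lo, hi)
                if u != t and any(iou(box, other) >= iou_thr for other in frames[u])
            )
            if support >= lambda_ratio * W:
                kept.append(box)
        refined.append(kept)
    return refined
```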
Figure 8. Experiments performed with image set 1, showing the mean average precision (mAP) of detection refinement: (a) detections with the Inception model and refinements; (b) detections with the ResNet50 model and refinements; (c) detections with the ResNet101 model and refinements.
Figure 9. Experiments performed with image set 2, showing the mean average precision (mAP) of detection refinement: (a) detections with the Inception model and refinements; (b) detections with the ResNet50 model and refinements; (c) detections with the ResNet101 model and refinements.
Figure 10. False positive suppression using the detection refinement algorithm. Panels (a–c) show the ground truth (blue bounding boxes) and the original detections from the Inception model (red bounding boxes); panels (d–f) show the refined detections.
Figure 11. Identification of true positive missed detections. Panels (a–f) show the original detections from the Inception model, and panels (g–l) show the identification of missed detections in consecutive frames.
Table 1. Equipment details used in data collection.

Image System
- Life camera: Full HD (1920 × 1080) @ 30 fps; mounting angle 45°
- Recording camera: SONY FDRAX33; 4K Ultra HD (3840 × 2160) and Full HD (1920 × 1080) @ 50 fps; mounting angle 45°
- Photo camera: SONY ILCE QX1; 20.1 MPixel; variable mounting angle

Lighting System
- TST-OFL 7000 (Thalassatech, Oil Filled LED): 28,640 lumens, distributed in 4 spotlights with individual intensity system

Photogrammetry System
- 3 point lasers (5 mW, λ = 670 nm) forming a triangle of side 70 mm
- 2 line lasers (200 mW, λ = 670 nm) separated by 75 cm (field of view)

Auxiliary System
- Battery: Li-ion, size 18650, 3.7 V, 2400 mAh (capacity 480 Wh)

Sensors
- Altimeter: Tritech PA500
- CTD (conductivity, temperature, and depth): AML Oceanographic MINOS X
Table 2. Dataset distribution.

Functional Unit       | Training Images | Testing Images | Total
Gulf of Cadiz Dataset | 200 (80%)       | 48 (20%)       | 248
Table 3. Detections of all temporal segments with refinements. Detections are refined using W = 8, 12, and 16 with λ = 0.3 and 0.4. For each setting, the table reports the total numbers of TP, FP, and missed detections, together with the recall, precision, and F1-score (%) before and after refinement. GT = 2359.

Model     | W  | λ   | TP   | FP   | Miss | Recall before | Recall after | Precision before | Precision after | F1 before | F1 after
Inception | 8  | 0.3 | 1380 | 115  | 256  | 58.5 | 69.4 | 92.3 | 93.4 | 71.6 | 79.6
Inception | 8  | 0.4 | 1150 | 345  | 204  | 48.7 | 57.4 | 76.9 | 79.7 | 59.7 | 66.7
Inception | 12 | 0.3 | 1316 | 179  | 277  | 55.8 | 67.5 | 88.0 | 89.9 | 68.3 | 77.1
Inception | 12 | 0.4 | 899  | 596  | 170  | 38.1 | 45.3 | 60.1 | 64.2 | 46.7 | 53.1
Inception | 16 | 0.3 | 1308 | 187  | 374  | 55.4 | 71.3 | 87.5 | 90.0 | 67.9 | 79.6
Inception | 16 | 0.4 | 804  | 691  | 209  | 34.1 | 42.9 | 53.8 | 59.4 | 41.7 | 49.9
ResNet50  | 8  | 0.3 | 1619 | 163  | 356  | 68.6 | 90.6 | 90.9 | 92.9 | 78.2 | 91.8
ResNet50  | 8  | 0.4 | 1389 | 393  | 274  | 58.9 | 87.2 | 77.9 | 84.0 | 67.1 | 85.5
ResNet50  | 12 | 0.3 | 1557 | 225  | 400  | 66.0 | 92.5 | 87.4 | 90.7 | 75.2 | 91.6
ResNet50  | 12 | 0.4 | 1069 | 713  | 239  | 45.3 | 85.7 | 60.0 | 73.9 | 51.6 | 79.4
ResNet50  | 16 | 0.3 | 1495 | 287  | 506  | 63.4 | 97.0 | 83.9 | 88.9 | 72.2 | 92.7
ResNet50  | 16 | 0.4 | 962  | 820  | 260  | 40.8 | 86.6 | 54.0 | 71.3 | 46.5 | 78.2
ResNet101 | 8  | 0.3 | 1894 | 180  | 336  | 80.3 | 94.5 | 91.3 | 92.5 | 85.5 | 93.5
ResNet101 | 8  | 0.4 | 1720 | 454  | 262  | 72.9 | 84.0 | 79.1 | 81.4 | 75.9 | 82.7
ResNet101 | 12 | 0.3 | 1874 | 265  | 340  | 79.4 | 93.9 | 87.6 | 89.3 | 83.3 | 91.5
ResNet101 | 12 | 0.4 | 1267 | 907  | 209  | 53.7 | 62.6 | 58.3 | 61.9 | 55.9 | 62.3
ResNet101 | 16 | 0.3 | 1754 | 296  | 421  | 74.4 | 92.2 | 85.6 | 88.0 | 79.6 | 90.1
ResNet101 | 16 | 0.4 | 1154 | 1020 | 228  | 48.9 | 58.6 | 53.1 | 57.5 | 50.9 | 58.1
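The "before" columns of Table 3 are consistent with the standard definitions recall = TP/GT, precision = TP/(TP + FP), and F1 as their harmonic mean. The short check below, with variable names of our choosing, reproduces the first Inception row (W = 8, λ = 0.3) from the listed counts.

```python
# Standard precision/recall/F1 from the counts reported in Table 3.
# Checked against the first Inception row: GT = 2359, TP = 1380, FP = 115.

def prf(tp, fp, gt):
    recall = tp / gt
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return 100 * recall, 100 * precision, 100 * f1

r, p, f1 = prf(tp=1380, fp=115, gt=2359)
print(f"recall={r:.1f}%  precision={p:.1f}%  F1={f1:.1f}%")
# -> recall=58.5%  precision=92.3%  F1=71.6%, matching the table's "before" values.
```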
Table 4. Experiment definitions for detection refinement.

Experiment   | Model     | Testing Set
Experiment 1 | Inception | Image set 1
Experiment 2 | ResNet50  | Image set 1
Experiment 3 | ResNet101 | Image set 1
Experiment 4 | Inception | Image set 2
Experiment 5 | ResNet50  | Image set 2
Experiment 6 | ResNet101 | Image set 2
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.