A Deep-Learning Based Pipeline for Estimating the Abundance and Size of Aquatic Organisms in an Unconstrained Underwater Environment from Continuously Captured Stereo Video
Abstract
1. Introduction
- The implementation of a processing pipeline for the automatic detection of marine animals and the estimation of their abundance and size,
- A novel dataset of 73,144 images with 92,899 bounding boxes and 1198 track annotations for 10 different categories of marine species, recorded in an unconstrained underwater scenario.
2. Related Work
3. Materials and Methods
- The recording setup, including information about the stereo camera and the recording conditions,
- The dataset used in this study, including the number of samples, the different categories of marine organisms included as well as the distribution of the samples among them,
- An overview of the complete processing pipeline,
- The object detection algorithm used to identify marine organisms in the images,
- The stereo camera system and its calibration process,
- The stereo-matching technique applied to match detected organisms between the left and right images.
3.1. Recording Setup
3.2. Dataset
- Bounding boxes: Each perceivable object of interest (OOI) is marked by a bounding box, as is common for most object detection tasks. The bounding box is defined by the (x, y) coordinates of its upper-left and bottom-right corners, which span the rectangle.
- Classes: Each bounding box has a class label assigned to it, i.e., the animal’s taxon.
- Tracks: The bounding box annotations belonging to the same animal (object instance) are grouped into tracks and thus annotate the object’s movement over the course of a video sequence.
- Metadata: The metadata for each image includes the geolocation, date and a timestamp for each recorded video frame, with millisecond resolution (a data-structure sketch of these annotation fields follows this list).
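The following is a minimal sketch of how the four annotation types above could be represented in code; the field names are illustrative assumptions, not the dataset’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class BoundingBox:
    # (x, y) coordinates of the upper-left and bottom-right corners.
    x1: float
    y1: float
    x2: float
    y2: float
    label: str  # class label, i.e., the animal's taxon

@dataclass
class FrameAnnotation:
    boxes: List[BoundingBox]
    latitude: float      # geolocation of the recording
    longitude: float
    timestamp: datetime  # date and time, millisecond resolution

@dataclass
class Track:
    # Bounding boxes of the same individual (object instance),
    # ordered over the course of a video sequence.
    track_id: int
    boxes: List[BoundingBox]
```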
3.3. Processing Pipeline Overview
3.4. Activity Detection
3.5. Detection of Marine Species
3.6. Stereo Processing
- For a given set of temporally synchronized stereo image pairs, detect the inner corners of the checkerboard in each image.
- For each detected corner, triangulate the 3D position using the previously calibrated camera parameters and the pixel positions of the corner in both images.
- For each corner, calculate the 3D distance to all corners that lie on the same horizontal or vertical line of the checkerboard.
- Given the known checkerboard dimensions, calculate the error between each measured distance and the expected distance, i.e., the number of checkerboard squares spanned multiplied by the real-world size of a single square (a code sketch of this verification follows).
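A minimal sketch of this verification procedure using OpenCV is given below; the board geometry, square size and projection matrices are illustrative assumptions, not the values used in the paper.

```python
import cv2
import numpy as np

BOARD = (9, 6)   # inner corners per row and column (assumed, not from the paper)
SQUARE = 0.04    # real-world square edge length in metres (assumed)

def checkerboard_scale_error(img_l, img_r, P_l, P_r):
    """Triangulate checkerboard corners from a synchronized stereo pair and
    compare measured 3D distances along rows/columns with the expected
    multiples of the known square size. P_l, P_r are the 3x4 projection
    matrices from a previous stereo calibration."""
    found_l, cor_l = cv2.findChessboardCorners(img_l, BOARD)
    found_r, cor_r = cv2.findChessboardCorners(img_r, BOARD)
    if not (found_l and found_r):
        return None
    # cv2.triangulatePoints expects 2xN pixel arrays and returns
    # 4xN homogeneous 3D points.
    pts_h = cv2.triangulatePoints(P_l, P_r,
                                  cor_l.reshape(-1, 2).T,
                                  cor_r.reshape(-1, 2).T)
    pts = (pts_h[:3] / pts_h[3]).T.reshape(BOARD[1], BOARD[0], 3)
    errors = []
    rows, cols = BOARD[1], BOARD[0]
    for r in range(rows):
        for c in range(cols):
            # Distances to every other corner on the same row ...
            for c2 in range(c + 1, cols):
                measured = np.linalg.norm(pts[r, c] - pts[r, c2])
                errors.append(abs(measured - (c2 - c) * SQUARE))
            # ... and on the same column.
            for r2 in range(r + 1, rows):
                measured = np.linalg.norm(pts[r, c] - pts[r2, c])
                errors.append(abs(measured - (r2 - r) * SQUARE))
    return float(np.mean(errors))
```

Averaging this error over many board poses and distances yields a single scale-accuracy figure for the calibrated stereo rig.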
3.7. Abundance Estimation
4. Experimental Results
5. Discussion
6. Conclusions and Outlook
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
ADCP | Acoustic Doppler current profiler |
AUV | Autonomous Underwater Vehicle |
CNN | Convolutional neural network |
COSYNA | Coastal Observing System for Northern and Arctic Seas |
DOM | Dissolved organic matter |
EMSO | European Multidisciplinary Seafloor and Water-Column Observatory |
FOV | Field of view |
GHT | Generalized Hough transform |
GMM | Gaussian mixture model |
IoU | Intersection over union |
mAP | Mean average precision |
MARS | Monterey Accelerated Research System |
MSFD | Marine Strategy Framework Directive |
NEPTUNE | North-East Pacific Time-series Undersea Networked Experiments |
NMS | Non-maximum suppression |
OOI | Object of interest |
R-CNN | Region-Based Convolutional Neural Networks |
ROV | Remotely Operated Vehicle |
SVM | Support vector machine |
UFO | Underwater fish observatory |
WFD | Water Framework Directive |
YOLO | You only look once |
Appendix A
Hyperparameter | Setting |
---|---|
anchor_t | 4 |
box | 0.05 |
cls | 0.5 |
cls_pw | 1 |
copy_paste | 0 |
degrees | 0 |
fl_gamma | 0 |
fliplr | 0.5 |
flipud | 0.5 |
hsv_h | 0.015 |
hsv_s | 0.7 |
hsv_v | 0.4 |
iou_t | 0.2 |
lr0 | 0.01 |
lrf | 0.01 |
mixup | 0 |
momentum | 0.937 |
mosaic | 0 |
obj | 1 |
obj_pw | 1 |
perspective | 0 |
scale | 0.5 |
shear | 0 |
translate | 0.1 |
warmup_bias_lr | 0.1 |
warmup_epochs | 3 |
warmup_momentum | 0.8 |
weight_decay | 0.0005 |
image_weights | false |
imgsz | 1280 |
epochs | 200 |
optimizer | SGD |
patience | 25 |
weights | yolov5l6.pt |
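The table above corresponds to a YOLOv5 hyperparameter file plus a handful of training flags. The following Python sketch is our illustration, not the authors’ published script: file names such as hyp.ufo.yaml and ufo.yaml are assumptions, and the command-line flags are those accepted by recent versions of the ultralytics/yolov5 train.py.

```python
import subprocess
import yaml

# Augmentation and loss hyperparameters from the Appendix A table,
# written in YOLOv5's hyp.yaml format.
hyp = {
    "lr0": 0.01, "lrf": 0.01, "momentum": 0.937, "weight_decay": 0.0005,
    "warmup_epochs": 3, "warmup_momentum": 0.8, "warmup_bias_lr": 0.1,
    "box": 0.05, "cls": 0.5, "cls_pw": 1.0, "obj": 1.0, "obj_pw": 1.0,
    "iou_t": 0.2, "anchor_t": 4.0, "fl_gamma": 0.0,
    "hsv_h": 0.015, "hsv_s": 0.7, "hsv_v": 0.4,
    "degrees": 0.0, "translate": 0.1, "scale": 0.5, "shear": 0.0,
    "perspective": 0.0, "flipud": 0.5, "fliplr": 0.5,
    "mosaic": 0.0, "mixup": 0.0, "copy_paste": 0.0,
}
with open("hyp.ufo.yaml", "w") as f:
    yaml.safe_dump(hyp, f)

# Remaining settings are passed as flags; image_weights is false by
# default, so no flag is needed for it.
subprocess.run([
    "python", "train.py", "--weights", "yolov5l6.pt", "--imgsz", "1280",
    "--epochs", "200", "--patience", "25", "--optimizer", "SGD",
    "--hyp", "hyp.ufo.yaml", "--data", "ufo.yaml",
])
```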
Species | Images | Bounding Boxes | Tracks |
---|---|---|---|
Ctenophora | 20,695 | 28,800 | 203 |
Aurelia aurita | 26,542 | 27,330 | 109 |
Cyanea capillata | 11,838 | 11,838 | 26 |
Gadus morhua | 7725 | 8141 | 210 |
Fish unspecified | 3310 | 6163 | 215 |
Clupeidae | 1846 | 5001 | 335 |
Jellyfish unspecified | 2160 | 2262 | 12 |
Salmonidae | 599 | 1479 | 23 |
Scomber scombrus | 133 | 964 | 53 |
Pleuronectoidei | 921 | 921 | 12 |
Total | 73,144 | 92,899 | 1198 |
Species | Images (Train) | Images (Val) | Bounding Boxes (Train) | Bounding Boxes (Val) |
---|---|---|---|---|
Ctenophora | 17,412 | 3283 | 24,297 | 4503 |
Aurelia aurita | 22,486 | 4056 | 23,131 | 4199 |
Cyanea capillata | 9414 | 2424 | 9414 | 2424 |
Gadus morhua | 6255 | 1470 | 6617 | 1524 |
Fish unspecified | 2564 | 746 | 4864 | 1299 |
Clupeidae | 1001 | 845 | 3588 | 1413 |
Jellyfish unspecified | 1836 | 324 | 1918 | 344 |
Salmonidae | 452 | 147 | 1192 | 287 |
Scomber scombrus | 116 | 17 | 863 | 101 |
Pleuronectoidei | 774 | 147 | 774 | 147 |
Total | 60,144 | 13,000 | 76,658 | 16,241 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).