YOLOv8-TF: Transformer-Enhanced YOLOv8 for Underwater Fish Species Recognition with Class Imbalance Handling
Abstract
1. Introduction
- To enhance fish species recognition, we modified the depth scale of several layers within the backbone of the YOLOv8 model, improving recognition accuracy (a depth-scaling sketch follows this list).
- We incorporated a transformer block into both the backbone and neck networks of the YOLOv8-based approach, enabling the model to capture more contextual and global information and thereby improving performance (a minimal block is sketched after this list).
- To tackle the challenges posed by a highly imbalanced dataset, we introduced a class-aware loss function along with Wise-IoUv3 [48], improving both classification and localization accuracy (both losses are sketched after this list).
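The sketch below illustrates the depth-scaling mechanism behind the first contribution. It is a minimal, hypothetical example: in Ultralytics-style model configs, a depth multiplier rescales how many times each repeated block (e.g., a C2f bottleneck) is stacked, and the `base_repeats` and `depth_multiple` values here are illustrative assumptions, not the exact settings of YOLOv8-TF.

```python
# Minimal sketch of YOLO-style depth scaling (illustrative values only).
def scale_depth(num_repeats: int, depth_multiple: float) -> int:
    """Scale a layer's repeat count, keeping at least one block."""
    return max(round(num_repeats * depth_multiple), 1)

# Hypothetical backbone stage repeats before and after scaling.
base_repeats = [3, 6, 6, 3]   # default-style C2f repeat counts (assumed)
depth_multiple = 1.33         # assumed multiplier for illustration
print([scale_depth(n, depth_multiple) for n in base_repeats])  # -> [4, 8, 8, 4]
```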
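The transformer block of the second contribution can be pictured as a standard encoder layer applied to a CNN feature map. The PyTorch sketch below is a minimal illustration, not the authors' exact module: the channel count, head count, and MLP ratio are assumptions, and the flatten-to-sequence scheme (one token per spatial position) is one common way such blocks are grafted into YOLO backbones and necks.

```python
# Minimal sketch of a transformer encoder block over a CNN feature map.
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4, mlp_ratio: int = 2):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(channels)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels * mlp_ratio),
            nn.GELU(),
            nn.Linear(channels * mlp_ratio, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)      # (B, H*W, C): one token per pixel
        y = self.norm1(seq)
        attn_out, _ = self.attn(y, y, y)        # global self-attention over all positions
        seq = seq + attn_out                    # residual connection
        seq = seq + self.mlp(self.norm2(seq))   # position-wise MLP with residual
        return seq.transpose(1, 2).reshape(b, c, h, w)

feat = torch.randn(1, 256, 20, 20)              # e.g., a P4-level feature map
print(TransformerBlock(256)(feat).shape)        # torch.Size([1, 256, 20, 20])
```

Because every token attends to every other token, the block mixes information across the whole feature map, which is the global-context effect the second bullet refers to.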
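For the third contribution, the sketch below pairs one common form of class-aware weighting (inverse class frequency, normalized to mean 1; the paper's exact formulation may differ) with a Wise-IoU v3 bounding-box loss following Tong et al. [48]. The `iou_mean` argument stands in for the running mean of the IoU loss that WIoU v3 maintains during training, and alpha = 1.9, delta = 3 are the values recommended in [48].

```python
# Hedged sketches of the two loss ingredients; values and helper names are
# illustrative, not the authors' exact implementation.
import torch

def class_weights(counts: torch.Tensor) -> torch.Tensor:
    """Weight each class inversely to its instance count, normalized to mean 1."""
    w = counts.sum() / (len(counts) * counts.clamp(min=1))
    return w / w.mean()

def wiou_v3_loss(pred, target, iou_mean, alpha=1.9, delta=3.0):
    """Wise-IoU v3 for (N, 4) boxes in (x1, y1, x2, y2) format, after [48]."""
    eps = 1e-7
    # Plain IoU loss.
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    inter = (rb - lt).clamp(min=0).prod(dim=1)
    area_p = (pred[:, 2:] - pred[:, :2]).prod(dim=1)
    area_t = (target[:, 2:] - target[:, :2]).prod(dim=1)
    l_iou = 1 - inter / (area_p + area_t - inter + eps)
    # WIoU v1 factor: penalize center distance, normalized by the enclosing
    # box diagonal (detached so it only rescales the gradient, per [48]).
    enc_wh = torch.max(pred[:, 2:], target[:, 2:]) - torch.min(pred[:, :2], target[:, :2])
    ctr_p = (pred[:, :2] + pred[:, 2:]) / 2
    ctr_t = (target[:, :2] + target[:, 2:]) / 2
    dist2 = ((ctr_p - ctr_t) ** 2).sum(dim=1)
    r_wiou = torch.exp(dist2 / (enc_wh ** 2).sum(dim=1).detach())
    # v3 dynamic focusing: down-weight very hard (outlier) boxes.
    beta = l_iou.detach() / (iou_mean + eps)
    r = beta / (delta * alpha ** (beta - delta))
    return r * r_wiou * l_iou

counts = torch.tensor([12000., 450., 35.])       # hypothetical per-species counts
print(class_weights(counts))                     # rare species get larger weights
p = torch.tensor([[0., 0., 10., 10.]])
t = torch.tensor([[1., 1., 11., 11.]])
print(wiou_v3_loss(p, t, iou_mean=0.5))          # scalar box loss per pair
```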
2. Related Works
3. Proposed Approach
3.1. A YOLOv8 Technique for the Recognition of Fish Species in Underwater Environments
3.2. YOLOv8-TF: Transformer-Enhanced YOLOv8 for Underwater Fish Species Recognition with Class Imbalance Handling
3.2.1. Transformer Block
3.2.2. Wise-IoU Loss Function
3.2.3. Class-Aware Loss Function
3.3. Dataset Description
4. Experimental Results
4.1. Implementation Details
4.2. Performance
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
YOLOv8 | You Only Look Once, version 8 |
YOLOv8enh | enhanced form of You Only Look Once, version 8 |
SEAMAPD21 | The Southeast Area Monitoring and Assessment Program Dataset 2021 |
trans | Transformer block |
Appendix A
Per-species average precision (%, higher is better); a dash indicates no reported result for that model.
Species | MobileNetv3 | VGG300 | VGG512 | YOLOv5m | YOLOv5l | YOLOv5enh | YOLOv8s | YOLOv8m | YOLOv8l | YOLOv10l | YOLOv8-TF |
---|---|---|---|---|---|---|---|---|---|---|---|
ACANTHURUSCOERULEUS | - | - | - | 0.96 | 0.54 | 0.18 | 8.04 | 0 | 20.0 | 14.3 | 43.9 |
ACANTHURUS | 20.5 | 54.55 | 59.32 | 48.8 | 49.8 | 51.4 | 47.5 | 48.3 | 48.1 | 56.3 | 65.3 |
ALECTISCILIARIS | 72.73 | 36.36 | 29.9 | 64.4 | 64.7 | 62.7 | 62.2 | 70.4 | 65.4 | 68.2 | 65.9 |
ANISOTREMUSVIRGINICUS | - | - | - | 15.9 | 15.7 | 33.8 | 6.7 | 23.7 | 39.7 | 49.6 | 66.3 |
ANOMURA | 0 | 61.36 | 72.73 | 74.1 | 72.4 | 81.2 | 78.1 | 85.2 | 85.7 | 82.2 | 75.9 |
ANTHIINAE | - | - | - | 38.2 | 49.0 | 44.9 | 38.2 | 43.6 | 49.7 | 55.5 | 53.0 |
ARCHOSARGUSPROBATOCEPHALUS | 15.02 | 45.45 | 54.13 | 38.3 | 44.8 | 47.5 | 47.1 | 50.5 | 47.1 | 50.5 | 61.2 |
BALISTESCAPRISCUS | 67.53 | 73.42 | 74.31 | 70.0 | 70.8 | 72.6 | 73.1 | 75.1 | 75.5 | 76.7 | 77.3 |
BALISTESVETULA | 20.7 | 37.26 | 42.92 | 54.7 | 63.5 | 74.2 | 81.6 | 53.0 | 58.2 | 83.2 | 67.0 |
BODIANUSPULCHELLUS | 38.07 | 55.63 | 59.21 | 61.4 | 61.4 | 62.0 | 61.4 | 65.4 | 66.4 | 66.4 | 71.9 |
BODIANUSRUFUS | 0 | 20.41 | 41.53 | 37.7 | 35.3 | 39.4 | 43.3 | 36.7 | 37.0 | 47.8 | 57.4 |
CALAMUSBAJONADO | 0 | 78.6 | 89.6 | 89.5 | 79.6 | 79.6 | 89.5 | 79.6 | 79.6 | 79.6 | 63.3 |
CALAMUSLEUCOSTEUS | 54.86 | 69.24 | 73.16 | 71.8 | 72.2 | 72.7 | 75.7 | 77.1 | 77.5 | 77.2 | 80.1 |
CALAMUSNODOSUS | 7.57 | 6.89 | 10.3 | 74.7 | 75.9 | 77.3 | 77.2 | 78.1 | 78.7 | 81.7 | 80.0 |
CALAMUSPRORIDENS | 15.34 | 18.64 | 12.75 | 64.5 | 66.4 | 67.2 | 68.5 | 70.9 | 70.9 | 73.5 | 74.8 |
CALAMUS | 26.73 | 48.4 | 58.22 | 58.6 | 64.1 | 62.8 | 59.3 | 63.8 | 64.8 | 67.6 | 62.1 |
CANTHIDERMISSUFFLAMEN | 2.6 | 47.73 | 48.42 | 45.8 | 49.1 | 52.8 | 45.7 | 51.5 | 52.8 | 53.6 | 49.6 |
CANTHIGASTERROSTRATUS | - | - | - | 45.0 | 53.7 | 65.0 | 32.5 | 51.2 | 47.4 | 66.9 | 57.7 |
CARANXBARTHOLOMAEI | 46.62 | 50.64 | 54.35 | 58.1 | 60.5 | 62.6 | 59.7 | 58.6 | 60.7 | 64.6 | 76.1 |
CARANXCRYSOS | 19.13 | 48.94 | 45.56 | 53.7 | 54.8 | 58.1 | 57.0 | 57.5 | 56.6 | 60.6 | 59.0 |
CARANXRUBER | 0 | 84.85 | 54.55 | 68.5 | 75.0 | 73.9 | 74.9 | 77.0 | 71.5 | 76.1 | 74.0 |
CARCHARHINUSFALCIFORMIS | 100 | 54.55 | 54.55 | 81.6 | 84.3 | 83.5 | 78.3 | 83.2 | 83.2 | 83.0 | 78.5 |
CARCHARHINUSPEREZI | 100 | 100 | 100 | - | - | - | - | - | - | - | - |
CARCHARHINUSPLUMBEUS | 0 | 6.06 | 36.36 | 57.3 | 59.8 | 67.1 | 60.5 | 66.4 | 62.8 | 65.3 | 50.0 |
CAULOLATILUSCHRYSOPS | 41.85 | 42.43 | 39.22 | 73.0 | 72.8 | 75.2 | 74.2 | 79.9 | 77.8 | 80.1 | 79.1 |
CAULOLATILUSCYANOPS | 10.88 | 9.32 | 10.86 | 68.1 | 69.7 | 70.0 | 70.5 | 71.3 | 72.4 | 73.3 | 73.0 |
CENTROPRISTISOCYURA | - | - | - | 63.9 | 65.5 | 67.5 | 68.8 | 75.4 | 74.3 | 74.1 | 74.0 |
CEPHALOPHOLISCRUENTATA | 31.1 | 70.63 | 65.14 | 54.6 | 55.3 | 54.7 | 53.8 | 56.6 | 60.4 | 56.1 | 51.5 |
CHAETODONOCELLATUS | - | - | - | 29.8 | 31.7 | 35.4 | 25.2 | 31.2 | 31.3 | 48.8 | 50.1 |
CHAETODONSEDENTARIUS | 9.09 | 9.4 | 16.52 | 48.4 | 50.0 | 52.6 | 47.5 | 51.4 | 50.7 | 59.3 | 53.7 |
CHAETODON | - | - | - | 13.4 | 19.6 | 32.3 | 0.57 | 19.0 | 30.6 | 43.2 | 40.7 |
CHROMISENCHRYSURUS | 1.23 | 0 | 42.08 | 19.8 | 16.8 | 12.8 | 24.1 | 9.95 | 17.0 | 27.6 | 51.3 |
CHROMISINSOLATUS | 100 | 100 | 11.76 | 48.3 | 65.0 | 44.4 | 46.0 | 40.0 | 57.3 | 35.5 | 58.5 |
CHROMIS | - | - | - | 2.49 | 0 | 0 | 1.81 | 1.53 | 7.46 | 0.34 | 32.3 |
DERMATOLEPISINERMIS | 17.44 | 6.43 | 13.01 | 78.7 | 80.6 | 81.0 | 83.6 | 85.1 | 8.44 | 87.2 | 87.0 |
DIODONTIDAE | 100 | 100 | 100 | 3.73 | 19.9 | 9.95 | 14.9 | 9.95 | 9.95 | 59.7 | 75.8 |
DIPLECTRUMFORMOSUM | 27.72 | 44.46 | 60.04 | 44.6 | 47.6 | 49.2 | 45.1 | 47.6 | 50.2 | 56.4 | 58.6 |
EPINEPHELUSADSCENSIONIS | 21.44 | 63.68 | 75.8 | 41.1 | 45.8 | 46.6 | 42.1 | 45.5 | 50.5 | 58.5 | 64.5 |
EPINEPHELUSFLAVOLIMBATUS | 90.85 | 90.76 | 90.41 | 82.4 | 82.1 | 85.3 | 87.1 | 89.1 | 88.6 | 86.5 | 89.1 |
EPINEPHELUSMORIO | 33.69 | 37.5 | 35.33 | 73.5 | 74.5 | 77.1 | 77.1 | 78.3 | 79.3 | 81.6 | 80.2 |
EPINEPHELUSNIGRITUS | 100 | 100 | 100 | 64.0 | 61.8 | 56.8 | 67.1 | 62.0 | 65.5 | 64.5 | 69.8 |
EPINEPHELUS | 36.36 | 63.64 | 63.64 | 46.7 | 48.0 | 55.4 | 62.0 | 57.9 | 63.0 | 66.5 | 69.2 |
EQUETUSLANCEOLATUS | 100 | 100 | 100 | - | - | - | - | - | 69.2 | - | - |
EQUETUSUMBROSUS | 39.5 | 73.26 | 73.64 | 66.3 | - | 71.1 | 71.5 | 71.1 | 73.3 | 74.9 | 70.8 |
GONIOPLECTRUSHISPANUS | 81.82 | 85.71 | 84.03 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 80.6 |
GYMNOTHORAXMORINGA | - | - | - | 48.3 | 55.2 | 56.0 | 52.4 | 57.0 | 54.4 | 60.9 | 66.3 |
GYMNOTHORAXSAXICOLA | - | - | - | 62.2 | 61.1 | 63.7 | 59.6 | 66.1 | 66.2 | 68.4 | 70.2 |
HAEMULONAUROLINEATUM | 55.57 | 68.3 | 75.68 | 62.6 | 62.9 | 67.0 | 63.9 | 68.4 | 69.2 | 68.8 | 67.9 |
HAEMULONFLAVOLINEATUM | 0 | 24.11 | 67.85 | 34.1 | 35.4 | 39.6 | 44.1 | 37.2 | 49.4 | 57.6 | 51.6 |
HAEMULONMACROSTOMUM | 50 | 100 | 100 | 44.2 | 48.0 | 53.0 | 39.2 | 43.8 | 61.0 | 65.4 | 80.5 |
HAEMULONMELANURUM | 53.03 | 77.29 | 84.94 | 64.2 | 66.8 | 68.1 | 64.6 | 69.1 | 72.4 | 71.9 | 70.0 |
HAEMULONPLUMIERI | 39.48 | 47.68 | 64.31 | 42.3 | 41.9 | 50.7 | 44.0 | 54.0 | 50.1 | 57.7 | 65.8 |
HALICHOERESBATHYPHILUS | 9.27 | 26.55 | 52.36 | 39.7 | 48.1 | 47.0 | 51.4 | 52.2 | 51.9 | 60.7 | 37.9 |
HALICHOERESBIVITTATUS | - | - | - | 22.5 | 26.6 | 31.4 | 21.7 | 25.6 | 27.9 | 40.3 | 41.4 |
HALICHOERESGARNOTI | - | - | - | 23.2 | 21.3 | 29.9 | 28.7 | 35.0 | 39.6 | 34.2 | 47.9 |
HALICHOERES | - | - | - | 32.1 | - | 37.1 | 31.5 | 41.0 | 38.9 | 45.0 | 45.2 |
HOLACANTHUSBERMUDENSIS | 11.02 | 18.65 | 23.38 | 92.8 | - | 68.9 | 68.3 | 70.0 | 70.7 | 74.1 | 75.1 |
HOLACANTHUS | - | - | - | 31.5 | 33.9 | 39.2 | 34.0 | 44.6 | 44.6 | 36.8 | 70.9 |
HOLANTHIUSMARTINICENSIS | - | - | - | 78.4 | 83.5 | 47.5 | 32.7 | 41.8 | 45.3 | 58.6 | 54.9 |
HOLOCENTRUS | 0 | 84.22 | 93.78 | 48.3 | 62.5 | 50.9 | 61.9 | 60.0 | 64.1 | 54.0 | - |
HYPOPLECTRUSGEMMA | - | - | - | 13.5 | 19.9 | 33.4 | 13.1 | 20.9 | 27.3 | 43.0 | 20.0 |
HYPOPLECTRUS | - | - | - | 25.3 | 29.6 | 34.2 | 13.1 | 29.9 | 22.4 | 51.2 | 61.6 |
HYPOPLECTRUSUNICOLOR | 0 | 6.06 | 33.64 | 53.2 | 57.0 | 67.7 | 62.9 | 69.0 | 67.0 | 68.6 | 56.2 |
IOGLOSSUS | - | - | - | 37.7 | 43.3 | 47.2 | 29.0 | 39.3 | 41.2 | 56.8 | 55.8 |
KYPHOSUS | 24.76 | 18.58 | 27.91 | 47.5 | 45.7 | 63.3 | 59.5 | 58.6 | 53.3 | 62.2 | 27.6 |
LACHNOLAIMUSMAXIMUS | 9.8 | 12.73 | 9.32 | 59.1 | 60.3 | 60.1 | 63.2 | 60.0 | 63.6 | 57.0 | 62.4 |
LACTOPHRYSTRIGONUS | - | - | - | 30.7 | 35.1 | 34.8 | 26.3 | 32.8 | 40.5 | 51.5 | 72.3 |
LIOPROPOMAEUKRINES | 3.03 | 20.78 | 21.56 | 70.8 | 65.0 | 69.3 | 65.0 | 76.3 | 70.5 | 71.3 | 80.8 |
LUTJANUSANALIS | 41.12 | 28.02 | 21.21 | 57.3 | 63.5 | 58.1 | 57.6 | 61.7 | 59.2 | 59.9 | 69.3 |
LUTJANUSAPODUS | 0 | 44.44 | 54.55 | 48.9 | 40.3 | 41.9 | 30.4 | 29.4 | 47.2 | 54.1 | 30.2 |
LUTJANUSBUCCANELA | 62.02 | 77.09 | 81.72 | 70.9 | 71.7 | 73.7 | 74.8 | 76.9 | 76.5 | 79.0 | 77.1 |
LUTJANUSCAMPECHANUS | 37.58 | 49.26 | 50.18 | 68.9 | 69.7 | 72.3 | 71.5 | 74.2 | 74.3 | 75.5 | 75.5 |
LUTJANUSGRISEUS | 16.3 | 40.1 | 54.56 | 58.0 | 60.1 | 61.7 | 60.4 | 61.8 | 63.2 | 67.7 | 67.2 |
LUTJANUSSYNAGRIS | 11.81 | 13.16 | 9.49 | 62.3 | 63.0 | 65.4 | 62.6 | 64.0 | 65.2 | 71.2 | 72.4 |
LUTJANUS | - | - | - | 16.9 | 28.7 | 38.1 | 34.8 | 41.8 | 37.4 | 27.4 | 43.1 |
LUTJANUSVIVANUS | 70.66 | 36.56 | 45.45 | 87.5 | 86.6 | 87.5 | 85.9 | 90.1 | 92.9 | 91.1 | 79.3 |
MALACANTHUSPLUMIERI | - | - | - | 56.1 | 56.8 | 59.1 | 55.2 | 59.4 | 60.1 | 63.5 | 70.1 |
MULLOIDICHTHYSMARTINICUS | 0 | 72.73 | 72.73 | 54.3 | 85.2 | 85.2 | 73.1 | 84.2 | 75.3 | 70.0 | 49.7 |
MURAENARETIFERA | 70.94 | 89.05 | 71.73 | 63.7 | 56.6 | 69.5 | 63.1 | 66.0 | 71.7 | 76.1 | 69.0 |
MYCTEROPERCABONACI | 45.45 | 90.91 | 90.91 | 48.0 | 47.1 | 52.3 | 44.4 | 56.1 | 52.9 | 59.2 | 54.8 |
MYCTEROPERCAINTERSTIALIS | 0 | 70.91 | 39.09 | 75.6 | 78.3 | 77.7 | 76.7 | 73.2 | 74.3 | 80.1 | 82.3 |
MYCTEROPERCAINTERSTITIALIS | 68.82 | 63.42 | 69.87 | 67.0 | 70.8 | 69.3 | 68.5 | 73.2 | 72.6 | 71.9 | 76.1 |
MYCTEROPERCAMICROLEPIS | - | - | - | 59.7 | 49.7 | 59.7 | 69.7 | 69.7 | 69.7 | 69.7 | 81.0 |
MYCTEROPERCAPHENAX | 26.26 | 40.52 | 41.48 | 68.1 | 69.5 | 70.1 | 70.3 | 72.3 | 72.5 | 74.6 | 77.6 |
MYCTEROPERCA | 14.55 | 54.65 | 45.86 | 31.9 | 31.9 | 35.9 | 32.2 | 42.4 | 51.0 | 37.6 | 30.4 |
OCYURUSCHRYSURUS | 21.24 | 42.16 | 55.55 | 39.6 | 43.9 | 46.5 | 41.5 | 46.0 | 45.6 | 52.1 | 51.2 |
OPHICHTHUSPUNCTICEPS | - | - | - | 57.9 | 60.5 | 61.9 | 60.3 | 63.3 | 67.5 | 65.3 | 71.9 |
OPISTOGNATHUSAURIFRONS | - | - | - | 24.8 | 31.6 | 38.7 | 18.3 | 30.3 | 30.6 | 41.7 | 44.6 |
PAGRUSPAGRUS | 21.21 | 28.55 | 31.68 | 64.7 | 65.6 | 67.8 | 66.7 | 69.5 | 69.7 | 72.6 | 71.6 |
PARANTHIASFURCIFER | 54.55 | 56.21 | 15.36 | 34.1 | 44.7 | 37.2 | 33.6 | 40.3 | 25.1 | 41.5 | 43.1 |
POMACANTHUSARCUATUS | 18.92 | 17.43 | 18.81 | 64.0 | 64.4 | 66.9 | 67.4 | 69.2 | 67.7 | 73.4 | 68.2 |
POMACANTHUSPARU | 27.86 | 59.82 | 71.83 | 68.9 | 66.5 | 74.6 | 73.6 | 73.6 | 72.2 | 74.9 | 71.9 |
POMACANTHUS | 0 | 0 | 63.64 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 28.9 |
POMACENTRIDAE | 0 | 10.99 | 25.77 | 30.5 | 32.6 | 37.3 | 27.0 | 32.3 | 33.8 | 42.2 | 44.6 |
POMACENTRUSPARTITUS | - | - | - | 11.0 | 19.9 | 24.2 | 10.1 | 12.8 | 18.8 | 28.6 | 33.9 |
POMACENTRUS | - | - | - | 0 | 1.34 | 1.56 | 0 | 0 | 0 | 27.5 | 0 |
PRIACANTHUSARENATUS | - | - | - | 52.2 | 54.7 | 24.2 | 42.4 | 39.8 | 55.2 | 60.4 | 82.6 |
PRISTIGENYSALTA | 0 | 12.33 | 12.86 | 64.6 | 67.2 | 66.9 | 65.1 | 67.8 | 68.7 | 75.1 | 76.3 |
PRISTIPOMOIDESAQUILONARIS | 3.58 | 15.44 | 18.17 | 85.7 | 50.2 | 51.1 | 48.3 | 50.8 | 52.4 | 52.4 | 51.3 |
PSEUDUPENEUSMACULATUS | - | - | - | 64.6 | 69.6 | 74.6 | 50.0 | 70.5 | 40.6 | 67.5 | 40.5 |
PTEROIS | 32.83 | 23.57 | 22.9 | 75.9 | 78.2 | 78.7 | 81.6 | 83.3 | 85.1 | 83.4 | 82.9 |
RACHYCENTRONCANADUM | 45.45 | 0 | 27.72 | 89.5 | 83.0 | 73.1 | 79.8 | 82.3 | 80.2 | 88.4 | 74.9 |
RHOMBOPLITESAURORUBENS | 44.73 | 60.6 | 67.29 | 59.0 | 60.3 | 61.8 | 62.1 | 63.8 | 64.5 | 64.6 | 64.0 |
RYPTICUSMACULATUS | 50.79 | 70.94 | 73.96 | 61.1 | 63.2 | 65.1 | 63.8 | 66.6 | 69.1 | 67.5 | 69.3 |
SCARIDAE | - | - | - | 1.55 | 0 | 1.14 | 9.97 | 0.27 | 1.21 | 0.21 | 13.5 |
SCARUSVETULA | - | - | - | 40.0 | 34.8 | 34.9 | 40.6 | 38.3 | 31.0 | 56.0 | 52.0 |
SERIOLADUMERILI | 51.62 | 61.19 | 59.48 | 69.0 | 69.6 | 70.6 | 72.3 | 73.5 | 74.1 | 74.3 | 73.0 |
SERIOLAFASCIATA | 45.62 | 61.89 | 54.66 | 61.8 | 63.4 | 65.2 | 64.9 | 69.3 | 68.3 | 68.2 | 69.6 |
SERIOLARIVOLIANA | 51.55 | 60.93 | 66.46 | 69.0 | 71.8 | 72.7 | 72.1 | 74.6 | 74.5 | 76.0 | 74.1 |
SERIOLA | - | - | - | 23.2 | 22.6 | 39.8 | 32.2 | 38.3 | 17.6 | 43.6 | 43.3 |
SERIOLAZONATA | - | - | - | 69.7 | 89.5 | 69.7 | 69.7 | 79.6 | 69.7 | 79.6 | 57.8 |
SERRANUSANNULARIS | - | - | - | 64.1 | 65.1 | 67.9 | 65.0 | 70.2 | 70.1 | 77.8 | 79.0 |
SERRANUSPHOEBE | - | - | - | 40.6 | 40.7 | 44.6 | 41.5 | 40.7 | 45.0 | 47.5 | 54.9 |
SPARIDAE | - | - | - | 19.8 | 54.7 | 59.7 | 54.8 | 43.1 | 42.3 | 57.2 | 69.7 |
SPARISOMAAUROFRENATUM | - | - | - | 0.72 | 0.72 | 0 | 0.83 | 0 | 1.03 | 8.6 | 0.60 |
SPARISOMAVIRIDE | - | - | - | 30.1 | 38.0 | 38.7 | 38.6 | 46.5 | 39.4 | 47.6 | 0 |
SPHYRAENABARRACUDA | 0 | 45.45 | 45.45 | 71.1 | 73.4 | 78.4 | 72.1 | 71.8 | 83.7 | 84.1 | 74.4 |
STENOTOMUSCAPRINUS | - | - | - | 0.88 | 0 | 1.08 | 0.4 | 13.1 | 2.83 | 17.3 | 11.5 |
SYACIUM | - | - | - | 79.1 | 79.1 | 76.5 | 79.5 | 81.2 | 8.44 | 88.4 | 86.7 |
SYNODONTIDAE | - | - | - | 50.6 | 58.0 | 63.4 | 38.7 | 59.3 | 5.57 | 68.4 | 76.8 |
THALASSOMABIFASCIATUM | - | - | - | 3.36 | 6.13 | 11.4 | 2.21 | 9.9 | 6.58 | 34.2 | 38.2 |
UPENEUSPARVUS | 1.07 | 17.65 | 37.36 | 25.6 | 28.9 | 26.3 | 26.5 | 33.4 | 22.6 | 31.0 | 53.1 |
XANTHICHTHYSRINGENS | 23.21 | 100 | 100 | 78.3 | 81.0 | 83.0 | 77.7 | 84.7 | 8.73 | 87.5 | 74.7 |
References
- Chang, C.M.; Fang, W.R.; Jao, R.C.; Shyu, C.Z.; Liao, I.C. Development of an intelligent feeding controller for indoor intensive culturing of eel. Aquac. Eng. 2004, 32, 343–353. [Google Scholar] [CrossRef]
- Cabreira, A.G.; Tripode, M.; Madirolas, A. Artificial neural networks for fish-species identification. ICES J. Mar. Sci. 2009, 66, 1119–1129. [Google Scholar] [CrossRef]
- Alaba, S.; Shah, C.; Nabi, M.; Ball, J.; Moorhead, R.; Han, D.; Prior, J.; Campbell, M.; Wallace, F. Semi-supervised learning for fish species recognition. In Proceedings of the Ocean Sensing and Monitoring XV, Orlando, FL, USA, 3–4 May 2023; SPIE: Bellingham, WA, USA, 2023; Volume 12543, pp. 248–255. [Google Scholar] [CrossRef]
- Alaba, S.Y.; Nabi, M.; Shah, C.; Prior, J.; Campbell, M.D.; Wallace, F.; Ball, J.E.; Moorhead, R. Class-aware fish species recognition using deep learning for an imbalanced dataset. Sensors 2022, 22, 8268. [Google Scholar] [CrossRef]
- Shah, C.; Alaba, S.Y.; Nabi, M.M.; Prior, J.; Campbell, M.; Wallace, F.; Ball, J.E.; Moorhead, R. An enhanced YOLOv5 model for fish species recognition from underwater environments. In Proceedings of the Ocean Sensing and Monitoring XV, Orlando, FL, USA, 3–4 May 2023; Hou, W., Mullen, L.J., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2023; Volume 12543, p. 125430O. [Google Scholar] [CrossRef]
- Shah, C.; Alaba, S.Y.; Nabi, M.M.; Caillouet, R.; Prior, J.; Campbell, M.; Wallace, F.; Ball, J.E.; Moorhead, R. MI-AFR: Multiple instance active learning-based approach for fish species recognition in underwater environments. In Proceedings of the Ocean Sensing and Monitoring XV, Orlando, FL, USA, 3–4 May 2023; Hou, W., Mullen, L.J., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2023; Volume 12543, p. 125430N. [Google Scholar] [CrossRef]
- Prior, J.; Campbell, M.; Dawkins, M.; Mickle, P.; Moorhead, R.; Alaba, S.; Shah, C.; Salisbury, J.; Rademacher, K.; Felts, A.; et al. Estimating precision and accuracy of automated video post-processing: A step towards implementation of AI/ML for optics-based fish sampling. Front. Mar. Sci. 2023, 10, 1150651. [Google Scholar] [CrossRef]
- Jalali, M.A.; Ierodiaconou, D.; Monk, J.; Gorfine, H.; Rattray, A. Predictive mapping of abalone fishing grounds using remotely-sensed LiDAR and commercial catch data. Fish. Res. 2015, 169, 26–36. [Google Scholar] [CrossRef]
- Churnside, J.H.; Wells, R.; Boswell, K.M.; Quinlan, J.A.; Marchbanks, R.D.; McCarty, B.J.; Sutton, T.T. Surveying the distribution and abundance of flying fishes and other epipelagics in the northern Gulf of Mexico using airborne lidar. Bull. Mar. Sci. 2017, 93, 591–609. [Google Scholar] [CrossRef]
- Boswell, K.M.; Wilson, M.P.; Cowan, J.H., Jr. A semiautomated approach to estimating fish size, abundance, and behavior from dual-frequency identification sonar (DIDSON) data. N. Am. J. Fish. Manag. 2008, 28, 799–807. [Google Scholar] [CrossRef]
- Villon, S.; Chaumont, M.; Subsol, G.; Villéger, S.; Claverie, T.; Mouillot, D. Coral reef fish detection and recognition in underwater videos by supervised machine learning: Comparison between Deep Learning and HOG+SVM methods. In Proceedings of the International Conference on Advanced Concepts for Intelligent Vision Systems, Lecce, Italy, 24–27 October 2016; Springer: Cham, Switzerland, 2016; pp. 160–171. [Google Scholar] [CrossRef]
- Bicknell, A.W.; Godley, B.J.; Sheehan, E.V.; Votier, S.C.; Witt, M.J. Camera technology for monitoring marine biodiversity and human impact. Front. Ecol. Environ. 2016, 14, 424–432. [Google Scholar] [CrossRef]
- Shortis, M.; Harvey, E.; Abdo, D. A review of underwater stereo-image measurement for marine biology and ecology applications. In Oceanography and Marine Biology; CRC Press: Boca Raton, FL, USA, 2016; pp. 269–304. [Google Scholar] [CrossRef]
- Panetta, K.; Kezebou, L.; Oludare, V.; Agaian, S. Comprehensive Underwater Object Tracking Benchmark Dataset and Underwater Image Enhancement With GAN. IEEE J. Ocean. Eng. 2022, 47, 59–75. [Google Scholar] [CrossRef]
- Slonimer, A.L.; Dosso, S.E.; Albu, A.B.; Cote, M.; Marques, T.P.; Rezvanifar, A.; Ersahin, K.; Mudge, T.; Gauthier, S. Classification of Herring, Salmon, and Bubbles in Multifrequency Echograms Using U-Net Neural Networks. IEEE J. Ocean. Eng. 2023, 48, 1236–1254. [Google Scholar] [CrossRef]
- Ntouskos, V.; Mertikas, P.; Mallios, A.; Karantzalos, K. Seabed Classification From Multispectral Multibeam Data. IEEE J. Ocean. Eng. 2023, 48, 874–887. [Google Scholar] [CrossRef]
- Xiao, F.; Yuan, F.; Huang, Y.; Cheng, E. Turbid underwater image enhancement based on parameter-tuned stochastic resonance. IEEE J. Ocean. Eng. 2022, 48, 127–146. [Google Scholar] [CrossRef]
- Gu, K.; Liu, J.; Shi, S.; Xie, S.; Shi, T.; Qiao, J. Self-organizing multichannel deep learning system for river turbidity monitoring. IEEE Trans. Instrum. Meas. 2022, 71, 9510713. [Google Scholar] [CrossRef]
- Zeng, L.; Sun, B.; Zhu, D. Underwater target detection based on Faster R-CNN and adversarial occlusion network. Eng. Appl. Artif. Intell. 2021, 100, 104190. [Google Scholar] [CrossRef]
- Harden Jones, F. The reaction of fish to moving backgrounds. J. Exp. Biol. 1963, 40, 437–446. [Google Scholar] [CrossRef]
- SWIPENET: Object detection in noisy underwater scenes. Pattern Recognit. 2022, 132, 108926. [CrossRef]
- Cardaillac, A.; Ludvigsen, M. Camera-sonar combination for improved underwater localization and mapping. IEEE Access 2023, 11, 123070–123079. [Google Scholar] [CrossRef]
- Almanza-Medina, J.E.; Henson, B.; Zakharov, Y.V. Deep learning architectures for navigation using forward looking sonar images. IEEE Access 2021, 9, 33880–33896. [Google Scholar] [CrossRef]
- Chang, C.C.; Ubina, N.A.; Cheng, S.C.; Lan, H.Y.; Chen, K.C.; Huang, C.C. A Two-Mode Underwater Smart Sensor Object for Precision Aquaculture Based on AIoT Technology. Sensors 2022, 22, 7603. [Google Scholar] [CrossRef]
- Wang, Y.; Yu, X.; An, D.; Wei, Y. Underwater image enhancement and marine snow removal for fishery based on integrated dual-channel neural network. Comput. Electron. Agric. 2021, 186, 106182. [Google Scholar] [CrossRef]
- Ju, Y.; Xiao, J.; Zhang, C.; Xie, H.; Luo, A.; Zhou, H.; Dong, J.; Kot, A.C. Towards marine snow removal with fusing Fourier information. Inf. Fusion 2025, 117, 102810. [Google Scholar] [CrossRef]
- Kaneko, R.; Sato, Y.; Ueda, T.; Higashi, H.; Tanaka, Y. Marine Snow Removal Benchmarking Dataset. In Proceedings of the 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Taipei, Taiwan, 31 October–3 November 2023; pp. 771–778. [Google Scholar] [CrossRef]
- Debnath, B.; Ebu, I.A.; Biswas, S.; Gurbuz, A.C.; Ball, J.E. Fmcw radar range profile and micro-doppler signature fusion for improved traffic signaling motion classification. In Proceedings of the 2024 IEEE Radar Conference (RadarConf24), Denver, CO, USA, 6–10 May 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Xu, W.; Matzner, S. Underwater fish detection using deep learning for water power applications. In Proceedings of the 2018 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 12–14 December 2018; pp. 313–318. [Google Scholar] [CrossRef]
- Nabi, M.; Shah, C.; Alaba, S.Y.; Prior, J.; Campbell, M.D.; Wallace, F.; Moorhead, R.; Ball, J.E. Probabilistic model-based active learning with attention mechanism for fish species recognition. In Proceedings of the OCEANS 2023-MTS/IEEE US Gulf Coast, Biloxi, MS, USA, 25–28 September 2023; pp. 1–8. [Google Scholar] [CrossRef]
- Shah, C.; Nabi, M.; Alaba, S.Y.; Prior, J.; Caillouet, R.; Campbell, M.D.; Wallace, F.; Ball, J.E.; Moorhead, R. A zero shot detection based approach for fish species recognition in underwater environments. In Proceedings of the OCEANS 2023-MTS/IEEE US Gulf Coast, Biloxi, MS, USA, 25–28 September 2023; pp. 1–7. [Google Scholar] [CrossRef]
- Jäger, J.; Rodner, E.; Denzler, J.; Wolff, V.; Fricke-Neuderth, K. SeaCLEF 2016: Object Proposal Classification for Fish Detection in Underwater Videos. In Proceedings of the CLEF (Working Notes), Évora, Portugal, 5–8 September 2016; pp. 481–489. [Google Scholar]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. In Proceedings of the Computer Vision—ECCV 2020, Glasgow, UK, 23–28 August 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer: Cham, Switzerland, 2020; pp. 213–229. [Google Scholar] [CrossRef]
- Fang, Y.; Liao, B.; Wang, X.; Fang, J.; Qi, J.; Wu, R.; Niu, J.; Liu, W. You only look at one sequence: Rethinking transformer in vision through object detection. Adv. Neural Inf. Process. Syst. 2021, 34, 26183–26197. [Google Scholar]
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. Scaled-YOLOv4: Scaling Cross Stage Partial Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 13029–13038. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
- Feng, J.; Jin, T. CEH-YOLO: A composite enhanced YOLO-based model for underwater object detection. Ecol. Inform. 2024, 82, 102758. [Google Scholar] [CrossRef]
- Li, Y.; Li, Q.; Pan, J.; Zhou, Y.; Zhu, H.; Wei, H.; Liu, C. SOD-YOLO: Small-Object-Detection Algorithm Based on Improved YOLOv8 for UAV Images. Remote Sens. 2024, 16, 3057. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part I 14. Springer: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar] [CrossRef]
- Ortenzi, L.; Aguzzi, J.; Costa, C.; Marini, S.; D’Agostino, D.; Thomsen, L.; De Leo, F.C.; Correa, P.V.; Chatzievangelou, D. Automated species classification and counting by deep-sea mobile crawler platforms using YOLO. Ecol. Inform. 2024, 82, 102788. [Google Scholar] [CrossRef]
- Jocher, G.; Stoken, A.; Chaurasia, A.; Borovec, J.; Kwon, Y.; Michael, K.; Changyu, L.; Fang, J.; Skalski, P.; Hogan, A.; et al. ultralytics/yolov5: v6.0 - YOLOv5n 'Nano' models, Roboflow integration, TensorFlow export, OpenCV DNN support. Zenodo 2021. [Google Scholar] [CrossRef]
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common objects in context. In Proceedings of the Computer Vision—ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part V 13. Springer: Cham, Switzerland, 2014; pp. 740–755. [Google Scholar]
- Jung, H.K.; Choi, G.S. Improved YOLOv5: Efficient Object Detection Using Drone Images under Various Conditions. Appl. Sci. 2022, 12, 7255. [Google Scholar] [CrossRef]
- Wang, H.; Hu, Z.; Mo, H.; Zhao, X. Enhanced nighttime nail detection using improved YOLOv5 for precision road safety. Sci. Rep. 2025, 15, 5224. [Google Scholar] [CrossRef]
- Varghese, R.; Sambath, M. YOLOv8: A Novel Object Detection Algorithm with Enhanced Performance and Robustness. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, India, 18–19 April 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Zhang, Y.; Wu, Z.; Wang, X.; Fu, W.; Ma, J.; Wang, G. Improved YOLOv8 Insulator Fault Detection Algorithm Based on BiFormer. In Proceedings of the 2023 IEEE 5th International Conference on Power, Intelligent Computing and Systems (ICPICS), Shenyang, China, 14–16 July 2023; pp. 962–965. [Google Scholar] [CrossRef]
- Li, Y.; Fan, Q.; Huang, H.; Han, Z.; Gu, Q. A Modified YOLOv8 Detection Network for UAV Aerial Image Recognition. Drones 2023, 7, 304. [Google Scholar] [CrossRef]
- Ansah, P.A.K.; Appati, J.K.; Owusu, E.; Boahen, E.K.; Boakye-Sekyerehene, P.; Dwumfour, A. SB-YOLO-V8: A Multilayered Deep Learning Approach for Real-Time Human Detection. Eng. Rep. 2025, 7, e70033. [Google Scholar] [CrossRef]
- Bi, J.; Li, K.; Zheng, X.; Zhang, G.; Lei, T. SPDC-YOLO: An Efficient Small Target Detection Network Based on Improved YOLOv8 for Drone Aerial Image. Remote Sens. 2025, 17, 685. [Google Scholar] [CrossRef]
- Yang, X.; Zhang, S.; Liu, J.; Gao, Q.; Dong, S.; Zhou, C. Deep learning for smart fish farming: Applications, opportunities and challenges. Rev. Aquac. 2021, 13, 66–90. [Google Scholar] [CrossRef]
- Villon, S.; Mouillot, D.; Chaumont, M.; Darling, E.S.; Subsol, G.; Claverie, T.; Villéger, S. A deep learning method for accurate and fast identification of coral reef fishes in underwater images. Ecol. Inform. 2018, 48, 238–244. [Google Scholar] [CrossRef]
- Rathi, D.; Jain, S.; Indu, S. Underwater fish species classification using convolutional neural network and deep learning. In Proceedings of the 2017 Ninth International Conference on Advances in Pattern Recognition (ICAPR), Bangalore, India, 27–30 December 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Shreesha, S.; Pai, M.M.; Pai, R.M.; Verma, U. Pattern detection and prediction using deep learning for intelligent decision support to identify fish behaviour in aquaculture. Ecol. Inform. 2023, 78, 102287. [Google Scholar] [CrossRef]
- Zhou, C.; Xu, D.; Chen, L.; Zhang, S.; Sun, C.; Yang, X.; Wang, Y. Evaluation of fish feeding intensity in aquaculture using a convolutional neural network and machine vision. Aquaculture 2019, 507, 457–465. [Google Scholar] [CrossRef]
- Abinaya, N.; Susan, D.; Sidharthan, R.K. Deep learning-based segmental analysis of fish for biomass estimation in an occulted environment. Comput. Electron. Agric. 2022, 197, 106985. [Google Scholar] [CrossRef]
- Zambrano, A.F.; Giraldo, L.F.; Quimbayo, J.; Medina, B.; Castillo, E. Machine learning for manually-measured water quality prediction in fish farming. PLoS ONE 2021, 16, e0256380. [Google Scholar] [CrossRef] [PubMed]
- Boulais, O.; Alaba, S.Y.; Ball, J.E.; Campbell, M.; Iftekhar, A.T.; Moorhead, R.; Primrose, J.; Prior, J.; Wallace, F.; Yu, H.; et al. SEAMAPD21: A large-scale reef fish dataset for fine-grained categorization. In Proceedings of the Eighth Workshop on Fine-Grained Visual Categorization, Online, 25 June 2021. [Google Scholar]
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Wang, X.; Xue, G.; Huang, S.; Liu, Y. Underwater Object Detection Algorithm Based on Adding Channel and Spatial Fusion Attention Mechanism. J. Mar. Sci. Eng. 2023, 11, 1116. [Google Scholar] [CrossRef]
- Pachaiyappan, P.; Chidambaram, G.; Jahid, A.; Alsharif, M.H. Enhancing Underwater Object Detection and Classification Using Advanced Imaging Techniques: A Novel Approach with Diffusion Models. Sustainability 2024, 16, 7488. [Google Scholar] [CrossRef]
- Sung, M.; Yu, S.C.; Girdhar, Y. Vision based real-time fish detection using convolutional neural network. In Proceedings of the OCEANS 2017-Aberdeen, Aberdeen, UK, 19–22 June 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Jalal, A.; Salman, A.; Mian, A.; Shortis, M.; Shafait, F. Fish detection and species classification in underwater environments using deep learning with temporal information. Ecol. Inform. 2020, 57, 101088. [Google Scholar] [CrossRef]
- Ultralytics. Ultralytics/ultralytics: New YOLOv8 in PyTorch > ONNX > CoreML > TFLite. 2023. Available online: https://github.com/ultralytics/ultralytics (accessed on 8 March 2025).
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944. [Google Scholar] [CrossRef]
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768. [Google Scholar] [CrossRef]
- Terven, J.; Córdova-Esparza, D.M.; Romero-González, J.A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
- Roy, S.K.; Sukul, A.; Jamali, A.; Haut, J.M.; Ghamisi, P. Cross hyperspectral and LiDAR attention transformer: An extended self-attention for land use and land cover classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5512815. [Google Scholar] [CrossRef]
- Zhang, Z.; Lu, X.; Cao, G.; Yang, Y.; Jiao, L.; Liu, F. ViT-YOLO: Transformer-Based YOLO for Object Detection. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, 11–17 October 2021; pp. 2799–2808. [Google Scholar] [CrossRef]
- Wu, T.; Dong, Y. YOLO-SE: Improved YOLOv8 for Remote Sensing Object Detection and Recognition. Appl. Sci. 2023, 13, 12977. [Google Scholar] [CrossRef]
- Tong, Z.; Chen, Y.; Xu, Z.; Yu, R. Wise-IoU: Bounding box regression loss with dynamic focusing mechanism. arXiv 2023, arXiv:2301.10051. [Google Scholar]
Method | mAP0.5 | mAP0.5:0.95 | Parameters | GFLOPS | FPS |
---|---|---|---|---|---|
MobileNetv3-large | - | 32.51 | - | - | 105 |
VGG300 | - | 48.99 | - | - | 67 |
VGG512 | - | 52.75 | - | - | 54 |
YOLOv5s | 71.9 | 43.9 | 7.37 M | 17.1 | 117 |
YOLOv5m | 75.6 | 47.9 | 21.37 M | 49.5 | 110 |
YOLOv5l | 78.5 | 50.6 | 46.80 M | 109.9 | 103 |
YOLOv5enh | 81.1 | 53.0 | 61.30 M | 151.0 | 99 |
YOLOv8n | 72.1 | 45.4 | 3.66 M | 11.2 | 146 |
YOLOv8s | 76.1 | 49.6 | 11.21 M | 29.1 | 137 |
YOLOv8m | 80.3 | 52.7 | 25.91 M | 79.1 | 128 |
YOLOv8l | 81.1 | 53.4 | 43.70 M | 151.0 | 120 |
YOLOv10l | 84.2 | 58.5 | 25.92 M | 127.4 | 125 |
YOLOv8-TF | 87.9 | 61.2 | 30.56 M | 195.7 | 116 |
Method | mAP0.5 | mAP0.5:0.95 | Parameters | FPS |
---|---|---|---|---|
YOLOv8n | 83.90 | 53.70 | 3.93 M | 194 |
YOLOv8s | 87.65 | 54.80 | 14.81 M | 168 |
YOLOv8m | 89.10 | 54.10 | 26.49 M | 155 |
YOLOv8l | 91.70 | 56.40 | 47.31 M | 141 |
YOLOv10l | 92.40 | 57.12 | 25.79 M | 147 |
YOLOv8-TF | 94.60 | 58.21 | 30.51 M | 130 |
Method | mAP0.5 | mAP0.5:0.95 | Parameters | FPS |
---|---|---|---|---|
YOLOv8n | 47.70 | 32.40 | 4.07 M | 242 |
YOLOv8s | 59.44 | 42.43 | 14.83 M | 216 |
YOLOv8m | 62.90 | 45.70 | 26.86 M | 190 |
YOLOv8l | 67.20 | 49.60 | 47.36 M | 153 |
YOLOv10l | 68.20 | 50.50 | 25.88 M | 162 |
YOLOv8-TF | 69.50 | 51.80 | 30.54 M | 139 |
Ablation of YOLOv8-TF components (✓ = enabled, ✕ = disabled):
Depth Scale | CA Loss | Trans | Wise-IoU v3 Loss | mAP0.5 | mAP0.5:0.95 | Params | FPS |
---|---|---|---|---|---|---|---|
✓ | ✕ | ✕ | ✕ | 81.8 | 53.2 | 26.87 M | 126 |
✓ | ✓ | ✕ | ✕ | 82.8 | 54.8 | 27.12 M | 123 |
✓ | ✓ | ✓ | ✕ | 86.2 | 59.2 | 30.52 M | 118 |
✓ | ✓ | ✓ | ✓ | 87.9 | 61.2 | 30.56 M | 116 |