Study on the Classification Perception and Visibility Enhancement of Ship Navigation Environments in Foggy Conditions
Abstract
1. Introduction
2. Construction of the Ship Navigation Fog Environment Classification Standard and Image Dataset
2.1. Visibility Perception under Ship Navigation Fog Environment
- (1)
- The visual inspection method is the oldest visibility perception method and is still in use today. However, it relies solely on crew experience, is susceptible to subjective factors, and offers low accuracy.
- (2)
- The instrument measurement method uses a visibility detector that applies the transmission or scattering method to estimate the atmospheric extinction coefficient. Commonly used instruments include optical-sensing and lidar visibility meters, with the former comprising atmospheric transmissometers and backscatter, side-scatter, and forward-scatter meters. Instrument measurements are comparatively accurate, but purchase and maintenance costs are high, and human inspection is still needed to assist perception.
- (3)
- The image- and video-based measurement method estimates visibility from image-processing cues such as contrast, image inflection points, and the dark channel prior, but its results depend on camera calibration and image quality.
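Both the transmission and scattering methods in (2) ultimately estimate the atmospheric extinction coefficient, which Koschmieder's law converts into a visibility distance. A minimal sketch (the 2% contrast threshold giving the classic V ≈ 3.912/σ is a conventional choice, not a value taken from this paper):

```python
import math

def visibility_from_extinction(sigma: float, contrast_threshold: float = 0.02) -> float:
    """Koschmieder's law: visibility distance (m) from the atmospheric
    extinction coefficient sigma (1/m).

    With the classic 2% contrast threshold this reduces to
    V = -ln(0.02) / sigma ~= 3.912 / sigma.
    """
    if sigma <= 0:
        raise ValueError("extinction coefficient must be positive")
    return -math.log(contrast_threshold) / sigma

# Example: sigma = 0.004 m^-1 gives roughly 978 m, i.e. class 3
# (0.5-1.0 km) on the visibility scale used in this paper.
```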
2.2. Construction of Classified Image Dataset of Ship Navigation Fog Environment
3. Analysis of Images for Perception Enhancement
3.1. Enhanced Ship Navigation Fog Environment Perception Based on Image Enhancement
- (1)
- Global enhancement relies mainly on histogram equalization, Retinex theory, and high-contrast retention. Histogram equalization equalizes the histogram of the original image so that its gray-level distribution becomes uniform, improving contrast [26]. Retinex theory draws on trichromatic theory and color constancy to balance dynamic range compression, edge enhancement, and color fidelity [27]. High-contrast retention preserves contrast at the boundaries between light and dark regions while rendering the remaining areas medium gray; the modified image is then superimposed on the original one or more times to produce the enhanced image.
- (2)
- Local enhancement includes adaptive histogram equalization (AHE) and contrast-limited adaptive histogram equalization (CLAHE). AHE computes a histogram for each local region and redistributes brightness to adjust contrast and enhance the image [28]. CLAHE builds on AHE, solving its excessive-noise problem by clipping the histogram at a contrast limit, and accelerates equalization with an interpolation scheme [29].
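Global and local equalization share the same cumulative-histogram mapping; a minimal NumPy sketch of the global version is shown below (AHE applies this mapping per tile, and CLAHE additionally clips each tile's histogram at the contrast limit):

```python
import numpy as np

def equalize_hist(gray: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image.

    Maps each gray level through the normalized cumulative histogram so
    the output levels spread across the full 0-255 range.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                    # first non-empty bin
    if cdf[-1] == cdf_min:                       # flat image: nothing to equalize
        return gray.copy()
    # Classic equalization formula; maps cdf_min -> 0 and the full count -> 255.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]
```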
3.2. Enhanced Perception of Ship Navigation Fog Environment Based on the Physical Model
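Physical-model methods typically invert the atmospheric scattering model I(x) = J(x)t(x) + A(1 − t(x)). The sketch below estimates the transmission map with the dark channel prior; the patch size and ω = 0.95 are the common defaults from He et al., not values stated in this section:

```python
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Dark channel of an RGB image in [0, 1]: per-pixel channel minimum,
    followed by a local minimum filter over a patch x patch window."""
    h, w = img.shape[:2]
    chan_min = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(chan_min, pad, mode="edge")
    out = np.empty_like(chan_min)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img: np.ndarray, airlight: np.ndarray,
                          omega: float = 0.95, patch: int = 15) -> np.ndarray:
    """Transmission estimate t(x) = 1 - omega * dark_channel(I / A),
    derived from the scattering model I = J*t + A*(1 - t)."""
    return 1.0 - omega * dark_channel(img / airlight, patch)
```

Recovering the haze-free image J then follows as J = (I − A) / max(t, t0) + A, with a small floor t0 to avoid amplifying noise in dense fog.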
4. Construction of the Ship Navigation Fog Environment Classification Model
4.1. Overall Model Structure
4.2. The EfficientNet Network
4.3. Optimization of Ship Navigation Fog Environment Classification Model
4.3.1. Improvement of the Attention Mechanism
4.3.2. Improvement of the Loss Function
5. Comparative Analysis of the Results of the Fog Environment Visibility Experiment
5.1. Classification Experiment of Ship Navigation Fog Environment
5.1.1. Description of the Fog Environment Visibility Dataset
5.1.2. Ablation Experiment of Ship Navigation Fog Environment Classification Model
- (1)
- Basic EfficientNet: training with the unmodified EfficientNet network;
- (2)
- EfficientNet + ECA: the ECA attention module based on Equation (1) replaces the SE module in the MBConv structure;
- (3)
- EfficientNet + ECA + CBAM: the CBAM attention module based on Equation (2) is added after the last convolutional layer; and
- (4)
- Based on (3), the focal loss function was used to form the final improved EfficientNet model.
- When comparing Models (1) and (2), replacing the SE module with the ECA attention module improved the accuracy by 0.21% and reduced the number of parameters and the model size with the MACs essentially unchanged, showing that the ECA module was lighter and offered a stronger attention learning ability than the SE module.
- When comparing Models (2) and (3), adding the CBAM attention module improved the accuracy by 0.07% with only a modest increase in the MACs, the number of parameters, and the model size, indicating that CBAM helped the network extract more discriminative features.
- When comparing Models (3) and (4), the focal loss function accelerated network learning. Under the same input conditions, the accuracy of Model (4) improved by 0.06%.
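The focal loss used in Model (4) down-weights well-classified samples so that training concentrates on the hard, misclassified fog levels. A NumPy sketch for the multi-class case (α = 0.25 and γ = 2 are the defaults of Lin et al., assumed here since the paper's settings are not given above):

```python
import numpy as np

def focal_loss(probs: np.ndarray, labels: np.ndarray,
               gamma: float = 2.0, alpha: float = 0.25) -> float:
    """Multi-class focal loss: FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t).

    probs  : (N, C) softmax probabilities
    labels : (N,) integer class ids
    The (1 - p_t)^gamma factor shrinks the loss of easy examples
    (p_t near 1), focusing gradient on hard ones.
    """
    pt = probs[np.arange(len(labels)), labels]   # probability of the true class
    pt = np.clip(pt, 1e-7, 1.0)                  # numerical safety for log
    return float(np.mean(-alpha * (1.0 - pt) ** gamma * np.log(pt)))
```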
5.1.3. Classification Performance Analysis of Ship Navigation Fog Environment
5.2. Visibility Enhancement Experiment of Ship Fog Environment
5.2.1. Comparison of Visibility Enhancement Experiment in Fog Environment
- Samples in dataset classes 0–6 were processed with each enhancement algorithm;
- The classification model presented in this paper was used to classify the images processed in the previous step;
- The classification results for the processed images were compared with the original images' visibility levels to determine whether the visibility level improved, i.e., whether the enhancement was effective;
- For each level, the ratio of the number of effectively enhanced samples to the number of samples at that level was calculated to obtain the effective enhancement rate.
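The steps above can be sketched as follows (assuming, in line with the visibility scale of Section 2, that a higher class index means better visibility):

```python
from collections import Counter

def effective_enhancement_rate(original_levels, enhanced_levels):
    """Per-class effective enhancement rate.

    An enhancement counts as 'effective' for a sample when the classifier
    assigns the enhanced image a higher visibility level than the original.
    Returns {level: effective_count / samples_at_that_level}.
    """
    totals, effective = Counter(), Counter()
    for orig, enh in zip(original_levels, enhanced_levels):
        totals[orig] += 1
        if enh > orig:                 # visibility level improved
            effective[orig] += 1
    return {lvl: effective[lvl] / totals[lvl] for lvl in totals}
```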
5.2.2. Results of Visibility Enhancement in Fog Environment
6. Conclusions
- (1)
- We constructed a fog environment classification image dataset using visibility grade classification rules and perceived visibility.
- (2)
- Using this dataset, we designed a perceived visibility model structure using a discriminant deep learning architecture and the EfficientNet neural network while adding the CBAM, focal loss, and other improvements. Our experiments showed that our model’s accuracy exceeded 95%, which meets the needs of intelligent ship navigation in foggy conditions.
- (3)
- Using our model and the dataset, we were able to determine the best image-enhancement algorithm based on the type of fog detected. The dark channel prior algorithm worked best with fog classes 1, 2, 4, and 5. The CLAHE algorithm worked best with fog classes 0 and 3. The high-contrast retention algorithm worked best with fog class 6.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Ye, Y.; Chen, Y.; Zhang, P.; Cheng, P.; Zhong, H.; Hou, H. Traffic Accident Analysis of Inland Waterway Vessels Based on Data-driven Bayesian Network. Saf. Environ. Eng. 2022, 29, 47–57.
- Zhen, G. Discussion on the safe navigation methods of ships in waters with poor visibility. China Water Transp. 2022, 4, 18–20.
- Khan, S.; Goucher-Lambert, K.; Kostas, K.; Kaklis, P. ShipHullGAN: A generic parametric modeller for ship hull design using deep convolutional generative model. Comput. Methods Appl. Mech. Eng. 2023, 411, 116051.
- Wright, R.G. Intelligent autonomous ship navigation using multi-sensor modalities. Trans. Nav. Int. J. Mar. Navig. Saf. Sea Transp. 2019, 13, 3.
- Varelas, T.; Archontaki, S.; Dimotikalis, J.; Turan, O.; Lazakis, I.; Varelas, O. Optimizing ship routing to maximize fleet revenue at Danaos. Interfaces 2013, 43, 37–47.
- Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
- Zhang, D.; Zhao, Y.; Cui, Y.; Wan, C. A Visualization Analysis and Development Trend of Intelligent Ship Studies. J. Transp. Inf. Saf. 2021, 39, 7–16, 34.
- Pedersen, M.; Bruslund Haurum, J.; Gade, R.; Moeslund, T.B. Detection of marine animals in a new underwater dataset with varying visibility. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 15–20 June 2019.
- Kim, D.; Park, M.S.; Park, Y.J.; Kim, W. Geostationary Ocean Color Imager (GOCI) marine fog detection in combination with Himawari-8 based on the decision tree. Remote Sens. 2020, 12, 149.
- Cornejo-Bueno, S.; Casillas-Pérez, D.; Cornejo-Bueno, L.; Chidean, M.I.; Caamaño, A.J.; Cerro-Prada, E.; Casanova-Mateo, C.; Salcedo-Sanz, S. Statistical analysis and machine learning prediction of fog-caused low-visibility events at A-8 motor-road in Spain. Atmosphere 2021, 12, 679.
- Palvanov, A.; Cho, Y.I. VisNet: Deep convolutional neural networks for forecasting atmospheric visibility. Sensors 2019, 19, 1343.
- Huang, L.; Zhang, Z.; Xiao, P.; Sun, J.; Zhou, X. Classification and application of highway visibility based on deep learning. Trans. Atmos. Sci. 2022, 45, 203–211.
- Engin, D.; Genç, A.; Ekenel, H.K. Cycle-Dehaze: Enhanced CycleGAN for Single Image Dehazing. arXiv 2018, arXiv:1805.05308.
- Shao, Y.; Li, L.; Ren, W.; Gao, C.; Sang, N. Domain Adaptation for Image Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 2808–2817.
- Qin, X.; Wang, Z.; Bai, Y.; Xie, X.; Jia, H. FFA-Net: Feature Fusion Attention Network for Single Image Dehazing. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 11908–11915.
- Park, J.; Han, D.K.; Ko, H. Fusion of Heterogeneous Adversarial Networks for Single Image Dehazing. IEEE Trans. Image Process. 2020, 29, 4721–4732.
- Wu, H.; Qu, Y.; Lin, S.; Zhou, J.; Qiao, R.; Zhang, Z.; Xie, Y.; Ma, L. Contrastive Learning for Compact Single Image Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 10551–10560.
- Ullah, H.; Muhammad, K.; Irfan, M.; Anwar, S.; Sajjad, M.; Imran, A.S.; de Albuquerque, V.H.C. Light-DehazeNet: A Novel Lightweight CNN Architecture for Single Image Dehazing. IEEE Trans. Image Process. 2021, 30, 8968–8982.
- Koschmieder, H. Theorie der horizontalen Sichtweite. Beitr. Phys. Freien Atmos. 1924, 33–53, 171–181.
- Middleton, W. Vision through the Atmosphere; University of Toronto Press: Toronto, ON, Canada, 1952.
- Redman, B.J.; van der Laan, J.D.; Wright, J.B.; Segal, J.W.; Westlake, K.R.; Sanchez, A.L. Active and Passive Long-Wave Infrared Resolution Degradation in Realistic Fog Conditions; No. SAND2019-5291C; Sandia National Laboratories (SNL-NM): Albuquerque, NM, USA, 2019.
- Lu, T.; Yang, J.; Deng, M. A Visibility Estimation Method Based on Digital Total-sky Images. J. Appl. Meteorol. Sci. 2018, 29, 63–71.
- He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami Beach, FL, USA, 20–26 June 2009; pp. 1956–1963.
- Ortega, L.; Otero, L.D.; Otero, C. Application of machine learning algorithms for visibility classification. In Proceedings of the 2019 IEEE International Systems Conference (SysCon), Orlando, FL, USA, 8–11 April 2019; IEEE: Piscataway, NJ, USA, 2019.
- Huang, L.; Wen, Y. Navigation Meteorology and Oceanography; Wuhan University of Technology Press: Wuhan, China, 2014.
- Li, Z.; Che, W.; Qian, M.; Xu, X. An Improved Image Defogging Algorithm Based on Histogram Equalization. Henan Sci. 2021, 39, 1–6.
- Zhu, Y.; Lin, J.; Qu, F.; Zheng, Y. Improved Adaptive Retinex Image Enhancement Algorithm. In Proceedings of the 7th International Conference on Electronic Technology and Information Science (ICETIS 2022), Harbin, China, 21–23 January 2022; pp. 1–4.
- Wen, H.; Li, J. An adaptive threshold image enhancement algorithm based on histogram equalization. China Integr. Circuit 2022, 31, 38–42, 71.
- Fang, D.; Fu, Q.; Wu, A. Foggy image enhancement based on adaptive dynamic range CLAHE. Laser Optoelectron. Prog. 2022, 9, 1–14.
- Yin, J.; He, J.; Luo, R.; Yu, W. A Defogging Algorithm Combining Sky Region Segmentation and Dark Channel Prior. Comput. Technol. Dev. 2022, 32, 216–220.
- Tan, M.; Le, Q.V. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning (ICML), PMLR, Long Beach, CA, USA, 9–15 June 2019.
- Nayak, D.R.; Padhy, N.; Mallick, P.K.; Zymbler, M.; Kumar, S. Brain tumor classification using dense efficient-net. Axioms 2022, 11, 34.
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Goswami, V.; Sharma, B.; Patra, S.S.; Chowdhury, S.; Barik, R.K.; Dhaou, I.B. IoT-Fog Computing Sustainable System for Smart Cities: A Queueing-based Approach. In Proceedings of the 2023 1st International Conference on Advanced Innovations in Smart Cities (ICAISC), Jeddah, Saudi Arabia, 23–25 January 2023.
- Li, X.; Wang, W.; Wu, L.; Chen, S.; Hu, X.; Li, J.; Tang, J.; Yang, J. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. Adv. Neural Inf. Process. Syst. 2020, 33, 21002–21012.
- Thombre, S.; Zhao, Z.; Ramm-Schmidt, H.; Garcia, J.M.V.; Malkamaki, T.; Nikolskiy, S.; Hammarberg, T.; Nuortie, H.; Bhuiyan, M.Z.H.; Sarkka, S.; et al. Sensors and AI techniques for situational awareness in autonomous ships: A review. IEEE Trans. Intell. Transp. Syst. 2020, 23, 64–83.
| Visibility Scale | Perceived Distance (NM) | Perceived Distance (km) | Weather Conditions |
|---|---|---|---|
| 0 | ≤0.05 | ≤0.1 | |
| 1 | 0.05–0.10 | 0.1–0.2 | Heavy fog, fog |
| 2 | 0.10–0.25 | 0.2–0.5 | Dense fog |
| 3 | 0.25–0.50 | 0.5–1.0 | Fog, light fog |
| 4 | 0.50–1.00 | 1.0–2.0 | Mist |
| 5 | 1.00–2.00 | 2.0–4.0 | Moderate rain, light fog |
| 6 | 2–5 | 4–10 | Light rain, light fog |
| 7 | ≥5 | ≥10 | Drizzle, light rain, no fog |
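For illustration, the scale above can be applied programmatically. This sketch uses the kilometre upper bounds implied by the NM column (1 NM ≈ 1.852 km) and assigns boundary values to the lower class, since the published ranges share endpoints:

```python
import bisect

# Upper bounds (km) of classes 0-6; anything beyond 10 km is class 7 (no fog).
_UPPER_KM = [0.1, 0.2, 0.5, 1.0, 2.0, 4.0, 10.0]

def visibility_class(vis_km: float) -> int:
    """Map a perceived visibility distance in km to the 0-7 scale above.

    bisect_left places a value equal to an upper bound into the lower class.
    """
    return bisect.bisect_left(_UPPER_KM, vis_km)
```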
| Stage (Convolution Order) | Operator | Resolution | # of Channels | # of Layers |
|---|---|---|---|---|
| 1 | Conv3×3 | 224×224 | 32 | 1 |
| 2 | MBConv1, k3×3 | 112×112 | 16 | 1 |
| 3 | MBConv6, k3×3 | 112×112 | 24 | 2 |
| 4 | MBConv6, k5×5 | 56×56 | 40 | 2 |
| 5 | MBConv6, k3×3 | 28×28 | 80 | 3 |
| 6 | MBConv6, k5×5 | 14×14 | 112 | 3 |
| 7 | MBConv6, k5×5 | 14×14 | 192 | 4 |
| 8 | MBConv6, k3×3 | 7×7 | 320 | 1 |
| 9 | Conv1×1 & Pool & FC | 7×7 | 1280 | 1 |
| Computational Model | Accuracy/% | MACs/G | Parameters/M | Model Size/MB | FPS |
|---|---|---|---|---|---|
| Model 1: original EfficientNet | 94.63 | 0.398 | 4.02 | 15.62 | 35.7 |
| Model 2: EfficientNet + ECA | 94.84 | 0.400 | 3.38 | 13.18 | 34.1 |
| Model 3: EfficientNet + ECA + CBAM | 94.91 | 0.401 | 4.20 | 16.31 | 38.5 |
| Model 4: the model in this paper | 95.05 | 0.401 | 4.20 | 16.31 | 42.9 |
| Enhancement Algorithm | Class 0 | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 |
|---|---|---|---|---|---|---|---|
| High-contrast retention | 71.76% | 45.90% | 40.36% | 26.17% | 26.60% | 28.81% | 29.80% |
| CLAHE | 76.08% | 19.16% | 51.16% | 27.46% | 10.04% | 29.67% | 24.64% |
| Dark channel prior | 51.50% | 48.26% | 73.26% | 22.54% | 29.49% | 48.59% | 23.50% |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, C.; Fan, B.; Li, Y.; Xiao, J.; Min, L.; Zhang, J.; Chen, J.; Lin, Z.; Su, S.; Wu, R.; et al. Study on the Classification Perception and Visibility Enhancement of Ship Navigation Environments in Foggy Conditions. J. Mar. Sci. Eng. 2023, 11, 1298. https://doi.org/10.3390/jmse11071298