Article

An Underwater Image Enhancement Algorithm for Environment Recognition and Robot Navigation

Fujian Key Laboratory of Brain-inspired Computing Technique and Applications, School of Information Science and Engineering, Xiamen University, Xiamen 361005, China
* Author to whom correspondence should be addressed.
Robotics 2018, 7(1), 14; https://doi.org/10.3390/robotics7010014
Submission received: 18 November 2017 / Revised: 1 March 2018 / Accepted: 6 March 2018 / Published: 13 March 2018

Abstract

There are many tasks in underwater robotics and marine science that require clear, easily recognizable images, such as underwater target detection, robot navigation, and obstacle avoidance. However, water turbidity often makes underwater image quality too low for reliable recognition. This paper proposes applying the dark channel prior model to underwater environment recognition, in which underwater reflection models are used to obtain enhanced images. By combining the dark channel prior model with the underwater diffuse model, the proposed approach achieves very good performance and robustness across multiple scenes. Experimental results are given to show the effectiveness of the dark channel prior model in underwater scenarios.

1. Introduction

Underwater robotics, marine science, and underwater exploration have become increasingly active in recent years, and there is a strong need to apply computer-vision-based algorithms in these fields. However, illumination attenuation and uneven lighting produce low and unbalanced image brightness. Moreover, scattering and absorption introduce serious back-scattering noise, so underwater images often suffer from poor quality, such as low contrast and blur. This is a difficult problem for activities that require clear and easily recognizable underwater images.
Generally speaking, the principle of underwater imaging can be represented simply by Figure 1 [1]. The whole process can be decomposed into two parts: forward transmission and back-scattering. The scattering effect is caused by suspended particles in the water that reflect light in other directions, which makes the image blurry. The absorption is caused by the water medium itself, which attenuates light rays according to their wavelengths; this makes the image lose contrast and reduces the visible range. The underwater imaging model [2,3,4] can be expressed by Equation (1):
$$L_r = e^{-cl}L_p + \int_r k_f \frac{\beta(\theta)\,I(r)}{c}\left(1 - e^{-cl}\right)dr \qquad (1)$$
in which Lr represents the image we actually see, Lp indicates the picture without noise, c represents the attenuation coefficient of water, l represents the distance between the object and the camera, kf is a constant related to the focal length, β(θ) represents the volume scattering coefficient, and I(r) represents the intensity of light on the object plane.
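To make the role of each parameter concrete, the short Python sketch below simulates a simplified version of Equation (1), collapsing the back-scattering integral into a single constant B and assuming one scalar attenuation coefficient; all numbers are illustrative rather than measured values.

```python
import numpy as np

def degrade(L_p, c, l, B):
    """Simplified Equation (1): forward transmission e^{-cl} attenuates the
    clean image L_p, while back-scatter (collapsed here into the constant B)
    adds a veil that grows as the transmission drops."""
    t = np.exp(-c * l)              # forward-transmission factor e^{-cl}
    return t * L_p + B * (1.0 - t)  # attenuated signal + back-scatter veil

# Illustrative use: a random patch standing in for a clean image in [0, 1].
rng = np.random.default_rng(0)
clean = rng.random((4, 4))
observed = degrade(clean, c=0.5, l=2.0, B=0.8)
```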
Several studies [5,6,7] have proposed a variety of methods for the determination of c and β(θ) and given measurements under different underwater environments. This means that if we know the type of underwater environment, we can use the c and β(θ) measured in the corresponding underwater environment to estimate the result of the underwater imaging model.
Inspired by this, we considered that poor underwater image quality is mainly due to the various types of noise caused by scattering and absorption. To tackle the aforementioned problems, this paper presents a novel method that makes underwater images less affected by the underwater environment. The method first removes the underwater noise and then uses contrast stretching to enhance the denoised image. In this way, our proposed method reduces the influence of noise while retaining more information from underwater images.
The rest of this paper is organized as follows: Section 2 offers a brief overview of the existing research work that is related to this research; in Section 3, the proposed approach is described in detail; experimental results are given in Section 4 to demonstrate the feasibility and performance of the proposed method; and, finally, a brief conclusion and future works are presented in Section 5.

2. Related Works

2.1. Classical Models

Generally, there are four ways to enhance underwater images: enhancement in the spatial domain [8,9], enhancement in the frequency domain, color-constancy-based enhancement, and multi-method-based enhancement [10,11,12].
Iqbal et al. [13] proposed an algorithm based on an integrated color model (ICM). This algorithm first stretches the RGB channels, then converts the result to HSV space and further stretches the S and V components. It can effectively extend the display range of each channel and realize contrast enhancement. However, the method only achieves a partial enhancement effect, as neither the statistical distribution of the intensity values nor the position information is taken into account.
Histogram equalization (HE) [14] is another common enhancement method in the spatial domain. HE takes into account the statistical distribution of the values of each channel but still ignores location information, so it often enhances noise and image details at the same time. A typical improvement combines generalized histogram equalization, the discrete wavelet transform, and the KL-transform [15], achieving better performance.
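For illustration, naive per-channel HE takes only a few lines with OpenCV; the input path is hypothetical, and this variant shifts colors precisely because, as noted, it ignores spatial and cross-channel information.

```python
import cv2

# Naive per-channel histogram equalization (spatial-domain baseline).
# equalizeHist works on 8-bit single-channel images, so each channel is
# equalized independently; noise is stretched together with detail.
img = cv2.imread("underwater.jpg")  # hypothetical input path
equalized = cv2.merge([cv2.equalizeHist(ch) for ch in cv2.split(img)])
```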
Ancuti et al. [10,11] used image fusion for underwater image enhancement. Their algorithm first uses the Shades-of-Gray method to perform color correction, obtaining an enhanced image A; image A is then denoised and its contrast enhanced to obtain image B. Laplacian contrast maps, local contrast maps, saliency feature maps, and exposure maps are computed for A and B, used to calculate normalized weight maps, and finally multi-resolution analysis synthesizes the enhanced image. However, the computational complexity of this method is too high for real-time use in actual scenes.
The most representative color-constancy theory is Retinex (retina plus cortex), proposed by Land [16], which holds that the color perceived by human vision depends mainly on the reflectance component r rather than on the image f projected onto the retina. Retinex theory attempts to separate r from f and reduce the influence of the illumination component i on the image, so as to enhance it. Commonly used Retinex algorithms include single-scale Retinex (SSR) [16], multi-scale Retinex (MSR) [17], and multi-scale Retinex with color restoration (MSRCR) [17]. Zhang et al. [18] have shown that these methods perform well in many tasks, but they also exhibit the halo artifacts visible in Figure 2: edges in the processed images are over-emphasized, which raises contrast but causes the loss of more detail.
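As an illustration of the theory, a minimal single-scale Retinex sketch is given below, assuming the common formulation in which the illumination i is approximated by a Gaussian blur of the image f and the reflectance is recovered as log f - log i; the scale parameter is illustrative.

```python
import cv2
import numpy as np

def single_scale_retinex(img, sigma=80.0):
    """SSR: approximate the illumination i by Gaussian-blurring the image f,
    then keep the reflectance component r = log f - log i.
    img is a float image in (0, 1]; eps avoids log(0)."""
    eps = 1e-6
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)
    return np.log(img + eps) - np.log(illumination + eps)
```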
In general, the methods mentioned above can improve the quality of images containing relatively little noise, but they cannot deal with the very large amount of noise in images taken in extremely turbid underwater environments. The fundamental reason is that these algorithms approach the problem purely from an image-processing point of view and do not take the nature of the noise into account.

2.2. Dark Channel Prior Model

The fog image generation model [19,20,21,22] is generally described by the following expression:
$$I(x) = J(x)\,t(x) + A\left(1 - t(x)\right) \qquad (2)$$
in which I(x) represents the image we actually see, J(x) indicates the picture without fog, A represents the global atmospheric light, and t(x) represents the transmission rate. The goal of the algorithm is to recover J(x) from I(x).
The dark channel prior (DCP) model was proposed by He et al. [23] under the assumption that “some pixels have at least one channel with very low intensity in the outdoor fogless non-sky environment”. This can be described as follows:
$$J^{dark}(x) = \min_{c \in \{r,g,b\}}\left(\min_{y \in \Omega(x)} J^c(y)\right) \to 0 \qquad (3)$$
Assuming that the atmospheric light A is known, we can obtain the recovered picture:
$$J(x) = \frac{I(x) - A}{\max\left(t(x), t_0\right)} + A \qquad (4)$$
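As a concrete illustration, a minimal sketch of the dark channel and recovery computation is given below, assuming float RGB images in [0, 1]; the atmospheric light estimate is simplified to the single pixel with the largest dark-channel value rather than He et al.'s top-0.1% rule.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """J^dark: per-pixel minimum over the color channels, followed by a
    patch-wise minimum (erosion implements min over the neighborhood Omega(x))."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(img.min(axis=2), kernel)

def dcp_dehaze(img, omega=0.95, t0=0.1):
    """Recover J(x) = (I(x) - A) / max(t(x), t0) + A, with the transmission
    estimated as t = 1 - omega * dark_channel(I / A)."""
    A = img.reshape(-1, 3)[dark_channel(img).argmax()]   # simplified A estimate
    t = 1.0 - omega * dark_channel(img / np.maximum(A, 1e-6))
    t = np.maximum(t, t0)[..., None]                     # keep t away from zero
    return (img - A) / t + A
```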

2.3. Underwater Dark Channel Prior Models

Yang et al. [24] proposed an underwater dehazing algorithm based on the dark channel prior model; the overall steps were unchanged, with white-balance color recovery added. Some algorithms use different underwater scattering models to evaluate the light intensity [25,26]. Block et al. [27] proposed an automatic approach based on the dark channel prior model to recover the picture. However, these methods are too simple to contend with complex environments: in the case of uneven lighting, the processed image will be over-exposed, as can be seen in Figure 3b.
Wang et al. [28] proposed an underwater dehazing method based on the dark channel prior model and an underwater back-scattering model. This algorithm avoids the over-exposure problem but introduces another one: it makes the recovered picture darker than normal, and more information is lost, as can be seen from Figure 3c. In addition, due to the complexity of the underwater environment, this method relies heavily on manually determining the picture type and then selecting the appropriate parameters, so it is not an efficient algorithm.
Moreover, none of the previous methods have given experimental proof of the rationality of the underwater dark channel prior, so the recovery formula may not hold underwater. Since no such experimental proof could be found, we designed an experiment to verify the rationality of the underwater dark channel prior in the same way He et al. did on land [23]. In order to solve the problem of image darkening, we propose a new method for estimating the background light intensity together with a simple post-processing step. In order to recognize the type of underwater environment, we propose a simple and effective convolutional neural network for scene recognition that determines the underwater environment category.

3. The Proposed Approach

3.1. Architecture

For an original picture, we first identify its degree of turbidity with the underwater environment recognizer, since the degree of turbidity determines the parameters of the underwater reflection model. We then run the underwater image enhancement algorithm with the corresponding parameters to obtain the enhanced image. Figure 4 shows the proposed framework, which improves upon the two previous underwater image processing methods and achieves good enhancement performance. The process is automatic and does not rely on manual operation.

3.2. Verifying the Underwater Dark Channel Prior

In order to verify whether the dark channel prior holds in the underwater environment, we collected 2000 relatively clear pictures of underwater environments and computed statistics of the dark-channel intensities of all their pixels. As shown in Figure 5a, over 50% of the pixels have an intensity of 0, over 70% have intensities under 10, and over 90% have intensities under 50. The effect is not as pronounced as on land, since underwater images are always more or less scattered or blurred.
As a comparison, we collected 2000 turbid underwater images and counted the intensity of each pixel in the dark channel map. As shown in Figure 5b, the ratio of pixels with intensity 0 dropped below 20%, and a large number of pixel intensities fell within the range of 0 to 150. Clearly, the scattering of turbid water has a relatively large impact on image quality. We therefore conclude that the dark channel prior also holds in the underwater environment.
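The statistics in Figure 5 can be reproduced with a short script such as the sketch below, which assumes 8-bit images and, for brevity, uses the per-pixel channel minimum as the dark channel (the patch-minimum step can be added with an erosion as in the dark-channel sketch above).

```python
import numpy as np

def dark_channel_ratios(images):
    """Accumulate a 256-bin histogram of dark-channel intensities over a
    collection of HxWx3 uint8 images and normalize it to ratios."""
    hist = np.zeros(256, dtype=np.int64)
    for img in images:
        dark = img.min(axis=2)                        # per-pixel channel minimum
        hist += np.bincount(dark.ravel(), minlength=256)
    return hist / hist.sum()

# ratios = dark_channel_ratios(clear_water_set)       # hypothetical dataset
# print(ratios[0], ratios[:10].sum(), ratios[:50].sum())  # the Figure 5a stats
```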

3.3. Underwater Environment Recognition

As the underwater environment is very complex, we cannot use a single set of parameters to handle all situations. The traditional underwater dehazing method based on the dark channel prior model and underwater back-scattering requires manually determining the environment and selecting the appropriate parameters, which is obviously inefficient and impractical.
After observing and summarizing previous underwater research measurements, we found that most underwater images can be divided into four types: pure water, clean water, mildly turbid water, and severely turbid water. Since there is currently no suitable underwater image dataset, our underwater data were collected online and annotated manually in advance; each category has 2000 pictures, and the whole dataset is partitioned into a training set and a test set at an 8:2 ratio.
It is difficult to hand-design features that classify these pictures into four categories using traditional methods. In fact, we tried features such as brightness and contrast, but their performance was not good enough in practice. Since convolutional networks are very effective in image recognition, and many effective CNN architectures exist, such as LeNet-5 [29], AlexNet [30], and ResNet [31], it is feasible to use one to improve classification accuracy.
Although our task can be regarded as only a four-class classification problem, it is more complex in practice than digit classification. We attempted to use LeNet-5, but it could not learn this task well; other architectures are too complicated for this kind of problem and cause serious over-fitting because our data are insufficient. Our guiding principle is therefore to seek a balance between reducing model complexity and maximizing test precision. Based on that principle, we found a good architecture for this task after trying a variety of models; Figure 6 shows this framework in detail, and an illustrative sketch is given below. During training, we flip, rotate, and crop each image to augment the dataset, obtaining four categories of 12,000 pictures each.
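The exact network is the one drawn in Figure 6; the PyTorch sketch below is only an illustrative stand-in for that kind of small CNN, with layer widths of our own choosing, together with the flip/rotate/crop augmentation described above.

```python
import torch.nn as nn
from torchvision import transforms

class SceneNet(nn.Module):
    """A small four-class water-type classifier. Layer widths here are
    illustrative stand-ins for the architecture shown in Figure 6."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Flip / rotate / crop augmentation, as described in the training process;
# the rotation angle and crop size are assumed values.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.RandomResizedCrop(64),
    transforms.ToTensor(),
])
```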
Finally, we obtained an accuracy of 97.2% on the test set; Figure 7 shows the performance of this framework.

3.4. Underwater Image Denoising Algorithm

As we have mentioned above, the enhanced images can be over-exposed if we use the original dark channel prior model [23,24]. This is because reflection underwater and reflection on land are different. However, we can find some similarities by comparing the underwater imaging model (Equation (1)) and the foggy imaging model (Equation (2)): if we use $e^{-cl}$ as the transmittance t(x) and let
$$A' = \int_r k_f \frac{\beta(\theta)\,I(r)}{c}\,dr \qquad (5)$$
the underwater imaging model can be rewritten as follows:
$$L_r = t(x)\,L_p + A'\left(1 - t(x)\right) \qquad (6)$$
We can see that the underwater imaging principle is almost the same as the fog imaging principle; the estimation of the background light A' is thus converted into the calculation of the underwater back-scattering.
According to the dark channel prior model, we can obtain the recovery equation as follows:
$$L_p(x) = \frac{L_r - A'}{\max\left(t(x), t_0\right)} + A' \qquad (7)$$
which means that once we estimate the background light A’ correctly, we can recover the underwater image. Thus, the estimation of the background light can directly affect the performance of the image enhancement.
Background light estimates based on the original picture were used and improved by Tan [19] and Fattal [32], respectively. The core idea of those methods is to select the pixel value with the strongest light intensity in the original image as the estimate of the background light, because the brightest point in a picture is a good indicator of the overall environment. However, when the intensity values of the fog and of an object are close, the object will not be recognized correctly, and the algorithm treats it as fog.
In the underwater environment, the light intensity can be very uneven. Wang et al. [28] took the maximum brightness of an image as the background light, which is a convenient estimate; however, because the overall brightness of the image is ignored, images become very dark after the enhancement process.
Based on the characteristics of the underwater environment, this paper proposes a new background light estimation algorithm to reflect the overall background light of the underwater environment.
We take the brightest 1% of pixels in the image, denoted Nmax; then the 0.5% of pixels in the middle of the intensity range, denoted Nmiddle; and finally the darkest 0.5% of pixels, denoted Nmin. We then obtain three different average background light estimates:
$$A'_{max} = \frac{1}{N_{max}}\sum_{i=1}^{N_{max}} k_f \frac{\beta(\theta_i)\,I_i(r)}{c}, \quad A'_{middle} = \frac{1}{N_{middle}}\sum_{i=1}^{N_{middle}} k_f \frac{\beta(\theta_i)\,I_i(r)}{c}, \quad A'_{min} = \frac{1}{N_{min}}\sum_{i=1}^{N_{min}} k_f \frac{\beta(\theta_i)\,I_i(r)}{c} \qquad (8)$$
As a result, we obtain three recovered images, which we average to obtain the final recovered image. The background light obtained in this way reflects the average brightness of a picture as completely as possible and recovers a brighter image. A sketch of this estimator is given below.
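The paper does not specify how pixels are ranked, so ranking by mean channel intensity is our assumption in the sketch, as is reusing the recovery equation (7) once per estimate before averaging.

```python
import numpy as np

def background_lights(img):
    """Three averaged background-light estimates: the brightest 1% of pixels
    (A'_max), the middle 0.5% of the intensity order (A'_middle), and the
    darkest 0.5% (A'_min). img is a float HxWx3 image; pixels are ranked by
    mean channel intensity (our own assumption)."""
    flat = img.reshape(-1, 3)
    order = flat.mean(axis=1).argsort()        # darkest -> brightest
    n = len(order)
    n1, n05 = max(1, n // 100), max(1, n // 200)
    mid = n // 2
    a_max = flat[order[-n1:]].mean(axis=0)
    a_middle = flat[order[mid:mid + n05]].mean(axis=0)
    a_min = flat[order[:n05]].mean(axis=0)
    return a_max, a_middle, a_min

def recover(img, t, A):
    """One pass of Equation (7): L_p = (L_r - A') / max(t, t0) + A',
    with t an HxW transmission map and t0 fixed at 0.1."""
    return (img - A) / np.maximum(t, 0.1)[..., None] + A

# final = np.mean([recover(img, t, A) for A in background_lights(img)], axis=0)
```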

3.5. Post-Processing

We found that picture contrast is relatively low compared with normal pictures after processing by our model, so there is potential for further enhancement, and we added a post-processing step. Since the basic noise has already been removed, there is no need for more complex methods that might degrade the image, so we chose a relatively simple linear contrast enhancement, described as follows:
$$g(x, y) = \alpha f(x, y) + \beta \qquad (9)$$
in which g(x,y) represents the enhanced image, f(x,y) is the original image, and α and β are coefficients that adjust the contrast and brightness, respectively, and are related to the specific scene.
In our experiments, specifically, clean water: α = 1.0, β = 0.1; mildly turbid water: α = 1.5, β = −0.1; and severely turbid water: α = 1.2, β = 0.1.
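Under the assumption that images are scaled to [0, 1], the post-processing step reduces to a lookup and one affine operation; the clipping below is our addition to keep the output displayable.

```python
import numpy as np

# (alpha, beta) per water type, from the experiments described above.
STRETCH_PARAMS = {
    "clean": (1.0, 0.1),
    "mildly_turbid": (1.5, -0.1),
    "severely_turbid": (1.2, 0.1),
}

def contrast_stretch(f, water_type):
    """g(x, y) = alpha * f(x, y) + beta, with f scaled to [0, 1]; clipping
    keeps the result in range (our addition, not specified in the text)."""
    alpha, beta = STRETCH_PARAMS[water_type]
    return np.clip(alpha * f + beta, 0.0, 1.0)
```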

4. Experimental Results

In a very pure and clear underwater environment, it is not necessary to perform image enhancement, which would only risk information loss. Our experiments therefore concentrate on the other three situations: clean water, mildly turbid water, and severely turbid water. Figure 8, Figure 9 and Figure 10 show the original images and the images enhanced by the different algorithms. More specifically, the images in column (a) are the originals, those in column (b) are produced by the original dark channel prior model (DCP) [23,24], those in column (c) by the back-scattering dark channel prior model (BSDCP) [28], and those in column (d) by our method.
As can be seen from the first row of Figure 8, when the light is relatively uniform and adequate, the differences between the results of the individual algorithms are very small. However, when the brightness is not uniform, the differences are large, as shown in the second row of Figure 8: the over-exposure problem of the original dark channel prior algorithm becomes very serious, and the back of the fish is rendered white, totally different from the original image. Our method performs comparatively well at denoising and color recovery in both situations.
In the next scenario, a more extreme underwater environment, our method performs significantly better than the other algorithms. It should be noted that the other methods need their parameters adjusted manually according to the underwater environment before enhancement, whereas our method requires no manual adjustment.
In fact, there is still no recognized standard for underwater image quality evaluation; Yang [33] and Li [34] systematically discussed methods for comparing the quality of underwater images. In this paper, we selected three experimental metrics (contrast, entropy, and average gradient) to evaluate the performance of each algorithm. In order to illustrate the generalization of the algorithm, we used the test set mentioned in Section 3.3: a total of 1200 pictures from three kinds of underwater environments, clean, mildly turbid (MT), and severely turbid (ST), with 400 pictures per group. We then counted the proportion of images on which each algorithm performed best in each group, as shown in Table 1.
It can be inferred from Table 1 that the images processed by the original dark channel prior model generally have higher contrast and average gradients but do poorly in entropy, because over-exposure causes dramatic color changes and loses image information. The experimental results show that our proposed method has very obvious advantages in most of these metrics, including stable contrast, stable average gradient, and less information loss. Therefore, our algorithm is more stable and more efficient than the original dark channel prior model and its back-scatter-based variant. Meanwhile, our method recognizes environmental features automatically, which makes the image enhancement process more accurate than the rest of the methods.

5. Conclusions

A new underwater image enhancement approach, based on the dark channel prior model and the underwater back-scatter model, is presented in this paper for underwater robot navigation and marine science recognition. The proposed method solves the over-exposure and over-darkening problems caused by the original dark channel prior model and the original back-scatter model, respectively. We introduced an environment recognition module that selects the most suitable image enhancement parameters for the corresponding underwater environment, allowing a variety of underwater scenes to be enhanced precisely. The model presented in this paper is more robust than other approaches under different underwater conditions and can resist extreme distortion while retaining as much picture detail as possible.
However, our model still has two problems that require further study. First, there is a shortage of underwater samples: no suitable well-annotated underwater dataset is available today, so we must collect and label underwater pictures ourselves, and we still have too few resources to do so at scale. Second, the speed and robustness of the model: obtaining an enhanced image requires many steps and relies on data measured by previous researchers, which limits the model's real-time use.
Our future work will focus on solving the two problems mentioned above. The automatic generation of underwater images will be investigated using unsupervised GANs [35] in order to obtain enough underwater images, providing more training data to improve the accuracy of underwater environment recognition. We are also striving to combine the underwater environment module and the image enhancement module into one network, so that the model parameters can be learned through supervised training, model complexity can be reduced, and speed can be effectively improved. Another direction is color correction: underwater pictures often have a strong blue-green cast, and we need a good way to make their colors look more natural and closer to colors on land.

Acknowledgments

In this paper, we have described research advances that enhance images in different underwater environments. We thank Kaiming He for his groundbreaking work. We also thank the entire Laboratory of Brain-inspired Computing Technique and Applications for their foundational contributions to the project and for developing the production pipeline.

Author Contributions

Wei Pan provided the ideas and major revisions of this paper. Kun Xie completed the paper manuscript and all the experiments. Suxia Xu provided the necessary experimental and communication support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jaffe, J.S. Computer modeling and the design of optimal underwater imaging systems. IEEE J. Ocean. Eng. 1990, 15, 101–111. [Google Scholar] [CrossRef]
  2. McLean, J.W.; Voss, K.J. Point spread function in ocean water: Comparison between theory and experiment. Appl. Opt. 1991, 30, 2027–2030. [Google Scholar] [CrossRef] [PubMed]
  3. Voss, K.J. Simple empirical model of the oceanic point spread function. Appl. Opt. 1991, 30, 2647–2651. [Google Scholar] [CrossRef] [PubMed]
  4. Hou, W.; Gray, D.J.; Weidemann, A.D.; Arnone, R.A. Comparison and validation of point spread models for imaging in natural waters. Opt. Express 2008, 16, 9958–9965. [Google Scholar] [CrossRef] [PubMed]
  5. Binding, C.E.; Bowers, D.G.; Mitchelson-Jacob, E.G. Estimating suspended sediment concentrations from ocean colour measurements in moderately turbid waters; the impact of variable particle scattering properties. Remote Sens. Environ. 2005, 94, 373–383. [Google Scholar] [CrossRef]
  6. Lee, M.E.; Korchemkina, E.N. Volume Scattering Function of Seawater; Springer Series in Light Scattering; Springer: Cham, Switzerland, 2018; pp. 151–195. [Google Scholar]
  7. Kirk, J.T. Volume scattering function, average cosines, and the underwater light field. Limnol. Oceanogr. 1991, 36, 455–467. [Google Scholar] [CrossRef]
  8. Singh, S.S.; Jarial, P. Review and Comparative Analysis on Image Enhancement for Underwater Images. Int. J. Adv. Res. Comput. Sci. 2017, 8. [Google Scholar] [CrossRef]
  9. Lu, H.; Li, Y.; Zhang, Y.; Chen, M.; Serikawa, S.; Kim, H. Underwater optical image processing: A comprehensive review. Mob. Netw. Appl. 2017, 22, 1204–1211. [Google Scholar] [CrossRef]
  10. Ancuti, C.; Ancuti, C.O.; Haber, T.; Bekaert, P. Enhancing Underwater Images and Videos by Fusion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 81–88. [Google Scholar]
  11. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color balance and fusion for underwater image enhancement. IEEE Trans. Image Process. 2018, 27, 379–393. [Google Scholar] [CrossRef] [PubMed]
  12. Raj, S.M.A.; Khadeeja, N.; Supriya, M.H. Implementation of Histogram Based Image Fusion Technique for Underwater Image Enhancement in Reconfigurable Platform. Indian J. Sci. Technol. 2017, 10. [Google Scholar] [CrossRef]
  13. Iqbal, K.; Salam, R.A.; Osman, A.; Talib, A.Z. Underwater Image Enhancement Using an Integrated Colour Model. IAENG Int. J. Comput. Sci. 2007, 34, 239–244. [Google Scholar]
  14. Althaf, S.K.; SK, J.B.; Shaik, M.A. A Study on Histogram Equalization Techniques for Underwater Image Enhancement. Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol. 2017, 2. [Google Scholar]
  15. Badgujar, P.N.; Singh, J.K. Underwater image enhancement using generalized histogram equalization, discrete wavelet transform and KL-transform. Int. J. Innov. Res. Sci. Eng. Technol. 2017, 6, 11834–11840. [Google Scholar]
  16. Jobson, D.J.; Rahman, Z.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462. [Google Scholar] [CrossRef] [PubMed]
  17. Jobson, D.J.; Rahman, Z.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef] [PubMed]
  18. Zhang, S.; Wang, T.; Dong, J.; Yu, H. Underwater image enhancement via extended multi-scale Retinex. Neurocomputing 2017, 245, 1–9. [Google Scholar] [CrossRef]
  19. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  20. Fattal, R. Single Image Dehazing. In Proceedings of the Annual Conference on Computer Graphics SIGGRAPH, Los Angeles, CA, USA, 11–15 August 2008. [Google Scholar]
  21. Narasimhan, S.G.; Nayar, S.K. Chromatic Framework for Vision in Bad Weather. In Proceedings of the 2000 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Hilton Head Island, SC, USA, 13–15 June 2000; pp. 598–605. [Google Scholar]
  22. Narasimhan, S.G.; Nayar, S.K. Vision and the Atmosphere. Int. J. Comput. Vis. 2002, 48, 233–254. [Google Scholar] [CrossRef]
  23. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [PubMed]
  24. Yang, H.Y.; Chen, P.Y.; Huang, C.C.; Zhuang, Y.Z.; Shiau, Y.H. Low Complexity Underwater Image Enhancement Based on Dark Channel Prior. In Proceedings of the 2011 Second International Conference on Innovations in Bio-inspired Computing and Applications Innovations in Bio-inspired Computing and Applications (IBICA), Ostrava, Czech Republic, 17–20 December 2011. [Google Scholar]
  25. Peng, Y.T.; Cosman, P.C. Underwater image restoration based on image blurriness and light absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594. [Google Scholar] [CrossRef] [PubMed]
  26. Akila, C.; Varatharajan, R. Color fidelity and visibility enhancement of underwater image de-hazing by enhanced fuzzy intensification operator. Multimedia Tools Appl. 2018, 77, 4309–4322. [Google Scholar] [CrossRef]
  27. Block, M.; Gehmlich, B.; Hettmanczyk, D. Automatic Underwater Image Enhancement using Improved Dark Channel Prior. Stud. Digit. Herit. 2017, 1, 566–589. [Google Scholar] [CrossRef]
  28. Wang, Z.; Zheng, B.; Tian, W. New Approach for Underwater Image Denoise Combining Inhomogeneous Illumination and Dark Channel Prior; MTS: Moscow, Russia, 2013. [Google Scholar]
  29. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  30. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 60, 1097–1105. [Google Scholar] [CrossRef]
  31. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision And Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  32. Fattal, R. Single image dehazing. ACM Trans. Gr. (TOG) 2008, 27, 72. [Google Scholar] [CrossRef]
  33. Yang, M.; Sowmya, A. New Image Quality Evaluation Metric for Underwater Video. IEEE Signal Process. Lett. 2014, 21, 1215–1219. [Google Scholar] [CrossRef]
  34. Li, F.; Wu, J.; Wang, Y.; Zhao, Y.; Zhang, X. A Color Cast Detection Algorithm of Robust Performance. In Proceedings of the IEEE International Conference on Advanced Computational Intelligence, Nanjing, China, 18–20 October 2012; pp. 662–664. [Google Scholar]
  35. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 2672–2680. [Google Scholar]
Figure 1. How the underwater image is generated.
Figure 2. Underwater images processed by Retinex. (a) Original; (b) Retinex.
Figure 3. Two methods for enhancing underwater images. (a) Original; (b) DCP; (c) BSDCP.
Figure 4. The framework of the underwater image enhancement system.
Figure 5. The ratio of the dark channel. (a) Images in clean water; (b) images in turbid water.
Figure 6. Convolutional network architecture to identify underwater scenes.
Figure 7. Performance of this framework. (a) Loss; (b) accuracy.
Figure 8. Image enhancement under clean water. (a) Original; (b) DCP; (c) BSDCP; (d) our result.
Figure 9. Image enhancement under mildly turbid water. (a) Original; (b) DCP; (c) BSDCP; (d) our result.
Figure 10. Image enhancement under severely turbid water. (a) Original; (b) DCP; (c) BSDCP; (d) our result.
Table 1. The proportion of algorithms that performed the best in each group (%).

|          | Contrast |       |       | Entropy |      |      | Average Gradient |       |      |
|----------|----------|-------|-------|---------|------|------|------------------|-------|------|
|          | Clean    | MT    | ST    | Clean   | MT   | ST   | Clean            | MT    | ST   |
| Original | 2.50     | 7.25  | 6.0   | 1.0     | 3.0  | 1.5  | 4.5              | 5.25  | 5.5  |
| DCP      | 19.25    | 43.0  | 47.0  | 0.75    | 0.0  | 0.0  | 42.5             | 40.5  | 42.5 |
| BSDCP    | 12.0     | 11.0  | 13.25 | 2.0     | 0.0  | 0.0  | 1.0              | 27.5  | 29.0 |
| Result   | 66.25    | 38.75 | 33.75 | 96.25   | 97.0 | 98.5 | 52.0             | 26.75 | 23.0 |
