Communication

The Color Improvement of Underwater Images Based on Light Source and Detector

1 Institute of Deep-Sea Science and Engineering, Chinese Academy of Sciences, Sanya 572000, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(2), 692; https://doi.org/10.3390/s22020692
Submission received: 14 December 2021 / Revised: 12 January 2022 / Accepted: 14 January 2022 / Published: 17 January 2022
(This article belongs to the Section Sensing and Imaging)

Abstract: As one of the most direct approaches to perceiving the world, optical images can provide a wealth of useful information for underwater applications. However, underwater images often exhibit color deviation due to light attenuation in water, which reduces the efficiency and accuracy of underwater applications. To improve the color reproduction of underwater images, we propose a method that adjusts the spectral component of the light source and the spectral response of the detector. We built an experimental setup to study the color deviation of underwater images captured with different lamps and different cameras. The experimental results showed that (a) among the light sources, the warm-light LED (light-emitting diode) produced the smallest color deviation (a Δa*² + Δb*² value of 26.58), and (b) among the detectors, the 3×CMOS RGB camera (a novel underwater camera with three CMOS sensors developed by our team to suppress color deviation) produced the smallest color deviation (a Δa*² + Δb*² value of 25.25). These results verify our assumption that underwater image color can be improved by adjusting the spectral component of the light source and the spectral response of the detector. Unlike color-improvement methods based on image processing, this method is hardware-based, so it retains more image information and consumes less time.

1. Introduction

1.1. The Background of Color Improvement in Marine Surveys

As one of the most direct approaches to perceiving the world, optical images can provide a wealth of useful information for various underwater applications, such as marine geology surveys, underwater mining, fishery, and marine archaeology [1,2,3]. However, unlike optical imaging systems in the terrestrial environment, underwater imaging systems generally suffer from color deviation because light of different wavelengths is attenuated to varying degrees as it travels through water. This color deviation impairs the reliability and utility of underwater applications [4,5]. The color improvement of underwater images is therefore significant for underwater applications [6,7,8,9], and the topic has received considerable attention in recent decades [10,11]. To address the color deviation of underwater images, much research has been conducted, which can be classified into two categories: color restoration with prior information about light attenuation, and color enhancement without such information.

1.2. Related Work

1.2.1. Color Restoration with the Prior Information of Light Attenuation

Color restoration with prior information is based on the analysis of light attenuation. It requires three steps: analyzing the attenuation characteristics of light in water, building an underwater imaging model, and restoring the color of the underwater image by data processing [12,13]. First, the attenuation of light can be analyzed in terms of the spectrum or the color channels [14]. In [15], Kan et al. considered the nonlinear attenuation of light of different wavelengths at different depths, then calculated the change in the three color channel values and used it to compensate for the color loss. In [16], after analyzing the influence of spectral discretization on an underwater image, Boffety et al. studied the color reproduction of underwater images. In [17], Kaeli et al. proposed a novel method to estimate the attenuation coefficient in water with a Doppler velocity log. Second, an underwater imaging model is built. One general model of the underwater imaging process is the Jaffe–McGlamery model, in which the irradiance of a monochromatic underwater image is formulated as the linear combination of three components: the direct component, the absorption component, and the scattering component [18,19]. To tackle the color deviation caused by artificial lighting, our team proposed a new model of underwater image degradation that incorporates the parameters of deep-sea lamps [20]. In [21], Guo et al. presented a model representing the absorption, scattering, and refraction of water, lenses, and image sensors. In [22], Lu et al. proposed a novel underwater imaging model to compensate for the attenuation discrepancy along the propagation path. In [23], Park et al. proposed a novel underwater image-formation model in which forward scattering was included. Lastly, the color is compensated by data processing.
The Dark Channel Prior (DCP) from outdoor image dehazing has been introduced into underwater image color restoration [24]. In [25], Galdran et al. proposed the Red Channel Prior, based on the DCP, to recover the lost contrast in underwater images; this prior reverses the red channel to deal with the strong attenuation of red light in water bodies. In [26], Drews Jr. et al. derived the Underwater DCP (UDCP) from the traditional DCP by excluding the red channel when computing the prior. Apart from the DCP-related priors, other priors have been proposed for underwater image restoration. In [27], Meng et al. proposed a hybrid color-improvement method based on a principle that exploits the relationship of the R, G, and B (red, green, and blue) channels; validation showed that the method performed well. Ancuti et al. introduced a single-image approach built on the blending of two images directly derived from a color-compensated and white-balanced version of the original degraded image; the evaluation revealed that the enhanced images and videos were characterized by improved color reproduction [28].

1.2.2. Color Enhancement without the Information of Light Attenuation

Color enhancement without information about light attenuation is based on data processing and does not require specialized hardware or knowledge about underwater conditions or scene structure [29]. These methods can be divided into three categories: the Retinex algorithm, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm, and deep learning algorithms. In terms of the Retinex algorithm, Hassan et al. performed a Retinex-based enhancement of a CLAHE-processed image; a qualitative and quantitative comparison with existing approaches showed that it achieved better enhancement of underwater images [30]. In [31], Tang et al. proposed a new underwater image-enhancement algorithm based on adaptive feedback and the Retinex algorithm; the results showed that the color saturation, color richness, and clarity of the image were all significantly improved. Jobson et al. extended a previously designed single-scale center/surround Retinex to a multi-scale version that achieved simultaneous dynamic range compression, color consistency, and lightness rendition [32]. Zhang et al. presented a novel method, LAB-MSR, obtained by modifying the original Retinex algorithm; it applies a combination of bilateral and trilateral filters to the three channels of the image in the CIELAB color space according to the characteristics of each channel [33]. In terms of CLAHE, Iqbal et al. used histogram stretching in the RGB color space to restore the color balance, and saturation and intensity stretching in the HSI color space to increase the true color and address the lighting problem [34]. In [35], Ghani integrated histogram modification into two main color models, the Red–Green–Blue and Hue–Saturation–Value color spaces; qualitative analysis revealed that the proposed method could significantly reduce the blue–green effect.
In terms of deep learning algorithms, to reduce the amount of data required while providing better image enhancement, Deng et al. proposed an underwater image color transfer generative adversarial network (UCT-GAN) [36]. In [37], Chen et al. proposed a new underwater image-enhancement method based on deep learning and an image-formation model. In [38], Lu et al. proposed a multi-scale cycle generative adversarial network system incorporating structural similarity index measure (SSIM) loss, the dark channel prior algorithm, and adaptive SSIM loss, which achieved strong performance on underwater image color correction. Furthermore, many hybrid algorithms have been proposed. In [39], Azmi et al. proposed a natural-based underwater image color-enhancement method consisting of four steps: a new approach to neutralize the underwater color cast, dual-intensity image fusion based on the average of mean and median values, swarm intelligence based on mean equalization, and the unsharp masking technique. Experiments on underwater images captured under various conditions indicated that the method could significantly improve color reproduction. In [40], Li et al. proposed a weakly supervised color transfer method to correct color deviation by designing a multi-term loss function, including adversarial loss, cycle consistency loss, and SSIM loss; the experiments showed that the method produced visually pleasing results.

1.3. Our Work

Although the above color-improvement methods have advantages in cost and robustness, they also cause information loss and consume time, which works against rapid judgment and intelligent recognition in underwater applications. To address these problems, we propose a color-improvement method that adjusts the spectral component of the light source and the spectral response of the detector. To verify our assumption, we first analyzed the underwater imaging model. We then designed an experimental platform for analyzing the color deviation of underwater optical imaging and measured the color deviation of underwater images with different lamps and different cameras. The experimental results showed that the color deviation of an underwater image with a warm-light LED (light-emitting diode) (a Δa*² + Δb*² value of 26.58) is the smallest among the tested lamps, and the color deviation of an image with the 3×CMOS RGB camera (a novel underwater camera with three CMOS sensors developed by our team to suppress color deviation; a Δa*² + Δb*² value of 25.25) is the smallest among the tested cameras. These results verify our assumption that color can be improved by adjusting the spectral component of the light source and the spectral response of the detector. Furthermore, unlike color-improvement methods based on image processing, this method is hardware-based: it retains more image information and consumes less time, which is significant for rapid judgment and real-time video transmission in underwater applications.

2. Experimental Setup and Details

2.1. The Analysis of the Underwater Imaging Process

The underwater imaging process is shown in Figure 1: the light from the light source penetrates through the water to the target and is then reflected by the object to the lens and detector; the point p is imaged at the pixel i of the detector. L(λ) denotes the spectral component of the light source, S_p(λ) is the reflection function of point p at wavelength λ, and R_i(λ) denotes the spectral response of pixel i at wavelength λ.
In the underwater imaging model, the intensity value V_{i,p} recorded at pixel i of the detector for point p can be calculated as

V_{i,p} = \int_{\lambda} L(\lambda) \, S_p(\lambda) \, R_i(\lambda) \, e^{-\mu(\lambda) \, l_p} \, d\lambda   (1)

where μ(λ) denotes the light attenuation coefficient at wavelength λ and l_p denotes the optical path length from the light source to point p and on to the lens. As Equation (1) shows, four factors influence the intensity value V_{i,p}. The attenuation (i.e., μ(λ) and l_p) is the root cause of color deviation. The reflectance S_p(λ) is an intrinsic feature of the object. The spectral component L(λ) of the lamp and the spectral response R_i(λ) of the detector, however, can be modified, so we propose a method that adjusts the spectral distribution of the light source and the spectral response function of the detector.
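Once the spectral curves are sampled on a common wavelength grid, Equation (1) can be evaluated numerically. The sketch below uses made-up spectra purely for illustration; the variable names mirror the symbols in Equation (1).

```python
import numpy as np

def pixel_value(wavelengths, L, S_p, R_i, mu, l_p):
    """Evaluate Eq. (1) by trapezoidal integration:
    V_{i,p} = integral of L(l) * S_p(l) * R_i(l) * exp(-mu(l) * l_p) dl."""
    integrand = L * S_p * R_i * np.exp(-mu * l_p)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wavelengths)))

# Illustrative (made-up) spectra on a 400-700 nm grid
wl = np.linspace(400, 700, 301)
L = np.ones_like(wl)                          # flat light source
S_p = 0.5 * np.ones_like(wl)                  # grey target reflectance
R_i = np.exp(-((wl - 600.0) ** 2) / 2000.0)   # red-channel-like response
mu = 0.05 + 0.3 * (wl - 400.0) / 300.0        # attenuation rising toward red
print(pixel_value(wl, L, S_p, R_i, mu, l_p=2.0))
```

Because μ(λ) is larger in the red band, the computed value drops faster with l_p for a red-sensitive pixel than for a blue-sensitive one, which is exactly the color deviation this method targets.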

2.2. Experimental Setup and Process

To verify our assumption, we built the experimental setup shown in Figure 2a. The camera and lamp were placed outside the water tank, and the test target, a 24-color board, was placed inside the tank. The attenuation coefficient of the water is shown in Figure 2b. The experiment was carried out in a dark room. The light penetrates through the water to the 24-color board and is reflected back to the camera. By changing the distance between the target and the wall of the tank (i.e., the imaging distance in water), the color reproduction of underwater images with different lamps and cameras can be derived. The distance parameter is merely a variable for selecting samples, and our main conclusion does not depend on it; we set the imaging distance from 0 m to 2 m.
To analyze the effect of the light source on the color reproduction of underwater images, four kinds of lamps (a daylight LED, a warm-light LED, a cold-light LED, and an incandescent lamp) were used in the experiment. The spectral component curves of the four lamps are shown in Figure 3.
Meanwhile, to verify the effect of the detector on the color improvement of underwater images, four cameras, including three consumer cameras, were used. Camera No. 1 is the camera of a HUAWEI LIO-AL00 mobile phone, with the white balance set to 5500 K, corresponding to indirect sunlight on sunny days; camera No. 2 is a Hikvision DS-2CD7087EWD-A camera, with the white balance likewise set to indirect sunlight on sunny days (i.e., 5500 K); and camera No. 3 is a GoPro hero7-1, with the white balance also set to 5500 K. The detailed specifications of these three cameras are as follows:
The specifications of camera No.1—HUAWEI mobile phone LIO-AL00:
  • The camera contains four sub-cameras with 3× optical zoom (with 18 mm, 27 mm, 80 mm) and 30× digital zoom;
  • The resolution is 3840 × 2160;
  • The white balance setting is 5500 K;
  • The shutter speed is 1/125 s;
  • The exposure compensation is 0;
  • The ISO is 640;
  • AF (auto focus) is AF-C.
The specifications of camera No.2—Hikvision DS-2CD7087EWD-A:
  • The resolution is 3840 × 2160;
  • The size of detector is 1/1.8”;
  • The minimum illumination in color mode is 0.002 lux;
  • The white balance setting is indirect sunlight on sunny days (i.e., 5500 K);
  • The digital noise reduction level is set to 50;
  • The brightness is set to 50;
  • The contrast ratio is set to 50;
  • The sharpness is set to 50;
  • The saturation is set to 50;
  • The shutter speed is 1/25 s;
  • The day/night conversion mode is turned off;
  • The Backlight compensation function is turned off.
The specifications of camera No.3—GoPro hero7-1:
  • The resolution is 12 megapixels;
  • The white balance setting is 5500 K;
  • The sharpness setting is moderate;
  • The shutter speed is 1/125 s;
  • The exposure compensation is 0;
  • ISO is set from 100 to 3200;
  • The function of color is set to flat;
  • The FOV is set to linearity;
  • The SuperPhoto function is turned off.
The fourth camera in the experiment was the 3×CMOS RGB camera designed by our team (Figure 4 shows the difference between a traditional camera and the 3×CMOS RGB camera). Figure 4a shows the imaging principle of a traditional color camera, in which the filter consists of three types of micro-filters corresponding to the R, G, and B channels. Figure 4b shows the imaging principle of the 3×CMOS RGB camera, in which the polychromatic light is split into its R, G, and B components by a coated prism; the color image is then derived by composing the data from the three detectors corresponding to the R, G, and B channels. Figure 4c shows a photo of the prism spectral-splitting structure. Figure 4d shows the transmittance curves of the R, G, and B channels, in which the transmittance of the red channel is increased by the coating. Figure 4e shows a photo of the 3×CMOS RGB camera movement. The improvement in underwater image color was compared between the 3×CMOS camera and the three consumer cameras (the "raw images" were captured by the three common cameras, while the "improved images" were captured by the 3×CMOS camera).
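Because each of the three sensors behind the splitting prism sees the full scene, the color image is obtained by simply stacking the three single-channel frames, with no Bayer demosaicing. A minimal sketch (with toy data, not real sensor output):

```python
import numpy as np

def compose_rgb(frame_r, frame_g, frame_b):
    """Stack the three single-channel frames (one per CMOS sensor)
    into an H x W x 3 color image."""
    return np.stack([frame_r, frame_g, frame_b], axis=-1)

# Toy frames standing in for the three sensors' outputs
r = np.full((4, 6), 0.8)
g = np.full((4, 6), 0.5)
b = np.full((4, 6), 0.2)
print(compose_rgb(r, g, b).shape)  # → (4, 6, 3)
```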
In the 3×CMOS RGB camera, the coatings of the prism spectral structure were modified to increase the transmittance of the red wave band. The difference in transmittance between the common prism spectral structure and the newly designed one is shown in Figure 5. In the calibration of image color, the white-balance algorithm was designed to retain the advantage of the increased red-band transmittance of the prism spectral structure.
The CMOS sensors are GSENSE2011E devices from Gpixel Inc. The GSENSE2011E features 2 e− readout noise, an 87.5 dB intra-scene dynamic range, and a frame rate of up to 668 fps. The number of pixels is 2048 (H) × 1152 (W), the pixel size is 6.5 μm × 6.5 μm, and the detector has an outstanding quantum efficiency of 72% at 595 nm. The spectral response of the detector is shown in Figure 6.
The F-number of the lens is 4, and the working wavelength band is 400–700 nm. The lens has a fixed focal length of 12 mm. The resolution is 2048 × 1152 and the shutter speed is 1/60 s. The raw images are read from the three single-channel CMOS sensors over a Camera Link interface, and the compressed output is a single-channel HD-SDI stream produced by FPGA data processing.
The experimental process is shown in Figure 7. By setting the experimental parameters, including the light source, camera, test color block, and working distance, a dataset of R, G, and B values can be derived. Figure 7a shows the process of setting the experimental parameters; Figure 7b shows the flow chart of the experimental process.
The process of acquiring the experimental data is shown in Figure 8. As shown in Figure 8a, the underwater images were captured first, and then 7 color blocks were extracted from each image. Over the changes in distance, lamps, and cameras, 144 images and 1008 blocks were captured. Figure 8b shows the acquisition process of the experimental image data. Lastly, the R, G, and B values of the experimental images were extracted.
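Each extracted block contributes one R, G, B triple. A simple way to obtain it, assuming hypothetical block coordinates, is to average the pixels of a square patch:

```python
import numpy as np

def block_mean_rgb(image, top, left, size):
    """Mean R, G, B over a square patch of an H x W x 3 image -
    one sample per extracted color block (coordinates are hypothetical)."""
    patch = image[top:top + size, left:left + size, :]
    return patch.reshape(-1, 3).mean(axis=0)

# Synthetic image with one uniform block
img = np.zeros((100, 100, 3))
img[10:20, 10:20] = [200.0, 80.0, 40.0]
print(block_mean_rgb(img, 10, 10, 10))  # → [200.  80.  40.]
```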

3. Experimental Results

Based on the acquisition process of the experimental image data (shown in Figure 8), 144 color images and 1008 selected blocks were captured in the experiment. As it was impractical to show them all, we present the experimental results as data rather than images. To remove the effect of brightness, the image data were converted from the RGB color space to the CIELAB color space according to Equations (2)–(4), where L* is the parameter measuring brightness (which is ignored in this paper) and a* and b* are the parameters measuring color [41].
L^* = 116 \, (Y/Y_0)^{1/3} - 16
a^* = 500 \, [f(X/0.9505) - f(Y)]
b^* = 200 \, [f(Y) - f(Z/1.0891)]   (2)
where X, Y, and Z are derived from Equation (3), f(t) is given by Equation (4), Y_0 is the reference white luminance, and R, G, and B are the image data values of the R, G, and B channels.
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.412453 & 0.357580 & 0.180423 \\ 0.212671 & 0.715160 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}   (3)
f(t) = \begin{cases} t^{1/3}, & t > \left(\frac{6}{29}\right)^3 \\ \frac{1}{3}\left(\frac{29}{6}\right)^2 t + \frac{4}{29}, & \text{otherwise} \end{cases}   (4)
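Equations (2)–(4) can be combined into a small conversion routine; a minimal sketch assuming linear RGB values normalized to [0, 1] and a reference white Y_0 = 1:

```python
import numpy as np

# Eq. (3): linear RGB -> XYZ matrix
M = np.array([[0.412453, 0.357580, 0.180423],
              [0.212671, 0.715160, 0.072169],
              [0.019334, 0.119193, 0.950227]])

def f(t):
    # Eq. (4): cube root above the threshold, linear segment below
    delta = 6.0 / 29.0
    return np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4.0 / 29.0)

def rgb_to_lab(rgb):
    """Convert a linear RGB triple in [0, 1] to CIELAB via Eqs. (2)-(4),
    assuming Y_0 = 1."""
    X, Y, Z = M @ np.asarray(rgb, dtype=float)
    L = 116.0 * np.cbrt(Y) - 16.0          # Eq. (2), first line
    a = 500.0 * (f(X / 0.9505) - f(Y))     # Eq. (2), second line
    b = 200.0 * (f(Y) - f(Z / 1.0891))     # Eq. (2), third line
    return float(L), float(a), float(b)

print(rgb_to_lab([1.0, 1.0, 1.0]))  # reference white: L* ≈ 100, a* ≈ 0, b* ≈ 0
```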
The values of a* and b* at different distances with the four types of lamps and four types of cameras are shown in Figure 9.
By calculating the difference in the values of a* and b* between the experimental images and the calibration images, the color deviation parameters Δa* and Δb* for the different lamps and cameras were obtained, as shown in Figure 10. It can be concluded that the color deviation increases with the imaging distance.
Based on the data in Figure 10a, the color deviation can be derived as in Table 1. We can conclude that the color deviation of the underwater image with the warm-light LED is the smallest (Δa*² + Δb*² = 26.58).
Based on the data in Figure 10b, the color deviation with the different cameras can be calculated as in Table 2. We can conclude that the color deviation with the 3×CMOS RGB camera is the smallest (Δa*² + Δb*² = 25.25).
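The deviation measure used in Tables 1 and 2 follows directly from the Δa* and Δb* values; a sketch with hypothetical (L*, a*, b*) triples, not the paper's measured data:

```python
def color_deviation(lab_image, lab_reference):
    """Δa*² + Δb*², the chromatic deviation between an underwater image
    block and its calibration reference (L* is ignored, as in the paper)."""
    _, a_img, b_img = lab_image
    _, a_ref, b_ref = lab_reference
    return (a_img - a_ref) ** 2 + (b_img - b_ref) ** 2

# Hypothetical (L*, a*, b*) triples
print(color_deviation((50.0, 12.0, -30.0), (50.0, 8.0, -27.0)))  # → 25.0
```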

4. Conclusions and Discussion

By analyzing the light attenuation in water and the imaging model, we proposed a method to correct color deviation by adjusting the spectral component of the light source and the spectral response of the detector. An experimental setup for analyzing the color deviation of underwater images was then built, and the color deviation with different light sources and different cameras was analyzed quantitatively. The experimental results showed that the color reproduction of underwater images with a warm-light LED is superior to that with the other lamps, and that the color deviation of underwater images with the 3×CMOS RGB camera, whose spectral response function was adjusted, is smaller than that with the other cameras.

Author Contributions

Conceptualization, X.Q.; methodology, X.Q. and Y.W.; analysis, X.Q. and B.L.; manuscript writing, X.Q., B.L., K.L., C.L., B.Z. and J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Hainan Provincial Natural Science Foundation of China (Nos. 618QN308 and 2019RC258), the Youth Innovation Promotion Association CAS (No. 2020361), the Science and Technology project of Sanya city (Nos. 2018KS03 and 2017YD13), the Knowledge Innovation Engineering Frontier Project of the Chinese Academy of Sciences (No.Y770011001), and the project of the Key Laboratory of Space Laser Information Transmission and Detection Technology of the Chinese Academy of Sciences (No. KJL-2021-001).

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available from the corresponding author upon request.

Acknowledgments

The authors thank Brian Connolly (Physik Instrumente) for the helpful discussion on the translation stage.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ahn, J.; Yasukawa, S.; Sonoda, T.; Nishida, Y.; Ishii, K.; Ura, T. An Optical Image Transmission System for Deep Sea Creature Sampling Missions Using Autonomous Underwater Vehicle. IEEE J. Ocean. Eng. 2018, 45, 350–361. [Google Scholar] [CrossRef]
  2. Lu, H.; Li, Y.; Zhang, Y.; Chen, M.; Serikawa, S.; Kim, H. Underwater Optical Image Processing: A Comprehensive Review. Mob. Netw. Appl. 2017, 22, 1204–1211. [Google Scholar] [CrossRef]
  3. Du, M.; Peng, X.; Zhang, H.; Ye, C.; Dasgupta, S.; Li, J.; Li, J.; Liu, S.; Xu, H.; Chen, C.; et al. Geology, environment and life of the deepest part of the world’s ocean. Innovation 2021, 2, 100109. [Google Scholar] [CrossRef]
  4. Lee, D.; Kim, G.; Kim, D.; Myung, H.; Choi, H.T. Vision-based object detection and tracking for autonomous navigation of underwater robots. Ocean. Eng. 2012, 48, 59–68. [Google Scholar] [CrossRef]
  5. Feng, F.; Wu, G.; Wu, Y.; Miao, Y.; Liu, B. Algorithm for Underwater Polarization Imaging Based on Global Estimation. Acta Opt. Sin. 2020, 40, 75–83. [Google Scholar]
  6. Huang, H.; Sun, Z.; Liu, S.; Di, Y.; Xu, J.; Liu, C.; Xu, R.; Song, H.; Zhan, S.; Wu, J. Underwater Hyperspectral imaging for in situ underwater microplastic detection. Sci. Total Environ. 2021, 776, 145960. [Google Scholar] [CrossRef]
  7. Xu, Y.; Liu, X.; Cao, X.; Huang, C.; Liu, E.; Qian, S.; Liu, X.; Wu, Y.; Dong, F.; Qiu, C.W.; et al. Artificial Intelligence: A Powerful Paradigm for Scientific Research. Innovation 2021, 2, 100179. [Google Scholar] [CrossRef]
  8. Zhang, L.; Lei, J.; Zhang, R.; Zhang, F.; Zhang, G.; Zhu, Y. Polarization characteristic analysis of orthogonal reflectors with a side-scanning galvanometer in a laser communication terminal system. Appl. Opt. 2020, 59, 9944–9955. [Google Scholar] [CrossRef] [PubMed]
  9. Lin, J.; Du, Z.; Yu, C.; Ge, W.; Lü, W.; Deng, H.; Zhang, C.; Chen, X.; Zhang, Z.; Xu, J. Machine-vision-based acquisition, pointing, and tracking system for underwater wireless optical communications. Chin. Opt. Lett. 2021, 19, 050604. [Google Scholar] [CrossRef]
  10. Liu, B.; Liu, Z.; Men, S.; Li, Y.; Ding, Z.; He, J.; Zhao, Z. Underwater Hyperspectral Imaging Technology and Its Applications for Detecting and Mapping the Seafloor: A Review. Sensors 2020, 20, 4962. [Google Scholar] [CrossRef]
  11. Vlachos, M.; Skarlatos, D. An Extensive Literature Review on Underwater Image Colour Correction. Sensors 2021, 21, 5690. [Google Scholar] [CrossRef]
  12. Li, Y.; Lu, H.; Serikawa, S. Underwater Image Devignetting and Color Correction; Springer: Cham, Switzerland, 2015. [Google Scholar]
  13. Zhao, X.; Jin, T.; Qu, S. Deriving inherent optical properties from background color and underwater image enhancement. Ocean. Eng. 2015, 94, 163–172. [Google Scholar] [CrossRef]
  14. Chang, H.H.; Cheng, C.Y.; Sung, C.C. Single Underwater Image Restoration Based on Depth Estimation and Transmission Compensation. IEEE J. Ocean. Eng. 2019, 44, 1130–1149. [Google Scholar] [CrossRef]
  15. Kan, L.; Yu, J.; Yang, Y.; Liu, H.; Wang, J. Color correction of underwater images using spectral data. In Proceedings of the Optoelectronic Imaging and Multimedia Technology III, Beijing, China, 9–11 October 2014; International Society for Optics and Photonics: Bellingham, WA, USA, 2014. [Google Scholar]
  16. Boffety, M.; Galland, F.; Allais, A. Color image simulation for underwater optics. Appl. Opt. 2012, 51, 5633. [Google Scholar] [CrossRef]
  17. Kaeli, J.W.; Singh, H.; Murphy, C.; Kunz, C. Improving Color Correction for Underwater Image Surveys. In Proceedings of the OCEANS’11 MTS/IEEE KONA, Waikoloa, HI, USA, 19–22 September 2011. [Google Scholar]
  18. McGlamery, B.L. A Computer Model for Underwater Camera Systems. Ocean Opt. 1980, 6, 221–231. [Google Scholar]
  19. Jaffe, J.S. Computer Modeling and the Design of Optimal Underwater Imaging Systems. IEEE J. Ocean. Eng. 1990, 15, 101–111. [Google Scholar] [CrossRef]
  20. Liu, Y.; Xu, H.; Shang, D.; Li, C.; Quan, X. An Underwater Image Enhancement Method for Different Illumination Conditions Based on Color Tone Correction and Fusion-Based Descattering. Sensors 2019, 19, 5567. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Guo, Y.; Song, H.; Liu, H.; Wei, H.; Yang, P.; Zhan, S.; Wang, H.; Huang, H.; Liao, N.; Mu, Q.; et al. Model-based restoration of underwater spectral images captured with narrowband filters. Opt. Express 2016, 24, 13101. [Google Scholar] [CrossRef] [PubMed]
  22. Lu, H.; Li, Y.; Nakashima, S.; Serikawa, S. Turbidity Underwater Image Restoration Using Spectral Properties and Light Compensation. Ieice Trans. Inf. Syst. 2016, 99, 219–227. [Google Scholar] [CrossRef] [Green Version]
  23. Park, E.; Sim, J.Y. Underwater Image Restoration Using Geodesic Color Distance and Complete Image Formation Model. IEEE Access 2020, 8, 157918–157930. [Google Scholar] [CrossRef]
  24. He, K.M.; Sun, J.; Tang, X.O. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [PubMed]
  25. Galdran, A.; Pardo, D.; Picón, A.; Gila, A.A. Automatic Red-Channel Underwater Image Restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145. [Google Scholar] [CrossRef] [Green Version]
  26. Drews, P., Jr.; do Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission Estimation in Underwater Single Images. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, Sydney, Australia, 1–8 December 2013. [Google Scholar]
  27. Meng, H.; Yan, Y.; Cai, C.; Qiao, R.; Wang, F. A hybrid algorithm for underwater image restoration based on color correction and image sharpening. Multimed. Syst. 2020, 1–11. [Google Scholar] [CrossRef]
  28. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color Balance and Fusion for Underwater Image Enhancement. IEEE Trans. Image Processing 2017, 27, 379–393. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Pramunendar, R.A.; Shidik, G.F.; Supriyanto, C.; Andono, P.N.; Hariadi, M. Auto Level Color Correction for Underwater Image Matching Optimization. Int. J. Comput. Sci. Netw. Secur. 2013, 13, 18–23. [Google Scholar]
  30. Hassan, N.; Ullah, S.; Bhatti, N.; Mahmood, H.; Zia, M. The Retinex based improved underwater image enhancement. Multimed. Tools Appl. 2021, 80, 1839–1857. [Google Scholar] [CrossRef]
  31. Tang, Z.; Jiang, L.; Luo, Z. A new underwater image enhancement algorithm based on adaptive feedback and Retinex algorithm. Multimed. Tools Appl. 2021, 80, 28487–28499. [Google Scholar] [CrossRef]
  32. Jobson, D.J.; Rahman, Z.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Processing 2002, 6, 965–976. [Google Scholar] [CrossRef] [Green Version]
  33. Zhang, S.; Wang, T.; Dong, J.; Yu, H. Underwater image enhancement via extended multi-scale Retinex. Neurocomputing 2017, 245, 1–9. [Google Scholar] [CrossRef] [Green Version]
  34. Iqbal, K.; Salam, R.A.; Osman, A.; Talib, A.Z. Underwater Image Enhancement Using an Integrated Colour Model. IAENG Int. J. Comput. Sci. 2007, 34, 239–244. [Google Scholar]
  35. Ghani, A. Underwater image quality enhancement through integrated color model with Rayleigh distribution. Appl. Soft Comput. 2014, 27, 219–230. [Google Scholar] [CrossRef]
  36. Deng, J.; Luo, G.; Zhao, C. UCT-GAN: Underwater image colour transfer generative adversarial network. IET Image Processing 2020, 14, 3613–3622. [Google Scholar] [CrossRef]
  37. Chen, X.; Zhang, P.; Quan, L.; Yi, C.; Lu, C. Underwater Image Enhancement based on Deep Learning and Image Formation Model. arXiv 2021, arXiv:2101.00991v2. [Google Scholar]
  38. Lu, J.; Li, N.; Zhang, S.; Yu, Z.; Zheng, H.; Zheng, B. Multi-scale adversarial network for underwater image restoration. Opt. Laser Technol. 2019, 110, 105–113. [Google Scholar] [CrossRef]
  39. Azmi, K.Z.; Ghani, A.S.; Yusof, Z.M.; Ibrahim, Z. Natural-based underwater image color enhancement through fusion of swarm-intelligence algorithm. Appl. Soft Comput. 2019, 85, 105810. [Google Scholar] [CrossRef]
  40. Li, C.; Guo, J.; Guo, C. Emerging from Water: Underwater Image Color Correction Based on Weakly Supervised Color Transfer. IEEE Signal Processing Lett. 2018, 25, 323–327. [Google Scholar] [CrossRef] [Green Version]
  41. Connolly, C.; Fleiss, T. A study of efficiency and accuracy in the transformation from RGB to CIELAB color space. IEEE Trans. Image Process. 1997, 6, 1046. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Schematic of the underwater imaging process.
Figure 2. The experimental setup for color reproduction analysis of underwater images: (a) the experimental setup of capturing underwater images; (b) the attenuation coefficient of tap water.
Figure 3. The spectral component curves of different lamps.
Figure 4. The design of the 3×CMOS RGB camera: (a) the imaging principle of a traditional color camera; (b) the imaging principle of the 3×CMOS RGB camera; (c) a photo of the prism-based spectral structure; (d) the transmittance curves of the R, G, and B channels; (e) a photo of the 3×CMOS RGB camera movement.
Figure 5. The difference in transmittance between common prism spectral structure and the newly designed prism spectral structure: (a) the spectral transmittance of common prism spectral structure; (b) the spectral transmittance of newly designed prism spectral structure.
Figure 6. The spectral response of CMOS sensor GSENSE2011E.
Figure 7. The experimental process: (a) the process of setting the experimental parameters; (b) the flow chart of the experimental process.
Figure 8. The process of acquiring experimental data: (a) the schematic of data acquisition; (b) the flow chart of data acquisition.
Figure 9. The values of a* and b* at different distances with different lamps and different cameras.
Figure 10. The values of Δa* and Δb* with four kinds of lamps and four kinds of cameras: (a) Δa* and Δb* with different lamps; (b) Δa* and Δb* with different cameras.
Table 1. The color deviation and nonlinear errors with different lamps.

| Metric | Condition | Day-Light LED | Warm-Light LED | Cold-Light LED | Incandescent Lamp |
|---|---|---|---|---|---|
| Δa* | In air | 12.67 | 14.47 | 15.90 | 15.91 |
| Δa* | 2 m | 26.20 | 22.67 | 23.87 | 24.27 |
| Δa* | Mean value in water | 21.28 | 18.54 | 19.95 | 19.74 |
| Δb* | In air | 12.41 | 17.02 | 17.20 | 21.09 |
| Δb* | 2 m | 30.45 | 24.68 | 26.13 | 32.29 |
| Δb* | Mean value in water | 22.27 | 19.04 | 19.62 | 28.47 |
| √(Δa*² + Δb*²) | In air | 17.73 | 22.34 | 23.43 | 26.42 |
| √(Δa*² + Δb*²) | 2 m | 40.17 | 33.51 | 35.40 | 40.40 |
| √(Δa*² + Δb*²) | Mean value in water | 30.80 | 26.58 | 27.98 | 34.65 |
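The combined-deviation rows in Table 1 are the Euclidean norm of the corresponding Δa* and Δb* entries. As a minimal sketch (using the warm-light LED in-air values taken directly from the table), the check is:

```python
import math

# Δa* and Δb* for the warm-light LED measured in air (Table 1)
delta_a, delta_b = 14.47, 17.02

# Combined chromatic deviation: sqrt(Δa*^2 + Δb*^2)
deviation = math.sqrt(delta_a**2 + delta_b**2)
print(round(deviation, 2))  # → 22.34, matching the table entry
```

The same computation reproduces every √(Δa*² + Δb*²) row from the Δa* and Δb* rows above it, up to rounding.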
Table 2. The color deviation with different cameras.

| Metric | Condition | Camera No.1 | Camera No.2 | Camera No.3 | 3×CMOS RGB Camera |
|---|---|---|---|---|---|
| Δa* | In air | 10.87 | 22.06 | 13.87 | 12.16 |
| Δa* | 2 m | 23.42 | 26.94 | 23.99 | 22.68 |
| Δa* | Mean value in water | 18.48 | 24.52 | 18.77 | 17.75 |
| Δb* | In air | 15.97 | 24.80 | 13.00 | 13.94 |
| Δb* | 2 m | 27.06 | 31.25 | 26.50 | 24.84 |
| Δb* | Mean value in water | 24.13 | 28.27 | 17.85 | 17.96 |
| √(Δa*² + Δb*²) | In air | 19.32 | 33.19 | 19.01 | 18.50 |
| √(Δa*² + Δb*²) | 2 m | 35.79 | 41.26 | 35.74 | 32.82 |
| √(Δa*² + Δb*²) | Mean value in water | 30.39 | 37.41 | 25.91 | 25.25 |
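Ranking the cameras by their mean in-water deviation identifies the best performer programmatically. A minimal sketch, with the √(Δa*² + Δb*²) mean-in-water values taken from Table 2:

```python
# Mean in-water values of sqrt(Δa*^2 + Δb*^2) for each camera (Table 2)
mean_deviation = {
    "Camera No.1": 30.39,
    "Camera No.2": 37.41,
    "Camera No.3": 25.91,
    "3xCMOS RGB camera": 25.25,
}

# The camera with the smallest chromatic deviation
best = min(mean_deviation, key=mean_deviation.get)
print(best)  # → 3xCMOS RGB camera
```

This reproduces the paper's conclusion that the 3×CMOS RGB camera yields the smallest color deviation of the four cameras tested.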
Quan, X.; Wei, Y.; Li, B.; Liu, K.; Li, C.; Zhang, B.; Yang, J. The Color Improvement of Underwater Images Based on Light Source and Detector. Sensors 2022, 22, 692. https://doi.org/10.3390/s22020692
