In recent years, many methods have been proposed to enhance and restore underwater images to tackle these issues. Image enhancement algorithms [2] include histogram equalization, white balancing, wavelet transforms, fusion methods, etc. These algorithms enhance image contrast and sharpen image details to improve quality without relying on an underwater imaging model. While such methods are easy to apply and rest on simple principles, they do not address the root cause of underwater image degradation. Image restoration algorithms based on the Jaffe–McGlamery underwater imaging model [3,4] address this limitation. The core of this approach is to solve for two unknowns in the model, the transmission and the background light, and then restore the image. After the pioneering work of He et al. [5], many related methods and variants emerged [6,7]. Although these restoration algorithms can be applied to image dehazing and deblurring, there is still room for improvement [8]. For better results, combined approaches [8,9,10,11,12,13,14,15,16] have been proposed that couple image enhancement techniques, such as the Gray-World assumption, semantic white balance, low-pass filtering, and polarization technology, with image restoration algorithms. While these combined methods achieve better results, they require more processing time and more complex computations.
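For context, the simplified form of this imaging model that the restoration methods discussed below invert is commonly written (the notation here is generic rather than taken from [3,4]) as
$$ I^c(x) = J^c(x)\,t^c(x) + B^c\bigl(1 - t^c(x)\bigr), \qquad c \in \{r, g, b\}, $$
where $I^c$ is the observed image, $J^c$ the scene radiance to be recovered, $t^c$ the transmission, and $B^c$ the background light. Once $t^c$ and $B^c$ have been estimated, the restored image follows from $J^c(x) = \bigl(I^c(x) - B^c\bigr)/t^c(x) + B^c$.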
Most of the methods mentioned above attempt to determine the distance between the camera and the target and approximate the intensity value at the farthest point as the background light. For instance, the dark channel prior (DCP) algorithm proposed by He et al. [5] exploits the fact that scattered light increases the luminance of the dark channel: as scene depth increases, so does the dark channel luminance. The background light is then obtained by averaging the gray values of the top 1% brightest pixels in the dark channel. Furthermore, under the assumption that the dark channel of a clear, fog-free image tends to zero, the transmission map is obtained by substituting the dark channel value into the Jaffe–McGlamery underwater imaging model [3,4]. Drews et al. [6] proposed the underwater dark channel prior (UDCP) algorithm, a variant of DCP that excludes the red channel because it is severely attenuated in underwater images. Carlevaris-Bianco et al. [1] computed the difference between the red channel and the maximum of the blue and green channels to obtain the transmittance, which decreases with increasing distance; this observation underlies the MIP algorithm, in which the grayscale value at the point of lowest transmittance, corresponding to the farthest distance, is taken as the background light. Dai et al. [14] exploited the fact that objects farther from the camera appear more blurred than closer ones; they used a quadtree search to find a small region of the image that is the flattest, has the least color variation, and is the most blurred, and took the mean grayscale value of this region as the background light. Additionally, Peng et al. [15] combined several of the above methods to solve for the unknowns in the Jaffe–McGlamery underwater imaging model [3,4]. Furthermore, various deep learning-based methods for scene depth and lighting estimation have been proposed recently. For instance, Wang et al. [17] presented an occlusion-aware light field depth estimation network with channel and view attention, which uses a coarse-to-fine approach to fuse sub-aperture images from different viewpoints, enabling robust and accurate depth estimation even in the presence of occlusions. Song et al. [7] used deep learning to find the linear relationship between the distance and the maximum values of the blue-green and red channels; the transmission map is obtained in a similar way, and the grayscale value at the farthest distance is taken as the background light. Ke et al. [16] comprehensively considered color, saturation, and detail information to construct scene depth and edge maps for estimating the transmission map. Zhan et al. [18] proposed a lighting estimation framework that combines regression-based and generation-based methods, using an additional depth branch to achieve precise regression of the lighting distribution. All of these methods estimate the background light by approximating the farthest point in the underwater image as a point at infinity from the camera. They therefore work well only on horizontally captured images, as shown in Figure 1a. However, in applications such as underwater pipeline tracking, underwater mine clearance, underwater terrain exploration, and seafood fishing, the cameras on UUVs typically point vertically or diagonally downward. In the resulting images, shown in Figure 1b, every point is very close to the camera, so distance-based background light estimation methods are unsuitable. Moreover, the DCP algorithm, which depends on the background light to solve for the transmission map, tends to propagate this error into the result, further degrading quality.
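To make the distance-based estimation concrete, the following NumPy/SciPy sketch computes a DCP-style background light as the mean color of the top 1% brightest dark-channel pixels; the function and parameter names are illustrative and do not come from the cited papers.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dcp_background_light(img, patch=15, top_fraction=0.01):
    """DCP-style background light estimate (illustrative sketch, not the
    exact implementation of [5]). `img` is an H x W x 3 float array in [0, 1]."""
    # Dark channel: per-pixel minimum over RGB, followed by a local minimum filter.
    dark = minimum_filter(img.min(axis=2), size=patch)

    # Take the top 1% brightest dark-channel pixels; DCP assumes these lie at
    # the farthest, haziest scene points and therefore reveal the background light.
    n_top = max(1, int(top_fraction * dark.size))
    idx = np.argsort(dark.ravel())[-n_top:]

    # Background light: mean color of the original image at those positions.
    return img.reshape(-1, 3)[idx].mean(axis=0)
```

For overhead images such as Figure 1b, the brightest dark-channel pixels no longer correspond to a distant background, which is precisely why estimates of this kind break down.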
To solve this problem, image restoration methods that do not rely on the farthest point from the camera are needed. Currently, the most common solution is an end-to-end network that takes the original image as input and directly outputs the restored image [19,20,21,22,23,24,25,26,27]. For instance, Tang et al. [25] unified the unknown variables of the underwater imaging model and predicted the resulting single-variable linear physical model with a lightweight convolutional neural network (CNN) to generate clear images directly. Zhang et al. [26] enhanced the three channel-wise features of the image and fused them with a CNN to address non-uniform illumination. Han et al. [27] employed contrastive learning and generative adversarial networks, maximizing the mutual information retained from the original images during restoration. Although these methods perform well in underwater scenes, they require high-performance graphics processing units (GPUs) with large memory, making them poorly suited to UUVs with limited hardware resources. This limitation motivated the underwater overhead image restoration method proposed in this paper, which is designed to run on central processing units (CPUs).
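As a schematic illustration of the end-to-end formulation discussed above, the sketch below maps a degraded RGB image directly to a restored one; the architecture is a deliberately tiny, assumed one and does not reproduce any of the cited models.

```python
import torch
import torch.nn as nn

class TinyRestorationNet(nn.Module):
    """Minimal end-to-end restorer: degraded RGB image in, restored RGB image out.
    Purely illustrative; the networks in [19-27] are far deeper and are trained
    with task-specific (e.g., adversarial or contrastive) losses."""

    def __init__(self, width=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, width, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Predict a residual correction and clamp the result to valid intensities.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

# Usage: restore a single 256 x 256 image with values in [0, 1].
net = TinyRestorationNet()
restored = net(torch.rand(1, 3, 256, 256))
```

Even this toy model runs a forward pass over every pixel of a full-resolution image; the much deeper networks cited above compound this cost, which is why they are usually deployed on GPUs rather than on the CPUs available on most UUVs.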
Recently, Li et al. [8] proposed a simple and effective underwater image restoration method based on the principles of minimum information loss and histogram priors. This method is distance-independent and can be implemented on CPUs. Inspired by this work, these principles and priors are applied here to propose a distance-independent background light calculation method. The novelty of the method is that it takes a global perspective and constructs a loss function based on the expected effect of image restoration, thereby obtaining the background light without relying on distance. In addition, considering the real-time requirements of image processing, Jamil et al. [28] classified the information in an image into three categories, useful, redundant, and irrelevant, and discussed the pros and cons of various lossy image compression methods. The background light is a global variable: it is related to the overall color tone of the entire image but not to its fine details. Inspired by this observation, when solving for the background light, this paper adopts a lossy compression step that reduces the image resolution, thereby improving the efficiency of the algorithm. The main contributions of this work are as follows: