### *2.5. Sonar Propagation Attenuation Model*

Burguera et al. [18] analyzed the side-scan sonar propagation model, which can be expressed by

$$I(p) = K \cdot \Phi(p) \cdot R(p) \cdot \cos(\beta(p)) \tag{3}$$

where *I*(*p*) is the echo intensity, i.e., the received side-scan sonar data intensity; *K* is the normalization coefficient; Φ(*p*) is the acoustic penetration intensity; *R*(*p*) is the reflection intensity of the acoustic wave on the seabed; and β(*p*) is the incidence angle of the sonar. Since the seabed reflection intensity is the quantity of interest, *R*(*p*) can be recovered from the measured *I*(*p*) through the propagation model. The acoustic penetration intensity model can be derived from the sensitivity model proposed by Kleeman and Kuc [19], but Burguera et al. [18] excluded the influence of the sonar incidence angle on the side-scan sonar intensity, so the correction effect is limited. Moreover, evaluating the model requires the sonar's parameter and propagation information, so this method is not suitable for gray scale correction of a single side-scan sonar image.
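As a minimal sketch of how the propagation model can be inverted, the snippet below recovers *R*(*p*) from *I*(*p*) using Equation (3). All values (the function name, the penetration and angle arrays) are synthetic illustrations; a real application would require the sonar's actual parameter and propagation information.

```python
import numpy as np

def reflection_intensity(I, K, phi, beta):
    """Invert Eq. (3): R(p) = I(p) / (K * Phi(p) * cos(beta(p)))."""
    return I / (K * phi * np.cos(beta))

# Synthetic example: one ping of 5 samples (all values assumed).
K = 1.0
phi = np.array([0.9, 0.8, 0.7, 0.6, 0.5])     # acoustic penetration intensity
beta = np.deg2rad([10, 20, 30, 40, 50])       # incidence angles in radians
R_true = np.array([0.5, 0.6, 0.7, 0.8, 0.9])  # assumed seabed reflectivity
I = K * phi * R_true * np.cos(beta)           # forward model, Eq. (3)
R_est = reflection_intensity(I, K, phi, beta) # recovered reflectivity
```

Inverting the model requires every term except *R*(*p*) to be known, which is exactly why the method is restricted to data with full sonar parameter information.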

### *2.6. Beam Pattern*

The beam pattern is determined by the working characteristics and physical design of the sonar sensor array [7], but it is also one of the causes of the uneven gray level of side-scan sonar images. Chang and colleagues [6,7,9] determined the energy distribution as a function of angle by summing the energy levels at each angle over the whole data series. From these statistics, the average energy at each angle can be obtained. Finally, the inverse of this average is applied as a correction factor to each datum in the time series. However, this method needs to account for changes in seabed topography and seabed sediments; otherwise, the corrected image will be poor.
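The angle-averaged correction described above can be sketched as follows. The array names and the synthetic beam gain are assumptions for illustration, not values from the cited works; `intensity` stands for a (pings × angles) block of echo levels.

```python
import numpy as np

rng = np.random.default_rng(0)
beam_gain = np.array([0.4, 0.7, 1.0, 0.7, 0.4])  # synthetic beam pattern
scene = rng.uniform(0.5, 1.5, size=(200, 5))     # underlying reflectivity
intensity = scene * beam_gain                    # observed data, one column per angle

# 1) Average energy at each angle over the whole data series.
mean_per_angle = intensity.mean(axis=0)
# 2) Inverse of the average as the correction factor (normalized to the
#    overall mean so the global brightness is preserved).
correction = mean_per_angle.mean() / mean_per_angle
# 3) Apply the factor to every datum in the time series.
corrected = intensity * correction
```

By construction, the per-angle averages of `corrected` are equalized, which removes the systematic angular trend but, as noted above, also absorbs any genuine topographic or sediment variation into the correction.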

## **3. Gray Scale Correction Method Based on Retinex**

Retinex theory, proposed by Land in 1963 [20], underlies commonly used algorithms for optical image defogging and low-illumination image enhancement. It decomposes an image into an illumination map and a reflectance map, expressed as

$$S(x, y) = R(x, y) \cdot L(x, y) \tag{4}$$

where *S*(*x*,*y*) is the original image, *R*(*x*,*y*) is the reflectance map, *L*(*x*,*y*) is the illumination map, and · denotes element-wise multiplication. The reflectance map carries the essential scene information in the image, and the illumination map carries the brightness information of the environment. A change in the brightness information produces a change in the gray values of the image. Therefore, to ensure that the image faithfully reflects the scene information, the influence of illumination change on the original image must be reduced.

The commonly used image enhancement methods based on Retinex include Single Scale Retinex (SSR), Multi-Scale Retinex (MSR), Multi-Scale Retinex with Color Restoration (MSRCR), and Multiscale Retinex with Chromaticity Preservation (MSRCP) [21–24].

The SSR method transforms Equation (4) into the logarithmic domain to obtain the reflectance map:

$$r(x, y) = \log R(x, y) = \log\left(\frac{S(x, y)}{L(x, y)}\right) \tag{5}$$

First, the low-pass function in Equation (6) is used to estimate the illumination map, which corresponds to the low-frequency part of the original image. The reflectance map, represented by the high-frequency component of the original image, is then obtained with Equation (7). Finally, the logarithmic reflectance map is restored to produce the corrected image.

$$F(x, y) = \lambda e^{-\frac{(x^2 + y^2)}{c^2}} \tag{6}$$

$$r(x, y) = \log S(x, y) - \log[F(x, y) \otimes S(x, y)] \tag{7}$$

where *c* is the Gaussian surround scale, λ is a normalization constant chosen so that *F*(*x*,*y*) integrates to 1, and ⊗ denotes the convolution operation.
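The SSR procedure of Equations (5)–(7) can be sketched as below, with the Gaussian low-pass estimate of the illumination map divided out in the log domain. The function name, the default scale `sigma` (playing the role of *c* in Equation (6)), and the small `eps` guard against log(0) are assumptions for illustration; `scipy.ndimage.gaussian_filter` stands in for the normalized Gaussian convolution.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr(image, sigma=15.0, eps=1e-6):
    """Single Scale Retinex: log-reflectance of a 2-D grayscale image."""
    image = image.astype(np.float64) + eps
    illumination = gaussian_filter(image, sigma)  # F(x,y) ⊗ S(x,y), Eq. (6)
    return np.log(image) - np.log(illumination)   # Eq. (7)

# Toy usage: a flat scene under a left-to-right illumination gradient.
gradient = np.linspace(0.2, 1.0, 64)
img = np.tile(gradient, (64, 1))
r = ssr(img)
```

In practice the log-reflectance `r` is rescaled (e.g., linearly stretched to [0, 255]) to restore a displayable corrected image, as described in the text.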

An image corrected by the SSR algorithm may suffer from blurring and over-correction, so in the MSR algorithm the original image is processed by a multi-scale low-pass function, i.e., a weighted sum of the single-scale low-pass outputs used in the SSR algorithm. The algorithm is implemented with Equation (8).

$$r(x, y) = \sum_{k=1}^{K} W_k \left[ \log S(x, y) - \log[F_k(x, y) \otimes S(x, y)] \right] \tag{8}$$

where *K* is the number of scales, i.e., the number of low-pass functions *F<sub>k</sub>*(*x*,*y*); when *K* = 1, MSR reduces to SSR. *W<sub>k</sub>* is the weight of the *k*-th scale. *K* is usually set to 3 with *W*<sub>1</sub> = *W*<sub>2</sub> = *W*<sub>3</sub> = 1/3.
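Equation (8) amounts to averaging SSR outputs over several scales. A minimal sketch follows; the three scales use the common equal weights *W<sub>k</sub>* = 1/3 noted above, while the specific sigma values are assumptions chosen for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msr(image, sigmas=(15.0, 80.0, 250.0), eps=1e-6):
    """Multi-Scale Retinex, Eq. (8): weighted sum of SSR outputs."""
    image = image.astype(np.float64) + eps
    log_s = np.log(image)
    weights = np.full(len(sigmas), 1.0 / len(sigmas))  # W_1 = ... = W_K = 1/K
    r = np.zeros_like(image)
    for w, sigma in zip(weights, sigmas):
        # One SSR term per scale: log S - log(F_k ⊗ S)
        r += w * (log_s - np.log(gaussian_filter(image, sigma)))
    return r

img = np.outer(np.linspace(0.2, 1.0, 64), np.linspace(1.0, 0.3, 64))
r_msr = msr(img)
```

With a single scale in `sigmas`, the function reproduces the SSR result of Equation (7), matching the *K* = 1 case stated above.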

As images processed with the MSR algorithm can exhibit color imbalance, the MSRCR and MSRCP algorithms were developed on the basis of MSR. The MSRCR algorithm uses a color restoration factor to avoid the color imbalance caused by local contrast enhancement. MSRCP applies the MSR algorithm together with the intensity information of each image channel to enhance the image while preserving chromaticity.

At present, several newer algorithms build on Retinex. Guo et al. [25] proposed a simplified enhancement model called low-light image enhancement (LIME). It estimates the illumination map with the max-RGB technique, which takes the maximum of the R, G, and B channels of a color image at each pixel; refines the illumination map with a structure-aware prior; applies gamma correction as a non-linear re-estimation of the refined illumination map; and finally uses Retinex to obtain the enhanced image. The naturalness preserved enhancement (NPE) [26] algorithm is a non-linear enhancement method for images with non-uniform illumination: the image is decomposed into an illumination map and a reflectance map by a filter, the illumination map is transformed, and the two maps are then merged again to form the final enhanced image. Simultaneous reflectance and illumination estimation (SRIE) [27] is a weighted variational model for the simultaneous estimation of the reflectance and illumination maps; the estimated illumination map is processed to enhance the image. Fu et al. [28] proposed a multi-derivation fusion method (MF) that adjusts the illumination by fusing multiple derivations of the initially estimated illumination map.
