Article

Improved Adaptive Finch Clustering Sonar Segmentation Algorithm Based on Data Distribution and Posterior Probability

1 Department of Electronics, Faculty of Information Science and Engineering, Ocean University of China, Qingdao 266000, China
2 China Shipbuilding Corporation No. 710 Research Institute, Yichang Testing Technology Research Institute, Yichang 443003, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(15), 3297; https://doi.org/10.3390/electronics12153297
Submission received: 3 July 2023 / Revised: 27 July 2023 / Accepted: 28 July 2023 / Published: 31 July 2023

Abstract

This study proposes a side-scan sonar target detection technique that runs on a CPU or a low-performance GPU to meet the requirements of underwater target detection. The methodology uses classic, computationally light image processing to rectify the gray distribution of the original side-scan sonar data, improve image segmentation, and supply the data distribution probability to the clustering algorithm. After assessing the attributes of the processed image, a modified adaptive Finch clustering technique segments the image and removes image voids, and posterior information is then used to assign a classification label to each pixel. In data playback from the Tuandao experiment, the characteristics of the connected regions are analyzed according to the imaging principle of side-scan sonar and the original shape and size of the target. The predicted target results are combined with AUV navigation information to obtain the predicted target longitude and latitude, which are sent to the AUV master control system to guide subsequent planning. Results from the Jiaozhou Bay sea test demonstrate that the traditional target detection algorithm put forward in this paper can be integrated into a low-performance GPU to detect and locate targets. Detection accuracy and speed are both strong, enabling real-time autonomous sonar detection.

1. Introduction

With the growing emphasis on marine power, ocean research, seabed resource discovery, marine territory control, and marine economic growth all carry significant strategic importance. Sound waves are frequently employed in marine operations because they attenuate comparatively little during ocean propagation and offer a reasonably long detection distance. Side-scan sonar (SSS) captures images underwater by measuring the strength of sound-wave echoes [1,2]. Mounted on an autonomous underwater vehicle (AUV), it can achieve long-range, extensive unmanned ocean sensing [3,4,5,6,7,8]. Target recognition in side-scan sonar images is extremely difficult owing to the relative complexity of the seafloor environment, noise interference caused by the equipment's operation during AUV navigation, and image distortion caused by changes in speed and attitude during navigation [9]. Additionally, unlike ordinary optical images, side-scan sonar images do not project objects directly; rather, they reflect the maximum outline of objects and the reflectivity of their materials, losing information about surface shape and color and making target recognition more difficult. Because manual image interpretation relies largely on the individual experience of sonar personnel, using computers effectively and quickly to assist human target detection and to realize the automatic construction of ocean detection systems is a current research hotspot.
The goal of target recognition in side-scan sonar images is to distinguish the target's reflected signal from the bottom terrain in the images [10]. The primary procedures are feature extraction from the data, target categorization, and result evaluation. Frequently employed target categorization algorithms include traditional template matching, threshold segmentation, and machine learning approaches such as support vector machines, neural networks, and decision trees [11]. Accuracy, robustness, and real-time performance of target recognition are important considerations in practical applications. As deep learning has gained popularity in the twenty-first century, a wide range of network topologies have been developed; however, as detection accuracy standards continuously rise, the computational volume also grows exponentially. To meet real-time requirements, network models with high detection accuracy would have to run on devices without GPUs or with poorly performing GPUs. Although extracting target features is more difficult for detection techniques based on image features, such techniques carry a significantly lower computational load than a deep learning network and can be directed to the CPU for target detection. There have been numerous studies on object segmentation using these algorithms in recent years. To identify underwater targets, Kucukbayrak et al. [12] first extracted synthetic features from underwater acoustic signals and then trained a topological hidden Markov model. Duan et al. [13] constrained Markov random fields with prior information and developed a bi-feature co-occurrence matrix of image edge contour and image texture to solve for the maximum posterior probability. Hao et al. [14] enhanced the active contour model and produced positive experimental findings on a sizable area with challenging topography.
To achieve precise image segmentation, Daniel et al. [15] employed decision trees to perform rigid registration of object and shadow portions of sonar images, respectively. Tian et al. [16] considerably enhanced the C-means clustering segmentation algorithm by adding the membership-degree concept, which increased the segmentation accuracy of sonar images. To prevent local minimization in repeated iterations, Huo Guanying's team [17] first employed edge-driven constraints for rough image segmentation before using the region-scalable fitting (RSF) model for fine segmentation. In 2019, Sarfraz et al. [18] proposed a clustering method that requires no hyperparameters and uses first-nearest-neighbor detection to find the maximum link information of images. From the aforementioned literature, it can be concluded that current processing of target objects in sonar images is largely knowledge-dependent [19]: either the calculation matrix is predetermined or image features are extracted manually based on image characteristics. Of course, these processing techniques are not universally applicable. Present conventional algorithms, moreover, mainly focus on image segmentation [20,21,22,23,24,25] and can only distinguish the target's highlight from the seabed's reverberation background. Furthermore, because these methods require substantial computation and run slowly on most hardware [26,27], autonomous real-time detection of submarine targets is not viable with them.
An improved adaptive Finch clustering sonar segmentation technique based on data distribution and posterior probability is proposed for CPU and low-performance GPU taking into account the need for autonomous real-time target recognition for underwater vehicles. The following are the important points and contributions:
  • A gray scale correction and data distribution calculation method based on side-scan sonar image is proposed, including the effective filtering of speckle noise in the image, and the gray scale distribution correction of the image to improve the gray scale distribution regularity of the image;
  • An adaptive Finch clustering rough segmentation algorithm based on data distribution is proposed. The adjacency matrix is constructed by using the data distribution probability of the corrected image, and the cluster analysis is carried out to fill the noise hole in the sonar image, and the highlight area of the target object, the shaded area, and the seabed reverberation area are initially divided;
  • An improved Finch clustering target detection and target discrimination method based on a posteriori probability of side-scan sonar images is proposed. The detection target can be identified by the area mark, position mark and relative distance mark of the connected region. The algorithm can be integrated on CPU or low-performance GPU. The algorithm has passed a large number of laboratory data measurement certifications and carried out sea test experiments, which meets the needs of AUV real-time target detection, and further improves the autonomous performance of AUV.
This paper is organized as follows: Section 2 introduces the real-time traditional object detection architecture. Section 3 introduces the distribution extraction method for sonar image data, the adaptive Finch clustering rough segmentation algorithm, and the improved Finch clustering target detection algorithm based on image posterior probability. Section 4 describes the sea test and simulation results of the suggested method, which confirm the algorithm's efficacy and accuracy. Section 5 concludes the paper.

2. Real-Time Traditional Target Detection Architecture

The "Sailfish-324" underwater vehicle, independently developed by the Underwater Vehicle Laboratory (UVL) of the Ocean University of China, consists of a navigation and positioning system, a control system, a propulsion system, a communication system, and a target detection system. Figure 1 shows in detail the hardware structure of the target detection system of the "Sailfish-324" underwater vehicle adopted in this paper, including the side-scan sonar left and right transducers, the electronic cabin, a micro host (board), an NVIDIA embedded computing module (GPU), and the AUV master control system (PC104). The side-scan sonar's left and right transducers produce sound waves and detect their echoes; the board records the echo intensity in real time to enable storage and display; the GPU handles online analysis of the sonar data stream [28], runs the real-time target detection algorithm to interpret underwater targets, computes target locations from navigation data, and stores and transmits the final target identification results with location data to the AUV main control system. The primary function of PC104, the AUV's central control module, is to upload and download data across the various systems. Within the target detection system, it receives the sensor data from the AUV navigation system and the identification and positioning results from the target detection system, and carries out autonomous path planning [29,30,31] and task planning accordingly.
The object detection technique suggested in this research, based on conventional image processing, runs on the GPU (NVIDIA Jetson TX2) of the AUV object detection system. Figure 2 displays the algorithm's flowchart. The algorithm performs its calculations in two parallel processes. Process 1 receives the original XTF data scanned and returned by the sonar sensor, saves the data, and passes it to Process 2. Process 2 parses the original data handed over by Process 1 and completes the processing and recognition of the image data obtained from the parsing. The two processes start at the same time and compute in parallel, which not only ensures the integrity of sonar data reception but also realizes real-time processing of the data. The distribution probability extraction of sonar image data comprises two components: gray-scale distribution correction [9] and the probability distribution calculation of the denoised image. Gray scale in the rectified image is evenly distributed, and the image is relatively smooth. To generate a rough segmentation of the image and increase image connectivity, the recovered divisible distribution probability is passed into the Finch clustering adjacency matrix as the initial condition. Real-time underwater target recognition is achieved by using the posterior probability to assess the connectivity of the coarsely segmented image and determine the threshold value. The position results computed after multiple detections are then corrected and averaged, and the final target recognition information is sent to the AUV master control system to complete subsequent path planning.

3. The Proposed Method

The initial acoustic information supplied by the SSS contains considerable noise interference owing to pollutants in the water and ocean-current interference in the marine environment, which poses significant challenges for manual or automated image interpretation.

To facilitate accurate classification in subsequent target detection, we first explain how to calculate the distribution probability and how to smooth and correct the gray distribution of the original image. Next, we describe the adaptive Finch clustering method, and finally we propose an improved Finch clustering target analysis algorithm based on the posterior probability of the sonar image from this experiment.

3.1. Image Data Distribution Processing

The initial sonar data returned from the transducer contain many high-intensity speckle noise points, mostly brought on by mutual interference between the returned sound waves within a side-scan unit. Because the sea floor is uneven rather than flat, the echo scattering field is unpredictable: although the frequency of the emitted sound wave is constant, the unequal scattering field makes the phases incoherent. The echo-belt imaging shows that the noise intensity varies from pixel to pixel, with some noise pixels brighter than the mean and others darker. The irregularity of this noise dispersion makes noise reduction extremely challenging.
Because of its scanning properties, the original sonar image data produced by the SSS form a strip. In this experiment, the Shark-S900U side-scan sonar is set to a fixed height of 8 m, a scanning range of 30 m, and a scanning frequency of 900 kHz. A 6 s narrowband image with a resolution of 4800 × 110 is created by splicing the 110 pings of data it receives. The operating mode and imaging principle of the side-scan sonar on an AUV are illustrated in Figure 3.
The most important aspect of image preprocessing for side-scan sonar target recognition is how to smooth the image without losing pixel information. A sonar image has three basic components: the target highlight region, the bottom reverberation area, and the shadowed area that is blocked or from which no sound wave returns. Efficient and precise adjustment of the gray distribution of these three regions significantly affects the ensuing target detection performance.

3.1.1. Image Gray Level Smoothing Correction

Image filtering is a powerful tool for reducing image noise. This article compares a number of widely used image filtering algorithms, including mean filtering, bilateral filtering, median filtering, and guided filtering [32]; discusses smooth correction of the sonar image gray distribution; and selects the filtering method best suited to the object detection approach proposed here.

The main idea of mean filtering is to take the average of the $(2n+1)^2$ pixels around a calculation point as the real pixel value of that point; the calculation formula is shown in Equation (1):
$$P'(x,y) = \frac{1}{(2n+1)^2} \sum_{j=y-n}^{y+n} \sum_{i=x-n}^{x+n} P(i,j), \qquad (1)$$

where $P(i,j)$ is the pixel value at point $(i,j)$, $P'(x,y)$ is the filtered pixel value, and $n$ sets the size of the selected rectangular window. Figure 4 shows the comparison between the mean-filtered target image and the original image, together with the images processed by the other three filtering methods.
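As an illustration, here is a minimal NumPy sketch of the box (mean) filter in Equation (1). Replicate border padding is our assumption; the paper does not state how it handles image borders.

```python
import numpy as np

def mean_filter(img, n=1):
    """Box (mean) filter with a (2n+1) x (2n+1) window, as in Equation (1).

    Border pixels use replicate padding (an assumption; the paper does not
    specify its border strategy).
    """
    img = np.asarray(img, dtype=np.float64)
    padded = np.pad(img, n, mode="edge")
    out = np.zeros_like(img)
    k = 2 * n + 1
    # Sum the k*k shifted copies of the image, then divide by the window size.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / k**2

# A flat image is unchanged; an isolated bright pixel is spread over its window.
flat = mean_filter(np.full((5, 5), 7.0), n=1)
spike = np.zeros((3, 3))
spike[1, 1] = 9.0
smoothed = mean_filter(spike, n=1)
```

An OpenCV `cv2.blur` call would do the same job in production; the explicit loop here only mirrors the double sum of Equation (1).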

3.1.2. Actively Select Image Resolution

According to the imaging principle of side-scan sonar shown in Figure 3, the 6 s data produced by the sonar contain the intensity information of the same horizontal position received by the port and starboard transducers. Because sonar images contain blind areas (the region directly below the AUV) and echoes from acoustic waves reaching the sea surface, as in part (a) of Figure 3, the PSNR index is used to compare images at various resolutions processed with the various filtering techniques. The calculation formula is Equation (2), and Table 1 shows the results. The larger the PSNR, the closer the processed image is to the original; analysis of the table shows that the image obtained is smoother when the port and starboard images are filtered separately.
$$\mathrm{PSNR} = 10 \lg \frac{255^2 \times MN}{\sum_{i,j} \bigl( f(i,j) - f'(i,j) \bigr)^2}, \qquad (2)$$

where $M \times N$ is the image size, $f(i,j)$ is the original image, and $f'(i,j)$ is the filtered image.
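Equation (2) can be sketched directly; this is a standard PSNR computation for 8-bit images, not code from the paper.

```python
import numpy as np

def psnr(original, processed):
    """PSNR in dB per Equation (2): 10*log10(255^2 * M*N / sum squared error)."""
    original = np.asarray(original, dtype=np.float64)
    processed = np.asarray(processed, dtype=np.float64)
    sse = np.sum((original - processed) ** 2)
    if sse == 0:
        return float("inf")  # identical images
    m, n = original.shape
    return 10 * np.log10(255.0**2 * m * n / sse)
```

Note that $255^2 MN / \sum (f - f')^2$ is just $255^2$ over the mean squared error, so this matches the usual PSNR definition.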
To improve the quality of image processing, quantitative histogram analysis can be used to extract the gray distribution probability of the image, and the filter parameters can then be optimized using the image's statistical data. For instance, histogram analysis can identify the suitable filter scale and filter type to enhance the clarity and contrast of image details. Each of the image filtering techniques presented in the previous two sections is examined quantitatively with the histogram; the analysis findings are displayed in Figure 5 and Figure 6.
The histogram analysis shows that in the original sonar image, Figure 5a, it is difficult to distinguish the seabed reverberation area from the shadow area beyond the blind-area highlight and some target highlight regions; the pixels with gray values below 100 are distributed erratically with obvious jitter. Mean filtering, Figure 5b, has a significant smoothing effect on speckle noise, and the gray values show obvious peaks near 30 and 80, greatly improving the segmentability of the image. Bilateral filtering, Figure 5c, guided filtering, Figure 5d, and median filtering, Figure 5e, cannot filter speckle noise well: the distribution of low gray values in their histograms still lacks any obvious characteristics.
Additionally, Figure 6 compares the port-side image from the 6 s data with the full image. By filtering only the port-side image, target extraction in subsequent images is significantly improved: the distribution characteristics of the gray values are more obvious, the peak value is significantly higher than that of the entire image, and the number of pixels at the middle segmentation threshold is significantly reduced.

3.2. Adaptive Finch Clustering Algorithm Based on Data Distribution Probability

Based on graph theory, the Finch clustering technique uses the K-nearest-neighbor rule, adjacency matrices, and the maximum-flow algorithm to rapidly evaluate the distance and similarity of data before clustering. Finch clustering is superior to other clustering algorithms in that it processes huge amounts of data quickly and handles noisy data and outliers effectively.

3.2.1. Adjacency Matrix Equation

In Finch clustering, the initial parameter k and the data distance measure have a significant impact on the K-nearest-neighbor step, and different choices frequently yield different clustering outcomes. Because Finch clustering depends on the choice of starting values, while the distribution of target items in a sonar image is not uniform and absolute distance carries no meaningful relation, this work proposes using the data distribution to automatically create the initial adjacency matrix $E_{ij}$ and thereby implement adaptive Finch clustering.
According to the histogram analysis in Section 3.1.2, the gray distribution of the mean-filtered image has three obvious maxima, which, from small to large, correspond to the shadow and seawater area, the seabed background area, and the target area, respectively. The pixels corresponding to these maxima are taken as the core points of the clusters and written into the core point set $A = \{A_1, A_2, A_3, \ldots, A_N\}$, $A \in \mathbb{R}^{N \times d}$, where $N$ is the total number of samples and each sample point has $d$ attribute values.
The remaining sample points are divided into the boundary point set $B = \{B_1, B_2, B_3, \ldots, B_N\}$, $B \in \mathbb{R}^{N \times d}$, and the noise point set $C = \{C_1, C_2, C_3, \ldots, C_N\}$, $C \in \mathbb{R}^{N \times d}$, according to the histogram analysis: when the distribution probability of a sample point is greater than the minimum value, it is assigned to the boundary point set $B$; when it is less than the minimum value, it is assigned to the noise point set $C$.
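The distribution extraction and core-point selection just described can be sketched in NumPy as follows. `gray_distribution` and `histogram_maxima` are illustrative names of our own; a real implementation would also suppress insignificant local maxima before keeping the three dominant peaks.

```python
import numpy as np

def gray_distribution(img):
    """Normalized gray-level histogram of an 8-bit image: the data
    distribution probability that seeds the clustering."""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=256)
    return hist / hist.sum()

def histogram_maxima(prob):
    """Gray levels at local maxima of the distribution; for the
    mean-filtered sonar images these are the peaks (shadow/seawater,
    seabed background, target) whose pixels become cluster core points."""
    return [g for g in range(1, len(prob) - 1)
            if prob[g] > prob[g - 1] and prob[g] >= prob[g + 1]]

# Hypothetical image whose gray levels cluster near 30, 80, and 230.
img = np.array([[30, 30, 30, 80],
                [80, 80, 230, 30]])
peaks = histogram_maxima(gray_distribution(img))
```

Pixels at the peak gray levels would then form the core set $A$, with the remaining pixels split into $B$ and $C$ by thresholding their distribution probability.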
The sample points in the noise point set C are compared with the sample points in the boundary point set B, and the Mahalanobis distance is used for the calculation; the calculation formula is as follows:
$$D(C_i, B_j) = \sqrt{(x_i - \mu_j)^T S_j^{-1} (x_i - \mu_j)}, \qquad (3)$$

where $D(C_i, B_j)$ represents the Mahalanobis distance between the sample point $C_i$ and the boundary point $B_j$, $x_i$ is the principal component vector of the sample point $C_i$, $\mu_j$ is the mean vector of the cluster containing $B_j$, and $S_j$ is the covariance matrix associated with $B_j$.
The boundary point $B_j$ with the smallest Mahalanobis distance $D(C_i, B_j)$ is taken as the nearest information point of the sample point $C_i$. In the same way, each sample point $B_i$ in the boundary point set $B$ is compared with the core points in the set $A$, and the core point $A_j$ with the smallest Mahalanobis distance $D(B_i, A_j)$ is taken as the nearest information point of $B_i$.
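The nearest-information-point assignment by Equation (3) can be sketched as below; `nearest_information_point` is an illustrative name, and the toy statistics are ours.

```python
import numpy as np

def mahalanobis(x, mu, S):
    """Equation (3): sqrt((x - mu)^T S^{-1} (x - mu))."""
    diff = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(S) @ diff))

def nearest_information_point(x, means, covs):
    """Index of the candidate whose cluster statistics give the smallest
    Mahalanobis distance to sample x."""
    return int(np.argmin([mahalanobis(x, mu, S) for mu, S in zip(means, covs)]))

# Hypothetical 2-D example; with identity covariances the Mahalanobis
# distance reduces to the Euclidean distance.
means = [np.array([0.0, 0.0]), np.array([10.0, 10.0])]
covs = [np.eye(2), np.eye(2)]
```

In practice the covariance matrix must be well-conditioned; a pseudo-inverse or regularization term would guard against singular $S_j$.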
According to the nearest neighbor information obtained above, the adjacency matrix is obtained by filling in Formula (4).
$$E_{i,j} = \begin{cases} 1 & \text{if } j = k_i^1 \ \text{or} \ k_j^1 = i \ \text{or} \ k_i^1 = k_j^1 \\ 0 & \text{otherwise} \end{cases}, \qquad (4)$$

where $k_i^1$ represents the nearest-neighbor index of sample $i$. The formula means: (1) each sample is connected to its nearest neighbor; (2) if the nearest neighbor of sample $j$ is $i$, the two are also connected; (3) samples $i$ and $j$ are also connected if they share the same nearest neighbor.
Through the above derivation, a complete and sparse adjacency matrix is obtained in which sample points connected together belong to the same cluster. This yields the first hierarchical clustering result, which implements the transitive join according to the first-nearest-neighbor index only. Figure 7 illustrates this algorithm on a 3 × 3 image: Figure 7a is the original image, with the core point set $A$, the boundary point set $B$, and the noise point set $C$ indicated; Figure 7b is the adjacency matrix measured by Mahalanobis distance; and Figure 7c is the first hierarchical clustering result for Figure 7a.
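The first-neighbor linking of Formula (4) and the transitive join can be sketched as follows. The toy neighbor indices are hypothetical; in the paper they come from the Mahalanobis distances above.

```python
import numpy as np

def first_neighbor_adjacency(nn):
    """Adjacency matrix of Formula (4): nn[i] is the index of the
    nearest information point of sample i."""
    n = len(nn)
    E = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and (nn[i] == j or nn[j] == i or nn[i] == nn[j]):
                E[i, j] = 1
    return E

def connected_components(E):
    """Transitive join: samples connected in E receive the same cluster label."""
    n = E.shape[0]
    labels = -np.ones(n, dtype=int)
    current = 0
    for s in range(n):
        if labels[s] != -1:
            continue
        stack = [s]
        labels[s] = current
        while stack:
            u = stack.pop()
            for v in np.nonzero(E[u])[0]:
                if labels[v] == -1:
                    labels[v] = current
                    stack.append(v)
        current += 1
    return labels

# Toy nearest-neighbor indices for six samples: 0-2 link into one
# cluster and 3-5 into another.
nn = np.array([1, 0, 1, 4, 3, 4])
labels = connected_components(first_neighbor_adjacency(nn))
```

Since every sample contributes only its first neighbor, the matrix stays sparse and the whole first level of clustering is linear-time apart from the neighbor search.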

3.2.2. Simple Hierarchical Clustering

According to the results in Section 3.2.1, a single pass of clustering yields many cluster classes with high similarity, because in the initial selection of cluster centers we set the points with the highest distribution probability as cluster core points. This produces many initial core points, each of which becomes a new cluster class after the first level of clustering. If we then used a traditional clustering method such as K-means, we would have to calculate the distance between every pair of clusters, an extremely expensive calculation with complexity $O(N^2 \log N)$. Owing to the particularity of sonar images, at a scanning frequency of 900 kHz a single-side image already reaches a resolution of 2400 pixels, and 6 s of data comprise more than 100,000 samples; calculating the inter-cluster distance between every pair of core points would take far too long.
In the adaptive Finch algorithm proposed in this paper, each boundary point retains two nearest-neighbor information points in the initial calculation, and the second-smallest nearest-neighbor information is written into a second adjacency matrix $F_{i,j}$ according to Formula (5). The difference between Formulas (5) and (4) is that $F_{i,j}$ is not transitively propagated; it records adjacency only from the direct neighbor information. This reduces the amount of computation in iterative calculations and avoids repeated inter-cluster calculations. The core points of $E_{i,j}$ and $F_{i,j}$ are written into the core-point adjacency matrix $G_{i,j}$ according to Formula (6), so each core point also has its own neighboring core points. We then only need to calculate the inter-cluster distance between neighboring core points; Formula (7) gives this distance as a Euclidean distance with an introduced weight, defined in Formula (8). In this way, the computational complexity is greatly reduced, to $O(N \log N)$.
$$F_{i,j} = \begin{cases} 1 & \text{if } j = k_i^1 \ \text{or} \ k_j^1 = i \\ 0 & \text{otherwise} \end{cases}, \qquad (5)$$

$$G_{i,j} = \begin{cases} 1 & \text{if } E_{i,j} = 1 \ \text{or} \ F_{i,j} = 1, \ \text{but} \ E_{i,j} F_{i,j} \neq 1 \\ 0 & \text{otherwise} \end{cases}, \qquad (6)$$
$$D(\mu_i, \mu_j) = \sqrt{\sum_{t=1}^{m} \omega_i \left( \mu_i^t - \mu_j^t \right)^2}, \qquad (7)$$

where $D(\mu_i, \mu_j)$ represents the inter-cluster distance between the cluster centers $\mu_i$ and $\mu_j$, $m$ represents the dimension of the data, and $\mu_j^t$ represents the attribute value of the cluster center $\mu_j$ in dimension $t$.
$$\omega_i = \frac{pix(\mu_i)}{\frac{1}{n} \sum_{j=1}^{n} pix(\mu_j)}, \qquad (8)$$

where $pix(\mu_i)$ represents the pixel value of the cluster center $\mu_i$, and $\frac{1}{n} \sum_{j=1}^{n} pix(\mu_j)$ is the average pixel value over the cluster centers.
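A minimal sketch of the weighted inter-cluster distance of Formulas (7) and (8); the function name and argument layout are ours, and the weight is passed in as the center's pixel value and the mean pixel value over the centers.

```python
import numpy as np

def weighted_intercluster_distance(mu_i, mu_j, pix_i, pix_mean):
    """Formulas (7)-(8): Euclidean distance between cluster centers mu_i
    and mu_j, scaled by the weight w_i = pix(mu_i) / mean center pixel value."""
    w = pix_i / pix_mean  # Formula (8)
    diff = np.asarray(mu_i, dtype=float) - np.asarray(mu_j, dtype=float)
    return float(np.sqrt(np.sum(w * diff ** 2)))  # Formula (7)
```

Because $\omega_i$ is constant over the dimensions $t$, the weight effectively rescales a plain Euclidean distance by $\sqrt{\omega_i}$, biasing merges toward clusters with below-average brightness.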
Then, the average pixel value of each cluster is recalculated and taken as the new cluster center according to Formula (9), and the next iteration is carried out. The iterative process continually updates the cluster centers and redistributes the data points, so that the distortion function $Q$, given in Formula (10), shrinks until it falls below the set threshold or no longer changes, at which point the algorithm has converged.
$$y_i = \frac{1}{n} \sum_{j=1}^{n} pix(x_j), \qquad (9)$$

$$Q(y_1, y_2, \ldots, y_n, x_1, x_2, \ldots, x_n) = \frac{1}{n} \sum_{i=1}^{n} \left\| x_i - y_i \right\|^2, \qquad (10)$$

where $x_i$ denotes a sample data point and $y_i$ denotes the cluster center to which $x_i$ is assigned.
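One update step of Formulas (9) and (10) can be sketched as follows; the 1-D pixel data and labels are a hypothetical toy example.

```python
import numpy as np

def update_centers(pix, labels, k):
    """Formula (9): each new cluster center is the mean pixel value
    of the samples currently assigned to that cluster."""
    return np.array([pix[labels == c].mean() for c in range(k)])

def distortion(pix, labels, centers):
    """Formula (10): mean squared distance of samples to their centers."""
    return float(np.mean((pix - centers[labels]) ** 2))

# One update on toy 1-D pixel data with two clusters near 30 and 80.
pix = np.array([28.0, 32.0, 30.0, 78.0, 82.0, 80.0])
labels = np.array([0, 0, 0, 1, 1, 1])
centers = update_centers(pix, labels, 2)
q = distortion(pix, labels, centers)
```

Iterating the center update and reassignment until $Q$ stops shrinking is the convergence loop described above.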
The findings demonstrate that directly using the extracted gray distribution probability as the initial core points of Finch clustering not only speeds up the convergence of the iterative computation but also avoids the uncertainty in experimental results caused by randomly initialized center points. Figure 8 displays the result of processing the original image from the sonar's port transducer with the clustering approach proposed in this paper; the resulting cluster center array is [[80.88554], [30.078537], [232.76544]].

3.2.3. Eliminating Minimal Noise Points

Analysis of the clustering effect in Figure 8 reveals that the processed image still contains a significant number of tiny noise interference spots. The gray distribution and brightness of the original sonar data are uneven owing to the unevenness of the seabed and the presence of plankton in the water, which makes it extremely difficult to separate the shadow zone from the background region.
Given that the image contains both very small interference areas and relatively large areas carrying effective information, the next question for the algorithm is how to preserve the effective areas while deleting or reducing the background interference. A very small interference region appears concretely in the adjacency matrix $E_{i,j}$ as a core point $A_i$ that has only a very small number of nearest information points. We therefore reclassify into the boundary point set $B$ any core point whose number of nearest information points falls below a critical value. The specific calculation formula is as follows:
$$A_i = \begin{cases} B_{n+1} & \text{if } \sum_{j} A_{i,j} < \varepsilon \\ A_i & \text{otherwise} \end{cases}$$

where $A_i$ represents the $i$th core point in the core point set, $n$ represents the number of samples currently in the boundary point set, $A_{i,j}$ represents the adjacency index value between the core point $A_i$ and the boundary point $B_j$, and $\varepsilon$ represents the set threshold.
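The reclassification rule above can be sketched as a row-sum test on the core adjacency matrix; the toy matrix and the name `demote_sparse_cores` are ours.

```python
import numpy as np

def demote_sparse_cores(A_adj, eps):
    """Core points whose number of nearest information points (the row
    sum of the core adjacency matrix) falls below eps are demoted to
    the boundary point set B."""
    return A_adj.sum(axis=1) < eps  # True -> move this core point to B

# Hypothetical core adjacency: core 1 has only one information point,
# so it is absorbed into the boundary set and its tiny region vanishes.
A_adj = np.array([[0, 1, 1, 1],
                  [1, 0, 0, 0],
                  [1, 0, 0, 1],
                  [1, 0, 1, 0]])
demoted = demote_sparse_cores(A_adj, eps=2)
```

The demoted points then pick up a nearest information point among the remaining cores on the next pass, which is what erodes the isolated noise spots in Figure 9.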
After this enhancement, the minimal isolated noise points are removed, yielding the clean adaptive Finch clustering effect shown in Figure 9. Analysis of the processing results in Figure 9 shows that the target area (marked with a red box) is free of small interference bands, whether highlighted or hidden against the black background. The minimal noise points in the overall image are also eroded, while the larger isolated noise regions merge into a single contiguous shadow area, significantly improving the image segmentation.

3.3. Improved Finch Clustering Target Detection Algorithm Based on Posterior Probability

3.3.1. Principle of Posterior Probability Theory

In an image processed with the adaptive Finch clustering technique based on data distribution probability, the target item can already be clearly distinguished by the human eye. The key research question of this experiment is how to use the clustering result to separate the target object from the background. The CPU-oriented traditional target detection strategy investigated in this paper is not suited to the high computing volume of neural networks. Considering how human eyes discern the target item in Figure 9b, the primary reference index is whether the image contains pairs of reasonably close, independent, suitably sized bright and dark patches. In computer terms, this means determining whether paired white and black continuous regions exist within the defined threshold range and whether the centers of gravity of the two regions are reasonably close to one another. Based on these reference indications, this research proposes the improved Finch clustering analysis algorithm based on image posterior probability, which can detect and discriminate these regions to achieve precise target detection and localization.
(1) According to the side-scan sonar imaging method, the image size of a metal ball with a 1 m diameter is relatively consistent under scanning conditions at a fixed height of 8 m. A threshold analysis was carried out on the connected-region results of 50 images of the target items; Figure 10 depicts the findings. As can be observed, the pixel counts of the target object's clearly imaged highlight regions are concentrated between 680 and 1350, whereas those of the shadow regions are concentrated between 1000 and 2000. Therefore, for the connected regions of the two colors, regions whose pixel counts fall within the threshold range are retained, and regions outside the range are classified as seabed reverberation background.
(2) The port and starboard scan images of the side-scan sonar were examined separately. In the port image, the shadow area of the target should lie to the left of the highlighted region; in the starboard image, it should lie to the right. Correspondingly, the center abscissa of the highlighted connected region in the port map should be larger than that of the shadow region, and the opposite holds for the starboard map.
(3) The highlighted area of strong echo intensity reflected by the target and the shaded area, where no echo arrives because the sound wave is blocked by the target, should be relatively coherent: since the propagation loss of sound in water is small, only the blocked area lacks echo intensity information. Consequently, the center coordinates of a paired black-and-white connected area should be reasonably close in the image, as measured by Euclidean distance. An investigation of 50 images of the target object showed that a clear image of the target can be recognized when the Euclidean distance between the center of the highlighted area and the center of the shadow area is less than 130 pixels.
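The pairing test in (3) reduces to one centroid distance check. A minimal sketch (hypothetical helper; the 130-pixel bound is the empirical threshold quoted above):

```python
import math

def paired_target(bright_region, shadow_region, max_dist=130.0):
    """Decide whether a bright/shadow region pair forms one target:
    their centroids must lie within max_dist pixels (Euclidean)."""
    def centroid(region):
        ys, xs = zip(*region)
        return sum(ys) / len(ys), sum(xs) / len(xs)
    (y1, x1), (y2, x2) = centroid(bright_region), centroid(shadow_region)
    return math.hypot(y1 - y2, x1 - x2) <= max_dist
```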

3.3.2. Target Determination in Cluster Analysis Based on Posterior Probability

Cluster analysis based on posterior probability is a clustering technique rooted in Bayesian theory. In this procedure, a collection of data samples must be partitioned into several categories. Following the initial data processing covered in detail in Section 3.1 and Section 3.2, Bayesian theory is used to determine the likelihood that a given data sample belongs to a particular category.
Two posterior probabilities are frequently employed for classifying the data types discussed in this article:
(1) The area posterior probability of the connected region
In cluster analysis, related data points are grouped together into a single cluster. To establish whether a region should be treated as such a collection of related data points in the posterior-probability-based cluster analysis of this research, we must first define the area threshold of a connected region.
In Section 3.2, the adjacency matrix $G_{i,j}$ obtained after the adaptive Finch clustering iteration completes contains the determined cluster centers and the boundary point set belonging to each cluster center. By checking whether the number of points in the boundary set of each cluster center lies within the threshold range, we can judge whether the cluster area conforms to the theoretical area of the current target object. The calculation formula is shown in (12).
$$S_{A_i} = \begin{cases} \mathrm{target} & \text{if } B_{i,j} \in (min,\, max) \\ \mathrm{background} & \text{otherwise} \end{cases} \tag{12}$$
where $S_{A_i}$ indicates the number of boundary points belonging to the core point $A_i$, $min$ indicates the lower bound of the threshold interval, and $max$ indicates the upper bound.
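Formula (12) is a simple interval test on the boundary count of each cluster; a minimal sketch (hypothetical helper):

```python
def classify_cluster(boundary_count, lo, hi):
    """Formula (12): a cluster whose boundary point count lies inside
    the open interval (lo, hi) is labeled target, else background."""
    return "target" if lo < boundary_count < hi else "background"
```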
The setting of this area threshold must be determined for the specific application scenario; Section 3.3.1 describes how in detail. In general, the size of the individual clusters in the data set and the density of the data points can guide the choice. According to the Tuandao experimental data, the ocean current coefficient, AUV sailing speed, and AUV attitude angle stability all affect the imaged size of the target object, so the empirical formula for the area posterior probability of a connected region can be written as follows:
$$P_{area} = \left(\frac{1}{5}C + \frac{41}{102}V + \frac{37}{100}A\right)\pi\, pix_x^2 \,/\, S_{A_i}, \tag{13}$$
where $C$ is the ocean current coefficient, $V$ is the current sailing speed of the AUV (m/s), $A$ is the attitude angle stability of the AUV, $Range$ is the scanning range of the side-scan sonar, $\theta$ is the scanning angle of the side-scan sonar, $pix_x$ is the imaging width of the target under ideal conditions, calculated by Formula (14), and $w$ is the single-side imaging width.
$$pix_x = \frac{x\, w}{\sqrt{Range^2 - h^2}}, \tag{14}$$
where $x$ is the actual width of the scanned object, and $h$ is the altitude of the AUV above the seabed.
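Under one plausible reading of Formulas (13) and (14) (an assumption, since the typeset originals are partly garbled), the area posterior can be sketched as follows; all argument names are illustrative:

```python
import math

def pix_x(x, w, scan_range, h):
    """One reading of Formula (14): ideal imaging width of a target of
    true width x, given single-side image width w, sonar scan range,
    and AUV altitude h (slant-range correction)."""
    return x * w / math.sqrt(scan_range ** 2 - h ** 2)

def p_area(C, V, A, pixx, S_Ai):
    """One reading of Formula (13): empirical area posterior probability
    of a connected region whose boundary point count is S_Ai."""
    return (C / 5 + 41 * V / 102 + 37 * A / 100) * math.pi * pixx ** 2 / S_Ai
```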
(2) The posterior probability of the relative positions of the highlighted connected area and the shadow-blocked connected area
In some cluster analyses, certain specific properties of the data points must be considered, such as their positions in the image. In the posterior-probability-based image segmentation and object detection of this experiment, the relative positions of the highlighted connected regions and the shadow-blocked connected regions can assist the cluster analysis.
In the adjacency matrix $G_{i,j}$ obtained after the adaptive Finch clustering iteration of Section 3.2, the core point set $A$ contains the position information of the data points. It is only necessary to extract the position of each data point in the set and compute the relative distance between the two regions according to Formula (15) to determine whether it lies within the distance threshold.
$$D(A_i, B_j) = \begin{cases} \mathrm{target} & \text{if } \lVert A_i - B_j \rVert_2 \in (min,\, max) \\ \mathrm{background} & \text{otherwise} \end{cases} \tag{15}$$
The distance threshold must likewise be chosen for the specific application scenario; Section 3.3.1 describes how in detail. According to the Tuandao experimental data, the AUV roll attitude coefficient affects the imaged relative position of the target object, so the empirical formula for the posterior probability of the relative position of the connected regions can be written as:
$$P_{position} = 0.86\, Roll\, \frac{w}{2h} \,/\, D(A_i, B_j), \tag{16}$$
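Formulas (15) and (16) can be sketched together. The grouping of terms in (16) is an assumption (one reading of the garbled original); function and argument names are illustrative:

```python
import math

def d_pair(a, b, lo, hi):
    """Formula (15): a highlight/shadow pair is a target candidate when
    the Euclidean distance between centers a and b lies in (lo, hi)."""
    dist = math.hypot(a[0] - b[0], a[1] - b[1])
    return ("target" if lo < dist < hi else "background"), dist

def p_position(roll, w, h, dist):
    """One reading of Formula (16): relative-position posterior, with
    roll the AUV roll attitude coefficient, w the single-side image
    width, h the AUV altitude, and dist the pair distance D(A_i, B_j)."""
    return 0.86 * roll * w / (2 * h) / dist
```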
In conclusion, in cluster analysis based on posterior probability, suitable posterior probabilities must be defined to determine the set of data points in each cluster. These posterior probabilities are usually calculated from the known prior distribution and the sample data; suitable algorithms are needed both to compute the probabilities and to search for optimal solutions that yield the best clustering results.

4. Discussion

To verify the side-scan sonar image target detection method proposed in this paper, the Tuandao data provided by the UVL Laboratory of Ocean University of China were used for testing. Table 2 lists the data parameters of the side-scan sonar. In total, 46,817 valid data frames were gathered over the 5 h experiment, 3308 of which were images of the object under study; the remaining data consisted of seabed geomorphological maps without the object.

4.1. Target Detection Effect Test

From the effective data set of 3308 Tuandao images containing target objects, 50 images were randomly selected as the training set. The data distribution extraction and posterior probability detection process introduced in Section 3 was calibrated on the feature law and gray-scale distribution of these 50 images, and the remaining 3258 target images were tested. Part of the test results are shown in Figure 11.
Each image consists of three narrow-band strips. The first is decoded from the original sonar data, the second shows the processing and recognition results of the image target detection method proposed in this research, and the third maps the recognition boxes back onto the original image. Group (a) shows the full 6 s starboard image; group (b) shows a 224 × 600 pixel partial image of the target within the 6 s data. Figure 12 displays the detection results of the proposed technique on additional data sets with different experimental settings, whose target is a metal pole 1 m long. The first row shows the original 224 × 224 sonar images, and the second row shows the segmentation results of the proposed algorithm.
Target recognition of large-resolution images on low-performance equipment is also a key concern of this paper, because side-scan sonar images have high resolution and cutting the image before recognition risks splitting the target. Figure 13 compares the segmentation results of the same image under the algorithm proposed in this research and several other algorithms. The comparison shows that the proposed algorithm recognizes targets well in sonar images, is robust to noise interference to a certain degree, and holds a certain speed advantage. The recognition accuracy of the target detection frame is assessed using the MIoU index: the mean intersection-over-union between each category's manual label and the prediction frame of the proposed target detection method. Positioning accuracy increases with MIoU. Table 3 compares the accuracy rate, MIoU value, and detection speed of the frame selection results of the experimental data under different target detection methods; the best performance under each evaluation indicator is highlighted in bold. Detection speed was measured on an NVIDIA Jetson TX2.
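The MIoU index used above can be sketched for axis-aligned boxes (a generic illustration, not the paper's evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def miou(pred_boxes, gt_boxes):
    """Mean IoU over matched prediction/ground-truth box pairs."""
    return sum(iou(p, g) for p, g in zip(pred_boxes, gt_boxes)) / len(gt_boxes)
```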
According to the data in the table, the classic image target detection approach suggested in this research performs well on the Tuandao data in detection accuracy (ACC), target frame accuracy (MIoU), and detection speed. Its computational load is minimal, since the methodology relies only on image pixel processing; it suits small GPUs and CPUs and can still perform target detection.

4.2. Sea Test Experiment

To confirm the engineering viability of the object detection approach proposed in this study, the algorithm was loaded onto an AUV for a sea test experiment at Jiaozhou Bay, Qingdao. The experiment used the autonomous underwater vehicle "Sailfish-324", independently built by the UVL Laboratory of Ocean University of China. The "Sailfish-324" is 3.8 m long, weighs 260 kg, has a 32.4 cm diameter, and is equipped with various sensors, including INS, DVL, and GPS. Figure 14 displays the experimental area and the "Sailfish-324".
In the sea test experiment, the target detection algorithm processes and recognizes the original sonar data in real time, and the detected target is located according to its determined pixel distribution. The positioning formula is as follows:
$$target_{level} = \sqrt{\left(\frac{pix_x \cdot Range}{w}\right)^2 - (h\cos\theta)^2}, \tag{17}$$
where $target_{level}$ is the horizontal distance between the target and the AUV, and $h$ is the height set for the AUV sailing. Once the horizontal distance is obtained, the predicted position of the metal ball follows by fusing the navigation data.
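Formula (17) can be sketched directly (argument names are illustrative; theta is in radians):

```python
import math

def target_level(pixx, scan_range, w, h, theta):
    """Formula (17): horizontal distance from AUV to target, from the
    target's pixel offset pixx, sonar scan range, single-side image
    width w, AUV altitude h, and beam angle theta."""
    slant = pixx * scan_range / w          # pixel offset -> slant range
    return math.sqrt(slant ** 2 - (h * math.cos(theta)) ** 2)
```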
The identification results of the experiment were logged in real time; part of the log is shown in Table 4. The average positioning error was about 3.798 m. To test the engineering real-time performance of the proposed algorithm on a CPU and low-performance GPUs, a 12th Gen Intel(R) Core(TM) i5-12500H @ 2.50 GHz (CPU), an NVIDIA Jetson TX2 (GPU1), and an NVIDIA Jetson Xavier (GPU2) were tested. The GPU2 computing power is 21 TOPS, while GPU1 offers only 1.26 TOPS and can stand in for low-performance GPUs. Table 5 records the time required by the proposed algorithm on each device.
With a pixel size of 110 × 4800, the typical data reception interval is 6 s. The proposed target detection technique meets the real-time requirement on both devices: thread 1 receives the original sonar data while thread 2 parses the data stream and runs the detection algorithm, and the detection time is far below 6 s. By avoiding serial data parsing, the issue of partial data loss is also solved.
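The two-thread receive/parse scheme described above can be sketched with a thread-safe queue (a generic illustration of the design, not the paper's implementation; `detect` stands in for the detection algorithm):

```python
import queue
import threading

def run_pipeline(packets, detect):
    """Thread 1 queues incoming sonar packets; thread 2 parses and runs
    detection. Reception is never blocked by processing, so no packet
    is dropped while a previous frame is still being analyzed."""
    buf, results = queue.Queue(), []

    def receiver():
        for pkt in packets:      # stands in for the sonar read loop
            buf.put(pkt)
        buf.put(None)            # end-of-stream sentinel

    def worker():
        while True:
            pkt = buf.get()
            if pkt is None:
                break
            results.append(detect(pkt))

    t1 = threading.Thread(target=receiver)
    t2 = threading.Thread(target=worker)
    t1.start(); t2.start(); t1.join(); t2.join()
    return results
```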

5. Conclusions

This research proposes an improved adaptive Finch clustering target detection technique based on data distribution and posterior probability for CPUs and low-performance GPUs. After thoroughly analyzing the gray distribution features of the original sonar data, it presents an algorithm flow suitable for detecting small targets on the ocean floor. The algorithm smoothly adjusts the image gray level, which enhances image readability and pixel-level connectivity, by efficiently filtering the speckle noise in the original data stream. The improved adaptive Finch clustering technique is then used to roughly segment the sonar image and eliminate holes in the sonar data. Additionally, an improved Finch clustering target detection algorithm based on posterior distribution probability is derived from the principle of sonar imaging and empirical analysis of the experimental data, resolving the issue that some low-performance devices cannot simultaneously receive data and compute complex neural network algorithms.
According to experimental data playback and real-time sea tests, the improved adaptive Finch clustering target detection technique proposed in this study, based on data distribution and posterior probability, performs well in accuracy. It fully satisfies the requirements for real-time detection of underwater targets while incurring a small computational load and a significantly shorter computing time than other target detection algorithms of the same accuracy.

Author Contributions

Conceptualization, Q.H. and Q.W.; methodology, Q.H.; software, M.L. and B.H.; validation, M.L., Q.H., G.G., J.L. (Jie Li) and J.L. (Jingjing Li); writing—original draft preparation, Q.H. and Q.W.; writing—review and editing, Q.H. and G.G.; project administration, Q.H.; funding acquisition, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Hardware connection of the target detection system.
Figure 2. Flow chart of object detection algorithm.
Figure 3. Working mode and imaging principle of AUV mounted side scan sonar.
Figure 4. Comparison of the effect of four filtering methods on the target object: (a) The original image; (b) Mean filtering; (c) Bilateral filtering; (d) Guided filtering; (e) Median filtering.
Figure 5. Histogram analysis of different filtering methods: (a) The original image; (b) Mean filtering; (c) Bilateral filtering; (d) Guided filtering; (e) Median filtering.
Figure 6. Histogram analysis of image filtering with different resolutions: (a) Port filtering; (b) Complete narrow-band filtering.
Figure 7. Illustration of 3 × 3 images: (a) original image; (b) adjacency matrix; (c) the first hierarchical clustering result.
Figure 8. Display of improved hierarchical clustering effect: (a) The original image; (b) the processed image.
Figure 9. Effect of adaptive Finch clustering: (a) Before the improvement; (b) After the improvement.
Figure 10. Threshold analysis of the highlighted area and shaded area of the object.
Figure 11. Target detection results: (a) Display of complete narrowband image target detection results; (b) Display only partial target detection results of the target object. The red area represents the identified target object.
Figure 12. Detection results of other targets (metal rods) using the algorithm proposed in this paper: (a) The original image; (b) Target detection results.
Figure 13. Comparison of segmentation results of different algorithms: (a) The original image; (b) K-means; (c) HOG; (d) DPM; (e) MRF; (f) The algorithm proposed in this paper.
Figure 14. Experimental environment and equipment: (a) Experimental side scanning area; (b) “Sailfish-324” AUV.
Table 1. PSNR index comparison of four filtering methods.

| Filtering Algorithm | PSNR: 6 s Full Picture | PSNR: 6 s Half Picture | PSNR: Target Object Area |
|---|---|---|---|
| Mean filtering | 19.48 | 18.70 | 17.20 |
| Bilateral filtering | 37.43 | 37.58 | 39.56 |
| Median filtering | 37.27 | 36.95 | 29.23 |
| Guided filtering | 27.18 | 27.40 | 26.87 |
Table 2. Parameters of the side-scan sonar data set.

| Sea Area | Target Object | Sonar Type | Sweep Frequency | Sweep Range | Fixed Height |
|---|---|---|---|---|---|
| Tuandao | Metal balls (1 m diameter) | Shark-S900U | 900 kHz | 30 m | 8 m |
Table 3. Performance comparison of different target detection methods.

| Methods | ACC (%) | MIoU | Speed (FPS) | Time per Image |
|---|---|---|---|---|
| K-means | 87.91 | / | 0.47 | 2.1 s |
| HOG | 84.73 | 81.84 | 0.2 | 5 s |
| DPM | 90.58 | 90.48 | 0.22 | 4.5 s |
| MRF | 88.15 | 87.96 | 5.10 | 196 ms |
| MobileNet | 91.04 | / | 0.29 | 3.4 s |
| The method proposed in this paper | 90.79 | 91.13 | 4.76 | 210 ms |
Table 4. Metal ball target detection real-time positioning.

| Target Identification | Actual Location | Detected Location | Error (m) |
|---|---|---|---|
| Metal ball 1 | (3602°57′41.04″ N, 12017°45′48.24″ E) | (3602°57′38.98″ N, 12017°45′47.52″ E) | 4.032 |
| | | (3602°57′39.54″ N, 12017°45′49.50″ E) | 4.015 |
| | | (3602°57′43.07″ N, 12017°45′49.39″ E) | 3.871 |
| | | (3602°57′43.11″ N, 12017°45′49.33″ E) | 3.786 |
| Metal ball 2 | (3602°57′11.16″ N, 12017°42′55.44″ E) | (3602°57′12.03″ N, 12017°42′56.44″ E) | 3.597 |
| | | (3602°57′12.01″ N, 12017°42′56.45″ E) | 3.854 |
| | | (3602°57′10.21″ N, 12017°42′54.93″ E) | 3.743 |
| | | (3602°57′12.19″ N, 12017°42′56.47″ E) | 3.468 |
| Metal ball 3 | (3602°56′33.36″ N, 12017°44′7.8″ E) | (3602°56′33.45″ N, 12017°44′7.62″ E) | 3.859 |
| | | (3602°56′31.49″ N, 12017°44′8.65″ E) | 3.866 |
| | | (3602°56′34.50″ N, 12017°44′8.69″ E) | 3.813 |
| | | (3602°56′32.46″ N, 12017°44′6.74″ E) | 3.729 |
Table 5. Time required for target detection in the sea test experiment.

| 6 s Image Size | Time to Receive Data | Parse Time (CPU) | Parse Time (GPU1) | Parse Time (GPU2) |
|---|---|---|---|---|
| 110 × 2400 | 6 s | 0.2108 s | 0.4709 s | 0.5774 s |

Share and Cite

MDPI and ACS Style

He, Q.; Lei, M.; Gao, G.; Wang, Q.; Li, J.; Li, J.; He, B. Improved Adaptive Finch Clustering Sonar Segmentation Algorithm Based on Data Distribution and Posterior Probability. Electronics 2023, 12, 3297. https://doi.org/10.3390/electronics12153297

