Article

An Acoustic Array Sensor Signal Recognition Algorithm for Low-Altitude Targets Using Multiple Five-Element Acoustic Positioning Systems with VMD

School of Electronic and Information Engineering, Xi’an Technological University, Xi’an 710021, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(3), 1075; https://doi.org/10.3390/app14031075
Submission received: 19 December 2023 / Revised: 23 January 2024 / Accepted: 23 January 2024 / Published: 26 January 2024

Abstract

Acoustic positioning of low-altitude flying targets is often disturbed by external interference, which introduces false information into the measured signals and complicates target signal processing and recognition. To address this problem, this paper proposes a target signal processing and recognition algorithm for low-altitude target acoustic positioning that combines variational modal decomposition (VMD) with a test method based on multiple five-element acoustic arrays. The algorithm uses VMD to decompose the target signal into modal components with different center frequencies and then applies wavelet threshold processing to the low-frequency part of the signal. After the remaining signal components and the threshold of the low-frequency part are determined, the residual components are reconstructed. Based on the test principle and calculation model of the five-element acoustic positioning system, and after processing the low-altitude target acoustic positioning signal with VMD, the cross-correlation function method is introduced to correlate the signals of the five acoustic sensors in each basic array, yielding the arrival time and time difference of the target acoustic information at each acoustic sensor and, ultimately, the spatial position of the target within each basic array. Finally, a data fusion processing method for the target coordinates obtained from the multiple acoustic basic arrays is used to determine the actual target position. Comparison with the results of a high-speed camera method shows that the average error in a test area of 100 × 100 m is less than 1 m.

1. Introduction

1.1. Background of the Study

Radar detection faces the threats of low-altitude target attack, stealth technology, and electromagnetic interference. Existing conventional radars have shortcomings in detecting low-altitude targets, mainly because clutter reflected from low-altitude obstacles produces strong interference that can completely submerge the useful target echo signal. Together with terrain interference, this reflected clutter creates a low-angle blind zone in which low-altitude targets cannot be recognized [1,2,3]. Existing acoustic positioning of low-altitude targets generally uses acoustic sensors for ranging and positioning, but it is easily affected by the positioning equipment itself and by external environmental interference; in particular, random noise in the acoustic sensor output often hampers recognition of the real target signal. Nevertheless, acoustic sensor positioning technology has the advantages of low cost, simple arrangement, and flexibility, and it therefore remains an important means of current ground-to-air target positioning [4].
In addition to acoustic sensor technology, target positioning can also employ methods such as dual high-speed camera systems [5,6,7]. From the fundamental principles of dual high-speed camera systems, the positioning process is unaffected by external factors such as ambient temperature and wind speed, so it is regarded as a relatively ideal test method with relatively high precision. However, for target positioning of an unmanned aerial vehicle (UAV) based on high-speed cameras, the test area is relatively limited, which can lead to missed captures. A test principle based on multiple acoustic sensor arrays can expand the test range by increasing the number of sensor arrays, making it a crucial method for the future positioning of low, slow, and small targets. Nevertheless, the high sensitivity of acoustic sensors to environmental conditions introduces an inherent uncertainty into the time-delay information from the sensors, which affects test accuracy. To address this issue, this paper introduces variational modal decomposition for acoustic information processing and a time calculation approach based on correlation functions for multiple acoustic sensors. This methodology mitigates the impact of environmental factors on the acoustic sensors and effectively reduces test errors. These innovations constitute the core of this paper.

1.2. Related Works

In an acoustic positioning system, vibration, shock, and other factors mix a large amount of noise into the target's acoustic signal, so the target signal output by the acoustic sensor contains irregular, non-stationary random high-frequency components [8]. There are also offset signals caused by external factors such as shock and vibration. Directly applying traditional time-domain peak-value methods to extract target information may therefore lead to random deviations. In the frequency domain, the wavelet transform raises the issue of choosing a wavelet basis [9]: although the wavelet transform has good time–frequency characteristics and can identify a target signal embedded in noise, the waveforms of target acoustic information of different shapes differ, and different wavelet bases produce different accuracy errors. For example, Zhang et al. [10] proposed a new algorithm for over-the-horizon passive acoustic positioning of low-altitude sound source targets. The algorithm extracts target characteristic points from the acoustic signal of the low-altitude sound source on the basis of the positioning reflection point, which can effectively reduce the influence of over-the-horizon propagation errors of the acoustic signal, improve positioning accuracy, and provide important information for further radar tracking and positioning. Liu et al. [11] studied an acoustic positioning method based on a distributed fusion strategy, examining distributed fusion from the perspective of energy consumption and fusing the target position from the energy amplitudes of the acoustic event observed at distributed locations.
To enhance the precision of passive sound measurement and overcome the ambiguity in the spatial orientation of sound source targets, Lu et al. [12] proposed a fusion algorithm for passive acoustic positioning using a double five-element cross array. They devised a microphone array model with a double five-element cross array and, using this model, established criteria for determining the quadrant of the sound source point, in addition to providing the calculation formula for the passive acoustic positioning coordinates. Their fusion algorithm model for passive acoustic positioning with a double five-element cross array uses the sine of the pitch angle as the weighting coefficient. Guo et al. [13] introduced a framework for extracting and recognizing helicopter acoustic characteristics using a composite deep neural network. Their research focused on a combined acoustic signal recognition algorithm that incorporates the structures of both convolutional neural networks and long short-term memory neural networks. The algorithm optimizes the acoustic signal characteristic representation based on the short-term spectrum of the signal and extracts the correlation information between the front and back frames, which compensates for the inability of common acoustic target recognition methods to make full use of the time history information of the target signal. Jiang et al. [14] presented a seismic signal recognition model utilizing wavelet analysis. Their study examined the application of wavelet analysis to processing and recognizing seismic signals and compared various wavelet basis functions in terms of their ability to detect seismic signals.
There are many existing studies on the recognition and processing of acoustic sensor signals, involving, for example, the wavelet transform [15,16,17], support vector machines [18], and particle swarm optimization [19]. In [19], Jiang et al. employed wavelet transform filtering to process the output signal of acoustic sensors and introduced a time extraction algorithm for projectile explosive acoustic signals based on the wavelet modulus maximum, improving the measurement accuracy of projectile explosive position coordinates. However, acoustic sensor systems for low-altitude target positioning still have shortcomings in signal recognition. For example, when a target flies at low altitude, the acoustic sensor inevitably captures non-real target signals to varying degrees, and the sensor itself has high noise, which can easily submerge the real target signal. At the same time, because factors such as vibration and shock in the system are unavoidable, the target's acoustic signal is mixed with a large number of noise interference signals, so the noise in the target signal output by the acoustic sensor is an irregular, non-stationary random high-frequency signal, which greatly complicates the recognition of the target acoustic signal.
In addition, there are high-speed camera methods for target positioning. The camera principle is mainly used to capture synchronous target image frames, image processing is used to determine the pixel position of the target in the image, and the position of the target is then obtained from the spatial geometric relationship of multiple high-speed cameras. Du et al. [20] introduced a method for measuring projectile explosive position coordinates using high-speed photogrammetry; the test method utilized the trajectory inertia characteristics of the terminal ballistics to extrapolate the trajectory. Huang et al. [21] proposed a measurement method based on the intersection of the optical axes of two high-speed cameras, that is, using two cameras to shoot synchronously to obtain the attitude angle of each target. According to the optical axis orientation of the cameras and the spatial relationship between the cameras and the target, the mathematical model of the target's angle is derived, and several simplified forms for special circumstances are obtained in line with practical application requirements. From the test principle of high-speed photogrammetry, it is not affected by external factors such as environmental illumination and wind, and its test accuracy is relatively high. However, the high-speed camera method is limited by the field of view, and the area covered by a single high-speed photogrammetry setup is limited; if a larger area is to be covered, more high-speed cameras are needed, and as their number increases so does the cost, which is not conducive to the target positioning test. In order to reduce the test cost while meeting the positioning accuracy requirements, this paper proposes a target positioning test method based on multiple acoustic arrays and a multi-acoustic-array acoustic signal recognition method.

1.3. The Main Contribution of the Paper

To optimize target positioning with acoustic sensor arrays at its core, this paper proposes a test method based on multiple five-element acoustic arrays, establishes a UAV target positioning calculation model, and presents a target signal processing and recognition algorithm for low-altitude target acoustic positioning using variational modal decomposition. The research highlights and main contributions of this paper are as follows:
(1)
We set up the calculation model of target acoustic positioning by using multiple five-element acoustic arrays, derive the target position calculation function, and analyze the target signal characteristics.
(2)
We utilize VMD for decomposing the target signal into modal components with distinct center frequencies. Subsequently, we apply wavelet threshold processing to the low-frequency segment of the signal and establish a target signal processing and recognition algorithm based on variational modal decomposition.
(3)
We use the test principle and calculation model for the acoustic positioning of low-altitude targets in each basic array’s five-element configuration, and we employ variational modal decomposition to process the target acoustic positioning signal. The time value and time difference of the target’s acoustic information in each acoustic sensor are obtained through the cross-correlation function method. Consequently, we determine the spatial position of the target within each basic array.
(4)
We study the data fusion processing method for target coordinates from multiple acoustic basic arrays and provide experimental verification.
The subsequent sections of this paper are structured as follows: Section 2 describes the multiple five-element acoustic array positioning method for low-altitude flying targets. Section 3 presents the variational modal decomposition target signal processing and recognition algorithm. Section 4 describes the target information time extraction method based on the cross-correlation function in a basic five-element acoustic array. Section 5 presents the data fusion processing method for target coordinates on multiple acoustic basic arrays. Section 6 provides the test and analysis. Section 7 draws conclusions and outlines avenues for future research.

2. Multiple Five-Element Acoustic Array Positioning Method for Low-Altitude Flying Target

This paper constructs a test system for the low-altitude flying target position based on multiple five-element acoustic arrays. Here, three identical five-element basic arrays arranged in a triangle are used to find the position of the low-altitude flying target, as shown in Figure 1.
In Figure 1, each five-element acoustic array is a basic array, and the three basic arrays are arranged in an isosceles triangle. O denotes the center of the Oxyz coordinate system. The coordinates (k, 0, 0), (−k, 0, 0), and (0, k, 0) represent the bottom centers of the three basic arrays A, B, and C, denoted as O1, O2, and O3, respectively; r_A, r_B, and r_C are the distances from points O1, O2, and O3 to the low-altitude flying target, respectively.
For the positioning of low-altitude targets, a certain number of five-element acoustic arrays are often arranged on the ground, and multiple five-element acoustic arrays are arranged in a certain order in the test area to carry out target detection and calculation of target positioning parameters in different size areas [22,23]. To fully demonstrate the advantages of the proposed variational modal decomposition method for acoustic positioning signal processing of ground to air targets, this study delves into the exploration and analysis of the target recognition algorithm within the framework of five-element acoustic positioning, leveraging the inherent positioning mechanism of the five-element acoustic array.
To illustrate the positioning principle, we take O1 as the independent origin of basic array A; that is, the relative coordinates of O1 are (0, 0, 0) in Figure 1, while its real coordinates in the whole multiple five-element acoustic array test system are (k, 0, 0). The target position measured in basic array A is P_A(x_A′, y_A′, z_A′), and the corresponding actual position in the Oxyz coordinate system is P(x_A, y_A, z_A); their relationship is given by Formula (1).
$$
x_A = x_A' + k,\qquad y_A = y_A',\qquad z_A = z_A' \tag{1}
$$
In the same way, the coordinates P ( x B , y B , z B ) and P ( x C , y C , z C ) are obtained from basic arrays B and C in the O x y z coordinate system.
S_0^A–S_4^A are the five acoustic sensors of basic array A, with coordinate positions S_0^A(0, 0, H), S_1^A(D, 0, 0), S_2^A(0, D, 0), S_3^A(−D, 0, 0), and S_4^A(0, −D, 0), respectively. The target is located at an arbitrary position P_A(x_A′, y_A′, z_A′) in the test system of basic array A, and the distances from the sound generated by the target to the five sensors are r_0, r_1, r_2, r_3, and r_4, respectively. The corresponding times at which the five acoustic sensors perceive the target sound information are t_0, t_1, t_2, t_3, and t_4, so the relative delay between the arrivals at S_0^A and S_i^A is τ_{0i} = t_i − t_0, i = 1, 2, 3, 4. P_A(x_A′, y_A′, z_A′) can then be found by Formula (2).
$$
\begin{aligned}
x_A' &= \frac{c\,(\tau_{02}^2-\tau_{01}^2+\tau_{04}^2-\tau_{03}^2)}{2\,(\tau_{01}-\tau_{02}+\tau_{03}-\tau_{04})}\cdot
\frac{\dfrac{2H\sqrt{(\tau_{01}-\tau_{03})^2+(\tau_{02}-\tau_{04})^2}}{D\,(\tau_{01}+\tau_{02}+\tau_{03}+\tau_{04})}}
{\sqrt{1+\left(\dfrac{2H\sqrt{(\tau_{01}-\tau_{03})^2+(\tau_{02}-\tau_{04})^2}}{D\,(\tau_{01}+\tau_{02}+\tau_{03}+\tau_{04})}\right)^{2}}}\cdot
\frac{1}{\sqrt{1+\left(\dfrac{\tau_{04}-\tau_{02}}{\tau_{03}-\tau_{01}}\right)^{2}}}\\[6pt]
y_A' &= \frac{c\,(\tau_{02}^2-\tau_{01}^2+\tau_{04}^2-\tau_{03}^2)}{2\,(\tau_{01}-\tau_{02}+\tau_{03}-\tau_{04})}\cdot
\frac{\dfrac{2H\sqrt{(\tau_{01}-\tau_{03})^2+(\tau_{02}-\tau_{04})^2}}{D\,(\tau_{01}+\tau_{02}+\tau_{03}+\tau_{04})}}
{\sqrt{1+\left(\dfrac{2H\sqrt{(\tau_{01}-\tau_{03})^2+(\tau_{02}-\tau_{04})^2}}{D\,(\tau_{01}+\tau_{02}+\tau_{03}+\tau_{04})}\right)^{2}}}\cdot
\frac{\dfrac{\tau_{04}-\tau_{02}}{\tau_{03}-\tau_{01}}}{\sqrt{1+\left(\dfrac{\tau_{04}-\tau_{02}}{\tau_{03}-\tau_{01}}\right)^{2}}}\\[6pt]
z_A' &= \frac{c\,(\tau_{02}^2-\tau_{01}^2+\tau_{04}^2-\tau_{03}^2)}{2\,(\tau_{01}-\tau_{02}+\tau_{03}-\tau_{04})}\cdot
\frac{1}{\sqrt{1+\left(\dfrac{2H\sqrt{(\tau_{01}-\tau_{03})^2+(\tau_{02}-\tau_{04})^2}}{D\,(\tau_{01}+\tau_{02}+\tau_{03}+\tau_{04})}\right)^{2}}}
\end{aligned}
\tag{2}
$$
where c is the current speed of sound and r_A is the distance from the target to the coordinate origin O1 of basic array A.
It can be seen that the five-element acoustic array can simultaneously obtain azimuth and pitch angle estimates of the sound source target independently of the actual sound velocity, thus eliminating the influence of time-varying atmospheric parameters such as temperature, air pressure, wind speed, sound speed, and wind direction on the direction estimation and ensuring the positioning accuracy of the system. To obtain the result of Formula (2), the key step is to determine τ_{0i}. Using the same calculation method, the coordinates P_B(x_B′, y_B′, z_B′) and P_C(x_C′, y_C′, z_C′) are obtained from basic arrays B and C. Once P_A(x_A′, y_A′, z_A′), P_B(x_B′, y_B′, z_B′), and P_C(x_C′, y_C′, z_C′) are determined, P(x_A, y_A, z_A), P(x_B, y_B, z_B), and P(x_C, y_C, z_C) are obtained by Formula (1). Finally, the data fusion processing method for the target coordinates from the multiple acoustic basic arrays is applied to determine the actual target position.
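Before moving to the signal processing algorithm, the positioning geometry above can also be checked numerically. The following sketch (our own illustration in Python, assuming NumPy and SciPy are available) solves the same time-difference-of-arrival relations for one basic array by nonlinear least squares instead of evaluating the closed-form Formula (2); the defaults D = 3 m, H = 2.4 m, and c = 340 m/s mirror the experimental setup in Section 6, and the function names are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_five_element(tau, D=3.0, H=2.4, c=340.0):
    """Estimate the target position in one five-element basic array.

    tau : the four delays [tau01, tau02, tau03, tau04] in seconds,
          tau_0i = t_i - t_0, as defined in Section 2.
    The sensors follow the layout S0(0, 0, H), S1(D, 0, 0), S2(0, D, 0),
    S3(-D, 0, 0), S4(0, -D, 0); D, H, and c are illustrative defaults.
    """
    sensors = np.array([[0.0, 0.0, H],
                        [D, 0.0, 0.0], [0.0, D, 0.0],
                        [-D, 0.0, 0.0], [0.0, -D, 0.0]])

    def residual(p):
        r = np.linalg.norm(sensors - p, axis=1)        # ranges r0..r4 to the candidate point
        return (r[1:] - r[0]) - c * np.asarray(tau)    # range differences versus c * tau_0i

    sol = least_squares(residual, x0=np.array([0.0, 0.0, 10.0]))  # rough initial guess
    return sol.x                                       # local coordinates (x', y', z')

def to_global_A(p_local, k=30.0):
    """Local-to-global conversion of Formula (1) for basic array A centred at (k, 0, 0)."""
    return p_local + np.array([k, 0.0, 0.0])
```

For basic arrays B and C, the same routine applies with the offsets of their own centers substituted into the Formula (1) conversion.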

3. A Variational Modal Decomposition Target Signal Processing and Recognition Algorithm

The characteristic waveform of explosive sound signals typically includes rapidly rising peaks and relatively short attenuation. In contrast to stationary signals, explosive sound signals show sudden and intense amplitude jumps, with high-intensity signals across a broad frequency range in the spectrum. Within the unit acoustic array, there is a degree of similarity among the explosive sound signals collected by each acoustic sensor. More precisely, the five acoustic sensors are relatively close, resulting in the information obtained in the unit acoustic array being largely consistent.
However, due to the varying distances from each acoustic array to the sound source, directional attenuation of explosive information occurs during the transmission process. Consequently, each acoustic sensor receives different levels of acoustic energy. This disparity is primarily attributed to the gradual attenuation of projectile explosive information along the transmission path.
Considering the presence of various interference noises in the test environment, such as wind sounds, speech, and other non-projectile-related information, these noises also impact the explosive sound signal. Nevertheless, the acoustic signals collected during each explosive event exhibit a certain correlation within each acoustic array. This indicates that despite the influence of noise, the explosive signal can still maintain a certain level of identifiability and similarity within the acoustic array.
As the target traverses the test area of the five-element acoustic positioning system, the acoustic sensors capture information at the moment the target enters the test area, and the amplification and processing circuits then generate a sudden-change signal. This signal not only carries information about the target but may also be affected by background noise, vibration signals, and other interference caused by environmental changes. To address the impact of these interference signals on extracting genuine target information, we employ a target signal processing algorithm based on variational modal decomposition (VMD) [24,25]. VMD decomposes the output signal of the acoustic sensor into a series of modal components with distinct center frequencies and, treating the target signal as a mutation signal, reconstructs a signal enriched with additional frequency and amplitude components. By iteratively searching for the optimal solution of the variational model, the center frequency and bandwidth of each intrinsic mode function are alternately updated to achieve adaptive signal separation in the frequency domain. This iterative process results in n modal components, forming the basis for a more refined decomposition of the signal. The specific steps of the algorithm are as follows:
Step 1: It is presumed that the signal to be decomposed, denoted as f(t), consists of n modal components x_n with different center frequencies, and that each modal component converges around its respective center frequency s_n. To estimate the bandwidth of each frequency-shifted subsignal, Gaussian smoothness is used so that the estimated bandwidth of each subsignal is minimal. Formula (3) is used to construct the constrained variational problem.
$$
\min_{\{x_n\},\{s_n\}}\;\sum_{n}\left\|\partial_t\!\left[\left(\delta(t)+\frac{j}{\pi t}\right)*x_n(t)\right]e^{-j s_n t}\right\|_2^2
\qquad \text{s.t.}\qquad \sum_{n}x_n(t)=f(t)
\tag{3}
$$
where f(t) is the output acoustic signal of the target (the signal to be decomposed), x_n(t) are the discrete subsignals with distinct center frequencies obtained through VMD, and δ(t) is the unit impulse function. To solve for the optimal solution of the variational problem, we introduce the augmented Lagrangian of Formula (4).
$$
L\left(\{x_n\},\{s_n\},\lambda\right)=\sigma\sum_{n}\left\|\partial_t\!\left[\left(\delta(t)+\frac{j}{\pi t}\right)*x_n(t)\right]e^{-j s_n t}\right\|_2^2
+\left\|f(t)-\sum_{n}x_n(t)\right\|_2^2
+\left\langle \lambda(t),\, f(t)-\sum_{n}x_n(t)\right\rangle
\tag{4}
$$
where σ is the penalty operator and λ ( t ) is the Lagrange multiplier.
Employing the penalty operator alternating direction method, we determine the optimal solution of Formula (4). Taking into account the convergence conditions and applying the optimal solution, we iteratively update each intrinsic mode function x_n(t), its center frequency s_n, and λ(t) using Formulas (5)–(7). Finally, the target output signal is decomposed into a finite number of modal components.
$$
\hat{x}_n^{\,k+1}(\omega)=\frac{\hat{f}(\omega)-\sum_{i\neq n}\hat{x}_i(\omega)+\dfrac{\hat{\lambda}(\omega)}{2}}{1+2\sigma\left(\omega-s_n\right)^2}
\tag{5}
$$
$$
s_n^{\,k+1}=\frac{\int_0^{\infty}\omega\left|\hat{x}_n(\omega)\right|^2 d\omega}{\int_0^{\infty}\left|\hat{x}_n(\omega)\right|^2 d\omega}
\tag{6}
$$
$$
\hat{\lambda}^{\,k+1}(\omega)=\hat{\lambda}^{\,k}(\omega)+\tau\left(\hat{f}(\omega)-\sum_{n}\hat{x}_n^{\,k+1}(\omega)\right)
\tag{7}
$$
where τ is the noise tolerance parameter, k is the number of iterations, and f̂(ω), x̂_i(ω), λ̂(ω), and x̂_n^{k+1}(ω) represent the Fourier transforms of f(t), x_i(t), λ(t), and x_n^{k+1}(t), respectively.
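As a concrete illustration of the update loop in Formulas (5)–(7), the following simplified sketch (our own Python/NumPy code, not the authors' implementation) alternates the mode, center-frequency, and multiplier updates on the one-sided spectrum of a real signal. The variable names, the default penalty value, and the initialization of the center frequencies are assumptions of this sketch; a full VMD implementation would additionally mirror the signal at its boundaries.

```python
import numpy as np

def vmd_sketch(f, K=5, alpha=2000.0, tau=0.0, n_iter=500, tol=1e-7):
    """Simplified VMD iteration of Formulas (5)-(7) for a real 1-D signal f."""
    T = len(f)
    f_hat = np.fft.rfft(f)                      # one-sided spectrum of the signal
    freqs = np.fft.rfftfreq(T)                  # normalized frequencies in [0, 0.5]
    u_hat = np.zeros((K, freqs.size), complex)  # modal spectra x_n(omega)
    omega = np.linspace(0.05, 0.45, K)          # initial center frequencies s_n
    lam = np.zeros(freqs.size, complex)         # Lagrange multiplier spectrum

    for _ in range(n_iter):
        u_prev = u_hat.copy()
        for n in range(K):
            others = u_hat.sum(axis=0) - u_hat[n]
            # Formula (5): Wiener-filter style update of mode n (alpha plays the role of sigma)
            u_hat[n] = (f_hat - others + lam / 2) / (1 + 2 * alpha * (freqs - omega[n]) ** 2)
            # Formula (6): center frequency as the power-weighted spectral centroid
            power = np.abs(u_hat[n]) ** 2
            omega[n] = np.sum(freqs * power) / (np.sum(power) + 1e-12)
        # Formula (7): dual ascent on the reconstruction constraint (tau = noise tolerance)
        lam = lam + tau * (f_hat - u_hat.sum(axis=0))
        change = np.sum(np.abs(u_hat - u_prev) ** 2) / (np.sum(np.abs(u_prev) ** 2) + 1e-12)
        if change < tol:
            break

    modes = np.array([np.fft.irfft(u_hat[n], n=T) for n in range(K)])
    return modes, omega
```

With K = 5, a call such as vmd_sketch(sensor_signal, K=5) would produce components analogous to the IMF1–IMF5 waveforms shown later in Figure 3.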
Step 2: Wavelet threshold processing is applied to the acoustic sensor’s output signal. The fundamental concept involves filtering the signal. This method transforms the signal into different spatial scales through wavelet transform, filters out noise through threshold processing, and ultimately reconstructs the signal. In the process of filtering and denoising the target signal, the threshold processing in the wavelet threshold method is crucial, encompassing the selection of the threshold and the choice of the threshold function. Threshold quantization rules generally fall into two categories: hard threshold and soft threshold. The hard threshold function may induce local mutations at the threshold λ after signal processing. Although the signal processed by the soft threshold function may lose a small portion of the high-frequency signal, it is smoother and eliminates the local mutation caused by the hard threshold function. By utilizing the weighted average method of hard threshold and soft threshold functions, a new threshold function can be derived. The specific formula is shown in (8).
$$
\hat{s}_{j,k}=\begin{cases}
\operatorname{sgn}\left(s_{j,k}\right)\left(A\left|s_{j,k}\right|^{1/B}-\left(A\lambda\right)^{1/B}\right)^{B}, & \left|s_{j,k}\right|\ge\lambda\\[4pt]
0, & \left|s_{j,k}\right|<\lambda
\end{cases}
\tag{8}
$$
where A and B are regulatory factors, with A ∈ [0, 1] and B > 0. This threshold function uses the adjustment coefficient A to combine the benefits of the hard and soft thresholds, and the adjustment factor B to control the shrinkage of the wavelet coefficients [26,27].
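For reference, a baseline wavelet-threshold denoising step can be sketched with PyWavelets as follows. It uses plain soft thresholding with the universal threshold; the weighted hard/soft function of Formula (8), with its factors A and B, would replace the call to pywt.threshold. The wavelet 'db4' and the decomposition level are illustrative assumptions, not choices prescribed by the paper.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(x, wavelet="db4", level=4):
    """Baseline wavelet-threshold denoising (soft threshold, universal rule)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    lam = sigma * np.sqrt(2.0 * np.log(len(x)))                  # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(x)]
```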
Step 3: The time information at the instant the target passes through the test area is obtained. Utilizing the five-element acoustic array, the target positioning system captures target signals via an acquisition card. During target recognition, variations in the target's position within the test area and discrepancies in the frequency of the mutated signal are observed; this divergence may arise because the target signal is not completely decomposed into a single component. Consequently, the outcome in each component must be evaluated by establishing the relationship between the threshold in the initial modal component and the peak point in the remaining components. If this relationship surpasses the threshold, wavelet threshold denoising is applied; otherwise, the component is deemed noise and eliminated, followed by signal reconstruction.
The essence of the above algorithm lies in decomposing the target acoustic signal into a finite number of modal components. By utilizing the alternating direction method with the penalty operator, the optimal solution of Formula (3) can be achieved. Assuming the VMD mode number is K, the target acoustic signal is decomposed into a finite number of modal components, as illustrated in Figure 2.
The parameter limit value of the second step in Figure 2 can be obtained by Formula (9).
$$
\hat{x}_k^{\,n+1}(s)=\frac{\hat{f}(s)-\sum_{i\neq k}\hat{x}_i(s)+\hat{\lambda}(s)}{2+4\sigma\left(s-s_k\right)^2},
\qquad
s_k^{\,n+1}=\frac{\int_0^{\infty}s\left|\hat{x}_k(s)\right|^2\,ds}{\int_0^{\infty}\left|\hat{x}_k(s)\right|^2\,ds}
\tag{9}
$$
It is not difficult to see from Figure 2 that the selection of decomposition parameters for VMD is the core of the whole processing algorithm. If the selection of parameters is not appropriate, it will cause the target acoustic signal to be mixed in the noise signal and diagnosis of effective target acoustic information will not be possible [28].
Following the above algorithm flow, we took the initial signal of one acoustic sensor in the five-element acoustic array target positioning system and applied the designed processing algorithm, using VMD to decompose the original noisy signal; the time-domain waveforms corresponding to the five modal components are shown in Figure 3.
Figure 3 displays the various modal components, namely IMF1–IMF5. The outcomes indicate that the variational modal decomposition signal processing algorithm results in a smoother presentation of target information in the output signal.

4. Target Information Time Extraction Method Based on Cross-Correlation Function in Basic Five-Element Acoustic Array

Upon acquiring the reconstructed signal, the target signal’s distinctive points are recognized based on the peak point. The low-frequency component is eliminated, and after assessing the threshold of the low-frequency part, the remaining components are reconstructed. The peak value of the reconstructed signal is determined, and the peak mutation point is recorded as the target’s time point. As illustrated in Figure 1, the target signals output by any two acoustic sensors exhibit correlation for the same target.
To find the five time parameters of the acoustic position model in basic arrays A , B , and C , we use the correlation of two acoustic sensor signals to construct a pairwise cross-correlation function in each basic array. Here, we use basic array A as an example to illustrate the time extraction algorithm.
Assume that x_0(t)–x_4(t) are the outputs of the five acoustic sensors S_0–S_4, with S_0 as the center. To find the cross-correlation between S_0 and the other four acoustic sensors, the cross-correlation functions of x_0(t) with x_1(t)–x_4(t) are:
$$
\begin{aligned}
R_{x_0x_1}(\tau_{01})&=\int_0^{\Delta T_{01}}x_0(t)\,x_1(t+\tau_{01})\,dt\\
R_{x_0x_2}(\tau_{02})&=\int_0^{\Delta T_{02}}x_0(t)\,x_2(t+\tau_{02})\,dt\\
R_{x_0x_3}(\tau_{03})&=\int_0^{\Delta T_{03}}x_0(t)\,x_3(t+\tau_{03})\,dt\\
R_{x_0x_4}(\tau_{04})&=\int_0^{\Delta T_{04}}x_0(t)\,x_4(t+\tau_{04})\,dt
\end{aligned}
\tag{10}
$$
where ΔT_01–ΔT_04 indicate the pulse widths of the signals from S_0–S_4. In practice, the target signal is sampled at a fixed period by the acquisition card, which requires discretizing Formula (10) into Formula (11).
$$
\begin{aligned}
R_{x_0x_1}(\tau_{01})&=\sum_{i=1}^{n}x_0(t_i)\,x_1(t_i+\tau_{01})\\
R_{x_0x_2}(\tau_{02})&=\sum_{i=1}^{n}x_0(t_i)\,x_2(t_i+\tau_{02})\\
R_{x_0x_3}(\tau_{03})&=\sum_{i=1}^{n}x_0(t_i)\,x_3(t_i+\tau_{03})\\
R_{x_0x_4}(\tau_{04})&=\sum_{i=1}^{n}x_0(t_i)\,x_4(t_i+\tau_{04})
\end{aligned}
\tag{11}
$$
Different values of R x 0 x 1 ( τ 01 ) ~ R x 0 x 4 ( τ 04 ) arise when the values of τ 01 ~ τ 04 vary, as per the characteristics of the correlation function [29,30].
When R_{x_0x_1}(τ_{01}) reaches its maximum value, the corresponding value of τ_{01} represents the time difference between the target signal arriving at acoustic sensors S_0 and S_1, that is, Δt_{01} = t_1 − t_0 = τ_{01}. In the same way, we obtain Δt_{02} = t_2 − t_0 = τ_{02}, Δt_{03} = t_3 − t_0 = τ_{03}, and Δt_{04} = t_4 − t_0 = τ_{04}. When the parameters D and H are determined, the specific position of the target at a given moment can be obtained by combining these delays with Formulas (1) and (2).
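In discrete form, the delay extraction of Formulas (10) and (11) amounts to locating the peak of each cross-correlation sequence. A short sketch with SciPy is given below; the helper name and arguments are ours, and fs denotes the sampling rate of the acquisition card.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def delay_estimates(x0, others, fs):
    """Estimate tau_01..tau_04 for one basic array (Formula (11) plus peak search).

    x0     : processed signal of the reference sensor S0
    others : list of the processed signals x1..x4 of sensors S1..S4
    fs     : sampling rate of the acquisition card in Hz
    """
    taus = []
    for xi in others:
        R = correlate(xi, x0, mode="full")                      # discrete cross-correlation with x0
        lags = correlation_lags(len(xi), len(x0), mode="full")  # lag axis in samples
        taus.append(lags[np.argmax(R)] / fs)                    # tau_0i = t_i - t_0 at the peak
    return np.array(taus)
```

The sign convention matches τ_{0i} = t_i − t_0: a positive delay means the target sound reaches S_i after S_0.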

5. Data Fusion Processing Method of Target Coordinates on Multi-Acoustic Basic Arrays

It is assumed that the test data have no systematic deviation and obey a normal distribution; that is, the observed values C_A, C_B, and C_C of the target position coordinates obtained by basic arrays A, B, and C obey normal distributions whose means are the true values C_A′, C_B′, and C_C′ of the target position coordinates. The standard deviations of the basic array test data are denoted as σ_A, σ_B, and σ_C, respectively, and are determined from the accepted spread range of the target. The probability density functions of C_A, C_B, and C_C are as follows:
$$
\begin{aligned}
f(C_A)&=\frac{1}{\sigma_A\sqrt{2\pi}}\exp\!\left(-\frac{(C_A-C_A')^2}{2\sigma_A^2}\right)\\
f(C_B)&=\frac{1}{\sigma_B\sqrt{2\pi}}\exp\!\left(-\frac{(C_B-C_B')^2}{2\sigma_B^2}\right)\\
f(C_C)&=\frac{1}{\sigma_C\sqrt{2\pi}}\exp\!\left(-\frac{(C_C-C_C')^2}{2\sigma_C^2}\right)
\end{aligned}
\tag{12}
$$
In general, the target position coordinates have a theoretical value. However, there is a certain deviation between the actual target position in a given test and the theoretical value, which makes the uncertainty large, so it is difficult to obtain valid prior information about C_A, C_B, and C_C before the test. One basic array can only test a given target position once, so the target position test is non-repeatable. The obtained test results are denoted as C_As, C_Bs, and C_Cs, respectively, and C_A′, C_B′, and C_C′ are estimated from the observed values C_As, C_Bs, and C_Cs by applying the conventional point estimation approach. The probability density functions of the true values C_A′, C_B′, and C_C′ of the target position, given the observations, are then as follows:
$$
\begin{aligned}
f(C_A'\mid C_{As})&=\frac{1}{\sigma_A\sqrt{2\pi}}\exp\!\left(-\frac{(C_A'-C_{As})^2}{2\sigma_A^2}\right)\\
f(C_B'\mid C_{Bs})&=\frac{1}{\sigma_B\sqrt{2\pi}}\exp\!\left(-\frac{(C_B'-C_{Bs})^2}{2\sigma_B^2}\right)\\
f(C_C'\mid C_{Cs})&=\frac{1}{\sigma_C\sqrt{2\pi}}\exp\!\left(-\frac{(C_C'-C_{Cs})^2}{2\sigma_C^2}\right)
\end{aligned}
\tag{13}
$$
Formula (13) delineates the probability density function acquired for C A   , C B   , and C C   given the test values C A s , C B s , and C C s . This formula encompasses not only the information derived from the actual measured values but also incorporates prior information. In contrast, conventional data processing methods narrow their focus solely to the actual measured values, neglecting considerations for the accuracy of the test equipment. Consequently, the aforementioned representation of test data is considered more scientifically sound.
From the perspective of probability theory, once the evidence is acquired, there exists a probability density function for the target position under these observations, denoted as f(x, y, z | C_A, C_B, C_C). The optimal estimate of the target coordinates is the spatial position at which f(x, y, z | C_A, C_B, C_C) attains its maximum. Because the three-dimensional coordinate data of the low-altitude target are independent of each other in the three basic arrays, the conditional probability density of the low-altitude target obtained from each basic array is as follows:
$$
\begin{aligned}
f(x,y,z\mid x_A,y_A,z_A)&=\frac{f(x\mid x_A)\,f(y\mid y_A)\,f(z\mid z_A)}{\pi/2-\theta}\\
f(x,y,z\mid x_B,y_B,z_B)&=\frac{f(x\mid x_B)\,f(y\mid y_B)\,f(z\mid z_B)}{\pi/2-\theta}\\
f(x,y,z\mid x_C,y_C,z_C)&=\frac{f(x\mid x_C)\,f(y\mid y_C)\,f(z\mid z_C)}{\pi/2-\theta}
\end{aligned}
\tag{14}
$$
where θ is the pitch angle of the target position in the system, which can be found by Formula (15).
$$
\theta=\arctan\!\left(\frac{2H\sqrt{(\tau_{01}-\tau_{03})^2+(\tau_{02}-\tau_{04})^2}}{D\,(\tau_{01}+\tau_{02}+\tau_{03}+\tau_{04})}\right)
\tag{15}
$$
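For completeness, the reconstructed Formula (15) translates directly into code; D = 3 m and H = 2.4 m below are the experimental values from Section 6 and serve only as defaults in this sketch.

```python
import numpy as np

def pitch_angle(tau, D=3.0, H=2.4):
    """Pitch angle theta of Formula (15); tau = [tau01, tau02, tau03, tau04] in seconds."""
    t01, t02, t03, t04 = tau
    numerator = 2.0 * H * np.sqrt((t01 - t03) ** 2 + (t02 - t04) ** 2)
    denominator = D * (t01 + t02 + t03 + t04)
    return np.arctan(numerator / denominator)
```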
According to Formula (13), Formula (14) is expanded to Formula (16).
$$
\begin{aligned}
f(x,y,z\mid x_A,y_A,z_A)&=\frac{1}{(2\pi)^{3/2}\,\sigma_{xA}\sigma_{yA}\sigma_{zA}}\exp\!\left(-\frac{(x-x_A)^2}{2\sigma_{xA}^2}-\frac{(y-y_A)^2}{2\sigma_{yA}^2}-\frac{(z-z_A)^2}{2\sigma_{zA}^2}\right)\\
f(x,y,z\mid x_B,y_B,z_B)&=\frac{1}{(2\pi)^{3/2}\,\sigma_{xB}\sigma_{yB}\sigma_{zB}}\exp\!\left(-\frac{(x-x_B)^2}{2\sigma_{xB}^2}-\frac{(y-y_B)^2}{2\sigma_{yB}^2}-\frac{(z-z_B)^2}{2\sigma_{zB}^2}\right)\\
f(x,y,z\mid x_C,y_C,z_C)&=\frac{1}{(2\pi)^{3/2}\,\sigma_{xC}\sigma_{yC}\sigma_{zC}}\exp\!\left(-\frac{(x-x_C)^2}{2\sigma_{xC}^2}-\frac{(y-y_C)^2}{2\sigma_{yC}^2}-\frac{(z-z_C)^2}{2\sigma_{zC}^2}\right)
\end{aligned}
\tag{16}
$$
The joint conditional probability density of the test values from basic arrays A, B, and C can be found by Formula (17).
$$
f(x,y,z\mid x_A,y_A,z_A;x_B,y_B,z_B;x_C,y_C,z_C)=f(x,y,z\mid x_A,y_A,z_A)\cdot f(x,y,z\mid x_B,y_B,z_B)\cdot f(x,y,z\mid x_C,y_C,z_C)
\tag{17}
$$
By resolving the spatial coordinates linked to the peak value in Formula (17), we can derive the actual target position P ( x , y , z ) through the data fusion of target positions across distributed multi-acoustic basic arrays. The fusion calculation results are expressed by Formula (18).
$$
[\,x,y,z\,]=\arg\max_{x,y,z}\; f(x,y,z\mid x_A,y_A,z_A;x_B,y_B,z_B;x_C,y_C,z_C)
\tag{18}
$$
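Because Formula (17) is a product of independent Gaussian densities, the maximization in Formula (18) has a closed form: each fused coordinate is the precision-weighted (inverse-variance) mean of the three array estimates. A minimal sketch follows, with the per-array standard deviations supplied as assumed inputs.

```python
import numpy as np

def fuse_positions(estimates, sigmas):
    """Maximize the Gaussian product of Formula (17) coordinate by coordinate.

    estimates : (3, 3) array, rows = P(x, y, z) from basic arrays A, B, C
    sigmas    : (3, 3) array of matching standard deviations per array and axis
    Returns the fused position of Formula (18), i.e. the precision-weighted mean.
    """
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2               # precisions 1 / sigma^2
    return np.sum(w * np.asarray(estimates, dtype=float), axis=0) / np.sum(w, axis=0)

# Example: fuse the three array estimates of experiment No. 1 in Table 4.
# With equal (assumed) sigmas this reduces to a simple mean; the paper instead
# weights each basic array by its own deviation, which is what Formula (18) captures.
P_est = np.array([[31.54, -0.25, 14.22],
                  [32.18, -0.32, 15.08],
                  [31.16, -0.09, 15.30]])
print(fuse_positions(P_est, np.ones_like(P_est)))
```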

6. Test and Analysis

To verify the proposed algorithm, we used a low-altitude flying UAV as the target. According to the geometric relationship of the acoustic array arrangement in Figure 1, point O is the central coordinate origin of the system, denoted Oxyz. Points A, B, and C represent three individual acoustic unit arrays, each comprising five acoustic sensors. Each unit array has its own central coordinates; the center origins of arrays A, B, and C in the Oxyz coordinate system are (30, 0, 0), (−30, 0, 0), and (0, 30, 0), respectively, where the unit of the coordinate parameters is meters. The acoustic sensors in the three unit arrays are of the same model, characterized by high sensitivity and low noise; in this experiment, we selected the AWA14411 acoustic sensor from the Aihua Instrument Company to capture the target sound signals. Each unit array is equipped with a PCI-9812 data acquisition card and an IPC-610-L industrial computer. The data acquisition cards collect the target signals from the five acoustic sensors in each unit array and convert them into digital signals for computer processing. The computer handles data storage, and the stored signals from the five acoustic sensors are then transmitted to the terminal processing subsystem via a wireless transmission module. This subsystem processes and fuses the signals received from the data acquisition cards, ultimately determining the location of the target.
In Figure 1, the central coordinates of the three unit acoustic arrays A, B, and C are denoted as O1, O2, and O3, respectively. The relative coordinate positions of the acoustic sensors within each unit acoustic array are S_0^A(0, 0, 2.4), S_1^A(3, 0, 0), S_2^A(0, 3, 0), S_3^A(−3, 0, 0), S_4^A(0, −3, 0); S_0^B(0, 0, 2.4), S_1^B(3, 0, 0), S_2^B(0, 3, 0), S_3^B(−3, 0, 0), S_4^B(0, −3, 0); and S_0^C(0, 0, 2.4), S_1^C(3, 0, 0), S_2^C(0, 3, 0), S_3^C(−3, 0, 0), S_4^C(0, −3, 0).
To calculate the specific spatial position of the UAV target, in this experiment we collected signals from all sensors within each unit acoustic array, as illustrated in Figure 4, Figure 5 and Figure 6. In Figure 4, S_0^A–S_4^A represent the signals collected by sensors S_0–S_4 of acoustic array A; in Figure 5, S_0^B–S_4^B represent those of acoustic array B; and in Figure 6, S_0^C–S_4^C represent those of acoustic array C.
By analyzing the signals collected in Figure 4, Figure 5 and Figure 6, it can be observed that the output signals of the acoustic sensors contain various frequency components, mainly including target signals and background noise signals. The noise manifests as multiple random frequency signals, exhibiting a random Gaussian distribution. It is evident that when a certain frequency of noise signal undergoes a sudden change, it may exhibit characteristics similar to false target signals. When such signals occur at multiple frequencies, it can pose challenges to the recognition of UAV target signals.
To eliminate false target signals and interference from other frequency components, we employed the VMD algorithm proposed in Section 3. According to the algorithm and the principles shown in Figure 3, the key feature of this algorithm is the decomposition of each acoustic sensor signal into five layers, where IMF1 represents the frequency signal of the real target. Applying this algorithm to the signals collected by the acoustic sensors in the three unit arrays, Figure 7, Figure 8 and Figure 9 display the processing results.
From Figure 7, Figure 8 and Figure 9, it is evident that, because the distances within each acoustic array are relatively short, each of the five acoustic sensors in every array could capture the acoustic information of the UAV target as it traversed the test area. Following the target acoustic positioning principle of the multiple five-element acoustic arrays, we processed the acoustic information of the UAV target using the VMD and correlation function algorithms. The temporal information of each sensor in basic array A, as shown in Figure 7, is: t0A = 52.96 ms, t1A = 51.32 ms, t2A = 40.03 ms, t3A = 45.08 ms, t4A = 57.11 ms. The temporal information of each sensor in basic array B, as shown in Figure 8, is: t0B = 293.25 ms, t1B = 292.46 ms, t2B = 301.78 ms, t3B = 293.96 ms, t4B = 284.83 ms. The temporal information of each sensor in basic array C, as shown in Figure 9, is: t0C = 216.32 ms, t1C = 224.87 ms, t2C = 214.78 ms, t3C = 211.21 ms, t4C = 212.98 ms. According to Formulas (10) and (11), the time differences of the sensors in basic array A are: τ01A = −1.64 ms, τ02A = −12.93 ms, τ03A = −7.88 ms, τ04A = 4.15 ms. The time differences in basic array B are: τ01B = −0.79 ms, τ02B = 8.53 ms, τ03B = 0.71 ms, τ04B = −8.42 ms. The time differences in basic array C are: τ01C = 8.55 ms, τ02C = −1.54 ms, τ03C = −5.11 ms, τ04C = −3.34 ms. Further, using the calculation functions for basic arrays A, B, and C in Formula (2), the UAV target positions for the signals in Figure 7, Figure 8 and Figure 9 are calculated as P_A(x_A′, y_A′, z_A′) = (−2.10, 5.75, 14.63), P_B(x_B′, y_B′, z_B′) = (3.01, 33.92, 0.14), and P_C(x_C′, y_C′, z_C′) = (0.48, −0.06, 0.11). Subsequently, applying the conversion relationships of basic arrays A, B, and C in the Oxyz coordinate system, the relative positions of the UAV for the signals in Figure 7, Figure 8 and Figure 9 are P(x_A, y_A, z_A) = (27.9, 5.75, 14.63), P(x_B, y_B, z_B) = (33.01, 33.92, 0.14), and P(x_C, y_C, z_C) = (0.48, 29.94, 0.11).
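As a quick arithmetic check on the delays reported above for basic array A (our own verification, not part of the paper's toolchain):

```python
# Arrival times (ms) read from Figure 7 for basic array A: t0, t1, t2, t3, t4
t_A = [52.96, 51.32, 40.03, 45.08, 57.11]
tau_A = [round(t - t_A[0], 2) for t in t_A[1:]]   # tau_0i = t_i - t_0
print(tau_A)                                      # [-1.64, -12.93, -7.88, 4.15]
```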
To further validate the feasibility of the proposed algorithm in this paper, based on the selected hardware equipment and performance metrics, we conducted five experiments. Following the target signal processing algorithm using VMD and the time difference calculation algorithm based on the correlation function as proposed in this paper, we processed the collected acoustic information from the five UAV instances. Table 1, Table 2 and Table 3 present the corresponding temporal information t 0 A t 4 A , t 0 B t 4 B , t 0 C t 4 C , the processed positions P A ( x A   , y A   , z A   ) , P B ( x B   , y B   , z B   ) , P C ( x C   , y C   , z C   ) of the unit acoustic arrays for the UAV target, and the positions P ( x A , y A , z A ) , P ( x B , y B , z B ) , P ( x C , y C , z C ) in the O x y z coordinate system.
According to the test results in Table 1, Table 2 and Table 3, the signal processing method for low-altitude target acoustic positioning based on variational modal decomposition proposed in this paper meets the test requirements. The algorithm fully considers the multiple frequency components of the target and uses VMD to decompose the different frequency signals and ascertain the optimal solution of the algorithm, thereby obtaining the actual frequency components of the target signal. In the calculation process, it is apparent that the effectiveness of VMD mainly depends on the number of modes K and the quadratic penalty factor σ. The mode number K is the number of modal components obtained by decomposing the original signal; presetting the number of components in VMD reduces the occurrence of modal aliasing. The quadratic penalty factor σ affects the accuracy of the decomposition: the larger the penalty factor, the narrower the frequency band and the more concentrated the reconstructed signal. From the collected waveforms, it can be seen that, except for the drone target signal, all other signals output by the sensors are noise. In this paper, the weighted average of the hard and soft threshold functions is used to eliminate the influence of the differing smoothness of individual modes, and the processing effect is closer to ideal. After signal reconstruction, the maximum point of the waveform is extracted directly through the peak feature point of the target signal, which reduces the time delay error and improves the test accuracy.
Following the data fusion processing method for target coordinates on multi-acoustic basic arrays as described in the fifth section, we performed data fusion processing on the five UAV positioning experiments from Table 1, Table 2 and Table 3. The computed results are presented in Table 4.
According to the data in Table 4, although the measurement object is the same target, the measurement results of the individual acoustic basic arrays differ. This is mainly due to two factors: first, environmental factors during the propagation of the target sound cause differences in the time at which the sound reaches the sensors of each basic array; second, there are geometric errors in the arrangement of each basic array. Both are sources of error, so the measurement results of the three arrays are biased relative to one another. It can be seen from the data that the error is within 1 m in each of the three coordinates x, y, and z. In addition, we use the fusion method to determine the target position. Unlike a simple average over the three acoustic basic arrays, it considers the deviation probability of each acoustic basic array, so the calculated result is closer to the real value.
To validate the scientific nature of the algorithm proposed in this paper, practical tests were conducted based on Table 1, Table 2 and Table 3 and a comparative verification was performed using the dual high-speed camera method for the positioning of UAVs in five instances. The basic principles of the dual high-speed camera method can be referred to in the relevant reference [21]. In the comparative experiments, a synchronous triggering device was used to trigger the target acoustic positioning system of the multiple five-element acoustic arrays and simultaneously trigger the dual high-speed cameras. Image processing recognition was employed in the high-speed cameras to obtain the current spatial position of the UAV. Table 5 presents the positioning results of the method proposed in this paper compared with the dual high-speed camera method. In the table, P T ( x T , y T , z T ) represents the results calculated by the high-speed cameras, while Δ P ( Δ x , Δ y , Δ z ) represents the comparative error between the two methods.
Compared with the high-speed camera method, the error calculated by the proposed method in this paper ranges from −1 m to 1 m. From the basic principles of the high-speed camera method, it is an ideal test method that is not affected by external factors such as environmental temperature and wind speed, and it possesses relatively high test accuracy. However, for UAV target positioning, the test area range of the high-speed camera method is relatively limited, making it prone to missed captures. In contrast, the method based on multiple acoustic sensor arrays can expand the test range by increasing the number of arrays, making it an important means for future low, slow, and small target positioning tests. However, due to the high sensitivity of acoustic sensors to the environment, there is an inherent uncertain time delay error, leading to relatively lower test accuracy. This paper introduces the VMD method for acoustic information processing and a time calculation method based on correlation functions for multiple acoustic sensors. These methods effectively circumvent the impact of environmental factors on acoustic sensors, reducing errors. This is the core innovation of this paper. Through observation of the comparative experimental results, it is evident that the proposed method in this paper approaches the test accuracy level of the high-speed camera method.

7. Conclusions

In this paper, a variational modal decomposition algorithm was introduced for processing target signals according to the signal test requirements of the multiple five-element acoustic positioning system; a solution model for positioning a target with multiple acoustic arrays was established; and a data fusion processing method for the target coordinates from the multiple acoustic basic arrays was given.
In acoustic signal processing, we employ VMD to decompose the target signal into modal components with distinct center frequencies. Subsequently, wavelet threshold processing is applied to the low-frequency segment of the signal. After recognizing the remaining signal components and establishing the threshold for the low-frequency part, the noise is filtered out and the remaining components are reconstructed. The five acoustic sensor signals in each basic array of the multiple five-element acoustic positioning system are correlated to extract the target information times using the cross-correlation function. Finally, the target position is determined by the data fusion processing method, and the positioning test of the drone target is verified.
Simultaneously, we conducted a comparative verification with the high-speed camera method, and the results indicated that the average error in a test area of 100 × 100 m was less than 1 m. The fundamental principles of the high-speed camera method suggest that it is relatively ideal, being unaffected by external factors such as environmental temperature and wind speed, and possesses relatively high test accuracy. However, for UAV target positioning, the test area range of the high-speed camera method is relatively limited, making it prone to missed captures. In contrast, the method based on multiple acoustic sensor arrays, serving as the test principle, can expand the test range by increasing the number of arrays, making it a crucial means for future low, slow, and small target positioning tests.
Due to the sensitivity of acoustic sensors to environmental conditions, there exists an inherent uncertain time delay error, resulting in relatively lower test accuracy. However, this paper introduces the VMD method for acoustic information processing and a time calculation method based on correlation functions for multiple acoustic sensors, successfully mitigating the impact of environmental factors on acoustic sensors and effectively reducing errors. This innovative achievement is the core highlight of this paper. Through observation of comparative experimental results, it is evident that the proposed method in this paper approaches the test accuracy of the high-speed camera method. An acoustic array sensor signal recognition algorithm of low-altitude targets in multiple five-element acoustic positioning systems with VMD not only addresses the positioning issue of low, slow, and small targets in a large area but is also applicable to spatial positioning tests of projectile impact points in the current weapon equipment system. This provides new insights for future intelligent fuze projectile impact position tests, high-speed small target spatial positioning, and the positioning of other novel aerial targets, showcasing broad application prospects.

Author Contributions

Conceptualization, methodology, validation, and writing—original draft, C.S.; methodology, software, validation, and writing—original draft, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Shaanxi Provincial Science and Technology Department fund (No. 2023-YBGY-342) and National Natural Science Foundation of China (No. 62073256).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cheng, Y.; Guo, M.; Guo, L. Radar signal recognition exploiting information geometry and support vector machine. IET Signal Process. 2022, 17, 12167. [Google Scholar] [CrossRef]
  2. Liu, B.; Lian, Z.; Liu, T.; Wu, Z.; Ge, Q. Study of MFL signal identification in pipelines based on non-uniform magnetic charge distribution patterns. Meas. Sci. Technol. 2023, 34, 044003. [Google Scholar] [CrossRef]
  3. Li, H.; Zhang, X. Flight parameter calculation method of multi-projectiles using temporal and spatial information constraint. Def. Technol. 2023, 19, 63–75. [Google Scholar] [CrossRef]
  4. Cao, Q.; Wang, L.; Zhang, J.; Guo, T.; Liu, X. Identification of Vibration Signal for Residual Pressure Utilization Hydraulic Unit Using MRFO-BP Neural Network. Shock Vib. 2022, 2022, 8506273. [Google Scholar] [CrossRef]
  5. Jiao, Z.; Du, N.; Fan, L.; Huang, W. High-speed photography test of projectile miss distance and velocity measurement. Firepower Command. Control 2017, 42, 191–194. [Google Scholar]
  6. Sun, C.; Jia, Y.; Wang, D. Modeling of high-speed laser photography system for field projectile testing. Optik 2021, 241, 166980. [Google Scholar] [CrossRef]
  7. Huang, Z.; Zhang, G.; Cao, Y.; Zhang, H.; Shen, M. Simulation study on instantaneous position measurement of burst point of general industrial camera. Appl. Opt. 2021, 42, 891–897. [Google Scholar]
  8. Huang, J.; Mo, J.; Zhang, J.; Ma, X. A Fiber Vibration Signal Recognition Method Based on CNN-CBAM-LSTM. Appl. Sci. 2022, 12, 8478. [Google Scholar] [CrossRef]
  9. Li, H.; Zhang, X. A temporal-spatio detection model and contrast calculation method on new active sky screen with high-power laser. Optik 2022, 269, 169935. [Google Scholar] [CrossRef]
  10. Zhang, X.; Gao, Y. A New Algorithm for NLOS Acoustic Passive Localization of Low Altitude Targets. J. Electron. Inf. Technol. 2008, 30, 1136–1140. [Google Scholar] [CrossRef]
  11. Liu, Y.; Wang, H.; Wan, Z.; Li, D. Distributed acoustic localization method based on fusing information. Audio Eng. 2022, 46, 79–82. [Google Scholar]
  12. Lu, J.; Ye, D.; Chen, G.; Guo, Y.; Ma, W. Passive acoustic localization fusion algorithm and performance analysis of double five element cross array. Chin. J. Sci. Instrum. 2016, 37, 827–835. [Google Scholar]
  13. Guo, Y.; Zhou, Y.; Guan, L.; Bao, M. Research on combined deep neural network in acoustic helicopter target recognition. J. Appl. Acoust. 2019, 38, 8–15. [Google Scholar]
  14. Jiang, W.; Ding, W.; Zhu, X.; Hou, F. A Recognition Algorithm of Seismic Signals Based on Wavelet Analysis. J. Mar. Sci. Eng. 2022, 10, 1093. [Google Scholar] [CrossRef]
  15. Li, G.; Deng, H.; Yang, H. Traffic flow prediction model based on improved variational mode decomposition and error correction. Alex. Eng. J. 2023, 76, 361–389. [Google Scholar] [CrossRef]
  16. Li, H.; Li, M.; Ma, Y.; Zheng, Y.; Li, S. A variational mode decomposition projectile signal processing algorithm of infrared sky screen velocity measurement system and detection mathematical model of detection screen. Optik 2023, 287, 171077. [Google Scholar] [CrossRef]
  17. Gao, F.; Wang, J.; Xi, X.; She, Q.; Luo, Z. Gait recognition for lower extremity electromyographic signals based on PSO-SVM method. J. Electron. Inf. Technol. 2015, 37, 1154–1159. [Google Scholar]
  18. Li, H.; Zhang, X. Projectile explosion position parameters data fusion calculation and measurement method based on distributed multi-acoustic sensor arrays. IEEE Access 2022, 10, 6099–6108. [Google Scholar] [CrossRef]
  19. Jiang, Y.; Chen, S.; Wang, K.; Liao, W.; Wang, H.; Zhang, Q. Quantitative detection of rail head internal hole defects based on laser ultrasonic bulk wave and optimized variational mode decomposition algorithm. Measurement 2023, 218, 113185. [Google Scholar] [CrossRef]
  20. Du, B.; Xu, Y.; Wei, B. Coordinate test method for near-ground burst point of guided munitions. J. Ordnance Equip. Eng. 2021, 42, 39–43. [Google Scholar]
  21. Huang, H.; Hao, Z.; Zhang, J.; Wang, J.; Yue, X.; Mao, Y.; Huang, H. Method of target attitude measurement based on double high-speed photography. Equip. Environ. Eng. 2021, 18, 62–67. [Google Scholar]
  22. Wang, Y.; Zhang, H.; Ji, C. A projectile impact point location method Based on hybrid array composed of triangles and five-element crosses. J. Detect. Control 2020, 42, 92–96. [Google Scholar]
  23. Zheng, J.; Zhang, B.; Xiong, C. Acoustic location method of projectile impact-point Based on compound double-arrays. J. Ballist. 2016, 28, 68–73. [Google Scholar]
  24. Guo, K.; Yu, X.; Liu, G.; Tang, S. A Long-Term Traffic Flow Prediction Model Based on Variational Mode Decomposition and Auto-Correlation Mechanism. Appl. Sci. 2023, 13, 7139. [Google Scholar] [CrossRef]
  25. Hu, L.; Wang, W.; Ding, G. RUL prediction for lithium-ion batteries based on variational mode decomposition and hybrid network model. Signal Image Video Process. 2023, 17, 3109–3117. [Google Scholar] [CrossRef]
  26. Bai, D.; Lu, G.; Zhu, Z.; Zhu, X.; Tao, C.; Fang, J.; Li, Y. Prediction Interval Estimation of Landslide Displacement Using Bootstrap, Variational Mode Decomposition, and Long and Short-Term Time-Series Network. Remote Sens. 2022, 14, 5808. [Google Scholar] [CrossRef]
  27. Aparicio-Esteve, E.; Hernández, Á.; Ureña, J.; Villadangos, J.M. Visible light positioning system based on a quadrant photodiode and encoding techniques. IEEE Trans. Instrum. Meas. 2020, 69, 5589–5603. [Google Scholar] [CrossRef]
  28. Yin, X.; He, Q.; Zhang, H.; Qin, Z.; Zhang, B. Sound Based Fault Diagnosis Method Based on Variational Mode Decomposition and Support Vector Machine. Electronics 2022, 11, 2422. [Google Scholar] [CrossRef]
  29. Ryota, T.; Kiwamu, N. Multimode dispersion measurement of surface waves extracted by multicomponent ambient noise cross-correlation functions. Geophys. J. Int. 2022, 231, 1196–1220. [Google Scholar]
  30. Gao, G.; Li, J.; Hu, C. Research on the location technology of near ground explosion point. Sci. Technol. Eng. 2015, 15, 155–159. [Google Scholar]
Figure 1. The target acoustic positioning principle of multiple five-element acoustic arrays.
Figure 2. The process of target acoustic signal decomposition into a finite number of modal components.
Figure 3. Original signal and IMF component by VMD.
Figure 4. Five acoustic signals in basic array A collected from UAV at a certain moment.
Figure 5. Five acoustic signals in basic array B collected from UAV at a certain moment.
Figure 6. Five acoustic signals in basic array C collected from UAV at a certain moment.
Figure 7. The processing effect of five acoustic sensors in basic array A .
Figure 8. The processing effect of five acoustic sensors in basic array B .
Figure 9. The processing effect of five acoustic sensors in basic array C .
Table 1. Test data in basic array A.

| No. | t0 [ms] | t1 [ms] | t2 [ms] | t3 [ms] | t4 [ms] | P_A(x_A′, y_A′, z_A′) | P(x_A, y_A, z_A) |
|-----|---------|---------|---------|---------|---------|-----------------------|------------------|
| 1 | 61.83 | 67.65 | 71.89 | 75.28 | 70.28 | 1.54, −0.25, 14.22 | 31.54, −0.25, 14.22 |
| 2 | 89.87 | 86.42 | 88.56 | 86.09 | 84.03 | −0.28, −3.94, 18.26 | 29.72, −3.94, 18.26 |
| 3 | 63.67 | 75.21 | 80.31 | 85.31 | 79.31 | 1.11, −0.10, 14.65 | 31.11, −0.10, 14.65 |
| 4 | 81.21 | 90.43 | 95.76 | 90.65 | 86.01 | 0.03, −1.76, 13.81 | 30.03, −1.76, 13.81 |
| 5 | 48.56 | 67.76 | 58.42 | 57.46 | 66.42 | 1.19, −0.86, 13.26 | 31.19, −0.86, 13.26 |
Table 2. Test data in basic array B.

| No. | t0 [ms] | t1 [ms] | t2 [ms] | t3 [ms] | t4 [ms] | P_B(x_B′, y_B′, z_B′) | P(x_B, y_B, z_B) |
|-----|---------|---------|---------|---------|---------|-----------------------|------------------|
| 1 | 339.94 | 331.68 | 340.62 | 349.26 | 339.59 | 100.97, −0.2, 8.95 | 50.97, −0.20, 8.95 |
| 2 | 310.98 | 303.48 | 312.62 | 320.83 | 311.95 | −99.28, −3.94, 18.26 | 49.72, −3.94, 18.26 |
| 3 | 327.05 | 319.23 | 328.10 | 336.69 | 328.07 | 101.11, −0.10, 14.65 | 51.11, −0.10, 14.65 |
| 4 | 343.41 | 335.56 | 344.58 | 353.02 | 344.26 | 100.03, −1.76, 13.81 | 50.03, −1.76, 13.81 |
| 5 | 315.67 | 307.76 | 316.69 | 325.25 | 316.55 | 101.19, −0.86, 13.26 | 51.19, −0.86, 13.26 |
Table 3. Test data in basic array C.

| No. | t0 [ms] | t1 [ms] | t2 [ms] | t3 [ms] | t4 [ms] | P_C(x_C′, y_C′, z_C′) | P(x_C, y_C, z_C) |
|-----|---------|---------|---------|---------|---------|-----------------------|------------------|
| 1 | 253.62 | 248.24 | 260.62 | 260.72 | 248.35 | 4.39, −54.35, 2.73 | 50.97, −0.20, 8.95 |
| 2 | 234.50 | 230.43 | 242.48 | 242.01 | 229.89 | −0.28, −53.94, 18.26 | 49.72, −3.94, 18.26 |
| 3 | 239.21 | 234.43 | 246.66 | 246.78 | 234.56 | 1.11, −50.10, 14.65 | 51.11, −0.10, 14.65 |
| 4 | 242.24 | 237.53 | 249.77 | 249.57 | 237.32 | 0.03, −51.76, 13.81 | 50.03, −1.76, 13.81 |
| 5 | 244.22 | 239.32 | 251.59 | 251.63 | 239.36 | 1.19, −50.86, 13.26 | 51.19, −0.86, 13.26 |
Table 4. Test data in basic arrays A, B and C with the same target.

| No. | P(x_A, y_A, z_A) | P(x_B, y_B, z_B) | P(x_C, y_C, z_C) | P(x, y, z) |
|-----|------------------|------------------|------------------|------------|
| 1 | 31.54, −0.25, 14.22 | 32.18, −0.32, 15.08 | 31.16, −0.09, 15.30 | 31.64, −0.23, 14.89 |
| 2 | 29.72, −3.94, 18.26 | 30.88, −4.21, 18.78 | 30.45, −3.87, 18.72 | 30.38, −4.01, 18.61 |
| 3 | 31.11, −0.10, 14.65 | 32.26, −0.39, 15.89 | 32.98, 0.77, 15.86 | 32.11, −0.12, 15.48 |
| 4 | 30.03, −1.76, 13.81 | 30.91, −2.23, 14.56 | 31.45, −2.35, 14.62 | 30.81, −2.17, 14.38 |
| 5 | 31.19, −0.86, 13.26 | 32.48, −1.37, 14.52 | 30.41, −1.39, 13.98 | 31.34, −1.23, 13.93 |
Table 5. The test data of the proposed method in this paper and the double high-speed camera method.

| No. | P(x, y, z) | P_T(x_T, y_T, z_T) | ΔP(Δx, Δy, Δz) |
|-----|------------|--------------------|----------------|
| 1 | 31.64, −0.23, 14.89 | 30.84, 0.7, 14.18 | 0.8, 0.93, 0.71 |
| 2 | 30.38, −4.01, 18.61 | 29.73, −4.79, 18.04 | 0.65, 0.78, 0.57 |
| 3 | 32.11, −0.12, 15.48 | 31.63, −0.41, 14.66 | 0.48, 0.29, 0.82 |
| 4 | 30.81, −2.17, 14.38 | 30.46, −3.06, 13.97 | 0.35, 0.89, 0.41 |
| 5 | 31.34, −1.23, 13.93 | 32.18, −1.95, 13.11 | −0.84, 0.72, 0.82 |

