Article

Two-Dimensional Direction-of-Arrival Estimation Using Direct Data Processing Approach in Directional Frequency Analysis and Recording (DIFAR) Sonobuoy

by Amirhossein Nemati ¹, Bijan Zakeri ¹,* and Amir Masoud Molaei ²
¹ Electrical and Computer Engineering Faculty, Babol Noshirvani University of Technology, Babol 71167, Iran
² Centre for Wireless Innovation, Queen’s University Belfast, Belfast BT3 9DT, UK
* Author to whom correspondence should be addressed.
Electronics 2024, 13(15), 2931; https://doi.org/10.3390/electronics13152931
Submission received: 29 June 2024 / Revised: 18 July 2024 / Accepted: 23 July 2024 / Published: 25 July 2024
(This article belongs to the Special Issue Recent Advances in Audio, Speech and Music Processing and Analysis)

Abstract

Today, the common solutions for underwater source angle detection require manned vessels and towed arrays, which entail high costs, risks, and deployment difficulties. An alternative for such applications is the acoustic vector sensor (AVS), which is compact, lightweight, and moderately priced, and which offers promising bearing discrimination in two or three dimensions. One of the most popular devices for passive monitoring in underwater surveillance systems that employ AVSs is the directional frequency analysis and recording (DIFAR) sonobuoy. In this paper, direct data-processing (DDP) algorithms are implemented to calculate the azimuth angle of underwater acoustic sources, using the short-time Fourier transform (STFT) and the arctangent method instead of the fast Fourier transform (FFT). These bearing-estimation algorithms use an ‘azigram’ to plot the estimated bearing of a source; it is demonstrated that this matrix can be obtained from the active sound intensity of the sound field by applying the inverse tangent to its real part. Reporting the time and frequency of a source simultaneously is one of the main advantages of this method, enabling the concurrent detection of multiple sources. DDP can also reveal additional source characteristics, such as the frequency content of the source and the time during which it is present.

1. Introduction

Critical crossings, military docks, secret underwater targets, oil platforms, etc., are all sensitive and vulnerable points for sabotage operations. Enemy forces can include divers, unmanned underwater vehicles, submarines, or surface vessels. Detecting the presence and direction of these approaching forces near critical facilities gives security personnel time to respond before an attack occurs. Estimating an underwater source’s bearing requires measuring both the pressure and the vector components of the acoustic field. These measurements are typically classified as active or passive detection. Passive acoustic-monitoring (PAM) and antisubmarine warfare (ASW) applications commonly use towed arrays composed of omnidirectional hydrophones. Most towed array sonar systems (TASSs) utilized for underwater source detection consist of a line array of omnidirectional cylindrical hydrophones. However, the hydrophone line array may extend over several kilometers, introducing a degree of uncertainty in terms of the sensor positioning [1]. A disadvantage of a TASS is its inability to distinguish between an acoustic signal coming from the port or starboard direction [2], as the relative arrival times across the hydrophones are the same. Additionally, the array length is typically inversely proportional to the frequency. The size, weight, and hydrodynamic resistance of the array generally restrict its use on autonomous platforms, especially for low frequencies [3].
One of the most common applications of acoustic vector sensors (AVSs) is direction-of-arrival (DOA) estimation [4], which involves determining the azimuth and elevation angles of the incoming sound field. This capability allows for positioning the sound source in the 3-D space [5]. The AVS treats the acoustic wavefield as a vector field, enabling it to measure the sound pressure and particle velocity (or acceleration) at a single point. The AVS offers several advantages, including frequency-independent dipole directivity, effective suppression of isotropic noise, and avoidance of complications arising from spatial under-sampling. In modern times, the AVS has found widespread use in both military and civil applications [6,7]. The directional frequency analysis and recording (DIFAR) sonobuoy serves as a primary submarine detection device employed by passive airborne systems. Between 1965 and 1969, the first model of this type, known as the ‘AN/SSQ-53 sonobuoy’, was developed. It achieves directionality by deriving the acoustic particle velocity or acceleration along two orthogonal horizontal axes, along with a pressure component from an omnidirectional sensor.
In [8], bioacoustic directional information from sonobuoys is presented using an azigram. The text explains the azigram matrix, providing diverse examples of transient and intermittent signals. It also highlights the numerous advantages of this representation for directional bioacoustics data and delves into the topics of multiplexing and demultiplexing signals in DIFAR. One of the focal points in [9] is the exploration of methods for estimating angles using an arctangent in unmanned automatic submarines. Additionally, various models involving the time, frequency, and time–frequency have been investigated under different conditions. Specifically, in [9], the text delves into data processing, encompassing detection and direction-of-arrival (DOA) estimation. Experimental results derived from data collected at sea are then presented. The obtained results are thoroughly analyzed with respect to the different methods and scenarios.
In [10], a PMN-28PT piezoelectric single crystal is utilized to create a pressure sensor, with a particular focus on constructing a cardioid pattern as one of the key aspects of the article. In [11], an approach involving a compressive mode accelerometer is employed to achieve outstanding reception sensitivity and a dipole beam pattern. This method is based on the analysis of the vector sensor structure, considering the geometry and material of the piezoelectric and seismic mass. The latest advancements in acoustic vector sensors (AVS) for direction-of-arrival (DOA) estimation, target tracking, signal enhancement, and discussions on the performance bound in signal resolution against background noise and interference, along with a comparison with a conventional microphone array, are comprehensively reviewed in [11]. In [12], an underwater AVS with high sensitivity and broad bandwidth is introduced. Taking into account the operating frequency, sensitivity, and structural parameters of the vector sensor, [12] outlines the design of a high-sensitivity accelerometer with an integrated circuit piezoelectric (ICP). The sensitivity of this accelerometer is reported as 800 mV/(m·s⁻²). The primary contribution of the work in [12] lies in fabricating an AVS with a broad working band (10–2000 Hz), high sensitivity (−185 dB at 100 Hz), and compact size (outer diameter 42 mm, length 80 mm) using this accelerometer.
One way to determine the angle in the DIFAR sonobuoy involves creating a cardioid pattern using the acoustic vector sensors (AVSs) inside it. Challenges arising from the application of beamforming algorithms have led researchers to explore alternative methods for source angle discovery while preserving the sonobuoy’s structure. Direct data processing (DDP) of signals received by sensors, with the application of various algorithms to reduce the computational load, has gained significant attention in recent years. Historically, direct processing of information faced challenges due to the limited storage and heavy processing equipment. However, with technological advancements today, this method can be easily employed to enhance performance, extend operational range, and facilitate user-friendly operation with the mentioned sonobuoy device. This paper demonstrates how to detect the active sound intensity of the sound field and construct the azigram matrix using the short-time Fourier transform (STFT). Within milliseconds, this method can detect the presence of underwater targets and determine their angle of presence. This paper contributes to achieving the final accurate angle and compares different modes of signal-to-noise ratio (SNR) and signal performance in an environment with phase and amplitude changes due to multipaths. The DDP algorithm, when implemented with modern equipment, is poised to render previous models of sonobuoy DIFAR obsolete. Given its reasonable computing load, it has the potential to significantly enhance the operational speed and response for users.
The remainder of this article is structured as follows. Section 2 provides a theoretical analysis of the vector sensor, encompassing the AVS type, its application in DIFAR sonobuoys, and the theory of calculations. Section 3 delves into the DDP algorithm and its specific details. Computer simulations conducted under various conditions and the corresponding results are presented in Section 4. Lastly, Section 5 offers conclusions and suggestions for future work.

2. AVS Theoretical Analysis

2.1. Acoustic Vector Sensor Background

While hydrophones are capable of measuring the scalar acoustic pressure at a specific point in space, an acoustic vector sensor (AVS) excels in measuring the acoustic particle velocity in three orthogonal directions, along with pressure at the sensor’s central point. Despite being categorized into various models, AVSs are typically classified into two main categories based on the principle of pressure–pressure ‘p-p’ and pressure–velocity ‘p-u’ measurements. Pressure–pressure sensors can estimate particle velocity by measuring the pressure gradient between two closely spaced sensors [13]. However, the performance of gradient sensors can be adversely affected by issues such as finite difference approximation, scattering, diffraction, and instrument phase mismatch [14]. Inertial pressure velocity-based sensors utilize an accelerometer or a geophone to directly sense the particle velocity [15,16]. Using the results of the velocity and pressure measurements, the sound intensity vectors in three orthogonal directions can be obtained. The sound intensity (W/m²) in a specific direction is the product of the sound pressure (scalar) and the particle velocity (vector) component in that direction [17]. This quantity is utilized today to determine the underwater source angle through DIFAR sonobuoys. Based on research [18,19], two main factors influence the performance of an AVS. The first factor is the inertial sensor within the AVS, directly impacting the sensor’s sensitivity and the ratio of the maximum to minimum in the directivity pattern. The second factor is the elastic suspension element connected to the AVS [13].

2.2. DIFAR Sonobuoys

Despite the DIFAR sonobuoy’s significantly smaller size compared to the length of the received sound waves, it possesses the capability to determine the wave approach direction with sufficient accuracy. Figure 1 illustrates an incident plane wave with amplitude p_i arriving from a distant source at range r, impacting two hydrophones positioned on an end-capped cylindrical tube of small cross-sectional area A and length S. The plane wave arrives at an angle θ, and the distance is denoted as d = (S/2)cos θ. Here, k represents the wave number, and the net force F in the x-direction is given as follows [20]:
F = 2jA p_i e^{ikr} sin((πS/λ) cos θ) ≈ 2jA p_i e^{ikr} (πS/λ) cos θ.    (1)
One method for detecting the force involves utilizing the voltage output from a pair of accelerometers. These accelerometers can constitute a pair of piezoelectric devices with different materials. As depicted in Figure 2, when the chamber is moved to the left and right, the force generated by the sound pressure enters the piezoelectric elements and generates voltage.
According to (1) and considering that u = F/(jωm), if the container shown in Figure 2 has a mass of m, the velocity of the container along the x-axis can be obtained as follows:
u = (2jA p_i e^{ikr} / (jωm)) sin((πL/λ) cos θ) ≈ p_i e^{ikr} (AL/(mc)) cos θ.    (2)
By considering the buoyancy relationship as B = ρ₀AL/m [21], (2) can be rewritten as below:
u = (B p_i e^{ikr} / (ρ₀c)) cos θ = B u_i cos θ.    (3)
In Equation (3), ρ₀ represents the density of the propagation medium, and u_i denotes the velocity of the particle outside the chamber, induced by the radiation of a plane wave at an angle θ to the x-axis.
As can be seen from the above relationships, the propagation of acoustic waves and the speed of sound particles are influenced by many factors. It is essential to understand how these factors interact and to make informed decisions based on the specific conditions. Key factors include the environmental properties and the water depth.
Temperature: Ideally, the temperature profile should minimize refraction. This could involve placing the sonobuoy just below a thermocline where sound travels with minimal bending.
Salinity: While not much control can be exerted over salinity, understanding its influence on sound speed in the deployment area helps in predicting sound propagation.
Pressure: Considering the pressure changes with depth is crucial for predicting sound speed variations.
These three environmental properties directly affect the wave propagation speed and form the underwater sound profile (USP), a vital map depicting the sound speed variations at different depths within a specific body of water. A USP typically displays the sound speed (meters per second) on the vertical axis and the depth (meters) on the horizontal axis. Warmer temperature, higher salinity, and greater pressure generally lead to faster sound propagation, making a detailed USP for the deployment area crucial.
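The combined effect of temperature, salinity, and pressure (depth) on the sound speed can be sketched with a simplified empirical formula. The snippet below uses Medwin's approximation, which is not taken from this paper and is valid only roughly for 0–35 °C, 0–45 ppt, and depths up to about 1000 m; it is meant only to illustrate the trends the text describes.

```python
def sound_speed(T, S, z):
    """Approximate sound speed in seawater (m/s) via Medwin's formula.

    T: temperature (deg C), S: salinity (ppt), z: depth (m).
    Valid roughly for 0-35 deg C, 0-45 ppt, and depths up to ~1000 m.
    """
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# Warmer, saltier, deeper water generally means faster sound:
c_shallow = sound_speed(T=20.0, S=35.0, z=10.0)   # warm surface layer
c_deep = sound_speed(T=5.0, S=35.0, z=1000.0)     # cold deep layer
```

Evaluating this formula over a column of depth/temperature pairs yields exactly the kind of USP curve described above.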
The position of the sonobuoy has to be where the sound from the target travels efficiently, often near the expected target depth or within a sound convergence zone based on the temperature profile. The depth should be deep enough to minimize wind and wave noise, but not so deep that it introduces noise from other sources. Finding a balance is important. These and several fundamental key aspects of underwater acoustic communications are investigated in [22].
DIFAR sonobuoys, as illustrated in Figure 3a, integrate directional arrays featuring crossed pairs of piezoceramic transducers along with an omnidirectional reference transducer. The accelerometer functions as a bipolar pattern generator, while the hydrophone serves as an omnidirectional sensor. The pattern generated by these piezoelectrics, as depicted in Figure 3b, facilitates the process of acquiring directional information from surrounding sources.
In Figure 3a, the blue rectangle along the y-axis forms the cosine dipole pattern, and the red rectangle along the x-axis creates the sine dipole pattern shown in Figure 3b. Expanding Equations (2) and (3) leads to Equations (4)–(6), which are the fundamental relationships for obtaining information in DIFAR sonobuoys. While improvements in system construction can reduce many error components, there remains an unavoidable bearing error induced by the ambient ocean noise present in the acoustic signals. In Equations (4)–(6), these noise terms are represented by n_o, n_c, and n_s for the omni, cosine, and sine channels, respectively; throughout the rest of this paper, the subscripts o, c, and s denote the omni, cosine, and sine channels. In the DDP approach, employed in this paper for the DIFAR sonobuoy system, when the acoustic signal from the source impacts the sensor at an angle β relative to the y-axis, the output of each channel is obtained as follows [23]:
p(t) = p_0(t) + n_o(t),    (4)
x_c(t) = H p_0(t) cos β + n_c(t),    (5)
x_s(t) = H p_0(t) sin β + n_s(t).    (6)
In this context, p(t) represents the omnidirectional signal associated with the hydrophone, whereas Equations (5) and (6) signify the particle velocity signal or energy conversion in the cosine and sine directions, respectively. The cosine direction is termed the North–South dipole, and the sine direction is denoted as the East–West dipole. Factor H serves as the conversion factor for sound energy and relies on various factors, including the speed of sound propagation in the environment, the density of the environment, and the sensor’s structure [24].
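As an illustration of the channel model in Equations (4)–(6), the sketch below synthesizes the three DIFAR channels for a hypothetical 2 kHz source at β = 45°; the sample rate, noise level, and H = 1 are assumptions chosen purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed demo values: 2 kHz source at beta = 45 deg, H = 1, mild noise
fs, f0 = 48_000, 2_000.0
beta = np.deg2rad(45.0)
H = 1.0
t = np.arange(fs) / fs   # one second of samples

p0 = np.cos(2 * np.pi * f0 * t)                 # source pressure signal
n_o, n_c, n_s = 0.1 * rng.standard_normal((3, t.size))

p = p0 + n_o                                    # omni channel, Eq. (4)
x_c = H * p0 * np.cos(beta) + n_c               # North-South (cosine) dipole, Eq. (5)
x_s = H * p0 * np.sin(beta) + n_s               # East-West (sine) dipole, Eq. (6)
```

At β = 45°, the cosine and sine channels carry equal signal power, which is why the azimuth can later be recovered from their ratio.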

3. Direct Data-Processing (DDP) Approach

The common algorithms for obtaining the position of targets using underwater sensors are listed in Table 1. Details of each can be found in [25]. Accurate distance or angle measurement is achieved using range-based algorithms such as the time difference of arrival (TDOA), time of arrival (TOA), and angle of arrival (AOA). Owing to the time-varying characteristics of the underwater channel, the received signal strength indicator (RSSI) is rarely suitable for underwater sensors. The purpose of this text is not to compare the proposed methods.
Due to the close relationship between the concepts discussed in this paper and range-based algorithms, their details and differences are summarized in Table 2. Relevant points related to these algorithms are highlighted in the table [26].
To implement the algorithms mentioned above, authors commonly use the fast Fourier transform (FFT). When employing the FFT, we obtain the frequency-domain representation of the signal without any information about its temporal structure. However, in certain scenarios, understanding the temporal aspects of the signal becomes crucial: when searching for a source, it is necessary to know when the target’s signal is present. Awareness of the appearance time of the desired frequency significantly contributes to ongoing bearing estimation and source tracking.
In practical terms, the process of computing short-time Fourier transforms (STFTs) involves dividing a longer time signal into shorter, equal-length segments. Subsequently, the Fourier or fast Fourier transform is computed separately on each of these shorter segments, revealing the Fourier spectrum for each. The STFT essentially employs the Fourier transform to determine the sinusoidal frequency and phase content of localized sections of a signal as it evolves over time. Typically, the changing spectra are plotted as a function of time, forming a spectrogram. The short-time Fourier transforms (STFTs) of the signals in Equations (4)–(6) are calculated as Expressions (7)–(9), respectively [27]:
STFT{p[n]}(τ, ω) ≡ P(τ, ω) = Σ_{n=−∞}^{+∞} p[n] ψ[n − τ] e^{−iωn},    (7)
STFT{x_c[n]}(τ, ω) ≡ X_c(τ, ω) = Σ_{n=−∞}^{+∞} x_c[n] ψ[n − τ] e^{−iωn},    (8)
STFT{x_s[n]}(τ, ω) ≡ X_s(τ, ω) = Σ_{n=−∞}^{+∞} x_s[n] ψ[n − τ] e^{−iωn}.    (9)
In Equations (7)–(9), we have defined P(τ, ω) as the short-time Fourier transform (STFT) of the acoustic pressure, while X_c(τ, ω) and X_s(τ, ω) represent the STFTs of the particle velocity signals x_c and x_s, respectively. Within these equations, ψ[n − τ] represents the window function, whose type can be adjusted to suit different applications.
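A minimal sketch of Equations (7)–(9), with SciPy's `stft` standing in for the windowed transform (it slides a window ψ over the signal and takes an FFT of each segment); the signal parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import stft

# Assumed demo parameters: a 2 kHz source at a bearing of 45 degrees
fs, f0 = 48_000, 2_000.0
beta = np.deg2rad(45.0)
t = np.arange(fs) / fs
p0 = np.cos(2 * np.pi * f0 * t)

win, nseg = "hamming", 1024   # window type psi and segment length
f, tau, P = stft(p0, fs, window=win, nperseg=nseg)                # Eq. (7)
_, _, Xc = stft(p0 * np.cos(beta), fs, window=win, nperseg=nseg)  # Eq. (8)
_, _, Xs = stft(p0 * np.sin(beta), fs, window=win, nperseg=nseg)  # Eq. (9)
# Each column of P, Xc, Xs is the spectrum of one short segment,
# so |P| plotted over (tau, f) is the spectrogram described in the text
```

Because all three channels share the same window and segment length, their time–frequency cells line up, which is what the intensity products in the next section rely on.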
  • Fast Fourier transform (FFT):
    The FFT is an algorithm used to compute the discrete Fourier transform (DFT) of a sequence or its inverse (IDFT).
    It efficiently computes the DFT of a sequence or its inverse by dividing the transformation into smaller, manageable parts.
    The FFT operates on a fixed-length signal and provides frequency domain information for the entire signal.
    It is commonly used for analyzing signals with stationary frequency content, such as analyzing the frequency components of an entire audio recording or an image.
  • Short-time Fourier transform (STFT):
    The STFT is a technique used to analyze the frequency content of a signal over time.
    It involves dividing the signal into short segments, applying the Fourier transform to each segment, and then analyzing how the frequency content of the signal changes over time.
    The STFT provides a time-varying representation of the frequency content of a signal, which can reveal transient events or changes in the frequency content over time.
    It is often used in signal-processing applications where the frequency content of a signal is not stationary, such as in audio processing for music analysis, speech recognition, or biomedical signal analysis.
This article exploits precisely this characteristic of the STFT: the changing spectra, plotted as a function of time, form a spectrogram.
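To make the FFT-versus-STFT contrast concrete, the toy example below (with assumed parameters) plays 1 kHz during the first half-second and 3 kHz during the second; the full-signal FFT shows both tones but not their order, while the STFT columns recover the timing.

```python
import numpy as np
from scipy.signal import stft

# Toy signal: 1 kHz for the first half-second, 3 kHz for the second
fs = 8_000
t = np.arange(fs) / fs
x = np.where(t < 0.5, np.cos(2 * np.pi * 1000 * t),
                      np.cos(2 * np.pi * 3000 * t))

# Full-signal FFT: both tones appear, with no hint of when each occurred
spec = np.abs(np.fft.rfft(x))

# STFT: each column is a short-window spectrum, so timing is preserved
f, tau, Z = stft(x, fs, nperseg=256)
k_early = int(np.argmax(np.abs(Z[:, 5])))               # frame near t = 0.08 s
k_late = int(np.argmax(np.abs(Z[:, Z.shape[1] - 5])))   # frame near the end
```

Here `f[k_early]` lands near 1 kHz and `f[k_late]` near 3 kHz, while `spec` alone cannot say which tone came first.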
A commonly employed method for bearing estimation involves utilizing the active intensity I [28]. In this approach, the instantaneous active intensity is defined as the real part of the product between the acoustic pressure and the particle velocities. This relationship can be expressed as follows [29]:
I_c(τ, ω) ≜ Re{P(τ, ω) X_c*(τ, ω)},    (10)
I_s(τ, ω) ≜ Re{P(τ, ω) X_s*(τ, ω)}.    (11)
The symbol ‘*’ denotes a complex conjugate. Consequently, the azimuth angle is calculated as follows:
β(τ, ω) = tan⁻¹( I_s(τ, ω) / I_c(τ, ω) ).    (12)
In the processing phase of this study, we employ the four-quadrant inverse tangent, commonly known as ‘atan2’ [30]. This function returns angle values within the interval of [−π, π], as illustrated in Equation (13).
atan2(a, b) =
  tan⁻¹(a/b)          if b > 0,
  tan⁻¹(a/b) + π      if a ≥ 0, b < 0,
  tan⁻¹(a/b) − π      if a < 0, b < 0,
  +π/2                if a > 0, b = 0,
  −π/2                if a < 0, b = 0,
  undefined           if a = 0, b = 0.    (13)
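Equation (13) matches the convention implemented by NumPy's `arctan2` (which takes the numerator a first); a quick check of the four quadrants and the b = 0 case:

```python
import numpy as np

# np.arctan2(a, b) returns the four-quadrant angle of the point (b, a)
angles = np.degrees([
    np.arctan2(1.0, 1.0),     # b > 0            ->   45
    np.arctan2(1.0, -1.0),    # a >= 0, b < 0    ->  135
    np.arctan2(-1.0, -1.0),   # a < 0, b < 0     -> -135
    np.arctan2(-1.0, 1.0),    # b > 0, a < 0     ->  -45
    np.arctan2(1.0, 0.0),     # a > 0, b = 0     ->   90
])
```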
Utilizing the atan2 function, we construct a matrix referred to as the ‘Azigram’. This matrix encapsulates information resembling a spectrogram, where the dominant bearing is depicted in color, usually on a hue saturation value (HSV) color scale. This ensures a smooth color transition between 0 and 360 degrees, using color to signify the azimuth rather than the intensity [8].
By combining Equations (12) and (13), we derive Equation (14), which serves as the foundation for the processing aspect of this paper.
β(τ, ω) = atan2( I_s(τ, ω), I_c(τ, ω) ).    (14)
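Putting Equations (7)–(11) and (14) together, a compact noise-free sketch of the azigram construction follows; the signal parameters are assumptions for illustration, and SciPy's `stft` stands in for the STFT.

```python
import numpy as np
from scipy.signal import stft

# Assumed demo parameters: 2 kHz source at a true bearing of 45 degrees
fs, f0 = 48_000, 2_000.0
beta_true = np.deg2rad(45.0)
t = np.arange(fs) / fs
p0 = np.cos(2 * np.pi * f0 * t)

# Noise-free channel outputs per Eqs. (4)-(6), then STFTs per Eqs. (7)-(9)
f, tau, P = stft(p0, fs, window="hamming", nperseg=1024)
_, _, Xc = stft(p0 * np.cos(beta_true), fs, window="hamming", nperseg=1024)
_, _, Xs = stft(p0 * np.sin(beta_true), fs, window="hamming", nperseg=1024)

# Eqs. (10)-(11): active intensities; Eq. (14): azigram via atan2
I_c = np.real(P * np.conj(Xc))
I_s = np.real(P * np.conj(Xs))
azigram = np.degrees(np.arctan2(I_s, I_c))   # bearing per time-frequency cell

# Read off the bearing along the source's frequency bin
k = int(np.argmin(np.abs(f - f0)))
est = float(np.median(azigram[k]))
```

Plotting `azigram` over (tau, f) with an HSV colormap reproduces the azigram display described above, with color encoding the azimuth rather than the intensity.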
In Table 3, the algorithm’s implementation steps are meticulously described step by step. Additionally, Figure 4 illustrates the block diagram of the algorithm implementation, aligning with the steps outlined in the table.

4. Simulations and Results

4.1. Constant Frequency Cosine Source

Sometimes, the original signal becomes so entangled with noise that the two are thoroughly blended and challenging to distinguish. In the literature, barrage jamming signals, deliberately emitted by an adversary’s jammer, are usually assumed to be Gaussian distributed. For this reason, in this paper, we will evaluate the results by adding Gaussian white noise to the original signal. Figure 5a depicts the spectrogram of the received signal with an SNR of zero dB, where the power of the signal equals that of the noise, in the sine and cosine channels. The horizontal axis represents the time, while the vertical axis represents the frequency. Notably, the frequency of the signal remains constant at 2 kHz throughout the observed time span.
In the processing phase, two critical factors for discretization are the choice of windowing and the number of samples. In this context, we utilized 5000 samples and implemented the Hamming window for windowing. By employing the relationships detailed in Section 3 and deriving the active intensity, the azigram matrix is obtained. This matrix, depicted in Figure 6b, represents the estimated angle index, with the axes denoting the time and frequency. The color bar on the right side of the figure indicates the estimated angle range in degrees. Assuming the transmitter is positioned at an angle of 45 degrees, the figure’s indicator reveals that at 83.2 milliseconds and a frequency of 2 kHz, the estimated angle is 44.7205 degrees. This accuracy is noteworthy, particularly considering the SNR of zero dB. However, our focus is specifically on points corresponding to the 2 kHz frequency. Through the application of an appropriate filter or threshold, more precise values can be attained, as illustrated in Figure 7.
Different thresholding methods can be used to extract the final angle from this figure; a detailed comparison of them is beyond the scope of this article and could be the subject of separate work. The hard thresholding method, grounded in experimental tests, proves to be computationally efficient. Figure 7b clearly demonstrates that the angle indicated at a specific time and frequency aligns with the estimated angle in Figure 6b. For a more in-depth analysis of the obtained results, Figure 8 is presented. In this figure, the stars represent the angles obtained at each moment by the algorithm. Because of various environmental and hardware factors, these calculations may contain a slight error; considering a time window therefore helps ensure the accuracy of the final result. The blue area marks the range in which the obtained angles are most densely concentrated, and after the end of the time window, the average of the points is announced to the user as the final target angle. The orange line denotes the mean values derived from the blue stars. Essentially, within each 500-millisecond time window, the average is presented to the system user as the estimated angle. This temporal evaluation serves to assess the algorithm’s performance over time and to ensure accurate results through averaging over more data.
In Figure 8, it is evident that at SNR = 10 dB, the standard deviation area spans from approximately 44.5 to 46 degrees, with an average of 45.2044 degrees; at SNR = 0 dB, the area spans 42.5 to 48.7 degrees, with an average of 45.425 degrees. In other words, over each 500-millisecond interval, the system reports 45.2044 degrees (SNR = 10 dB) or 45.425 degrees (SNR = 0 dB) to the end-user as the momentary position of the target. In the subsequent parts of the simulation, as the input signal changes, the results are obtained in the same way.
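The 500-millisecond averaging described above can be sketched as follows; the per-frame estimate rate and noise spread below are hypothetical stand-ins for the algorithm's raw outputs, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the algorithm's raw output: 100 bearing
# estimates per second, scattered around a true bearing of 45 degrees
rate = 100
t = np.arange(0, 2.0, 1 / rate)
bearings = 45.0 + 1.5 * rng.standard_normal(t.size)

win = int(0.5 * rate)   # 500 ms averaging window
finals = [float(bearings[i:i + win].mean())
          for i in range(0, bearings.size, win)]
spread = [float(bearings[i:i + win].std())
          for i in range(0, bearings.size, win)]
# 'finals' holds the values that would be announced to the operator every
# 500 ms; 'spread' corresponds to the standard deviation band in Figure 8
```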

4.2. Time-Varying Frequency Source

In certain scenarios, variable frequency sources may be encountered. To illustrate the outcomes of the proposed method under such conditions, we conducted a simulation employing a cosine signal with a frequency that changes over time. The results at an SNR equal to zero dB are presented in Figure 9. Figure 9a displays the spectrogram of the signal received by the omnidirectional, sine, and cosine channels, showcasing the dynamic frequency changes over time. By deriving the azigram matrix and subsequently applying an appropriate filter, the presence angle of the source becomes distinctly identifiable (refer to Figure 10).
Similarly to the previous example, we scrutinized the algorithm’s performance in more detail at SNRs of 0 and 10 dB to assess its accuracy. Figure 11 presents the standard deviation and mean of the estimates over a time window of 500 milliseconds. By examining the vertical axis, it becomes straightforward to compare the accuracy of the algorithm in different scenarios.

4.3. Multipath Effect

In this section, the effect of a multipath on the radiated signals from the source is assessed. The effect arises from the interaction of the source signal with surfaces such as the floor and water, as well as other creatures and objects, introducing errors in detecting the true angle of the source. The simulations in this section are conducted under certain assumptions:
  • The speed of sound propagation in the entire environment is considered constant.
  • As the speed of sound is assumed constant, all the propagation paths consist of straight lines between the source, upper and lower boundaries, and the receiver.
  • There is always a direct path assumed.
  • For each propagation path, the time delay, gain, Doppler coefficient, reflection loss, and area-dependent propagation loss are assumed to be known.
Figure 12 illustrates the types of propagation paths and the locations of the receiver (DIFAR sensor) and the transmitter (target).
In the simulations, the speed of sound propagation underwater is set to 1530 m/s, and the signal is assumed to take five paths. The losses incurred by hitting any surface and the water bottom are considered to be 5 dB. The depth of the acoustic channel created is 200 m, surrounded by the water surface and bottom. Initially, we examine the results with the specified assumptions, considering a stationary signal source. Subsequently, the impact of both the motion of the source and a stationary receiver is reported. It is important to note that the primary focus of this section is to utilize the output signal from the multipath channel. Therefore, the specific factors influencing the multipath effect or how these effects occur are beyond the scope of this paper. To assess the channel performance, the signals are examined in various scenarios by applying a pure signal, disregarding the influence of noise. Initially, we showcase the passage of a pure signal through the channel and observe its amplitude and phase changes post-channel traversal. Subsequently, we assume that 5, 10, and 20 signals pass through this channel, respectively. Figure 13 and Figure 14 display the sum of 5 input signals in one color, the sum of 10 signals passing through the channel in another color, and the sum of 20 signals passing through the channel in yet another color.
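Under the stated assumptions (constant sound speed, straight-line paths, known delays, and a 5 dB loss per boundary hit), such a multipath channel can be sketched as a sum of delayed, attenuated copies of the source signal; the path lengths and bounce counts below are hypothetical illustrations, with only the 1530 m/s speed and the 5 dB loss taken from the text.

```python
import numpy as np

c, fs, f0 = 1530.0, 48_000, 2_000.0   # speed of sound per the simulation
t = np.arange(int(0.5 * fs)) / fs
src = np.cos(2 * np.pi * f0 * t)

# Hypothetical straight-line path lengths (m) and boundary-bounce counts,
# direct path first
paths = [70.0, 210.0, 260.0, 410.0, 470.0]
bounces = [0, 1, 1, 2, 2]

rx = np.zeros_like(src)
for length, b in zip(paths, bounces):
    delay = int(round(length / c * fs))   # propagation delay in samples
    gain = 10 ** (-5.0 * b / 20.0)        # 5 dB loss per surface/bottom hit
    rx[delay:] += gain * src[:src.size - delay]
```

The receiver output `rx` is silent until the direct path arrives and thereafter carries the superposition whose amplitude and phase deviations from the direct-path signal are what Figures 13 and 14 visualize.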
In Figure 13a, we assume that the transmitter and receiver shown in Figure 12 are located approximately 70 m apart; we then increase this distance to 700 m. Finally, in Figure 14, we illustrate the effect of the channel on the amplitude and phase of the signals when the transmitter and receiver are approximately 70 m apart and the transmitter is moving at a speed of 20 km/h. The blue curve in the figure represents the signal traveling along the direct path; comparing this curve with those depicting the sums of signals affected by the multipath channel reveals the impact of the channel. Comparing Figure 13a,b, which show the sums of the signals received for different numbers of paths, makes it apparent that increasing the distance delays the signals arriving at the receiver, leading to changes in amplitude and phase.
In Figure 14, the effect of altering the transmitter’s speed on the amplitude and phase of the signals collected by the receiver is depicted. This figure reveals the performance of the multipath channel and the amplitude and phase changes induced by the channel compared to a direct path. Subsequently, we examine the algorithm implemented in this channel. A cosine signal with a frequency of 2 kHz is transmitted to the receiver. The objective is to estimate the azimuth angle using the algorithm introduced in Section 3. The source angle is assumed to be 45 degrees, and five paths are considered. This implies that the receiver receives the superposition of five signals that have traversed different paths with applied losses. By utilizing the relationships outlined in Section 3 and obtaining the active intensity of the sine and cosine channels (Figure 15a), the azigram matrix is constructed (Figure 15b). The index in this matrix represents the estimated angle, with the horizontal axis representing the time and the vertical axis representing the frequency. Similarly to the previous examples, the outputs of the thresholding filter are depicted in Figure 16.
As in the previous sections, the performance of this algorithm is examined with respect to the noise level in the environment and the variation in the SNR; the results are displayed in Figure 17. Given the relative positions of the source and receiver, an angle of 45 degrees is assumed, and since the sound source is stationary, this angle does not change over time. The orange line in the figure represents the true angle of the source relative to the receiver coordinate system, which remains constant over time. The blue stars denote the angle estimates generated by the algorithm, and the density of these points is summarized by the standard deviation area indicated by the blue box. The average of the estimated points, presented to the user as the final estimate, is depicted with an orange dotted line. As is evident in this figure, at SNR = 10 dB, the standard deviation area ranges from approximately 44.2 to 45.8 degrees, with an average of 45.023 degrees; at SNR = 0 dB, it extends from about 41.9 to 48.1 degrees, with an average of 44.9446 degrees. In other words, over a 500-millisecond window, the system reports 45.023 degrees as the final estimate at SNR = 10 dB and 44.9446 degrees at SNR = 0 dB.
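The windowed averaging described above (a single mean reported per 500 ms window, with the standard deviation as the accuracy measure) can be sketched as follows. This is a hedged illustration, not the authors' implementation: the per-bin estimate values and the 0.5-degree spread are assumptions.

```python
import numpy as np

def window_estimate(angles_deg):
    """Return the mean (the final reported bearing) and standard deviation
    (the accuracy measure) of the azigram angle estimates in one window."""
    a = np.asarray(angles_deg, dtype=float)
    return a.mean(), a.std()

# Hypothetical per-bin estimates scattered around the true 45-degree
# bearing; the 0.5-degree spread is an illustrative assumption.
rng = np.random.default_rng(0)
est = 45.0 + 0.5 * rng.standard_normal(100)
mean_deg, std_deg = window_estimate(est)
```

The mean is the single value shown to the user, while the standard deviation delimits the blue box drawn around the estimates in Figures 8, 11, 17, 20 and 23.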

4.4. Single-Frequency Moving Cosine Source

Now, let us examine the scenario in which the same source moves at a speed of 20 km/h while producing a cosine signal with a constant frequency. Owing to the mobility of the source and the resulting Doppler effect in the channel, the waveforms in the different channels are as depicted in Figure 18b.
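The frequency shift that the source's motion imposes on the received tone can be approximated with the classical Doppler relation. In this sketch, the sound speed c = 1500 m/s is an assumed nominal value for seawater; the 20 km/h speed and 2 kHz tone follow the scenario above.

```python
c = 1500.0            # underwater sound speed (m/s), assumed nominal value
v = 20 / 3.6          # source speed: 20 km/h converted to m/s
f0 = 2000.0           # emitted cosine frequency (Hz)

# Stationary receiver, moving source: f_obs = f0 * c / (c -/+ v_radial).
f_approaching = f0 * c / (c - v)   # source closing on the receiver
f_receding = f0 * c / (c + v)      # source moving away
```

At these values the shift is only a few hertz, which is why it appears in the azigram as a gradual colour drift rather than a jump.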
Taking into account the signal power loss at each collision with the channel's surface or floor, set at 5 dB in this simulation, a set of five signals is generated according to the number of paths, as shown in Figure 18a. Clearly, a signal reaching the receiver later has traveled a longer path and incurred greater losses, as is evident in this figure. Finally, the sum of these signals is introduced to the system as the input p0 and, after the coefficients and noise are applied, it enters the receivers. As before, we obtain the active intensity and display the azigram matrix at an SNR of 0 dB, as depicted in Figure 19. Owing to the movement of the source and the consequent change in its position relative to the receiver, the angle also varies over time; in this figure, the colour change at the frequency of 2 kHz over time signifies the angle change.
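A minimal sketch of the superposition just described follows. The per-path delays and bounce counts are illustrative assumptions, not the simulated channel geometry; the 5 dB loss per collision follows the simulation.

```python
import numpy as np

fs = 48_000                      # sampling rate (Hz), assumed
t = np.arange(0, 0.5, 1 / fs)    # 500 ms of signal
f0 = 2000.0                      # source frequency (Hz)

delays_s = [0.0, 0.004, 0.009, 0.015, 0.022]  # per-path delays (assumed)
bounces = [0, 1, 2, 3, 4]                     # surface/floor collisions per path

p0 = np.zeros_like(t)
for tau, n in zip(delays_s, bounces):
    gain = 10 ** (-5 * n / 20)   # 5 dB amplitude loss per collision
    # Each copy is delayed by tau and silent before it arrives.
    p0 += gain * np.cos(2 * np.pi * f0 * (t - tau)) * (t >= tau)
```

The summed p0 is what enters the system as the omnidirectional input before the directional coefficients and noise are applied.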
However, to assess the system’s performance in the presence of varying levels of noise, we also examine the system’s response under different SNR conditions, the results of which are shown in Figure 20. In this figure, as previously mentioned, the angle changes over time, starting from 45 degrees due to the source’s initial position.
The orange line illustrates the source initially at a 45-degree angle; then, owing to its speed and changing position, the angle drifts. It is observed that at SNR = 0 dB, the standard deviation ranges from about 42.9 to 49 degrees, with an average of 45.8648 degrees; at SNR = 10 dB, it ranges from 44.4 to 46.6 degrees, with an average of 45.49 degrees.

4.5. Cosine Signal with Variable Frequency

In this section, the sound signal emitted by a cosine source with a frequency that changes over time will be analyzed. The variation in the frequency can indicate changes in the speed of movement of the source. Additionally, examining the system’s response to different frequencies is crucial for evaluating its efficiency. Initially, we assume the source is stationary. Subsequently, we will analyze the system’s performance while considering the SNR.
Figure 21 displays the signals, considering the impact of the multipath and frequency changes over time. These signals, mixed with noise, reach the receiver in the time domain. As anticipated, due to the predetermined number of paths (five paths), five signals are evident.
After applying the relationships mentioned above and deriving the azigram matrix, Figure 22 is obtained by applying an appropriate filter. As depicted in this figure, the frequency changes over time, but the angle is still estimated correctly, with an acceptable error. Note that the colour index in this figure represents the angle (approximately 45 degrees in this case), the x-axis represents time, and the y-axis represents frequency. Figure 23 illustrates the impact of the SNR and the variation in the estimation accuracy within each window.
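The thresholding filter applied to the azigram can be sketched as a simple power mask: time-frequency bins whose omnidirectional power falls below a threshold are blanked, so only bins dominated by the source retain their angle. The 50%-of-peak rule and the random placeholder matrices are assumptions for illustration.

```python
import numpy as np

# Placeholder matrices stand in for the real STFT power and azigram.
rng = np.random.default_rng(1)
power = rng.random((64, 32))             # |STFT|^2 per (frequency, time) bin
azigram = rng.uniform(0, 360, (64, 32))  # arctan-based angle per bin (degrees)

mask = power >= 0.5 * power.max()        # keep only the strongest bins
filtered = np.where(mask, azigram, np.nan)  # masked bins are left blank
```

Blanked bins are simply not drawn, which is why the filtered azigrams show the source angle only along the source's frequency track.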

5. Conclusions and Future Works

In this paper, the DDP operation with the azigram matrix was described. It was demonstrated that by knowing the active sound intensity of the sound field and applying the inverse tangent to its real part, this matrix can be obtained. We introduced the axes of this matrix, which were displayed by the spectrogram, and we highlighted that announcing the time and frequency of the source simultaneously is one of the main advantages of this method, enabling the detection of multiple sources concurrently. Different signals with varying SNRs were used as input to the system, and their results were presented both as a time–frequency screen and as a two-dimensional linear display.
Considering the performance of the introduced algorithm, we simulated a multipath channel in MATLAB (R2020b) and evaluated the passage of the source signal through this channel. Within a time window of 500 milliseconds, a single value, the average of all the points obtained from processing the azigram matrix, is presented to the user. The region of highest estimate density in the different situations was reported as the standard deviation, serving as a measure of the measurement accuracy of this system. With all the mentioned details provided, the determination of the definite angle of multiple sources by this algorithm in the final linear output, along with details of the source type, is an aspect that few articles have addressed thus far. The first step in our future work involves examining the prototype of the built sonobuoy, applying the DDP algorithm to it, and comparing the results with those of other methods and algorithms.
In future studies, the factors and methods affecting the signals sent from the device to the receiver will be discussed. Given the construction of our specialized device, our goal in subsequent articles is to address real-world data and the numerous influencing factors, such as jamming effects and the sound-wave propagation environment, including the temperature, pressure, and water salinity. The ultimate goal is to demonstrate the effective performance of our device by comparing the obtained signals with those of similar devices and other algorithms. Additionally, the authors intend to explore the type of source by employing new methods such as machine learning and artificial intelligence, aiming to discern the details of underwater sources from their distinguishing features. In essence, we are striving to transform a DIFAR sonobuoy into an accurate and clear 'human eye' in the underwater environment.

Author Contributions

Methodology, A.N., B.Z. and A.M.M.; Software, A.N.; Validation, B.Z. and A.M.M.; Formal analysis, B.Z.; Investigation, A.N.; Writing—original draft, A.N.; Writing—review & editing, A.N., B.Z. and A.M.M.; Visualization, A.N.; Supervision, B.Z. and A.M.M.; Project administration, B.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lu, F.; Milios, E.; Stergiopoulos, S.; Dhanantwari, A. New towed-array shape-estimation scheme for real-time sonar systems. IEEE J. Ocean. Eng. 2003, 28, 552–563. [Google Scholar] [CrossRef]
  2. Braca, P.; Willett, P.; LePage, K.; Marano, S.; Matta, V. Bayesian tracking in underwater wireless sensor networks with port-starboard ambiguity. IEEE Trans. Signal Process. 2014, 62, 1864–1878. [Google Scholar] [CrossRef]
  3. Mackinson, S.; Freeman, S.; Flatt, R.; Meadows, B. Improved acoustic surveys that save time and money: Integrating fisheries and ground-discrimination acoustic technologies. J. Exp. Mar. Biol. Ecol. 2004, 305, 129–140. [Google Scholar] [CrossRef]
  4. Molaei, A.M.; Zakeri, B.; Andargoli, S.M.H. Two-dimensional DOA estimation for multi-path environments by accurate separation of signals using k-medoids clustering. IET Commun. 2019, 13, 1141–1147. [Google Scholar] [CrossRef]
  5. Tervo, S. Direction estimation based on sound intensity vectors. In Proceedings of the 2009 European Signal Processing Conference (EUSIPCO 2009), Glasgow, UK, 24–28 August 2009; pp. 700–704. [Google Scholar]
  6. Abdi, A.; Guo, H. Signal correlation modeling in acoustic vector sensor arrays. IEEE Trans. Signal Process. 2009, 57, 892–903. [Google Scholar] [CrossRef]
  7. Song, Y.; Wong, K.T. Three-dimensional localization of a near-field emitter of unknown spectrum using an acoustic vector sensor. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 1035–1041. [Google Scholar] [CrossRef]
  8. Thode, A.M.; Sakai, T.; Michalec, J.; Rankin, S.; Soldevilla, M.S.; Martin, B.; Kim, K.H. Displaying bioacoustic directional information from sonobuoys using azigrams. J. Acoust. Soc. Am. 2019, 146, 95–102. [Google Scholar] [CrossRef] [PubMed]
  9. Terracciano, D.S.; Costanzi, R.; Manzari, V.; Stifani, M.; Caiti, A. Passive Bearing Estimation Using a 2-D Acoustic Vector Sensor Mounted on a Hybrid Autonomous Underwater Vehicle. IEEE J. Ocean. Eng. 2022, 47, 799–814. [Google Scholar] [CrossRef]
  10. Yeo, H.G.; Choi, J.; Jin, C.; Pyo, S.; Roh, Y.; Choi, H. The Design and Optimization of a Compressive-Type Vector Sensor Utilizing a PMN-28PT Piezoelectric Single-Crystal. Sensors 2019, 19, 5155. [Google Scholar] [CrossRef] [PubMed]
  11. Cao, J.; Liu, J.; Wang, J.; Lai, X. Acoustic vector sensor: Reviews and future perspectives. IET Signal Process. 2017, 11, 1–9. [Google Scholar] [CrossRef]
  12. Zhang, H.; Chen, H.-J.; Wang, W.-Z. An underwater acoustic vector sensor with high sensitivity and broad band. Sens. Transducers 2014, 170, 30. [Google Scholar]
  13. Silvia, M.T.; Richards, R.T. A theoretical and experimental investigation of a low-frequency acoustic vector sensor. In Proceedings of the Oceans 2002, Biloxi, MS, USA, 29–31 October 2002. [Google Scholar]
  14. Kalgan, A.; Bahl, R.; Kumar, A. Studies on underwater acoustic vector sensor for passive estimation of direction of arrival of radiating acoustic signal. Indian J. Geo-Mar. Sci. 2015, 44, 213–219. [Google Scholar]
  15. Felisberto, P.; Santos, P.; Jesus, S.M. Tracking source azimuth using a single vector sensor. In Proceedings of the 2010 Fourth International Conference on Sensor Technologies and Applications, Venice, Italy, 18–25 July 2010. [Google Scholar]
  16. Gabrielson, T.B. Design problems and limitations in vector sensors. In Proceedings of the Workshop on Directional Acoustic Sensors, Newport, RI, USA, 17–18 April 2001. [Google Scholar]
  17. Kotus, J.; Szwoch, G. Calibration of acoustic vector sensor based on MEMS microphones for DOA estimation. Appl. Acoust. 2018, 141, 307–321. [Google Scholar] [CrossRef]
  18. Liu, S.; Lan, Y.; Li, Q. Design of Underwater Acoustic Vector Sensor and its Elastic Suspension Element. In Applied Mechanics and Materials; Trans Tech Publications, Ltd.: Seestrasse, Switzerland, 2015; Volume 713, pp. 569–572. [Google Scholar]
  19. Kumar, B.; Kumar, A.; Bahl, R. Performance Analysis of Highly Directional Acoustic Vector Sensor for Underwater Applications. In Global Oceans 2020: Singapore–US Gulf Coast; IEEE: Biloxi, MS, USA, 2020; pp. 1–5. [Google Scholar]
  20. Sherman, C.H.; Butler, J.L. Transducers and Arrays for Underwater Sound; Springer: New York, NY, USA, 2016. [Google Scholar]
  21. Besant, W.H. Elementary Hydrostatics; Deighton, Bell and Company: Cambridge, UK, 1890. [Google Scholar]
  22. Akyildiz, I.F.; Pompili, D.; Melodia, T. Underwater acoustic sensor networks: Research challenges. Ad Hoc Netw. 2005, 3, 257–279. [Google Scholar] [CrossRef]
  23. Maranda, B.H. The statistical accuracy of an arctangent bearing estimator. In Proceedings of the Oceans 2003. Celebrating the Past… Teaming Toward the Future (IEEE Cat. No. 03CH37492), San Diego, CA, USA, 22–26 September 2003; IEEE: Piscataway, NJ, USA, 2003; Volume 4, pp. 2127–2132. [Google Scholar]
  24. Nehorai, A.; Paldi, E. Acoustic vector-sensor array processing. IEEE Trans. Signal Process. 1994, 42, 2481–2491. [Google Scholar] [CrossRef]
  25. Su, X.; Ullah, I.; Liu, X.; Choi, D. A review of underwater localization techniques, algorithms, and challenges. J. Sens. 2020, 2020, 6403161. [Google Scholar] [CrossRef]
  26. Qu, F.; Wang, S.; Wu, Z.; Liu, Z. A survey of ranging algorithms and localization schemes in underwater acoustic sensor network. China Commun. 2016, 13, 66–81. [Google Scholar]
  27. Sejdić, E.; Djurović, I.; Jiang, J. Time–frequency feature representation using energy concentration: An overview of recent advances. Digit. Signal Process. 2009, 19, 153–183. [Google Scholar] [CrossRef]
  28. Fahy, F.J. Sound Intensity, 2nd ed.; CRC Press: Boca Raton, FL, USA, 1995. [Google Scholar]
  29. Heyser, R.C. Instantaneous intensity. J. Acoust. Soc. Am. 1986, 80, S103. [Google Scholar] [CrossRef]
  30. Davies, S. Bearing accuracies for arctan processing of crossed dipole arrays. In Proceedings of the OCEANS’87, Halifax, NS, Canada, 28 September–1 October 1987; IEEE: Piscataway, NJ, USA, 1987; pp. 351–356. [Google Scholar]
Figure 1. Incident wave on two hydrophones, 1 and 2, in the end-capped cylindrical tube receiving plane wavefronts w1 and w2 [20].
Figure 2. End-mounted accelerometers for net force [20].
Figure 3. (a) The pattern formed by each of the piezoelectrics alone. (b) Placement of the cross piezoelectric and central hydrophone in a DIFAR sonobuoy.
Figure 4. Block diagram corresponding to the steps of the DDP method. * indicates the complex conjugate.
Figure 5. (a) Depicts a fixed-frequency cosine signal received by omnidirectional, sine, and cosine receivers. (b) Illustrates the short-time Fourier transform (STFT) spectrum of different channels for a single-frequency cosine signal at SNR = 0 dB.
Figure 6. (a) Estimation of the angle of 45 degrees of a single-frequency cosine signal using the spectrum of the spectrogram at SNR = 0 dB. (b) Active sound intensity spectrum of the sine and cosine channels.
Figure 7. (a) Active sound intensity spectrum of the sine and cosine channels SNR = 0 dB. (b) Filtered azigram estimation of the 45-degree angle of a single-frequency cosine signal using the STFT spectrogram. The unit of the color bar in the figure on the right is degrees.
Figure 8. Display of the standard deviation area and estimated points by the algorithm in the SNRs of 0 and 10 dB.
Figure 9. (a) Depicts the short-time Fourier transform (STFT) spectrum of different channels corresponding to a variable frequency source. (b) Shows the signals received in different channels in the time domain. SNR = 0 dB.
Figure 10. (a) Illustrates the filtered azigram of a cosine signal with variable frequency using the STFT spectrogram. (b) Represents the active sound acoustic spectrum of the sine and cosine channels. SNR = 0 dB.
Figure 11. Display of the standard deviation area and estimated points by the algorithm at SNRs of 0 and 10 dB for a variable frequency signal.
Figure 12. Multipath effect in a channel with a depth of 200 m, assuming a constant propagation speed.
Figure 13. (a) The speed of the transmitter is 0 km/h and the approximate distance between them is 70 m. (b) The speed of the transmitter is 0 km/h and the approximate distance between them is 700 m.
Figure 14. The speed of the transmitter is 20 km/h and the distance between them is 60 m.
Figure 15. (a) Represents the active sound intensity spectrum of the sine and cosine channels. (b) Illustrates the estimation of the angle of 45 degrees for a single-frequency cosine signal in a multipath channel, utilizing the spectrum of the spectrogram. SNR = 0 dB. The unit of the color bar in the figure on the right is degrees.
Figure 16. (a) Active audio intensity spectrum of the sine and cosine channels, SNR = 0 dB. (b) Filtered azigram estimation of the angle of 45 degrees of a single frequency cosine signal in a multipath channel and using the STFT spectrogram.
Figure 17. The area of standard deviation compared to the mean, the blue points estimated by the algorithm and the main angle in the SNRs of 0 and 10 dB.
Figure 18. (a) Single-frequency cosine signals from a moving source passed through a multipath channel. (b) Time signals that arrived at omnidirectional, sinusoidal and cosine receivers from a moving source with a fixed frequency cosine signal in a multipath channel. Different colors represent different signals.
Figure 19. (a) Active sound intensity spectrum of the sine and cosine channels at SNR = 0 dB. (b) Filtered azigram: estimation of the angle of 45 degrees for a single-frequency cosine signal from a moving source in a multipath channel using the STFT spectrogram. The unit of the color bar in the figure on the right is degrees.
Figure 20. This figure illustrates the standard deviation area relative to the mean, the blue points estimated by the algorithm, and the principal angle for the SNR of 0 and 10 dB. These results are obtained from a moving source emitting a constant frequency cosine signal in a multipath channel.
Figure 21. Signals that passed through a multipath channel from a source with a cosine signal and time-varying frequency.
Figure 22. (a) Active sound intensity spectrum of the sine and cosine channels. (b) Filtered azigram: estimation of the 45-degree angle of a cosine signal with variable frequency and a stationary source in a multipath channel using the STFT spectrogram.
Figure 23. The area of standard deviation relative to the mean, the blue points estimated by the algorithm, and the principal angle in the SNR of 0 and 10 dB from a stationary source with a cosine signal and a time-varying frequency in a multipath channel.
Table 1. Traditional underwater localization algorithms.

| Range-Based Algorithms | Range-Free Algorithms |
|---|---|
| TDOA, TOA, AOA, RSSI | HOP, area-based, centroid |
Table 2. Analysis and comparison of underwater localization algorithms used in underwater sensors.

| Localization Algorithm | Methodology | Advantages | Drawbacks/Issues |
|---|---|---|---|
| TOA | Acoustic; targets must be synchronized | Most frequently used for underwater sensors | Time synchronization is required |
| TDOA | Known transmission time | Does not depend on the transmission time of the source | High cost and energy consumption |
| AOA | Based on the arrival angles | All unknown nodes can detect incident signal angles | Ultrasound receiver increases the cost |
| RSSI | Depends on the strength of the received signal and the path-loss impact | Applicable in asynchronous scenarios | Loss caused by multipath fading |
| This work (based on DDP) | Uses the STFT and the azigram matrix for angle detection | Simultaneous detection of the target's frequency and the time of its presence | Low accuracy in detecting targets with the same frequency that appear simultaneously |
Table 3. Implementation steps of the DDP algorithm.

| No. | Description | Equations |
|---|---|---|
| 1 | Introduce the simulated signals as the input p0 to the system. This signal can be a single-frequency or variable-frequency cosine. Channel restrictions, such as multipath effects, are applied at this stage; the collective multipath signal then enters the system as p0. | |
| 2 | Given the SNR considered in the simulation, apply Gaussian white noise to the signal p0 of the omnidirectional, sine and cosine channels to generate p, xs and xc. | (4)–(6) |
| 3 | Choose a suitable window (e.g., Blackman–Harris to reduce side lobes, or alternatives such as Hanning or Hamming) and a suitable window size to obtain the desired frequency or time resolution. | |
| 4 | Apply the STFT to the omnidirectional, sine and cosine channels according to the points mentioned in Step 3. | (7)–(9) |
| 5 | Multiply the omnidirectional STFT by the complex conjugates of the sine- and cosine-channel STFTs obtained in Step 4 and take the real part to obtain the active sound intensity of the sine and cosine channels. | (10) and (11) |
| 6 | Apply the inverse tangent to the active sound intensities of the sine and cosine channels to obtain the azigram matrix. | (14) |
| 7 | Apply a threshold or filter to the values in the azigram matrix to better reveal the angle of the underwater source. | |
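The steps of the DDP algorithm can be sketched end to end as follows. This is a hedged illustration, not the authors' MATLAB implementation: the sampling rate, noise level, window size and the 50%-of-peak threshold are assumed values; a Blackman window stands in for the Blackman–Harris window named in Step 3, and the variable names (p, xs, xc) follow the table.

```python
import numpy as np

def stft_mat(x, nperseg=1024):
    """Plain-numpy STFT: windowed, half-overlapping frames -> (freq, time)."""
    w = np.blackman(nperseg)
    hop = nperseg // 2
    frames = [x[i:i + nperseg] * w
              for i in range(0, len(x) - nperseg + 1, hop)]
    return np.fft.rfft(np.asarray(frames), axis=1).T

fs, f0, theta = 48_000, 2000.0, np.deg2rad(45)   # assumed scenario
t = np.arange(0, 0.5, 1 / fs)
p0 = np.cos(2 * np.pi * f0 * t)                  # Step 1: simulated input

rng = np.random.default_rng(0)
noise = lambda: 0.1 * rng.standard_normal(t.size)
p = p0 + noise()                                 # Step 2: omnidirectional channel
xs = p0 * np.sin(theta) + noise()                #         sine (dipole) channel
xc = p0 * np.cos(theta) + noise()                #         cosine (dipole) channel

P, Xs, Xc = stft_mat(p), stft_mat(xs), stft_mat(xc)   # Steps 3-4

Is = np.real(P * np.conj(Xs))                    # Step 5: active intensity
Ic = np.real(P * np.conj(Xc))

azigram = np.degrees(np.arctan2(Is, Ic))         # Step 6: arctan method

power = np.abs(P) ** 2                           # Step 7: threshold, then average
mask = power >= 0.5 * power.max()
est = azigram[mask].mean()                       # single reported bearing
```

Because the omnidirectional phase cancels in the cross-products, the arctan of the two active intensities recovers the bearing directly, and the surviving bins cluster near the true 45 degrees.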
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
