Article

An Angle Recognition Algorithm for Tracking Moving Targets Using WiFi Signals with Adaptive Spatiotemporal Clustering

Liping Tian, Liangqin Chen, Zhimeng Xu and Zhizhang Chen
1 School of Physics and Information Engineering, Fuzhou University, Fuzhou 350008, China
2 Department of Electrical and Computer Engineering, Dalhousie University, Halifax, NS B3J 1Z1, Canada
* Author to whom correspondence should be addressed.
Sensors 2022, 22(1), 276; https://doi.org/10.3390/s22010276
Submission received: 30 November 2021 / Revised: 28 December 2021 / Accepted: 29 December 2021 / Published: 30 December 2021
(This article belongs to the Section Navigation and Positioning)

Abstract

An angle estimation algorithm for tracking indoor moving targets with WiFi is proposed. First, phase calibration and static path elimination are performed on the collected channel state information (CSI) signals from different antennas. Then, the angle of arrival information is obtained with a joint estimation algorithm of the angle of arrival (AOA) and time of flight (TOF). To deal with multipath effects, we adopt the DBscan spatiotemporal clustering algorithm with adaptive parameters. In addition, time-continuous angle of arrival information is obtained by interpolating and supplementing points to better extract the dynamic signal paths. Finally, the least-squares method is used for linear fitting to obtain the final angle information of a moving target. Experiments are conducted with the tracking data set of Tsinghua’s Widar 2.0. The results show that the average angle estimation error of the proposed algorithm is smaller than that of Widar2.0: the average angle error is about 7.18° in the classroom environment, 3.62° in the corridor environment, and 12.16° in the office environment, all smaller than the errors of the existing system.

1. Introduction

In recent years, indoor positioning technology has been developed and applied in many areas, and its commercial profits reached USD 10 billion in 2020 [1]. For example, it can help locate patients in a hospital and diagnose depression, mania, and so on. In home care and supervision of children, it can be used to monitor abnormal behaviors. In large warehouses, it can locate goods and valuables. It can also help rescue workers find trapped people in time in sudden disasters in industrial areas. As a result, various indoor positioning technologies have been developed. For example, the indoor positioning technology based on Bluetooth has been proposed [2,3], although its application is usually limited to a small range of about ten meters. The indoor positioning technology using ultrasonic waves has been presented in [4,5]. Nevertheless, indoor multipath has a great influence on positioning accuracy. An ultrasonic wave is susceptible to ambient temperature and the Doppler effect. The ultra-wideband technology has also been used for indoor positioning [6,7], but it is relatively expensive and has not been widely applied. The RFID-based indoor positioning technology has also been described [8,9,10], and it usually has a poor anti-jamming ability. Ref. [11] introduces an FPGA implementation of the position evaluation algorithm based on the TDOA principle. With the prevalence of WiFi signals, using them for indoor positioning has been studied and developed [12,13,14].
WiFi positioning and tracking algorithms can be divided into two types. The first is active positioning or tracking, such as SpotFi [15], WiCapture [16,17], and MilliBack [18]; however, these are inconvenient because they require people to carry devices with them.
The second technology is passive positioning or tracking. There are two main passive tracking algorithms: (1) fingerprint-based tracking algorithms [19,20,21,22] and (2) parameter-based indoor tracking algorithms. Fingerprint-based tracking algorithms collect a large number of samples in advance and use them for training an algorithm. They require a lot of energy and resources. In addition, they are highly dependent on the environments: the algorithms need to be recalibrated and retrained every time the environments change. On the other hand, the parameter-based approach does not require training and is independent of the environment but more computationally intensive.
The AOA is a key positioning parameter, and many algorithms have been developed for estimating it. SpotFi [15] collects the CSI of WiFi signals and then applies a joint AOA-TOF estimation algorithm to estimate the angle of a moving target. Dynamic-MUSIC [23] applies the MUSIC algorithm to the joint estimation of AOA and TOF. Widar 2.0 [24] uses a multiparameter estimation algorithm: using a single RX-TX link (with three receiving antennas at the receiving end), the amplitude, TOF, AOA, and Doppler velocity of the moving target signal can be estimated simultaneously, achieving a median error of 0.75 m within an 8 m range. Because a four-dimensional search is required, the expectation-maximization algorithm is used to reduce the number of searches.
A static path refers to a signal reflected from stationary objects (such as furniture, walls, floors, etc.) to the receiver, whereas a dynamic path is a signal that is scattered by a moving target before arriving at the receiving end. In general, one needs to identify whether an estimated AOA belongs to a static or a dynamic path [15]. In contrast to other angle estimation algorithms, this paper applies static path elimination first, which removes the need to distinguish static path angles from dynamic path angles in the subsequent analysis. Moreover, the above-mentioned methods use instantaneous signals to estimate the AOA and do not consider the history of the measured AOA of a moving target. To take advantage of this history, this paper proposes an angle estimation algorithm that uses the past AOA information. In addition, phase recalibration and static path elimination are performed on the AOA-related CSI signal to remove the interference of static or stationary objects. The DBscan spatiotemporal clustering algorithm is also adopted to mitigate the multipath problem. Finally, the least-squares method is used for linear fitting to obtain the final angle information of the moving target.
In short, the main contributions of this paper are:
(1)
Phase calibration and static path elimination are performed on the collected CSI signals, and then AOA and TOF are jointly used for the AOA estimations;
(2)
A DBscan spatiotemporal clustering algorithm with adaptive parameter adjustment is proposed to reduce multipath effects;
(3)
Least-squares linear fitting is introduced and applied to supplement and finalize the AOA results.
Note that in our studies, we use the tracking data set of Tsinghua’s Widar2.0 to test the proposed algorithm. This data set includes 24 trajectories in classrooms, offices, and corridors, with about 1700 angle measurements.
This article will discuss it in detail in the Materials and Methods (Section 2), Results (Section 3), and Conclusions (Section 4).

2. Materials and Methods

In the following subsections, the proposed algorithm is elaborated. It includes CSI model building, phase calibration and static path elimination, the joint AOA-TOF angle estimation algorithm, the spatiotemporal clustering algorithm, and the least-squares linear fitting, as shown in Figure 1.

2.1. CSI Model

As described before, the WiFi signals propagate in space and are scattered by any objects they encounter in an indoor environment. Therefore, the WiFi signals’ CSI embodies the information about static and dynamic objects (and thus paths) in an indoor environment.
Consider the receiving array as shown in Figure 2. It has M elements.
Assume the ith subcarrier signal received by the mth antenna element is h(i,m,t). It can be expressed as:
h(i,m,t) = \sum_{l=1}^{L} a_l(i,m,t)\, e^{-j 2\pi f_i \tau_l(i,m,t)} + N(t)
where L represents the total number of paths, N(t) represents the noise in the path, a_l(i,m,t) represents the amplitude of the ith subcarrier signal received by the mth element along path l, and τ_l represents the signal flight time along path l. If the phase of the first subcarrier signal h(1,1,t) of the first antenna is taken as the reference phase, the phase difference of the ith subcarrier h(i,m,t) of the mth antenna with respect to h(1,1,t) can be expressed as:
2\pi f_i \tau_l(i,m,t) = 2\pi \left( \Delta f_i \tau_l + \frac{f_i\, d\,(m-1)\sin(\phi_l)}{c} \right)
where Δf_i is the frequency difference between subcarrier i and the reference carrier, d(m−1)sin(ϕ_l) is the extra propagation distance of the mth antenna relative to the reference antenna (d is the element spacing), c is the speed of light, and ϕ_l is the AOA of path l.
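For illustration only, the following minimal Python sketch (not part of the original work) synthesizes CSI samples according to the model in Equations (1) and (2) for a 3-antenna, 30-subcarrier receiver; the subcarrier spacing, path amplitudes, delays, and AOAs are assumed values chosen for the example.

import numpy as np

c = 3e8                       # speed of light (m/s)
fc = 5.825e9                  # center frequency of channel 165 (Hz)
f_delta = 312.5e3             # assumed subcarrier spacing of 20 MHz OFDM (Hz)
M, I = 3, 30                  # number of antennas and subcarriers
d = c / fc / 2                # half-wavelength element spacing (m)

# Example paths: (amplitude a_l, flight time tau_l in s, AOA phi_l in rad)
paths = [(1.0, 20e-9, np.deg2rad(30)),    # strong static path
         (0.1, 35e-9, np.deg2rad(-45))]   # weak dynamic path

def synth_csi(paths):
    """Return an (I, M) complex CSI matrix h[i, m] following Equation (1)."""
    h = np.zeros((I, M), dtype=complex)
    for i in range(I):
        for m in range(M):
            for a, tau, phi in paths:
                # phase of Equation (2): subcarrier (TOF) term + antenna (AOA) term
                phase = 2 * np.pi * (i * f_delta * tau + fc * d * m * np.sin(phi) / c)
                h[i, m] += a * np.exp(-1j * phase)
    return h

print(synth_csi(paths).shape)   # (30, 3)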
As Equation (2) shows, the phase differences between adjacent antennas contain the AOA information. However, because of imperfect hardware clock synchronization, the time offset, frequency offset, and initial phase cause measurement errors, so phase calibration is required, as described below. In addition, since we are interested in tracking moving targets and therefore in the dynamic paths, the static path information must be removed, specifically for the AOA determination. Both steps are elaborated in the following subsections.

2.2. Phase Calibration and Static Path Elimination

Since there is no strict clock synchronization in the CSI signal reception, there will be an error between the measured CSI signal h̃(i,m,t) and the actual CSI signal h(i,m,t):
\tilde{h}(i,m,t) = h(i,m,t)\, e^{\,j\left(2\pi(\Delta f_i \varepsilon_t + \Delta t\, \varepsilon_f) + \zeta_s\right)}
where ε t is the time offset, ε f is the frequency offset, and ζ s is the initial phase offset.
Since the time offset and the frequency offset are the same for the signals received at different antenna elements, they can be removed by conjugate multiplication [24,25]. Conjugate multiplication of the signal at each antenna with that of a reference antenna gives:
T(i,m,t) = \mathrm{angle}\!\left(\tilde{h}(i,m,t)\right) \times \tilde{h}^{*}(i,m_0,t) = \mathrm{angle}\!\left(h(i,m,t)\right) \times h^{*}(i,m_0,t)
where m_0 is the reference antenna and m runs over all antennas. angle(h̃(i,m,t)) denotes the unit-magnitude phase term of h̃(i,m,t):
\mathrm{angle}\!\left(\tilde{h}(i,m,t)\right) = \exp\!\left(j\varphi(\tilde{h}(i,m,t))\right)
h(i,m,t) and h(i,m_0,t) can be expressed as the sums of their static (L_s) and dynamic (L_d) path components:
h(i,m,t) = \sum_{l_s=1}^{L_s} h_{l_s}(i,m,t) + \sum_{l_d=1}^{L_d} h_{l_d}(i,m,t)
and
h(i,m_0,t) = \sum_{l_s=1}^{L_s} h_{l_s}(i,m_0,t) + \sum_{l_d=1}^{L_d} h_{l_d}(i,m_0,t).
Substituting Equations (6) and (7) into Equation (4) gives:
T(i,m,t) = \sum_{l_s=1}^{L_s} \exp\!\left(j\varphi(h_{l_s}(i,m,t))\right) h_{l_s}^{*}(i,m_0,t) + \sum_{l_s=1}^{L_s} \exp\!\left(j\varphi(h_{l_s}(i,m,t))\right) \sum_{l_d=1}^{L_d} h_{l_d}^{*}(i,m_0,t) + \sum_{l_d=1}^{L_d} \exp\!\left(j\varphi(h_{l_d}(i,m,t))\right) \sum_{l_s=1}^{L_s} h_{l_s}^{*}(i,m_0,t) + \sum_{l_d=1}^{L_d} \exp\!\left(j\varphi(h_{l_d}(i,m,t))\right) h_{l_d}^{*}(i,m_0,t)
In this expansion, the first term, which involves only static paths, is of low frequency, and the last term, which involves only dynamic paths, is of high frequency. These two terms can be removed with a bandpass filter. The remaining two cross terms are expanded as follows:
\exp\!\left(j\varphi(h_{l_s}(i,m,t))\right) h_{l_d}^{*}(i,m_0,t) = a_{l_d}^{*} \exp\!\left(j 2\pi \left( \Delta f_i (\tau_{l_s} - \tau_{l_d}) + f_c\, d\,(m - m_0)\left(\sin\phi_{l_s} - \sin\phi_{l_d}\right)/c \right)\right),
and
\exp\!\left(j\varphi(h_{l_d}(i,m,t))\right) h_{l_s}^{*}(i,m_0,t) = a_{l_s}^{*} \exp\!\left(j 2\pi \left( \Delta f_i (\tau_{l_d} - \tau_{l_s}) + f_c\, d\,(m - m_0)\left(\sin\phi_{l_d} - \sin\phi_{l_s}\right)/c \right)\right)
By comparing Equations (10) and (11), we can get:
\left|\exp\!\left(j\varphi(h_{l_d}(i,m,t))\right) h_{l_s}^{*}(i,m_0,t)\right| = |a_{l_s}| \gg |a_{l_d}| = \left|\exp\!\left(j\varphi(h_{l_s}(i,m,t))\right) h_{l_d}^{*}(i,m_0,t)\right|
Since the amplitude of a static path signal |a_ls| is far greater than that of a dynamic path signal |a_ld| (provided there is no occlusion between the transmitting and receiving antennas), the term containing |a_ld| can be neglected. We conducted an experiment in which a person moves away from an antenna and then slows down to a stop, and vice versa. The results of the short-time Fourier transform (STFT) before and after static elimination are shown in Figure 3a,b. Without static elimination, the noise, the dynamic path signal, and the static path signal are mixed together and cannot be distinguished. After static elimination, there is little noise, and the signal contains mostly the dynamic path signal, which is very helpful for extracting accurate AOA information later.
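A minimal sketch of this calibration step is given below (Python with NumPy/SciPy), assuming the CSI is organized as packets × subcarriers × antennas; the Butterworth pass band of 2–80 Hz at a 1000 Hz packet rate is an illustrative assumption for human motion, not a value taken from the paper.

import numpy as np
from scipy.signal import butter, filtfilt

def conj_mult_and_filter(csi, ref_ant=0, fs=1000.0, band=(2.0, 80.0)):
    """
    csi: complex array of shape (T, I, M) -- packets x subcarriers x antennas.
    Conjugate-multiply each antenna with the reference antenna (which removes
    the common time and frequency offsets, as in Equation (4)), then band-pass
    filter along time to suppress the static-static (near-DC) and
    dynamic-dynamic (high-frequency) terms.  Cut-off frequencies in `band`
    are assumptions.
    """
    phase_only = np.exp(1j * np.angle(csi))          # angle(.) term of Equation (5)
    t = phase_only * np.conj(csi[:, :, [ref_ant]])   # Equation (4)
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    # Filter real and imaginary parts separately along the packet (time) axis.
    return filtfilt(b, a, t.real, axis=0) + 1j * filtfilt(b, a, t.imag, axis=0)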
To obtain a more accurate measurement of phase information and the dynamic path, we further apply the well-known minimum-norm method (MNM) [26] signal processing algorithm to estimate AOA.

2.3. Joint Estimation with AOA and TOF Using the MNM Algorithm

The mathematical expression of a narrowband far-field signal is:
h(i,m,t) = A\, s(i,m,t) + N(t)
where h(i,m,t) is the received signal, A is the guiding vector of the antenna array, s(i,m,t) is the signal matrix, and N(t) is the noise signal. The unbiased estimate of the covariance matrix of the signal is
R(i,m,t) = \frac{1}{N} \sum_{n=1}^{N} h_n(i,m,t)\, h_n^{H}(i,m,t)
where N is the number of snapshots, n indexes the snapshots, and H denotes the conjugate (Hermitian) transpose.
Substituting (12) into (13) gives:
R = E\!\left(h h^{H}\right) = A\, E\!\left(s s^{H}\right) A^{H} + E\!\left(N N^{H}\right)
where E is the expectation operator.
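For reference, a minimal sketch of the covariance estimate and its eigendecomposition into signal and noise subspaces (as used by the subspace methods below) could look as follows; the function and variable names are illustrative, and the number of dynamic paths is assumed to be known or estimated separately.

import numpy as np

def signal_noise_subspaces(snapshots, num_paths):
    """
    snapshots: complex array of shape (N, K), one K-dimensional measurement
    vector per snapshot (here K = M * I after the antenna/subcarrier
    extension described next).
    Returns (U_s, U_n): signal and noise subspaces of the sample covariance.
    """
    N = snapshots.shape[0]
    R = snapshots.T @ snapshots.conj() / N        # (1/N) * sum_n h_n h_n^H
    eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    U_s = eigvecs[:, -num_paths:]                 # span of the largest eigenvalues
    U_n = eigvecs[:, :-num_paths]                 # remaining noise subspace
    return U_s, U_n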
However, three antennas can identify at most two paths. A typical indoor environment has 6–8 signal paths, so three antennas are not enough. However, each antenna receives 30 subcarriers, so the phase shifts that the TOF of each path induces across different subcarriers can be used to extend the antenna array virtually [14].
The additional propagation delay of path l_d between adjacent antennas is d sin(θ_ld)/c. Denoting the corresponding phase shift between adjacent antennas as
\Phi(\theta_{l_d}) = e^{-j 2\pi d \sin(\theta_{l_d})\, f / c},
the phase shift of path l_d at the Mth antenna relative to the reference antenna 1 is Φ(θ_ld)^(M−1) = e^{-j 2π d (M−1) sin(θ_ld) f/c}.
The steering vector of the signal in path ld caused by the antenna is:
a(\theta_{l_d}) = \left[\, 1,\ \Phi(\theta_{l_d}),\ \dots,\ \Phi(\theta_{l_d})^{M-1} \,\right]^{T}
The steering vector for all multipath paths is
a(\theta) = \left[\, a(\theta_1),\ a(\theta_2),\ \dots,\ a(\theta_{L_d}) \,\right]^{T}
Similarly, the TOF of path l_d introduces a phase shift of 2π f_δ τ_ld between adjacent subcarriers, where f_δ is the frequency interval between two consecutive subcarriers, so the phase shift at the ith subcarrier of the same antenna relative to the first subcarrier is 2π(i−1) f_δ τ_ld. Denote the per-subcarrier phase shift as
\Phi(\tau_{l_d}) = e^{-j 2\pi f_\delta \tau_{l_d}}.
The joint steering vector of path l_d, formed over the I subcarriers and M antennas of the extended array, is:
a(\theta_{l_d}) = \left[\, \underbrace{1,\ \Phi(\tau_{l_d}),\ \dots,\ \Phi(\tau_{l_d})^{I-1}}_{\text{antenna 1}},\ \underbrace{\Phi(\theta_{l_d}),\ \Phi(\theta_{l_d})\Phi(\tau_{l_d}),\ \dots,\ \Phi(\theta_{l_d})\Phi(\tau_{l_d})^{I-1}}_{\text{antenna 2}},\ \dots,\ \underbrace{\Phi(\theta_{l_d})^{M-1},\ \dots,\ \Phi(\theta_{l_d})^{M-1}\Phi(\tau_{l_d})^{I-1}}_{\text{antenna }M} \,\right]^{T}
The guiding vector for all multipath paths is
A = \left[\, a(\theta_1),\ a(\theta_2),\ \dots,\ a(\theta_{L_d}) \,\right]^{T}.
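A compact way to build one such joint steering vector in Python is sketched below; the subcarrier spacing and the minus-sign convention in the exponent are assumptions consistent with the reconstruction above, and the vector is formed as the Kronecker product of the per-antenna and per-subcarrier phase ramps.

import numpy as np

C = 3e8             # speed of light (m/s)
FC = 5.825e9        # center frequency of channel 165 (Hz)
F_DELTA = 312.5e3   # assumed subcarrier spacing (Hz)

def steering_vector(theta, tof, num_ant=3, num_sub=30, d=C / FC / 2):
    """
    Joint antenna/subcarrier steering vector a(theta, tof): the Kronecker
    product of the per-antenna AOA phase ramp and the per-subcarrier TOF
    phase ramp.  theta in radians, tof in seconds.
    """
    phi_theta = np.exp(-1j * 2 * np.pi * d * np.sin(theta) * FC / C)   # adjacent antennas
    phi_tau = np.exp(-1j * 2 * np.pi * F_DELTA * tof)                  # adjacent subcarriers
    return np.kron(phi_theta ** np.arange(num_ant),
                   phi_tau ** np.arange(num_sub))                      # length num_ant * num_sub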
The MNM algorithm is a subspace algorithm with constrained weights. The MNM algorithm has a higher resolution than the MUSIC algorithm, as shown in Figure 4a,b.
The MNM algorithm constraint conditions are as follows:
\begin{cases} \min\ W^{H} W \\ W(1) = 1,\quad U_s^{H} W = 0 \end{cases}
where U_s is the signal subspace obtained from Formula (15) and W is a linear combination of the noise-subspace vectors. Estimates of AOA and TOF are obtained by searching for the peaks of:
P_{MNM} = \frac{1}{\left| A^{H} W_{MNM} \right|^{2}}
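A sketch of the peak search is given below, reusing steering_vector and the noise subspace U_n from the previous sketches; the grid ranges and the closed-form minimum-norm weight are standard textbook choices, not values taken from the paper.

import numpy as np

def mnm_spectrum(U_n, thetas, tofs):
    """
    Minimum-norm pseudospectrum over an AOA x TOF grid.
    The weight W lies in the noise subspace, has first element 1, and has
    minimum norm, i.e. W = P_n e1 / (e1^H P_n e1) with P_n = U_n U_n^H.
    """
    K = U_n.shape[0]
    e1 = np.zeros(K, dtype=complex)
    e1[0] = 1.0
    P_n = U_n @ U_n.conj().T
    w = P_n @ e1 / (e1.conj() @ P_n @ e1)
    spec = np.zeros((len(thetas), len(tofs)))
    for i, th in enumerate(thetas):
        for j, tau in enumerate(tofs):
            a = steering_vector(th, tau)                 # from the sketch above
            spec[i, j] = 1.0 / np.abs(a.conj() @ w) ** 2
    return spec                                          # peaks give (AOA, TOF) pairs

# usage sketch: search AOA from -90 to 90 degrees and TOF up to 100 ns
# spec = mnm_spectrum(U_n, np.deg2rad(np.arange(-90, 91)), np.arange(0, 100e-9, 1e-9))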
Figure 5 shows the AOA results obtained by the joint estimation of AOA and TOF after phase calibration and static path elimination. It can be seen that the multipath signals are very complex.

2.4. Spatiotemporal Clustering Algorithm of DBscan Based on Adaptive Parameter Adjustment

Figure 5 shows that it is difficult to extract the angle information of the moving target directly from the MNM results. DBscan is a density-based clustering algorithm that does not require the cluster centers or the number of clusters to be specified in advance. Since the angle information changes continuously over adjacent time instants, we cluster the angle information together with the time information.
To keep the time and angle dimensions on a comparable scale, we introduce the time-scaling parameter ρ. The rescaled time is:
t' = \rho t.
With p as the center, the set of points within a radius Eps of p is
N_{Eps}(p) = \{\, q \in D \mid \mathrm{dist}(p,q) \le Eps \,\}
where Eps is the neighborhood radius and dist is the distance function, defined as follows:
\mathrm{dist}(p,q) = \sqrt{(AOA_p - AOA_q)^{2} + \rho^{2}(t_p - t_q)^{2}}
If the number of points in the Eps-neighborhood of p is less than the specified parameter Minpts, p is considered a noise point; if the number is greater than or equal to Minpts, a new cluster is created and the point is added to it. The Eps parameter directly affects the clustering results and generally needs to be adjusted experimentally. Here, we use adaptive parameter adjustment; the specific procedure is given in Algorithm 1, and an illustrative implementation sketch follows the pseudocode.
Algorithm 1: Spatiotemporal Cluster Algorithm
Input: D, Eps0, t′, Minpts, t0
Output: clu
 While (clu(t′(end)) - clu(t′(start)) > (size(t) - t0))
  Do step 1: The distance distribution of the points to be clustered is calculated
     step 2: for i = 1:n
     Neighbors = find(dist(D(i)) ≤ Eps0)
      If num (Neighbors) < Minpts
       D(i) = noise
       Else Expand Cluster (D(i), Neighbors)
      End if
    End for
End while
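The following Python sketch shows one possible reading of this adaptive loop using scikit-learn's DBSCAN; the scale factor rho, the initial Eps, the step size, and the stopping rule (the largest cluster should span most of the observation window) are illustrative assumptions rather than the paper's exact settings.

import numpy as np
from sklearn.cluster import DBSCAN

def spatiotemporal_cluster(aoa, t, rho=50.0, eps0=3.0, eps_step=0.5,
                           min_pts=5, max_iter=20):
    """
    Cluster (AOA, rho * t) points with DBSCAN and enlarge Eps until the
    largest cluster spans (almost) the whole observation window.
    aoa: angles in degrees, t: time stamps in seconds (NumPy arrays).
    Returns the (time, AOA) pairs of the largest cluster.
    """
    X = np.column_stack([aoa, rho * t])        # spatiotemporal distance metric
    best_t, best_aoa = t, aoa                  # fallback: keep everything
    eps = eps0
    for _ in range(max_iter):
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(X)
        clustered = labels >= 0                # label -1 marks noise points
        if clustered.any():
            biggest = np.bincount(labels[clustered]).argmax()
            keep = labels == biggest
            best_t, best_aoa = t[keep], aoa[keep]
            if np.ptp(best_t) >= 0.9 * np.ptp(t):   # spans ~the whole trajectory
                break
        eps += eps_step                        # adapt Eps and try again
    return best_t, best_aoa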
Figure 6 shows the results of DBscan clustering after adaptive parameter adjustment. The black points are noise, and the red cluster with the most points is the AOA result of the dynamic target we need. It can be seen from the figure that clustering removes a lot of the noise and multipath information.

2.5. Processing of AOA Data after Clustering

The AOA after clustering is not continuous in time. Because the velocity of the target is relatively small when it starts to move, the dynamic path may not be detected after static path elimination; in addition, dynamic paths are unstable. Therefore, the undetected points need to be supplemented to ensure continuity in time. If multiple clustered angles exist at the same time instant, their mean is taken; if a value is missing at the starting time point, it is filled from the adjacent point; and if an intermediate time point is missing, the mean of the preceding and following values is used.
After the above steps, the AOA is continuous in time, but its fluctuation is relatively large, which is not consistent with reality. Therefore, we use least-squares polynomial fitting. At times t_1, t_2, …, t_Z, the AOA values are AOA_1, AOA_2, …, AOA_Z, written as a function AOA_z = f(t_z). The target polynomial of the fit is
f(t_z) = a_0 + a_1 t_z + a_2 t_z^{2} + \dots + a_n t_z^{n}.
The fitting is an optimization problem: the sum of squared errors between the fitted polynomial values and the measured AOA values is minimized. The mathematical expression is as follows:
\sum_{z=1}^{Z} \left( f(t_z) - AOA_z \right)^{2} = \min.
Figure 7 shows the results of polynomial fitting using the least-squares method. The red asterisks are the AOA results after interpolation, and the blue line is the least-squares polynomial fit.
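A minimal sketch of this post-processing step (averaging duplicate time stamps, interpolating missing points, then least-squares polynomial fitting) is given below; the polynomial degree and the 0.1 s grid are illustrative assumptions.

import numpy as np

def fill_and_fit(t_obs, aoa_obs, t_grid, degree=5):
    """
    t_obs, aoa_obs: clustered AOA samples (possibly with duplicate or missing
    time stamps).  t_grid: the full, evenly spaced time axis of the trajectory.
    Returns the fitted AOA evaluated on t_grid.
    """
    # average multiple AOA values that share the same time stamp
    t_unique = np.unique(t_obs)
    aoa_mean = np.array([aoa_obs[t_obs == tu].mean() for tu in t_unique])
    # fill undetected time points by interpolation (end points are held constant)
    aoa_filled = np.interp(t_grid, t_unique, aoa_mean)
    # least-squares polynomial fit minimizing the sum of squared residuals
    coeffs = np.polyfit(t_grid, aoa_filled, deg=degree)
    return np.polyval(coeffs, t_grid)

# usage sketch: one AOA estimate every 0.1 s over a 4.6 s trajectory
# aoa_fit = fill_and_fit(t_clustered, aoa_clustered, np.arange(0.1, 4.7, 0.1))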

3. Results

This section includes two parts: the experimental setting and environment, and the analysis of the experimental results.

3.1. Experimental Setting and Environment

We used the Widar2.0 data set. One transmitting antenna and three receiving antennas are used in the experiment. The three receiving antennas form a uniform linear array with half-wavelength spacing. The antennas are 2.4 GHz/5 GHz dual-band antennas connected to Intel 5300 WiFi network cards, and the devices operate in monitor mode. The channel number is 165, with a center frequency of 5.825 GHz. The transmitter sends 1000 packets per second. There are three experimental environments: a large and empty classroom, a small office with lots of furniture, and a long and narrow corridor.
There are 24 trajectories in the three environments of Widar 2.0, with about 70 angles per trajectory and about 1700 angle measurements in total. Among the 24 trajectories, 6 take the shape of the letter ‘Z’ (facing 3 different directions), 7 are circles (starting points at 4 different positions), 2 are symmetrical paths shaped like the number ‘7’, 1 is a rectangle, 6 are vertical lines (2 different starting points), and 2 are oblique lines (2 different starting points).

3.2. Analysis of Experimental Results

Firstly, the classroom environment data are analyzed, using the T02 data of the classroom environment. The trajectory is perpendicular to the line connecting the transmitting and receiving antennas, first approaching and then moving away. A position is estimated every 0.1 s for 4.6 s, so 46 angles are estimated.
The angle estimation results are shown in Figure 8, where the red line is the actual angle information. The blue line is the angle estimation result with the proposed algorithm. The yellow line is the angle estimation results obtained with Tsinghua’s Widar2.0. The average angle error of the proposed algorithm is 5.14°, and the average angle error of Widar2.0 is 8.53°. Because the algorithm in this paper is based on temporal and spatial clustering and fully considers the continuity of angle information of the adjacent time, the average error with the proposed algorithm is relatively small, and the accuracy is relatively high.
Next are the results of the T02 data for the office environment. The trajectory is perpendicular to the line connecting the transmitting and receiving antennas, first approaching and then moving away. A position estimate is made every 0.1 s; the trajectory lasts 5.6 s, so 56 angles are estimated. The angle estimation results are shown in Figure 9: the red line is the actual angle information, the blue line is the angle estimation result obtained with the proposed algorithm, and the yellow line is the angle estimation result of Tsinghua’s Widar2.0. The average angle error of the algorithm in this paper is 12.14°, and the average angle error of Widar2.0 is 16.26°. Because the algorithm in this paper is based on temporal and spatial clustering and fully considers the continuity of angle information over adjacent time, its average error is relatively small, and its accuracy is relatively high.
The following are the T01 data obtained in the corridor environment. The trajectory is perpendicular to the line connecting the transmitting and receiving antennas, first moving away and then coming back. A position estimate is made every 0.1 s; the trajectory lasts 6.7 s, so 67 angles are estimated. The angle estimation results are shown in Figure 10, where the red line is the actual angle information, the blue line is the angle estimation result of the proposed algorithm, and the yellow line is the angle estimation result of Tsinghua’s Widar2.0. The average angle error of the proposed algorithm is 1.7°, and the average angle error of Widar2.0 is 4.5°. Because the algorithm in this paper is based on temporal and spatial clustering and fully considers the continuity of angle information over adjacent time, its average error is relatively small, and its accuracy is relatively high.
Results from a single trajectory may not be fully representative, so we ran all the data for each of the three environments: classroom, office, and corridor. The cumulative angle error of the corridor is shown in Figure 11, where the blue line is the error accumulation curve of the proposed algorithm and the red line is that of Tsinghua Widar2.0. It can be seen from the figure that 90% of the angle errors of the proposed algorithm are less than 7.74°, while 90% of the angle errors of the Tsinghua Widar2.0 algorithm are less than 10.72°. The algorithm in this paper is slightly better than the latter.
The cumulative angle error of the office is shown in Figure 12. In Figure 12, the blue line is the error accumulation curve of the proposed algorithm, and the red is the error accumulation curve of Widar2.0. It can be seen from the figure that 90% of the angle error of the algorithm is less than 20.63°, and 90% of the angle error of the Tsinghua Widar2.0 algorithm is less than 25.61°. The algorithm in this paper is slightly better than the latter.
We ran the data for all the tracks in the classroom environment. The angle error accumulation diagram is shown in Figure 13. In Figure 13, the blue line is the error accumulation curve of the algorithm in this paper, and the red line is the error accumulation curve of Widar2.0. It can be seen from the figure that 90% of the angle error of the algorithm in this paper is less than 13.68°, and 90% of the angle error of the Tsinghua Widar2.0 algorithm is less than 15.83°. The algorithm in this paper is slightly better than the latter.

3.3. System Performance

What factors affect the accuracy of the proposed algorithm? To understand the system performance of the algorithm, we analyze it under different environments, walking speeds, data sampling rates, trajectory shapes, walking directions, trajectory lengths, and filtering methods.
(1)
Different environments.
As is well known, an indoor space is a typical multipath environment, and multipath information affects the accuracy of our parameter estimation. In this experiment, we analyzed the errors in the three test environments. Figure 14 shows the error accumulation function for the three environments. From the figure, we can see that the average AOA error in the corridor environment is the smallest: because there is no furniture in the corridor, the multipath interference is weaker, and the result is relatively more accurate. In the office environment, there are sofas, tea tables, drinking fountains, and storage cabinets, which cause more severe multipath and relatively larger errors. In the classroom environment, only desks are placed beside the wall, so the degree of multipath lies between those of the corridor and the office, and so do the errors.
(2)
Different walking speeds.
In order to understand the impact of walking speed on the system performance, we conduct the following experiments. The errors of AOA are analyzed under three different conditions of the target: relatively slow walking speed, normal walking speed, and fast walking speed. The results are shown in Figure 15. As we can see, the error is relatively large when the target walks slowly. This is because we apply the static path elimination beforehand. When the target walks slowly, part of useful information will be eliminated, so the error is large. On the other hand, when the target walks faster, less useful information is eliminated, and the error is smaller.
(3)
Different sampling rates.
In our case, 1000 data packets are collected every second. What happens if the sampling rate is lower? To find out, we compare the errors at sampling rates of 500 and 1000 packets per second, as shown in Figure 16. As the sampling rate decreases, the error increases, as we would expect.
(4)
Different trajectory shapes.
Do different trajectories affect the estimation of AOA? To find out, we compared the errors for three different trajectory shapes: a Z-shaped route, a rectangular closed loop, and a vertical line. As can be seen from Figure 17, the overall error of the vertical line is the smallest: when the target moves in a straight line, the angle information is more continuous, and the result is more accurate. For the rectangular loop, there are significant angle changes at the turning corners, so its error is the largest. For the Z-shaped route, the error lies between those of the rectangular loop and the vertical line.
(5)
Different directions of motion.
In order to understand the impact of different motion directions on the algorithm in this paper, we analyze the two cases in which the walking direction makes angles of 90° and 45° with the line connecting the transmitting and receiving antennas.
The results are shown in Figure 18. Because the AOA of the receiving antenna varies less along the direction of 45°, the corresponding results are more accurate.
(6)
Different walking distances.
Does the length of the walking distance affect the angle estimation? We conducted the following experiment to analyze the errors under different walking distances. As shown in Figure 19, the walking distance itself has little impact on the angle estimation. Since the speed is relatively low at the beginning and end of a walk, the errors in the 0–3 m and 9–13 m segments are larger, which is consistent with the previous analysis.
(7)
Different filtering methods.
In this paper, two methods are compared for the final processing of the AOA: one is least-squares linear fitting, and the other is repeated Hampel and smoothing filtering. The result is shown in Figure 20. The linear fitting preserves the original results to the maximum extent, whereas the smoothing and Hampel filters smooth out the places where the angle changes dramatically, resulting in a loss of detail. Therefore, the linear fitting method is more accurate.

4. Conclusions

This paper presents a new angle recognition algorithm for moving targets. In this algorithm, the phase noise and the static paths are eliminated by conjugate multiplication of the CSI signals from different antennas. Then, AOA and TOF are jointly used to estimate the AOA of the moving target. Next, the spatiotemporal clustering algorithm with adaptive parameter adjustment is applied to obtain a more accurate AOA, and continuous AOA values are obtained by interpolation. Finally, the least-squares polynomial fitting method is used to obtain the final AOA values. Through verification on all the tracks in the three environments of Widar2.0, the average errors of the proposed algorithm in the classroom, office, and corridor are found to be 7.18°, 12.16°, and 3.62°, respectively, whereas the average errors of Widar2.0 are 7.49°, 16.42°, and 4.55°, respectively. Therefore, the proposed algorithm performs better. The three experimental environments considered in this paper are a classroom, an office, and a corridor; this paper does not consider how to model the cases of obstacles and occlusion. For both cases, references are available [27]. That literature points out that each propagation path is unique, that paths can be clustered on the basis of obstacles whose dimensions are larger than the wavelength involved, and that these clusters can be described by an appropriate shadow depth (namely, the shadow deviation), providing a log-additive expression of shadow losses (excess path loss, in general) in the RF formulas dealing with local mean power estimation/prediction. These considerations are also complementary to localization in indoor environments, as shown in [28]. The presence of shadowing (large-scale fading) largely influences the CDFs of the received signal at each antenna, as shown in [29]. This will be our follow-up research work.

Author Contributions

Conceptualization, L.T. and L.C.; methodology, L.T.; software, L.T.; validation L.T., Z.C. and Z.X.; investigation, L.T.; resources, Z.X.; data curation, L.C.; writing—original draft preparation, Z.X.; writing—review and editing, L.T.; visualization, L.T.; supervision, L.C.; project administration, Z.C.; funding acquisition, Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (NSFC) No. 61401100 and 62071125, the Natural Science Foundation of Fujian Province under grant #2018J01805, the Department of Education of Fujian Province under Youth Research Project #JAT190011, and Fuzhou University under Scientific Research Project GXRC-18074.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, S.; Chan, S. WiFi Fingerprint-Based Indoor Positioning: Recent Advances and Comparisons. IEEE Commun. Surv. Tutor. 2017, 18, 466–490.
  2. Liu, S.; Jiang, Y.; Striegel, A. Face-to-Face Proximity Estimation Using Bluetooth on Smartphones. IEEE Trans. Mob. Comput. 2014, 13, 811–823.
  3. Zhao, X.; Xiao, Z.; Markham, A.; Trigoni, N.; Ren, Y. Does BTLE measure up against WiFi? A comparison of indoor location performance. In Proceedings of the 20th European Wireless Conference, Barcelona, Spain, 14–16 May 2014.
  4. Zheng, S.; Purohit, A.; Chen, K.; Pan, S.; Pering, T.; Zhang, P. PANDAA: Physical arrangement detection of networked devices through ambient-sound awareness. In Proceedings of the International Conference on Ubiquitous Computing (UbiComp), Beijing, China, 17–21 September 2011.
  5. Huang, W.; Xiong, Y.; Li, X.-Y.; Lin, H.; Mao, X.; Yang, P.; Liu, Y. Shake and walk: Acoustic direction finding and fine-grained indoor localization using smartphones. In Proceedings of the IEEE INFOCOM 2014—IEEE Conference on Computer Communications, Toronto, ON, Canada, 27 April–2 May 2014.
  6. Mohammadmoradi, H.; Heydariaan, M.; Gnawali, O.; Kim, K. UWB-Based Single-Anchor Indoor Localization Using Reflected Multipath Components. In Proceedings of the 2019 International Conference on Computing, Networking and Communications (ICNC), Honolulu, HI, USA, 18–21 February 2019.
  7. Wang, C.; Xu, A.; Kuang, J.; Sui, X.; Hao, Y.; Niu, X. A High-Accuracy Indoor Localization System and Applications Based on Tightly Coupled UWB/INS/Floor Map Integration. IEEE Sens. J. 2021, 21, 18166–18177.
  8. Jin, G.-Y.; Lu, X.-Y.; Park, M.-S. An Indoor Localization Mechanism Using Active RFID Tag. In Proceedings of the IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing (SUTC’06), Taichung, Taiwan, 5–7 June 2006.
  9. Gharat, V.; Colin, E.; Baudoin, G.; Richard, D. Indoor performance analysis of LF-RFID based positioning system: Comparison with UHF-RFID and UWB. In Proceedings of the International Conference on Indoor Positioning & Indoor Navigation, Sapporo, Japan, 18–21 September 2017.
  10. Chen, R.; Huang, X.; Zhou, Y.; Hui, Y.; Cheng, N. UHF-RFID-Based Real-Time Vehicle Localization in GPS-Less Environments. IEEE Trans. Intell. Transp. Syst. 2021, 1–8, in press.
  11. Piccinni, G.; Avitabile, G.; Coviello, G.; Talarico, C. Real-Time Distance Evaluation System for Wireless Localization. IEEE Trans. Circuits Syst. I Regul. Pap. 2020, 67, 3320–3330.
  12. Zhang, L.; Gao, Q.; Ma, X.; Wang, J.; Yang, T.; Wang, H. DeFi: Robust Training-Free Device-Free Wireless Localization with WiFi. IEEE Trans. Veh. Technol. 2018, 67, 8822–8831.
  13. Zheng, Y.; Sheng, M.; Liu, J.; Li, J. OpArray: Exploiting Array Orientation for Accurate Indoor Localization. IEEE Trans. Commun. 2019, 67, 847–858.
  14. Vasisht, D.; Kumar, S.; Katabi, D. Decimeter-level localization with a single WiFi access point. In Proceedings of the 13th USENIX Symposium on Networked Systems Design and Implementation, Santa Clara, CA, USA, 16–18 March 2016.
  15. Kotaru, M.; Joshi, K.; Bharadia, D.; Katti, S. SpotFi: Decimeter level localization using WiFi. In Proceedings of the SIGCOMM’15 ACM Conference on Special Interest Group on Data Communication, London, UK, 17–21 August 2015.
  16. Kotaru, M.; Katti, S. Position Tracking for Virtual Reality Using Commodity WiFi. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2671–2681.
  17. Fadzilla, M.A.; Harun, A.; Shahriman, A.B. Localization Assessment for Asset Tracking Deployment by Comparing an Indoor Localization System with a Possible Outdoor Localization System. In Proceedings of the 2018 International Conference on Computational Approach in Smart Systems Design and Applications (ICASSDA), Kuching, Malaysia, 15–17 August 2018; pp. 1–6.
  18. Xiao, N.; Yang, P.; Li, X.; Zhang, Y.; Yan, Y.; Zhou, H. MilliBack: Real-Time Plug-n-Play Millimeter Level Tracking Using Wireless Backscattering. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2019, 3, 112.
  19. Shu, Y.; Huang, Y.; Zhang, J.; Coué, P.; Chen, J.; Shin, K.G. Gradient-Based Fingerprinting for Indoor Localization and Tracking. IEEE Trans. Ind. Electron. 2016, 63, 2424–2433.
  20. Wang, X.; Gao, L.; Mao, S.; Pandey, S. CSI-Based Fingerprinting for Indoor Localization: A Deep Learning Approach. IEEE Trans. Veh. Technol. 2017, 66, 763–776.
  21. Sun, W.; Xue, M.; Yu, H.; Tang, H.; Lin, A. Augmentation of Fingerprints for Indoor WiFi Localization Based on Gaussian Process Regression. IEEE Trans. Veh. Technol. 2018, 67, 10896–10905.
  22. Shi, S.; Sigg, S.; Chen, L.; Ji, Y. Accurate Location Tracking From CSI-Based Passive Device-Free Probabilistic Fingerprinting. IEEE Trans. Veh. Technol. 2018, 67, 5217–5230.
  23. Xiang, L.; Li, S.; Zhang, D.; Xiong, J.; Wang, Y.; Mei, H. Dynamic-MUSIC: Accurate device-free indoor localization. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), Heidelberg, Germany, 12–16 September 2016.
  24. Qian, K.; Wu, C.; Zhang, G.; Zheng, Y.; Liu, Y. Widar2.0: Passive Human Tracking with a Single WiFi Link. In Proceedings of the MobiSys’18 16th Annual International Conference on Mobile Systems, Applications, and Services, Munich, Germany, 10–15 June 2018.
  25. Li, X.; Zhang, D.; Lv, Q.; Xiong, J.; Li, S.; Zhang, Y.; Mei, H. IndoTrack: Device-Free Indoor Human Tracking with Commodity WiFi. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2017, 1, 72.
  26. Li, F.; Vaccaro, R.J.; Tufts, D.W. Min-norm linear prediction for arbitrary sensor arrays. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Glasgow, UK, 23–26 May 1989; Volume 4, pp. 2613–2616.
  27. Salo, J.; Vuokko, L.; El-Sallabi, H.M.; Vainikainen, P. An additive model as a physical basis for shadow fading. IEEE Trans. Veh. Technol. 2007, 56, 13–26.
  28. Palipana, S.; Pietropaoli, B.; Pesch, D. Recent advances in RF-based passive device-free localisation for indoor applications. Ad Hoc Netw. 2017, 64, 80–98.
  29. Chrysikos, T.; Kotsopoulos, S. Characterization of large-scale fading for the 2.4 GHz channel in obstacle-dense indoor propagation topologies. In Proceedings of the 2012 IEEE Vehicular Technology Conference (VTC Fall), Quebec City, QC, Canada, 3–6 September 2012.
Figure 1. Algorithm flow chart (RX is the receiving antenna and TX is the transmitting antenna).
Figure 2. Receiving array (d is the array spacing, ϕ_l is the AOA of path l, and M is the Mth array antenna).
Figure 3. (a) Measured time-frequency chart without static elimination; (b) Measured time-frequency chart with static elimination.
Figure 4. (a) AOA and TOF are estimated by MUSIC algorithm; (b) AOA and TOF are estimated by MNM algorithm.
Figure 5. AOA value of the dynamic target after joint estimation.
Figure 6. AOA spatiotemporal clustering results with adaptive parameters.
Figure 7. AOA results of least-squares linear fitting after filling in the missing points.
Figure 8. AOA results of classroom T02 data.
Figure 9. AOA results of office T02 data.
Figure 10. AOA results of corridor T01 data.
Figure 11. Cumulative angle error of the corridor.
Figure 12. Cumulative angle error of the office.
Figure 13. Cumulative angle error of the classroom.
Figure 14. AOA error accumulation diagram of three different environments.
Figure 15. AOA error accumulation diagram of three different walking speeds.
Figure 16. AOA error accumulation diagram of two different sampling rates.
Figure 17. AOA error accumulation diagram of three different trajectory shapes.
Figure 18. AOA error accumulation diagram of two different directions of motion.
Figure 19. AOA error accumulation diagram of four different walking distances.
Figure 20. AOA error accumulation diagram of two different filtering methods.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
