Article

Simplify Belief Propagation and Variation Expectation Maximization for Distributed Cooperative Localization

College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(8), 3851; https://doi.org/10.3390/app12083851
Submission received: 21 March 2022 / Revised: 7 April 2022 / Accepted: 8 April 2022 / Published: 11 April 2022

Abstract

Sensor data are useful only when tied to a specific location. This paper presents a simplified belief propagation and variational expectation–maximization (SBPVEM) algorithm that achieves node localization through cooperation with other target nodes while lowering communication costs in challenging environments where anchors are sparse. A simplified belief propagation algorithm is proposed as the overall reasoning framework by modeling the cooperative localization problem as a graph model. The high-aggregation sampling and variational expectation–maximization algorithm is applied to sample and fit the complicated distribution. Experiments show that SBPVEM obtains node localization as accurate as NBP and SPAWN in a challenging environment while reducing bandwidth requirements. In addition, SBPVEM has better expressive ability than PVSPA, remaining effective in challenging environments where PVSPA fails.

1. Introduction

Positioning is essential in multi-sensor networks, vehicle ad hoc networks [1], underwater unmanned clusters [2], military environment research [3], forest emergency rescue [4], and other tasks that need environmental awareness. Cooperative localization uses measurements between unlocalized target nodes to localize all network nodes when anchor nodes are not sufficient or accessible.
Collaborative localization based on filtering [5] and optimization [6] requires that the nodes have a good initial state: the optimization result is unreliable when a target node lacks a good prior, and nodes cannot provide good, reliable priors under divergent or other extreme conditions. Refs. [7,8] use belief propagation (BP) methods to handle target nodes with no initial position, few anchor nodes, and sparse links between nodes. The BP method is an exact inference method for acyclic probabilistic graphs, and the loopy belief propagation (LBP) method approximates the BP algorithm on graphs with loops. Ref. [9] proposed the SPAWN method for localization on the basis of LBP. To solve for the positioning probability in continuous space, the BP algorithm must discretize the state space with a grid; since the high-probability region can be much smaller than the feasible region, much of the algorithm's computation is wasted. Ref. [10] proposed the nonparametric belief propagation (NBP) method, which uses weighted particles to represent the irregular distribution of node positions. However, communication overhead is the critical problem in implementation, stemming mainly from the representation of the information exchanged between neighboring nodes; the particle-based method requires a high communication overhead between nodes.
To reduce the communication overhead, Ref. [11] uses the sigma-point idea for parameter fitting, representing the information between nodes with a single Gaussian. Refs. [12,13] show that when BP based on parameter fitting is applied to the positioning problem, it can only handle the case where the node position distribution is Gaussian. Ref. [14] considers the multimodal case of node positions but conflates the Gaussian and mixed-Gaussian distributions, selecting a single Gaussian to fit the mixed Gaussian. Ref. [15] first proposed using the VMP algorithm as a generalization of BP on a factor graph, with a variational method to fit node probabilities that can handle the non-Gaussian case; however, the mean-field hypothesis destroys the association between multi-dimensional states. Ref. [16] applies the VMP method to the positioning problem but only considers single-mode and dual-mode positioning cases, ignoring the constraint ranges of other nodes when a node can only communicate with a few anchor nodes.
This paper proposes the SBPVEM method for the distributed cooperative localization of targets with sparse anchors. A simplified belief propagation algorithm is proposed as the overall reasoning framework by modeling the cooperative localization problem as a graph model. The high-aggregation sampling and variational expectation–maximization algorithm is applied to sample and fit the complicated distribution. In practice, our work solves the self-localization problem of distributed nodes with a minimal number of anchor nodes, makes cooperative localization with sparse anchor nodes in wireless sensor networks a reality, and lowers the cost of large-scale applications.
Section 1 of this paper details the subject's background and associated studies. Section 2 models the cooperative localization problem as a graph model and describes belief propagation as necessary background knowledge. The proposed SBPVEM approach is detailed in Section 3. Section 4 presents experiments that show the algorithm's effectiveness. The discussion is presented in Section 5.

2. Model

2.1. System Model

The node localization problem is represented by an undirected graph with nodes $v \in \mathcal{V}$ and a set of edges $(r, t) \in \mathcal{E}$. Each node represents an intelligent unmanned platform. For the following discussion, we divide the nodes into anchor nodes $V_a$, which know their positions, and nodes to be located $V_t$. Each edge represents a communication link between sensors: $(r, t) \in \mathcal{E}$ means that a communication link is established between node $r$ and node $t$, over which the nodes can exchange their information and mutual constraints. The notation is summarized in Table 1.
The positioning problem is modeled as a probability graph model problem, and the posterior distribution of node positions is the positioning result we want.
$$P(X \mid D) \propto P(D \mid X) P(X) = \prod_{(r,t) \in \mathcal{E}} p(d_{rt} \mid x_r, x_t) \prod_{v \in V_t} p_v(x_v) \quad (1)$$
Here, $p(d_{rt} \mid x_r, x_t)$ denotes the probability density of the distance measurement $d_{rt}$ given the position estimates $x_r$ and $x_t$. Assuming $d_{rt} = \|x_r - x_t\| + \omega_{rt}$, where $\omega_{rt} \sim \mathcal{N}(0, \sigma_{rt}^2)$ is the measurement noise, we obtain:
$$p(d_{rt} \mid x_r, x_t) = \frac{1}{\sqrt{2\pi\sigma_{rt}^2}} \exp\left( -\frac{(d_{rt} - \|x_r - x_t\|)^2}{2\sigma_{rt}^2} \right) \quad (2)$$
$p_v(x_v)$ denotes the probability density of the node's position. In the cooperative localization problem, the marginal of the posterior distribution is the position estimate of each node:
$$p(x_v \mid D) \propto \int p(X \mid D)\, dX_{\backslash x_v} \quad (3)$$
where $X_{\backslash x_v}$ denotes the set of all variables in $X$ except $x_v$.
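As a concrete check, the range likelihood of Formula (2) can be evaluated directly. The sketch below is illustrative only; the function and parameter names are not from the paper:

```python
import numpy as np

def range_likelihood(d_rt, x_r, x_t, sigma_rt):
    """Gaussian likelihood of a range measurement d_rt given hypothesised
    node positions x_r and x_t, as in Formula (2)."""
    dist = np.linalg.norm(np.asarray(x_r, float) - np.asarray(x_t, float))
    return float(np.exp(-(d_rt - dist) ** 2 / (2.0 * sigma_rt ** 2))
                 / np.sqrt(2.0 * np.pi * sigma_rt ** 2))
```

The likelihood peaks when the measured range matches the true inter-node distance, which is exactly the ring-shaped constraint the pairwise potential encodes.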

2.2. Belief Propagation

The BP method is an exact inference method whose core operations are belief computation and message computation. The belief of a node is the probability density over the values of that node's variable.
The message from node $u$ to node $t$ is the probability density over the possible values of variable $t$ from node $u$'s perspective. Node $t$ renews its belief by aggregating the messages transmitted to it by all neighboring nodes.
Node $u$ sends a message to node $t$:
$$m_{ut}(x_t) = \int_{x_u} \psi_{tu}(x_t, x_u)\, \frac{B_u(x_u)}{m_{tu}(x_u)}\, dx_u \quad (4)$$
Node $t$ receives the messages from the surrounding nodes and computes its belief:
$$B_t(x_t) = z\, \psi_t(x_t) \prod_{u \in G_t} m_{ut}(x_t) \quad (5)$$
where $\psi_t(x_t)$ is the node potential function, $\psi_{tu}(x_t, x_u)$ is the pairwise potential function of two nodes, and $z$ is the normalization factor.
When the BP method is applied to the distance-based localization problem, the node potential function $\psi_t(x_t)$ is the probability density function of the node's position:
$$\psi_t(x_t) = p_t(x_t) \quad (6)$$
The pairwise potential function $\psi_{tu}(x_t, x_u)$ between two nodes is the probability density function of the two nodes based on their distance. Combining with Formula (2):
$$\psi_{tu}(x_t, x_u) = p(d_{tu} \mid x_u, x_t) = z \exp\left( -\frac{(d_{tu} - \|x_u - x_t\|)^2}{2\sigma_{tu}^2} \right) \quad (7)$$
LBP, which can be applied to graphs with loops, adds iteration to BP. The message from node $u$ to node $t$ in the $i$-th iteration is:
$$m_{ut}^{i}(x_t) = \int_{x_u} \psi_{tu}(x_t, x_u)\, \frac{B_u^{i-1}(x_u)}{m_{tu}^{i-1}(x_u)}\, dx_u \quad (8)$$
The belief update of node $t$ in the $i$-th iteration is:
$$B_t^{i}(x_t) = z\, \psi_t(x_t) \prod_{u \in G_t} m_{ut}^{i}(x_t) \quad (9)$$
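On a discretized state space, the LBP message and belief updates above reduce to a matrix–vector product followed by an elementwise product. The sketch below is illustrative (the function names and the grid discretization are assumptions, not the paper's implementation):

```python
import numpy as np

def lbp_message(pair_pot, belief_u, msg_t_to_u):
    """One LBP message m_{u->t} on a discretised state space.
    pair_pot[i, j] = psi_tu(x_t = grid[i], x_u = grid[j]); dividing the
    source belief by the reverse message removes the information node t
    previously sent to node u."""
    ratio = belief_u / np.maximum(msg_t_to_u, 1e-300)
    m = pair_pot @ ratio
    return m / m.sum()  # normalise for numerical stability

def lbp_belief(node_pot, incoming_msgs):
    """Belief update: product of the node potential and all incoming
    messages, renormalised so the belief sums to one."""
    b = node_pot * np.prod(incoming_msgs, axis=0)
    return b / b.sum()
```

The division by the reverse message is exactly the term that SBP (Section 3.1) drops to turn point-to-point messages into broadcasts.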

3. Methodology

Since the feasible solutions of the positioning problem lie in continuous space, directly dividing the feasible space and calculating the probability wastes much computational power. To lower the computational complexity and communication requirements, NBP uses particles to represent the potential functions. Nevertheless, the amount of data to transmit is still large (Figure 1).
This paper proposes the simplified belief propagation and variational expectation–maximization (SBPVEM) method to realize the cooperative localization of nodes. The algorithm consists mainly of simplified belief propagation (SBP), high-aggregation sampling, and variational expectation–maximization (VEM). The concrete operation of the algorithm on a node is shown in Figure 2.

3.1. Simplify Belief Propagation

According to the message update formula, the message $m_{ut}$ transmitted by node $u$ to node $t$ accumulates, over the possible values of $x_u$, the product of the pairwise potential function and the aggregation of the messages transmitted to node $u$ by its neighbors other than $t$. The message transmitted by node $t$ to node $u$ is divided out to prevent the belief of node $t$ from being used multiple times, which would make the node overconfident. However, the approximation of BP by LBP shows that the repeated use of node beliefs does not affect the expectation of the node position estimate, so we simplify the message from node $u$ to node $t$ as follows:
$$m_{ut}^{i}(x_t) = \int_{x_u} \psi_{tu}(x_t, x_u)\, B_u^{i-1}(x_u)\, dx_u \quad (10)$$
Because of the equivalence of the solution, we can make:
$$m_{ut}^{i}(x_t) = B_u^{i-1}(x_u) \quad (11)$$
$$\Psi_{ut}(x_t, x_u) = \int_{x_u} \psi_{tu}(x_t, x_u)\, m_{ut}^{i}(x_u)\, dx_u \quad (12)$$
$$\tilde{B}_t^{i}(x_t) = k\, \psi_t(x_t) \prod_{u \in G_t} \Psi_{ut} \quad (13)$$
In this way, we decompose the SBP problem into three steps. In the first step, shown in Formula (11), each node broadcasts its belief as its message to all neighboring nodes. In the second step, according to Formula (12), the potential function $\Psi$ between the target node and a neighbor node is calculated from the belief of the source node and the constraint between the two nodes. In the third step, based on Formula (13), the target node calculates the belief estimate $\tilde{B}$ of its own position from the potential functions.
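On a discretized grid, one SBP round following Formulas (11)–(13) can be sketched as below. This is an illustrative simplification (names and the grid representation are assumptions); note there is no message division, because neighbors simply broadcast their beliefs:

```python
import numpy as np

def sbp_round(node_pot, neighbor_beliefs, pair_pots):
    """One SBP round at a target node on a discretised grid.
    neighbor_beliefs are the broadcast beliefs B_u (step 1, Formula 11);
    pair_pots[u][i, j] = psi_tu(x_t = grid[i], x_u = grid[j])."""
    belief = np.asarray(node_pot, float).copy()
    for b_u, psi in zip(neighbor_beliefs, pair_pots):
        psi_ut = psi @ b_u   # step 2 (Formula 12): fold B_u through psi_tu
        belief *= psi_ut     # step 3 (Formula 13): aggregate the constraints
    return belief / belief.sum()
```

Compared with the LBP update, dropping the division turns each per-neighbor message into a single broadcast of the node's belief, which is where the bandwidth saving comes from.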
We assume that the potential function of a node can be expressed by a Gaussian mixture model, which is a superposition of multiple Gaussian distributions:
$$G(\alpha, \mu, \Sigma) = \sum_{k=1}^{n} \alpha_k\, \mathcal{N}(\mu_k, \Sigma_k) \quad (14)$$
Here, $\mathcal{N}$ represents a Gaussian distribution, and $\mu_k$ and $\Sigma_k$ are the parameters of the $k$-th Gaussian component. The density of the multi-dimensional Gaussian distribution is:
$$\mathcal{N}(\mu_k, \Sigma_k) = \frac{1}{2\pi\sqrt{\det(\Sigma_k)}} \exp\left( -\frac{1}{2} (x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k) \right) \quad (15)$$
The potential function of nodes is:
$$B_u^{i}(x_u) = G_u(\alpha, \mu, \Sigma) \quad (16)$$
Substituting Formulas (14) and (16) into (12) gives:
$$\Psi_{ut}(x_t, x_u) = \sum_{k=1}^{K} \alpha_k \int_{x_u} \psi_{tu}(x_t, x_u)\, \mathcal{N}_u(\mu_k, \Sigma_k)\, dx_u \quad (17)$$
According to Formulas (7) and (15), $\psi_{tu}(x_t, x_u)$ can be regarded as a one-dimensional Gaussian, and the integral can be understood as the expectation of the pairwise potential function $\psi_{tu}$ under the probability $\mathcal{N}_u(\mu_k, \Sigma_k)$. For convenience of calculation, we can fold the pairwise potential function between the nodes directly into the belief distribution of the node. Let $x_s$ be the equivalent variable:
$$x_s = \frac{x_u - \mu}{|x_u - \mu|} \left( |x_u - \mu| - d \right) \quad (18)$$
in which $|x_u - \mu| - d$ accounts for the distance constraint between the nodes and reduces the two-dimensional problem to one dimension, while $\frac{x_u - \mu}{|x_u - \mu|}$ keeps the directionality of the node position and restores the one-dimensional distance to a two-dimensional direction. Substituting the equivalent variable $x_s$ into the two-dimensional belief distribution of the node, we obtain:
$$\int_{x_u} \psi_{tu}(x_t, x_u)\, \mathcal{N}_u(\mu_k, \Sigma_k)\, dx_u = z \exp\left( -\frac{x_s^T \Sigma_k^{-1} x_s}{2} \right) = z \exp\left( -\frac{\left( \frac{x_u - \mu}{|x_u - \mu|} (|x_u - \mu| - d) \right)^T \Sigma_k^{-1} \left( \frac{x_u - \mu}{|x_u - \mu|} (|x_u - \mu| - d) \right)}{2} \right) \quad (19)$$
The results are shown schematically in Figure 3. Substituting Formula (19) into (17) yields the potential function obtained by fusing the node potential function with the pairwise potential function.
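The equivalent-variable construction of Formulas (18) and (19) can be sketched numerically as follows. This is an illustrative sketch; the function name is an assumption, `Sigma_inv` stands for the inverse component covariance $\Sigma_k^{-1}$, and `d` for the measured distance:

```python
import numpy as np

def fused_potential(x, mu, Sigma_inv, d):
    """Unnormalised fused potential of Formula (19): shift the offset x - mu
    along its own direction by the measured distance d (the equivalent
    variable of Formula (18)), then evaluate under the Gaussian component."""
    diff = np.asarray(x, float) - np.asarray(mu, float)
    r = np.linalg.norm(diff)
    if r < 1e-12:
        return 0.0  # direction undefined at the component mean
    x_s = diff / r * (r - d)  # equivalent variable of Formula (18)
    return float(np.exp(-0.5 * x_s @ Sigma_inv @ x_s))
```

The value is maximal on the ring of radius `d` around `mu`, matching the ring-shaped potentials shown schematically in Figure 3.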

3.2. High-Aggregation Sampling

In SBP, every node transmits its own position belief. However, the belief of a node is the aggregation of pairwise potential functions based on mixed Gaussians, and its form is very complicated. For convenience of expression, we approximate the exact distribution of a node by sampling and re-fitting. We propose a high-aggregation sampling method, which simultaneously reduces the amount of calculation, improves the sampling efficiency, and improves the effectiveness of the samples.
Sampling is divided into five steps: uniform sampling in sampling space, probability calculation, node filtering, resampling, and importance sampling.
(1) Determine the sampling space and draw uniform samples $\tilde{x} \sim U(x_{min}, x_{max})$ in the space, where:
$$x_{min} = \max\{ [x_{min};\ \max(x_g - d_g)] \} \quad (20)$$
$$x_{max} = \min\{ [x_{max};\ \min(x_g + d_g)] \} \quad (21)$$
(2) Evaluate the belief of the node $\tilde{B}$ at the sampled particles to obtain the particle probabilities $P(\tilde{x})$.
(3) Filter out particles with particularly low probability using the threshold $P_{gate}$ to obtain the effective particles $x_{gate}$:
$$x_{gate} = \tilde{x} \quad \text{if}\ P(\tilde{x}) > P_{gate} \quad (22)$$
(4) When the number of valid samples is insufficient, resample by sample expansion and random walk.
Expand the sample set by copying the samples:
$$\tilde{x}_{gate} = \left[ x_{gate};\ \cdots;\ x_{gate} \right]_{\lfloor \eta \rfloor} \quad (23)$$
where $\eta = n / \mathrm{size}(x_{gate})$ and $n$ is the total number of samples at the node.
A random walk of the expanded samples then realizes the resampling:
$$x_{tem} = \tilde{x}_{gate} + \beta\, \tilde{\eta}\, \sigma\, v \quad (24)$$
where $\tilde{\eta} = \max(3, \eta)$, and the combination of $\tilde{\eta}$ and $\sigma$ limits the distance traveled. $v$ indicates the direction of the random walk; in a two-dimensional environment, $v = [\cos(\theta), \sin(\theta)]$. $\beta \sim U[0, 1]$ is a uniform sample in distance, and $\theta \sim U[0, 2\pi)$ is a uniform sample in angle.
(5) Apply importance sampling to the obtained new particles:
$$x_{sample} \sim P(x_{tem}) \quad (25)$$
In this algorithm, uniform sampling obtains sufficient effective particles, which ensures the expressiveness of the samples as a whole, and following the uniform sampling with importance sampling lets the particles approximate the distribution.
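The five steps above can be sketched as a single pipeline. This is an illustrative sketch only: the function name, the `belief_pdf` callable, and the defaults for `p_gate` and `sigma` are assumptions, not values from the paper:

```python
import numpy as np

def high_aggregation_sampling(belief_pdf, x_min, x_max, n,
                              p_gate=1e-3, sigma=0.5, rng=None):
    """Sketch of the five sampling steps of Section 3.2. belief_pdf maps an
    (m, 2) array of positions to m unnormalised densities."""
    rng = np.random.default_rng() if rng is None else rng
    # (1) uniform samples in the constrained sampling box
    x = rng.uniform(x_min, x_max, size=(n, 2))
    # (2)-(3) evaluate the belief and gate out near-zero particles
    p = belief_pdf(x)
    x_gate = x[p > p_gate]
    # (4) expand the gated set by copying, then random-walk the copies
    eta = max(1, n // max(len(x_gate), 1))
    x_exp = np.repeat(x_gate, eta, axis=0)[:n]
    theta = rng.uniform(0.0, 2.0 * np.pi, size=len(x_exp))
    beta = rng.uniform(0.0, 1.0, size=len(x_exp))
    step = (beta * max(3, eta) * sigma)[:, None]
    x_tem = x_exp + step * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    # (5) importance resampling proportional to the belief
    w = belief_pdf(x_tem)
    w = w / w.sum()
    idx = rng.choice(len(x_tem), size=len(x_tem), p=w)
    return x_tem[idx]
```

Gating before the random walk is what concentrates the particle budget in the high-probability region instead of spreading it over the whole feasible box.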

3.3. Variation Expectation Maximization

The EM method is a parameter-estimation method for the Gaussian mixture model, usually applied to low-dimensional models with a known number of components. However, for an arbitrarily shaped distribution, it is impossible to determine in advance how many Gaussian components are needed to achieve a proper fit. To better fit the possible positions of nodes, this paper proposes a variational expectation–maximization method that can adapt to an uncertain number of Gaussian components. We initialize with a Gaussian mixture model with enough components and then filter the Gaussian components, keeping those with higher responsibility as the selected model (Algorithm 1).
In the Gaussian mixture model of Formula (14), $\alpha_k$ is the weight of the $k$-th Gaussian distribution, with $\alpha_k \in [0, 1]$ and $\sum_{k=1}^{K} \alpha_k = 1$. We introduce the hidden variable $\gamma_{jk}$ to indicate whether the $j$-th sample comes from the $k$-th Gaussian distribution: $\gamma_{jk} = 1$ when it does, and $\gamma_{jk} = 0$ otherwise. The larger the weight of the $k$-th Gaussian distribution, the more samples come from it; conversely, the more samples from the $k$-th component, the larger its weight. The relationship between the hidden variable $\gamma_{jk}$ and the weight $\alpha_k$ is:
$$p(\gamma_{jk} = 1) = \alpha_k \quad (26)$$
Because $\gamma_{j \cdot} = [0, \ldots, 0, 1, 0, \ldots, 0]$ for every sample $j$ is a 1-of-$K$ one-hot code, it can be written as follows:
$$p(\gamma_{j \cdot} \mid \alpha) = \prod_{k=1}^{K} \alpha_k^{\gamma_{jk}} \quad (27)$$
After specifying the classification, there is:
$$p(y_j \mid \gamma_{jk} = 1, \mu, \Sigma) = \mathcal{N}(y_j \mid \mu_k, \Sigma_k) \quad (28)$$
Algorithm 1: Parameter fitting based on VEM
The joint probability of the hidden variables over all samples is:
$$p(\gamma \mid \alpha) = \prod_{j=1}^{N} \prod_{k=1}^{K} \alpha_k^{\gamma_{jk}} \quad (29)$$
In the case of multiple samples and multiple classes:
$$p(y \mid \gamma, \mu, \Sigma) = \prod_{j=1}^{N} \prod_{k=1}^{K} \mathcal{N}(y_j \mid \mu_k, \Sigma_k)^{\gamma_{jk}} \quad (30)$$
The likelihood under the given distribution parameters is:
$$P(y, \gamma \mid \alpha, \mu, \Sigma) = \prod_{k=1}^{K} \alpha_k^{n_k} \prod_{j=1}^{N} \mathcal{N}(y_j \mid \mu_k, \Sigma_k)^{\gamma_{jk}} \quad (31)$$
where $n_k = \sum_{j=1}^{N} \gamma_{jk}$ represents the number of samples from the $k$-th Gaussian distribution and serves as the intensity weight of the $k$-th component in the calculation:
$$\alpha_k = \frac{n_k}{N} \quad (32)$$
For convenience of calculation, the log-likelihood can be taken:
$$\log P(y, \gamma \mid \theta) = \sum_{k=1}^{K} \left( n_k \log \alpha_k + \sum_{j=1}^{N} \gamma_{jk} \log \mathcal{N}(y_j \mid \theta_k) \right) \quad (33)$$
where:
$$\log \mathcal{N}(y_j \mid \theta_k) = -\log(2\pi) - \frac{1}{2} \log |\Sigma_k| - \frac{1}{2} (y_j - \mu_k)^T \Sigma_k^{-1} (y_j - \mu_k) \quad (34)$$
The goal of the EM algorithm is to find the distribution parameters under which the samples are most expected, realized iteratively through two steps: expectation and maximization. The expectation step computes the expectation of the hidden variables under the current observations and parameters. The maximization step finds the parameters that maximize that expectation of the hidden variables.
Given the observations $y$ and the current parameters $\theta^{(i)}$, the expectation over the conditional distribution of the hidden variable $\gamma$ is:
$$Q(\theta, \theta^{(i)}) = E_\gamma \left[ \log P(y, \gamma \mid \theta) \mid y, \theta^{(i)} \right] = \sum_{k=1}^{K} \left( n_k \log \alpha_k + \sum_{j=1}^{N} \hat{\gamma}_{jk} \log \mathcal{N}(y_j \mid \theta_k) \right) \quad (35)$$
in which $\theta$ is the true distribution parameter, $\theta^{(i)}$ is the distribution parameter calculated in the $i$-th iteration, and $\hat{\gamma}_{jk} = E_\gamma[\gamma_{jk}]$:
$$\hat{\gamma}_{jk} = \frac{\alpha_k\, \mathcal{N}(y_j \mid \theta_k)}{\sum_{k'=1}^{K} \alpha_{k'}\, \mathcal{N}(y_j \mid \theta_{k'})} \quad (36)$$
$\hat{\gamma}_{jk}$ indicates the probability that sample $j$ belongs to the $k$-th component of the Gaussian mixture model; $[\hat{\gamma}_{jk}]$ is called the responsibility matrix. The more samples are likely to come from the $k$-th component, the larger its weight:
$$\alpha_k = \frac{n_k}{N}, \quad n_k = \sum_{j} \hat{\gamma}_{jk} \quad (37)$$
When $K$ is large, the Gaussian mixture model comprises many components, some of which play no vital role. To reduce the amount of further calculation, the components of the Gaussian model must be screened:
$$\hat{k}_{pick}: \ \alpha_k > \tau \frac{1}{|\alpha|} \quad (38)$$
where $\tau$ is a scaling factor. When the weight $\alpha_k$ of the $k$-th component in the mixed Gaussian model is greater than $\tau \frac{1}{|\alpha|}$, the $k$-th component is selected to enter the next iteration; otherwise, it is deleted. Through continuous selection in the iterative process, the number of components can be controlled.
The parameters that maximize the current expectation can be determined by differentiation. Setting the partial derivative with respect to the mean to zero gives the extreme value:
$$\mu_k = \frac{\sum_{j=1}^{N} \hat{\gamma}_{jk}\, y_j}{\sum_{j=1}^{N} \hat{\gamma}_{jk}} \quad (39)$$
The updated $\Sigma_k$ is obtained as:
$$\Sigma_k = \frac{\sum_{j=1}^{N} \hat{\gamma}_{jk}\, (y_j - \mu_k)(y_j - \mu_k)^T}{\sum_{j=1}^{N} \hat{\gamma}_{jk}} \quad (40)$$
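The E-step, M-step, and component-pruning rule above can be sketched together as follows. This is an illustrative sketch, not the paper's implementation: the function name and the defaults for `K`, `iters`, and `tau` are assumptions, and a small covariance regularizer is added for numerical stability:

```python
import numpy as np

def vem_fit(y, K=6, iters=40, tau=0.5, rng=None):
    """Sketch of the variational EM of Section 3.3: standard GMM EM updates
    plus pruning of components whose weight falls below tau / |alpha|."""
    rng = np.random.default_rng() if rng is None else rng
    N, d = y.shape
    mu = y[rng.choice(N, size=K, replace=False)]
    Sigma = np.stack([np.eye(d)] * K)
    alpha = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities gamma_hat[j, k]
        diff = y[:, None, :] - mu[None, :, :]                  # (N, K, d)
        inv = np.linalg.inv(Sigma)                             # (K, d, d)
        mah = np.einsum('jkd,kde,jke->jk', diff, inv, diff)
        dens = np.exp(-0.5 * mah) / np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))
        g = alpha * dens
        g = g / np.maximum(g.sum(axis=1, keepdims=True), 1e-300)
        # M-step: weights, means, covariances
        nk = np.maximum(g.sum(axis=0), 1e-12)
        alpha = nk / N
        mu = (g.T @ y) / nk[:, None]
        diff = y[:, None, :] - mu[None, :, :]
        Sigma = np.einsum('jk,jkd,jke->kde', g, diff, diff) / nk[:, None, None]
        Sigma = Sigma + 1e-6 * np.eye(d)                       # regularise
        # component pruning: keep component k when alpha_k > tau / |alpha|
        keep = alpha > tau / len(alpha)
        alpha, mu, Sigma = alpha[keep] / alpha[keep].sum(), mu[keep], Sigma[keep]
    return alpha, mu, Sigma
```

Pruning inside the loop is what lets the fit start with a deliberately generous number of components and shed the ones that attract little responsibility.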

3.4. Summary

The proposed method contains three steps: simplified belief propagation, high-aggregation sampling, and variational expectation–maximization. A simplified belief propagation algorithm is proposed as the overall reasoning framework by modeling the cooperative localization problem as a graph model. The high-aggregation sampling and variational expectation–maximization algorithm is applied to sample and fit the complicated distribution. The overall flow is depicted in Algorithm 2.
Algorithm 2: Cooperative localization based on SBPVEM method
The point-to-point message transmission between nodes is turned into a broadcast of the node's position estimate in simplified belief propagation. Before importance sampling, high-aggregation sampling adds a filter layer to improve efficiency by clustering particles in high-probability intervals. Finally, the particles are fitted into a Gaussian mixture model with an unknown number of components using variational expectation–maximization.

4. Experiment and Results

4.1. Setting Up

To verify the algorithm, we use static numerical experiments in a 2D area of 20 m × 20 m. We consider three connection scenarios that gradually increase the difficulty of locating the target nodes: two anchor nodes, one anchor node, and no anchor node. NBP [10], SPAWN [9], and PVSPA [16] are compared with SBPVEM.
The mean position error is calculated to evaluate the accuracy of the different algorithms:
$$err = \frac{1}{N} \left\| x_{gt} - x_{pre} \right\| \quad (41)$$
where $N$ is the number of nodes to be located, $x_{gt} \in \mathbb{R}^{N \times 2}$ is the vector of the ground-truth positions of all nodes to be located, and $x_{pre} \in \mathbb{R}^{N \times 2}$ is the vector of their predicted positions. Equivalently, per node:
$$err = \frac{1}{N} \sum_{i=1}^{N} \left\| x_{gt}^{i} - x_{pre}^{i} \right\| \quad (42)$$
where $x_{gt}^{i} = [p_{gt}^{i1}, p_{gt}^{i2}]$ is the ground-truth position of node $i$, $x_{pre}^{i} = [p_{pre}^{i1}, p_{pre}^{i2}]$ is its predicted position, and $\| x_{gt}^{i} - x_{pre}^{i} \| = \sqrt{(p_{gt}^{i1} - p_{pre}^{i1})^2 + (p_{gt}^{i2} - p_{pre}^{i2})^2}$ is the Euclidean distance between them.
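The error formula above amounts to averaging per-node Euclidean distances; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def position_error(x_gt, x_pre):
    """Mean Euclidean position error over the N target nodes, as in the
    error formula above. Inputs are (N, 2) arrays of positions."""
    x_gt, x_pre = np.asarray(x_gt, float), np.asarray(x_pre, float)
    return float(np.mean(np.linalg.norm(x_gt - x_pre, axis=1)))
```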

4.2. Connection Scenario 1: Two Anchors

In connection scenario 1, four anchor nodes and two target nodes are set up, with each target node connecting to two anchor nodes and the target nodes connecting to each other. Figure 4 depicts the location distribution and communication relationship between nodes (a), the positioning errors of the various algorithms (b–e), and the overall positioning errors for contrast (f). All four algorithms achieve accurate positioning and converge after two iterations, as shown in Figure 4f.

4.3. Connection Scenario 2: One Anchor

In connection scenario 2, three anchor nodes and three target nodes are set up, with each target node connecting to one anchor node and the target nodes connecting to each other. Figure 5 depicts the location distribution and communication relationship between nodes (a), the positioning errors of the various algorithms (b–e), and the overall positioning errors for contrast (f). As the figure depicts, SBPVEM obtains localization results as accurate as those of SPAWN and NBP. Nevertheless, PVSPA can hardly reduce the position error.

4.4. Connection Scenario 3: No Anchor

In connection scenario 3, three anchor nodes and six target nodes are set up. Three of the six target nodes connect with one anchor, while the others only connect with neighboring target nodes. Figure 6 depicts the location distribution and communication relationship between nodes (a), the positioning errors of the various algorithms (b–e), and the overall positioning errors for contrast (f). In iteration 1, no effective message reached the target nodes without anchor neighbors, so their first predictions lie near the center. After two iterations, all algorithms except PVSPA improve the prediction accuracy; PVSPA makes the localization even worse.

5. Discussion

Considering the bandwidth requirement, the required bandwidth of SPAWN is proportional to the number of grid divisions, since its messages are grid probabilities. The data amount of NBP is proportional to the number of particles used to express the potential function. SBPVEM and PVSPA fit the given distribution before sending parameters as messages, so their bandwidth requirements are small. In summary, $C_{SPAWN} \gg C_{NBP} \gg C_{SBPVEM} \approx C_{PVSPA}$.
SBPVEM and PVSPA reduce the communication bandwidth. Experiments show that the SBPVEM algorithm can sufficiently cope with the challenging environment that the PVSPA algorithm cannot handle. SBPVEM can obtain accurate node localization while reducing bandwidth requirements and has a better expressive ability than PVSPA.

6. Conclusions

This paper proposes the SBPVEM method for the distributed cooperative localization of sparse anchor targets. A simplified belief propagation method is proposed as the overall reasoning framework by modeling the cooperative localization problem as a graph model. The high-aggregation sampling and variation expectation–maximization algorithm are applied to sample and fit the complicated distribution.
SBPVEM can obtain accurate node localization in a challenging environment while reducing bandwidth requirements. Our work solves the self-localization problem of distributed nodes under the condition of a minimal number of anchor nodes, makes cooperative localization with sparse anchor nodes in wireless sensor networks a reality, and lowers the cost in large-scale applications.

Author Contributions

Conceptualization, M.W. and J.C.; software, X.W.; validation, X.W.; formal analysis, X.W. and Y.G.; investigation, X.W.; resources, Y.G.; data curation, X.W.; writing—original draft preparation, X.W., C.L.; writing—review and editing, Y.G. and Z.S.; visualization, X.W. and Z.S.; supervision, Y.G. and M.W.; project administration, M.W. and J.C.; funding acquisition, Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (No. 62103424) and the Junior Faculty Innovation Research Project, College of Intelligence Science and Engineering, National University of Defense Technology (No. ZN2019-16).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BP: Belief Propagation
LBP: Loopy Belief Propagation
NBP: Nonparametric Belief Propagation
SPAWN: Sum-Product Algorithm over a Wireless Network
PVSPA: Parametric Variational Sum-Product Algorithm
SBPVEM: Simplify Belief Propagation and Variation Expectation Maximization

References

  1. Elazab, M.; Noureldin, A.; Hassanein, H.S. Integrated cooperative localization for Vehicular networks with partial GPS access in Urban Canyons. Veh. Commun. 2017, 9, 242–253. [Google Scholar] [CrossRef]
  2. Li, Y.C.; Wang, Y.Y.; Yu, W.B.; Guan, X.P. Multiple Autonomous Underwater Vehicle Cooperative Localization in Anchor-Free Environments. IEEE J. Ocean. Eng. 2019, 44, 895–911. [Google Scholar] [CrossRef]
  3. Benini, A.; Mancini, A.; Longhi, S. An IMU/UWB/Vision-based Extended Kalman Filter for Mini-UAV Localization in Indoor Environment using 802.15.4a Wireless Sensor Network. J. Intell. Robot. Syst. 2013, 70, 461–476. [Google Scholar] [CrossRef]
  4. Shi, Q.; Cui, X.; Li, W.; Xia, Y.; Lu, M. Visual-UWB navigation system for unknown environments. In Proceedings of the 31st International Technical Meeting of the Satellite Division of the Institute of Navigation, ION GNSS+ 2018, Miami, FL, USA, 24–28 September 2018; pp. 3111–3121. [Google Scholar] [CrossRef] [Green Version]
  5. Manoharan, A.; Sharma, R.; Sujit, P.B. Nonlinear model predictive control to aid cooperative localization. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems, ICUAS 2019, Atlanta, GA, USA, 11–14 June 2019; pp. 26–32. [Google Scholar] [CrossRef]
  6. Nguyen, T.V.; Jeong, Y.; Shin, H.; Win, M.Z. Least-Square Cooperative Localization. IEEE Trans. Veh. Technol. 2015, 64, 1318–1330. [Google Scholar] [CrossRef]
  7. Song, Y.; Wang, C.X.; Tay, W.P.; Law, C.L. Grid-based belief propagation. In Proceedings of the 2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sapporo, Japan, 18–21 September 2017; pp. 1–6. [Google Scholar]
  8. Naseri, H.; Koivunen, V. A Bayesian algorithm for distributed network localization using distance and direction data. IEEE Trans. Wirel. Commun. 2017, 5, 290–304. [Google Scholar] [CrossRef]
  9. Wymeersch, H.; Lien, J.; Win, M.Z. Cooperative localization in wireless networks. Proc. IEEE 2009, 97, 427–450. [Google Scholar] [CrossRef]
  10. Ihler, A.T.; Fisher, J.W.; Moses, R.L.; Willsky, A.S. Nonparametric belief propagation for self-localization of sensor networks. IEEE J. Sel. Areas Commun. 2005, 23, 809–819. [Google Scholar] [CrossRef] [Green Version]
  11. Meyer, F.; Hlinka, O.; Hlawatsch, F. Sigma point belief propagation. IEEE Signal Process. Lett. 2013, 21, 145–149. [Google Scholar] [CrossRef] [Green Version]
  12. Caceres, M.A.; Penna, F.; Wymeersch, H.; Garello, R. Hybrid cooperative positioning based on distributed belief propagation. IEEE J. Sel. Areas Commun. 2011, 29, 1948–1958. [Google Scholar] [CrossRef] [Green Version]
  13. Liu, Y.; Lian, B.; Zhou, T. Gaussian message passing-based cooperative localization with node selection scheme in wireless networks. Signal Process. 2019, 156, 166–176. [Google Scholar] [CrossRef]
  14. Li, S.; Hedley, M.; Collings, I.B. New efficient indoor cooperative localization algorithm with empirical ranging error model. IEEE J. Sel. Areas Commun. 2015, 33, 1407–1417. [Google Scholar] [CrossRef]
  15. Dauwels, J. On variational message passing on factor graphs. In Proceedings of the 2007 IEEE International Symposium on Information Theory, Nice, France, 24–29 June 2007; pp. 2546–2550. [Google Scholar]
  16. Li, W.; Liu, D. Parametric Variational Sum-Product Algorithm for Cooperative Localization in Wireless Sensor Networks. IEEE Access 2021, 9, 89834–89845. [Google Scholar] [CrossRef]
Figure 1. Localized anchor nodes are represented by the red five-pointed star, while unlocalized target nodes are represented by the blue circle. Each node sends its opinion of the distribution of the neighborhood node on the basis of its belief and measured distance.
Figure 2. SBPVEM intra-node data flow. The node receives the neighbors' beliefs $B_{i \in G}$ and computes the potential function $\Psi$ between nodes under the constraints of the distance measurements. The belief $\tilde{B}$ is obtained by multiplying all the neighboring $\Psi$. Because $\tilde{B}$ is complicated, a GMM $B$ is obtained to represent $\tilde{B}$ through high-aggregation sampling and variational expectation–maximization.
Figure 3. Node potential function under different distributions. (a) indicates belief of node a ( N a ), where μ = [ 0.5 , 0.5 ] , σ = [ 0.1 , 0.1 ] , ρ = 0 ; (b) indicates the potential function Ψ a c with d i s a c = 0.35 ; (c) indicates belief of node b ( N b ), where μ = [ 0.5 , 0.5 ] , σ = [ 0.1 , 0.1 ] , ρ = 0.8 ; (d) indicates the potential function Ψ b c with d i s b c = 0.35 .
Figure 3. Node potential function under different distributions. (a) indicates belief of node a ( N a ), where μ = [ 0.5 , 0.5 ] , σ = [ 0.1 , 0.1 ] , ρ = 0 ; (b) indicates the potential function Ψ a c with d i s a c = 0.35 ; (c) indicates belief of node b ( N b ), where μ = [ 0.5 , 0.5 ] , σ = [ 0.1 , 0.1 ] , ρ = 0.8 ; (d) indicates the potential function Ψ b c with d i s b c = 0.35 .
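The ring-shaped potential functions in panels (b) and (d) of Figure 3 can be reproduced by pushing samples of the neighbor's belief outward by the measured distance in a uniformly random direction. The sketch below assumes noise-free ranging, and all function and variable names are illustrative, not from the paper:

```python
import numpy as np

def potential_samples(mu, cov, dis, n=5000, rng=None):
    """Draw samples from the pairwise potential induced on a node by a
    neighbor with Gaussian belief N(mu, cov) and a measured distance dis:
    each neighbor sample is displaced by dis in a uniformly random
    direction, yielding the ring-shaped distribution of Figure 3."""
    rng = rng or np.random.default_rng(0)
    nb = rng.multivariate_normal(mu, cov, size=n)    # neighbor positions
    ang = rng.uniform(0.0, 2.0 * np.pi, size=n)      # random bearings
    offset = dis * np.stack([np.cos(ang), np.sin(ang)], axis=1)
    return nb + offset

# Setting of panel (b): mu = [0.5, 0.5], sigma = [0.1, 0.1], rho = 0
s = potential_samples([0.5, 0.5], np.diag([0.01, 0.01]), dis=0.35)
r = np.linalg.norm(s - np.array([0.5, 0.5]), axis=1)
```

The mean of `r` clusters near the measured distance 0.35, spread out by the neighbor's positional uncertainty, which is exactly why the product of several such rings (Figure 2) becomes a complicated multi-modal belief.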
Figure 4. Localization results for connection scenario 1. (a) shows the setup of scenario 1: anchors are cyan nodes, targets are yellow nodes, and blue dotted lines link pairs of nodes that communicate. (b–e) show the iterative positioning results and corresponding positioning errors of PVSPA, SPAWN, NBP, and SBPVEM. (f) compares the prediction errors of the algorithms over the iterations.
Figure 5. Localization results for connection scenario 2. (a) shows the setup of scenario 2: anchors are cyan nodes, targets are yellow nodes, and blue dotted lines link pairs of nodes that communicate. (b–e) show the iterative positioning results and corresponding positioning errors of PVSPA, SPAWN, NBP, and SBPVEM. (f) compares the prediction errors of the algorithms over the iterations.
Figure 6. Localization results for connection scenario 3. (a) shows the setup of scenario 3: anchors are cyan nodes, targets are yellow nodes, and blue dotted lines link pairs of nodes that communicate. (b–e) show the iterative positioning results and corresponding positioning errors of PVSPA, SPAWN, NBP, and SBPVEM. (f) compares the prediction errors of the algorithms over the iterations.
Table 1. Basic symbol descriptions.
| Symbol | Description |
| --- | --- |
| x_t | State of node t |
| ψ_t(x_t) | Potential function of node t |
| ψ_tu(x_t, x_u) | Pairwise potential function between node t and node u |
| m_ut(x_t) | Message from node u to node t |
| m_ut^i(x_t) | Message passed from node u to node t in the i-th iteration |
| B_t(x_t) | Belief function of node t |
| B_t^i(x_t) | Belief function of node t in the i-th iteration |
| G_t | Neighbor nodes of node t |
| G_t^0 | Neighbor nodes of node t that are not anchor nodes |
| N(μ, Σ) | Multi-dimensional Gaussian distribution |
| U(a, b) | Uniform distribution |
| z | Normalization factor |
| α_k | Weight of the k-th Gaussian component in the Gaussian mixture model |
| θ_k | Parameters of the k-th Gaussian component in the mixture model, including μ_k and Σ_k |
| γ | Characteristic function indicating the source of a sample; K-dimensional: γ_jk = 1 when sample j comes from the k-th Gaussian component, and γ_jn = 0 for n ≠ k |
| G(α, μ, Σ) | Gaussian mixture model |
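Several symbols in Table 1 (α_k, θ_k, γ_jk) belong to the Gaussian-mixture fitting step of Figure 2. The sketch below uses ordinary EM rather than the paper's variational EM, purely to illustrate how the responsibilities γ_jk drive the updates of α_k and θ_k = (μ_k, Σ_k); all names are illustrative:

```python
import numpy as np

def fit_gmm(x, k=2, iters=50):
    """Fit G(alpha, mu, Sigma) to samples x by plain EM (an ordinary-EM
    stand-in for the paper's variational EM, for illustration only)."""
    n, d = x.shape
    alpha = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.2, 0.8, k), axis=0)   # spread-out init
    sigma = np.stack([np.cov(x.T) + 1e-6 * np.eye(d) for _ in range(k)])
    for _ in range(iters):
        # E-step: gamma[j, c] is the responsibility of component c for sample j
        logp = np.stack([
            -0.5 * np.einsum('ji,ji->j',
                             (x - mu[c]) @ np.linalg.inv(sigma[c]), x - mu[c])
            - 0.5 * np.log(np.linalg.det(sigma[c])) + np.log(alpha[c])
            for c in range(k)], axis=1)
        gamma = np.exp(logp - logp.max(axis=1, keepdims=True))
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: update alpha_k and theta_k = (mu_k, Sigma_k)
        nk = gamma.sum(axis=0)
        alpha = nk / n
        mu = (gamma.T @ x) / nk[:, None]
        for c in range(k):
            xc = x - mu[c]
            sigma[c] = (gamma[:, c, None] * xc).T @ xc / nk[c] + 1e-6 * np.eye(d)
    return alpha, mu, sigma

# Two well-separated clusters standing in for a bimodal belief B-tilde
rng = np.random.default_rng(0)
x = np.vstack([rng.normal([0.0, 0.0], 0.1, (300, 2)),
               rng.normal([1.0, 1.0], 0.1, (300, 2))])
alpha, mu, sigma = fit_gmm(x)
```

Transmitting the fitted (α, μ, Σ) instead of the raw sample set is what lets SBPVEM cut bandwidth relative to particle-based schemes such as NBP.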
Wang, X.; Guo, Y.; Cao, J.; Wu, M.; Sun, Z.; Lv, C. Simplify Belief Propagation and Variation Expectation Maximization for Distributed Cooperative Localization. Appl. Sci. 2022, 12, 3851. https://doi.org/10.3390/app12083851