Article

Optimization of the Sampling Periods and the Quantization Bit Lengths for Networked Estimation

Young Soo Suh *, Young Sik Ro and Hee Jun Kang
Department of Electrical Engineering, University of Ulsan, Namgu, Ulsan 680-749, Korea
* Author to whom correspondence should be addressed.
Sensors 2010, 10(7), 6406-6420; https://doi.org/10.3390/s100706406
Submission received: 1 February 2010 / Revised: 17 April 2010 / Accepted: 12 May 2010 / Published: 29 June 2010
(This article belongs to the Special Issue Intelligent Sensors - 2010)

Abstract

This paper is concerned with networked estimation, where sensor data are transmitted over a network with a limited transmission rate. The transmission rate depends on the sampling periods and the quantization bit lengths. To investigate how the sampling periods and the quantization bit lengths affect the estimation performance, an equation to compute the estimation performance is provided. An algorithm is proposed to find a combination of sampling periods and quantization bit lengths that gives good estimation performance while satisfying the transmission rate constraint. The proposed algorithm is verified through a numerical example.

1. Introduction

Networked monitoring systems, in which sensor data are transmitted to a monitoring station through wired or wireless networks, have recently become increasingly popular [1, 2]. In the monitoring station, estimation algorithms (such as the Kalman filter) are used to estimate the system states. The network between the sensor nodes and the monitoring station can induce many problems, such as time delays, packet dropouts, and limited bandwidth, which depend on the network type and the scheduling method. We note that the networking issue itself (for example, which scheduling method should be used) is a large research area [3]. Time delays and packet dropouts [4–7] are among the most important problems in networked estimation.
In this paper we focus on the case where there are many sensors and the network bandwidth is limited. For example, suppose there are three sensors (A, B, C) and 90 bytes/s can be transmitted over the network. How should the 90 bytes/s be assigned to the sensors? One method is to assign 30 bytes/s to each sensor. However, if sensor A monitors a fast-changing value and sensor B a slowly changing one, this may not be the best strategy: more data rate should be assigned to sensor A and the data rate of sensor B should be reduced. These issues are discussed quantitatively in this paper.
Note that the data rate of each sensor depends on its sampling frequency and its quantization bit length. For example, the (100 Hz, 8 bit) and (50 Hz, 16 bit) cases have the same data rate (100 bytes/s). Thus, for a given data rate, there are many possible combinations of sampling frequencies and quantization bit lengths. We first investigate how the sampling frequency and the quantization bit length affect the estimation performance and then propose a method to choose them for each sensor.
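As a quick illustration (not from the paper; the helper name is ours), the payload data rate of one sensor is simply the product of its sampling frequency and bit length:

```python
def data_rate_bytes_per_s(freq_hz: float, bits: int) -> float:
    """Payload data rate of one sensor (packet overhead ignored)."""
    return freq_hz * bits / 8.0

# (100 Hz, 8 bit) and (50 Hz, 16 bit) give the same 100 bytes/s
print(data_rate_bytes_per_s(100, 8), data_rate_bytes_per_s(50, 16))
```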
Using different sampling frequencies for different sensors is discussed in [10], where the sampling frequencies are chosen by minimizing the Kalman filter error covariance. In [11], a sampling frequency assignment algorithm is given, where each sampling frequency is chosen from a finite discrete set. In [12], a similar sampling frequency assignment is considered, where the locations of the sensors and the cost of measurement are also included in the optimization problem. We note that there are other approaches in which an event-based transmission method [8, 9] is used instead of periodic transmission. In this paper, we assume periodic sampling of sensor data.
Quantization is an extensively studied area [13]. In relation to the estimation problem, a logarithmic quantizer is proposed in [14]. Although theoretically appealing, that quantizer is applied to the innovation of a filter rather than directly to an output. The effect of quantization can be reduced by treating the quantization error as additional measurement noise, as in [15]. In [16] and [17], quantization bit length assignment algorithms are proposed, where the bit length is computed by minimizing a performance index; however, the performance index is not directly related to the estimation performance (e.g., the filter error covariance).
Simultaneous optimization of the sampling frequencies and the quantization bit lengths has not been reported yet. In this paper, both parameters are selected so that the estimation performance is optimized subject to the transmission rate constraint.
The paper is organized as follows. In Section 2, the estimation performance P is defined, which depends on the sampling periods and quantization bit lengths. In Section 3, a suboptimal algorithm to compute a sampling period and quantization bit length combination is proposed. The proposed method is verified through numerical examples in Section 4, and conclusions are given in Section 5.

2. Problem Formulation

In this section, the estimation performance is defined for given sampling frequencies and quantization bit lengths of the sensors. How to optimize them is discussed in Section 3.
Consider a linear time-invariant system given by
\dot{x}(t) = Ax(t) + w(t), \qquad y(t) = Cx(t) + v(t)
where x ∈ Rn is the state we want to estimate and y ∈ Rp is the measurement. The process noise w(t) and the measurement noise v(t) are uncorrelated, zero mean, white Gaussian random processes satisfying:
E\{w(t)w(s)'\} = Q\,\delta(t-s), \qquad E\{v(t)v(s)'\} = R\,\delta(t-s), \qquad E\{w(t)v(s)'\} = 0
where E{·} denotes expectation, and Q and R are the process noise covariance and the measurement noise covariance, respectively.
Let Ti be the sampling period of the i-th output, so the corresponding sampling frequency is 1/Ti. We assume that Ti is an integer multiple of a constant T: that is, Ti satisfies the following condition:
T_i = M_i T
where Mi is an integer and T is the base sampling period.
Let li be the quantization bit length of the i-th output. Let ymax,i be the absolute maximum value of the i-th output: that is,
|y_{k,i}| \le y_{\max,i}
where yk,i is the i-th element of yk and k denotes the discrete time index. We assume a uniform quantizer is used. Let δi be the quantization level of the i-th output, given by
\delta_i = \frac{y_{\max,i}}{2^{l_i - 1}}
Let qk be the quantization error in yk; then the following is satisfied:
|q_{k,i}| \le \frac{\delta_i}{2}
where qk,i is the i-th element of qk.
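A minimal sketch of the uniform quantizer just described (the helper name is ours, not the paper's): the quantization level is δi = ymax,i / 2^(li−1) and the error magnitude is bounded by δi/2.

```python
import numpy as np

def uniform_quantize(y: float, y_max: float, bits: int) -> float:
    """Quantize y in [-y_max, y_max] to the nearest multiple of
    delta = y_max / 2**(bits - 1); the error is at most delta / 2."""
    delta = y_max / 2 ** (bits - 1)
    return delta * np.round(np.clip(y, -y_max, y_max) / delta)

y = 1.2345
yq = uniform_quantize(y, y_max=3.1416, bits=10)
print(yq, abs(yq - y))   # the error stays below delta/2 = 3.1416 / 2**9 / 2
```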
Now we model (1) in discrete time, taking into account the sampling period Ti and the quantization bit length li. Assume temporarily that Mi = 1 (1 ≤ i ≤ p):
x_{k+1} = \Phi x_k + w_k, \qquad y_k = C x_k + v_k + q_k
where xk ≜ x(kT), Φ ≜ exp(AT), and yk is the quantized output of y(kT). The process noise wk and the measurement noise vk are uncorrelated and satisfy
E\{w_k w_k'\} = Q_d, \qquad E\{v_k v_k'\} = R
where
Q_d \triangleq \int_0^T \exp(Ar)\, Q \exp(Ar)'\, dr
As in (6), we treat the quantization error as an additional measurement noise; this approach is also used in [15]. The quantization error qk is assumed to be uncorrelated with wk and vk. If a uniform distribution is assumed, the covariance is given by
E\{q_k q_k'\} = \Delta = \mathrm{Diag}\!\left(\frac{\delta_1^2}{12}, \ldots, \frac{\delta_p^2}{12}\right)
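The discrete-time quantities above can be computed numerically. The sketch below (our own helpers, not the authors' code) forms Φ = exp(AT), approximates Qd by a trapezoidal rule, and builds the quantization error covariance Δ:

```python
import numpy as np
from scipy.linalg import expm

def discretize(A: np.ndarray, Q: np.ndarray, T: float, steps: int = 200):
    """Phi = exp(A T) and Q_d = integral_0^T exp(A r) Q exp(A r)' dr
    (the integral is approximated with the trapezoidal rule)."""
    Phi = expm(A * T)
    dr = T / steps
    Qd = np.zeros_like(Q, dtype=float)
    for j in range(steps + 1):
        E = expm(A * j * dr)
        w = 0.5 if j in (0, steps) else 1.0   # trapezoidal weights
        Qd += w * (E @ Q @ E.T) * dr
    return Phi, Qd

def quantization_cov(y_max, bits) -> np.ndarray:
    """Delta = Diag(delta_i^2 / 12) for the uniform quantizer."""
    delta = np.asarray(y_max, dtype=float) / 2.0 ** (np.asarray(bits) - 1)
    return np.diag(delta ** 2 / 12.0)
```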
Now the temporary assumption (Mi = 1) is removed. The second equation of (6) is then no longer true, and yk,i is available only if k is an integer multiple of Mi. Let ỹk be the collection of all available yk,i at time k. To define ỹk more formally, let {rk,1, rk,2, . . ., rk,pk} be the set of row numbers of all available elements of yk. Then ỹk is given by
\tilde{y}_k \triangleq \begin{bmatrix} y_{k, r_{k,1}} \\ y_{k, r_{k,2}} \\ \vdots \\ y_{k, r_{k,p_k}} \end{bmatrix}
Similarly, ṽk and q̃k can be defined, and C̃k is defined as follows:
\tilde{C}_k = \begin{bmatrix} C_{r_{k,1}} \\ C_{r_{k,2}} \\ \vdots \\ C_{r_{k,p_k}} \end{bmatrix}
where Ci is the i-th row of C. Thus the measurement equation at time k is given by
\tilde{y}_k = \tilde{C}_k x_k + \tilde{v}_k + \tilde{q}_k
where
E\{\tilde{v}_k \tilde{v}_k'\} = \tilde{R}_k, \qquad E\{\tilde{q}_k \tilde{q}_k'\} = \tilde{\Delta}_k
Here R̃k ∈ Rpk×pk is the matrix extracted from R so that R̃k(i, j) = R(rk,i, rk,j). Δ̃k is defined in the same way.
For example, if M1 = 1 and M2 = 2, then ỹk and C̃k are given in Table 1. We can see that C̃k is periodic with period 2, which is the least common multiple of M1 = 1 and M2 = 2.
In general, C̃k is periodic with period M, where M is the least common multiple of {M1, M2, . . ., Mp}.
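The available-row sets and the reduced matrices C̃k can be formed by simple indexing. A short sketch (names are ours) that reproduces Table 1 for M1 = 1 and M2 = 2:

```python
import numpy as np

def available_rows(k: int, M_list) -> list:
    """Indices r_{k,1}, ..., r_{k,p_k} of the outputs sampled at time k
    (output i is available when k is an integer multiple of M_i)."""
    return [i for i, Mi in enumerate(M_list) if k % Mi == 0]

def C_tilde(k: int, C: np.ndarray, M_list) -> np.ndarray:
    """C~_k: the rows of C corresponding to the available outputs."""
    return C[available_rows(k, M_list), :]

C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
for k in (1, 2, 3, 4):
    print(k, C_tilde(k, C, M_list=[1, 2]))
```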
Using the first equation of (6) repeatedly b times, we have
x_{b+a} = \Phi^b x_a + \sum_{i=0}^{b-1} \Phi^{b-1-i} w_{a+i}
It is known that a periodic system can be transformed into a time-invariant system [18, 19]. From (12) with a = kM and b = M, we have
x_{(k+1)M} = \Phi^M x_{kM} + \sum_{i=0}^{M-1} \Phi^{M-1-i} w_{kM+i}
Also from (12) with a = kM − j and b = j, we have
x_{kM} = \Phi^j x_{kM-j} + \sum_{i=0}^{j-1} \Phi^{j-1-i} w_{kM-j+i}
Multiplying by Φ−j, we obtain a backward equation:
x kM j = Φ j x kM i = 0 j 1 Φ 1 i w kM j + i
Let x̄k, w̄k, ȳk, v̄k and q̄k be defined by
\bar{x}_k \triangleq x_{kM}
\bar{w}_k \triangleq \begin{bmatrix} w_{kM} \\ w_{kM+1} \\ \vdots \\ w_{kM+M-1} \end{bmatrix}, \quad \bar{y}_k \triangleq \begin{bmatrix} \tilde{y}_{(k-1)M+1} \\ \tilde{y}_{(k-1)M+2} \\ \vdots \\ \tilde{y}_{kM} \end{bmatrix}, \quad \bar{v}_k \triangleq \begin{bmatrix} \tilde{v}_{(k-1)M+1} \\ \tilde{v}_{(k-1)M+2} \\ \vdots \\ \tilde{v}_{kM} \end{bmatrix}, \quad \bar{q}_k \triangleq \begin{bmatrix} \tilde{q}_{(k-1)M+1} \\ \tilde{q}_{(k-1)M+2} \\ \vdots \\ \tilde{q}_{kM} \end{bmatrix}
Combining (13), (15) and (10), we have the following time invariant system:
\bar{x}_{k+1} = \bar{A}\bar{x}_k + \bar{B}\bar{w}_k, \qquad \bar{y}_k = \bar{C}\bar{x}_k + \bar{v}_k + \bar{q}_k + \bar{D}\bar{w}_{k-1}
where
\bar{A} = \Phi^M
\bar{B} \triangleq \begin{bmatrix} \Phi^{M-1} & \Phi^{M-2} & \cdots & I \end{bmatrix}
\bar{C} \triangleq \begin{bmatrix} \tilde{C}_1 \Phi^{-M+1} \\ \tilde{C}_2 \Phi^{-M+2} \\ \vdots \\ \tilde{C}_{M-1} \Phi^{-1} \\ \tilde{C}_M \end{bmatrix}
\bar{D} \triangleq \begin{bmatrix} 0 & \tilde{C}_1\Phi^{-1} & \tilde{C}_1\Phi^{-2} & \cdots & \tilde{C}_1\Phi^{-M+1} \\ 0 & 0 & \tilde{C}_2\Phi^{-1} & \cdots & \tilde{C}_2\Phi^{-M+2} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & \tilde{C}_{M-1}\Phi^{-1} \\ 0 & 0 & \cdots & 0 & 0 \end{bmatrix}
We will apply a Kalman filter to (16). Note that
E\{\bar{w}_k \bar{w}_k'\} = \bar{Q}
where
\bar{Q} = \mathrm{Diag}(Q_d, Q_d, \ldots, Q_d)
E\{(\bar{v}_k + \bar{q}_k + \bar{D}\bar{w}_{k-1})(\bar{v}_k + \bar{q}_k + \bar{D}\bar{w}_{k-1})'\} = \bar{R}
where
\bar{R} = \mathrm{Diag}(\tilde{R}_1, \ldots, \tilde{R}_M) + \bar{\Delta} + \bar{D}\bar{Q}\bar{D}', \qquad \bar{\Delta} = \mathrm{Diag}(\tilde{\Delta}_1, \ldots, \tilde{\Delta}_M)
E\{\bar{B}\bar{w}_{k-1}(\bar{v}_k + \bar{q}_k + \bar{D}\bar{w}_{k-1})'\} = \bar{M} = \bar{B}\bar{Q}\bar{D}'
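Putting the pieces together, the lifted matrices Ā, B̄, C̄, D̄ and the covariances Q̄, R̄, M̄ can be assembled mechanically from the formulas above. The sketch below follows (16)–(19) as reconstructed here; it is our own code, under the assumption that the per-step quantities are stacked in the order ỹ(k−1)M+1, …, ỹkM:

```python
import numpy as np
from scipy.linalg import block_diag

def lifted_system(Phi, Qd, C, R, Delta, M_list):
    """Build the time-invariant lifted model over one period M = lcm(M_i),
    following (16)-(19) above (a sketch, not the authors' code)."""
    n = Phi.shape[0]
    M = int(np.lcm.reduce(M_list))
    Phi_inv = np.linalg.inv(Phi)
    mp = np.linalg.matrix_power

    def rows(k):                       # outputs available at step k of the period
        return [i for i, Mi in enumerate(M_list) if k % Mi == 0]

    Ct = [C[rows(k), :] for k in range(1, M + 1)]                  # C~_k
    Rt = [R[np.ix_(rows(k), rows(k))] for k in range(1, M + 1)]    # R~_k
    Dt = [Delta[np.ix_(rows(k), rows(k))] for k in range(1, M + 1)]

    A_bar = mp(Phi, M)
    B_bar = np.hstack([mp(Phi, M - 1 - i) for i in range(M)])
    C_bar = np.vstack([Ct[m] @ mp(Phi_inv, M - 1 - m) for m in range(M)])

    p_k = [c.shape[0] for c in Ct]
    D_bar = np.zeros((sum(p_k), n * M))
    r0 = 0
    for m in range(M):
        for t in range(m + 1, M):      # block (m, t) = C~_{m+1} Phi^{-(t-m)}
            D_bar[r0:r0 + p_k[m], t * n:(t + 1) * n] = Ct[m] @ mp(Phi_inv, t - m)
        r0 += p_k[m]

    Q_bar = block_diag(*([Qd] * M))
    R_bar = block_diag(*Rt) + block_diag(*Dt) + D_bar @ Q_bar @ D_bar.T
    M_bar = B_bar @ Q_bar @ D_bar.T
    return A_bar, B_bar, C_bar, Q_bar, R_bar, M_bar
```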
It is standard to apply a Kalman filter to (16) using (17), (18) and (19): the measurement update and the time update equations are given as follows [20]:
  • measurement update
    K_k = (P_k^- \bar{C}' + \bar{M})(\bar{C} P_k^- \bar{C}' + \bar{R} + \bar{C}\bar{M} + \bar{M}'\bar{C}')^{-1}
    P_k = P_k^- - K_k(\bar{C} P_k^- \bar{C}' + \bar{R} + \bar{C}\bar{M} + \bar{M}'\bar{C}') K_k'
  • time update
    P_{k+1}^- = \bar{A} P_k \bar{A}' + \bar{B}\bar{Q}\bar{B}'
We will use P as the estimation performance measure, where P is the steady-state value of the prior covariance Pk−:
P lim k P k
In the steady state, we have Pk+1− = Pk− = P. Inserting this into the Kalman filter equations, we obtain the following Riccati equation:
P = \bar{A}P\bar{A}' - (\bar{A}P\bar{C}' + \bar{A}\bar{M})(\bar{C}P\bar{C}' + \bar{R} + \bar{C}\bar{M} + \bar{M}'\bar{C}')^{-1}(\bar{A}P\bar{C}' + \bar{A}\bar{M})' + \bar{B}\bar{Q}\bar{B}'
If the sampling period Ti and the quantization bit length li are given, the corresponding estimation performance can be computed from (20).
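In practice, the steady-state P can be obtained by iterating the measurement and time updates until convergence, i.e. by solving (20) with a fixed-point iteration. A sketch under the same assumptions as the earlier helpers:

```python
import numpy as np

def steady_state_P(A_bar, B_bar, C_bar, Q_bar, R_bar, M_bar,
                   tol=1e-10, max_iter=20000):
    """Fixed-point iteration of the Kalman filter updates; returns the
    steady-state prior covariance P = lim P_k^-."""
    P_minus = B_bar @ Q_bar @ B_bar.T      # any positive semidefinite start
    for _ in range(max_iter):
        S = C_bar @ P_minus @ C_bar.T + R_bar + C_bar @ M_bar + M_bar.T @ C_bar.T
        K = (P_minus @ C_bar.T + M_bar) @ np.linalg.inv(S)
        P = P_minus - K @ S @ K.T
        P_next = A_bar @ P @ A_bar.T + B_bar @ Q_bar @ B_bar.T
        if np.max(np.abs(P_next - P_minus)) < tol:
            return P_next
        P_minus = P_next
    return P_minus
```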

3. Ti and li Optimization

In this section, a method to select the sampling periods Ti and the quantization bit lengths li is proposed. The main trade-off is between the transmission rate and the estimation performance.
The optimization problem can be formulated as follows:
\min_{T_i,\, l_i} \ \lambda'\,\mathrm{Diag}(P) \quad \text{subject to} \quad \sum_{i=1}^{p} \frac{l_i}{T_i} \le S_{\max}
where λ ∈ Rn is a weighting vector. Note that the transmission rate S is given by
S i = 1 p l i T i
and Smax is the transmission rate constraint. The transmission rate S is defined as the sum of the sensor data rates without considering packet overhead. When the algorithm is applied to a specific network, S should be modified to take the packet overhead into account.
Note that Ti = MiT and P depends on M, which is the least common multiple of M1, . . ., Mp. To make M constant, Mi is assumed to satisfy
M_i = 2^{m_i} \ (m_i \ \text{an integer}), \qquad m_{i,\min} \le m_i \le m_{i,\max}
With this assumption, the least common multiple M of all possible combinations of Mi = 2mi is given by
M = 2^{\max_i \{ m_{i,\max} \}}
We assume li satisfies
l_{i,\min} \le l_i \le l_{i,\max}
If the number of combinations is small, P can be computed for all possible combinations. For the case where the number is too large, we propose a suboptimal algorithm, which is based on the following lemma.
Lemma 1. Let P(m1, l1, . . ., mi, li, . . ., mp, lp) be the solution to (20). Under assumption (23), the following inequalities are satisfied:
P(m_1, l_1, \ldots, m_i - 1, l_i, \ldots, m_p, l_p) \le P(m_1, l_1, \ldots, m_i, l_i, \ldots, m_p, l_p)
P(m_1, l_1, \ldots, m_i, l_i + 1, \ldots, m_p, l_p) \le P(m_1, l_1, \ldots, m_i, l_i, \ldots, m_p, l_p)
P(m_1, l_1, \ldots, m_i - 1, l_i + 1, \ldots, m_p, l_p) \le P(m_1, l_1, \ldots, m_i, l_i, \ldots, m_p, l_p)
Proof: We prove (25) for a simple case with m1 = 0, m2 = 0, T = 1, M = 2, and p = 2; note that T1 = 2^m1 T = T and T2 = 2^m2 T = T. C̄ for m1 = 0 and m2 = 0 is given by
\bar{C}_{m_1=0,\, m_2=0} = \begin{bmatrix} C_1 \Phi^{-1} \\ C_2 \Phi^{-1} \\ C_1 \\ C_2 \end{bmatrix}
The subscript (m1 = 0, m2 = 0) is used to emphasize that m1 = 0 and m2 = 0. Also, let R̄m1=0,m2=0 be defined as in (18) and let P(0, l1, 0, l2) be the solution to (20) when m1 = 0 and m2 = 0.
Now consider the case with m1 = 0 and m2 = 1. Instead of computing P(0, l1, 1, l2) using (20) with C̄m1=0,m2=1, we can compute P(0, l1, 1, l2) using (20) with m1 = 0 and m2 = 0, except that R̄m1=0,m2=0 is replaced by R̄modified, which is defined by
\bar{R}_{\mathrm{modified}} = \bar{R}_{(m_1=0,\, m_2=0)} + \mathrm{Diag}(0, \infty, 0, 0)
Note that adding ∞ to the (2, 2) element of R̄m1=0,m2=0 is equivalent to ignoring the second output of yk except when k is an integer multiple of 2. Thus P(0, l1, 1, l2) computed in this way is the estimation error covariance when m1 = 0 and m2 = 1. Since R̄m1=0,m2=0 ≤ R̄modified, we have, from the monotonicity of the Riccati equation (see Corollary 5.2 in [21]),
P(0, l_1, 0, l_2) \le P(0, l_1, 1, l_2)
The general case of (25) can be proved similarly.
The second inequality (26) follows more directly from the monotonicity of the Riccati equation [21] and from the fact that
\bar{R}(m_1, l_1, \ldots, m_i, l_i + 1, \ldots, m_p, l_p) \le \bar{R}(m_1, l_1, \ldots, m_i, l_i, \ldots, m_p, l_p)
The third inequality (27) is just a combination of (25) and (26):
P(m_1, l_1, \ldots, m_i - 1, l_i + 1, \ldots, m_p, l_p) \le P(m_1, l_1, \ldots, m_i, l_i + 1, \ldots, m_p, l_p) \le P(m_1, l_1, \ldots, m_i, l_i, \ldots, m_p, l_p)
To explain Lemma 1, we consider a simple example with the following parameters:
p = 2, \qquad m_{1,\min} = m_{2,\min} = 0, \quad m_{1,\max} = m_{2,\max} = 4, \qquad l_{1,\min} = l_{2,\min} = 9, \quad l_{1,\max} = l_{2,\max} = 16
There are 5 × 8 × 5 × 8 = 1,600 possible combinations (see Figure 1). Using the result in Lemma 1, we know that P is smallest when (m1, l1, m2, l2) = (0, 16, 0, 16) and largest when (m1, l1, m2, l2) = (4, 9, 4, 9). On the other hand, the transmission rate S is largest when (m1, l1, m2, l2) = (0, 16, 0, 16) and smallest when (m1, l1, m2, l2) = (4, 9, 4, 9). Note that mi − 1 and li + 1 in Lemma 1 correspond to the upper and right neighbors of the combination (mi, li), respectively. Thus, as the combination moves from the bottom-left corner toward the top-right corner, λ′P becomes smaller while the transmission rate increases.
In the proposed algorithm, we start from the bottom-left corner combination and move toward the top-right corner while the transmission rate constraint is satisfied. The proposed algorithm is stated in pseudo-code below.
  • (mi, li) = (mi,max, li,min), i = 1, . . ., p
  • compute λ′P and S
  • (P denotes P(m1, l1, . . ., mp, lp) and S is from (22))
  • while (S < Smax)
  •   L = { }
  •   for i = 1:p
  •     if (mi > mi,min)
  •       L = L ∪ {(m1, l1, . . ., mi − 1, li, . . ., mp, lp)}
  •     if (li < li,max)
  •       L = L ∪ {(m1, l1, . . ., mi, li + 1, . . ., mp, lp)}
  •   end
  •   for every element of L, compute G̃j
    \tilde{G}_j = \frac{\lambda' P - \lambda' \tilde{P}_j}{\tilde{S}_j - S}
      where P̃j is the value of P for the j-th element of L
      and S̃j is the value of S for the j-th element of L
  •   (mi,old, li,old) = (mi, li), i = 1, . . ., p
  •   Find the maximum of G̃j and choose the
      corresponding combination as (mi, li)
  •   compute λ′P and S
  • end
  • Choose the combination (mi,old, li,old)
Note that G̃j represents the estimation performance improvement per unit increase in the transmission rate. For each given (mi, li), we choose (mi − 1, li) and (mi, li + 1) as the next combination candidates, so there are at most 2p candidates. Among them, we select the combination whose G̃j is largest. This process is continued until the current transmission rate exceeds Smax. A runnable sketch of this procedure is given below.
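The following is one way the pseudo-code could be written as executable code. The callable perf(m, l) is assumed to return the pair (λ′P, S) for given exponent and bit-length vectors, e.g. built from the lifted-model and Riccati sketches of Section 2; the code itself is our sketch, not the authors' implementation.

```python
def greedy_search(perf, m_min, m_max, l_min, l_max, S_max):
    """Greedy (m_i, l_i) selection: start at the slowest rates / fewest bits and
    repeatedly take the step with the largest performance gain per added rate."""
    p = len(m_min)
    m, l = list(m_max), list(l_min)            # left-bottom corner of the grid
    J, S = perf(m, l)                          # J = lambda' P
    m_old, l_old = list(m), list(l)
    while S < S_max:
        m_old, l_old = list(m), list(l)
        best = None
        for i in range(p):
            for dm, dl in ((-1, 0), (0, 1)):   # halve T_i, or add one bit
                if (dm and m[i] - 1 < m_min[i]) or (dl and l[i] + 1 > l_max[i]):
                    continue
                mc, lc = list(m), list(l)
                mc[i] += dm
                lc[i] += dl
                Jc, Sc = perf(mc, lc)
                gain = (J - Jc) / (Sc - S)     # G~_j of the pseudo-code
                if best is None or gain > best[0]:
                    best = (gain, mc, lc, Jc, Sc)
        if best is None:                       # no feasible move left
            break
        _, m, l, J, S = best
    return m_old, l_old                        # last combination with S < S_max
```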
The number of combinations tested by the proposed algorithm is small compared with a brute force search. For example, in the setting above the proposed algorithm starts with the bottom-left-most combination (m1, l1, m2, l2) = (4, 9, 4, 9) and moves one step at a time toward the top-right-most combination (m1, l1, m2, l2) = (0, 16, 0, 16) until S < Smax is no longer satisfied. Unfortunately, there is no guarantee that the solution found by the proposed method is near the optimal solution. In Section 4, however, it is shown through a numerical example that the gap between the suboptimal and optimal solutions is not large.
We note that the optimization algorithm is applied only once, when the networked system is designed. Once the sampling period Ti and quantization bit length li are determined, they are programmed into each sensor node, so no additional computation is needed in the sensor nodes.

4. Numerical Example

In this section, the proposed method is verified on a one-dimensional attitude estimation problem. The state is defined by
x(t) = \begin{bmatrix} \theta & \dot{\theta} & \ddot{\theta} \end{bmatrix}'
where θ is the attitude we want to estimate. An accelerometer-based inclinometer and a gyroscope are used as sensors. The system model is given by
A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad Q = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0.2 \end{bmatrix}, \quad R = \begin{bmatrix} 0.0056 & 0 \\ 0 & 0.003 \end{bmatrix}
The values given in (28), ymax,1 = 3.1416, ymax,2 = 2.6180 and T = 1 are used.
The optimization problem (21) with Smax = 500 and λ = [1 0 0]′ is considered. λ is a natural choice since we want to estimate θ.
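For reference, the model (28) and the design constants above translate directly into code (a sketch; these arrays are meant to be fed to the earlier discretization and lifting sketches):

```python
import numpy as np

# System model (28): state x = [theta, theta_dot, theta_ddot]'
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
C = np.array([[1.0, 0.0, 0.0],     # accelerometer-based inclinometer
              [0.0, 1.0, 0.0]])    # gyroscope
Q = np.diag([0.0, 0.0, 0.2])
R = np.diag([0.0056, 0.003])

y_max = np.array([3.1416, 2.6180])   # output bounds
T = 1.0                              # base sampling period
S_max = 500.0                        # transmission rate constraint
lam = np.array([1.0, 0.0, 0.0])      # weight: we only care about theta
```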
First the optimization problem is solved by a brute force search: all possible combinations are examined. The optimal solution is given by
(m1, l1, m2, l2) = (2, 10, 2, 10)
and S and λ′P at this combination are
S = 500, λ′P = 0.00259.
Secondly, the proposed suboptimal algorithm is used; its solution is
(m1, l1, m2, l2) = (2, 9, 2, 9)
and S and λ′P at this combination are
S = 450, λ′P = 0.00260.
The proposed method was able to find a nearly optimal solution with less computation time: the brute force search tests 479 combinations, while the proposed algorithm tests only 21.
To test whether the proposed λ′P is a good indicator of the estimation performance, data are generated with Matlab and processed with a Kalman filter. The estimation performance is evaluated with the following:
P_{\mathrm{experiment}} = \frac{1}{N} \sum_{k=1}^{N} \theta_{\mathrm{error},k}^{2}
where N is the number of data points and θerror,k = θ − θ̂. Note that θ̂ is computed as [1 0 0] x̂k. We computed Pexperiment for all possible combinations; the minimizing combination is
(m1, l1, m2, l2) = (2, 9, 2, 10)
and S and Pexperiment at this combination are
S = 475, Pexperiment = 0.00159.
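For completeness, evaluating (29) from simulated data is a one-liner (variable names here are illustrative, not from the paper):

```python
import numpy as np

def empirical_performance(theta_true, theta_hat):
    """P_experiment = (1/N) * sum_k (theta_k - theta_hat_k)^2, as in (29)."""
    err = np.asarray(theta_true) - np.asarray(theta_hat)
    return float(np.mean(err ** 2))
```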
It can be seen that the optimal solution predicted by λ′P nearly coincides with the empirically optimal solution. To see how similar λ′P and Pexperiment are, both are plotted for different (m1, l1, m2, l2) combinations. Since the parameter space is four-dimensional, it is not easy to visualize the result; thus we fix (m1, l1) = (2, 9) and plot λ′P and Pexperiment over the (m2, l2) combinations in Figures 2 and 3. The data marked with “o” satisfy S ≤ Smax and the data marked with “*” do not.
It can be seen that the trend of λ′P closely follows that of Pexperiment. Thus λ′P can be used to predict the estimation performance for given sampling periods and quantization bit lengths.
The transmission rate S is shown in Figure 4. To see the trade-off between S and the estimation performance, S and λ′P are given for three (m1, l1, m2, l2) combinations in Table 2. We can see that as S decreases (that is, as less data is transmitted), λ′P tends to increase (that is, the estimation performance degrades).
Finally, to test the efficiency of the proposed algorithm, we applied it to 100 random models, where A is randomly generated and the same C as in the previous simulation is used. In the brute force method, 479 combinations are tested, as in the previous simulation, since the same setting is used. In the proposed algorithm, the number of combinations tested is between 13 and 21; that is, at most 21 combinations are tested even in the worst case, so convergence is relatively fast. To assess the accuracy of the proposed algorithm, the following value is computed:
\mathrm{Accuracy} = 100 \times \frac{\lambda' P_{\mathrm{proposed}} - \lambda' P_{\mathrm{optimal}}}{\lambda' P_{\mathrm{optimal}}}
where Poptimal is computed using the brute force method. Over the 100 trials, the worst case accuracy was 7.26% and the average was 1.73%. Thus we believe the proposed method can find a near optimal value while avoiding a large computation.

5. Conclusions

In this paper, attitude estimation over a network with a transmission rate constraint is considered. The transmission rate depends on the sampling periods and the quantization bit lengths. The problem is basically a trade-off between the estimation performance and the transmission rate, where the design parameters are the sampling periods and the quantization bit lengths.
First, how the sampling period and the quantization bit length affect the estimation performance is investigated. To do this, we introduced an augmented system and defined the estimation performance P. Secondly, the trade-off is formulated as an optimization problem and a suboptimal algorithm is provided. Through numerical examples, we showed that the defined estimation performance matches the real estimation performance, in the sense that the graphs of P and the real estimation performance are similar. We also showed that the proposed algorithm can find a reasonably good solution.
While defining P, we made assumption (23), which simplifies the derivation of P but is not essential. Removing this assumption and obtaining a more general result is left as future work, as is testing the algorithm over a real network.

Acknowledgments

This work was supported by a National Research Foundation of Korea grant funded by the Korean Government (No. 2009-0067447).

References

  1. Zhao, F; Guibas, L. Wireless Sensor Networks; Elsevier: San Francisco, CA, USA, 2004. [Google Scholar]
  2. Choi, DH; Kim, DS. Wireless fieldbus for networked control systems using LR-WPAN. Int. J. Control Autom. Syst 2008, 6, 119–125. [Google Scholar]
  3. Wu, W; Arapostathis, A. Optimal sensor querying: General Markovian and LQG models with controlled observations. IEEE Trans. Automat. Contr 2008, 53, 1392–1405. [Google Scholar]
  4. He, X; Wang, Z; Zhou, DH. Robust fault detection for networked systems with communication delay and data missing. Automatica 2009, 45, 2634–2639. [Google Scholar]
  5. Wei, G; Wang, Z; Shu, H. Robust filtering with stochastic nonlinearities and multiple missing measurements. Automatica 2009, 45, 836–841. [Google Scholar]
  6. Dong, H; Wang, Z; Gao, H. Robust H∞ filtering for a class of nonlinear networked systems with multiple stochastic communication delays and packet dropouts. IEEE Trans. Signal Process 2010, 58, 1957–1966. [Google Scholar]
  7. Lee, I; Choi, S. Discrimination of visual and haptic rendering delays in networked environments. Int. J. Control Autom. Syst 2009, 7, 25–31. [Google Scholar]
  8. Miskowicz, M. Send-on-delta concept: An event-based data reporting strategy. Sensors 2006, 6, 49–63. [Google Scholar]
  9. Suh, YS; Nguyen, VH; Ro, YS. Modified Kalman filter for networked monitoring systems employing a send-on-delta method. Automatica 2007, 43, 332–338. [Google Scholar]
  10. Mehra, RK. Optimization of measurement schedules and sensor designs for linear dynamic systems. IEEE Trans. Automat. Contr 1976, 21, 55–64. [Google Scholar]
  11. Do, LMK; Suh, YS; Nguyen, VH. Networked Kalman filter with sensor transmission interval optimization. Proceedings of SICE-ICASE International Joint Conference, Busan, Korea, 18–21 October 2006; pp. 1047–1052.
  12. Kadu, SC; Bhushan, M; Gudi, R. Optimal sensor network design for multirate systems. J. Process Control 2008, 18, 594–609. [Google Scholar]
  13. Gersho, A; Gray, RM. Vector Quantization and Signal Compression; Kluwer Academic Publishers: Norwell, MA, USA, 1992. [Google Scholar]
  14. Elia, N; Mitter, SK. Stabilization of linear systems with limited information. IEEE Trans. Automat. Contr 2001, 46, 1384–1400. [Google Scholar]
  15. Luong-Van, D; Tordon, MJ; Katupitiya, J. Covariance profiling for an adaptive Kalman filter to suppress sensor quantization effects. Proceedings of the 43rd IEEE Conference on Decision and Control, Paradise Island, Bahamas, 14–17 December 2004; pp. 2680–2685.
  16. Sun, S; Lin, J; Xie, L; Xiao, W. Quantized Kalman filtering. Proceedings of 22nd IEEE International Symposium on Intelligent Control, Singapore, 26–28 September 2007; pp. 7–12.
  17. Wen, C; Tang, X; Ge, Q. Decentralized quantized Kalman filter with limited bandwidth. Proceedings of Second International Symposium on Intelligent Information Technology Application, Shanghai, China, 21–22 December 2008; pp. 291–295.
  18. Lee, DJ; Tomizuka, M. Multirate optimal state estimation with sensor fusion. Proceedings of the American Control Conference, Denver, CO, USA, 4–6 June 2003; pp. 2887–2892.
  19. Chen, T; Francis, B. Optimal Sampled-Data Control Systems; Springer-Verlag: Tokyo, Japan, 1995. [Google Scholar]
  20. Brown, RG; Hwang, PYC. Introduction to Random Signals and Applied Kalman Filtering; John Wiley & Sons: New York, NY, USA, 1997. [Google Scholar]
  21. Clements, DJ; Wimmer, HK. Monotonicity of the optimal cost in the discrete-time regulator problem and Schur complements. Automatica 2001, 37, 1779–1786. [Google Scholar]
Figure 1. P (estimation error covariance) and S (transmission rate) relationship over the (mi, li) combinations.
Figure 2. λ′P plot with (m1, l1) = (2, 9) (o: S ≤ Smax, *: S > Smax).
Figure 3. Pexperiment plot with (m1, l1) = (2, 9) (o: S ≤ Smax, *: S > Smax).
Figure 4. S plot with (m1, l1) = (2, 9).
Table 1. ỹk and C̃k example (M1 = 1 and M2 = 2).

  k      1          2                 3          4
  ỹk     [yk,1]     [yk,1; yk,2]      [yk,1]     [yk,1; yk,2]
  C̃k     [C1]       [C1; C2]          [C1]       [C1; C2]

Table 2. S and λ′P comparison.

  m1   l1   m2   l2   S       λ′P
  0    16   0    16   3200    0.000080
  2    12   2    12   600     0.000165
  4    9    4    9    112.5   0.001282
