Article

Adaptive Sampling for Learning Gaussian Processes Using Mobile Sensor Networks

Yunfei Xu 1 and Jongeun Choi 1,2,*

1 Department of Mechanical Engineering, Michigan State University, East Lansing, MI 48823, USA
2 Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA
* Author to whom correspondence should be addressed.
Sensors 2011, 11(3), 3051-3066; https://doi.org/10.3390/s110303051
Submission received: 4 January 2011 / Revised: 25 February 2011 / Accepted: 27 February 2011 / Published: 9 March 2011
(This article belongs to the Special Issue Adaptive Sensing)

Abstract

This paper presents a novel class of self-organizing sensing agents that adaptively learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. The approach is based on a class of anisotropic covariance functions of Gaussian processes, introduced to model a broad range of spatio-temporal physical phenomena. The covariance function is assumed to be unknown a priori; hence, it is estimated by the maximum a posteriori probability (MAP) estimator. The prediction of the field of interest is then obtained based on the MAP estimate of the covariance function. An optimal sampling strategy is proposed to minimize an information-theoretic cost function based on the Fisher Information Matrix. Simulation results demonstrate the effectiveness and the adaptability of the proposed scheme.


1. Introduction

In recent years, due to global climate change, environmental scientists have become increasingly interested in changes of ecosystems over vast regions of land, ocean, and lakes. For instance, under certain environmental conditions, rapidly reproducing harmful algal blooms in the Great Lakes can produce cyanotoxins. Besides such natural hazards, there is a growing possibility of releases of toxic chemicals and contaminants into the air, lakes, and public water systems. This has resulted in rising demand for autonomous robotic systems that can perform tasks such as estimation, prediction, monitoring, tracking, and removal of a scalar field of interest undergoing often complex transport phenomena (common examples are diffusion, convection, and advection).

Significant enhancements have been made in the areas of mobile sensor networks and mobile sensing vehicles such as unmanned ground vehicles, autonomous underwater vehicles, and unmanned aerial vehicles. Emerging technologies have been reported on the coordination of mobile sensing agents [1–6]. Mobile sensing agents form an ad hoc wireless communication network in which each agent usually operates with a short communication range and limited memory and computational power. Mobile sensing agents are often spatially distributed in an uncertain surveillance environment.

The mobility of mobile agents can be designed to perform optimal sampling of the field of interest. Recently, in [5], Leonard et al. developed mobile sensor networks that optimize ocean sampling performance defined in terms of uncertainty in a model estimate of a sampled field. In [6], distributed learning and cooperative control were developed for multi-agent systems to discover peaks of an unknown field based on its recursive estimation. In general, we design the mobility of sensing agents to find the most informative locations at which to make observations of a spatio-temporal phenomenon. To find the locations that best predict the phenomenon, one needs a model of the spatio-temporal phenomenon itself. In our approach, we focus on Gaussian processes to model fields undergoing transport phenomena. A Gaussian process (or kriging in geostatistics) has been widely used as a nonlinear regression technique to estimate and predict geostatistical data [7–11]. A Gaussian process is a natural generalization of the Gaussian probability distribution: it generalizes the Gaussian distribution with a finite number of random variables to a process with an infinite number of random variables over the surveillance region. Gaussian process modeling enables us to efficiently predict physical values, such as temperature and plume concentration, at any spatial point with a predicted uncertainty level. For instance, near-optimal static sensor placements with a mutual information criterion in Gaussian processes were proposed in [12,13]. A distributed Kriged Kalman filter for spatial estimation based on mobile sensor networks was developed in [14]. A distributed adaptive sampling approach was proposed in [15] for sensor networks to find locations that maximize the information content, assuming that the covariance function is known up to a scaling parameter. Multi-agent systems that are versatile for various tasks by exploiting predictive posterior statistics of Gaussian processes were developed in [16,17].

The motivation of our work is as follows. Even though there have been efforts to utilize Gaussian processes to model and predict spatio-temporal fields of interest, most recent papers assume that the Gaussian processes are isotropic, implying that the covariance function depends only on the distance between locations. Many studies also assume that the corresponding covariance functions are known a priori for simplicity. However, this is not the case in general, as pointed out in the literature [12,13,18], in which non-stationary processes are treated by fusing a collection of isotropic spatial Gaussian processes associated with a set of local regions. Hence, our motivation is to develop theoretically sound algorithms for mobile sensor networks to learn the anisotropic covariance function of a spatio-temporal Gaussian process. Mobile sensing agents can then predict the Gaussian process based on the estimated covariance function in a nonparametric manner.

The contribution of this paper is to develop covariance function learning algorithms that allow the sensing agents to perform nonparametric prediction based on a properly adapted Gaussian process for a given spatio-temporal phenomenon. By introducing a generalized covariance function, we expand the class of Gaussian processes to include anisotropic spatio-temporal phenomena. The maximum a posteriori probability (MAP) estimator is used to find the hyperparameters of the associated covariance function. The proposed optimal navigation strategy for autonomous vehicles minimizes an information-theoretic cost function such as the D- or A-optimality criterion using the Fisher Information Matrix (or the Cramér-Rao lower bound (CRLB)) [19], improving the quality of the estimated covariance function. A Gaussian process with a time-varying covariance function is used to demonstrate the adaptability of the proposed scheme.

This paper is organized as follows. In Section 2, we briefly review the mobile sensing network model and the notation related to a graph. A nonparametric approach to predict a field of interest based on measurements is presented in Section 3. Section 4 introduces a covariance function learning algorithm for an anisotropic, spatio-temporal Gaussian process. An optimal navigation strategy is described in Section 5. In Section 6, simulation results illustrate the usefulness of our approach and its adaptability for unknown and/or time-varying covariance functions.

Standard notation will be used in the paper. Let ℝ, ℝ≥0, ℤ denote, respectively, the sets of real, non-negative real, and integer numbers. The positive semi-definiteness of a matrix A is denoted by A ≽ 0. Let |B| denote the determinant of a matrix B, and let 𝔼 denote the expectation operator.

2. Mobile Sensor Networks

First, we explain the mobile sensing network and the measurement model used in this paper. Let N_s be the number of sensing agents distributed over the surveillance region 𝒬 ⊂ ℝ², which is assumed to be a compact set. The agents are indexed by ℐ := {1, 2, ⋯, N_s}. Let q_i(t) ∈ 𝒬 be the location of the i-th sensing agent at time t ∈ ℝ≥0. We assume that the measurement y(q_i(t), t) of agent i is the sum of the scalar value of the Gaussian process z(q_i(t), t) and sensor noise w_i(t), at its position q_i(t) and measurement time t:

y(q_i(t), t) := z(q_i(t), t) + w_i(t)    (1)

The communication network of the mobile agents can be represented by a graph. Let G(t) = (ℐ, ℰ(t)) be an undirected communication graph such that an edge (i, j) ∈ ℰ(t) exists if and only if agent i can communicate with agent j ≠ i. We define the neighborhood of agent i at time t by 𝒩_i(t) := {j ∈ ℐ | (i, j) ∈ ℰ(t)}. We also define the closed neighborhood of agent i at time t as the union of its own index and its neighbors, i.e., 𝒩̄_i(t) := {i} ∪ 𝒩_i(t).
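To make the graph model concrete, here is a minimal sketch in Python, assuming numpy and a disk communication model of range R; the paper only assumes a short, limited communication range, so the disk model and all names here are illustrative assumptions.

```python
# A minimal sketch of the closed neighborhood, assuming a disk
# communication model of range R (an assumption; the paper only
# requires a short communication range).
import numpy as np

def closed_neighborhood(q, i, R):
    """Closed neighborhood of agent i: all j with ||q_i - q_j|| <= R,
    including i itself. q is an (Ns x 2) array of agent positions."""
    d = np.linalg.norm(q - q[i], axis=1)
    return set(np.flatnonzero(d <= R))

# Example: three agents on a line, communication range R = 1.5.
q = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]])
print(closed_neighborhood(q, 0, 1.5))  # {0, 1}
```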

3. The Nonparametric Approach

With their spatially distributed sampling capability, the agents need to estimate and predict the field of interest by fusing samples collected at different spatial locations and times. In this section, we present a nonparametric approach to predicting a field of interest based on measurements. We assume that a field undergoing a physical transport phenomenon can be modeled by a spatio-temporal Gaussian process, which can then be used for nonparametric prediction.

Consider a spatio-temporal Gaussian process:

z(s, t) \sim \mathcal{GP}(\mu(s, t), \mathcal{K}(s, t, s', t'))

where s, s′ ∈ 𝒬, t, t′ ∈ ℝ≥0, and μ(s, t) denotes the mean value at location s and time t. We then propose the following generalized covariance function 𝒦(s, t, s′, t′; Ψ) with a hyperparameter vector Ψ := [σ_f σ_x σ_y σ_t]^T:

\mathcal{K}(s, t, s', t'; \Psi) = \sigma_f^2 \exp\Big(-\sum_{l \in \{x, y\}} \frac{(s_l - s'_l)^2}{2\sigma_l^2}\Big) \exp\Big(-\frac{(t - t')^2}{2\sigma_t^2}\Big)    (2)

where s_l is the l-th entry of s, and {σ_x, σ_y} and σ_t are the kernel bandwidths for space and time, respectively. Equation (2) shows that points close in space and time are strongly correlated and produce similar values. In reality, the larger the temporal distance between two measurements, the less correlated they become, which strongly supports the generalized covariance function in Equation (2). This also justifies truncating (or windowing) the observed time series data to limit the size of the covariance matrix and reduce the computational cost.
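To make Equation (2) concrete, the following is a minimal sketch in Python, assuming numpy; the function name is illustrative, and the hyperparameter values in the usage example are those used later in Section 6.1.

```python
# A minimal sketch of the generalized covariance function in Equation (2).
import numpy as np

def st_kernel(s, t, s2, t2, sigma_f, sigma_x, sigma_y, sigma_t):
    """Anisotropic spatio-temporal squared-exponential covariance."""
    spatial = (s[0] - s2[0])**2 / (2 * sigma_x**2) \
            + (s[1] - s2[1])**2 / (2 * sigma_y**2)
    temporal = (t - t2)**2 / (2 * sigma_t**2)
    return sigma_f**2 * np.exp(-spatial) * np.exp(-temporal)

# Nearby points sampled close in time correlate strongly; the same pair
# sampled far apart in time correlates weakly (sigma_f=5, sigma_x=4,
# sigma_y=2, sigma_t=8, as in Section 6.1):
k_near = st_kernel((0.0, 0.0), 0.0, (0.5, 0.5), 1.0, 5.0, 4.0, 2.0, 8.0)
k_far  = st_kernel((0.0, 0.0), 0.0, (0.5, 0.5), 50.0, 5.0, 4.0, 2.0, 8.0)
print(k_near > k_far)  # True
```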

In the case that the global coordinates are different from the local model coordinates, a similarity transformation can be used. For instance, a rotational relationship between the model basis {e⃗_x, e⃗_y} and the global basis {E⃗_x, E⃗_y} is:

\begin{bmatrix} \vec{e}_x \\ \vec{e}_y \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \vec{E}_x \\ \vec{E}_y \end{bmatrix}

where θ represents the angle of rotation. We can then use the following relationship to change coordinates:

x = X \cos\theta + Y \sin\theta, \qquad y = -X \sin\theta + Y \cos\theta

where x and y are coordinates in the local basis, and X and Y are their counterparts in the global basis. Equation (2) can then be rewritten in terms of global coordinates as

\mathcal{K}(s, t, s', t'; \Psi) = \sigma_f^2 \exp\Big(-\frac{[(s_X - s'_X)\cos\theta + (s_Y - s'_Y)\sin\theta]^2}{2\sigma_x^2}\Big) \exp\Big(-\frac{[-(s_X - s'_X)\sin\theta + (s_Y - s'_Y)\cos\theta]^2}{2\sigma_y^2}\Big) \exp\Big(-\frac{(t - t')^2}{2\sigma_t^2}\Big)

where s_X and s_Y are the coordinates in the global basis. In this case, the hyperparameter vector is redefined as Ψ := [σ_f σ_x σ_y σ_t θ]^T.

Up to time t_k, agent i has the noisy collective data {y(q_j(t_m), t_m) | m ∈ ℤ, j ∈ 𝒩̄_i(t_m), 1 ≤ m ≤ k}, where 𝒩̄_i(t_m) denotes the closed neighborhood of agent i at time t_m. The measurements y(q_j(t_m), t_m) = z(q_j(t_m), t_m) + w_j(t_m) are taken at different positions q_j(t_m) ∈ 𝒬 and times t_m ∈ ℝ≥0. The measurements are corrupted by sensor and communication noises represented by Gaussian white noise w_j ∼ 𝒩(0, σ_w²). For the case in which the noise level σ_w is not known and needs to be estimated, the hyperparameter vector can be expanded to include σ_w, i.e., Ψ := [σ_f σ_x σ_y σ_t σ_w]^T. The column-vectorized measurements collected by agent i are denoted by

Y_{\le k} := \operatorname{col}\big(y(q_j(t_m), t_m) \mid m \in \mathbb{Z},\ j \in \bar{\mathcal{N}}_i(t_m),\ 1 \le m \le k\big)

with the joint distribution

p(Y_{\le k} \mid \Psi) := \frac{1}{(2\pi)^{n/2} |\Sigma_{Y_{\le k}}|^{1/2}} \exp\Big(-\frac{1}{2}(Y_{\le k} - \mu_{Y_{\le k}})^T \Sigma_{Y_{\le k}}^{-1} (Y_{\le k} - \mu_{Y_{\le k}})\Big)

where n is the total number of observations up to time t_k, μ_{Y≤k} := 𝔼(Y_{≤k}) is the mean vector of Y_{≤k}, and Σ_{Y≤k} := 𝔼((Y_{≤k} − μ_{Y≤k})(Y_{≤k} − μ_{Y≤k})^T) is the covariance matrix of Y_{≤k}, obtained by [Σ_{Y≤k}]_{ij} = 𝒦(s_i, t_i, s_j, t_j; Ψ) + σ_w² δ_{ij}, in which δ_{ij} denotes the Kronecker delta.

If the covariance function is known a priori, the prediction of the random field z(s, t) at location s and time t is obtained by

z(s, t \mid t_k) := z(s, t) \mid Y_{\le k} \sim \mathcal{N}\big(\hat{z}(s, t \mid t_k), \sigma^2(s, t \mid t_k)\big)    (3)

where the prediction mean ẑ(s, t | t_k) := 𝔼(z(s, t | t_k)) is

\hat{z}(s, t \mid t_k) := \mu(s, t) + \Sigma_{z Y_{\le k}} \Sigma_{Y_{\le k}}^{-1} (Y_{\le k} - \mu_{Y_{\le k}})

and the prediction error variance is

\sigma^2(s, t \mid t_k) := \Sigma_z - \Sigma_{z Y_{\le k}} \Sigma_{Y_{\le k}}^{-1} \Sigma_{Y_{\le k} z}

where Σ_z is the covariance of z, obtained by 𝒦(s, t, s, t; Ψ), and Σ_{zY≤k} = Σ_{Y≤k z}^T is the covariance matrix between z and Y_{≤k}, obtained by [Σ_{zY≤k}]_j = 𝒦(s, t, s_j, t_j; Ψ). Each agent can then predict the field of interest at any location and time, with the associated uncertainty, in a nonparametric way. In the next section, we present a learning approach for unknown covariance functions.
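These prediction equations translate directly into code. The following is a minimal sketch, assuming numpy, a zero prior mean, and a user-supplied covariance function K(s, t, s2, t2) such as the st_kernel sketch above; all names are illustrative.

```python
# A minimal sketch of the predictive mean and variance in Equation (3),
# assuming a zero prior mean; K is any covariance function K(s, t, s2, t2).
import numpy as np

def gp_predict(K, X, T, y, s, t, sigma_w):
    """Posterior mean/variance at (s, t) from samples (X[i], T[i], y[i])."""
    n = len(y)
    Sigma = np.array([[K(X[i], T[i], X[j], T[j]) for j in range(n)]
                      for i in range(n)]) + sigma_w**2 * np.eye(n)
    k_star = np.array([K(s, t, X[i], T[i]) for i in range(n)])
    mean = k_star @ np.linalg.solve(Sigma, y)             # k_*^T Sigma^-1 y
    var = K(s, t, s, t) - k_star @ np.linalg.solve(Sigma, k_star)
    return mean, var
```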

4. The MAP Estimate of the Hyperparameter Vector

Without loss of generality, we use a zero mean Gaussian process z(s, t) ∼ 𝒢𝒫(0, 𝒦(s, t, s′, t′)) for modeling the field undergoing a physical transport phenomenon. This is not a strong limitation since the mean of the posterior process is not confined to zero [11].

If the covariance function of the Gaussian process is not known a priori, the mobile agents need to estimate the hyperparameter vector Ψ of the covariance function based on the observed samples. Using Bayes' rule, the posterior p(Ψ | Y_{≤k}) is proportional to the likelihood p(Y_{≤k} | Ψ) times the prior p(Ψ), i.e.,

p(\Psi \mid Y_{\le k}) \propto p(Y_{\le k} \mid \Psi)\, p(\Psi)

At time t_k, the maximum a posteriori (MAP) estimate Ψ̂_k of the hyperparameter vector can be obtained by

\hat{\Psi}_k = \arg\max_{\Psi} p(\Psi \mid Y_{\le k}) = \arg\max_{\Psi} p(Y_{\le k} \mid \Psi)\, p(\Psi)    (4)

This is equivalent to maximizing the logarithm of the posterior p(Ψ | Y_{≤k}), i.e.,

\hat{\Psi}_k = \arg\max_{\Psi} \big(\ln p(Y_{\le k} \mid \Psi) + \ln p(\Psi)\big)

The log likelihood function is given by

\ln p(Y_{\le k} \mid \Psi) = -\frac{1}{2} Y_{\le k}^T \Sigma_{Y_{\le k}}^{-1} Y_{\le k} - \frac{1}{2} \ln |\Sigma_{Y_{\le k}}| - \frac{n}{2} \ln 2\pi

where n is the size of Y_{≤k}. Notice that if no prior information is given, the MAP estimate in Equation (4) is equal to the maximum likelihood (ML) estimate.

A gradient ascent algorithm can be used to find the MAP estimate of Ψ:

\hat{\Psi}_k^{(i+1)} = \hat{\Psi}_k^{(i)} + \varepsilon_k^{(i)} \nabla_{\hat{\Psi}_k^{(i)}} \ln p(\hat{\Psi}_k^{(i)} \mid Y_{\le k}), \quad i \ge 0

where ε_k^{(i)} is a small positive step size which can be obtained using a backtracking line search, and ∇_x f(x) denotes the gradient of f(x) with respect to x. The partial derivative of the log likelihood function with respect to a hyperparameter ψ_j is given by

\frac{\partial \ln p(Y_{\le k} \mid \Psi)}{\partial \psi_j} = \frac{1}{2} Y_{\le k}^T \Sigma_{Y_{\le k}}^{-1} \frac{\partial \Sigma_{Y_{\le k}}}{\partial \psi_j} \Sigma_{Y_{\le k}}^{-1} Y_{\le k} - \frac{1}{2} \operatorname{tr}\Big(\Sigma_{Y_{\le k}}^{-1} \frac{\partial \Sigma_{Y_{\le k}}}{\partial \psi_j}\Big) = \frac{1}{2} \operatorname{tr}\Big((\beta \beta^T - \Sigma_{Y_{\le k}}^{-1}) \frac{\partial \Sigma_{Y_{\le k}}}{\partial \psi_j}\Big)

where β = Σ_{Y≤k}^{−1} Y_{≤k}. Alternatively, a simplex search method [20] can be used to find the MAP estimate of Ψ. This is a direct search method that does not use numerical or analytic gradients.
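The following is a minimal sketch of this MAP step using the simplex (Nelder-Mead) variant, assuming numpy and scipy; the covariance builder implements Equation (2) plus the sensor-noise diagonal, the Gamma priors are the ones chosen later in Section 6.1, and all function names are illustrative.

```python
# A minimal sketch of MAP estimation of Psi = [sf, sx, sy, st, sw],
# assuming the Gamma priors of Section 6.1.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

def cov_matrix(X, T, psi):
    """Sigma_{Y<=k} from Equation (2) plus the sigma_w^2 noise diagonal.
    X is an (n x 2) array of positions, T an (n,) array of sample times."""
    sf, sx, sy, st, sw = psi
    dx = X[:, 0:1] - X[:, 0:1].T
    dy = X[:, 1:2] - X[:, 1:2].T
    dt = T[:, None] - T[None, :]
    K = sf**2 * np.exp(-dx**2/(2*sx**2) - dy**2/(2*sy**2) - dt**2/(2*st**2))
    return K + sw**2 * np.eye(len(T))

def neg_log_posterior(psi, X, T, y):
    if np.any(np.asarray(psi) <= 0):
        return np.inf                     # Gamma priors: positive support
    Sigma = cov_matrix(X, T, psi)
    _, logdet = np.linalg.slogdet(Sigma)
    loglik = (-0.5 * y @ np.linalg.solve(Sigma, y)
              - 0.5 * logdet - 0.5 * len(y) * np.log(2 * np.pi))
    logprior = (gamma.logpdf(psi[:4], a=5, scale=2).sum()    # sf, sx, sy, st
                + gamma.logpdf(psi[4], a=5, scale=0.2))      # sw
    return -(loglik + logprior)

# Warm-started from the previous estimate, as in Table 1:
# psi_hat = minimize(neg_log_posterior, x0=psi_prev, args=(X, T, y),
#                    method='Nelder-Mead').x
```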

After finding the MAP estimate of Ψ, the agents can proceed with the prediction of the field of interest using Equation (3).

5. An Adaptive Sampling Strategy

The agents should find new sampling positions that improve the quality of the estimated covariance function at the next iteration at time t_{k+1}. For instance, to precisely estimate an anisotropic phenomenon, i.e., a process with different correlations along the x-axis and y-axis directions, the sensing agents need to explore and take measurements along different directions.

To this end, we consider a centralized scheme. Suppose that a leader agent (or a central station) knows the communication graph at the next iteration time t_{k+1} and also has access to all measurements collected by the agents. Let Y_{k+1} and Y_{≤k} be the measurements at time t_{k+1} and the collective measurements up to time t_k, respectively, i.e.,

Y_{k+1} := \operatorname{col}\big(y(q_i(t_{k+1}), t_{k+1}) \mid i \in \mathcal{I}\big), \qquad Y_{\le k} := \operatorname{col}\big(y(q_i(t_m), t_m) \mid m \in \mathbb{Z},\ i \in \mathcal{I},\ 1 \le m \le k\big)

To derive the optimal navigation strategy, we compute the log likelihood of the observations Y_{≤k+1}:

\mathcal{L}(Y_{\le k+1}, \Psi) := \ln p(Y_{\le k+1} \mid \Psi) = -\frac{1}{2} Y_{\le k+1}^T \Sigma_{Y_{\le k+1}}^{-1} Y_{\le k+1} - \frac{1}{2} \ln |\Sigma_{Y_{\le k+1}}| - \frac{n_{k+1}}{2} \ln 2\pi    (6)

where n_{k+1} is the size of Y_{≤k+1}.

Since the locations of the observations in Y_{≤k} are already fixed, we represent the log likelihood as a function of the vector q̃ of prospective sampling points at time t_{k+1} and the hyperparameter vector Ψ only:

\mathcal{L}(\tilde{q}, \Psi) := \ln p(Y_{\le k+1}(\tilde{q}) \mid \Psi)

Now consider the Fisher Information Matrix (FIM), which measures the information produced by the measurements Y_{≤k+1} for estimating the hyperparameter vector at time t_{k+1}. The Cramér-Rao lower bound (CRLB) theorem states that the inverse of the FIM is a lower bound on the estimation error covariance matrix [19,21]:

\mathbb{E}\big((\hat{\Psi}_{k+1} - \Psi)(\hat{\Psi}_{k+1} - \Psi)^T\big) \succeq \mathrm{FIM}^{-1}

where Ψ̂_{k+1} is the estimate of Ψ at time t_{k+1}. The FIM [19] is given by

[\mathrm{FIM}(\tilde{q}, \Psi)]_{ij} = -\mathbb{E}\Big(\frac{\partial^2 \mathcal{L}(\tilde{q}, \Psi)}{\partial \psi_i \partial \psi_j}\Big)

where the expectation is taken with respect to p(Y_{≤k+1} | Ψ). The analytical closed form of the FIM is given by

[\mathrm{FIM}(\tilde{q}, \Psi)]_{ij} = \frac{1}{2} \operatorname{tr}\Big(\Sigma_{Y_{\le k+1}}^{-1} \frac{\partial \Sigma_{Y_{\le k+1}}}{\partial \psi_i} \Sigma_{Y_{\le k+1}}^{-1} \frac{\partial \Sigma_{Y_{\le k+1}}}{\partial \psi_j}\Big)    (7)

Since the true value of Ψ is not available, we evaluate the FIM in Equation (7) at the currently available best estimate Ψ̂_k. This has been an effective practical solution when the FIM must be evaluated while Ψ is estimated simultaneously [22,23]. The error introduced by the MAP estimation error when evaluating the FIM in Equation (7) decreases as the number of samples increases.
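A minimal sketch of Equation (7) follows, assuming numpy; the covariance builder cov(q, psi), which assembles Σ_{Y≤k+1} for candidate positions q, is a placeholder supplied by the caller, and the derivatives ∂Σ/∂ψ_i are approximated here by central finite differences rather than derived analytically.

```python
# A minimal sketch of the closed-form FIM in Equation (7); the exact
# derivatives dSigma/dpsi_i are replaced by central finite differences.
import numpy as np

def fisher_information(cov, q, psi, eps=1e-5):
    """[FIM]_ij = 0.5 tr(Sigma^-1 dSigma/dpsi_i Sigma^-1 dSigma/dpsi_j).

    cov(q, psi) must return the covariance matrix of Y_{<=k+1} for
    candidate sampling positions q and hyperparameters psi."""
    psi = np.asarray(psi, dtype=float)
    Sigma_inv = np.linalg.inv(cov(q, psi))
    p = len(psi)
    dSigma = []
    for i in range(p):                    # central difference in psi_i
        dp = np.zeros(p)
        dp[i] = eps
        dSigma.append((cov(q, psi + dp) - cov(q, psi - dp)) / (2 * eps))
    F = np.empty((p, p))
    for i in range(p):
        A = Sigma_inv @ dSigma[i]
        for j in range(p):
            F[i, j] = 0.5 * np.trace(A @ Sigma_inv @ dSigma[j])
    return F
```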

We can expect that minimizing the CRLB results in a decrease of uncertainty in estimating Ψ [22]. Using the D-optimality criterion [24,25], the objective function J is given by

J(\tilde{q}, \hat{\Psi}_k) := \det\big(\mathrm{FIM}^{-1}(\tilde{q}, \hat{\Psi}_k)\big)    (8)

Minimizing J in Equation (8) corresponds to minimizing the volume of the ellipsoid that represents the maximum confidence region for the estimated hyperparameters. However, if one hyperparameter has a much larger variance than the others, minimizing the volume may not be very useful [25]. As an alternative, the A-optimality criterion, which minimizes the sum of the variances, may be used. The objective function J based on the A-optimality criterion is

J(\tilde{q}, \hat{\Psi}_k) := \operatorname{tr}\big(\mathrm{FIM}^{-1}(\tilde{q}, \hat{\Psi}_k)\big)    (9)

A control law for the mobile sensor network can be formulated as follows:

q(t_{k+1}) = \arg\min_{\tilde{q} \in \mathcal{Q}^{N_s}} J(\tilde{q}, \hat{\Psi}_k)    (10)

A gradient descent strategy can be used to find the next optimal sampling positions:

\tilde{q}^{(i+1)} = \tilde{q}^{(i)} - \alpha^{(i)} \nabla_{\tilde{q}^{(i)}} J(\tilde{q}^{(i)}, \hat{\Psi}_k)

where α^{(i)} is a small positive step size which can be obtained using a backtracking line search. Alternatively, to respect the agents' mobility constraints, the control law can be formulated as follows:

q(t_{k+1}) = \arg\min_{\tilde{q} \in \prod_{i=1}^{N_s} Q_i} J(\tilde{q}, \hat{\Psi}_k)

where

Q_i = q_i(t_k) + \prod_{j=1}^{n_d} [-\delta_j, \delta_j]

in which n_d = 2 denotes the dimension of the surveillance region 𝒬 and δ_j is the maximum distance each agent can move along the j-th coordinate (x and y) in one time step. The sketch after this paragraph illustrates one simple way to approximate this constrained minimization.
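The sketch below assumes numpy and a user-supplied fim(q) (e.g., the fisher_information sketch above evaluated at Ψ̂_k). It is a greedy simplification, not the paper's exact optimizer: agents are updated one at a time over a coarse 3 × 3 candidate grid inside each box Q_i, with δ_j = δ.

```python
# A greedy, grid-based sketch of the A-optimal control law; fim(q)
# returns the Fisher Information Matrix for candidate positions q.
import numpy as np

def next_positions(fim, q, delta=1.0):
    """Update agent positions q (shape Ns x 2) one agent at a time."""
    q_new = q.copy()
    offsets = [np.array([dx, dy]) for dx in (-delta, 0.0, delta)
                                  for dy in (-delta, 0.0, delta)]
    for i in range(len(q)):
        costs = []
        for d in offsets:
            cand = q_new.copy()
            cand[i] = q[i] + d
            # A-optimality, Equation (9): sum of the CRLB variances
            costs.append(np.trace(np.linalg.inv(fim(cand))))
        q_new[i] = q[i] + offsets[int(np.argmin(costs))]
    return q_new
```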

However, the optimizations of ln p(Y_{≤k+1} | Ψ) in Equation (6) and J(q̃, Ψ̂_k) in Equations (8) or (9) can be numerically costly due to the growing size of Σ_{Y≤k}. One way to deal with this problem is to use a truncated data set

Y_{k-\delta, k} := \operatorname{col}\big(y(q_j(t_m), t_m) \mid m \in \mathbb{Z},\ j \in \mathcal{I},\ k - \delta \le m \le k\big)

instead of Y_{≤k}. In addition, this approach based on truncated observations can be viewed as a strategy for dealing with a slowly time-varying parameter vector Ψ, which is further investigated in Section 6.2.

The overall protocol for the sensor network is summarized in Table 1.

6. Simulation Results

In this section, we evaluate the proposed approach on a spatio-temporal Gaussian process (Section 6.1) and an advection-diffusion process (Section 6.3). In both cases, we compare the simulation results of the proposed optimal sampling strategy with those of a benchmark random sampling strategy. In the random sampling strategy, each agent is initially deployed at a random position in the surveillance region, and at each time step the next sampling position of agent i is generated randomly under the same mobility constraint, viz. a random position within a square region of side length 2 centered at the current position q_i. For a fair comparison, all other conditions were kept the same. In Section 6.2, the approach based on truncated observations is applied to a Gaussian process with a time-varying covariance function to demonstrate the adaptability of the proposed scheme.

6.1. A Spatio-Temporal Gaussian Process

We first apply our approach to a spatio-temporal Gaussian process, numerically generated for the simulation [11]. The hyperparameters used in the simulation were chosen such that Ψ = [σ_f σ_x σ_y σ_t σ_w]^T = [5 4 2 8 0.5]^T. Two snapshots of the realized Gaussian random field at times t = 1 and t = 20 are shown in Figure 1. In this case, N_s = 5 mobile sensing agents were initialized at random positions in a surveillance region 𝒬 = [0, 20] × [0, 20]. The initial values for the algorithm were chosen to be Ψ^{(0)} = [1 10 10 1 0.1]^T. The prior on the hyperparameter vector was selected as

p(\Psi) = p(\sigma_f)\, p(\sigma_x)\, p(\sigma_y)\, p(\sigma_t)\, p(\sigma_w)

where p(σ_f) = p(σ_x) = p(σ_y) = p(σ_t) = Γ(5, 2) and p(σ_w) = Γ(5, 0.2). Here Γ(a, b) denotes a Gamma distribution with mean ab and variance ab², whose support is the positive real line. The gradient method was used to find the MAP estimate of the hyperparameter vector.

For simplicity, we assumed that the global basis is the same as the model basis. We considered a situation in which, at each time step, the agents' measurements are transmitted to a leader (or a central station) that runs our Gaussian process learning algorithm and sends the optimal controls back to the individual agents for the next iteration, improving the quality of the estimated covariance function. The maximum distance an agent can move in one time step was chosen to be 1 in both the x and y directions. The A-optimality criterion was used for optimal sampling.

For both the proposed and random strategies, Monte Carlo simulations were run 100 times, and the statistical results are shown in Figure 2. The estimates of the hyperparameters (shown as circles with error bars) tend to converge to the true values (shown as dotted lines) for both strategies. As can be seen, the proposed scheme (Figure 2(b)) outperforms the random strategy (Figure 2(a)) in terms of the A-optimality criterion.

Figure 3 shows the predicted field along with the agents' trajectories at times t = 1 and t = 20 for one trial. As shown in Figures 1(a) and 3(a), at time t = 1 the predicted field is far from the true field due to the inaccurate hyperparameter estimates and the small number of observations. As time increases, the predicted field becomes closer to the true field thanks to the improved quality of the estimated covariance function and the accumulated observations. As expected, at time t = 20 the quality of the predicted field is very good near the sampled positions, as shown in Figure 3(b). With 100 observations, the running time is around 30 s using Matlab R2008a (MathWorks) on a PC (2.4 GHz dual-core processor); no attempt was made to optimize the code. After converging to a good estimate of Ψ, the agents can switch to a decentralized configuration and collect samples for other goals such as peak tracking and prediction of the process [6,16,17].

6.2. Time-Varying Covariance Functions

To illustrate the adaptability of the proposed strategy to time-varying covariance functions, we introduce a Gaussian process whose covariance function is a time-varying weighted sum of two known covariance functions 𝒦₁(·) and 𝒦₂(·):

\mathcal{K}(\cdot) = \lambda(t)\, \mathcal{K}_1(\cdot) + (1 - \lambda(t))\, \mathcal{K}_2(\cdot)    (12)

where λ(t) ∈ [0, 1] is a time-varying weight factor that needs to be estimated. In the simulation study, 𝒦₁(·) is constructed with σ_f = 1, σ_x = 0.2, σ_y = 0.1, σ_t = 8, and σ_w = 0.1, and 𝒦₂(·) with σ_f = 1, σ_x = 0.1, σ_y = 0.2, σ_t = 8, and σ_w = 0.1. The Gaussian process defined in Equation (12) with these particular 𝒦₁ and 𝒦₂ effectively models hyperparameter changes in the x and y directions.
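A minimal sketch of Equation (12) follows, reusing the st_kernel sketch from Section 3 with the hyperparameter values above; note that the σ_w terms enter as a noise diagonal on the covariance matrix rather than through the kernel itself, and the function name is illustrative.

```python
# A minimal sketch of the time-varying mixture covariance in Equation (12);
# st_kernel is the Equation (2) sketch from Section 3, lam is lambda(t).
def mixture_kernel(s, t, s2, t2, lam):
    k1 = st_kernel(s, t, s2, t2, 1.0, 0.2, 0.1, 8.0)  # K1: sf, sx, sy, st
    k2 = st_kernel(s, t, s2, t2, 1.0, 0.1, 0.2, 8.0)  # K2: sf, sx, sy, st
    return lam * k1 + (1.0 - lam) * k2
```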

To improve adaptability, the mobile sensor network uses only the observations sampled during the last 20 iterations to estimate λ(t) online. The true λ(t) and the estimated λ(t) are shown in Figure 4(a,b), respectively. From Figure 4, it is clear that the weighting factor λ(t) can be estimated accurately after a delay of about 5–8 iterations. The delay arises because the truncated window still contains past observations while the time-varying covariance function changes continuously in time.

6.3. Fitting a Gaussian Process to an Advection-Diffusion Process

We apply our approach to a spatio-temporal process generated by physical phenomena (advection and diffusion). This can be viewed as statistical modeling of a physical process, i.e., an effort to fit a Gaussian process to a physical advection-diffusion process in practice. The advection-diffusion model developed in [26] was used to generate the experimental data numerically. An instantaneous release of Q kg of gas occurs at a location (x₀, y₀, z₀), which is then spread by the wind with mean velocity u = [u_x 0 0]^T. Assuming that all measurements are recorded at the level z = 0 and that the release occurs at ground level (i.e., z₀ = 0), the concentration C at an arbitrary location (x, y, 0) and time t is described by the following analytical solution [27]:

C(x, y, 0, t) = \frac{Q \exp\Big(-\frac{(\Delta x - u_x \Delta t)^2}{4 K_x \Delta t} - \frac{\Delta y^2}{4 K_y \Delta t}\Big)}{4 \pi^{3/2} (K_x K_y K_z)^{1/2} (\Delta t)^{3/2}}

where Δx = x − x₀, Δy = y − y₀, and Δt = 5(t − 1) + t₀. The parameters used in the simulation study are shown in Table 2. Notice that this process generates an anisotropic concentration field, with parameters K_x = 20 m²/min and K_y = 10 m²/min as in Table 2. The fields at times t = 1 and t = 20 are shown in Figure 5; notice that the center of the concentration has moved. In this case, N_s = 5 mobile sensing agents were initialized at random positions in a surveillance region 𝒬 = [50, 150] × [−100, 100].
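A minimal sketch of this analytical solution follows, assuming numpy; the default parameter values follow Table 2, the discrete-time mapping Δt = 5(t − 1) + t₀ from above is built in, and the function name is illustrative.

```python
# A minimal sketch of the ground-level concentration solution, with
# defaults from Table 2; t is the sampling index, so dt = 5(t - 1) + t0.
import numpy as np

def concentration(x, y, t, Q=1e6, x0=2.0, y0=5.0, t0=100.0,
                  ux=0.5, Kx=20.0, Ky=10.0, Kz=0.2):
    dx, dy = x - x0, y - y0
    dt = 5.0 * (t - 1.0) + t0                    # minutes since release
    num = Q * np.exp(-(dx - ux * dt)**2 / (4 * Kx * dt)
                     - dy**2 / (4 * Ky * dt))
    den = 4 * np.pi**1.5 * np.sqrt(Kx * Ky * Kz) * dt**1.5
    return num / den
```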

The initial values for the algorithm were chosen to be Ψ^{(0)} = [100 100 100]^T, where we assumed σ_f = 1 and σ_w = 0.1. For this application, we did not assume any prior knowledge of the covariance function; hence, the MAP estimator coincides with the ML estimator. The gradient method was used to find the ML estimate.

We again assumed that the global basis is the same as the model basis and that all agents have the same level of measurement noise, for simplicity. In our simulation study, the agents start sampling at t₀ = 100 min and take measurements at times t_k with a sampling period of t_s = 5 min, as in Table 2.

Monte Carlo simulations were run 100 times, and Figure 6 shows the estimated σ_x, σ_y, and σ_t with (a) the random sampling strategy and (b) the optimal sampling strategy. With 100 observations, the running time at each time step is around 20 s using Matlab R2008a (MathWorks) on a PC (2.4 GHz dual-core processor); no attempt was made to optimize the code. As can be seen in Figure 6, the estimates of the hyperparameters tend to converge to similar values for both strategies. Clearly, the proposed strategy outperforms the random sampling strategy in terms of the estimation error variance.

7. Summary

In this paper, we presented a novel class of self-organizing sensing agents that learn an anisotropic, spatio-temporal Gaussian process using noisy measurements and move in order to improve the quality of the estimated covariance function. The MAP estimator was used to estimate the hyperparameters of the unknown covariance function, and the prediction of the field of interest was obtained based on the MAP estimates. An optimal navigation strategy was proposed to minimize an information-theoretic cost function based on the Fisher Information Matrix for the estimated hyperparameters. The proposed scheme was applied to both a spatio-temporal Gaussian process and a true advection-diffusion field. The simulation study indicated the effectiveness of the proposed scheme and its adaptability to time-varying covariance functions. The trade-off between precise estimation and computational efficiency when using truncated observations will be studied in future work.

Acknowledgments

This work has been supported by the National Science Foundation through CAREER Award CMMI-0846547. This support is gratefully acknowledged.

References

  1. Lynch, K.M.; Schwartz, I.B.; Yang, P.; Freeman, R.A. Decentralized environmental modeling by mobile sensor networks. IEEE Trans. Robot. 2008, 24, 710–724.
  2. Tanner, H.G.; Jadbabaie, A.; Pappas, G.J. Stability of Flocking Motion; Technical Report MS-CIS-03-03; The GRASP Laboratory, University of Pennsylvania: Philadelphia, PA, USA, May 2003.
  3. Olfati-Saber, R. Flocking for multi-agent dynamic systems: Algorithms and theory. IEEE Trans. Automat. Contr. 2006, 51, 401–420.
  4. Ren, W.; Beard, R.W. Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. Automat. Contr. 2005, 50, 655–661.
  5. Leonard, N.E.; Paley, D.A.; Lekien, F.; Sepulchre, R.; Fratantoni, D.M.; Davis, R.E. Collective motion, sensor networks, and ocean sampling. Proc. IEEE 2007, 95, 48–74.
  6. Choi, J.; Oh, S.; Horowitz, R. Distributed learning and cooperative control for multi-agent systems. Automatica 2009, 45, 2802–2814.
  7. Cressie, N. Kriging nonstationary data. J. Am. Statist. Assoc. 1986, 81, 625–634.
  8. Cressie, N. Statistics for Spatial Data; Wiley-Interscience, John Wiley and Sons, Inc.: Chichester, West Sussex, UK, 1991.
  9. Gibbs, M.; MacKay, D.J.C. Efficient implementation of Gaussian processes. Available online: http://www.cs.toronto.edu/mackay/gpros.ps.gz.Preprint/ (accessed on 3 March 2011).
  10. MacKay, D.J.C. Introduction to Gaussian Processes. In NATO ASI Series F: Computer and System Sciences; Springer-Verlag: Heidelberg, Germany, 1998; pp. 133–165.
  11. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; The MIT Press: Cambridge, MA, USA, 2006.
  12. Krause, A.; Guestrin, C.; Gupta, A.; Kleinberg, J. Near-optimal sensor placements: Maximizing information while minimizing communication cost. In Proceedings of the 5th International Symposium on Information Processing in Sensor Networks (IPSN), Nashville, TN, USA, April 2006.
  13. Krause, A.; Singh, A.; Guestrin, C. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. J. Mach. Learn. Res. 2008, 9, 235–284.
  14. Cortés, J. Distributed Kriged Kalman filter for spatial estimation. IEEE Trans. Automat. Contr. 2010, 54, 2816–2827.
  15. Graham, R.; Cortés, J. Cooperative adaptive sampling of random fields with partially known covariance. Int. J. Robust Nonlinear Contr. 2009, 1, 1–2.
  16. Choi, J.; Lee, J.; Oh, S. Swarm intelligence for achieving the global maximum using spatio-temporal Gaussian processes. In Proceedings of the 27th American Control Conference (ACC), Seattle, WA, USA, June 2008.
  17. Choi, J.; Lee, J.; Oh, S. Biologically-inspired navigation strategies for swarm intelligence using spatial Gaussian processes. In Proceedings of the 17th International Federation of Automatic Control (IFAC) World Congress, Seoul, South Korea, July 2008.
  18. Nott, D.J.; Dunsmuir, W.T.M. Estimation of nonstationary spatial covariance structure. Biometrika 2002, 89, 819–829.
  19. Kay, S.M. Fundamentals of Statistical Signal Processing: Estimation Theory; Prentice Hall, Inc.: Upper Saddle River, NJ, USA, 1993.
  20. Lagarias, J.; Reeds, J.; Wright, M.; Wright, P. Convergence properties of the Nelder-Mead simplex method in low dimensions. SIAM J. Optimization 1999, 9, 112–147.
  21. Mandic, M.; Frazzoli, E. Efficient sensor coverage for acoustic localization. In Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, December 2007.
  22. Martínez, S.; Bullo, F. Optimal sensor placement and motion coordination for target tracking. Automatica 2006, 42, 661–668.
  23. Nagamune, R.; Choi, J. Parameter reduction in estimated model sets for robust control. J. Dynam. Syst. Meas. Contr. 2010, 132, 021002.
  24. Pukelsheim, F. Optimal Design of Experiments; Wiley: New York, NY, USA, 1993.
  25. Emery, A.F.; Nenarokomov, A.V. Optimal experiment design. Meas. Sci. Technol. 1998, 9, 864–876.
  26. Kathirgamanathan, P.; McKibbin, R. Source term estimation of pollution from an instantaneous point source. Res. Lett. Inf. Math. Sci. 2002, 3, 59–67.
  27. Christopoulos, V.N.; Roumeliotis, S. Adaptive sensing for instantaneous gas release parameter estimation. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 2005.
Figure 1. Snapshots of the realized Gaussian process at (a) t = 1 and (b) t = 20.
Figure 2. Monte Carlo simulation results (100 runs) for a spatio-temporal Gaussian process using (a) the random sampling strategy and (b) the adaptive sampling strategy. The estimated hyperparameters are shown as blue circles with error bars; the true hyperparameters used for generating the process are shown as red dashed lines.
Figure 3. The predicted fields along with the agents' trajectories at (a) t = 1 and (b) t = 20.
Figure 4. (a) The true weighting factor λ(t) and (b) the estimated λ(t).
Figure 5. Snapshots of the advection-diffusion process at (a) t = 1 and (b) t = 20.
Figure 6. Simulation results (100 runs) for an advection-diffusion process: the estimated hyperparameters with (a) random sampling and (b) optimal sampling.
Table 1. An adaptive sampling strategy for mobile sensor networks.

  • Learning: At time t_k, the sensor network updates Ψ̂_k using the MAP estimate in Equation (4) on the data set Y_{≤k}, starting the MAP optimization from the initial point Ψ̂_{k−1}.

  • Prediction: Given Y_{≤k} and Ψ̂_k, the agents can compute the prediction at any point and time using Equation (3), i.e., p(z(s, t) | Y_{≤k}; Ψ̂_k).

  • Sampling: Based on {Ψ̂_k, Y_{≤k}}, the sensor network computes the control in Equation (10) in order to minimize J(q̃, Ψ̂_k). The positions of the agents are updated accordingly, and measurements are collected at time t_{k+1}.

  • Repeat steps 1–3 until Ψ converges.

Table 2. Parameters used in the simulation.

Parameter                    Notation  Unit     Value
Number of agents             N_s       -        5
Sampling time                t_s       min      5
Initial time                 t_0       min      100
Gas release mass             Q         kg       10^6
Wind velocity in x axis      u_x       m/min    0.5
Eddy diffusivity in x axis   K_x       m²/min   20
Eddy diffusivity in y axis   K_y       m²/min   10
Eddy diffusivity in z axis   K_z       m²/min   0.2
Location of explosion        x_0       m        2
Location of explosion        y_0       m        5
Location of explosion        z_0       m        0
Sensor noise level           σ_w       kg/m³    0.1
