Article

Mobile Sensor Path Planning for Kalman Filter Spatiotemporal Estimation

1 Department of Applied Mathematics, University of Washington, Seattle, WA 98195, USA
2 Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
3 Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
* Author to whom correspondence should be addressed.
Sensors 2024, 24(12), 3727; https://doi.org/10.3390/s24123727
Submission received: 16 April 2024 / Revised: 4 June 2024 / Accepted: 6 June 2024 / Published: 8 June 2024
(This article belongs to the Section Navigation and Positioning)

Abstract:
The estimation of spatiotemporal data from limited sensor measurements is a required task across many scientific disciplines. In this paper, we consider the use of mobile sensors for estimating spatiotemporal data via Kalman filtering. The sensor selection problem, which aims to optimize the placement of sensors, leverages innovations in greedy algorithms and low-rank subspace projection to provide model-free, data-driven estimates. Alternatively, Kalman filter estimation balances model-based information and sparsely observed measurements to collectively produce better estimates with limited sensors. It is especially important with mobile sensors to utilize historical measurements. We show that mobile sensing along dynamic trajectories can achieve performance equivalent to that of a larger number of stationary sensors, with performance gains related to three distinct timescales: (i) the timescale of the spatiotemporal dynamics, (ii) the velocity of the sensors, and (iii) the rate of sampling. Taken together, these timescales strongly influence how well-conditioned the estimation task is. We draw connections between Kalman filter performance and the observability of the state space model and propose a greedy path planning algorithm based on minimizing the condition number of the observability matrix. This approach has better scalability and computational efficiency than previous works. Through a series of examples of increasing complexity, we show that mobile sensing along our paths improves Kalman filter performance in terms of better limiting estimation and faster convergence. Moreover, it is particularly effective for spatiotemporal data that contain spatially localized structures, whose features are captured along dynamic trajectories.

1. Introduction and Related Work

Many scientific disciplines require the estimation of spatiotemporal data from limited, point-source sensor measurements for the purpose of characterization, forecasting, reconstruction, and/or control of a given system. Traditionally, a limited number of stationary sensors are placed in the system of interest, while mobile sensors and autonomous vehicles have gained increasing interest lately. In this paper, we consider the task of estimating a spatiotemporal system given measurements from limited mobile sensors, by utilizing Kalman filtering and a low-dimensional representation of the system. Under this setup, our goal is to efficiently plan sensor trajectories so that estimation can be achieved with very few sensors.
The goal of optimal sensor placement, also known as sensor selection, is to find the optimal locations in the state space to place only a few sensors so as to achieve the best performance in one or more of the above listed metrics. The combinatorial optimization problem of sensor selection is NP-hard, so most algorithms aim to find suboptimal solutions by leveraging greedy searches and low-rank subspace representations of the system in order to efficiently find a near-optimal solution [1]. Greedy methods [2,3] are computationally efficient and include QR decomposition [4] with column pivoting [1,5,6], (Q)DEIM [7,8], and GappyPOD [9,10,11], all of which take advantage of the submodularity, or near-submodularity, of criteria such as the trace, spectral norm, condition number, determinant, and/or its low-rank projection basis. Greedy searches can also be modified to include cost constraints in the sensor placement problem [5]. Other objectives, such as the reconstruction error [12] and the observability matrix [13], can also be used for sensor selection. Statistical methods using Gaussian process models [14,15,16] are also effective in leveraging entropy or mutual information as the main objective for optimization. More recently, shallow decoder networks have been trained within the context of greedy algorithms [17,18].
In contrast to instantaneous estimation from sensor measurements, Kalman filtering provides a recursive method that estimates based on collective information from prior knowledge of the dynamical model and a time-history of the sensor measurements [19,20]. In the sensor placement problem, the number of sensors is often required to be at least the latent rank of the system in order to capture enough information for reconstruction [1]. However, with Kalman filter estimation, fewer sensors can be used to achieve the same performance, given that the system is observable with these sensor measurements [20]. Commonly, the Kalman filter sensor selection (KFSS) problem studies an objective based on the a posteriori error covariance, which is a metric in Kalman filtering for how much the estimates deviate from the truth. The metric can be considered within an observation period [21], but it is more commonly taken to the limit at the infinite-time horizon when full convergence of the Kalman filter is reached. Although optimization over the trace of the error covariance matrix, which represents the mean squared error (MSE), does not admit a constant-factor polynomial-time approximation [22,23,24], greedy methods are still near-optimal [25].
The diversity of mathematical methods highlighted above for optimal sensor placement typically focuses on stationary point sensors. However, in many applications, sensors can be mobile, in which case they are allowed to move freely in the measurement space while collecting measurements along the way. The problem concerning the design of trajectories or paths of sensors is called the sensor path planning problem. In the fields of engineering and robotics, the path planning problem has long been considered for the purposes of navigation as well as estimation in a dynamical environment [26,27,28,29,30]. The task of tracking and estimating a flow field has often been tackled by constructing a simplified, restricted problem that focuses on a network of sensors with a simple formation for efficient parameterization and optimization [31,32,33,34,35]. Different control laws for the sensor paths are considered for different tasks, including a simple circular or elliptical control [31], gradient climbing control [33], control along level curves [34], or control based on smoothed particle hydrodynamics [36]. Lynch et al. [37] proposed a decentralized mobile network to collectively estimate environmental functions through communication networks, while the sensors move according to a gradient control law that maximizes information. Shriwastav et al. [38] built a trajectory by connecting a cost-efficient path among optimal sensor placement locations under proper orthogonal decomposition (POD)-based reconstruction. For many of these works, the emphasis is on modeling and control of the sensor positions. Sensor scheduling [39,40] is a similar problem that concerns a schedule of densely placed sensors. Unlike the path planning problem, the sensors do not move in the scheduling problem, although it can still be formulated and solved as a special case of the path planning problem.
While many consider the sensor path in an infinite-time horizon, theoretical studies [41,42,43] have shown that the optimal infinite-time schedule is independent of the initial error covariance and can be approximated arbitrarily closely by a periodic schedule. This provides a mathematical foundation for studying problems that consider the planning of a periodic sensor trajectory for spatiotemporal estimation with Kalman filtering. Lan and Schwager [44,45] approached the periodic path planning problem with a rapidly exploring random cycles (RRC) method that constructs and evaluates cycles found by randomly exploring the state space using a tree structure; Chen et al. [46] utilized deep reinforcement learning instead as a learnable deterministic method for finding cycles. The problem extends to multiple sensors that do not have a set network formation, each following its individual path. These works are most closely related to the problem considered here. They approach the combinatorial optimization with a randomized or active search method, first searching for possible cycles, then evaluating their costs. By assumption, the sensors move to a different location at each discrete time step based on the trajectory found in this way.
In this paper, we consider the use of mobile sensors to improve the performance of estimating spatiotemporal data with Kalman filtering, where we focus on planning a periodic sensor trajectory that optimizes estimation. We assume that each sensor can move freely within a radius determined by a speed limit. We consider the condition number of the observability matrix of the model as a metric for Kalman filter estimation. The study of observability is not new and has been discussed previously in different sensor problems [13,32,47,48,49]. In particular, Manohar et al. [47] presented a balanced model reduction for sensor and actuator selection through observability and controllability in a linear quadratic Gaussian (LQG) controller setting. We build on these ideas, developing an optimization for the path planning of mobile sensors with the objective of dynamic Kalman filter estimation. We identify three distinct timescales related to Kalman filter design and estimation with mobile sensors: (i) the timescale of the spatiotemporal dynamics, (ii) the velocity of the sensors, and (iii) the rate of sampling. We propose an approach for greedy selection based on the empirical observability matrix for path planning, and leverage a low-rank representation of the system to reduce computational complexity. Figure 1 shows how our overall strategy leverages low-rank approximations in order to determine trajectories in the spatiotemporal fields of interest. Compared with previous works, our approach restricts neither the formation of the sensor network nor the shape of the trajectory, and builds the path by leveraging a low-rank system representation and greedy optimization. Our approach provides a scalable and efficient periodic path planning procedure for multi-sensor and high-dimensional problems.
We conduct a series of experiments on synthetic data, the Kuramoto–Sivashinsky system, and sea surface temperature data to show that mobile sensing improves Kalman filter performance in terms of better limiting estimation and faster convergence.

2. Problem Formulation and Background Methods

The mathematical formulation of the sensor selection problem considers a discrete-time linear system model:
$$x_{t+1} = A x_t + w_t,$$
where $x_t \in \mathbb{R}^n$, $A \in \mathbb{R}^{n \times n}$, and $w_t \in \mathbb{R}^n$ denotes the system disturbance, following a Gaussian distribution with zero mean and a covariance matrix $0 \preceq Q \in \mathbb{R}^{n \times n}$. The measurements from $k$ sensors are of the form:
$$y_t = C_t x_t + v_t,$$
where $y_t \in \mathbb{R}^k$, $C_t \in \mathbb{R}^{k \times n}$, and $v_t \in \mathbb{R}^k$ denotes the measurement noise, following a Gaussian distribution with zero mean and a covariance matrix $0 \preceq R \in \mathbb{R}^{k \times k}$. Directly measuring in the state space, we write the matrix $C_t$ as a selection matrix whose rows are standard unit vectors. We further assume that the measurement noise is independent and identically distributed across sensors with variance $\rho$, so the covariance matrix for $v_t$ is $R = \rho I$. In a time-invariant system with a stationary sensing scenario at fixed locations, $C_t = C$.
We denote a sensor trajectory of $k$ sensors by $\sigma = \{\sigma_1, \sigma_2, \ldots\}$, where $\sigma_t \subset [n]$ with $|\sigma_t| = k$ is the set of sensor locations at time $t$, which determines the selection matrix $C_t = C(\sigma_t)$ responsible for collecting measurements along the trajectory. In general, $\sigma$ can extend to an infinite-time horizon. Zhang et al. [41,42] show that any infinite-time trajectory can be approximated by a periodic trajectory. Therefore, we focus on a periodic trajectory of fixed cycle length rather than a trajectory over an infinite-time horizon. In practice, periodic trajectories also make sense since many systems contain some periodic or quasi-periodic characteristics. Furthermore, it is often favorable to plan a trajectory such that a sensor can return to a specified location periodically for maintenance and recharging. We then write $\sigma = \{\sigma_1, \sigma_2, \ldots, \sigma_l\}$ for a periodic trajectory of length $l$, so that $\sigma_{l+1} = \sigma_1$, $\sigma_{l+2} = \sigma_2$, and so on.
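As a minimal illustration (a Python/NumPy sketch, not code from the paper), the selection matrix $C(\sigma_t)$ for a set of sensor locations can be assembled as:

```python
import numpy as np

def selection_matrix(sigma_t, n):
    """k x n selection matrix whose rows are standard unit vectors,
    so that y = C @ x picks out the measured state entries."""
    C = np.zeros((len(sigma_t), n))
    C[np.arange(len(sigma_t)), sigma_t] = 1.0
    return C

C = selection_matrix([2, 5], 8)      # two sensors on an 8-state grid
y = C @ np.arange(8.0)               # measures x[2] and x[5] -> [2., 5.]
```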
In Section 2.1, we discuss the use of low-rank representation of the system for sparse sampling. We then introduce observability of the system in Section 2.2 and relate it to Kalman filter estimation performance in Section 2.3. Finally, we give attention to three key timescales in the Kalman filter model design in Section 2.4.

2.1. Reduced-Order Model and Sparse Sampling

In order to promote efficient computation and better model representation for sparse sampling, we consider a reduced-order model (ROM). Specifically, we consider a system with a low-rank linear representation:
$$x_t = \Psi z_t, \qquad z_{t+1} = \Lambda z_t + w_t, \qquad y_t = C_t x_t + v_t = C_t \Psi z_t + v_t,$$
where $z_t \in \mathbb{R}^m$ ($m < n$) is the internal low-rank state, $\Lambda \in \mathbb{R}^{m \times m}$ is the low-rank dynamics matrix, and $\Psi \in \mathbb{R}^{n \times m}$ is the linear projection basis. Measurements $y_t$ are collected in the original high-dimensional state space.
One can define the projection basis to be a universal basis for compressed sensing, or a tailored POD basis for a data-driven approach [1]. However, such a basis does not necessarily project onto a proper low-rank dynamical system. To find a low-rank representation, supposing that the dynamics $A$ are known, we can take a spectral decomposition of $A$ and truncate the eigenvalues and eigenvectors to a low-rank representation. Alternatively, a data-driven approach is to find a close estimate of the model from the data by using dynamic mode decomposition (DMD) and its many variants [50,51,52,53,54], which can be useful in sparse sensing [55]. The DMD modes constitute the linear projection from high-dimensional data to the low-rank representation, and the DMD eigenvalues form a diagonal dynamics matrix for the low-rank system.
ROMs are commonly utilized in the stationary sensor placement problem [1,8,11]. Assuming no disturbance and noise, the measurements can be expressed as $y_t = C \Psi z_t$. Then, we can obtain $x_t$ through a simple linear reconstruction via the Moore–Penrose pseudoinverse, $\hat{x}_t = \Psi \hat{z}_t = \Psi (C\Psi)^{\dagger} y_t$. It is clear that the reconstruction depends on the conditioning of the matrix $C\Psi$. Given a tailored basis, Q-DEIM [8] uses QR factorization with column pivoting (QRcp) to greedily find near-optimal selections. At each step, QRcp selects a new pivot column with the largest norm and removes the orthogonal projections onto the pivot column from the remaining columns. Controlling the condition number by maximizing the matrix volume, QRcp enforces a diagonally dominant structure through column pivoting and expands the submatrix volume. In a more recent work, GappyPOD+E [11] extends the Q-DEIM method to an “oversampling” case where the sample/selection size is larger than the basis rank to improve stability. Based on the theory of random sampling in GappyPOD [10], it is a deterministic method that utilizes a lower bound for the smallest eigenvalue of the submatrix to continue sensor selection beyond the model rank.
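The QRcp selection step can be sketched in a few lines of Python (a hypothetical random orthonormal basis stands in for a tailored POD/DMD basis):

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
n, m = 200, 5
Psi = np.linalg.qr(rng.standard_normal((n, m)))[0]  # stand-in n x m basis

# QR with column pivoting on Psi^T: the first m pivot indices are the
# greedily selected sensor locations (the Q-DEIM selection rule).
_, _, piv = qr(Psi.T, pivoting=True)
sensors = piv[:m]

C_Psi = Psi[sensors, :]           # the m x m submatrix C @ Psi
print(np.linalg.cond(C_Psi))      # kept well conditioned by pivoting
```

Selecting rows of $\Psi$ via pivoting on $\Psi^\top$ is the standard trick: each pivot greedily grows the volume of the selected submatrix, which controls its condition number.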

2.2. Observability

Observability is concerned with the possibility of recovering the states of a system from its observations. A time-varying system of the form $x_{t+1} = A x_t$, $y_t = C_t x_t$, or a pair $(A, C_t)$, is observable at time $t$ if the system state can be determined from the observations on $[t, \tau]$ for some $\tau > t$ [56]. The system is said to be observable if this holds for all time. Observability of a system is examined through the observability Gramian. In our discrete-time system, it is equivalent to study the observability matrix:
$$\mathcal{O}_t = \begin{bmatrix} C_t \\ C_{t+1} A \\ \vdots \\ C_{t+n-1} A^{n-1} \end{bmatrix}.$$
The system is observable if and only if the observability matrix has full (column) rank. When all states are measured, $C_t = I$, and the full observability matrix is:
$$\mathcal{O} = \begin{bmatrix} I \\ A \\ \vdots \\ A^{n-1} \end{bmatrix}.$$
In the reduced-order model representation, the projected full observability matrix is:
$$\mathcal{O}_\Psi = \begin{bmatrix} \Psi \\ \Psi \Lambda \\ \vdots \\ \Psi \Lambda^{n-1} \end{bmatrix}.$$
In a time-invariant system where $C_t = C$ is fixed in time, achieving full rank of the observability matrix may require multiple sensors or a long time period. For example, for a fully measured system, $\mathcal{O}$ is trivially full rank, and the system states can be determined immediately at each time step. However, for sparse sensing on a state space of large dimension, observability of the system is harder to achieve. Mobile sensing opens up possibilities to generate better observability with limited sensors.
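A small sketch (hypothetical 3-state system, Python) makes the rank test concrete: a single sensor on a cyclic-shift system is observable, because its measurement mixes through all states over time.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^{n-1} for a time-invariant pair (A, C)."""
    n = A.shape[0]
    blocks, M = [], C
    for _ in range(n):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])     # cyclic-shift dynamics
C = np.array([[1., 0., 0.]])     # a single sensor on state 0
O = observability_matrix(A, C)
print(np.linalg.matrix_rank(O))  # -> 3: full column rank, observable
```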
A fully observable system is necessary for an accurate estimation using sparse measurements. In particular, it allows Kalman filter estimation to converge to steady-state values on an infinite-time horizon. We discuss in more detail the connection between observability and Kalman filter estimation in the following section.

2.3. Kalman Filter

We use Kalman filtering for spatiotemporal estimation on a linear model. Under the assumption of Gaussian noise, it is known to be the best linear estimator for minimizing the mean squared error [19]. Kalman filter algorithms combine information from prior knowledge of the system and the observed measurements over time to find an optimal estimate of the system. Let $\Sigma_t$ denote the error covariance matrix at time $t$ in the Kalman filter estimation. By definition, its trace is the expected squared estimation error at time $t$. The error covariance follows a recurrence relation:
$$\Sigma_{t+1} = A \Sigma_t A^* - A \Sigma_t C_t^* (C_t \Sigma_t C_t^* + R)^{-1} C_t \Sigma_t A^* + Q.$$
In the time-invariant case, as $t \to \infty$, the limiting error covariance satisfies $\Sigma = A \Sigma A^* - A \Sigma C^* (C \Sigma C^* + R)^{-1} C \Sigma A^* + Q$, which is known as the discrete algebraic Riccati equation (DARE). When the system is observable, the error covariance is guaranteed to converge to a limit, or to a limit cycle under a periodic schedule [41].
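The recurrence and its fixed point can be checked numerically. The sketch below (hypothetical 2-state system) iterates the error-covariance recurrence and compares it against SciPy's DARE solver, using the standard filtering duality `solve_discrete_are(A.T, C.T, Q, R)`:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)
R = np.array([[0.5]])

# Iterate the error-covariance recurrence until convergence ...
Sigma = np.eye(2)
for _ in range(500):
    S = C @ Sigma @ C.T + R
    Sigma = A @ Sigma @ A.T - A @ Sigma @ C.T @ np.linalg.solve(S, C @ Sigma @ A.T) + Q

# ... and compare with the DARE fixed point (filtering form via duality).
Sigma_inf = solve_discrete_are(A.T, C.T, Q, R)
print(np.allclose(Sigma, Sigma_inf, atol=1e-6))
```

Since the pair $(A, C)$ here is observable and $A$ is stable, the recursion converges to the unique stabilizing DARE solution regardless of the initial covariance.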
The limiting error covariance is a common metric to evaluate a Kalman filter model. However, finding this limit by solving the DARE is difficult and computationally expensive since it does not have a closed-form solution. Therefore, knowing that observability is a necessary condition for Kalman filter estimation performance, we further show how the conditioning of the observability matrix drives better estimation. We first relate the limiting expected squared error to the conditioning of C in a time-invariant model with full-rank measurements. We then show that any model with sparse measurements or periodic trajectory can be reformulated at a larger time step to a time-invariant representation with full-rank measurements. Furthermore, the reformulated selection matrix is the same as the observability matrix in the original form.
The expected squared error is represented as the trace of the error covariance matrix, whose limit is the solution of a DARE. Since the DARE does not have a closed-form solution, we consider an upper and a lower bound for the trace of the solution. While various works have derived different bounds on the DARE solution [57,58], our analysis leverages the following results that isolate the selection matrix C to draw connections between the error covariance and the selection matrix:
Theorem 1. 
Consider the DARE $\Sigma = A \Sigma A^* - A \Sigma C^* (C \Sigma C^* + R)^{-1} C \Sigma A^* + Q$ with dimension $n$, assuming that $C^* R^{-1} C \succ 0$ and $Q \succ 0$. We then have the bounds:
  • [59] $\operatorname{tr}(\Sigma) \le \dfrac{2 \operatorname{tr}(Q)}{a_1 + \sqrt{a_1^2 + 4 \lambda_n(C^* R^{-1} C) \operatorname{tr}(Q)/n}}$, where $a_1 = 1 - \lambda_1(A^* A) + \lambda_1(Q) \lambda_n(C^* R^{-1} C)$;
  • [60] $\operatorname{tr}(\Sigma) \ge \dfrac{2 \operatorname{tr}(Q^{1/2})^2}{a_2 + \sqrt{a_2^2 + 4 n \lambda_1(C^* R^{-1} C) \operatorname{tr}(Q^{1/2})^2}}$, where $a_2 = n - \sum_i |\lambda_i(A)| + \operatorname{tr}(Q^{1/2})^2 \lambda_1(C^* R^{-1} C)$.
Here, $\lambda_i(X)$ denotes the $i$-th largest eigenvalue of $X$.
One can show that the lower bound is monotonically decreasing in $\lambda_1(C^* R^{-1} C)$, and the upper bound is monotonically decreasing in $\lambda_n(C^* R^{-1} C)$, given that $\lambda_1(A^* A) \ge 1 - \operatorname{tr}(Q)/(n \lambda_1(Q))$. In the special case $Q = qI$, we have $\lambda_i(Q) = q$ and $\operatorname{tr}(Q) = n \lambda_1(Q)$, so the condition reduces to $\lambda_1(A^* A) \ge 0$, which is trivially satisfied. The condition is also usually satisfied for a stable system in general, when the disturbance covariance does not have a heavily dominant eigenvalue.
Since we consider a model with independent and identically distributed measurement noise, $R = \rho I$, so $\lambda_i(C^* R^{-1} C) = \rho^{-1} \lambda_i(C^* C) = \rho^{-1} \sigma_i(C)^2$, where $\sigma_i(C)$ is the $i$-th largest singular value of $C$. Therefore, in a time-invariant model where $C$ is full rank, we can minimize the condition number $\kappa(C) = \sigma_1(C)/\sigma_n(C)$ in order to achieve a lower squared error.
However, in most scenarios, the system model is more complicated. When using limited sensors, the measurement matrix $C$ will not be full rank. In the mobile sensor scenario with a periodic trajectory, where $C_t$ depends on time, the system is not time-invariant. We can show a reformulation of these models to a time-invariant representation in which $C$ is full rank; the above result then applies to these models as well. Consider the general model $x_{t+1} = A x_t + w_t$, $y_t = C_t x_t + v_t$. Let $k$ index a coarser time step spanning $n$ original steps, where $n$ is the dimension of the state space or a multiple of the sensor trajectory period, so that $\hat{x}_k = x_{nt}$. Then, we can follow [61] and rewrite the system in the form of Equations (4) and (5):
$$\hat{x}_{k+1} = x_{n(t+1)} = A^n x_{nt} + \sum_{i=1}^{n} A^{i-1} w_{nt+n-i} =: \hat{A} \hat{x}_k + \hat{w}_k,$$
$$\hat{y}_k := \begin{bmatrix} y_{nt} \\ y_{nt+1} \\ \vdots \\ y_{n(t+1)-1} \end{bmatrix} = \begin{bmatrix} C_{nt} x_{nt} + v_{nt} \\ C_{nt+1}(A x_{nt} + w_{nt}) + v_{nt+1} \\ \vdots \\ C_{n(t+1)-1}\big(A^{n-1} x_{nt} + \sum_{i=2}^{n} A^{i-2} w_{nt+n-i}\big) + v_{n(t+1)-1} \end{bmatrix} = \begin{bmatrix} C_{nt} \\ C_{nt+1} A \\ \vdots \\ C_{n(t+1)-1} A^{n-1} \end{bmatrix} x_{nt} + \begin{bmatrix} v_{nt} \\ C_{nt+1} w_{nt} + v_{nt+1} \\ \vdots \\ C_{n(t+1)-1} \sum_{i=2}^{n} A^{i-2} w_{nt+n-i} + v_{n(t+1)-1} \end{bmatrix} =: \hat{C} \hat{x}_k + \hat{v}_k.$$
In this reformulation, $\hat{C}$ is time-invariant, and it is exactly the observability matrix $\mathcal{O}$ of the original form. If the system is observable, $\mathcal{O}$ has full rank, and so does $\hat{C}$. By representing a time-variant system of mobile sensors in a time-invariant form, we can draw the same conclusion as for the time-invariant system: the condition number of the observability matrix bounds the limiting error covariance matrix of the Kalman filter estimation. Thus, lowering the condition number of the observability matrix leads to better estimation performance.

2.4. Kalman Filter Design Factors

In the mobile sensor scenario, besides planning the trajectory of the sensors, the model design should also account for the following key factors: the system Nyquist rate, the discrete sampling rate, and the sensor speed. These three timescales relate to the conditioning of the observability matrix of the system and the performance of the Kalman filter estimation. Although not a definitive guide, the following provides useful heuristics for estimation performance based on these timescales.
The Nyquist rate represents the internal timescale of the continuous-time dynamics. It is defined to be twice the highest frequency of the spatiotemporal dynamics. The discretization of a continuous-time system is considered good if it samples faster than the Nyquist rate. We believe the same applies to mobile sensing with Kalman filter estimation: at least one measurement should be collected within each Nyquist period to capture the highest-frequency feature of the system at the most relevant location.
The sampling rate refers to the rate at which measurements are collected. It also represents the time step of the discrete model. A faster sampling rate, above the Nyquist rate, adds more measurements in a fixed time interval. In the stationary sensor setting, this improves the stability of the estimation. With mobile sensors, a faster sampling rate adds even more information, since the measurement locations change. This leads to better system observability and Kalman filter estimation, up to the point where the statistical independence of the measurements no longer holds.
The sensor speed determines the maximum region a sensor can measure in a fixed time interval. A faster sensor can reach and observe at a farther location in the state space to achieve better observability. More importantly, when the data contain local structures, it is essential to plan the sensor trajectories to capture those structures. Faster sensors can achieve this purpose when local structures are well separated in the state space, without the need to assign additional sensors.
The effect of these timescales will be further discussed in the numerical experiments in Section 4.

3. Computing Mobile Sensor Trajectories

With the problem formulation (3), and the discussion in Section 2.3, we consider the objective to minimize the condition number of the observability matrix under the schedule σ :
$$\min_{\sigma : |\sigma| = l,\ |\sigma_i| = k} \kappa(\mathcal{O}_\sigma).$$
The observability matrix with respect to trajectory σ of length l is written as:
$$\mathcal{O}_\sigma = \begin{bmatrix} C(\sigma_1) \Psi \\ C(\sigma_2) \Psi \Lambda \\ \vdots \\ C(\sigma_l) \Psi \Lambda^{l-1} \end{bmatrix} = \begin{bmatrix} C(\sigma_1) & & & \\ & C(\sigma_2) & & \\ & & \ddots & \\ & & & C(\sigma_l) \end{bmatrix} \begin{bmatrix} \Psi \\ \Psi \Lambda \\ \vdots \\ \Psi \Lambda^{l-1} \end{bmatrix} =: C_\sigma \mathcal{O}_\Psi,$$
where $\mathcal{O}_\Psi$ is the projected observability matrix with full measurements and $C_\sigma$ is a block-diagonal selection matrix determined by $\sigma$. The minimization problem described in Equation (6) becomes a submatrix selection problem minimizing the condition number. In the special case when the length of the periodic trajectory is 1, the objective becomes $\min_{\sigma : |\sigma| = k} \kappa(C_\sigma \Psi)$, which is identical to that of a stationary sensor placement problem under the DMD basis. Solving such a problem is in general NP-hard, but just as in the stationary sensor placement problem, we can leverage greedy algorithms, utilizing the same ideas as QRcp/Q-DEIM for under-sampling and GappyPOD+E for over-sampling, to solve it approximately.
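The factorization above can be formed directly. Below is a Python sketch with a hypothetical random basis and diagonal dynamics (all sizes and indices illustrative), showing $\mathcal{O}_\sigma$ as the product of a block-diagonal selection matrix and the projected observability matrix:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
n, m, l = 50, 3, 4               # states, model rank, trajectory period
Psi = np.linalg.qr(rng.standard_normal((n, m)))[0]   # stand-in basis
Lam = np.diag(rng.uniform(0.8, 1.0, size=m))         # stand-in DMD dynamics

def sel(idx, n):
    C = np.zeros((len(idx), n))
    C[np.arange(len(idx)), idx] = 1.0
    return C

sigma = [[3], [17], [29], [42]]  # one sensor visiting four locations
O_Psi = np.vstack([Psi @ np.linalg.matrix_power(Lam, t) for t in range(l)])
C_sigma = block_diag(*[sel(s, n) for s in sigma])
O_sigma = C_sigma @ O_Psi        # (l*k) x m trajectory observability matrix
print(O_sigma.shape, np.linalg.cond(O_sigma))
```

Note that a useful trajectory needs $l \cdot k \ge m$ rows for $\mathcal{O}_\sigma$ to have full column rank at all.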
We introduce our greedy time-forwarding algorithm in Section 3.1 and illustrate it on a synthetic example of sparse linear dynamics on a torus in Section 3.2 before presenting the main results in Section 4.

3.1. Algorithm

The projected full observability matrix $\mathcal{O}_\Psi$ is by definition segmented into blocks, so for the purpose of efficient computation, we propose a greedy algorithm that finds sensor locations $\sigma_1, \sigma_2, \ldots, \sigma_l$ by sequentially focusing on each block $\Psi, \Psi\Lambda, \ldots, \Psi\Lambda^{l-1}$.
The approach is detailed in Algorithms 1 and 2.
In the selection step, we want to find a row in $X$ from the candidate set $S$ to append to the current observability matrix in order to minimize $\kappa(\mathcal{O}_\sigma)$. We use the same selection rules as in QRcp and GappyPOD+E. The candidate set $S$ is critical when sensor movement constraints are present. When sensors are unrestricted in their movement, we can simply set $S = [n] \setminus \sigma_i$. However, in practice, sensors have a limit on their speed, so there is a restricted region within which they can move between time steps. Additionally, the state space can have a special multiply-connected topological structure such that not all locations are accessible from every other location. Future work will incorporate the background flow field in this estimated restriction region, although this is neglected for simplicity in the present work.
Algorithm 1: Greedy Time-forwarding Observability-based Path Planning Algorithm
[Algorithm pseudocode presented as a figure in the original article.]
Algorithm 2: Selection Step
[Algorithm pseudocode presented as a figure in the original article.]
Under a sensor speed constraint, we only consider the locations where
  • A sensor can move to within a time step from its current location;
  • It can go back to its initial location at the end of the period to form a cycle.
These requirements guide the selection of the candidate set $S$ in the algorithm. When the topology of the state space is regularly shaped, a simple Euclidean distance can be used; when it is irregular, with obstructions or complex network structures, we can resort to other types of distance functions.
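A simplified single-sensor sketch of the greedy time-forwarding idea is given below (Python). It substitutes a direct smallest-singular-value criterion for the paper's QRcp/GappyPOD+E selection rules and uses a 1-D Euclidean reachability constraint; all names and sizes are illustrative:

```python
import numpy as np

def greedy_path(Psi, Lam, l, start, speed, coords):
    """Greedily build a length-l cycle for one sensor: at each step t,
    among locations reachable now that can still return to the start in
    the remaining steps, pick the row of Psi @ Lam^t that maximizes the
    smallest singular value of the growing observability matrix."""
    n = Psi.shape[0]
    path, rows, loc = [start], [Psi[start]], start
    for t in range(1, l):
        Xt = Psi @ np.linalg.matrix_power(Lam, t)
        cand = [j for j in range(n)
                if abs(coords[j] - coords[loc]) <= speed          # reachable now
                and abs(coords[j] - coords[start]) <= speed * (l - t)]  # can close cycle
        best, best_s = loc, -np.inf
        for j in cand:
            s = np.linalg.svd(np.vstack(rows + [Xt[j]]), compute_uv=False)[-1]
            if s > best_s:
                best, best_s = j, s
        path.append(best)
        rows.append(Xt[best])
        loc = best
    return path

rng = np.random.default_rng(2)
Psi = np.linalg.qr(rng.standard_normal((20, 3)))[0]
Lam = np.diag([0.95, 0.9, 0.85])
path = greedy_path(Psi, Lam, l=4, start=0, speed=3, coords=np.arange(20))
```

This brute-force variant scans all candidates at every step; the paper's version gains efficiency from QRcp-style updates, but the constraint structure (reachability plus cycle closure) is the same.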

3.2. Illustrative Example: Sparse Linear Dynamics on a Torus

To show the effectiveness of mobile sensors, we demonstrate the algorithm on a random simulation of a sparse linear dynamical system on a torus. We design the system to contain two types of structures: 2D discrete inverse Fourier transform functions and Gaussian basis functions. A Fourier mode is a global feature present across the state space, while a Gaussian mode is a local feature concentrated in a small neighborhood around a center.
On a $128 \times 128$ spatial grid, we build the sparse system with two conjugate Fourier modes and three conjugate Gaussian modes by randomly generating their oscillation frequencies and damping rates (Figure 2). This is a system of size $n = 128^2 = 16{,}384$ with a low-dimensional linear representation of rank $m = 10$, where the projection basis $\Psi$ contains the modes, and the low-dimensional linear dynamics matrix $\Lambda$ is diagonal with the oscillation and damping information. The sampling rate is $dt = 0.01$. We generate the data by adding system disturbances and measurement noise. Since all parameters of the model are known in the synthetic example, we use the trace of the error covariance matrix as an accurate representation of the expected squared error to evaluate the estimation.
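The construction can be sketched as follows (Python; the specific frequency, center, width, and damping values are illustrative stand-ins for the randomly generated values used in the experiment, and only one mode of each type is shown):

```python
import numpy as np

N = 128
xg, yg = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N))

# One global Fourier mode (with its conjugate) and one local Gaussian mode.
fourier = np.exp(2j * np.pi * (3 * xg + 2 * yg)).ravel()
gauss = np.exp(-((xg - 0.3) ** 2 + (yg - 0.7) ** 2) / (2 * 0.05 ** 2)).ravel()

Psi = np.column_stack([fourier, np.conj(fourier), gauss])   # n x m basis
dt = 0.01
lam = np.exp(np.array([-0.1 + 5j, -0.1 - 5j, -0.05]) * dt)  # damped oscillations
Lam = np.diag(lam)          # diagonal low-rank dynamics in discrete time

# One simulation step of the low-rank state, lifted to the full grid.
z = np.ones(3, dtype=complex)
x_next = Psi @ (Lam @ z)    # state on the 128 x 128 grid, flattened
```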
First, we estimate the system with sensors at fixed locations selected by applying QRcp on the basis Ψ , a common sensor placement strategy. We see from Figure 3 that there is a significant performance improvement as we increase the number of fixed sensors up to three. At least three sensors are needed to obtain a good estimation of the system so that they can be placed to observe the local regions of the Gaussian modes.
We then show that equivalent performance can be achieved using only one mobile sensor with the same sampling rate and a fast enough speed. We choose a trajectory period such that the cycle is completed within the Nyquist rate of the system. When the sensor is slow, there is no significant improvement, since the sensor cannot move to other local features within a cycle. However, with fast enough speed, our algorithm is able to direct the sensor to reach the localization of all three Gaussian modes and make a better estimation (Figure 4). Under the same sampling rate, three sensors collect three times as many measurements as one sensor within any time interval. This fundamental difference in measurement count contributes to the gap between three stationary sensors and one mobile sensor. We can narrow this performance gap by increasing the sampling rate: at a sampling rate of 0.001, the difference in estimation error is minimal.
Through this synthetic experiment, we see that a mobile sensor can indeed improve Kalman filter estimation, and that the trajectory planned by our greedy method effectively pinpoints local structures and achieves good observability.

4. Numerical Experiments

In practice, it is often the case that for spatiotemporal data and systems, the underlying low-rank dynamic model, the disturbance, and the noise are not known. In this case, we fit an estimated model representation from data via DMD, and approximate the disturbance and noise covariances either from data or through hyperparameter tuning. Here, we look at two examples: (i) the Kuramoto–Sivashinsky (KS) system and (ii) the Sea Surface Temperature (SST) dataset from NOAA [62,63]. We study the performance when we use a DMD-approximated model for Kalman filter estimation and sensor path planning by our greedy algorithm. In both examples, we fit a linear DMD model with a chosen low rank. The dynamics matrix $\Lambda$ is diagonal with the DMD eigenvalues, and the basis $\Psi$ consists of the DMD modes. We further add white measurement noise with covariance $R = I$ to the data, and we set the system disturbance to $Q = qI$, where the uniform variance $q$ is a hyperparameter tuned by experiment.
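The DMD fitting step can be sketched in standard (exact) DMD form; the sanity check below recovers the eigenvalues of a known rank-2 linear system from a hypothetical 5-dimensional embedding:

```python
import numpy as np

def fit_dmd(X, r):
    """Standard (exact) DMD sketch: fit x_{t+1} ~ A x_t from snapshot
    columns of X, truncated to SVD rank r. Returns DMD modes Psi and
    the diagonal eigenvalue dynamics Lam."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    Ur, Sr_inv, Vr = U[:, :r], np.diag(1.0 / s[:r]), Vh[:r].conj().T
    Atil = Ur.conj().T @ X2 @ Vr @ Sr_inv      # r x r reduced operator
    lam, W = np.linalg.eig(Atil)
    Psi = X2 @ Vr @ Sr_inv @ W                 # exact DMD modes
    return Psi, np.diag(lam)

# Sanity check on data generated by a known damped rotation (eigs 0.9 e^{±0.3i}).
rng = np.random.default_rng(3)
th = 0.3
A2 = 0.9 * np.array([[np.cos(th), -np.sin(th)],
                     [np.sin(th),  np.cos(th)]])
P = rng.standard_normal((5, 2))                # embed into 5 dimensions
Z = np.empty((2, 60)); Z[:, 0] = [1.0, 0.5]
for t in range(59):
    Z[:, t + 1] = A2 @ Z[:, t]
Psi, Lam = fit_dmd(P @ Z, r=2)
print(np.sort(np.abs(np.diag(Lam))))           # both magnitudes recover 0.9
```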
These two examples are representative in different ways. The KS system is known for its chaotic behavior; therefore, an accurate linear representation of the system is extremely difficult to obtain. Additionally, the modes of the approximated linear model are mostly local, since the linear correlation between locations is small, so full observability is hard to achieve with a few fixed sensors. We show in Section 4.1 that mobile sensors can perform particularly well compared to fixed sensors by reaching more locations and capturing more local structures.
The SST dataset from NOAA contains weekly mean optimum-interpolated sea surface temperature measurements from global satellite data. The dataset can be well approximated by a linear model, and most modes of the approximated system are global, so that observability is easily achieved with even a single stationary sensor. We then show in Section 4.2 that mobile sensors further accelerate the convergence of the error.

4.1. Kuramoto–Sivashinsky System

The KS system is given by the equation u_t + uu_x + u_{xx} + u_{xxxx} = 0. We consider the numerical solution of the system on a spatial grid of size 2048 over x ∈ [0, 2π]. The initial condition is randomly generated from a standard normal distribution. With a random initial condition, we numerically solve the KS equation and collect data on the time interval t ∈ [0, 10] with a time step of dt = 10^−4. We first perform a singular value decomposition (SVD) to find a low-rank representation of the data. The first 100 singular values capture 99.99% of the energy, so we estimate a low-dimensional linear representation of the system by fitting a standard DMD model [50] with an SVD rank of 100.
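The rank-selection step amounts to choosing the smallest SVD truncation that captures a target fraction of the energy. A minimal sketch follows; the helper name and the toy data are ours (illustrative, not the KS solution):

```python
import numpy as np

def rank_for_energy(X, energy=0.9999):
    """Smallest SVD truncation rank whose singular values capture the
    given fraction of the total energy (sum of squared singular values)."""
    s = np.linalg.svd(X, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)

# Toy snapshot matrix with exactly 5 dominant directions plus a tiny
# noise floor, standing in for the KS data matrix.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
X += 1e-6 * rng.standard_normal((200, 300))
r = rank_for_energy(X)   # rank capturing 99.99% of the energy
```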
Because of the chaotic nature of the system, accurate estimation is not possible with only 10 sparse fixed sensors. Indeed, we need 100 fixed sensors, equal to the full rank of our approximated linear system, to effectively estimate the system (Figure 5). Additionally, we see no significant improvement in performance with an increasing sampling rate using fixed sensors, since more frequent measurements at the same locations add little information about the unobserved states.
On the other hand, mobile sensors can move to measure different locations and gain more information about the entire state space. With a fast enough sensor speed, 10 mobile sensors can achieve a significantly improved estimation compared to 10 fixed sensors by increasing the sampling rate (Figure 5). The improvement in performance is limited by the sensor speed. We set the minimum speed in this example to v_min = (2π/2048)/10^−4, so that the sensor is able to move to its left or right neighbor within a discrete sampling time step of 10^−4. With a higher sensor speed, sensors can make observations over a wider spatial range, thus giving a better estimation. As v → ∞, the performance of 10 mobile sensors approaches that of 100 fixed sensors with a fast enough sampling rate.
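As a quick check of the minimum-speed constraint, v_min is exactly one grid spacing per sampling step:

```python
import numpy as np

# Minimum sensor speed in the KS example: to reach the left or right grid
# neighbor within one sampling step, the sensor must cover one grid
# spacing dx = 2*pi/2048 in one time step dt = 1e-4.
n_grid, domain_length, dt = 2048, 2 * np.pi, 1e-4
dx = domain_length / n_grid
v_min = dx / dt   # roughly 30.7 spatial units per unit time
```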
Due to the greedy nature of our algorithm, it selects based on the immediate reward at the next time step and cannot look ahead. When the sampling is sufficiently fast, the greedy algorithm makes a decision based only on the closest neighbors of the current location. Such a decision is not informative, and the planned trajectory performs poorly; as we can see in Figure 5, the estimation error rises at faster sampling rates for mobile sensors with a speed constraint. One way to reduce the greediness of the algorithm is to perform a multiscale path completion. We start by finding a trajectory at a slower sampling rate. Then, we gradually decrease the time step and apply the same path planning algorithm, except that we use the previously found trajectory as guidance and fill in the gaps to construct a more complete trajectory at the faster sampling rate. We apply this multiscale expansion procedure to the KS example, initializing at the sampling rate with the smallest error and expanding to faster sampling rates from that path. The performance is no longer worse at fast sampling rates in the KS experiments; instead, it flattens and reaches a limit determined by the sensor speed (Figure 6).
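One level of this multiscale completion can be sketched as follows: the coarse trajectory is kept as anchors, and a new location is greedily inserted between each pair of consecutive anchors, doubling the sampling rate. The `score` and `candidates` callables below are placeholders for the observability-based objective and speed-feasible candidate set; the 1-D integer grid and the feature at x = 7 are toy assumptions of ours.

```python
def refine_path(coarse_path, score, candidates):
    """One level of multiscale path completion (simplified sketch).

    Keep the coarse trajectory as anchors and, between each pair of
    consecutive anchors, greedily insert the candidate location with
    the best (lowest) score, yielding a path at twice the rate.
    """
    fine = []
    for a, b in zip(coarse_path[:-1], coarse_path[1:]):
        fine.append(a)
        feasible = list(candidates(a, b))
        fine.append(min(feasible, key=score) if feasible else a)
    fine.append(coarse_path[-1])
    return fine

# Toy 1-D example: midpoint candidates reachable from both anchors
# within a speed limit, scored by distance to a feature at x = 7.
def candidates(a, b, vmax=3):
    return [c for c in range(min(a, b) - vmax, max(a, b) + vmax + 1)
            if abs(c - a) <= vmax and abs(c - b) <= vmax]

path = refine_path([0, 4, 8, 4, 0],
                   score=lambda c: abs(c - 7),
                   candidates=candidates)
```

The refined path keeps the coarse anchors at every other step while pulling the inserted points toward the informative feature.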

4.2. Sea Surface Temperature

The SST dataset contains weekly sea surface temperature measurements collected from satellite data on a 1 degree latitude by 1 degree longitude (180 by 360) global grid from 1990 to the present. We fit a standard DMD model and obtain an r = 10 low-dimensional representation.
First, we perform Kalman filter estimation using stationary sensor measurements and observe that one stationary sensor eventually achieves performance comparable to ten stationary sensors (Figure 7). This verifies that the approximated linear model contains mostly global features that can be observed well with very few sensors. However, with a poor initial estimate, Kalman filtering with ten fixed sensors converges to a low error very quickly (below 0.1 within one year), while it takes one sensor almost 28 years to reach a comparable error.
The issue of slow Kalman filter convergence can be addressed by using a mobile sensor instead, while still maintaining a good limiting estimate. The DMD eigenvalues suggest that the highest-frequency mode of the system has a period of about half a year, so we choose the period of the mobile sensor trajectory to be about a quarter of a year (14 weeks). Since the globe has continental land as an obstruction and the sensor moves in water, we must ensure the planned trajectory does not cross any land. We therefore build a connectivity graph and adjacency matrix for the candidate selection step of our algorithm, instead of using a simple Euclidean distance function.
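The graph-based candidate step can be illustrated with a simplified sketch of our own: candidate next locations are the cells reachable through water within a speed budget of graph steps, which automatically excludes points that are close in Euclidean distance but separated by land. The small grid, the boolean land mask, and the 4-neighbor connectivity are toy assumptions; the actual implementation works on the latitude–longitude grid.

```python
from collections import deque

def water_candidates(start, land, vmax):
    """Candidate next locations on a grid with land obstructions: cells
    reachable from `start` within `vmax` 4-neighbor steps through water
    only, replacing a simple Euclidean-ball candidate set. `land` is a
    2-D boolean mask (True = land)."""
    rows, cols = len(land), len(land[0])
    seen = {start}
    frontier = deque([(start, 0)])   # breadth-first search with depth
    out = []
    while frontier:
        (r, c), d = frontier.popleft()
        out.append((r, c))
        if d == vmax:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not land[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), d + 1))
    return out

# Toy 3x4 grid with a land column separating two basins.
land = [[False, True, False, False],
        [False, True, False, False],
        [False, False, False, False]]
cands = water_candidates((0, 0), land, vmax=2)
```

Cell (0, 2) lies within Euclidean distance 2 of the start, but the water graph excludes it because the only water route around the land column is longer than the speed budget.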
Figure 7 shows the results for one mobile sensor with different sensor speed limits. Mobile sensor estimation indeed produces much faster convergence than a stationary sensor. As the speed limit increases, the sensor can move to farther locations with better observability, further improving the convergence of the estimation. Figure 8 shows the paths of the sensor. When v is small, the initial location plays an important role, since the trajectory does not move far from it. The first location picked by the algorithm is close to Alaska, so the first two trajectories with low speed center around the North Pacific and the Arctic Ocean. As v increases, the sensor explores the equatorial and Southern Hemisphere regions, especially the El Niño regions around the equatorial Pacific, which is an important local feature. Figure 9 shows the planned trajectory using two mobile sensors.
Convergence speed also matters when the underlying dynamics are nonstationary and change over time. If the estimation does not reach a meaningful error in time, the shifting dynamics will further slow down the convergence and increase the limiting error. To show this, we instead fit a DMD model using only the first half of the SST data and use it as the approximated linear model for Kalman filter estimation. In this case, the fitted linear model is representative and relevant only for the first (training) half, and does not reflect any possible changes in the data dynamics afterwards. One stationary sensor then performs significantly worse due to slow convergence (Figure 10). On the other hand, the error from one mobile sensor converges quickly enough within the training period that the error in the second half remains relatively low. Therefore, fast convergence with mobile sensors ensures a fast adjustment of the estimate when the dynamics change in time.

5. Conclusions and Future Work

In this work, we developed a mathematical strategy for planning a periodic trajectory for a limited number of mobile sensors to estimate a spatiotemporal system via Kalman filtering. We examined the system observability as a metric that influences the estimation performance in terms of both the limiting squared error and the convergence rate. We considered an objective that minimizes the condition number of the discrete observability matrix along the trajectory and formulated it as a submatrix selection problem. We then proposed a time-forwarding greedy algorithm that selects sensor locations along the trajectory, using the same rules as QRcp and GappyPOD+E, from a carefully chosen candidate subset.
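The essence of the selection rule can be shown with a deliberately simple brute-force sketch of our own: at each step, add the row of the projected observability matrix that minimizes the condition number of the grown submatrix. The random stand-in for the observability matrix is hypothetical, and the paper's algorithm restricts candidates to speed-feasible locations and uses QRcp/GappyPOD+E-style pivoting rather than exhaustive search.

```python
import numpy as np

def greedy_min_cond(O_full, k):
    """Greedily select k rows of a projected observability matrix,
    adding at each step the row that minimizes the condition number
    of the grown submatrix (brute-force illustration only)."""
    chosen, current_cond = [], np.inf
    for _ in range(k):
        best, best_cond = None, np.inf
        for i in range(O_full.shape[0]):
            if i in chosen:
                continue
            c = np.linalg.cond(O_full[chosen + [i], :])
            if c < best_cond:
                best, best_cond = i, c
        chosen.append(best)
        current_cond = best_cond
    return chosen, current_cond

# Hypothetical stand-in for a projected observability matrix
# (30 candidate locations, reduced state dimension 4).
rng = np.random.default_rng(2)
O_full = rng.standard_normal((30, 4))
rows, cond = greedy_min_cond(O_full, k=4)
```

A well-conditioned selected submatrix keeps the reduced state observable from the chosen measurement locations, which is the property the path planner optimizes along the trajectory.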
The experiments show that the method is able to plan a trajectory that locates local features and improves the estimation performance. In these experiments, we explored Kalman filter design factors and their impact on estimation as they relate to three important timescales: the Nyquist rate of the underlying dynamics, the rate of sampling, and the velocity of the sensors. We find that mobile sensors are especially beneficial for a complex, nonlinear system, capturing local features in an approximated linear model without deploying a large number of sensors. We also see an improvement in the estimation convergence rate using mobile sensors, which reach an accurate estimate more rapidly.
The greedy approach depends on the key assumption that the sensor moves freely toward any location, restricted only to a radius defined by a movement speed limit. In future work, a weighted cost function can be added to the objective to better incorporate different costs into the path planning. Sensor speed can be formulated as a cost instead of a hard constraint imposed in the selection process. Furthermore, the energy consumption caused by sensor movement can also be included to plan a trajectory that is more energy efficient. For example, in flow field applications, we can consider the flow field information and the associated energy cost as the sensor moves with or against the flow. Incorporating the background flow velocity into the set of possible next locations is an important future extension of this work. Many different sensor control laws can also serve as cost constraints to incorporate other tasks, such as simultaneous structure tracking.
For multi-sensor planning, it will be interesting to consider a different asynchronous periodic trajectory for each individual sensor instead of all sensors having the same period. This will be particularly useful for multiscale systems, so that each sensor can be responsible for estimating features at a different timescale.
As shown in the numerical experiments, the performance of Kalman filter estimation fundamentally depends on accurate modeling of the linear system and the correct choice of hyperparameters. Better data-driven linear, or possibly nonlinear, system identification can be explored. An alternating model-fitting and estimation approach could also update both the model and the sensor trajectory continuously to achieve even better performance.

Author Contributions

Conceptualization, J.M., S.L.B. and J.N.K.; methodology, J.M.; software, J.M.; validation, J.M.; formal analysis, J.M.; investigation, J.M.; writing—original draft preparation, J.M.; writing—review and editing, J.M., S.L.B. and J.N.K.; visualization, J.M.; supervision, S.L.B. and J.N.K.; project administration, S.L.B. and J.N.K.; funding acquisition, S.L.B. and J.N.K. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge support from the National Science Foundation AI Institute in Dynamic Systems (grant number 2112085). S.L.B. acknowledges support from the Air Force Office of Scientific Research (FA9550-21-1-0178). J.N.K. acknowledges support from the Air Force Office of Scientific Research (FA9550-19-1-0011).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The sea surface temperature data is available at https://psl.noaa.gov. Other simulation data presented in this study are available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Manohar, K.; Brunton, B.W.; Kutz, J.N.; Brunton, S.L. Data-driven sparse sensor placement for reconstruction: Demonstrating the benefits of exploiting known patterns. IEEE Control Syst. Mag. 2018, 38, 63–86. [Google Scholar]
  2. Tropp, J.A. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50, 2231–2242. [Google Scholar] [CrossRef]
  3. Tropp, J.A.; Gilbert, A.C.; Strauss, M.J. Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit. Signal Process. 2006, 86, 572–588. [Google Scholar] [CrossRef]
  4. Trefethen, L.N.; Bau, D., III. Numerical Linear Algebra; SIAM: Philadelphia, PA, USA, 1997; Volume 50. [Google Scholar]
  5. Clark, E.; Askham, T.; Brunton, S.L.; Kutz, J.N. Greedy sensor placement with cost constraints. IEEE Sens. J. 2018, 19, 2642–2656. [Google Scholar] [CrossRef]
  6. Saito, Y.; Nonomura, T.; Yamada, K.; Nakai, K.; Nagata, T.; Asai, K.; Sasaki, Y.; Tsubakino, D. Determinant-based fast greedy sensor selection algorithm. IEEE Access 2021, 9, 68535–68551. [Google Scholar] [CrossRef]
  7. Chaturantabut, S.; Sorensen, D.C. Nonlinear model reduction via discrete empirical interpolation. SIAM J. Sci. Comput. 2010, 32, 2737–2764. [Google Scholar] [CrossRef]
  8. Drmac, Z.; Gugercin, S. A new selection operator for the discrete empirical interpolation method—Improved a priori error bound and extensions. SIAM J. Sci. Comput. 2016, 38, A631–A648. [Google Scholar] [CrossRef]
  9. Everson, R.; Sirovich, L. Karhunen–Loeve procedure for gappy data. JOSA A 1995, 12, 1657–1664. [Google Scholar] [CrossRef]
  10. Astrid, P.; Weiland, S.; Willcox, K.; Backx, T. Missing point estimation in models described by proper orthogonal decomposition. IEEE Trans. Autom. Control 2008, 53, 2237–2251. [Google Scholar] [CrossRef]
  11. Peherstorfer, B.; Drmac, Z.; Gugercin, S. Stability of discrete empirical interpolation and gappy proper orthogonal decomposition with randomized and deterministic sampling points. SIAM J. Sci. Comput. 2020, 42, A2837–A2864. [Google Scholar] [CrossRef]
  12. Li, B.; Liu, H.; Wang, R. Efficient Sensor Placement for Signal Reconstruction Based on Recursive Methods. IEEE Trans. Signal Process. 2021, 69, 1885–1898. [Google Scholar] [CrossRef]
  13. Ilkturk, U. Observability Methods in Sensor Scheduling; Order No. 3718711; Arizona State University: Tempe, AZ, USA, 2015; Available online: https://www.proquest.com/dissertations-theses/observability-methods-sensor-scheduling/docview/1712400301/se-2 (accessed on 15 April 2024).
  14. Caselton, W.F.; Zidek, J.V. Optimal monitoring network designs. Stat. Probab. Lett. 1984, 2, 223–227. [Google Scholar] [CrossRef]
  15. Krause, A.; Singh, A.; Guestrin, C. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. J. Mach. Learn. Res. 2008, 9, 235–284. [Google Scholar]
  16. Wang, L.; Zhao, Y.; Liu, J. A Kriging-based decoupled non-probability reliability-based design optimization scheme for piezoelectric PID control systems. Mech. Syst. Signal Process. 2023, 203, 110714. [Google Scholar] [CrossRef]
  17. Erichson, N.B.; Mathelin, L.; Yao, Z.; Brunton, S.L.; Mahoney, M.W.; Kutz, J.N. Shallow neural networks for fluid flow reconstruction with limited sensors. Proc. R. Soc. A 2020, 476, 20200097. [Google Scholar] [CrossRef]
  18. Williams, J.; Zahn, O.; Kutz, J.N. Data-driven sensor placement with shallow decoder networks. arXiv 2022, arXiv:2202.05330. [Google Scholar]
  19. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  20. Brunton, S.L.; Kutz, J.N. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control; Cambridge University Press: Cambridge, UK, 2019. [Google Scholar]
  21. Tzoumas, V.; Jadbabaie, A.; Pappas, G.J. Sensor placement for optimal Kalman filtering: Fundamental limits, submodularity, and algorithms. In Proceedings of the 2016 American Control Conference (ACC), Boston, MA, USA, 6–8 July 2016; pp. 191–196. [Google Scholar]
  22. Ye, L.; Roy, S.; Sundaram, S. On the complexity and approximability of optimal sensor selection for Kalman filtering. In Proceedings of the 2018 Annual American Control Conference (ACC), Milwaukee, WI, USA, 27–29 June 2018; pp. 5049–5054. [Google Scholar]
  23. Zhang, H.; Ayoub, R.; Sundaram, S. Sensor selection for Kalman filtering of linear dynamical systems: Complexity, limitations and greedy algorithms. Automatica 2017, 78, 202–210. [Google Scholar] [CrossRef]
  24. Dhingra, N.K.; Jovanović, M.R.; Luo, Z.Q. An ADMM algorithm for optimal sensor and actuator selection. In Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, CA, USA, 15–17 December 2014; pp. 4039–4044. [Google Scholar]
  25. Chamon, L.F.; Pappas, G.J.; Ribeiro, A. Approximate supermodularity of Kalman filter sensor selection. IEEE Trans. Autom. Control 2020, 66, 49–63. [Google Scholar] [CrossRef]
  26. Gunnarson, P.; Mandralis, I.; Novati, G.; Koumoutsakos, P.; Dabiri, J.O. Learning efficient navigation in vortical flow fields. Nat. Commun. 2021, 12, 7143. [Google Scholar] [CrossRef]
  27. Krishna, K.; Song, Z.; Brunton, S.L. Finite-horizon, energy-efficient trajectories in unsteady flows. Proc. R. Soc. A 2022, 478, 20210255. [Google Scholar] [CrossRef]
  28. Biferale, L.; Bonaccorso, F.; Buzzicotti, M.; Clark Di Leoni, P.; Gustavsson, K. Zermelo’s problem: Optimal point-to-point navigation in 2D turbulent flows using reinforcement learning. Chaos Interdiscip. J. Nonlinear Sci. 2019, 29, 103138. [Google Scholar] [CrossRef]
  29. Buzzicotti, M.; Biferale, L.; Bonaccorso, F.; Clark di Leoni, P.; Gustavsson, K. Optimal Control of Point-to-Point Navigation in Turbulent Time Dependent Flows using Reinforcement Learning. In Proceedings of the International Conference of the Italian Association for Artificial Intelligence, Milano, Italy, 24–27 November 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 223–234. [Google Scholar]
  30. Madridano, Á.; Al-Kaff, A.; Martín, D.; de la Escalera, A. Trajectory planning for multi-robot systems: Methods and applications. Expert Syst. Appl. 2021, 173, 114660. [Google Scholar] [CrossRef]
  31. Leonard, N.E.; Paley, D.A.; Lekien, F.; Sepulchre, R.; Fratantoni, D.M.; Davis, R.E. Collective motion, sensor networks, and ocean sampling. Proc. IEEE 2007, 95, 48–74. [Google Scholar] [CrossRef]
  32. DeVries, L.; Majumdar, S.J.; Paley, D.A. Observability-based optimization of coordinated sampling trajectories for recursive estimation of a strong, spatially varying flowfield. J. Intell. Robot. Syst. 2013, 70, 527–544. [Google Scholar] [CrossRef]
  33. Ogren, P.; Fiorelli, E.; Leonard, N.E. Cooperative control of mobile sensor networks: Adaptive gradient climbing in a distributed environment. IEEE Trans. Autom. Control 2004, 49, 1292–1302. [Google Scholar] [CrossRef]
  34. Zhang, F.; Leonard, N.E. Cooperative filters and control for cooperative exploration. IEEE Trans. Autom. Control 2010, 55, 650–663. [Google Scholar] [CrossRef]
  35. Paley, D.A.; Wolek, A. Mobile sensor networks and control: Adaptive sampling of spatiotemporal processes. Annu. Rev. Control. Robot. Auton. Syst. 2020, 3, 91–114. [Google Scholar] [CrossRef]
  36. Peng, L.; Lipinski, D.; Mohseni, K. Dynamic data driven application system for plume estimation using UAVs. J. Intell. Robot. Syst. 2014, 74, 421–436. [Google Scholar] [CrossRef]
  37. Lynch, K.M.; Schwartz, I.B.; Yang, P.; Freeman, R.A. Decentralized environmental modeling by mobile sensor networks. IEEE Trans. Robot. 2008, 24, 710–724. [Google Scholar] [CrossRef]
  38. Shriwastav, S.; Snyder, G.; Song, Z. Dynamic Compressed Sensing of Unsteady Flows with a Mobile Robot. arXiv 2021, arXiv:2110.08658. [Google Scholar]
  39. Liu, S.; Fardad, M.; Masazade, E.; Varshney, P.K. Optimal periodic sensor scheduling in networks of dynamical systems. IEEE Trans. Signal Process. 2014, 62, 3055–3068. [Google Scholar] [CrossRef]
  40. Shi, D.; Chen, T. Approximate optimal periodic scheduling of multiple sensors with constraints. Automatica 2013, 49, 993–1000. [Google Scholar] [CrossRef]
  41. Zhang, W.; Vitus, M.P.; Hu, J.; Abate, A.; Tomlin, C.J. On the optimal solutions of the infinite-horizon linear sensor scheduling problem. In Proceedings of the 49th IEEE Conference on Decision and Control (CDC), Atlanta, GA, USA, 15–17 December 2010; pp. 396–401. [Google Scholar] [CrossRef]
  42. Zhao, L.; Zhang, W.; Hu, J.; Abate, A.; Tomlin, C.J. On the optimal solutions of the infinite-horizon linear sensor scheduling problem. IEEE Trans. Autom. Control 2014, 59, 2825–2830. [Google Scholar] [CrossRef]
  43. Mo, Y.; Garone, E.; Sinopoli, B. On infinite-horizon sensor scheduling. Syst. Control Lett. 2014, 67, 65–70. [Google Scholar] [CrossRef]
  44. Lan, X.; Schwager, M. Planning periodic persistent monitoring trajectories for sensing robots in gaussian random fields. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 2415–2420. [Google Scholar]
  45. Lan, X.; Schwager, M. Rapidly exploring random cycles: Persistent estimation of spatiotemporal fields with multiple sensing robots. IEEE Trans. Robot. 2016, 32, 1230–1244. [Google Scholar] [CrossRef]
  46. Chen, J.; Shu, T.; Li, T.; de Silva, C.W. Deep reinforced learning tree for spatiotemporal monitoring with mobile robotic wireless sensor networks. IEEE Trans. Syst. Man Cybern. Syst. 2019, 50, 4197–4211. [Google Scholar] [CrossRef]
  47. Manohar, K.; Kutz, J.N.; Brunton, S.L. Optimal sensor and actuator selection using balanced model reduction. IEEE Trans. Autom. Control 2021, 67, 2108–2115. [Google Scholar] [CrossRef]
  48. Asghar, A.B.; Jawaid, S.T.; Smith, S.L. A complete greedy algorithm for infinite-horizon sensor scheduling. Automatica 2017, 81, 335–341. [Google Scholar] [CrossRef]
  49. Rafieisakhaei, M.; Chakravorty, S.; Kumar, P.R. On the use of the observability gramian for partially observed robotic path planning problems. In Proceedings of the 2017 IEEE 56th Annual Conference on Decision and Control (CDC), Melbourne, VIC, Australia, 12–15 December 2017; pp. 1523–1528. [Google Scholar] [CrossRef]
  50. Tu, J.H.; Rowley, C.W.; Luchtenburg, D.M.; Brunton, S.L.; Kutz, J.N. On dynamic mode decomposition: Theory and applications. J. Comput. Dyn. 2014, 1, 391–421. [Google Scholar] [CrossRef]
  51. Kutz, J.N.; Brunton, S.L.; Brunton, B.W.; Proctor, J.L. Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems; SIAM: Philadelphia, PA, USA, 2016. [Google Scholar]
  52. Jovanović, M.R.; Schmid, P.J.; Nichols, J.W. Sparsity-promoting dynamic mode decomposition. Phys. Fluids 2014, 26, 024103. [Google Scholar] [CrossRef]
  53. Askham, T.; Kutz, J.N. Variable projection methods for an optimized dynamic mode decomposition. SIAM J. Appl. Dyn. Syst. 2018, 17, 380–416. [Google Scholar] [CrossRef]
  54. Brunton, S.L.; Budišić, M.; Kaiser, E.; Kutz, J.N. Modern Koopman Theory for Dynamical Systems. SIAM Rev. 2022, 64, 229–340. [Google Scholar] [CrossRef]
  55. Kramer, B.; Grover, P.; Boufounos, P.; Nabi, S.; Benosman, M. Sparse sensing and DMD-based identification of flow regimes and bifurcations in complex flows. SIAM J. Appl. Dyn. Syst. 2017, 16, 1164–1196. [Google Scholar] [CrossRef]
  56. Kalman, R.E. On the general theory of control systems. In Proceedings of the First International Conference on Automatic Control, Moscow, Russia, 27 June–2 July 1960; pp. 481–492. [Google Scholar]
  57. Dai, H.; Bai, Z.Z. On eigenvalue bounds and iteration methods for discrete algebraic Riccati equations. J. Comput. Math. 2011, 29, 341–366. [Google Scholar]
  58. Kwon, W.H.; Moon, Y.S.; Ahn, S.C. Bounds in algebraic Riccati and Lyapunov equations: A survey and some new results. Int. J. Control 1996, 64, 377–389. [Google Scholar] [CrossRef]
  59. Komaroff, N. Upper bounds for the solution of the discrete Riccati equation. IEEE Trans. Autom. Control 1992, 37, 1370–1373. [Google Scholar] [CrossRef]
  60. Komaroff, N.; Shahian, B. Lower summation bounds for the discrete Riccati and Lyapunov equations. IEEE Trans. Autom. Control 1992, 37, 1078–1080. [Google Scholar] [CrossRef]
  61. Bittanti, S.; Colaneri, P.; Nicolao, G.D. The periodic Riccati equation. In The Riccati Equation; Springer: Berlin/Heidelberg, Germany, 1991; pp. 127–162. [Google Scholar]
  62. NOAA Optimum Interpolation (OI) SST V2 Data Provided by the NOAA PSL, Boulder, Colorado, USA. Available online: https://psl.noaa.gov (accessed on 15 April 2024).
  63. Reynolds, R.W.; Rayner, N.A.; Smith, T.M.; Stokes, D.C.; Wang, W. An improved in situ and satellite SST analysis for climate. J. Clim. 2002, 15, 1609–1625. [Google Scholar] [CrossRef]
Figure 1. Overview of the proposed approach to sensor path planning for dynamic estimation. The panels are divided into two main steps for estimating spatiotemporal data under a Kalman filter setting. The top panel shows the construction of a low-rank representation of the data as the prior model for Kalman filter through dynamic mode decomposition (DMD). The DMD modes and eigenvalues make up a linear dynamical model in a reduced dimension and a projection back to the original dimension. The dimension of the observability matrix is also reduced by the low-rank representation for efficient computation. The bottom panel illustrates the greedy path finding algorithm that optimizes the observability matrix along the path and improves Kalman filter estimation performance. It leverages a greedy row selection on the projected full observability matrix. Conceptually, at each time step, based on the historical selection of sensor locations, the sensors are led to the next valid locations within a velocity constraint.
Figure 2. A snapshot of the random system in 2D (left) and on a 3D torus (right).
Figure 3. Expected squared error of the KF estimation in time, with (a) stationary sensor placement by number of sensors and (b) one mobile sensor by velocity constraints.
Figure 4. Planned sensor trajectory (black arrows) with speed constraints of 5 (left) and 37 (right) units per time step of 0.01. The trajectory with the smaller speed constraint is only able to explore one Gaussian mode, while the trajectory with the higher speed constraint freely oscillates among all three Gaussian modes.
Figure 5. (a) True spatiotemporal dynamics of the KS system in T ∈ [9, 10]. (b) Bode plot of estimation error against sampling rate. (c) Estimated x–t plots by 10 mobile sensors in T ∈ [9, 10] with sampling rates dt = 0.001, 0.005, 0.01, 0.02 (corresponding to the dashed vertical lines on the Bode plot).
Figure 6. Bode plot of the estimation error against sampling rate. Estimation errors using multiscale path completion are plotted as solid dots connected by a line; the partially transparent dots are the previous results from the greedy approach.
Figure 7. Estimation error over (a) all time and (b) the first two years (104 weekly measurements).
Figure 8. Planned sensor trajectories (black dots connected by yellow arrows) with a cycle period of 14 weeks, where the movement speed is limited to 5, 15, 25, and 50 spatial units (1 degree of latitude or longitude). A zoomed-in map is shown on the right. The initial location picked by the algorithm is close to Alaska, so all trajectories expand from that region.
Figure 9. Planned sensor trajectory for 2 sensors with a cycle period of 14 weeks, with sensor speed limit at 15 spatial units (1 degree of latitude or longitude).
Figure 10. Kalman filter estimation error in time using approximated DMD model trained on the first half of the data.

Share and Cite

MDPI and ACS Style

Mei, J.; Brunton, S.L.; Kutz, J.N. Mobile Sensor Path Planning for Kalman Filter Spatiotemporal Estimation. Sensors 2024, 24, 3727. https://doi.org/10.3390/s24123727
