1. Introduction
In recent years, distributed target tracking has played a crucial role in various applications, including autonomous surveillance, environmental monitoring, and cooperative robotics. These systems rely on distributed estimation and local communication among agents to track moving targets in a scalable, robust, and real-time manner. However, maintaining estimation accuracy and network resilience in distributed systems is significantly challenged by faulty measurements, environmental disturbances, and the increasing risk of cyber–physical threats.
To address these challenges, researchers have examined various architectural strategies for estimation and communication in multi-agent systems. Estimation architectures are generally divided into two primary categories based on how information is shared. The first category is a centralized platform, where a single central node collects data from all agents in the communication network, conducts the necessary computations, and subsequently shares the results with those agents [1,2,3,4]. While this approach can achieve high estimation accuracy, it suffers from several critical drawbacks. The reliance on a single node introduces a single point of failure, requires high computational capacity at that node, and leads to poor scalability and security concerns. As a result, many researchers have shifted their attention toward decentralized frameworks, in which the central node is eliminated and each agent processes data locally while communicating with neighboring agents. This structure improves system scalability and fault tolerance, and significantly reduces communication overhead. Nonetheless, decentralized systems come with their own set of challenges, including increased network complexity and the need for reliable communication and coordination to ensure accurate and consistent performance across all agents.
Several estimation strategies have been explored to address these challenges. Among the most widely studied are variants of the Kalman filter adapted for distributed use, commonly known as distributed Kalman filters (DKFs) [5,6,7,8,9,10]. While these approaches allow each agent to maintain a local estimate, they still face limitations in practice. DKFs typically require high communication bandwidth due to the need to exchange covariance information, and their limited robustness and dependence on accurate model assumptions make them ill-suited to adversarial environments. These factors reduce their scalability and make them vulnerable to both faults and intentional data corruption in dynamic settings.
To address the shortcomings of DKFs, researchers have turned to adaptive and learning-based approaches that aim to improve resilience. For example, model-free adaptive control allows agents to follow reference signals using only local information, making it effective even in the presence of communication delays or denial-of-service attacks [11]. Neural network-based techniques have been employed to approximate unknown nonlinear dynamics in time-varying systems, enabling faster adaptation and more accurate tracking [12]. Likewise, adaptive fuzzy controllers provide a way to estimate unmeasurable states and reduce the impact of cyber–physical attacks, keeping tracking errors within acceptable bounds in large-scale networks [13]. More recently, data-driven and federated learning strategies have been explored to counter more severe disruptions such as Byzantine attacks or actuator faults, often by combining robust aggregation rules with online learning of uncertain system dynamics [14,15]. Despite these advances, such methods often entail trade-offs, including intensive computation, significant communication overhead, or complex agent coordination. These drawbacks limit their practicality in large, resource-constrained networks and leave open the need for simpler, more scalable, and inherently robust alternatives.
To overcome these limitations, researchers have increasingly turned to consensus-based estimation algorithms [16,17,18,19,20,21,22,23,24,25]. These approaches use simple local rules and information exchange between neighboring agents to gradually reach a shared understanding across the entire network. Unlike traditional distributed Kalman filters, consensus-based algorithms require less communication, scale more efficiently with larger systems, and are more resilient to issues like unstable connections and noisy measurements. Their simple design makes them particularly suitable for real-world deployments, especially in mobile or resource-constrained multi-agent systems [26].
Despite the advantages offered by decentralized estimation, a significant challenge remains in ensuring the resilience of such systems against cyber–physical threats, particularly false data injection attacks (FDIAs). In such attacks, an adversary deliberately feeds false data into the system, often by taking control of certain agents or by tampering with their communication channels. In moving-target tracking, these attacks can cause serious errors in estimating position and velocity, which are essential for maintaining accurate and reliable system behavior. As a result, improving the resilience of distributed estimation methods against false data injection has become a key focus in recent research. Numerous studies have explored ways to enhance both the communication and estimation processes within distributed networks. A central approach in this area involves developing mechanisms that can detect and isolate suspicious data inputs quickly, preventing them from affecting the accuracy of the overall network estimation [27,28,29,30,31,32,33,34,35,36,37].
This paper builds upon our previous study in [38], which introduced a decentralized, consensus-based target tracking algorithm employing a nearly constant velocity (NCV) model and saturation-based filtering to mitigate impulsive measurement variations. While that earlier work established the baseline framework, it did not incorporate mechanisms to identify or isolate malicious data. In the present study, we extend the framework by introducing a dynamic false data injection (FDI) detection and isolation mechanism that allows the algorithm not only to achieve accurate estimates under nominal conditions, but also to remain resilient against adversarial attacks in multi-agent networks. This mechanism enables each agent to evaluate incoming measurements in real time, detect anomalies, and disregard suspicious data before they can corrupt the global estimate. To assess the effectiveness of this extension, we conduct a series of baseline and adversarial case studies, including localized, coordinated, and transient attack scenarios. The results demonstrate substantial improvements compared with [38]; for instance, under widespread coordinated attacks with 50% of agents compromised, the proposed method reduces the mean-squared estimation error (MSEE) by about 65% relative to the baseline algorithm. Overall, this paper goes beyond our earlier contribution by providing a more comprehensive solution that improves both the accuracy and the security of distributed target tracking, while keeping communication and computational costs low.
The remainder of this article is organized as follows.
Section 2 reviews the system modeling setup, including the dynamic model of the moving target and the derivation of the observation matrix as introduced in our earlier work, along with a brief overview of graph theory and network representation.
Section 3 is divided into two subsections. The first subsection outlines the consensus-based estimation algorithm with saturation filtering, as proposed in our previous study, and the second subsection introduces the enhancements made through a false data injection detection and isolation mechanism, including the detection thresholds and agent isolation process.
Section 4 presents the simulation setup, baseline, and robustness case studies, and the associated results, illustrating the algorithm’s tracking accuracy and resilience under different network and attack scenarios, while also summarizing the key findings of this article.
Section 5 discusses the resilience, limitations, and potential improvements of the proposed methods, and
Section 6 provides the overall conclusion of this work.
2. System Modeling
This section introduces the mathematical models used to describe the system. The target's motion is described using a nearly constant velocity (NCV) model, which is a kinematic model that incorporates process noise to account for acceleration uncertainties. The measurement model is based on time-of-arrival (TOA) multilateration, enabling agents to estimate distances from the target and build local observation matrices. Finally, we use randomly generated networks to represent the multi-agent system. Together, these components form the foundation for the distributed estimation and fault detection methods presented later in this study.
2.1. Target Dynamics with NCV Model
The motion of the moving target is modeled using a nearly constant velocity framework. This approach employs a kinematic transition model and accounts for system uncertainty by incorporating acceleration terms as process noise inputs. Thus, the unmodeled accelerations are represented as disturbances in the following state-space formulation:

$$ x_{k+1} = A x_k + B w_k \qquad (1) $$

where $x_k$ and $x_{k+1}$ denote the state vector at the current and next time steps, respectively, $A$ and $B$ are the state transition and input matrices, and $w_k$ is the process noise vector. Using the NCV model with sampling interval $T$, the matrices $A$ and $B$ are given by:

$$ A = \begin{bmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} T^2/2 & 0 \\ T & 0 \\ 0 & T^2/2 \\ 0 & T \end{bmatrix} \qquad (2) $$

In the next subsection, we present the measurement model used by the agents to complete the mathematical representation of the system.
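As a brief illustration, the following MATLAB sketch builds the NCV matrices of Equation (2) and propagates the target state one step; the sampling interval and noise scale are placeholder values, not those used in our simulations.

```matlab
% NCV target model: state x = [px; vx; py; vy]
T = 0.1;                    % sampling interval (placeholder value)
A = [1 T 0 0;
     0 1 0 0;
     0 0 1 T;
     0 0 0 1];              % state transition matrix, Equation (2)
B = [T^2/2 0;
     T     0;
     0     T^2/2;
     0     T];              % input matrix mapping planar accelerations
x = [0; 1; 0; 0.5];         % initial position and velocity
w = 0.05 * randn(2, 1);     % unmodeled acceleration as process noise
x_next = A * x + B * w;     % one-step propagation, Equation (1)
```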
2.2. Measurement Model Using TOA Multilateration
In the context of distributed state estimation, each agent independently collects measurement data that are linked to the global state of the system. The process of capturing these local observations can be mathematically described by the following measurement model:

$$ y_{i,k} = C_i x_k + v_{i,k} \qquad (3) $$

where $y_{i,k}$ denotes the observation vector obtained by agent $i$ at time step $k$, $v_{i,k}$ models the measurement noise in the data collected by the agents, and $C_i$ is the observation matrix that shows how the system state is projected onto the measurements accessible to agent $i$.
The accurate formulation of the observation matrix $C_i$ is essential to ensure that each agent can accurately interpret and reconstruct the state of the system from its own measurements. In this study, we construct $C_i$ using a multilateration approach based on the Time-of-Arrival (TOA) principle, as introduced in [39]. In this setup, the target continuously sends out a signal that moves through the environment at a known speed. When agents detect this signal, they log the time at which it arrives, enabling the calculation of their respective distances from the source. These distance estimates form the basis for both localization and the mathematical formulation of the measurement model by capturing the differences in signal arrival times at various agents.
This derivation is visually illustrated in Figure 1, which shows a scenario where three agents work together to determine the location of a moving target. The figure demonstrates how sending and receiving the beacon signal helps the agents understand their positions relative to the target.
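As a minimal illustration of the TOA principle, the sketch below converts logged arrival times into range estimates; the propagation speed, emission time, and timestamps are hypothetical values chosen only for this example.

```matlab
% TOA ranging: distance = propagation speed x travel time
c        = 343;                      % signal speed in m/s (assumed, e.g., acoustic)
t_emit   = 0.000;                    % beacon emission time (hypothetical)
t_arrive = [0.012; 0.019; 0.026];    % arrival times logged by three agents
d = c * (t_arrive - t_emit);         % range from target to each agent
```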
For practical implementation, we assume that each agent exchanges data only with its set of immediate neighbors, reflecting a realistic scenario in localized and scalable multi-agent networks. This neighbor-limited information flow allows the observation matrix to be defined in a linearized format as below:

$$ C_i = \begin{bmatrix} \Delta p_{ij_1}^{\top} \\ \vdots \\ \Delta p_{ij_m}^{\top} \end{bmatrix}, \qquad \Delta p_{ij} = p_j - p_i, \quad j \in \mathcal{N}_i \qquad (4) $$

where $\Delta p_{ij}$ expresses the difference of positions between neighbor $j$ and agent $i$, mapped onto the position components of the state. The notation $\mathcal{N}_i$ denotes the immediate neighboring set of agent $i$. This setup ensures that each agent's observation matrix is customized to its local surroundings, using only data that can be directly measured or shared within a distributed system.
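A minimal sketch of this construction, assuming the state ordering [px; vx; py; vy] from Equation (2) and hypothetical agent positions and neighbor sets, is shown below; each row embeds a neighbor's position difference into the position entries of the state.

```matlab
% Build agent i's observation matrix from neighbor position differences
p  = [0 0; 4 1; 2 3; 5 5];       % agent positions (one row each), hypothetical
i  = 1;                          % index of the local agent
Ni = [2 3];                      % immediate neighbors of agent i (assumed)
Ci = zeros(numel(Ni), 4);        % one row per neighbor, state dimension = 4
for r = 1:numel(Ni)
    j  = Ni(r);
    dp = p(j, :) - p(i, :);      % position difference, Equation (4)
    Ci(r, [1 3]) = dp;           % map onto the [px, py] entries of the state
end
```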
2.3. Graph Theory and Network Representation
In this section, we provide an overview of graph theory and describe the specific network configuration employed in this study. Multi-agent systems are modeled within the framework of graph theory as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{W})$, where $\mathcal{V}$ denotes the set of nodes, $\mathcal{E}$ represents the set of links (edges) among nodes, and $\mathcal{W}$ corresponds to the weight matrix quantifying relevant attributes of the connections, such as distance.
For the purposes of this work, all network topologies are generated using the Erdős–Rényi random graph model. In this model, denoted $G(n, p)$, a graph is formed from a fixed set of $n$ nodes, where each possible edge between node pairs is included independently with probability $p$. This approach offers analytical simplicity and scalability, and serves as a standard benchmark for examining the effects of random connectivity and network structure in multi-agent systems.
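The following sketch generates an Erdős–Rényi topology of the kind used here; the node count and edge probability are placeholder values.

```matlab
% Erdos-Renyi random graph G(n, p) as a symmetric adjacency matrix
n    = 20;                           % number of agents (placeholder)
prob = 0.3;                          % edge inclusion probability (placeholder)
Adj  = triu(rand(n) < prob, 1);      % independent coin flip per node pair
Adj  = Adj + Adj';                   % symmetrize for an undirected network
neighbors = @(i) find(Adj(i, :));    % immediate neighbor set N_i of agent i
```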
In the following section, we provide a detailed demonstration of the estimation and fault detection algorithms.
3. Proposed Estimation and Fault Detection Algorithms
In this section, we first introduce the consensus-based estimation with saturation filtering, which was comprehensively explained in our previous work [38]. We then enhance the performance of this method by incorporating a false data injection detection mechanism. This addition aims to improve the robustness of our approach in adversarial environments and against cyber–physical threats, particularly false data injection attacks on agents, which typically appear as sharp anomalies in the innovation during the tracking of a moving target. For clarity and ease of reference, Table 1 summarizes all major variables, parameters, and symbols used throughout this manuscript.
3.1. Consensus-Based Estimation with Saturation Filtering
To provide context, we briefly summarize the distributed filtering framework in our earlier work. This framework operates in two steps: updating the state estimate using local observations (observation update) and then refining estimates by exchanging information between neighboring agents (estimation consensus).
During the observation update, each agent integrates its own measurement $y_{i,k}$ through saturation-based filtering to limit the effect of corrupted data. The update rule for agent $i$ at time step $k$ is computed as:

$$ \hat{x}_{i,k}^{+} = A \hat{x}_{i,k-1} + \gamma_{i,k} C_i^{\top} \left( y_{i,k} - C_i A \hat{x}_{i,k-1} \right) \qquad (5) $$

where $\hat{x}$ represents the estimated state, $A$ is the known state transition matrix obtained from Equation (2), $C_i$ is the observation matrix for agent $i$ obtained from Equation (4), and $\gamma_{i,k}$ is a saturation gain that addresses the innovation magnitude, defined by the rule:

$$ \gamma_{i,k} = \min\left( 1, \frac{\lambda}{\left\| y_{i,k} - C_i A \hat{x}_{i,k-1} \right\|} \right) \qquad (6) $$

In this format, $\lambda$ is the observation confidence parameter. This parameter restricts the influence of large innovations, which may be caused by measurement noise or adversarial interference, thereby enhancing the filter's robustness against such disturbances.
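A minimal sketch of this observation update, assuming the reconstructed forms of Equations (5) and (6) and reusing A and Ci from the earlier snippets, is given below.

```matlab
% Saturation-filtered measurement update for agent i (uses A, Ci from above)
lambda = 2.0;                              % observation confidence parameter (placeholder)
x_hat  = [0; 1; 0; 0.5];                   % agent i's previous state estimate
y_i    = Ci * x_hat + 0.1 * randn(size(Ci, 1), 1);  % simulated local measurement
x_pred = A * x_hat;                        % model prediction of the state
innov  = y_i - Ci * x_pred;                % innovation of agent i
gamma  = min(1, lambda / norm(innov));     % saturation gain, Equation (6)
x_hat  = x_pred + gamma * (Ci' * innov);   % updated estimate, Equation (5)
```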
Following the observation update, agents engage in an iterative consensus process to improve estimation accuracy by exchanging information with neighboring agents. This consensus step is performed multiple times within a single time step $k$. At consensus iteration $l$, agent $i$'s estimate is updated by combining its own estimate with those of its immediate neighbors $\mathcal{N}_i$:

$$ \hat{x}_i^{(l+1)} = \hat{x}_i^{(l)} + \epsilon \sum_{j \in \mathcal{N}_i} \left( \hat{x}_j^{(l)} - \hat{x}_i^{(l)} \right) \qquad (7) $$

In this expression, $\hat{x}_i^{(l+1)}$ is the updated estimate, $\epsilon$ is a small positive constant that controls the step size of the consensus update, and the summation adjusts the estimate toward the average of the neighbors' estimates. Performing multiple consensus steps promotes agreement across the network and improves accuracy compared to using local information alone.
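The consensus exchange of Equation (7) can be sketched as follows, assuming a stacked estimate matrix X (one column per agent) and reusing the adjacency matrix Adj and agent count n from the earlier graph snippet; the step size and iteration count are placeholders.

```matlab
% One round of consensus iterations over all agents
eps_c  = 0.1;        % consensus step size (placeholder; must keep the update stable)
L_cons = 5;          % number of consensus iterations per time step (placeholder)
X = randn(4, n);     % column i holds agent i's post-measurement estimate
for l = 1:L_cons
    Xnew = X;
    for i = 1:n
        Nbrs = find(Adj(i, :));       % immediate neighbors of agent i
        Xnew(:, i) = X(:, i) + eps_c * sum(X(:, Nbrs) - X(:, i), 2);  % Equation (7)
    end
    X = Xnew;                         % synchronous update across the network
end
```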
By integrating the saturation-based observation filter with the consensus-based estimation step, the proposed method achieves robustness against corrupted measurements while effectively fusing information across agents. This combined approach forms the foundation of our decentralized mobile target tracking algorithm based on the NCV model. The detailed steps of this method are presented in Algorithm 1.
Algorithm 1. Decentralized mobile target tracking using the consensus-based estimation filter algorithm
Initialize variables: $\hat{x}_{i,0}$ for all agents $i = 1, \dots, n$
for $k = 1, \dots, K$ do
    for $i = 1, \dots, n$ do
        Observation Matrix Calculation:
        for $j \in \mathcal{N}_i$ do
            Compute $C_i$ "from Equation (4)"
        end
        Measurement Update with Saturation Filtering:
        $\gamma_{i,k} \leftarrow \min\left(1, \lambda / \left\| y_{i,k} - C_i A \hat{x}_{i,k-1} \right\|\right)$ "from Equation (6)"
        $\hat{x}_{i,k}^{+} \leftarrow A \hat{x}_{i,k-1} + \gamma_{i,k} C_i^{\top} \left( y_{i,k} - C_i A \hat{x}_{i,k-1} \right)$ "from Equation (5)"
        Estimate Consensus:
        Let $\hat{x}_i^{(0)} \leftarrow \hat{x}_{i,k}^{+}$
        for $l = 0, \dots, L-1$ do
            $\hat{x}_i^{(l+1)} \leftarrow \hat{x}_i^{(l)} + \epsilon \sum_{j \in \mathcal{N}_i} \left( \hat{x}_j^{(l)} - \hat{x}_i^{(l)} \right)$ "from Equation (7)"
        end
        Let $\hat{x}_{i,k} \leftarrow \hat{x}_i^{(L)}$
    end
end
3.2. False Data Injection Detection and Isolation
To improve the resilience of Algorithm 1 against cyber–physical threats, particularly false data injection attacks (FDIAs), we introduce a detection and isolation mechanism. The core idea is to verify whether each agent’s local measurement is consistent with its predicted observation before it is incorporated into the distributed estimation process. If a measurement deviates significantly from what the model expects, it is considered potentially compromised and excluded from the update step.
At each time step $k$, agent $i$ computes an innovation value, which represents the difference between the current measurement and the predicted output based on the agent's previous state estimate. This innovation is given by:

$$ r_{i,k} = y_{i,k} - C_i A \hat{x}_{i,k-1} \qquad (8) $$

where $y_{i,k}$ is the measurement received by agent $i$, $C_i$ is the observation matrix, $A$ is the state transition matrix defined by the nearly constant velocity model, and $\hat{x}_{i,k-1}$ is the predicted state estimate from the previous step.
To assess whether the innovation $r_{i,k}$ is acceptable, it is compared against a dynamic detection threshold $\tau_{i,k}$. The threshold is defined as:

$$ \tau_{i,k} = \left\| A \right\| \left( e_{i,k-1} + \delta_{k-1} \right) + \bar{w} + \bar{v} \qquad (9) $$

where $\left\| A \right\|$ is the norm of the state transition matrix $A$. The term $e_{i,k-1}$ denotes the estimation error bound at the previous time step for agent $i$, and $\delta_{k-1}$ represents the consensus mismatch across the network, which measures how far apart the agents' estimates are at time $k-1$. The constants $\bar{w}$ and $\bar{v}$ represent predefined upper bounds on the process and measurement noise, respectively.
The consensus mismatch term $\delta_{k-1}$ is computed as the maximum deviation between any two agents' state estimates at the previous time step and is calculated as:

$$ \delta_{k-1} = \max_{i, j \in \mathcal{V}} \left\| \hat{x}_{i,k-1} - \hat{x}_{j,k-1} \right\| \qquad (10) $$

where $\mathcal{V}$ denotes the set of all agents in the network. This term reflects the global estimation disagreement and captures the worst-case divergence among the agents.
The estimation error bound $e_{i,k}$ is updated recursively at each time step using the following relation:

$$ e_{i,k} = \left\| A \right\| e_{i,k-1} + \bar{w} + \epsilon \, d_{i,k} \qquad (11) $$

where $\epsilon$ is the consensus step size and $d_{i,k}$ is the local consensus disagreement for agent $i$, representing the maximum deviation between its own estimate and those of its immediate neighbors $\mathcal{N}_i$. This local mismatch is calculated as:

$$ d_{i,k} = \max_{j \in \mathcal{N}_i} \left\| \hat{x}_{i,k} - \hat{x}_{j,k} \right\| \qquad (12) $$
If the magnitude of the innovation exceeds the threshold ($\left\| r_{i,k} \right\| > \tau_{i,k}$), the agent is flagged as compromised. In this case, the agent does not incorporate its current measurement into the state update. Instead, it uses the best prediction of the dynamic model:

$$ \hat{x}_{i,k} = A \hat{x}_{i,k-1} \qquad (13) $$

This action effectively isolates the compromised measurement, preventing it from corrupting the overall consensus process.
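Putting Equations (8)–(13) together, a minimal detection-and-isolation sketch for one agent follows; it reuses A, Ci, y_i, lambda, and x_hat from the earlier snippets, the noise bounds and previous-step quantities are placeholder values, and the threshold uses the reconstructed form of Equation (9).

```matlab
% FDI detection and isolation for agent i at time step k
w_bar   = 0.05;  v_bar = 0.2;          % assumed process/measurement noise bounds
e_prev  = 0.5;                         % estimation error bound e_{i,k-1} (placeholder)
delta_p = 0.3;                         % network consensus mismatch delta_{k-1} (placeholder)
r   = y_i - Ci * (A * x_hat);          % innovation, Equation (8)
tau = norm(A) * (e_prev + delta_p) + w_bar + v_bar;   % dynamic threshold, Equation (9)
if norm(r) > tau
    x_hat = A * x_hat;                 % isolate: fall back on the model, Equation (13)
else
    gamma = min(1, lambda / norm(r));  % otherwise, the normal saturation update
    x_hat = A * x_hat + gamma * (Ci' * r);
end
```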
If the innovation falls within the acceptable threshold, the agent continues with the normal measurement update and consensus-based fusion steps as outlined in the original Algorithm 1. By applying this detection and isolation process at every iteration, the estimation process becomes capable of automatically identifying and isolating faulty measurements. The complete procedure, incorporating this fault-resilient mechanism, is summarized in Algorithm 2.
In the next section, we take a closer look at how Algorithms 1 and 2 perform in a range of testing scenarios. These fall into two main categories: tests under normal conditions with varying network setups, and tests where agents are subjected to false data injection attacks. This approach allows us to evaluate how effectively the algorithms handle disruptions, adapt to changing conditions, and detect faulty information. The analysis aims to assess estimation accuracy, convergence behavior, and overall resilience in a decentralized framework.
Algorithm 2. Decentralized mobile target tracking using consensus-based estimation algorithm with fault detection
Initialize variables: $\hat{x}_{i,0}$ for all agents $i = 1, \dots, n$
for $k = 1, \dots, K$ do
    for $i = 1, \dots, n$ do
        Observation Matrix Calculation:
        for $j \in \mathcal{N}_i$ do
            Compute $C_i$ "from Equation (4)"
        end
        Innovation and Fault Detection:
        Compute $r_{i,k}$ and $\tau_{i,k}$ "from Equations (8) and (9)"
        if $\left\| r_{i,k} \right\| > \tau_{i,k}$ then
            $\hat{x}_{i,k} \leftarrow A \hat{x}_{i,k-1}$ "from Equation (13)"
        else
            Measurement Update with Saturation Filtering:
            $\gamma_{i,k} \leftarrow \min\left(1, \lambda / \left\| r_{i,k} \right\|\right)$ "from Equation (6)"
            $\hat{x}_{i,k}^{+} \leftarrow A \hat{x}_{i,k-1} + \gamma_{i,k} C_i^{\top} r_{i,k}$ "from Equation (5)"
            Estimate Consensus:
            Let $\hat{x}_i^{(0)} \leftarrow \hat{x}_{i,k}^{+}$
            for $l = 0, \dots, L-1$ do
                $\hat{x}_i^{(l+1)} \leftarrow \hat{x}_i^{(l)} + \epsilon \sum_{j \in \mathcal{N}_i} \left( \hat{x}_j^{(l)} - \hat{x}_i^{(l)} \right)$ "from Equation (7)"
            end
            Let $\hat{x}_{i,k} \leftarrow \hat{x}_i^{(L)}$
        end if
    end
end
4. Case Studies and Results
This section presents a comprehensive evaluation of the proposed decentralized, consensus-based target-tracking algorithm under a range of operating conditions. The evaluation is divided into two main categories. The first category, referred to as the Baseline Case Studies, investigates the performance of Algorithm 1 in attack-free environments. This part examines the effects of key design parameters, such as graph connectivity, the number of consensus iterations, and the saturation filter settings, on the accuracy of target tracking and the convergence behavior of the distributed estimates.
The second category, referred to as the Robustness Case Studies, evaluates the performance of both Algorithms 1 and 2 under various false data injection attacks. This part emphasizes the resilience and adaptability of the proposed framework when facing cyber-physical threats and highlights its effectiveness in mitigating the impact of faulty measurements.
Performance is assessed using the mean-squared estimation error (MSEE), defined in Equation (14), the average mean-squared estimation error given in Equation (15), and the innovation norm as defined in Equation (8):

$$ \mathrm{MSEE}_{i,k} = \left\| \hat{x}_{i,k} - x_k \right\|^2 \qquad (14) $$

$$ \overline{\mathrm{MSEE}}_k = \frac{1}{n} \sum_{i=1}^{n} \mathrm{MSEE}_{i,k} \qquad (15) $$

All simulations were implemented and executed in the MATLAB R2022 environment. The simulation setup reflects realistic network conditions and includes multiple agent configurations. Finally, a summary and discussion of the results is provided to give a clear comparative analysis.
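Assuming the agents' estimates at step k are stacked as columns of the matrix X from the earlier consensus snippet, these metrics reduce to a few lines of MATLAB; the true state vector is hypothetical.

```matlab
% MSEE per agent and network average at time step k, Equations (14)-(15)
x_true   = [1; 0.5; 2; 0.3];        % true target state at step k (hypothetical)
err      = X - x_true;              % X: 4 x n matrix of agent estimates
msee     = sum(err.^2, 1);          % squared estimation error per agent
avg_msee = mean(msee);              % average MSEE across the network
```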
4.1. Baseline Case Studies (1–3): Without FDIA
This set of experiments evaluates the performance of Algorithm 1 under benign conditions, where no false data injection attacks occur. The purpose of these baseline studies is to examine the influence of key design parameters, including the communication topology, the number of consensus iterations, and the configuration of the saturation filter on the estimation accuracy and convergence behavior. These tests provide a foundation for understanding the algorithm’s behavior under ideal conditions, before introducing adversarial scenarios.
4.1.1. Case Study 1—Effect of Graph Connectivity
This case study investigates how the density of the communication graph affects the tracking performance. Two random graphs with twenty agents are generated: one with a high connection probability, representing a highly connected network (Figure 2b), and one with a low connection probability, representing a sparsely connected network (Figure 2a). The corresponding average MSEE results for these configurations are shown in Figure 2c,d.
The results indicate that a higher level of connectivity improves estimation accuracy and accelerates convergence by allowing more information to be exchanged between agents. However, this enhanced performance requires greater communication bandwidth and computational resources. Therefore, designers must carefully balance the trade-off between estimation performance and the communication costs when choosing the network topology.
4.1.2. Case Study 2—Number of Consensus Iterations
This experiment examines the effect of the number of consensus iterations, denoted by , which controls how often each agent exchanges information with its immediate neighbors between successive time steps. Two different settings are tested: a lower communication rate () and a higher rate ().
The results for a random communication network in Figure 3a indicate that using fewer iterations slows the convergence process and yields higher estimation errors (Figure 3b,c), whereas increasing the number of iterations improves the convergence rate and reduces estimation errors (Figure 3d,e). However, this performance gain requires increased communication and computation. Therefore, the number of iterations must balance accuracy with system constraints.
4.1.3. Case Study 3—Saturation Filter Parameters
In this final baseline case, the influence of the saturation filter’s configuration on tracking robustness and sensitivity is investigated. Two filter settings are tested: a permissive (soft) threshold with () and a conservative (harsh) threshold with (). These settings control how much deviation is tolerated in received data before they are attenuated or discarded.
The results are shown for a random communication network in Figure 4a. Under the soft threshold setting, the system exhibits higher estimation errors, as seen in the innovation norm and average MSEE plots (Figure 4b,c), and it is more vulnerable to faulty measurements. In contrast, the harsh threshold setting yields smoother innovation norms and more stable behavior (Figure 4d,e), at the cost of reduced sensitivity to measurement innovations. This case highlights the critical trade-off between filter sensitivity and robustness.
4.2. Robustness Case Studies (4–6): With FDIA
In this section, we extend our experiments to evaluate the performance of Algorithm 2 alongside baseline Algorithm 1 under adversarial conditions. Our focus is on cyber–physical threats posed by false data injection attacks from an external attacker. FDIAs can cause substantial disturbances in measurements, creating anomalies or sharp spikes in the data that degrade the accuracy of the state estimates.
In the first case, we introduce a localized false data injection attack of high magnitude to show how the detection and isolation mechanism protects the estimates and improves the reliability of the communication system under cyber–physical threats. The second case considers a widespread coordinated attack where half of the agents receive false data at varying magnitudes, allowing us to assess the algorithm’s resilience when a significant portion of the network is compromised. Finally, the third case investigates the algorithm’s behavior under transient attacks that last for only a short period of time. In each case, we evaluate and compare the performance of both algorithms to highlight the effectiveness of the proposed detection mechanism under different adversarial scenarios.
4.2.1. Case Study 4—Localized False Data Injection
In this case study, we assume that two randomly selected agents are subjected to a high-magnitude false data injection attack after a specified time step. The objective is to demonstrate how the detection and isolation mechanism can identify and isolate the compromised agents so that their corrupted data do not propagate through the network or degrade the overall consensus-based state estimation.
The detection process on a randomly generated network is illustrated in Figure 5a. The innovation norm for each agent is compared against the dynamic threshold defined in Equation (9), and any value exceeding this threshold is flagged as compromised, as shown in Figure 5b. Figure 5c,d present the MSEE over time for the algorithms without and with FDIA detection, respectively. The results clearly show that the proposed detection and isolation method effectively mitigates the impact of the attacks, achieving lower errors than the baseline algorithm without detection. Figure 5e compares the average MSEE across all agents for both algorithms, further confirming the significant performance improvements provided by the detection mechanism under cyber–physical threats. Finally, Figure 5f illustrates the detection heat map, which correctly identifies agents 5 and 17 as under attack after the onset of the injection.
4.2.2. Case Study 5—Widespread Coordinated Attacks
In this case study, half of the agents in the communication network (Figure 6a) are randomly selected and subjected to false data injection attacks of varying magnitudes (low, medium, and high). The objective is to evaluate the algorithms under severe conditions where a large number of agents are compromised. Particular attention is paid to assessing the performance of the proposed detection algorithm when the attack magnitudes approach the detection threshold. Figure 6b displays the innovation norm measurements across all agents, showing the presence of attacks with different magnitudes. The MSEE results for both algorithms are shown in Figure 6c,d, and a comparison of the average MSEE is provided in Figure 6e. In the case of low-magnitude attacks, which are close to the detection threshold, some of the compromised agents are successfully detected and isolated, while others remain undetected and continue to influence the estimates. Even under these challenging conditions, the algorithm with the detection mechanism outperforms the baseline version that lacks detection, achieving significantly lower estimation errors. Finally, Figure 6f presents a heat map indicating all detected attacks over time, illustrating the effectiveness of the proposed algorithm in mitigating widespread coordinated false data injections.
4.2.3. Case Study 6—Transient Attacks
In this final robustness case study, we evaluate the performance of the FDIA detection algorithm under transient attacks. Unlike the previous case studies, where the attacker's influence persists throughout the entire process, this scenario considers a situation where a subset of agents in the network (Figure 7a) experiences short-duration false data injection attacks of medium to high magnitude. The corresponding innovation norm is shown in Figure 7b. Figure 7c,d present the MSEE for the algorithms without and with FDIA detection, respectively. A comparison of the average MSEE for both algorithms is displayed in Figure 7e. The results demonstrate that the algorithm with detection enabled successfully prevents transient attacks from significantly affecting the estimation process and is able to quickly detect the attacked agents, as indicated in Figure 7f.
4.3. Summary and Discussion of Results
This section summarizes the key findings from the baseline and robustness case studies presented earlier. The baseline case studies assessed the performance of Algorithm 1 under attack-free conditions. In the first baseline case, increased network connectivity led to faster convergence and lower mean-squared state estimation error, although this also increased communication overhead. In the second baseline case, the impact of the number of consensus iterations was examined. The results showed that increasing the number of iterations accelerated agreement among agents and reduced estimation errors, but required greater communication bandwidth and power consumption. In the third baseline case, the influence of the saturation filter parameters was investigated. A more conservative threshold achieved higher resilience to measurement variations, but also reduced responsiveness to legitimate changes.
The second set of case studies focused on the performance of Algorithm 1 alongside its enhanced version with FDIA detection presented in Algorithm 2, under cyber–physical threats. The fourth case introduced localized high-magnitude false data injection attacks on two agents starting at a specified time step. Algorithm 2 successfully detected and isolated the compromised agents and maintained accurate tracking estimates. The fifth case involved widespread coordinated attacks on 50% of the agents with varying attack magnitudes. The results showed that the medium- and high-magnitude attacks were reliably detected and mitigated, while a few low-magnitude attacks near the detection threshold occasionally evaded detection. Even under these challenging conditions, Algorithm 2 achieved substantially better average MSEE and tracking accuracy than Algorithm 1. Finally, the sixth case investigated the algorithms' performance under transient attacks. The enhanced algorithm responded quickly to these brief anomalies and preserved estimation performance with minimal delay.
Table 2 concisely summarizes the different scenarios investigated in this study, including both baseline and adversarial conditions explored through simulation. This table highlights the key experimental setups, such as graph connectivity, consensus steps, and attack types, along with the primary outcomes and performance observations for each case. It is important to note that all scenarios are evaluated under the assumption that a majority of agents in the network are benign, which is a standard requirement for consensus-based resilience.