**1. Introduction**

Recently, radar network systems, such as multiple-input multiple-output (MIMO) radar, have attracted great attention from academic researchers [1–5]. It has been shown that a radar network system has numerous potential advantages over traditional monostatic and bistatic radars, such as waveform diversity [1], multiplexing gain [2], and enhanced target tracking and localization performance [6,7]. For multi-target tracking in a radar network, resource allocation is of great importance for making the best use of the system's potential under limited resources, and it has received increasing attention in recent years [8–21].

An effective radar resource allocation strategy can efficiently optimize system parameters, leading to performance enhancements. It is therefore necessary to allocate the total transmit resources of the radar network reasonably. Power allocation is one crucial factor in the resource management of any radar network [8–11]. Godrich et al. [9] proposed a power allocation strategy for target localization in distributed MIMO radar systems with a two-part objective: in the first part, the total transmission power is minimized for a given accuracy requirement, while in the second part, the localization accuracy is maximized under the constraint of a given power budget. Xie et al. [10] extended this work to the more general case of unknown prior position information, which facilitates real-time applications.

A performance-driven power allocation algorithm was proposed in [11], which maximizes the achievable tracking accuracy under a given total power budget. The algorithm can be regarded as the response of the cognitive transmitter to the environment observed by the receiver in the radar network.

In addition, time resources such as revisit time and dwell time are also critical [12–14]. Dwell time optimization for target tracking was first studied in [12], where the total dwell time of the phased array radars is minimized under the premise of meeting predetermined target tracking accuracy requirements. Narykov et al. [13] employed a Markov decision process to manage the time resource for target tracking; specifically, the dwell time and revisit time are adjusted adaptively to maximize the number of tracked targets. Wang et al. [14] proposed a joint revisit and dwell time management strategy for single-target tracking, which minimizes the total time resource used for target tracking while meeting a desired tracking accuracy requirement.

However, most of the above studies focus only on single-parameter optimization. Building on this work, many joint resource management algorithms have been proposed. Yan et al. [15] proposed a joint beam selection and power allocation strategy for multiple target tracking, which allocates the limited beam and power resources of the radar network to achieve accurate target state estimation. Xie et al. [16] took two variable parameters into consideration, the number of radar nodes and the transmitted power of the radar network, and proposed a joint node selection and power allocation strategy for tracking multiple targets. A cooperative node and transmit-waveform scheduling scheme was proposed for multiple target tracking in a distributed radar network [17]; this scheme aims at minimizing the cost of waveform allocation while guaranteeing a predefined target tracking accuracy.

Although the above works provide an opportunity to deal with resource management, they pay little attention to the low probability of intercept (LPI) performance of radar network systems. With the development of passive detectors, such as the radar warning receiver (RWR), electronic warfare support (ES), the anti-radiation missile (ARM), and so on, a serious threat is posed to the radar network. As a result, the study of LPI optimization for radar network systems has attracted significant interest in recent years [18–23]. She et al. [21] proposed a sensor selection and power allocation algorithm for multi-target tracking, which reduces the total transmitted power under a target tracking accuracy constraint in order to improve the LPI performance of the radar network. A joint transmitter selection and resource management strategy based on LPI was proposed in [22], which controls the transmitting resources while meeting a specified target-tracking accuracy requirement. In general, the above works have put forward the idea of joint resource management for LPI performance in radar network systems, laying a foundation for further study.

For multi-target tracking in a radar network, the information from each monostatic component must be gathered at the fusion center for fusion and processing. However, the data processing rate is commonly limited. Therefore, in order to process all the measurement data before the next observation time and feed it back to the radar transmitter in time, it is necessary to strictly control the total amount of data, which is related to the bandwidth of the transmitted waveform. Furthermore, the target tracking accuracy is also related to the bandwidth of the radar-transmitted signal. Garcia et al. [24] were the first to take the signal bandwidth into account, proposing a joint power and bandwidth allocation (JPBA) method that maximizes the localization accuracy of a single target. Yan et al. [25] extended the JPBA strategy to the target tracking scenario, where signal bandwidth is allocated to meet real-time processing requirements. In short, bandwidth allocation is also one of the critical factors that needs to be considered in the resource management of radar transmission.

However, to the best of our knowledge, the problem of jointly allocating dwell time and bandwidth to optimize the LPI performance for multi-target tracking in a radar network has never been considered, and it needs to be analyzed in detail.

In this paper, an LPI-based joint dwell time and bandwidth allocation optimization strategy for a radar network is proposed. The strategy adaptively adjusts the radar selection, dwell time, and signal bandwidth allocation according to the target motion characteristics at each observation moment. Since the Bayesian Cramér–Rao lower bound (BCRLB) combines the revisit time, dwell time, target RCS, transmit signal bandwidth, and other variables, it offers insight into the effect of these parameters on the tracking performance. Consequently, we utilize the BCRLB as the accuracy metric for target tracking. For a predefined target tracking accuracy threshold, the resulting problem is to minimize the total dwell time by optimizing the radar selection, dwell time, and transmit signal bandwidth. An efficient two-step method is then proposed to solve this problem. Finally, two different RCS cases are considered to verify the superiority of the proposed strategy.
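Schematically, such a constrained minimization can be written as follows; the notation here is purely illustrative (the symbols are placeholders introduced for this sketch, and the formal problem is stated in Section 3.2):

```latex
\begin{align}
\min_{u_{n,k},\, t_{n,k},\, \beta_{n,k}} \quad & \sum_{n=1}^{N} u_{n,k}\, t_{n,k} \\
\text{s.t.} \quad & \mathcal{F}_{q}\!\left(u_{n,k},\, t_{n,k},\, \beta_{n,k}\right) \le \eta_{\max}, \quad q = 1,\dots,Q, \\
& t_{\min} \le t_{n,k} \le t_{\max}, \qquad \beta_{\min} \le \beta_{n,k} \le \beta_{\max}, \\
& u_{n,k} \in \{0,1\}, \quad n = 1,\dots,N,
\end{align}
```

where $u_{n,k}$ indicates whether radar $n$ is selected at observation moment $k$, $t_{n,k}$ and $\beta_{n,k}$ denote its dwell time and signal bandwidth, $\mathcal{F}_{q}(\cdot)$ is the BCRLB-based tracking error for target $q$, and $\eta_{\max}$ is the predefined tracking accuracy threshold.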

The remainder of this paper is organized as follows. The system model is introduced in Section 2. Section 3 presents the joint dwell time and bandwidth optimization strategy: in Section 3.1 we derive the BCRLB as the performance metric of target tracking accuracy; the LPI performance optimization problem based on the BCRLB is then formulated in Section 3.2; and a method based on the nonlinear programming-based genetic algorithm (NPGA) is proposed in Section 3.3 to solve this problem. Simulation results are provided in Section 4. Finally, conclusions are drawn in Section 5.
