1. Introduction
Waveform diversity and multiantenna technologies are utilized in multiple-input multiple-output (MIMO) radars to improve their angular resolution, antijamming ability, and other target detection abilities [1,2,3,4]. By transmitting orthogonal waveform sets, a MIMO radar can separate the received waveforms transmitted by different antennas. Generally, a MIMO radar uses a matched filter bank to process the echoes. Thus, the cross-correlation functions among the orthogonal waveforms should be as low as possible. Meanwhile, in order to achieve good pulse compression performance, the autocorrelation side-lobes should also be as low as possible.
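The role of the matched filter bank can be illustrated with a minimal Python sketch: two random continuous-phase codes stand in for a designed orthogonal pair (the length N = 256 and the seed are arbitrary choices, not values from this article). The filter matched to one waveform should produce a strong zero-lag peak for that waveform and only a low residual for the other:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256  # code length (illustrative value, not from the article)

# Two random continuous-phase codes standing in for an orthogonal pair
x1 = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
x2 = np.exp(1j * rng.uniform(0, 2 * np.pi, N))

# Matched filter for x1 applied to each waveform (aperiodic correlation;
# np.correlate conjugates its second argument)
auto = np.correlate(x1, x1, mode="full")   # zero-lag peak equals N
cross = np.correlate(x2, x1, mode="full")  # stays well below N

peak_auto = np.abs(auto).max()
peak_cross = np.abs(cross).max()
print(peak_auto, peak_cross)
```

Random codes already give a clear gap between the two peaks; the designed waveform sets discussed below push the cross-correlation peak lower still.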
Designing multiple different phase-coding sequences with low cross-correlation is one of the most common ways to realize an orthogonal waveform set [5,6,7]. With a large number of transmitted waveforms, a phase-coded waveform set is difficult for traditional jammers to intercept [8]. The integrated side-lobe level (ISL) and peak side-lobe level (PSL) are two commonly used metrics for orthogonal MIMO radar phase-coded waveform sets [9,10,11]. Aiming to minimize the ISL, algorithms such as multicyclic-algorithm-new (multi-CAN [12]), majorization–minimization correlation (MM-Corr [13]), ISL-New [14], and the alternating direction method of multipliers (ADMM [15]) have been proposed, although they cannot minimize the PSL. PSL minimization is more complex and harder than ISL minimization. Thus far, some researchers have proposed effective PSL optimization algorithms [16,17,18,19,20], with the primal-dual-based method achieving the best performance [18]. In particular, the p-MM algorithm can obtain almost the same PSL metrics as primal-dual, with much lower ISL metrics [19]. All the abovementioned ISL and PSL optimization algorithms lead to a single set of orthogonal waveforms. With a well-designed orthogonal phase-coded waveform set, a MIMO radar achieves a high waveform diversity gain and a low probability of intercept.
With the advent of digital radio frequency memory (DRFM) [21,22], more advanced jamming technologies have been developed rapidly. DRFM-based jammers pose a great threat to MIMO radar: they can read and copy intercepted waveforms and apply diverse parameter modulations (delay, Doppler, etc.) within a short time. Therefore, DRFM-based jammers may seriously affect the operational capabilities of MIMO radars in the future.
Transmitting agile waveforms is an effective way to combat modern DRFM-based jamming [23,24]. Although DRFM-based jammers have a strong ability to intercept waveforms, they must delay at least one pulse repetition time (PRT) to complete jamming-signal generation. If the correlation between adjacent pulses is low, it is difficult for DRFM-based jammers to interfere with the waveforms. From the perspective of orthogonality, MIMO radars should transmit an orthogonal waveform set within each pulse, and the waveform sets transmitted in different pulse intervals should also be orthogonal to each other. Thus, even if jammers have intercepted the waveforms in the previous pulse, they cannot interfere with the subsequent pulses. Therefore, in this article, the MIMO radar antijamming agile waveform is modeled as multiple groups of orthogonal waveform sets. The abovementioned PSL and ISL optimization algorithms can be used to design all groups of waveforms directly; however, they cannot finely control the intra- and intergroup orthogonal performance.
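The one-PRT latency argument can be made concrete with a small Python sketch, in which two independent random phase codes stand in for the waveform sets transmitted in consecutive pulses (all sizes are illustrative). A repeat jammer that retransmits the previous pulse produces only a low response in the filter matched to the current pulse:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256  # code length (illustrative)

# Codes transmitted in two consecutive PRTs of a pulse-agile scheme
pulse1 = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
pulse2 = np.exp(1j * rng.uniform(0, 2 * np.pi, N))

# A DRFM repeat jammer needs at least one PRT, so during pulse 2 it can
# only retransmit a copy of pulse 1.
jam = pulse1

# Receiver matched to the current pulse (pulse 2)
target_peak = np.abs(np.correlate(pulse2, pulse2, mode="full")).max()
jam_peak = np.abs(np.correlate(jam, pulse2, mode="full")).max()
print(target_peak, jam_peak)
```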
In order to balance the intra- and intergroup orthogonal performances, an optimization model for designing group orthogonal waveforms is proposed in this article, which maximizes the waveform diversity gain when diverse DRFM-based jamming signals are encountered and thereby overcomes the above shortcoming of existing methods. Considering that target detection is based on correlation peak detection, the objective function of the proposed model is formulated from the following two aspects. The first is to minimize the cross-correlation peaks and autocorrelation side-lobe peaks within each group of orthogonal waveforms, which can be evaluated by the traditional PSL metric. The second is to minimize the cross-correlation peaks among different groups of orthogonal waveforms, which can be evaluated by the peak cross-correlation level (PCL) metric. The objective function of the proposed model is the weighted sum of the PSL and PCL metrics, which evaluate the MIMO waveform diversity gain and the jamming suppression performance, respectively.
In order to solve the proposed optimization model, a group orthogonal waveform optimal design algorithm is proposed in this article. Minimizing the correlation function peaks in the proposed model is difficult. Based on some relaxations, ref. [18] divided the original PSL minimization problem into a series of convex subproblems. Inspired by the relaxation approach in [18], in this article, the intra- and intergroup correlation functions are processed separately and the proposed optimization model is transformed into a series of solvable subproblems. Finally, the proposed algorithm generates solutions to each subproblem. The numerical results show that the proposed algorithm can effectively design multiple groups of orthogonal waveform sets with low cross-correlation functions between the sets. The antijamming simulation results demonstrate that the designed group orthogonal waveforms can balance the DRFM-based deceptive jamming suppression and range compression performances of MIMO radar by adjusting the weighting factors.
2. Problem Formulation
The optimization variables in group orthogonal waveform design are G groups of waveforms, with each group consisting of M waveforms. Considering phase-coded waveform design, group orthogonal waveforms with intra- and intergroup orthogonality can be realized by assigning different phase-coded sequences to different waveforms. Since the pulse length, chip length, carrier frequency, and other radar system parameters have little effect on the correlation function side-lobe level, in this article, the group orthogonal waveforms {xi}, i = 1, 2,…, GM, are modeled as phase-coded sequences

xi[n] = exp(jφi[n]), n = 1, 2,…, N,

where N is the sequence length and the phase values φi[n] vary continuously, satisfying φi[n] ∈ [0, 2π). The correlation functions of {xi} are defined as

rij[k] = Σn xi[n + k] xj*[n],

where rij[k] is the auto- (i = j) or cross- (i ≠ j) correlation function and (·)* represents the complex conjugate. The set of the indexes (i, j, k) of the intra- and intergroup correlation functions is denoted by Ω.
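Assuming the standard aperiodic correlation above, the definition maps directly onto np.correlate, which conjugates its second argument; the following sketch (with an arbitrary length N = 64) checks that the zero-lag autocorrelation of a unit-modulus code equals N:

```python
import numpy as np

def corr(xi, xj):
    """Aperiodic correlation over all lags k = -(N-1), ..., N-1; entry
    (N-1)+k holds sum_n xi[n+k] * conj(xj[n]) (np.correlate conjugates
    its second argument)."""
    return np.correlate(xi, xj, mode="full")

rng = np.random.default_rng(2)
N = 64
x = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # one random phase code
r = corr(x, x)
zero_lag = np.abs(r[N - 1])
print(zero_lag)  # autocorrelation peak of a unit-modulus code equals N
```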
To evaluate the group orthogonal waveforms’ correlation properties, we consider the intra- and intergroup correlation functions separately. We use the traditional peak side-lobe level (PSL) metric [9,10,11] to evaluate the waveforms within each group. The PSL metric for the g-th group is defined as

PSLg = max(i,j,k)∈Ωg |rij[k]|,

where the set Ωg contains the correlation function indexes within the g-th group of waveforms, except for the indexes of the autocorrelation peak values that are equal to N. Because there are only cross-correlation functions between different groups of waveforms, the following peak cross-correlation level (PCL) is defined to evaluate the intergroup cross-correlation peak:

PCL = max(i,j,k)∈Ωc |rij[k]|.
The set in Equation (6) contains all the indexes of intergroup cross-correlation functions.
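Under the same assumed aperiodic correlation definition, the PSL and PCL metrics can be evaluated with a short Python routine; the group sizes below are illustrative, and random phase codes stand in for designed waveforms:

```python
import numpy as np

def xcorr(a, b):
    # Full aperiodic correlation; np.correlate conjugates its second argument
    return np.correlate(a, b, mode="full")

def psl(group):
    """PSL of one group: peak |r_ij[k]| over intra-group correlations,
    excluding the autocorrelation main lobes (which equal N)."""
    N = len(group[0])
    peaks = []
    for i, xi in enumerate(group):
        for j, xj in enumerate(group):
            r = np.abs(xcorr(xi, xj))
            if i == j:
                r[N - 1] = 0.0  # remove the zero-lag main lobe
            peaks.append(r.max())
    return max(peaks)

def pcl(groups):
    """PCL: peak |r_ij[k]| over all cross-correlations between groups."""
    return max(np.abs(xcorr(xi, xj)).max()
               for gi, g1 in enumerate(groups)
               for gj, g2 in enumerate(groups) if gi != gj
               for xi in g1 for xj in g2)

rng = np.random.default_rng(3)
G, M, N = 2, 2, 128  # illustrative sizes, not from the article
groups = [[np.exp(1j * rng.uniform(0, 2 * np.pi, N)) for _ in range(M)]
          for _ in range(G)]
psl0, pcl_all = psl(groups[0]), pcl(groups)
print(psl0, pcl_all)  # both far below the main-lobe value N
```

The optimization problem below drives these two numbers down jointly, with a weight controlling their trade-off.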
The objective of group orthogonal waveform design is to minimize the PSL1, PSL2,…, PSLG and PCL metrics, which are functions of the optimization variables {xi}. Thus, the optimization model can be expressed as the multiobjective problem (7), where f represents the objective function vector, which contains the convolution and max(·) operations. In addition, the waveforms {xi} have constant modulus constraints. The optimization model (7) is therefore a complex minimax problem with nonconvex constraints. In order to simplify problem (7), we introduce the variables ε1, ε2,…, εG and γ to constrain the PSL1, PSL2,…, PSLG and PCL metric values, respectively. Meanwhile, in order to balance the intra- and intergroup orthogonal performances, we introduce a weighting factor w, thereby transforming the original problem into the following single-objective problem.
Note that the convolution operations and constant modulus constraints in problem (8) are not conducive to solving this problem. Inspired by ref. [18], we introduce the auxiliary variables {hi} and ρij(k) to decompose the complex nonlinear convolutions. Then, the correlation function constraints are replaced by the following linear equality constraints:
In Equation (9), if hi = xi holds for all i = 1, 2,…, GM, then ρij(k) is equal to the correlation function rij[k]. The correctness of this equivalent constraint in Equation (9) follows from the proposition below.
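This substitution can be checked numerically. The sketch below assumes ρij(k) is the inner product of hi with an aperiodic delayed copy of xj (a plausible reading of the unreproduced Equation (9)); with hi = xi, it reproduces rij[k] exactly:

```python
import numpy as np

def delayed_copy(x, k):
    """Aperiodic delayed copy x_{j,k}: x shifted by k samples and
    zero-padded (an assumed reading of the paper's x_{j,k})."""
    N = len(x)
    out = np.zeros(N, dtype=complex)
    if k >= 0:
        out[k:] = x[:N - k]
    else:
        out[:N + k] = x[-k:]
    return out

rng = np.random.default_rng(4)
N = 32
xi = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
xj = np.exp(1j * rng.uniform(0, 2 * np.pi, N))

hi = xi.copy()  # h_i = x_i, as in the equivalence argument
k = 5
rho = np.vdot(delayed_copy(xj, k), hi)            # <x_{j,k}, h_i>
r = np.correlate(xi, xj, mode="full")[N - 1 + k]  # r_ij[k]
print(abs(rho - r))
```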
Proposition 1. The constraints in Equation (9) are equivalent to the constraint hi = xi for all i = 1, 2,…, GM.
Proof of Proposition 1. According to the Cauchy–Schwarz inequality, inequality (10) holds. If the constraint in Equation (9) holds, then inequality (10) is attained with equality. Using the equality condition of the Cauchy–Schwarz inequality, we obtain Equation (11). According to Equation (11) and the remaining constraints, it is not difficult to find that hi = xi, which completes the proof. □
By introducing the variables {hi} and ρij(k), the nonlinear constraints are relaxed to linear convex constraints. However, the constant modulus constraints on {xi} in problem (8) are still coupled with its other constraints, which leads to difficulties in solving for {xi}. Therefore, we introduce the variables {yi} to simplify the subproblem regarding {xi}. The following conditions are added to ensure that the obtained waveforms have a constant modulus.
Then, the constant modulus constraints are transferred to the subproblem regarding {yi}, which is easy to solve even with constant modulus constraints, while the subproblem regarding {xi} becomes an unconstrained convex problem.
After introducing this series of auxiliary variables into the optimization model (8), we formulate the following group orthogonal waveform optimal design model:
The weighting factor w balances the intra- and intergroup correlation peak values. The optimization variables include {xi}, {hi}, {yi}, ρij(k), γ, and εg, g = 1, 2,…, G. Although the dimensions of the optimization variables increase, the intractable nonlinear and nonconvex constraints in the optimization model (8) are relaxed into a series of linear and convex constraints.
3. Proposed Group Orthogonal Waveform Design Algorithm
Based on the characteristics of the objective function and the constraints in the optimization model (13), we formulate an augmented Lagrange function [25] and transform problem (13) into the following constrained minimization problem:
The augmented Lagrange function L can be expressed as follows:
where Re[·] represents the real part of a complex number; θ1, θ2, and θ3 are penalty parameters; and {λi}, {αijk}, and {βin} are Lagrangian multipliers, i.e., dual variables. To solve problem (14), we propose a group orthogonal waveform design algorithm based on a primal-dual type method. The proposed algorithm decomposes problem (14) into a series of simple subproblems. By sequentially updating the optimization variables and the dual variables, the proposed algorithm minimizes the augmented Lagrange function L in Equation (15) over several iterations. According to the optimization model (13), the variables {xi}, {hi}, and {yi} converge to the same point, and the iteration terminates once they are sufficiently close. Algorithm 1 summarizes the proposed algorithm. The subproblems of the proposed group orthogonal waveform design algorithm can be expressed as follows:
Parameter c in Equation (20) is the step length. Superscript (l) represents the values of the variables at the l-th iteration. Except for subproblem 1, subproblems 2, 3, and 4 are actually the same as in the primal-dual algorithm [18] and can be solved using similar methods. In the remainder of this section, methods to solve these subproblems are introduced.
Algorithm 1 Group Orthogonal Waveform Design Algorithm
Initialization: Randomly select {xi} (constant modulus is not required). Set the constant-modulus {yi} using the phases of {xi}. Randomly select {λi}, {αijk}, {βin} and the remaining auxiliary variables; set l = 0.
Repeat
 Compute ρij(k), εg, and γ by solving subproblem (16).
 Compute {hi} by solving subproblem (17).
 Compute {xi} by solving subproblem (18).
 Compute {yi} by solving subproblem (19).
 Update the dual variables using (20); l = l + 1.
Until the convergence condition is met.
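The overall structure of Algorithm 1 (sequential primal updates followed by a dual ascent step on an augmented Lagrangian) can be illustrated on a much smaller problem. The Python toy below is not the algorithm of this article: it merely finds a constant-modulus sequence close to an arbitrary target via a scaled ADMM loop, mirroring the update pattern of (16)-(20):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 64
# Target with amplitudes near 1, keeping this nonconvex toy well behaved
t = (1 + 0.25 * rng.uniform(-1, 1, N)) * np.exp(1j * rng.uniform(0, 2 * np.pi, N))

theta = 1.0                      # penalty parameter (cf. theta_1..theta_3)
x = t.copy()                     # unconstrained primal variable
y = np.exp(1j * np.angle(t))     # constant-modulus variable
u = np.zeros(N, dtype=complex)   # scaled dual variable

for _ in range(200):
    # Primal updates (analogues, in spirit, of subproblems 2-4)
    x = (2 * t + theta * (y - u)) / (2 + theta)  # unconstrained quadratic step
    y = np.exp(1j * np.angle(x + u))             # constant-modulus projection
    # Dual ascent step (analogue of update (20))
    u = u + (x - y)

gap = np.abs(x - y).max()
print(gap)  # x and y agree at convergence, so the result is constant modulus
```

The same pattern scales up in the actual algorithm, where each primal step is one of the subproblems detailed below.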
3.1. Solving Subproblem 1
According to the augmented Lagrange function (15), subproblem (16) can be separated into two independent parts as follows:
where the grouped terms are collected from the augmented Lagrange function (15). Considering problem (21), when εg is fixed, the optimal solution of ρij(k) for every index (i, j, k) can be expressed in the closed form of Equation (23), which is a function of εg. According to Equation (23), problem (21) is equivalent to a one-dimensional problem in εg, in which the remaining condition can be ignored. The optimal εg value can then be determined by solving the following equation:
Because its left-hand side is a monotonically increasing function of εg, the optimal εg value can be obtained efficiently via the bisection method. Similarly, the optimal solution γ of problem (22) can be obtained by solving the following equation:
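The bisection step can be sketched as follows; the clipped-sum function fn below is a hypothetical stand-in with the same monotonically increasing shape, not the actual εg equation:

```python
import numpy as np

def bisect_increasing(fn, lo, hi, tol=1e-10):
    """Zero of a monotonically increasing scalar function by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fn(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical stand-in for the epsilon_g equation: monotonically
# increasing, built from clipped correlation-like terms.
rho = np.array([0.8, 1.5, 2.2, 0.3])
fn = lambda eps: eps - 0.2 * np.maximum(rho - eps, 0.0).sum()
eps_star = bisect_increasing(fn, 0.0, 10.0)
print(eps_star)
```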
3.2. Solving Subproblem 2
According to Equation (15), subproblem (17) is separable for each hi, i = 1, 2,…, GM. The minimization problem can be expressed as follows:
where xj,k represents the aperiodic delayed copy of the discrete signal xj, defined in Equations (29) and (30) for nonnegative and negative delays k, respectively. The augmented Lagrange function of problem (28) can be expressed as follows:
The Karush–Kuhn–Tucker (KKT) conditions for problem (28) are as follows:
Obviously, when the multiplier λ equals zero, KKT condition (32) is equivalent to condition (33). If condition (33) is true, then λ = 0 is optimal; otherwise, the optimal solution should be found under the condition λ > 0. When λ > 0, condition (32) is equivalent to Equation (34). In order to solve Equation (34), the value of λ* should be determined. According to Equation (31), for any fixed value of λ, the optimal v(λ) minimizing Li(v(λ), λ) is given below.
According to function (31), after summing the two inequalities in (36), the resulting function is found to be monotonically decreasing in λ. Solving the KKT conditions when λ > 0 is therefore equivalent to finding the zero of this function, which can be achieved via the bisection method.
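The same KKT-plus-bisection pattern appears in a simpler, fully workable form in norm-constrained least squares, which the following sketch uses as a stand-in for the hi subproblem (the data and the constraint radius are random and illustrative):

```python
import numpy as np

def solve_norm_constrained(b, radius2):
    """min ||v - b||^2  s.t.  ||v||^2 <= radius2, via KKT + bisection.
    (A simplified stand-in, not the exact h_i subproblem.)"""
    if np.vdot(b, b).real <= radius2:
        return b, 0.0                      # constraint inactive: lambda* = 0
    # Active case: v(lam) = b / (1 + lam); g is monotone decreasing in lam
    g = lambda lam: np.vdot(b, b).real / (1 + lam) ** 2 - radius2
    lo, hi = 0.0, 1.0
    while g(hi) > 0:                       # bracket the zero
        hi *= 2
    for _ in range(100):                   # bisection on the monotone g
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return b / (1 + lam), lam

rng = np.random.default_rng(6)
b = 3.0 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))
v, lam = solve_norm_constrained(b, 16.0)
print(np.vdot(v, v).real, lam)
```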
3.3. Solving Subproblem 3
According to Equation (15), subproblem (18) can be separated into the following unconstrained convex problem for each xi, i = 1, 2,…, GM:
where hj,k represents the aperiodic delayed copy of hj, similar to xj,k in Equations (29) and (30). It is easy to determine that Ci in Equation (39) is an N × N positive definite matrix. Therefore, subproblem (38) is an unconstrained quadratic optimization problem, whose optimal solution can be found efficiently via conjugate gradient methods.
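A standard conjugate gradient solver for a Hermitian positive definite system, of the kind this step requires, can be sketched as follows (the matrix below is a random illustrative example, not the Ci of Equation (39)):

```python
import numpy as np

def conjugate_gradient(C, d, tol=1e-16):
    """Solve C x = d for Hermitian positive definite C."""
    x = np.zeros_like(d)
    r = d - C @ x          # residual
    p = r.copy()           # search direction
    rs = np.vdot(r, r).real
    for _ in range(len(d)):
        Cp = C @ p
        alpha = rs / np.vdot(p, Cp).real   # real for Hermitian PD C
        x = x + alpha * p
        r = r - alpha * Cp
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(7)
N = 8
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
C = A.conj().T @ A + N * np.eye(N)   # Hermitian positive definite
d = rng.standard_normal(N) + 1j * rng.standard_normal(N)
x = conjugate_gradient(C, d)
print(np.abs(C @ x - d).max())  # residual is near machine precision
```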
3.4. Solving Subproblem 4
According to Equation (15), subproblem (19) can be separated for each yi[n], i = 1, 2,…, GM, n = 1, 2,…, N, as follows:
where the coefficients are collected from Equation (15). The solution of subproblem (41) can then be expressed in closed form.
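Constant-modulus subproblems of this kind typically admit a closed-form phase projection; the sketch below shows this minimizer for min |y − z|² subject to |y| = 1 and verifies its optimality numerically (the data are random and illustrative):

```python
import numpy as np

def project_constant_modulus(z):
    """Closed-form solution of min_y |y - z|^2 s.t. |y| = 1: keep the phase
    of z (elementwise)."""
    return np.exp(1j * np.angle(z))

rng = np.random.default_rng(8)
z = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = project_constant_modulus(z)

# Verify optimality against random unit-modulus candidates (elementwise)
cands = np.exp(1j * rng.uniform(0, 2 * np.pi, (1000, 5)))
best_rand = np.abs(cands - z).min(axis=0)
print(np.abs(np.abs(y) - 1).max(), (np.abs(y - z) <= best_rand + 1e-12).all())
```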