**1. Introduction**

The market of small unmanned aerial vehicles (UAVs)—commonly called drones—is developing very fast [1], stimulated by an excellent quality-to-price ratio: for a relatively small amount, one can buy a drone with very good flight performance that shoots high-quality video from the air. Unfortunately, drones can be used not only for commercial, research, surveillance, recreational, or other legal activities but also as a tool of crime. Sometimes this is intentional, as in the case of drug smuggling across prison walls or dropping explosive materials [2]. At other times, inappropriate and dangerous use of drones is caused by curiosity or a lack of imagination. Camera-equipped UAVs can invade people's privacy or threaten their lives. For example, a report by Bard College's Center for the Study of the Drone [3] found 327 incidents between December 2013 and September 2015 (USA only) that posed a "proximity danger", where an unmanned aircraft came within 500 feet (152.4 m) of a plane, helicopter, or other manned aircraft (including multiengine jet aircraft). These and similar incidents have caused many institutions to start thinking about anti-drone solutions. For objects permanently threatened by drones, for example, airports or buildings crucial for national security, stationary anti-drone systems can be justified. However, in many cases a permanent solution seems pointless because the high cost of the system does not correspond with the threat frequency, for example, at mass events or meetings with very important persons (VIPs). For protecting people and objects in such cases, mobile systems (vehicle-mounted or man-carried) perfectly meet the demands. As examples of such solutions, the anti-drone systems Hawk by Hertz Systems Ltd (Poland) [4] and Radar Backpack Kit by SpotterRF (USA) [5] are presented in Figure 1.

Hawk by Hertz Systems Ltd http://thehawksystem.com/pl/

Radar Backpack Kit by SpotterRF https://spotterrf.com/products/mobilesolutions/

**Figure 1.** Examples of mobile anti-drone systems.

Many scientists and engineers involved in the development of anti-drone systems agree that a multi-sensor system can provide a higher probability of drone detection than a single sensor [6–12]. Experiments also suggest that one of the most efficient detectors of UAVs is a frequency modulated continuous wave (FMCW) radar [6–12]. It has a long detection range, transmits low power [13], and is independent of the electromagnetic signals and sounds emitted by drones (a UAV operating in a fully autonomous mode and gliding can be detected, too).

Active radars take measurements in a local polar coordinate system, in which the distance and azimuth of the detected object are determined. Three-dimensional radars also provide information about the object's elevation. This means that the accuracy of determining the drone's localization in a global coordinate system depends not only on the radar performance but also on the accuracy of the radar coordinates and of its spatial orientation relative to the local horizon and the north. Admittedly, precise Global Navigation Satellite System (GNSS) positioning, for example, Real Time Kinematic (RTK), can solve the problem of accurate localization. Orientation angles can also be determined relatively easily in the case of a stationary system. However, there are difficulties in mobile solutions, especially if the system changes location during the mission. Obviously, some sophisticated techniques, like Moving Base RTK, can provide precise orientation angles; however, using them is not always possible. They also do not ensure the expected accuracy in every condition, and they require additional devices and often specialized knowledge. Although a magnetic compass could be a low-cost and convenient solution, it is not accurate enough if we consider that accuracy better than 1° is required. Due to magnetic variation, magnetic deviation, errors connected with the radar and magnetic sensor installation on a vehicle, and the method of measurement, accuracy better than 20° is difficult to achieve in a real application.

Long-range FMCW radars used in non-military anti-drone systems have detection ranges for micro-drones (of DJI Phantom size) not exceeding 3 km, due to the small radar cross section (RCS) of such objects [14–17]. However, it is the classification range that has practical meaning, as anti-drone radars also detect birds and other small objects. Determining whether the object is a drone allows relevant actions to be taken. The most common method of classification is micro-Doppler signature (MDS) analysis, which investigates the micromotions of the object detected by the radar. The contributions of the rotating turbine or propeller blades to the radar backscattering, in the form of micro-Doppler content, indicate that there are rotating parts on the object. On this basis, the method distinguishes a drone from a bird [18–23]. The classification range is the range within which the classifier works accurately. When MDS analysis is used as the classifier, the classification range is about 50% of the detection range, because the RCS of the rotating turbine or propeller blades is smaller than the RCS of the drone body. Examples of the detection and classification characteristics of the DJI Phantom 4 drone for one of the anti-drone radars are presented in Figure 2. It should be noted that this is currently one of the radars with the longest micro-drone detection and classification ranges available on the commercial market.

**Figure 2.** DJI Phantom 4 drone detection (upper) and classification (lower) ranges for one of the FMCW radars available on the commercial market (the characteristics were obtained from the manufacturer, who asked not to disclose the name and model of the radar for obvious reasons).

Considering the real range of non-military FMCW radars and the fact that effective protection of an area or object requires detection and classification of the drone several hundred meters before the forbidden zone, it is often necessary to use several mobile systems operating in a network. One of the key conditions for their proper cooperation is the measurement of azimuths relative to the same direction (preferably geographical north, also called true north, or simply the north). Otherwise, errors will occur in determining the actual location of the drone. In a single-radar system, a large error in determining the coordinates of the detected object may result in its incorrect classification as an object "violating" or "not violating" the forbidden area (see Figure 3) and, consequently, an inadequate response. In network systems, where several radars operate in the same area and their coverages partly overlap, there will be additional uncertainty concerning the location of the detected drone—each of the radars will indicate different coordinates for the same object (see Figure 4). Moreover, one will not know how many objects are really in the area. A single drone can be classified as a swarm or vice versa. All the described cases are very undesirable, and their elimination is necessary to ensure the proper functioning of the anti-drone system.

Due to the fact that non-military anti-drone systems are still developing and many of them are at a low Technology Readiness Level (TRL), similarly to some of the radar solutions in this area [3,24], the market usually offers single radars or radar systems integrated on one platform or assembly set. This ensures that the measurements are made in the same local coordinate system (not necessarily oriented to the north). In that case, the problem of azimuth misalignment does not exist, because all the devices make the same constant error in azimuth measurements. However, if single radars are integrated into a bigger system, where sensors are distributed over a wide area, bringing all the measurements to a common coordinate system is required. This means that a reduction of the azimuth misalignment is necessary. Otherwise, after the transformation of the local coordinates of the detected object to the global ones, different results will be obtained, depending on the azimuth error of the particular radar (see Figure 4).

**Figure 3.** An example of drone misclassification due to a large azimuth error. The radar indicates "GHOST DRONE" (blue rectangle with a white background), and the system classifies it as an object that does not violate the airport zone. In fact, "DRONE" (red rectangle with a yellow background) violates the forbidden zone.

A co-author of this paper has participated in a research project granted by the Polish National Center for Research and Development since 2017. One of its aims is to develop an anti-drone system consisting of many mobile sensors (mounted on vehicles) spread over a wide area. Experiments already performed in a real environment revealed the problem of azimuth misalignment of particular sensors. It causes improper operation of the whole system and recurs every time the vehicles change location. As an example, a screenshot from the console of the system under development is shown in Figure 4.

**Figure 4.** An example of improper operation of the system due to azimuths misalignment. The system detected and was tracking three objects instead of one.

In Figure 4, it can be seen that one object is identified as three by the system due to azimuth misalignment. Radars one, two, and three made constant azimuth errors of +5.1°, −10.3°, and +14.5°, respectively, relative to the north. The errors were determined using GNSS RTK on the drone and on the radars.

An azimuth misalignment of 5° produces a drone coordinate error of about 87.5 m at a distance of 1 km (see Figure 5). It means that two radars will determine coordinates of the same object differing by 87.5 m. As already mentioned, in mobile systems (especially vehicle-mounted ones), it is difficult to obtain radar azimuth alignment better than 20°. Then, the coordinate error will be 364 m (see Figure 5) at a distance of 1 km. Doubtless, this will disturb the proper operation of the system. Moreover, variable errors of the azimuth measurement of both radars will cause a further increase in the drone position uncertainty. This is quite a serious issue and must be taken into account.

**Figure 5.** The additional error of drone coordinates caused by the azimuth misalignment.
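The dependence shown in Figure 5 follows directly from the geometry: the cross-range error is the distance multiplied by the tangent of the azimuth error. A minimal sketch reproducing the two values quoted above (the function name is ours):

```python
import math

def cross_range_error(distance_m: float, azimuth_error_deg: float) -> float:
    """Cross-range position error of the detected object caused by a
    constant azimuth misalignment, at a given distance from the radar."""
    return distance_m * math.tan(math.radians(azimuth_error_deg))

# A 5 deg misalignment at 1 km displaces the position estimate by ~87.5 m,
# a 20 deg misalignment by ~364 m.
print(round(cross_range_error(1000.0, 5.0), 1))   # 87.5
print(round(cross_range_error(1000.0, 20.0), 1))  # 364.0
```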

As mentioned earlier, for a permanent system the azimuth misalignment can be reduced using, for example, geodetic methods. However, radars in a mobile system can change location during the mission. Therefore, in new places, they must be aligned to the rest of the system as quickly as possible. To the best knowledge of the authors, this issue has not been described in publications yet (probably because commercial multi-radar mobile systems are still at a low TRL). This statement is based on an unsuccessful search for a description of a suitable method, where 'suitable' means giving the desired accuracy in an acceptable time; the authors are nevertheless aware that this claim may be somewhat of an overstatement. Research done by the authors shows that, at the moment, in practical applications, each of the radars is only roughly set separately using magnetic sensors, and more advanced devices, such as gyrocompasses, GNSS RTK, or satellite compasses, are not applied in non-military mobile systems. Such a method does not ensure the desired accuracy.

This work presents a proposal to solve the problem of azimuth misalignment: an initial/coarse orientation of the radar system to the north using a magnetic compass, followed by a calibration procedure in which corrections to the azimuths measured by individual radars are calculated automatically. The calibration process uses a "friend" drone flying in autonomous mode along a fixed route. The position of the drone does not have to be known, and the drone does not need to communicate with the system. The proposed method is simple, automatic, and requires no specialized knowledge. It gives very good results and can be successfully applied in real applications.

#### **2. Proposed Method of Reducing the Azimuths Misalignment**

In an ideal situation (no measurement errors), the coordinates of the drone detected by the radar are expressed by the following system of equations:

$$\begin{cases} x_{D\_ENU} = x_{R\_ENU} + D_{2D}\cdot\sin(A) \\ y_{D\_ENU} = y_{R\_ENU} + D_{2D}\cdot\cos(A) \end{cases} \tag{1}$$

where:

*x<sub>D_ENU</sub>*, *y<sub>D_ENU</sub>* - coordinates of the drone in the ENU (East–North–Up) system,
*x<sub>R_ENU</sub>*, *y<sub>R_ENU</sub>* - coordinates of the radar in the ENU system,
*D<sub>2D</sub>* - horizontal (2D) distance between the radar and the drone,
*A* - azimuth of the drone measured by the radar relative to the true north.
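System (1) can be sketched directly in code; this is an illustrative snippet only (the function name is ours), not part of any radar software:

```python
import math

def drone_position_enu(x_r: float, y_r: float,
                       d_2d: float, azimuth_deg: float) -> tuple:
    """Equation (1): drone ENU coordinates from the radar position, the
    horizontal distance, and the azimuth measured clockwise from north."""
    a = math.radians(azimuth_deg)
    return x_r + d_2d * math.sin(a), y_r + d_2d * math.cos(a)

# A drone 1000 m due east of a radar at the ENU origin (azimuth 90 deg):
x, y = drone_position_enu(0.0, 0.0, 1000.0, 90.0)
print(round(x, 1), round(y, 1))  # 1000.0 0.0
```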
However, in real conditions, all quantities occurring on the right side of equations in the system (1) are affected by errors. Therefore, (1) will take the form

$$\begin{cases} \hat{x}_{D\_ENU} = x_{R\_ENU} + \delta_{RP\_ENU} + (D_{2D} + \delta_{D\_2D})\cdot\sin(A + \delta_{A\_ENU}) \\ \hat{y}_{D\_ENU} = y_{R\_ENU} + \delta_{RP\_ENU} + (D_{2D} + \delta_{D\_2D})\cdot\cos(A + \delta_{A\_ENU}) \end{cases} \tag{2}$$

where:

*x̂<sub>D_ENU</sub>*, *ŷ<sub>D_ENU</sub>* - estimators of the drone coordinates in the ENU system,
*δ<sub>RP_ENU</sub>* - error of the radar position,
*δ<sub>D_2D</sub>* - error of the horizontal distance measurement,
*δ<sub>A_ENU</sub>* - error of the azimuth measurement.
Taking into consideration practical experience in the use of FMCW radars for drone detection, the following should be noted:

**Note 1.:** In practice, the radar position is fixed by GNSS before the operation and taken as a constant until the radar is moved to a new location. In such a case, $\delta_{RP\_ENU}$ is a constant error. It can be minimized by using Differential GNSS or GNSS RTK measurement techniques and by mounting the satellite system antenna centrally above the center of the radar measuring system. As already mentioned, the GNSS RTK technique is not used in commercial systems. A convenient solution is a Satellite Based Augmentation System (SBAS)—GNSS augmented by a differential system based on satellites. This reduces the radar position error to the sub-meter level, which allows it to be omitted from further considerations due to its small impact on the final performance of the proposed method.

**Note 2.:** $\delta_{D\_2D}$ is a random variable with some small constant component resulting from the leveling of the mobile platform or the mounting system of the radar. The constant component can be reduced almost to zero by using a leveling system, and then $\delta_{D\_2D}$ can be modeled as an independent random variable with a zero-mean normal distribution. In that case, the highest probability density is concentrated around zero. Therefore, $\delta_{D\_2D}$ is also omitted in the proposed method.

**Note 3.:** $\delta_{A\_ENU}$ is a random variable with a small variable component and a very large constant component, resulting from magnetic declination and magnetic deviation, the radar and compass assembly on the platform, and the physical possibilities of setting the mobile platform or the mounting system of the radar in the direction indicated by the magnetic compass. In practical applications, the variable component varies within 1°–3° (depending on the class of the anti-drone radar), while the constant component can reach 20°. Such a large error in measuring the azimuth of the detected object will result in an error in determining its location (coordinates). According to Equation (2), this error grows with the distance between the radar and the detected object (see Figure 5). This entails the complications, both in single- and multi-radar systems, already mentioned in the Introduction.

Bearing in mind Notes 1 to 3, and that

$$A_m = A + \delta_{A\_ENU} \rightarrow A = A_m - \delta_{A\_ENU}, \tag{3}$$

the system of Equation (1) can be written as

$$\begin{cases} \hat{x}_{D\_ENU} = \hat{x}_{R\_ENU} + D_{m2D}\cdot\sin(A_m - \delta_{A\_ENU}) \\ \hat{y}_{D\_ENU} = \hat{y}_{R\_ENU} + D_{m2D}\cdot\cos(A_m - \delta_{A\_ENU}) \end{cases} \tag{4}$$

This system has 3 unknowns: the estimated drone position in the ENU system ($\hat{x}_{D\_ENU}$, $\hat{y}_{D\_ENU}$) and the azimuth measurement error in the ENU system ($\delta_{A\_ENU}$). The radar measurement results are the horizontal distance ($D_{m2D}$) and the azimuth ($A_m$), where

$$D\_{m2D} = D\_{m3D} \cdot \cos(E\_m),\tag{5}$$

and:

*D<sub>m3D</sub>* - measured (slant) distance between the radar and the drone,
*E<sub>m</sub>* - measured elevation angle of the drone.
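Equation (5) in code form (an illustrative one-liner; the function name is ours):

```python
import math

def horizontal_distance(d_m3d: float, elevation_deg: float) -> float:
    """Equation (5): project the measured slant distance onto the
    horizontal plane using the measured elevation angle."""
    return d_m3d * math.cos(math.radians(elevation_deg))

print(round(horizontal_distance(2000.0, 5.0), 1))  # 1992.4
```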
Estimated radar coordinates (fixed using DGNSS/SBAS measurements) in the ENU system ($\hat{x}_{R\_ENU}$, $\hat{y}_{R\_ENU}$) are also known. On the left side of the equations, instead of the true coordinates, there are their estimators, because both the position of the radar and the measurements are burdened with errors. These errors were omitted in system (4); however, they have an impact and cause some error in determining the coordinates of the detected drone.

From (4), it follows that an unambiguous determination of the unknowns based on measurements from a single radar is not possible. Each next radar measurement will introduce 2 new unknowns (the new 2D coordinates of the drone) and 2 independent equations. Therefore, regardless of the number of measurements done by a single radar, the number of independent equations will always be 1 less than the number of unknowns.

Let us now consider a situation where a few radars in the network are tracking the same object. Then, the following system of independent equations can be formed

$$\begin{cases} \hat{x}_{D\_ENU1} = \hat{x}_{R\_ENU1} + D_{m2D1}\cdot\sin(A_{m1} - \delta_{A\_ENU1}) \\ \hat{y}_{D\_ENU1} = \hat{y}_{R\_ENU1} + D_{m2D1}\cdot\cos(A_{m1} - \delta_{A\_ENU1}) \\ \quad\vdots \\ \hat{x}_{D\_ENUn} = \hat{x}_{R\_ENUn} + D_{m2Dn}\cdot\sin(A_{mn} - \delta_{A\_ENUn}) \\ \hat{y}_{D\_ENUn} = \hat{y}_{R\_ENUn} + D_{m2Dn}\cdot\cos(A_{mn} - \delta_{A\_ENUn}) \end{cases} \tag{6}$$

where:

*n* - number of radars tracking the same object,
subscript *i* = 1, ..., *n* - denotes quantities measured by or referring to the *i*-th radar.
Let us denote the position of the detected drone in the ENU system as $\hat{P}_{D\_ENUi}$. This is the position estimated on the basis of measurements done by the *i*-th radar. Since both the estimators of the radar coordinates and the radar measurements are affected by errors,

$$\hat{P}_{D\_ENU1} \neq \hat{P}_{D\_ENU2} \neq \ldots \neq \hat{P}_{D\_ENUn}. \tag{7}$$

In this case, system (6) also has no unambiguous solution, because the number of unknowns is 3*n*, while the number of independent equations is 2*n*. However, since all radars are observing the same object, we can assume a simplification:

$$\hat{P}_{D\_ENU1} = \hat{P}_{D\_ENU2} = \ldots = \hat{P}_{D\_ENUn} = \hat{P}_{D\_ENU}\left(x_{D\_ENU}, y_{D\_ENU}\right), \tag{8}$$

and then, system (6) will take the following form:

$$\begin{cases} x_{D\_ENU} = \hat{x}_{R\_ENU1} + D_{m2D1}\cdot\sin(A_{m1} - \delta_{A\_ENU1}) \\ y_{D\_ENU} = \hat{y}_{R\_ENU1} + D_{m2D1}\cdot\cos(A_{m1} - \delta_{A\_ENU1}) \\ \quad\vdots \\ x_{D\_ENU} = \hat{x}_{R\_ENUn} + D_{m2Dn}\cdot\sin(A_{mn} - \delta_{A\_ENUn}) \\ y_{D\_ENU} = \hat{y}_{R\_ENUn} + D_{m2Dn}\cdot\cos(A_{mn} - \delta_{A\_ENUn}) \end{cases} \tag{9}$$

In this situation, the number of unknowns is 2 + *n* and the number of independent equations is 2*n*, so already for *n* = 2 all the unknowns can be determined. However, for *n* > 2, redundant equations appear. In such a case, there are $\binom{2n}{2+n}$ possible combinations of equations, each giving a solution in the form of $S\left(\hat{P}_{D\_ENU}, \hat{\delta}_{A\_ENU1}, \ldots, \hat{\delta}_{A\_ENUn}\right)$. Unfortunately, due to the errors both in the estimated coordinates of the radar positions and in the radar measurements, each of these combinations will give a slightly different solution. These differences will depend not only on the radar coordinate errors and the azimuth, elevation, and distance measurement errors but also on the relative location of the radars and the detected object (the system geometry, described by the dilution of precision coefficient). It should be noted that the geometry of the system will change during the drone's flight. Therefore, $\delta_{A\_ENUi}$ calculated for the moment *t* on the basis of observations from *n* radars can be expressed by the following equation:

$$\delta_{A\_ENUi}(t) = \delta^{const}_{A\_ENUi} + \delta^{var}_{A\_ENUi} = \delta^{const}_{A\_ENUi} + f\left(\delta_{A\_ENU1}(t), \ldots, \delta_{A\_ENUn}(t), \delta^{DOP}_{A}(t)\right), \tag{10}$$

where:

*δ<sup>const</sup><sub>A_ENUi</sub>* - constant component of the azimuth measurement error of the *i*-th radar,
*δ<sup>var</sup><sub>A_ENUi</sub>* - variable component of this error,
*δ<sup>DOP</sup><sub>A</sub>*(*t*) - component describing the influence of the system geometry (dilution of precision) at moment *t*.
$\delta^{const}_{A\_ENUi}$ is the sought difference between the radar north and the true north:

$$North_{real} + \delta^{const}_{A\_ENUi} = North_{RADARi} \rightarrow \delta^{const}_{A\_ENUi} = North_{RADARi} - North_{real}. \tag{11}$$

Since $\delta^{const}_{A\_ENUi} \gg \delta^{var}_{A\_ENUi}$, the determination of *North<sub>RADARi</sub>* would significantly improve the accuracy of determining the coordinates of the detected object and minimize the likelihood of classifying a single drone as a swarm and vice versa. Both factors are crucial for the anti-drone system. The calculated $\delta^{const}_{A\_ENUi}$ value, taken with the opposite sign, would be a correction to the azimuth measured by the radar and could be taken into account automatically by the master system responsible for multi-sensor data fusion. In the proposed method, determining the difference *North<sub>RADARi</sub>* − *North<sub>real</sub>*, and hence the correction for the measured azimuths, consists of eliminating the $\delta^{var}_{A\_ENUi}$ component from Equation (10). According to the proposed idea, this is done by multi-radar observations of a flying drone with unknown coordinates. The azimuth measurement error $\delta_{A\_ENU}$ is estimated in every measurement epoch ($t_e$), and then the constant component of the azimuth measurement error is calculated as:

$$\delta^{const}_{A\_ENU} = \frac{1}{m}\sum_{j=1}^{m}\delta_{A\_ENU}(t_{ej}), \tag{12}$$

where:

*m* - number of measurement epochs,
*t<sub>ej</sub>* - the *j*-th measurement epoch.
*Remote Sens.* **2019**, *11*, 2617

Therefore, the key issue is estimating the azimuth measurement error $\delta_{A\_ENU}$ in every measurement epoch. In the proposed method, instead of the classical form (9) of the system of equations describing the coordinates of the detected object, the following form is used:

$$\begin{cases} D_{m2D1} = \sqrt{\left(x_{D\_ENU} - \hat{x}_{R\_ENU1}\right)^2 + \left(y_{D\_ENU} - \hat{y}_{R\_ENU1}\right)^2} \\ A_{m1} = \arctan\left(\dfrac{x_{D\_ENU} - \hat{x}_{R\_ENU1}}{y_{D\_ENU} - \hat{y}_{R\_ENU1}}\right) + \delta_{A\_ENU1} \\ \quad\vdots \\ D_{m2Dn} = \sqrt{\left(x_{D\_ENU} - \hat{x}_{R\_ENUn}\right)^2 + \left(y_{D\_ENU} - \hat{y}_{R\_ENUn}\right)^2} \\ A_{mn} = \arctan\left(\dfrac{x_{D\_ENU} - \hat{x}_{R\_ENUn}}{y_{D\_ENU} - \hat{y}_{R\_ENUn}}\right) + \delta_{A\_ENUn} \end{cases} \tag{13}$$

Let us assume an approximate drone position and denote it as $P^0_{D\_ENU}\left(x^0_{D\_ENU}, y^0_{D\_ENU}\right)$. Let it be the position calculated on the basis of observations from any single radar of the network:

$$\begin{cases} x^0_{D\_ENU} = \hat{x}_{R\_ENUi} + D_{m2Di}\cdot\sin(A_{mi}) \\ y^0_{D\_ENU} = \hat{y}_{R\_ENUi} + D_{m2Di}\cdot\cos(A_{mi}) \end{cases} \tag{14}$$

Now let us expand the right sides of the equations of system (13) in a Taylor series around $P^0_{D\_ENU}$, keeping only the first-order components (with $\Delta x_{D\_ENU} = x_{D\_ENU} - x^0_{D\_ENU}$ and $\Delta y_{D\_ENU} = y_{D\_ENU} - y^0_{D\_ENU}$):

$$\begin{cases} D_{m2D1} = D^0_{m2D1} + \frac{\partial}{\partial x^0_{D\_ENU}}\left(D^0_{m2D1}\right)\cdot\Delta x_{D\_ENU} + \frac{\partial}{\partial y^0_{D\_ENU}}\left(D^0_{m2D1}\right)\cdot\Delta y_{D\_ENU} \\ A_{m1} - \delta_{A\_ENU1} = A^0_{m1} + \frac{\partial}{\partial x^0_{D\_ENU}}\left(A^0_{m1}\right)\cdot\Delta x_{D\_ENU} + \frac{\partial}{\partial y^0_{D\_ENU}}\left(A^0_{m1}\right)\cdot\Delta y_{D\_ENU} \\ \quad\vdots \\ D_{m2Dn} = D^0_{m2Dn} + \frac{\partial}{\partial x^0_{D\_ENU}}\left(D^0_{m2Dn}\right)\cdot\Delta x_{D\_ENU} + \frac{\partial}{\partial y^0_{D\_ENU}}\left(D^0_{m2Dn}\right)\cdot\Delta y_{D\_ENU} \\ A_{mn} - \delta_{A\_ENUn} = A^0_{mn} + \frac{\partial}{\partial x^0_{D\_ENU}}\left(A^0_{mn}\right)\cdot\Delta x_{D\_ENU} + \frac{\partial}{\partial y^0_{D\_ENU}}\left(A^0_{mn}\right)\cdot\Delta y_{D\_ENU} \end{cases} \tag{15}$$

wherein:

$$D^0_{m2Di} = \sqrt{\left(x^0_{D\_ENU} - \hat{x}_{R\_ENUi}\right)^2 + \left(y^0_{D\_ENU} - \hat{y}_{R\_ENUi}\right)^2}, \tag{16}$$

$$A^0_{mi} = \arctan\left(\frac{x^0_{D\_ENU} - \hat{x}_{R\_ENUi}}{y^0_{D\_ENU} - \hat{y}_{R\_ENUi}}\right). \tag{17}$$

Let us denote the individual derivatives as follows:

$$\frac{\partial}{\partial x^0_{D\_ENU}}\left(D^0_{m2Di}\right) = m_{Dxi}, \qquad \frac{\partial}{\partial y^0_{D\_ENU}}\left(D^0_{m2Di}\right) = m_{Dyi},$$

$$\frac{\partial}{\partial x^0_{D\_ENU}}\left(A^0_{mi}\right) = m_{Axi}, \qquad \frac{\partial}{\partial y^0_{D\_ENU}}\left(A^0_{mi}\right) = m_{Ayi}.$$

Then, the system of Equation (15) will take the form

$$\begin{cases} D_{m2D1} - D^0_{m2D1} = m_{Dx1}\cdot\Delta x_{D\_ENU} + m_{Dy1}\cdot\Delta y_{D\_ENU} \\ A_{m1} - A^0_{m1} = m_{Ax1}\cdot\Delta x_{D\_ENU} + m_{Ay1}\cdot\Delta y_{D\_ENU} + \delta_{A\_ENU1} \\ \quad\vdots \\ D_{m2Dn} - D^0_{m2Dn} = m_{Dxn}\cdot\Delta x_{D\_ENU} + m_{Dyn}\cdot\Delta y_{D\_ENU} \\ A_{mn} - A^0_{mn} = m_{Axn}\cdot\Delta x_{D\_ENU} + m_{Ayn}\cdot\Delta y_{D\_ENU} + \delta_{A\_ENUn} \end{cases} \tag{18}$$

The system of Equation (18) can be written in matrix form as

$$\begin{bmatrix} D_{m2D1} - D^0_{m2D1} \\ A_{m1} - A^0_{m1} \\ D_{m2D2} - D^0_{m2D2} \\ A_{m2} - A^0_{m2} \\ \vdots \\ D_{m2Dn} - D^0_{m2Dn} \\ A_{mn} - A^0_{mn} \end{bmatrix}_{2n\times 1} = \begin{bmatrix} m_{Dx1} & m_{Dy1} & 0 & 0 & \cdots & 0 \\ m_{Ax1} & m_{Ay1} & 1 & 0 & \cdots & 0 \\ m_{Dx2} & m_{Dy2} & 0 & 0 & \cdots & 0 \\ m_{Ax2} & m_{Ay2} & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ m_{Dxn} & m_{Dyn} & 0 & 0 & \cdots & 0 \\ m_{Axn} & m_{Ayn} & 0 & 0 & \cdots & 1 \end{bmatrix}_{2n\times(n+2)} \cdot \begin{bmatrix} \Delta x_{D\_ENU} \\ \Delta y_{D\_ENU} \\ \delta_{A\_ENU1} \\ \delta_{A\_ENU2} \\ \vdots \\ \delta_{A\_ENUn} \end{bmatrix}_{(n+2)\times 1}, \tag{19}$$

where **d** is the vector on the left side of (19), **A** is the design matrix, and **X** is the vector of unknowns. The estimate of **X** obtained with the least squares estimator is:

$$\mathbf{\hat{X}} = \left(\mathbf{A}^T \mathbf{A}\right)^{-1} \mathbf{A}^T \mathbf{d}.\tag{20}$$

The solution of (20) is:

$$\hat{\mathbf{X}} = \begin{bmatrix} \Delta\hat{x}_{D\_ENU} \\ \Delta\hat{y}_{D\_ENU} \\ \hat{\delta}_{A\_ENU1} \\ \hat{\delta}_{A\_ENU2} \\ \vdots \\ \hat{\delta}_{A\_ENUn} \end{bmatrix}_{(n+2)\times 1}. \tag{21}$$

Using $\Delta\hat{x}_{D\_ENU}$ and $\Delta\hat{y}_{D\_ENU}$, the estimated position of the drone can be calculated as:

$$\begin{cases} \hat{x}_{D\_ENU} = x^0_{D\_ENU} + \Delta\hat{x}_{D\_ENU} \\ \hat{y}_{D\_ENU} = y^0_{D\_ENU} + \Delta\hat{y}_{D\_ENU} \end{cases} \tag{22}$$

If $|\Delta\hat{x}_{D\_ENU}| > 1\ \text{m}$ or $|\Delta\hat{y}_{D\_ENU}| > 1\ \text{m}$, the next iteration is necessary, taking the estimated drone position calculated from (22) as the new approximate position. The final value of the azimuth measurement error for each of the radars is the value of $\hat{\delta}_{A\_ENUi}$ from the last iteration. Then, the $\hat{\delta}^{const}_{A\_ENU}$ value is calculated based on (12). According to the presented idea, the correction to the measured azimuths (the difference between the true north and the radar north) for a given radar is:

$$c_{Ai} = -\hat{\delta}^{const}_{A\_ENUi}. \tag{23}$$

Then, the azimuth estimator of the detected object for a given radar is:

$$\hat{A}_i = A_{mi} + c_{Ai}. \tag{24}$$
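The per-epoch estimation of Equations (13)–(22) can be sketched numerically. The following is a minimal NumPy illustration under the assumptions above; all function and variable names are ours, and a general least squares solver stands in for the explicit normal-equation form of Equation (20):

```python
import numpy as np

def estimate_azimuth_errors(radars, meas, p0, tol=1.0, max_iter=20):
    """One-epoch estimate of the common drone position and the per-radar
    azimuth errors (Equations (13)-(22)), solved by least squares (20).

    radars : (n, 2) radar ENU positions (x, y) [m]
    meas   : (n, 2) measurements (D_m2D [m], A_m [rad, from north])
    p0     : initial drone position guess, e.g. from Equation (14)
    tol    : the 1 m iteration-stop criterion from the text
    """
    radars = np.asarray(radars, dtype=float)
    meas = np.asarray(meas, dtype=float)
    n = len(radars)
    p = np.array(p0, dtype=float)
    delta_a = np.zeros(n)
    for _ in range(max_iter):
        dx, dy = p[0] - radars[:, 0], p[1] - radars[:, 1]
        d0 = np.hypot(dx, dy)                    # Equation (16)
        a0 = np.arctan2(dx, dy)                  # Equation (17)
        # misclosure vector d and design matrix A of Equation (19)
        d_vec = np.empty(2 * n)
        d_vec[0::2] = meas[:, 0] - d0
        d_vec[1::2] = (meas[:, 1] - a0 + np.pi) % (2 * np.pi) - np.pi
        A = np.zeros((2 * n, n + 2))
        A[0::2, 0], A[0::2, 1] = dx / d0, dy / d0          # m_Dxi, m_Dyi
        A[1::2, 0], A[1::2, 1] = dy / d0**2, -dx / d0**2   # m_Axi, m_Ayi
        A[1::2, 2:] = np.eye(n)
        x_hat, *_ = np.linalg.lstsq(A, d_vec, rcond=None)  # Equation (20)
        p += x_hat[:2]                                     # Equation (22)
        delta_a = x_hat[2:]
        if abs(x_hat[0]) <= tol and abs(x_hat[1]) <= tol:
            break
    return p, delta_a

# Usage: two radars, exact distances, injected azimuth biases of
# +0.05 rad and -0.03 rad; the estimator should recover both biases.
radars = [(0.0, 0.0), (2000.0, 0.0)]
true_p = np.array([1000.0, 1000.0])
biases = [0.05, -0.03]
meas = []
for (xr, yr), b in zip(radars, biases):
    d = np.hypot(true_p[0] - xr, true_p[1] - yr)
    a = np.arctan2(true_p[0] - xr, true_p[1] - yr)
    meas.append((d, a + b))
p0 = (radars[0][0] + meas[0][0] * np.sin(meas[0][1]),
      radars[0][1] + meas[0][0] * np.cos(meas[0][1]))      # Equation (14)
p, delta_a = estimate_azimuth_errors(radars, meas, p0)
print(np.round(p, 1), np.round(delta_a, 3))
```

For *n* = 2 the system is exactly determined, so the injected biases are recovered almost exactly; for *n* > 2 the redundant equations are averaged by the least squares solution, as discussed above.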

The practical implementation of the proposed method can be presented in the form of the following algorithm:

**BEGIN** //drone started a calibration flight and all radars detected the drone
*j* := 0; //first measurement epoch
*n* := number of radars;
**REPEAT**
&nbsp;&nbsp;**FOR** *i* := 1 **TO** *n* **DO**
&nbsp;&nbsp;&nbsp;&nbsp;Get the radar measurements *M<sub>i</sub>* = {*A<sub>mi</sub>*, *D<sub>m2Di</sub>*};
&nbsp;&nbsp;&nbsp;&nbsp;Bring *M<sub>i</sub>* to the common moment *t<sub>ej</sub>*; //explanation below the algorithm
&nbsp;&nbsp;Compute the estimates δ̂<sub>A_ENU</sub>(*t<sub>ej</sub>*) for all radars; //using Equation (20)
&nbsp;&nbsp;*j* := *j* + 1; //next measurement epoch
**UNTIL** drone ended the calibration flight
*m* := *j*; //number of measurement epochs
Compute δ<sup>const</sup><sub>A_ENU</sub>; //using Equation (12)
**END.**

Since the observations of the drone by each of the radars are generally not done at the same time, but within some time window (approximately equal to the scanning period of the particular radar), the measured azimuths and distances must be brought to a common moment ($t_e$). The proposed method uses linear interpolation between the measurements immediately preceding and immediately following ($t_e$), according to the formulas:

$$\begin{aligned} \hat{A}_m(t_e) &= A_m(t_0) + \frac{t_e - t_0}{t_1 - t_0}\left(A_m(t_1) - A_m(t_0)\right), \\ \hat{D}_{m2D}(t_e) &= D_{m2D}(t_0) + \frac{t_e - t_0}{t_1 - t_0}\left(D_{m2D}(t_1) - D_{m2D}(t_0)\right), \end{aligned} \tag{25}$$

where:

*t*<sub>0</sub> - the moment immediately preceding (*t<sub>e</sub>*) at which a measurement was done,
*t*<sub>1</sub> - the moment immediately following (*t<sub>e</sub>*) at which a measurement was done,
*Â<sub>m</sub>*(*t<sub>e</sub>*), *D̂<sub>m2D</sub>*(*t<sub>e</sub>*) - estimated values of the azimuth and distance at moment (*t<sub>e</sub>*),
*A<sub>m</sub>*(*t*<sub>0</sub>), *A<sub>m</sub>*(*t*<sub>1</sub>) - azimuths measured at moments (*t*<sub>0</sub>) and (*t*<sub>1</sub>),
*D<sub>m2D</sub>*(*t*<sub>0</sub>), *D<sub>m2D</sub>*(*t*<sub>1</sub>) - distances measured at moments (*t*<sub>0</sub>) and (*t*<sub>1</sub>).
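Equation (25) can be sketched as follows (the function name is ours). Note that azimuths crossing the 0°/360° boundary would need unwrapping before interpolation; this detail is omitted here:

```python
def interpolate_to_epoch(t_e: float, t0: float, t1: float,
                         v0: float, v1: float) -> float:
    """Equation (25): linear interpolation of a radar measurement
    (azimuth or distance) to the common epoch t_e, with t0 <= t_e <= t1."""
    return v0 + (t_e - t0) / (t1 - t0) * (v1 - v0)

# Azimuth measured as 30.0 deg at t = 0.0 s and 31.0 deg at t = 2.0 s,
# brought to the common epoch t_e = 0.5 s:
print(interpolate_to_epoch(0.5, 0.0, 2.0, 30.0, 31.0))  # 30.25
```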

#### **3. Method of Verification**

Intensive numerical simulations were performed to verify the correctness of the developed method. Drone flights along a fixed route were simulated, together with the indications of the FMCW radars tracking them (measurements of azimuth and distance). The simulation software SimWizardADS, developed by the Polish company Basic Solution, was used in the research. The software was developed as part of a project granted by the Polish National Center for Research and Development (NCRD) in 2017 under the name SimWizardSSAD. It was accepted by NCRD experts as a reliable tool for the simulation of anti-drone systems in the scope of detection and is being further developed under the name SimWizardADS. An example of the simulation wizard window is shown in Figure 6.

**Figure 6.** Example window of the SimWizardADS application (creator of the simulation) of the Basic Solution company (Poland) [www.basicsolution.eu].

The numerical simulations were performed with the following assumptions:

	- a. In the variant with 3 radars, they will be located at the vertices of an equilateral triangle with a side length of 2 km (see Figure 7-left),
	- b. In the variant with 4 radars, they will be located at the vertices of a rectangle with sides of 2 km (see Figure 7-right),
	- c. The detection range of the radars will be 2.5 km,
	- d. The elevation beam width of the radars will be 10°. Since the radars do not measure the height of the detected object, the elevation angle of the object is assumed to equal half the beam width. As mentioned earlier, this is a common assumption for 2D radars,
	- e. Range measurement errors will contain only a variable component, generated randomly according to the normal distribution. In each simulation, each radar will be assigned a standard deviation of the range measurement error (RMS range), selected randomly (for each radar independently) from the set of 0.6, 0.8, 1.0, or 1.2 m. These are typical RMS range values for anti-drone FMCW radars available on the commercial (open) market. The values will be constant within one simulation; in the next one, new values will be drawn according to the same rules,
	- f. Azimuth measurement errors will contain a variable and a constant component. The variable component will be generated randomly according to the normal distribution. In each simulation, each radar will be assigned a standard deviation of the azimuth measurement error (RMS azimuth), selected randomly (for each radar independently) from the set of 0.8°, 1.0°, 1.2°, or 1.4°. These are typical RMS azimuth values for anti-drone FMCW radars available on the commercial (open) market. The values will be constant within one simulation; in the next one, new values will be drawn according to the same rules,
	- g. The following values will be assessed:
		- Calibration accuracy. The measure will be the difference between the radar north and the true north after calibration,
		- The radar north improvement. The measure will be the value by which the difference between the radar north and the true north has changed,
	- h. For each of the values listed in point g, the following parameters will be computed: the mean value, the standard deviation (RMS), and the absolute minimum and maximum values together with the parameters at which they occurred (the radar configuration, the actual deviation of the radar north from the true north, and the standard deviations of the azimuth and distance measurements).
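A minimal sketch of the error model described in points e and f might look as follows. All names are hypothetical; the degree unit of the azimuth RMS set and the ±10° span of the constant azimuth bias are illustrative assumptions, while the uniform bias distribution follows the simulation assumptions stated later in the paper:

```python
import random

RMS_RANGE_SET = (0.6, 0.8, 1.0, 1.2)    # metres (point e)
RMS_AZIMUTH_SET = (0.8, 1.0, 1.2, 1.4)  # degrees (point f, assumed unit)

def draw_radar_errors(n_radars, max_bias_deg=10.0, rng=random):
    """Per simulation, each radar independently draws its RMS values,
    which then stay fixed; the azimuth additionally gets a constant
    bias (radar-north misalignment), drawn uniformly."""
    return [{
        "rms_range_m": rng.choice(RMS_RANGE_SET),
        "rms_azimuth_deg": rng.choice(RMS_AZIMUTH_SET),
        "azimuth_bias_deg": rng.uniform(-max_bias_deg, max_bias_deg),
    } for _ in range(n_radars)]

def measure(true_azimuth_deg, true_range_m, radar, rng=random):
    """One noisy radar indication: bias plus Gaussian noise on the
    azimuth, Gaussian noise only on the range."""
    az = (true_azimuth_deg + radar["azimuth_bias_deg"]
          + rng.gauss(0.0, radar["rms_azimuth_deg"])) % 360.0
    rng_m = true_range_m + rng.gauss(0.0, radar["rms_range_m"])
    return az, rng_m
```

Keeping the drawn parameters fixed within a simulation, and redrawing them for the next one, reproduces the per-simulation constancy required by points e and f.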

The authors' experience with commercial FMCW radars used in anti-drone systems indicates that certain factors should be considered when choosing a calibration flight route. They result from the characteristics of such radars. The idea is to minimize the likelihood of losing drone tracking, as a lost track adversely affects the effectiveness of the proposed method. At the same time, the variety of mutual drone-to-radar geometries, and thus of the associated random measurement errors, should be maximized. Practical experience shows that:


4. The calibration flight route should include sections where the drone is approaching and moving away from the radars.

5. The route should not be too long, as this would make the calibration process time consuming, while at the same time it should satisfy points 1–4 as much as possible.

Taking the above into account, several possible variants of calibration flight routes were analyzed. It was finally found that the optimal ones (for the selected locations of the radars) are those shown in Figure 7.

**Figure 7.** The optimal routes of the calibration flight for the selected location of 3 radars (left) and 4 radars (right).

### **4. Results**

According to the assumptions, 95,000 simulations were performed for each of the two variants of the radar location (the 3- and 4-radars variants). Tables 1 and 2 show the calibration accuracy obtained with the proposed method. The measure used is the difference between the radar north and the true north after calibration.
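The summary statistics reported in the tables can be computed directly from the per-simulation post-calibration errors. A minimal sketch, assuming the errors are collected in a plain list of values in degrees (the function name is hypothetical):

```python
import math

def summarize_errors(errors_deg):
    """Summary statistics of post-calibration constant azimuth errors
    (one entry per simulation), in degrees: mean value, standard
    deviation (RMS about the mean), and absolute minimum/maximum."""
    n = len(errors_deg)
    mean = sum(errors_deg) / n
    rms = math.sqrt(sum((e - mean) ** 2 for e in errors_deg) / n)
    abs_errors = [abs(e) for e in errors_deg]
    return {"mean": mean, "rms": rms,
            "abs_min": min(abs_errors), "abs_max": max(abs_errors)}
```

Tracking the simulation index alongside each absolute extreme would additionally recover the parameters at which the minimum and maximum occurred, as required by the verification assumptions.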


**Table 1.** Calibration accuracy in the network system consisting of 3 radars.

**Table 2.** Calibration accuracy in the network system consisting of 4 radars.


As the data presented in Tables 1 and 2 show, the average value of the constant component of the azimuth error after calibration is close to 0 for each of the radars in the network, both in the case of the system consisting of 3 and of 4 radars. In the best cases, the error was eliminated entirely; in the worst cases, however, it was still around 9–10° after calibration. Therefore, to analyze the effectiveness of the proposed method in more detail, Figure 8 presents histograms of the azimuth error constant component for radars R1, R2, and R3 in the 3-radars network (see Figure 7 (left)) before calibration, and Figure 9 after calibration. On the horizontal axis of the charts, there are ranges of the azimuth error constant component in degrees, and the numbers in the blue rectangles are the numbers of the individual ranges.

**Figure 8.** Histogram of the azimuth error constant component for radars R1(upper), R2(middle), and R3(lower) in the 3-radars system before calibration.

**Figure 9.** Histogram of the azimuth error constant component for radars R1(upper), R2(middle), and R3(lower) in the 3-radars system after calibration.

Figures 8 and 9 show that for each of the radars, there was a clear improvement, i.e., a decrease in the value of the constant component of the azimuth error. Moreover, before calibration, the errors had a uniform distribution (according to the simulation assumptions), while after calibration, the distribution was similar to a Gaussian one, with the highest probability density centered around 0. It is worth noting that only about 5% of the errors after calibration exceeded 6°, which should be considered a very good result, given that before calibration about 65% of the errors did. It is also apparent from Figures 8 and 9 that the method affects the constant azimuth measurement error of each of the radars in a similar way. The same holds for the 4-radars network (shown in Figure 7 (right)). Histograms of the azimuth error constant component for radars R1, R2, R3, and R4 before calibration are shown in Figure 10, and after calibration in Figure 11.

**Figure 10.** Histogram of the azimuth error constant component for radars R1(upper), R2(upper-middle), R3(lower-middle), and R4(lower) in the 4-radars system before calibration.

**Figure 11.** Histogram of the azimuth error constant component for radars R1(upper), R2(upper-middle), R3(lower-middle), and R4(lower) in the 4-radars system after calibration.

It can be seen in Figures 8–11 that the distributions of the azimuth error constant component before and after calibration are similar for the 3- and the 4-radars networks. Therefore, to limit the length of the paper, the results of further analyses are presented for one radar selected from the tested cases. To maintain reliability, the results are presented for the worst case (the least improvement and the greatest deterioration after calibration), which was the R1 radar from the 4-radars network (see Figure 7 (right)).

The graphs in Figures 8–11 show that the proposed method significantly improves the accuracy of the azimuth measurement by reducing the azimuth misalignment (decreasing the constant error of the azimuth measurements). However, the distribution of the azimuth error constant component after calibration does not yet give a complete picture of the method's effectiveness. There are no answers to the following important questions:


To answer the above questions, an analysis of the radar north improvement was performed. The measure used was the value by which the difference between the radar north and the true north changed due to calibration. To better show the effect of the proposed method, the radar north improvement was expressed as a percentage of the constant azimuth error before calibration. The results of the analysis for the selected radar (the worst case) are presented in Figures 12–14.

Figure 12 shows the percentage histogram of improvement and deterioration of the selected radar's orientation relative to the north. The horizontal axis presents the percentages, and the numbers in the blue rectangles are the numbers of the individual ranges. A negative percentage value (< 0) means that after calibration, the radar orientation deteriorated.
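The percentage measure used in Figure 12 can be written down explicitly. A minimal sketch (the function name is hypothetical); a negative return value indicates deterioration:

```python
def north_change_percent(bias_before_deg, bias_after_deg):
    """Radar-north change expressed as a percentage of the constant
    azimuth error before calibration. Positive values mean improvement,
    negative values mean the orientation deteriorated."""
    before = abs(bias_before_deg)
    after = abs(bias_after_deg)
    if before == 0.0:
        return 0.0  # perfectly oriented to begin with: nothing to improve
    return 100.0 * (before - after) / before
```

For example, a bias reduced from 5° to 1° gives +80%, while a bias grown from 1° to 3° gives −200%, matching the worst deterioration cases reported below.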

**Figure 12.** Histogram of the percentage improvement (upper) and deterioration (lower) of the R1 radar north in the 4-radars system after calibration.

Figure 12 shows that in up to 11.6% of cases, the radar orientation relative to the true north deteriorated after calibration by the proposed method. Most deterioration values did not exceed 100% of the pre-calibration value, but there were also cases where they reached 200%. At first glance, this seemed very worrying, because even a 20% deterioration may be unacceptable if it concerns large initial values (before calibration). Therefore, it is important to answer the following questions:


To answer them, deterioration cases were thoroughly analyzed. Figure 13 shows the histogram of the initial (before calibration) constant azimuth errors for the R1 radar in the 4-radars system, where the orientation was deteriorated after the calibration.

**Figure 13.** Histogram of initial (before calibration) errors of the R1 radar orientation in the 4-radars system, for which the azimuth misalignment increased after calibration: Above 0%(upper), above 50%(middle), and above 100%(lower).

Figure 13 shows that the large percentage deterioration of the radar orientation relative to the north was related to small initial values (before calibration). Deterioration by more than 50% basically concerned angles not greater than 3°, and by more than 100% angles not greater than 2°. This means that even in the worst case of deterioration, the azimuth misalignment after calibration did not exceed 4°, which is completely acceptable from a practical point of view. Moreover, it should be emphasized that in all analyzed cases, the deterioration of the radar orientation relative to the true north after calibration concerned only one radar in the network, while the orientation of the other radars improved significantly. The deterioration effect occurred when one radar had a small initial error and the rest had large ones.

To show the advantages of the proposed method for the same radar, the results of an analogous analysis regarding the improvement of orientation relative to true north are presented below. Figure 14 shows the distribution of the initial (before calibration) azimuth constant errors for which the R1 radar orientation improved.

**Figure 14.** Histogram of initial (before calibration) errors of the R1 radar orientation in the 4-radars system, for which the azimuth misalignment decreased after calibration: Above 0% (upper), above 50% (middle), and above 80% (lower).

The graphs in Figure 14 demonstrate the advantages of the proposed calibration method. It can be seen that the method improves the initial orientation of the radar relative to the north over the entire range of initial errors, and it is most effective for errors above 3°.
