*Article* **Mathematical Considerations for Unmanned Aerial Vehicle Navigation in the Magnetic Field of Two Parallel Transmission Lines**

**Dean Martinović, Stjepan Bogdan \* and Zdenko Kovačić**

Faculty of Electrical Engineering and Computing, University of Zagreb, 10000 Zagreb, Croatia; dean.martinovic@fer.hr (D.M.); zdenko.kovacic@fer.hr (Z.K.)

**\*** Correspondence: stjepan.bogdan@fer.hr

**Abstract:** This publication deals with the navigation of unmanned aerial vehicles (UAVs) moving in the magnetic field of two long, straight, parallel conductors, which is of high interest for several new technical applications. How the position and orientation of the UAV can be calculated using a minimal number of only three three-axis magnetometers is discussed. It is shown that the angles can be determined without knowledge of the conductor currents and the magnetic field equations, but only by combining the sensor measurements with the rotation matrix and exploiting a characteristic property of the magnetic field. Furthermore, different strategies were investigated to determine the respective sensor positions. An analytical solution was derived from the nonlinear magnetic field equations, which promises a low computational time. It is shown that for a given sensor, several solutions exist, from which the correct one has to be selected. Therefore, a specific detection method is introduced. Once the solution is known, the UAV location can be determined. Finally, the overall algorithm was tested by simulations far from and near the conductors with superimposed typical magnetic noise.

**Keywords:** magnetic field navigation; parallel conductors; transmission lines; unmanned aerial vehicles; aerial manipulation

#### **1. Introduction**

Magnetic field-based navigation is an old discipline that had its beginnings in the 20th century [1]. Magnetic fields have several advantages over other physical quantities. For example, unlike electromagnetic waves, they are not subject to multipath effects or fading because they do not reflect off surfaces. This allows for much higher accuracy. In addition, they are not attenuated when passing through magnetically neutral materials, making magnetic field-based navigation ideal for use between obstacles, at night, or in difficult weather conditions such as snow or fog. A large number of publications exist dealing with techniques for object navigation and localization. Depending on the application, they all use one or more distributed magnetic field sources that emit an alternating magnetic signal. This is usually sampled and measured by several sensors, which may be based on different technologies. By combining these measurement data with the mathematical model of the source used, the position and orientation of the observed object are then determined. These sources can be divided into four types.

The first type is a simple rectangular or circular coil that typically has a large number of turns. It represents the basic element for the construction of more complex coil types and is often referred to as a dipole, since above a minimum distance from the winding, its magnetic field distribution can be described by the equations of a magnetic dipole. In various research projects [2–6], such as MILPS (magnetic indoor local positioning system), multiple coils mounted on walls are used to locate cell phones and robots in buildings or patients in hospitals. Other use cases include the navigation of medical instruments

**Citation:** Martinović, D.; Bogdan, S.; Kovačić, Z. Mathematical Considerations for Unmanned Aerial Vehicle Navigation in the Magnetic Field of Two Parallel Transmission Lines. *Appl. Sci.* **2021**, *11*, 3323. https://doi.org/10.3390/app11083323

Academic Editor: Alejandro Suarez

Received: 9 March 2021 Accepted: 2 April 2021 Published: 7 April 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

inside the human body [7] or the localization of an implantable transponder for intensity-modulated radiation therapy [8]. In addition, several projects have investigated how these coils can be used to locate trapped persons as quickly as possible [9–12]. These could be buried miners or avalanche victims, who can be found by the emitted signal from a small coil worn close to the body. Another interesting topic is the detection of buried landmines in old war zones, also known as humanitarian demining [13,14]. This involves the use of hand-held coils such as metal detectors. Finally, a new application has emerged with the advent of the electric vehicle age.

In the context of inductive charging, the receiver coil mounted on the underbody of the car must be positioned relatively precisely above the transmitter coil in the parking lot, as shown in Figure 1. Since this is very difficult for the driver to achieve, the MPPS (magnetic pulse positioning system) was developed in [15]. Here, the transmitting coil in the ground emits a low-frequency pulsed magnetic signal that allows precise localization by the vehicle in the sub-centimeter range.

**Figure 1.** Navigation using a one-axis coil [15].

The next two coil types consist of two or three base coils wound orthogonally to each other. An example of a three-axis magnetic field source is shown in Figure 2.

**Figure 2.** Principle of a three-axis coil.

For two-axis sources, the coils carry phase quadrature currents that produce a rotating magnetic dipole [1,16,17]. Together with phase detection algorithms and amplitude evaluation, the object position and also the orientation can be determined. For three-axis sources, all source axes are successively excited with currents of the same frequency and phase. In this way, each axis generates an equivalent dipole of arbitrary orientation [18–20]. The three sensor responses are linearly independent and provide sufficient information to determine both the sensor position and its orientation.

The last type of source to be mentioned is a long wire with an alternating signal. In [21–23], its magnetic field was used to allow an underwater vehicle to locate and track a buried underwater cable.

Until now, however, no attention has been paid to magnetic sources consisting of two long parallel conductors. They enable many new useful engineering applications, especially when it comes to the realization of unmanned aerial vehicles (UAVs). For example, UAVs could autonomously perform inventory in warehouses. For this purpose, parallel guides can be attached to the floor or ceiling for navigation between shelves. In addition, UAVs could efficiently inspect rail systems when a current flows through the rails.

Another interesting topic is the autonomous performance of inspection and assembly work around power line systems. This is exactly what was investigated in the Aerial-Core (https://aerial-core.eu/, accessed on 06.04.2021) project. One of the goals of the project was to develop a technology that will enable a UAV to perform the placement of distance separators between power lines (Figure 3).

**Figure 3.** Distance separator for transmission lines.

This is usually performed by workers who are exposed to very high electric and magnetic fields. Moreover, such a task requires a highly skilled helicopter pilot to keep the helicopter close to the power lines while a worker performs the installation. Therefore, the main objective of this work is to derive the equations for determining the position and orientation of a UAV moving through the magnetic field of two parallel conductors, thus enabling precise navigation of a UAV in close proximity to the power lines. We assumed that a UAV is equipped with visual and other sensors that are used both for navigating farther away from the lines and for approaching the lines.

#### **2. Orientation and Localization Equations**

#### *2.1. System Overview*

Figure 4 shows two parallel lines, which together form one phase of a three-phase power transmission system and a UAV flying in close proximity. The transmission lines lie in the *xy*-plane of the global coordinate frame {*G*} positioned exactly between the lines. For the purpose of sensing the magnetic field of the transmission lines, there are magnetometers mounted on the UAV, which are marked with yellow points.

**Figure 4.** System overview. Global frame and UAV frame.

The transmission lines of a single phase carry sinusoidal currents of fixed frequency and phase, which can be used to our advantage. They generate an alternating magnetic field, which allows static and low-frequency magnetic interference fields such as the Earth's field to be canceled out. This is realized by a differential measurement of the amplitude of the magnetic signal. The calculation can be implemented in the time domain and is very efficient [24]. In addition, the system can be treated as a static field problem with constant currents. For simplicity, they are chosen as *I* in the following. The magnetic field thus generated can be calculated using the law of Biot–Savart for thin conductors. In general, transmission lines are not thin. However, as the distance from the lines increases, the magnetic flux density distribution approaches that of thin conductors relatively quickly:

$$\vec{B}(\vec{r}) = -\frac{\mu\_0}{4\pi} I \int\_{-\infty}^{\infty} \frac{(\vec{r} - \vec{r}\,') \times d\vec{s}\,'}{|\vec{r} - \vec{r}\,'|^3} \,. \tag{1}$$

The integration is executed only along the cables according to the arrangement in Figure 4, i.e., $d\vec{s}\,' = (dx', 0, 0)^T$. Thus, from the cross-product in Equation (1), it immediately follows that:

$$B\_x = 0, \quad \forall\,(x, y, z)^T, \tag{2}$$

which means that there is no field component in the direction parallel to the cables. Solving the integral for one cable and using the superposition principle, the remaining field components *By* and *Bz* can be determined. The total magnetic field equation with the transmission lines at ±*y*<sup>0</sup> is then:

$$\vec{B}(y, z) = C \begin{pmatrix} 0\\ \frac{-z}{z^2 + (y + y\_0)^2} + \frac{-z}{z^2 + (y - y\_0)^2} \\ \frac{y + y\_0}{z^2 + (y + y\_0)^2} + \frac{y - y\_0}{z^2 + (y - y\_0)^2} \end{pmatrix}, \quad \text{with} \quad C = \frac{I\mu\_0}{2\pi}, \tag{3}$$

where *C* is a prefactor depending on the transmission line current.
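
The field model (3) is compact enough to implement directly. The following sketch (Python; *C* = 1 and $y\_0$ = 0.25 m are illustration values, not taken from the measurements discussed later) evaluates the two nonzero components and makes the structure of the field explicit: $B\_x$ vanishes everywhere, $B\_y$ is odd in *z* and even in *y*, and $B\_z$ is even in *z* and odd in *y*.

```python
def b_field(y, z, C=1.0, y0=0.25):
    """Flux density of two parallel line currents at y = +-y0 in the xy-plane,
    per Equation (3); C = I*mu0/(2*pi) is the current-dependent prefactor."""
    d1 = z**2 + (y + y0)**2   # squared distance to the conductor at +y0
    d2 = z**2 + (y - y0)**2   # squared distance to the conductor at -y0
    bx = 0.0                  # zero field property, Equation (2)
    by = C * (-z / d1 - z / d2)
    bz = C * ((y + y0) / d1 + (y - y0) / d2)
    return (bx, by, bz)

# one sample point in the yz-plane
sample = b_field(0.1, 0.3)
```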

Figure 5 shows the signal amplitude $|\vec{B}(y, z)|$ around the transmission lines for *I* = 700 A and $y\_0$ = 0.25 m. Figure 6 shows the locations of equal amplitude in the *yz*-plane expressed by isolines. The magnetic field contributions of the other two phases are neglected because the cables are typically at least 2 m apart.

**Figure 5.** Signal power around the transmission lines.

**Figure 6.** Isolines of the magnetic field describing equal power.

The magnetic field-based navigation of a UAV requires that the UAV can sense the magnetic field in all spatial directions. It uses this information to determine its relative position and orientation with respect to the fixed position and orientation of the global coordinate frame {*G*}. The flight capabilities allow the UAV to achieve an arbitrary orientation by rotating the UAV's body about the *z*- (blue), *y*- (green), and *x*-axes (red) of the local drone frame $L\_u$ for the respective yaw, pitch, and roll angles *α*, *β*, and *γ*. The origin position of the local frame $L\_u$ in {*G*}, denoted by $\vec{p}\_D$, also represents the position of the UAV to be determined. The sensors have predefined positions $\vec{p}\_{i0}$ in $L\_u$, where the index *i* refers to a particular sensor. Moreover, each magnetometer has its own local coordinate frame $L\_i$. Then, their respective positions $\vec{p}\_i = (x\_i, y\_i, z\_i)^T$ in {*G*} can be expressed using the rotation matrix $R\_G^{L\_u}(\alpha, \beta, \gamma) = R\_x(\gamma)\,R\_y(\beta)\,R\_z(\alpha)$:

$$
\vec{p}\_i = R\_G^{L\_u}(\alpha, \beta, \gamma)\,\vec{p}\_{i0} + \vec{p}\_D, \tag{4}
$$

$$R\_z(\alpha) = \begin{pmatrix} c\_{\alpha} & s\_{\alpha} & 0 \\ -s\_{\alpha} & c\_{\alpha} & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad R\_y(\beta) = \begin{pmatrix} c\_{\beta} & 0 & -s\_{\beta} \\ 0 & 1 & 0 \\ s\_{\beta} & 0 & c\_{\beta} \end{pmatrix}, \quad R\_x(\gamma) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c\_{\gamma} & s\_{\gamma} \\ 0 & -s\_{\gamma} & c\_{\gamma} \end{pmatrix},$$

where $s\_x$ and $c\_x$ are abbreviations for $\sin x$ and $\cos x$ needed in the following. From Equation (3), it can be seen that the position of the UAV along the cables (*x*-direction) cannot be determined because the magnetic field does not provide sufficient information. Thus, only the projections of the magnetometers and the UAV frame origin in the *yz*-plane can be computed. The representation of the magnetic field $\vec{v}\_i$ measured by the *i*-th sensor at position $\vec{p}\_i$ expressed in {*G*} then becomes:

$$\vec{B}(\vec{p}\_i) = R\_G^{L\_u}(\alpha, \beta, \gamma)\; R\_{L\_u}^{L\_i}\; \vec{v}\_i, \tag{5}$$

where $R\_{L\_u}^{L\_i}$ is the user-defined orientation of $L\_i$ in the UAV body frame $L\_u$ and $R\_G^{L\_u}$ is the orientation of $L\_u$ in the global frame {*G*}. If $L\_i$ is aligned with {*G*}, then $R\_G^{L\_u} R\_{L\_u}^{L\_i} = I$, where *I* is the identity matrix. In this case, the sensor measures exactly the magnetic field given by Equation (3), i.e., $\vec{B}(\vec{p}\_i) = \vec{v}\_i$. The system of three equations in (5) establishes a relationship between the sensor measurement, the sensor position, and the UAV orientation. It is nonlinear and has five unknowns $(y\_i, z\_i, \alpha, \beta, \gamma)^T$, which are determined in the following sections.
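
As a sanity check of the conventions above, the following sketch (Python; the angle and offset values are arbitrary illustration inputs) builds the three elementary rotations exactly as defined after Equation (4), composes them into $R\_G^{L\_u}$, and evaluates Equation (4) for one sensor:

```python
import math

def Rz(a):  # yaw rotation, frame-rotation form as given after Equation (4)
    c, s = math.cos(a), math.sin(a)
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def Ry(b):  # pitch rotation
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, -s], [0, 1, 0], [s, 0, c]]

def Rx(g):  # roll rotation
    c, s = math.cos(g), math.sin(g)
    return [[1, 0, 0], [0, c, s], [0, -s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def rot(alpha, beta, gamma):
    """R_G^{Lu}(alpha, beta, gamma) = R_x(gamma) R_y(beta) R_z(alpha)."""
    return matmul(Rx(gamma), matmul(Ry(beta), Rz(alpha)))

def sensor_position(p_i0, p_D, alpha, beta, gamma):
    """Equation (4): p_i = R_G^{Lu} p_i0 + p_D."""
    R = rot(alpha, beta, gamma)
    return [sum(R[i][j] * p_i0[j] for j in range(3)) + p_D[i] for i in range(3)]
```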

#### *2.2. Orientation Determination*

In this section, the UAV angles *α*, *β*, and *γ* are derived. As will be shown below, the calculation is possible without knowledge of the field equation and the transmission line current. Three magnetometers were considered. For the following analysis, their corresponding equations were set up in matrix notation, where $0\_{3\times 3}$ is a 3 × 3 zero matrix:

$$
\begin{pmatrix}
\vec{B}(\vec{p}\_1) \\
\vec{B}(\vec{p}\_2) \\
\vec{B}(\vec{p}\_3)
\end{pmatrix} = \begin{pmatrix}
R\_G^{L\_u} R\_{L\_u}^{L\_1} & 0\_{3\times 3} & 0\_{3\times 3} \\
0\_{3\times 3} & R\_G^{L\_u} R\_{L\_u}^{L\_2} & 0\_{3\times 3} \\
0\_{3\times 3} & 0\_{3\times 3} & R\_G^{L\_u} R\_{L\_u}^{L\_3}
\end{pmatrix} \begin{pmatrix}
\vec{v}\_1 \\
\vec{v}\_2 \\
\vec{v}\_3
\end{pmatrix}.\tag{6}
$$

It is obvious that this system of equations is solvable, since the rank of the matrix is nine, which is equal to the number of unknowns. In general, in addition to the three angles, each row provides two unknowns $y\_i$ and $z\_i$. Now, the UAV's yaw and pitch angles can be calculated directly from the two sensor measurements $\vec{v}\_l$ and $\vec{v}\_n$ and the zero field property (2). With this and the Cartesian unit vector $\vec{e}\_x$, it follows from Equation (6) that:

$$0 = (R\_G^{L\_u} R\_{L\_u}^{L\_i}\,\vec{v}\_i) \cdot \vec{e}\_x, \qquad i = l, n. \tag{7}$$

Obviously, this equation does not depend on the sensor positions or the choice of their local frames. Thus, they can be located anywhere on the UAV and have any local frame. However, to avoid additional transformations or computational effort, the sensor frames are aligned with the UAV body frame. Then, the equation system (7) becomes:

$$\begin{array}{l} 0 = c\_{\beta}\,c\_{\alpha}\,v\_{lx} + c\_{\beta}\,s\_{\alpha}\,v\_{ly} - s\_{\beta}\,v\_{lz}, \\ 0 = c\_{\beta}\,c\_{\alpha}\,v\_{nx} + c\_{\beta}\,s\_{\alpha}\,v\_{ny} - s\_{\beta}\,v\_{nz}. \end{array} \tag{8}$$

After division by *c<sup>β</sup>* and subsequent subtraction, *tan β* can be eliminated. Solving the remaining equation for *α* leads to:

$$\alpha = \operatorname{atan}\left(\frac{v\_{lz}v\_{nx} - v\_{lx}v\_{nz}}{v\_{ly}v\_{nz} - v\_{lz}v\_{ny}}\right) = \operatorname{atan}\left(\frac{(\vec{v}\_l \times \vec{v}\_n) \cdot \vec{e}\_y}{(\vec{v}\_l \times \vec{v}\_n) \cdot \vec{e}\_x}\right) \tag{9}$$

and further with *A* = *tan α* to:

$$\beta = \operatorname{atan}\left(\frac{v\_{nx} + v\_{ny}A}{v\_{nz}\sqrt{A^2 + 1}}\right) = \operatorname{atan}\left(\frac{v\_{lx} + v\_{ly}A}{v\_{lz}\sqrt{A^2 + 1}}\right). \tag{10}$$
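
Equations (9) and (10) can be exercised end-to-end on synthetic data: choose a ground-truth attitude, generate two sensor readings $\vec{v}\_l$, $\vec{v}\_n$ from field vectors with a vanishing *x*-component (the zero field property (2)), and recover *α* and *β*. A minimal sketch (Python; the field samples and the attitude are arbitrary test values, and the rotation matrix follows the definitions of Section 2.1):

```python
import math

def rot(alpha, beta, gamma):
    """R_G^{Lu} = R_x(gamma) R_y(beta) R_z(alpha), using the matrix forms of Section 2.1."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    return [
        [cb * ca,                cb * sa,               -sb     ],
        [sg * sb * ca - cg * sa, sg * sb * sa + cg * ca, sg * cb],
        [cg * sb * ca + sg * sa, cg * sb * sa - sg * ca, cg * cb],
    ]

def measure(B, alpha, beta, gamma):
    """Sensor reading for a sensor frame aligned with the body: B = R v  =>  v = R^T B."""
    R = rot(alpha, beta, gamma)
    return [sum(R[k][i] * B[k] for k in range(3)) for i in range(3)]

def yaw_pitch(vl, vn):
    """Equations (9) and (10): alpha from the cross product of two readings, then beta."""
    cx = vl[1] * vn[2] - vl[2] * vn[1]   # (v_l x v_n) . e_x
    cy = vl[2] * vn[0] - vl[0] * vn[2]   # (v_l x v_n) . e_y
    alpha = math.atan(cy / cx)
    A = math.tan(alpha)
    beta = math.atan((vl[0] + vl[1] * A) / (vl[2] * math.sqrt(A * A + 1)))
    return alpha, beta

# ground-truth attitude and two synthetic field samples with B_x = 0 (zero field property)
TRUE_ALPHA, TRUE_BETA, TRUE_GAMMA = 0.3, -0.2, 0.5
vl = measure([0.0, 1.5, -0.7], TRUE_ALPHA, TRUE_BETA, TRUE_GAMMA)
vn = measure([0.0, 1.0, 2.0], TRUE_ALPHA, TRUE_BETA, TRUE_GAMMA)
alpha_est, beta_est = yaw_pitch(vl, vn)
```

Note that the roll angle γ of the ground truth does not disturb the result, as the derivation above predicts.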

The roll angle *γ* can also be computed by exploiting the zero field property (2). However, the rotation matrix $R\_G^{L\_u}$ does not in general contain *γ* in its *x*-row. Thus, a different sensor *k* must have a different local frame $L\_k$ in order for the zero to appear in the *y*- or *z*-row, which contains all three angles. This can be achieved, for example, by placing the sensor so that it is rotated relative to the UAV body frame $L\_u$ by an angle *δ* about the *y*-axis, i.e., it is:

$$R\_{L\_u}^{L\_k} = R\_y(\delta). \tag{11}$$

Then, from Equation (5), it follows that after applying the coordinate transformation $A = R\_G^{L\_u} R\_y^T(\delta) R\_{L\_u}^G$, the vector $\vec{B}(\vec{p}\_k)$ is expressed in $L\_u$:

$$\begin{split} \hat{\vec{B}}(\vec{p}\_k) &= A\,\vec{B}(\vec{p}\_k) = R\_G^{L\_u} R\_y^T(\delta)\, R\_{L\_u}^G\, R\_G^{L\_u} R\_y(\delta)\,\vec{v}\_k, \\ &\quad \text{for } \delta = \frac{\pi}{2} \Rightarrow \begin{pmatrix} \hat{B}\_x(\vec{p}\_k) \\ \hat{B}\_y(\vec{p}\_k) \\ 0 \end{pmatrix} = R\_G^{L\_u}\,\vec{v}\_k. \end{split} \tag{12}$$

From Equation (12), it follows that the rotation of $\vec{B}(\vec{p}\_k)$ for a given angle $\delta = \frac{\pi}{2}$ around the *y*-axis of $L\_u$ has forced the *z*-component of the resulting vector $\hat{\vec{B}}(\vec{p}\_k)$ to zero:

$$\hat{B}\_z(\vec{p}\_k) = 0 = (R\_G^{L\_u}\,\vec{v}\_k) \cdot \vec{e}\_z, \tag{13}$$

where $\vec{e}\_z$ is the Cartesian unit vector. Then, in a first step, Equation (13) can be written as:

$$0 = (s\_{\gamma}s\_{\alpha} + c\_{\gamma}c\_{\alpha}s\_{\beta})v\_{kx} + (c\_{\gamma}s\_{\beta}s\_{\alpha} - c\_{\alpha}s\_{\gamma})v\_{ky} + c\_{\gamma}c\_{\beta}v\_{kz}.$$

Next, the division by $c\_{\gamma}$ forms the tangent function $t\_{\gamma}$, which can be isolated and finally solved for *γ*:

$$\begin{split} 0 &= (t\_{\gamma}s\_{\alpha} + c\_{\alpha}s\_{\beta})v\_{kx} + (s\_{\beta}s\_{\alpha} - c\_{\alpha}t\_{\gamma})v\_{ky} + c\_{\beta}v\_{kz} \\ &\Rightarrow \gamma = \operatorname{atan}\left(\frac{v\_{ky}s\_{\beta}s\_{\alpha} + v\_{kx}s\_{\beta}c\_{\alpha} + v\_{kz}c\_{\beta}}{v\_{ky}c\_{\alpha} - v\_{kx}s\_{\alpha}}\right). \end{split} \tag{14}$$

The sensor can be placed anywhere on the UAV. However, the result only applies to the local frame definition in (11). Theoretically, arbitrary local frame definitions can be chosen, but in this case, there is no zero in any coordinate, and the whole system of Equation (12) must be solved since the sensor position $\vec{p}\_k$ is unknown. However, the equations are complicated, and it is questionable whether an analytical solution exists. The presented formulas are valid if the UAV angles are within the definition range *D*, i.e., $\alpha, \beta, \gamma \in D = [-\frac{\pi}{2}, \frac{\pi}{2}]$. The limits of the equations presented are discussed in Section 3.

#### *2.3. Position Determination*

In the previous section, it was shown that the UAV orientation and position computation can be decoupled into two independent problems. In doing so, it was shown how the UAV orientation can be computed using the zero field property and without knowledge of the transmission line currents or the magnetic field equation. This section now deals with the computation of the respective sensor positions *pi*, which are needed to finally determine the UAV position *pD* via Equation (4).

With the decoupling presented, the complexity of the equation system (6) can be greatly reduced, since by knowing the angles, the right side can be computed and becomes a simple real-numbered vector. With $R\_G^{L\_u} R\_{L\_u}^{L\_i}\,\vec{v}\_i = \vec{V}\_i$, the system becomes:

$$
\begin{pmatrix}
\vec{B}(\vec{p}\_1) \\
\vec{B}(\vec{p}\_2) \\
\vec{B}(\vec{p}\_3)
\end{pmatrix} = \begin{pmatrix}
\vec{V}\_1 \\
\vec{V}\_2 \\
\vec{V}\_3
\end{pmatrix}.
\tag{15}
$$

Obviously, the computation of the respective sensor positions is also decoupled. Thus, to obtain *pD*, it is sufficient to solve one of these rows in (15). Then, for a sensor *i*, the corresponding row is:

$$
\vec{B}(\vec{p}\_i) = \begin{pmatrix} B\_y(y\_i, z\_i) \\ B\_z(y\_i, z\_i) \end{pmatrix} = \begin{pmatrix} V\_{iy} \\ V\_{iz} \end{pmatrix} = \vec{V}\_i, \tag{16}
$$

which represents a system of two equations. The components $B\_y$ and $B\_z$ describe the magnetic field in two dimensions in {*G*} and were obtained by calculating the Biot–Savart integral (1). This means, on the one hand, that the two equations in (16) are linearly independent and theoretically allow the determination of the unknowns $y\_i$ and $z\_i$. On the other hand, the equations are nonlinear, and the question arises whether it is possible to solve (16) analytically and, if so, how. Several approaches exist in this context, all of which were investigated in this work. Since the following considerations are equally valid for all sensors, the subscript *i* is omitted for simplicity.

The first row in (16) can be solved for one coordinate *y* or *z*, which is then inserted into the second row to compute the other coordinate, and the same in reverse. However, solving $B\_y = V\_y$ for *z* or $B\_z = V\_z$ for *y* in the general form leads to extremely large solutions $z(y, V\_y)$ and $y(z, V\_z)$, which contain root terms with eighth-order polynomials as arguments. The other two cases, i.e., solving $B\_y = V\_y$ for *y* or $B\_z = V\_z$ for *z*, lead to smaller solutions. For example, from the latter, the four solutions for *z* follow:

$$z(y, V\_z) = \pm \sqrt{\pm D(y^2, y) + a(y^2, y)},$$

$$\begin{array}{ll}\text{with} & D = \frac{1}{V\_z}\sqrt{4V\_z^2 y^2 y\_0^2 - 4CV\_z y y\_0^2 + C^2 y^2} \\ & a = -y\_0^2 - y^2 + \frac{Cy}{V\_z} \end{array} \tag{17}$$

The result $y(z, V\_y)$ has the same structure, but is not used, as the simplest final equation is found by inserting (17) into $B\_y^2(y, z) = V\_y^2$. With *D* from (17), it becomes:

$$\frac{V\_z(\text{Cy} \pm D)^2 (-V\_z y\_0^2 - V\_z y^2 + \text{Cy} \pm D)}{y^2 (-2V\_z y\_0^2 + \text{Cy} \pm D)^2} = V\_y^2. \tag{18}$$

The equation can in principle be solved for *y*. However, no suitable substitution could be found such that the root terms disappear and, at the same time, a polynomial of an order smaller than five is obtained. For example, after the substitution $E = Cy \pm D$, unwanted $y^2$ expressions still remain in the equation. The square roots can also be eliminated by subsequent isolation and squaring. However, this leads to a high-order polynomial. In summary, it is difficult, if not impossible, to derive an analytical solution from the equation system (16) alone. In this context, the idea is to consider the signal power as an additional equation. As will be shown in the next steps, its combination with (16) leads to a compact analytical solution, which is derived below. At a given position, the signal power can be calculated by taking the dot product of the corresponding sensor response $\vec{V}$ with itself. This must be equal to the dot product of $\vec{B}(y, z)$ with itself, leading to the following equation:

$$\vec{V} \cdot \vec{V} = |\vec{V}|^2 = \vec{B}(y, z) \cdot \vec{B}(y, z) \;\Rightarrow\; |\vec{V}|^2 = \frac{4C^2(z^2 + y^2)}{z^4 + y^4 + (2y\_0^2 + 2y^2)z^2 + y\_0^4 - 2y^2y\_0^2}. \tag{19}$$
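
Equation (19) can be cross-checked numerically against the componentwise field model (3): the squared norm of $\vec{B}$ must equal the closed-form right-hand side at every point away from the conductors. A short check (Python; *C* and $y\_0$ are illustration values):

```python
def power_lhs(y, z, C=1.0, y0=0.25):
    """|B|^2 computed directly from the field components of Equation (3)."""
    d1 = z**2 + (y + y0)**2
    d2 = z**2 + (y - y0)**2
    by = C * (-z / d1 - z / d2)
    bz = C * ((y + y0) / d1 + (y - y0) / d2)
    return by**2 + bz**2

def power_rhs(y, z, C=1.0, y0=0.25):
    """Closed-form signal power, right-hand side of Equation (19)."""
    num = 4 * C**2 * (z**2 + y**2)
    den = z**4 + y**4 + (2 * y0**2 + 2 * y**2) * z**2 + y0**4 - 2 * y**2 * y0**2
    return num / den
```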

The simplest equation is found by solving the formula for $z^2$ and inserting it into $B\_z(y, z) = V\_z$. For this, the two solutions for $z^2$ are obtained with $P = |\vec{V}|$:

$$z^2(y, P) = \pm 2\sqrt{y^2 y\_0^2 - \frac{C^2 y\_0^2}{P^2} + \frac{C^4}{P^4}} - y\_0^2 - y^2 + \frac{2C^2}{P^2}.\tag{20}$$

Next, with the positive solution of (20), the equation *Bz*(*y*, *z*2(*y*, *P*)) = *Vz* can be written as:

$$\frac{-P^4 y\, y\_0^2 + \hat{D} P^2 y + C^2 P^2 y}{-C P^2 y\_0^2 + 2C\hat{D} + 2C^3} = V\_z, \quad \text{with} \quad \hat{D} = \sqrt{(P^4 y^2 - C^2 P^2)\,y\_0^2 + C^4}.$$

Further, $\hat{D}$ is isolated and squared, which leads to:

$$\hat{D}^2 = \frac{\left((P^4 y - CP^2 V\_z)y\_0^2 - C^2 P^2 y + 2C^3 V\_z\right)^2}{(P^2 y - 2CV\_z)^2}.$$

This eliminates the square root $\hat{D}$. After re-substituting and transforming the equation, the position calculation is reduced to a root-finding problem for a fourth-order polynomial:

$$\sum\_{n=0}^{4} a\_n y^n = 0, \qquad \begin{array}{ll} a\_4 = P^8 y\_0^2, & a\_3 = -4CP^6 V\_z y\_0^2, \\ a\_2 = -P^8 y\_0^4 + C^2 P^6 y\_0^2 + 4C^2 P^4 V\_z^2 y\_0^2, & \\ a\_1 = 2CP^6 V\_z y\_0^4 - 2C^3 P^4 V\_z y\_0^2, & a\_0 = -C^2 P^4 V\_z^2 y\_0^4. \end{array} \tag{21}$$
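
The coefficients in (21) can be validated numerically: with $V\_z$ and *P* generated from the field model (3) at a known sensor location, the true *y* must be a root of the quartic. A short check (Python; *C*, $y\_0$, and the location are illustration values):

```python
C, Y0 = 1.0, 0.25
y, z = 0.1, 0.3   # assumed true sensor location in the yz-plane

# V_z and P^2 generated from the field model, Equation (3)
d1 = z**2 + (y + Y0)**2
d2 = z**2 + (y - Y0)**2
Vy = C * (-z / d1 - z / d2)
Vz = C * ((y + Y0) / d1 + (y - Y0) / d2)
P2 = Vy**2 + Vz**2

# coefficients of the quartic, Equation (21), written in powers of P^2
a4 = P2**4 * Y0**2
a3 = -4 * C * P2**3 * Vz * Y0**2
a2 = -P2**4 * Y0**4 + C**2 * P2**3 * Y0**2 + 4 * C**2 * P2**2 * Vz**2 * Y0**2
a1 = 2 * C * P2**3 * Vz * Y0**4 - 2 * C**3 * P2**2 * Vz * Y0**2
a0 = -C**2 * P2**2 * Vz**2 * Y0**4

# the true y must satisfy the polynomial
residual = a4 * y**4 + a3 * y**3 + a2 * y**2 + a1 * y + a0
```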

The general analytic root expressions of such a polynomial are extremely large. Therefore, it is important to make Equation (3) as simple as possible by placing the global coordinate frame exactly halfway between the transmission lines. In addition, the lines should be placed in the *xy*-plane parallel to the *x*-axis. Then, with the coefficients $a\_n$ given in (21), the four large expressions for *y* are greatly reduced and become:

$$y = \pm \frac{\sqrt{P^4 y\_0^2 + 2C^2 V\_z \sqrt{V\_z^2 - P^2} + 2C^2 V\_z^2 - C^2 P^2}}{2P^2} \pm \frac{\sqrt{P^4 y\_0^2 - 2C^2 V\_z \sqrt{V\_z^2 - P^2} + 2C^2 V\_z^2 - C^2 P^2}}{2P^2} + \frac{C V\_z}{P^2} \tag{22}$$

where *S* represents the two square root addends. The same result is obtained for the negative solution of Equation (20). Now, in the next step, four different cases can be distinguished, all of which lead to a certain class of solutions.

(I) $V\_y, V\_z \neq 0$:

**Theorem 1.** *Two of the y-solutions in* (22) *can immediately be excluded, because they are complex.*

**Proof.** $|V\_z| \leq |\vec{V}| = P,\ \forall\,\vec{V} \in \mathbb{R}^3 \Rightarrow \sqrt{V\_z^2 - P^2} \in \mathbb{C}$. With this, the above square root addends represented by *S* can be written as:

$$S = \pm \sqrt{a + jb} \pm \sqrt{a - jb},$$

$$a = P^4 y\_0^2 + (2V\_z^2 - P^2)C^2, \; b = 2C^2 V\_z \sqrt{P^2 - V\_z^2}.\tag{23}$$

Next, the parameters *r* and *φ* are introduced as:

$$
r = \sqrt{a^2 + b^2}, \quad \varphi = \tan^{-1}\left(\frac{b}{a}\right) + \pi. \tag{24}
$$

Now, if the two addends in *S* have different signs, this results in:

$$\begin{split} S &= \pm\sqrt{a + jb} \mp \sqrt{a - jb} = \pm\sqrt{r}\,e^{j\frac{\varphi}{2}} \mp \sqrt{r}\,e^{-j\frac{\varphi}{2}} \\ &= \pm 2j\sqrt{r}\,\frac{1}{2j}\left(e^{j\frac{\varphi}{2}} - e^{-j\frac{\varphi}{2}}\right) = \pm 2j\sqrt{r}\,\sin\frac{\varphi}{2}, \end{split}$$

whereby the *e*-expressions are combined using Euler's formula. Since *S* is purely imaginary in this case, the corresponding *y*-solutions are complex, which proves the theorem.

Applying this procedure for the case of the same sign, the real-numbered and more compact expressions for *y* can be derived. Therefore, in a first step, *S* is determined:

$$S = \pm\sqrt{a + jb} \pm \sqrt{a - jb} = \pm\sqrt{r}\,e^{j\frac{\varphi}{2}} \pm \sqrt{r}\,e^{-j\frac{\varphi}{2}}$$

$$= \pm 2\sqrt{r}\,\frac{1}{2}\left(e^{j\frac{\varphi}{2}} + e^{-j\frac{\varphi}{2}}\right) = \pm 2\sqrt{r}\,\cos\frac{\varphi}{2}.$$

The square root addends in Equation (22) represented by *S* now can be replaced by this result, leading to:

$$y\_{1,2} = \frac{S}{2P^2} + \frac{CV\_z}{P^2} = \frac{1}{P^2}\left(\pm\sqrt{r}\,\cos\frac{\varphi}{2} + CV\_z\right), \tag{25}$$

with *r* and *φ* according to (24).
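
The Case (I) solution path, Equations (23)–(25), can be tested end-to-end: place a sensor at a known (*y*, *z*), generate $V\_y$, $V\_z$, and *P* from the field model (3), and check that the true *y* appears among the two candidates of (25). In the sketch below (Python; *C*, $y\_0$, and the sensor location are illustration values), the quadrant-correct `atan2(b, a)` is used for the angle, which handles both signs of *a* without the branch bookkeeping of (24):

```python
import math

C, Y0 = 1.0, 0.25          # illustration values: prefactor and half line spacing
y_true, z_true = 0.1, 0.3  # assumed sensor location in the yz-plane

# sensor reading from the field model, Equation (3)
d1 = z_true**2 + (y_true + Y0)**2
d2 = z_true**2 + (y_true - Y0)**2
Vy = C * (-z_true / d1 - z_true / d2)
Vz = C * ((y_true + Y0) / d1 + (y_true - Y0) / d2)
P2 = Vy**2 + Vz**2                       # P^2 = |V|^2

# Equations (23)-(25): Case (I), V_y != 0 and V_z != 0
a = P2**2 * Y0**2 + (2 * Vz**2 - P2) * C**2
b = 2 * C**2 * Vz * math.sqrt(P2 - Vz**2)
r = math.hypot(a, b)                     # r = sqrt(a^2 + b^2), Equation (24)
phi = math.atan2(b, a)                   # quadrant-correct argument of a + jb
y_candidates = [( math.sqrt(r) * math.cos(phi / 2) + C * Vz) / P2,
                (-math.sqrt(r) * math.cos(phi / 2) + C * Vz) / P2]
```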

**Theorem 2.** *There exist two real solutions* $y\_1 \neq y\_2$*. One each is in* $y < 0$ *and* $y > 0$*, respectively.*

**Proof.** The hypothesis is true if the absolute value of the square root addends in Equation (22) is always greater than $C|V\_z|P^{-2}$:

$$\begin{split} \frac{|\pm(\sqrt{a+jb}+\sqrt{a-jb})|}{2P^2} > \frac{C|V\_z|}{P^2} &\implies |\sqrt{a+jb}+\sqrt{a-jb}|^2 > (2C|V\_z|)^2 \\ \Rightarrow q\_l &= \sqrt{a+jb}\sqrt{a-jb} > 2C^2V\_z^2 - a = q\_r. \end{split}$$

Further, with *a*, *b* from (23), the inequality can be verified by comparing the squares of both sides (for $q\_r < 0$, it holds trivially, since $q\_l \geq 0$):

$$q\_l^2 = P^8 y\_0^4 + (4C^2 P^4 V\_z^2 - 2C^2 P^6)\,y\_0^2 + C^4 P^4,$$

$$q\_r^2 = P^8 y\_0^4 - 2C^2 P^6 y\_0^2 + C^4 P^4,$$

$$\Rightarrow q\_l^2 - q\_r^2 = 4C^2 P^4 V\_z^2 y\_0^2 > 0.$$

(II) *Vy* = 0 ⇒ *P* = |*Vz*|: In this case, the excluded complex solutions must also be considered, since they become real. Both give the same result *y*3. The two real solutions *y*1,2 can again be obtained from (25) or from the general Equation (22).

$$y\_{1,2} = \frac{1}{V\_z^2}\left(\pm\sqrt{V\_z^4 y\_0^2 + C^2 V\_z^2} + CV\_z\right), \quad y\_3 = \frac{C}{V\_z}. \tag{26}$$


(III) $V\_z = 0 \Rightarrow P = |V\_y|$: The solutions are obtained analogously to Case (II), with $|y\_1| = |y\_2|$:

$$y\_{1,2} = \pm \frac{\sqrt{V\_y^4 y\_0^2 - \mathcal{C}^2 V\_y^2}}{V\_y^2}, \quad y\_3 = 0 \tag{27}$$

(IV) *Vy* = *Vz* = 0: If this case occurs, the sensor lies exactly in the origin of the global frame.

Once the corresponding *y* is determined, in a final step, *z* can be calculated from Equation (20):

$$z = \pm \sqrt{\pm 2\sqrt{(y^2 - \frac{C^2}{P^2})y\_0^2 + \frac{C^4}{P^4}} - y\_0^2 - y^2 + \frac{2C^2}{P^2}},\tag{28}$$

offering four possible *z*-solutions, all of which can be real-numbered. There are also situations where, for a given *y*, a subset of the *z*-solutions in (28) can become complex. Therefore, their use must be treated with caution, especially when the formulas are implemented on a microcontroller of a real UAV. A safer way is to check the arguments of the square roots. The argument of the outer square root can be written as:

$$A\_o = \pm g + h, \; h = -y\_0^2 - y^2 + \frac{2C^2}{P^2}, \tag{29}$$

where *g* is the inner square root expression. For all solutions $y\_{1,2,3}$ obtained, $g \geq 0$ holds, since they were obtained using the $z^2$-expression (20). Thus, the argument of the inner square root does not need to be checked. In contrast, $A\_o$ can become negative in general and should be checked to ensure that the condition $A\_o > 0$ is fulfilled before applying the square root.
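
The argument check on Equations (28) and (29) can be implemented as a small guarded routine: compute the inner square root once, then keep only the sign combinations with a nonnegative outer argument $A\_o$. A sketch (Python; *C*, $y\_0$, and the sensor location are illustration values) that recovers the *z*-candidates for a known *y*:

```python
import math

def z_candidates(y, P2, C=1.0, y0=0.25):
    """Real z-solutions of Equation (28) for a given y, with the A_o guard of (29)."""
    inner = (y**2 - C**2 / P2) * y0**2 + C**4 / P2**2
    if inner < 0:                 # g must be real; cf. the discussion of Equation (29)
        return []
    g = 2.0 * math.sqrt(inner)
    h = -y0**2 - y**2 + 2 * C**2 / P2
    out = []
    for sign in (+1.0, -1.0):
        Ao = sign * g + h         # argument of the outer square root, Equation (29)
        if Ao >= 0:
            root = math.sqrt(Ao)
            out.extend([root, -root])
    return out

# sensor at (y, z) = (0.1, 0.3); V and P^2 generated from the field model (3)
d1 = 0.3**2 + (0.1 + 0.25)**2
d2 = 0.3**2 + (0.1 - 0.25)**2
Vy = -0.3 / d1 - 0.3 / d2
Vz = (0.1 + 0.25) / d1 + (0.1 - 0.25) / d2
P2 = Vy**2 + Vz**2
cands = z_candidates(0.1, P2)
```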

#### *2.4. Selecting the Right Solution*

Equation (28) yields four *z*-solutions for each *y*-solution of Cases (I)–(IV). Thus, depending on the case, there are up to twelve possible positions in the *yz*-plane for a given sensor. For navigation, it is necessary to know which of these points is the correct one. Finding it is the task of this subsection.

Now, in the first approach, the number of possible positions can be narrowed down by excluding Cases (II)–(IV). The reason for this is that they can only occur in very few locations and for a very short time during navigation, which extremely reduces their probability of occurrence. For example, the solutions of Case (II) all lie on the *y*-axis. In Case (III), one solution lies on the *z*-axis and the other two lie on a circle with a diameter equal to the distance between the transmission lines, i.e., $2y\_0$. This can be shown by computing the radius $\sqrt{y\_{1,2}^2 + |z^2(y\_{1,2})|}$, where $y\_{1,2}$ comes from Equation (27) and $z^2$ comes from (20). The case with the lowest probability of occurrence is Case (IV), where the sensor is located directly at the origin. The locations of Cases (II)–(IV) are summarized in Figure 7 and shown as black and red lines. Since they are not relevant for navigation and only increase the implementation effort, they can be ignored. Thus, since only Case (I) is considered, eight possible positions for a single sensor remain, from which the correct one must still be selected.

In the next steps, it will be seen that six of these can be excluded. To do this, some constraints must be met. As explained in Section 2.1, the sinusoidal transmission line current can be treated as an equivalent constant current *I*, i.e., it never changes its direction. Furthermore, the UAV orientation must not exceed the definition range *D* from Section 2.2, with *α*, *β*, *γ* ∈ *D* = [−*π*/2, *π*/2]. This would otherwise lead to ambiguities due to the symmetry of the magnetic field. The details concerning ambiguities are the subject of Section 3. Now, the idea is to investigate which of the eight remaining positions lead to a match with the sensor measurement *V*, i.e., which positions satisfy Equation (16).

**Theorem 3.** *Given a sensor k that measures $V_k = (V_{ky}, V_{kz})^T$ and has two y-solutions according to* (25) *in Case (I), for each of the two values, there are four associated z-solutions according to Equation* (28)*. However, for each y, there is only one z-solution that leads to a match with $V_k$, i.e., there are only two possible locations in the yz-plane where the sensor can lie.*

**Proof.** Let $y_k$ be one of the two *y*-solutions. Then, it generates the four *z*-solutions, which in simplified notation are $z_n = \pm\sqrt{\pm g + h}$, with $g, h \neq 0$. $z_n$ has two outer and two inner signs. Further, the $B_z$-component in Equation (3) has a $z^2$-term in the denominator as its only *z*-dependency, so it can be written as $B_z(y_k, z^2)$. Now, if for $z_l = \pm\sqrt{g + h}$, it follows that:

$$B_z(y_k, z_l^2) = B_z(y_k, g + h) = V_{kz},$$

then, since $g + h \neq -g + h$:

$$V_{kz} = B_z(y_k, g + h) \neq B_z(y_k, -g + h),$$

because $B_z(y_k, z^2)$ is strictly monotonic. This means that only one of the inner signs of $z_n$ leads to a match with $V_{kz}$. This fact is independent of the outer sign. Once the inner sign is known, the $B_y$-component in Equation (3) can be analyzed similarly. Therefore, let $z_m^2 = g + h$ lead to a match with $V_{kz}$. Now, if $B_y(y_k, z_m) = V_{ky}$, then $B_y(y_k, -z_m)$ is:

$$C\left(\frac{-(-z_m)}{z_m^2 + (y_k + y_0)^2} + \frac{-(-z_m)}{z_m^2 + (y_k - y_0)^2}\right) = -V_{ky}.\tag{30}$$

As shown in the last step, only one of the outer signs of *zn* can be the correct one, leading to a match with *Vky*.

In summary, there are only two possible locations for a single sensor, so that Equation (16) is satisfied. With the constraints defined above, another fact follows from the proof. It is needed for the consideration of the ambiguities in Section 3.
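The sign selection in the proof can be mirrored in code. The following Python sketch assumes the two-conductor field model consistent with the $B_y$ form appearing in Equation (30), together with a $B_z$ counterpart of the standard form for two parallel line currents (an assumption here, since Equation (3) lies in an earlier section); it keeps only the candidate that matches the measurement:

```python
def field(y, z, y0, C):
    """Field of two parallel conductors at (+-y0, 0) in the yz-plane.
    The B_y form matches Eq. (30); B_z is the standard counterpart
    (an assumption in this sketch)."""
    d1 = z ** 2 + (y + y0) ** 2
    d2 = z ** 2 + (y - y0) ** 2
    By = C * (-z / d1 + -z / d2)
    Bz = C * ((y + y0) / d1 + (y - y0) / d2)
    return By, Bz

def select_z(y, z_candidates, V_ky, V_kz, y0, C, tol=1e-9):
    """Keep the z-candidate whose field matches the measurement
    (V_ky, V_kz); by Theorem 3, at most one survives per y."""
    matches = []
    for z in z_candidates:
        By, Bz = field(y, z, y0, C)
        if abs(By - V_ky) < tol and abs(Bz - V_kz) < tol:
            matches.append(z)
    return matches
```

Since $B_z$ depends only on $z^2$, both outer signs pass the $B_z$ test, and it is the sign flip of $B_y$ from Equation (30) that eliminates one of them.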

**Corollary 1.** *The sign of Vy of the sensor response V with respect to the global frame* {*G*} *depends only on the half plane z* > 0 *or z* < 0 *in which the sensor is located.*

Consequently, both solutions are always in the same half plane. By Theorem 2, one lies in *y* < 0 and the other in *y* > 0. An example is given in Figure 7, which shows the solutions as bold dots.

**Figure 7.** Two remaining positions of a sensor.

Since both points must lie in areas with the same direction of the magnetic field lines to satisfy Equation (16), one is outside the circle and one is inside. Of the two remaining sensor positions that match *V*, one must still be excluded. By considering only a single sensor, it is impossible to determine the correct one. However, in the following, multiple sensors are considered instead, which provide much more information, through which a whole set of correct sensor positions can be selected.

The idea is as follows. On the one hand, the user-defined positions *pi*0 of *N* sensors form an original polygon in the *yz*-plane of the UAV frame *Lu*. It has a fixed geometric structure *Su* like a fingerprint. On the other hand, the two positions *pi*1,2 computed for each sensor can be combined into $2^N$ polygons in the global frame {*G*}. Finally, the one that matches the structure *Su* is selected. Before the comparison, each polygon would have to be transformed back to *Lu* using Equation (4). However, this is not possible, since the UAV position *pD* would already have to be known for this. One way around this problem is to choose a different reference point. For example, all polygons are shifted so that their upper left corner lies in the origin of {*G*} before comparison. What is important here is that the original polygon is first transformed to {*G*}. Its new geometric structure *SG* can be computed from the known UAV orientation and *pi*0 via Equation (4) with *pD* = 0. Then, its upper left corner is moved to the origin. The other polygons, obtained from the sensor measurements, are already representations with respect to {*G*}, so their upper left corner only needs to be shifted to the origin. The shift is allowed since it does not affect the polygon geometry. Finally, the comparison can be done using least-squares estimation. Here, the deviations of the corresponding polygon nodes are squared and summed. The polygon with the smallest sum best fits *SG* and is selected. From each of its nodes, the UAV position *pD* can then finally be computed using Equation (4). It is recommended to place the sensors in the *yz*-plane of *Lu* so that they span an area as large as possible. This allows for better polygon detection, since during aerial manipulation, the roll and pitch angles of the UAV will typically have a rather small misalignment with respect to the global frame {*G*}.
The polygon detection method proposed here was successfully tested. The implementation details and test results are presented in Section 4.
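The detection step can be sketched as follows in Python. The sketch assumes "upper left corner" means the node reference (smallest *y*, largest *z*); the function and variable names are illustrative, not from the paper's implementation:

```python
import itertools

import numpy as np

def normalize(poly):
    """Shift a polygon (N x 2 array of yz-nodes) so that its
    upper-left corner (smallest y, largest z) lies in the origin."""
    poly = np.asarray(poly, dtype=float)
    ref = np.array([poly[:, 0].min(), poly[:, 1].max()])
    return poly - ref

def detect_polygon(pairs, S_G):
    """pairs: one (p_i1, p_i2) candidate tuple per sensor;
    S_G: the original polygon transformed to {G} (N x 2 nodes).
    Returns the combination of candidates whose shifted shape
    best matches S_G in the least-squares sense."""
    S_ref = normalize(S_G)
    best, best_cost = None, float("inf")
    for choice in itertools.product(*pairs):      # all 2**N polygons
        poly = normalize(choice)
        cost = float(np.sum((poly - S_ref) ** 2))  # squared node deviations
        if cost < best_cost:
            best, best_cost = np.asarray(choice, dtype=float), cost
    return best
```

Because the shift removes the unknown translation *pD*, only the polygon geometry is compared, exactly as the method requires.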

#### **3. Ambiguities**

#### *3.1. Ambiguities Due to Field Symmetry*

An ambiguity exists if all of the UAV's sensors show the same output in at least two different locations. Under the condition of equivalent static transmission line current, several cases can be identified, which are shown in Figure 8.

**Figure 8.** Ambiguities due to field symmetry.

In the lower left, the real position of the local frame *Li* of the *i*-th sensor is shown. The UAV can now change its flight direction and additionally be above the transmission line in Case (a). The sensor output would be the same. In Case (b), the whole UAV is rotated by angle *π* around the *x*-axis. Here, since the magnetic field is mapped to itself, the sensor will again measure the same. In this case, the UAV is flying on its back.

#### *3.2. Ambiguities Due to Signal Symmetry*

As introduced in Section 2.1, transmission lines carry an alternating current. In addition to improved filtering capabilities, this also introduces a number of ambiguities in navigation, as shown below. The magnitude and sign of the three field components measured by a three-axis magnetometer can be determined in the time domain from the samples. For this purpose, a method was proposed in [24], which is beyond the scope of the present paper. It is taken as given for the following considerations. For example, a sensor with the sample values depicted in Figure 9 would yield the measured value $\vec{v} = (1\ \text{mT}, -200\ \mu\text{T}, -500\ \mu\text{T})^T$.

**Figure 9.** Sample values of a magnetometer.

However, a half period later, the signals in Figure 9 appear shifted, and the signs of the individual field components change; i.e., the sensor now measures −*v*. Since technically, only signal amplitudes are used, it is as if the equivalent static current *I* changed its direction every half period. A direct consequence of this fact follows from Corollary 1. Under the constraints defined in Section 2.4, it allows detecting whether the UAV is flying below or above the transmission lines, since the sign of the measured *y*-components *Viy* in the global frame depends only on which half plane, *z* > 0 or *z* < 0, the sensors are located in. However, since the direction of *I* changes cyclically, this is no longer possible without additional detection methods, since *Viy* also changes sign. Such a detection could be realized, for example, by comparing the signal energy of several sensors, exploiting the fact that the closer a sensor is to the cables, the higher the energy is. However, this is beyond the scope of this paper. The fact that it is impossible to uniquely determine the sensor position can even be seen directly from Equation (25). It contains the prefactor *C*, which, as shown in (17), depends on the current; thus, it can be *C* > 0 or *C* < 0, leading to different solutions. Furthermore, the whole approach proposed in Theorem 3 to narrow down the solutions no longer works.

#### *3.3. Resolving the Ambiguities*

If no additional electronic means are to be used, the ambiguities can be resolved as suggested in [15], where an electric vehicle approaches the inductive charging coil of a parking lot. Using only two magnetometers, the vehicle never knows whether the coil is in front of or behind it. It therefore assumes that the coil is always in front of it, i.e., that the driver always moves forward into the parking space. In the same way, concrete constraints for the navigation of a UAV must be defined and passed to it as a priori knowledge. These are:
1. The direction of the equivalent static current *I* is such that *C* > 0.
2. The UAV always flies below the transmission lines, i.e., in the half plane *z* < 0.
3. The UAV orientation always remains within the definition range *D*.


Furthermore, a selection criterion for the measured values must be established. This is because, as mentioned earlier, the sign of the measurement *vi* of sensor *i* changes with each half period. It must now be decided which one is correct. The selection criterion can be derived from the above constraints. With *C* > 0 and *z* < 0 from Conditions (1) and (2), it follows from the right-hand rule that the magnetic field always has a positive *y*-component, i.e., *By*(*y*, *z*) > 0. This can also be shown directly with Equation (3). Moreover, according to Theorem 3, this circumstance never changes as long as the UAV remains in the half plane *z* < 0 and as long as Constraint (3) is not broken. Using Equation (5) and the Cartesian unit vector $\vec{e}_y$ leads to the necessary selection criterion:

$$(R \; \vec{v}\_i) \cdot \vec{e}\_y > 0. \tag{31}$$

If *vi* does not satisfy (31), −*vi* does, and vice versa. Only one sign can be a match. Direct comparison of the signs of *viy* and *By*(*yi*, *zi*) is not allowed, since from *By*(*yi*, *zi*) > 0, *viy* > 0 does not necessarily follow. The last point to be clarified is the orientation of the UAV. Since the rotation matrix in (31) is used, it must be ensured that the angles can be computed independently of the direction of *I*.
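Criterion (31) reduces to a one-line check. The following minimal sketch assumes *R* is the 3 × 3 rotation matrix from Equation (5) and *vi* a NumPy vector; the function name is illustrative:

```python
import numpy as np

def resolve_sign(R, v_i):
    """Apply the selection criterion of Eq. (31): return v_i or -v_i,
    whichever has a positive y-component in the global frame."""
    e_y = np.array([0.0, 1.0, 0.0])   # Cartesian unit vector e_y
    return v_i if (R @ v_i) @ e_y > 0 else -v_i
```

For example, with *R* equal to the identity, a measurement with a negative *y*-component is simply flipped.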

**Theorem 4.** *Given two sensors i and k with equally aligned local frames and fixed locations $p_i \neq p_k$, as long as their measurement values $v_i$ and $v_k$ are obtained from the same signal period, the yaw angle α and the pitch angle β are independent of the direction of the static current I.*

**Proof.** Referenced to the same time interval containing a whole signal period, the sensors measure $\vec{v}_i, \vec{v}_k \neq \vec{0}$. Half a signal period later, when both measurements are calculated from the same new time interval and the sensors have not moved, the sign of the measurements is reversed. Then, the cross-product in Equation (9) for the yaw angle becomes $-\vec{v}_i \times -\vec{v}_k$. Now, by applying the rule $(r\vec{a}) \times \vec{b} = \vec{a} \times (r\vec{b}) = r(\vec{a} \times \vec{b})$, it follows that:

$$-\vec{v}_i \times -\vec{v}_k = (-1)^2 (\vec{v}_i \times \vec{v}_k) = \vec{v}_i \times \vec{v}_k.$$

Since the direction of the static current obviously has no effect on the cross-product, the result for the yaw angle also remains unchanged. Flipping the signs of the field components in Equation (10) for the pitch angle leads to:

$$\frac{-v_{kx} - v_{ky}A}{-v_{kz}\sqrt{A^2 + 1}} = \frac{v_{kx} + v_{ky}A}{v_{kz}\sqrt{A^2 + 1}}, \quad A = \tan\alpha.$$

The direction of the static current obviously also has no influence on the pitch angle.

Equation (14) for the roll angle *γ* uses the measurement from only one sensor as the input. Thus, it is also independent of the sign of *I*. In summary, the sign of *I* does not affect the orientation as long as the measurements are referenced to the same time interval of the corresponding signal.

#### **4. Simulation Results**

In this section, the presented equations for determining the UAV position and orientation are tested in the context of a software-in-the-loop (SIL) simulation.

#### *4.1. SIL Design and Test Procedure*

The SIL simulation was implemented in MATLAB and cyclically ran through a specified flight trajectory created with the Gazebo tool. Each waypoint of the trajectory is represented by a vector $(y_D, z_D, \hat\alpha, \hat\beta, \hat\gamma)^T$, which describes the position and orientation of the local UAV frame *Lu* with respect to the global frame {*G*}. From this, the SIL computes the positions of the virtual sensors *i* and the corresponding magnetic measurements *vi* with respect to the local sensor frame *Li* in each run. For this purpose, Equations (4) and (5) were used, with the rotation matrix following the Gazebo convention, i.e., $\hat{R}_G^{L_u} = R_x^T(\hat\gamma)\, R_y^T(\hat\beta)\, R_z^T(\hat\alpha)$. This corresponds to the transposed version of the definition used in this paper, i.e., $\hat{R}_G^{L_u} = (R_G^{L_u})^T$. In order for the angles *α*, *β*, and *γ* obtained in the test run to be compared later with the Gazebo angles $\hat\alpha$, $\hat\beta$, and $\hat\gamma$, they must first be converted at the end of each run. This is done automatically by the SIL. To calculate the measured values *vi*, the SIL assumes a transmission line system with the arrangement described in Section 2.1; i.e., the cables lie in the *xy*-plane at −*y*0 and *y*0. The SIL then passes the computed *vi* as input values to the test procedure, which verifies the entire method described in this publication. The test procedure uses the constraints defined in Section 3.3 as a priori knowledge for navigation. It is called cyclically by the SIL until all waypoints on the flight trajectory have been passed and is divided into the following steps:


$$
\delta L\_k = \sum\_{i=1}^{N} (\vec{p}\_{i,\varepsilon} - \vec{p}\_{i,k})^T (\vec{p}\_{i,\varepsilon} - \vec{p}\_{i,k}),
\tag{32}
$$

where *N* is the number of nodes, i.e., of sensors. Choose the polygons with the two smallest sums. Finally, choose the one of these two whose perimeter most closely matches that of the original polygon.

10. Calculate the UAV position from the found polygon for each node using Equation (4), i.e.,

$$
\vec{p}_{D,i} = \vec{p}_{i,k} - R_G^{L_u}(\alpha, \beta, \gamma)\, \vec{p}_{i0}.
$$

Take the average of all *pD*,*<sup>i</sup>* as the UAV position *pD*.
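Step 10 can be sketched as follows (illustrative names; `R_G_Lu` stands for the rotation matrix $R_G^{L_u}$, `nodes` for the detected polygon nodes $p_{i,k}$, and `sensor_offsets` for the user-defined positions $p_{i0}$):

```python
import numpy as np

def uav_position(nodes, sensor_offsets, R_G_Lu):
    """Compute the UAV position from every node of the detected
    polygon via p_D,i = p_i,k - R_G^{Lu} p_i0, then average."""
    p_D = [np.asarray(p_ik) - R_G_Lu @ np.asarray(p_i0)
           for p_ik, p_i0 in zip(nodes, sensor_offsets)]
    return np.mean(p_D, axis=0)
```

Averaging over all nodes reduces the effect of noise on any single sensor position.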


If the entire trajectory has been run through by the SIL, the specified and actual position are output graphically. The same goes for the angles. The software offers several configuration parameters. These are:


#### *4.2. Test Parameters*

The test cases and the associated results are presented below. If not otherwise stated, the following configuration is used:


$$
\vec{p}_1 = (0.05\,\text{m},\ 0.2\,\text{m},\ -0.05\,\text{m})^T, \quad \vec{p}_2 = (0.05\,\text{m},\ -0.2\,\text{m},\ -0.05\,\text{m})^T.
$$

• Two sensors rotated by 90◦. Their positions in *Lu* are:

$$
\vec{p}_3 = (0.05\,\text{m},\ 0.2\,\text{m},\ -0.25\,\text{m})^T, \quad \vec{p}_4 = (0.05\,\text{m},\ -0.2\,\text{m},\ -0.25\,\text{m})^T.
$$

In the previous publication [25], it was shown that the electric motor of a small 1:5 scale electric toy car can produce stray magnetic fields of several μT, especially when the vehicle accelerates. The reason for this is that inadequate magnetic shielding measures are often taken in technical installations. The supply lines of electronic components with high power requirements can therefore emit strong magnetic interference fields. Since the UAV was also equipped with several electric motors, the test cases were performed with different standard deviations *σ* of the white noise. A value of 1 μT was assumed for the case of good shielding and 8 μT for poorly shielded electric motors. These were realistic values based on the measurements in [25] and took into account that the UAV may have multiple electric motors.
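The noise injection used in the test cases can be sketched as follows (a minimal illustration, not the SIL implementation; the function name is hypothetical):

```python
import numpy as np

def noisy_measurement(v, sigma=8e-6, rng=None):
    """Superimpose zero-mean white magnetic noise with standard
    deviation sigma (in tesla) on a field measurement v, as in the
    test cases: sigma = 1 uT for good shielding, 8 uT for poorly
    shielded electric motors."""
    rng = np.random.default_rng() if rng is None else rng
    v = np.asarray(v, dtype=float)
    return v + rng.normal(0.0, sigma, size=v.shape)
```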

#### *4.3. Test Results*

Figure 10 shows the localization results using the derived equations and the proposed polygon detection method.

**Figure 10.** Localization quality, good shielding: *σ* = 1 μT.

The flight started at the arrow in the indicated direction, with the UAV flying one lap under the transmission lines. As was to be expected, good shielding allowed more precise navigation. The quality of localization was obviously very high with a magnetic noise standard deviation of *σ* = 1 μT. The mean deviation from the specified flight path was 21 mm, with a maximum error of 54 mm. The standard deviation of the roll angle error was 0.4◦, and its mean was very close to zero, around 0.025◦. Only the roll angle is shown here, as it had the largest values since the UAV moved almost exclusively in the *yz*-plane in the simulation.

Figure 11 shows the localization results for the case where poorly shielded electrical drives in the UAV produce magnetic noise with a standard deviation of *σ* = 8 μT.

**Figure 11.** Localization quality, poor shielding: *σ* = 8 μT.

Despite the high magnetic interference, the navigation worked very well near the transmission lines due to the high signal-to-noise ratio (SNR). The deviations increased with greater distance from the transmission lines. Figure 12 shows the deviation from the specified flight path and the specified roll angle for each waypoint.

**Figure 12.** Path and roll angle errors, poor shielding: *σ* = 8 μT.

The mean deviation from the specified flight path was 41 mm, where the maximum error was 221 mm. The mean deviation from the specified roll angle was −0.22◦, where the maximum error was 12.3◦. The standard deviation of the roll angle error was 2.96◦.

Figure 13 shows the localization results for the same noise standard deviation of *σ* = 8 μT when only three sensors were used, i.e., Sensor 4 at *p*4 was excluded.

**Figure 13.** Localization quality, three sensors, *σ* = 8 μT.

The mean deviation from the specified flight path was 91 mm. In areas far from the transmission lines, as in this case, very strong outliers could occasionally occur, so that the maximum error was now about 2260 mm. The outliers occurred much more frequently than in the case with four sensors, as demonstrated by repeated experiments. They were not due to an incorrect calculation of the UAV position, but to an incorrect detection of the polygons, which were extremely distorted by the strong magnetic interference and could therefore hardly be distinguished from each other. This was to be expected, because the more sensors are used, the higher the noise immunity is. In practice, the UAV would carry out aerial manipulations in the vicinity of the transmission lines and would therefore be in an area with a high SNR. Therefore, it may also be possible to use only three sensors, as shown in Figure 14. The UAV started its flight near the transmission lines and then flew between the lines. Only when it then moved away did strong outliers occur. Repeated tests always led to the same result.

**Figure 14.** Localization quality close to the transmission lines and far away, three sensors, *σ* = 8 μT.

In the tests, a current of 700 A was assumed. It should be mentioned that the quality of the localization depended on the current strength in the transmission lines. This in turn depends on the per capita energy consumption of the residents, which may vary from city to city. Thus, for completeness, Figure 15 additionally shows the same case as Figure 10 with a low transmission line current of 100 A.

**Figure 15.** Localization quality in the case of low transmission line current of 100 A, four sensors, *σ* = 1 μT.

#### **5. Discussion**

As mentioned in Section 3.2, in a real transmission line system, it is difficult to detect whether the UAV is located below or above the cables without additional detection mechanisms. This is only possible in Figure 14 because the SIL calculated with a static equivalent current and was only used in the context of the tests.

In this paper, transmission lines were modeled as perfectly straight cables, which may not be the case in real-world circumstances. Transmission lines, for example, hang downward, thus forming more of a curved line. Even though the radius of this curve is very large, the influence on navigation accuracy should be investigated in a further work. In UAV indoor navigation applications, this problem does not occur because the cables are shorter and can be routed to fit the straight line model.

Furthermore, the magnetic field contributions of the other two phases of the three-phase system were neglected, since they are typically at least 2 m away from the phase under consideration. This should also be investigated in more detail in a further study in the context of transmission line navigation.

Finally, the magnetometers should be as far away as possible from ferromagnetic materials that may be in the UAV; otherwise, they may affect the sensor readings.

#### **6. Conclusions**

In this work, it was demonstrated that UAV navigation is possible in the magnetic field of two long parallel conductors. In this context, equations were derived to calculate the UAV orientation analytically. It was shown that only three magnetometers are needed for this purpose and that neither the current nor the magnetic field equations need to be known. Furthermore, it was shown that combining the magnetic field equation with the magnetic energy equation leads to analytical solutions for the calculation of the sensor position. This results in 16 possible positions for a single sensor, from which the correct one must be determined. For this purpose, theorems were derived that can be used to exclude 14 of the 16. The last two remaining positions are not distinguishable from a single sensor alone. However, it was shown that considering multiple sensors allows a whole set of correct positions to be selected. Here, the positions were combined into a number of polygons in the *yz*-plane, from which the one most similar to the original polygon was selected. The original polygon can be determined from the known orientation and the user-defined sensor positions in the local UAV frame. This ultimately led to a unique determination of the correct sensor positions and hence of the UAV position. All equations, as well as the polygon selection method, were successfully verified under different noise levels.

**Author Contributions:** Investigation, D.M.; methodology, D.M.; software, D.M.; validation, D.M.; writing—original draft preparation, D.M.; writing—review and editing, Z.K., S.B.; visualization, D.M.; supervision, S.B., Z.K.; project administration, S.B. All authors read and agreed to the published version of the manuscript.

**Funding:** The project AERIAL COgnitive integrated multi-task Robotic system with Extended operation range and safety (Aerial-Core) received funding by the European Union's Horizon 2020 research and innovation program under grant agreement No. 871479.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** I would like to thank my colleague Filip Zorić, who generated the test flight data in Gazebo and thus supported the tests.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:


#### **References**

