1. Introduction
In geophysics, when processing gravimetric and magnetometric measurements, mathematical methods are often used that are associated with solving the inverse problem for the Poisson equation. In general, this problem is formulated as follows: based on measurements of the potential field in a region on/above the Earth's surface, it is necessary to find the sources of this field in a given region beneath the surface. Simple examples easily show that such a problem is not uniquely solvable. Moreover, it has infinitely many solutions that are equivalent in the generated field. A very subtle study of this ambiguity is given in [1]. Therefore, in order to obtain an unambiguous solution, additional restrictions of a physical, geological, geometric or other nature are introduced into the methods for solving inverse problems of this kind. Specific formulations and restrictions in these inverse problems often depend on the equipment, methodology, structure and type of measurement results. All this can be seen in numerous works, of which, for brevity, we highlight only a few that are close in aim to our research: [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16].
In some cases, when analyzing three-dimensional geophysical anomalies, it is difficult to specify suitable constraints, or such constraints are algorithmically difficult to implement. In this case, the approach proposed in [15] is often used. Instead of searching for three-dimensional volumetric sources of potential fields, which cannot be found uniquely, we limit ourselves to searching for the distribution of sources of a given type on a given two-dimensional manifold (for example, on a part of a plane) beneath the Earth's surface. Such an inverse problem has a unique solution with an appropriate choice of the type of sources and the manifold, if we use field measurements in a three-dimensional region (see [17]). Finding solutions of this kind for planes lying at different depths, we can use these results to interpret geological structures. Examples of such an approach are given in the works [11,12,13,14,15], where the so-called method of integral representations is developed from the work [17].
All the methods from these works, presented in discrete form, are reduced to solving systems of linear algebraic equations (SLAEs) and are actually associated with the inversion of some matrices, which we will call inverse problem matrices. Very often, such matrices turn out to be ill-conditioned or even singular. In this connection, the problem of the stability of solutions obtained by inverting such matrices arises. It is addressed differently in different works. Thus, many authors apply the regularization method of A. N. Tikhonov [18,19] in its various modifications. It is also possible to use algorithms similar to the well-known TSVD method [20,21]. The stability of the linear inverse problem being solved can also be increased by replacing its matrix with a "close" one having a better (smaller) condition number. This is the approach that will be used in our work to solve some inverse problems of gravimetric and magnetometric remote sensing.
Thus, the structure of this work is as follows. The formulation of the inverse problems of gravimetry and magnetometry is given in Section 2. In Section 3, the uniqueness of the solution of the inverse problems under consideration is noted and their discretization is given. Section 4 identifies the main problems that arise when solving SLAEs with singular and ill-conditioned matrices; we schematically, using an example, explain the possibility of improving the condition number of the inverse problem matrix. Section 5 presents, in a general form, an algorithm for improving the condition number of the inverse problem matrix. In Section 6, various modifications of the algorithm are considered and compared. From numerical experiments, it is concluded that the best of these modifications is the MPMI method, which uses so-called minimal pseudoinverse matrices with improved condition numbers. In Section 7, the MPMI method is used to solve separately the inverse gravimetry and magnetometry problems with data for the Kathu region, Northern Cape, South Africa. In Section 8, these problems are solved jointly with the same data. The conclusions from these numerical experiments are presented in Section 9.
2. Formulation of Inverse Problems of Gravimetry and Magnetometry
We will solve the inverse problems of gravimetry and magnetometry in the following formulation. We assume that the measurements of the corresponding potential fields are carried out on the Earth’s surface with a known relief (height) where D is the study area with a piecewise smooth boundary. In more detail, the measurements are carried out in a three-dimensional area of the form . The functions are considered piecewise smooth. We are interested in the following problems.
Problem 1. On a given piecewise smooth surface , , find the number N, the locations , , and the masses of point gravity sources that create a gravity field with a vertical component measured in the domain T.
This problem is reduced to solving the following equation with respect to the quantities N, , , . The coefficient γ is the known gravitational constant. In vector form, the equation is where and , , .
Problem 2. On a given piecewise smooth surface , find the number N, the locations , and the vertical components of the magnetization vectors of point magnetic dipoles that create a magnetic induction field with a vertical component measured in T.
The problem comes down to solving the equation relative to the unknowns N, , , . Here, is the known magnetic constant, and .
3. On the Uniqueness of Solutions to Problems (1), (3) and Their Discretization
The inverse problems (1) and (3) can be written in a uniform form as follows: find a number , coordinates , and numbers , such that the following equality is true: Here, is a given piecewise smooth surface lying in the half-space , and the value is known. An abstract problem of this type may have no solution; if its solution exists, it may not be unique. However, in the cases (1), (3), and some other cases, the uniqueness of solutions can be guaranteed due to specific properties of the function .
To describe the uniqueness conditions, we introduce the domain . From the conditions on the functions , it is clear that . We will also assume that the surface is defined by the equation .
Theorem 1. Let for every , and let the function be harmonic in the variables everywhere for , and let for . Then, the problem (5) with given , cannot have more than one solution , , . The proof is given in the paper [17].
Note that the conditions of the theorem are satisfied for the functions and from Problems 1 and 2, so that the latter have no more than one solution.
The data in problems (1) and (3) are measured, as a rule, at a finite number of points on the relief or near it. In addition, in these problems, the positions of the sources are often specified as , for example, on some grids. Therefore, in what follows, we will consider the following discrete versions of the problems (1) and (3): for given observation points , , and given source locations , , find solutions of each of the systems of linear algebraic equations
Each of these SLAEs can be formally written as , where is the matrix of the corresponding problem, and Y is its right-hand side. Note that the matrices of the systems, , are given exactly by the Formulas (2) and (4).
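To make the discretization concrete, here is a minimal sketch of assembling such a matrix, assuming the standard vertical-component point-mass gravity kernel; the exact kernels of Formulas (2) and (4) may differ in sign and unit conventions, so this is an illustration rather than the paper's implementation.

```python
import numpy as np

# Hedged sketch: assemble the matrix of the discrete gravimetric problem,
# ASSUMING the standard point-mass vertical gravity kernel
#   A[i, j] = gamma * (z_obs[i] - z_src[j]) / |r_obs[i] - r_src[j]|^3.
# The exact kernel of Formula (2) may use different sign/unit conventions.
GAMMA = 6.674e-11  # gravitational constant, SI units

def gravity_matrix(obs, src):
    """obs: (M, 3) observation points; src: (N, 3) source locations."""
    d = obs[:, None, :] - src[None, :, :]   # (M, N, 3) difference vectors
    r = np.linalg.norm(d, axis=2)           # (M, N) distances
    return GAMMA * d[:, :, 2] / r**3        # vertical-component kernel

# Observation grid on the relief z = 0, source grid on the plane z = -H
obs = np.array([[x, y, 0.0] for x in range(3) for y in range(3)], float)
src = np.array([[x, y, -1.0] for x in range(3) for y in range(3)], float)
A = gravity_matrix(obs, src)                # matrix of the SLAE A X = Y
```

Stacking the measured field values into the column Y then gives the SLAE A X = Y discussed above.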
4. Problems Arising in Solving SLAE and the Essence of the Proposed Approach
Theoretically, the SLAE may not have a conventional solution when the models under consideration are not adequate to the experimental data. However, it always has a solution in the sense of the least squares method (LSM), i.e., there always exists a solution to the system . The matrix may be ill-conditioned or even singular. Therefore, in contrast to the problem statements from Section 2, the discrete problem may not be uniquely solvable. Another reason for the ambiguity is the use of measurements of the potential field in a two-dimensional domain T instead of measurements in a three-dimensional one. In this regard, for definiteness, we will seek a normal pseudo-solution of the SLAE, i.e., its least-squares solution that has the minimal norm. From here on, all norms will be Euclidean. In the case of unique solvability of the SLAE under consideration, coincides with the usual solution. The data for finding are the quantities . For these exact data, the problem is formally solved using the pseudoinverse matrix : .
In practice, the right-hand sides of the SLAE are generally specified approximately, with a measurement error . In this case, instead of Y, an approximate right-hand side is known such that . The vector is then a stable approximation to : for .
An estimate of the accuracy of such an approximate solution is known (see, for example, [22]):
This estimate shows the important role of the condition number of the matrix in solving the SLAE: the smaller the condition number, the higher the accuracy of the approximate solutions of the SLAE.
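This role of the condition number is easy to observe numerically. The following sketch checks the classical bound, relative solution error at most cond(A) times the relative data error, on a well-conditioned and an ill-conditioned matrix; the matrices here are generic examples, not the problem matrices of Section 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_error_after_noise(A, delta=1e-6):
    """Relative solution error when Y is perturbed at relative level delta."""
    x = np.ones(A.shape[1])
    y = A @ x
    e = rng.standard_normal(y.shape)
    # scale the noise so that ||y_d - y|| = delta * ||y||
    y_d = y + delta * np.linalg.norm(y) * e / np.linalg.norm(e)
    x_d = np.linalg.lstsq(A, y_d, rcond=None)[0]
    return np.linalg.norm(x_d - x) / np.linalg.norm(x)

well = np.eye(5)                          # condition number 1
ill = np.vander(np.linspace(0, 1, 5))     # ill-conditioned Vandermonde matrix
for A in (well, ill):
    print(np.linalg.cond(A), relative_error_after_noise(A))
```

In every run the observed relative error stays below cond(A) times the relative data error, and the ill-conditioned system amplifies the noise far more strongly.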
The matrix is usually calculated approximately, using various types of computing equipment with finite-precision arithmetic. In this case, in the known pseudoinversion procedures (see, for example, [22]), due to rounding, instead of , a very close matrix is actually used, with a known (estimated) perturbation level . As a result, we calculate , not , and in fact, instead of , we obtain an approximate solution . For singular and ill-conditioned matrices , many pseudoinversion procedures are numerically unstable with respect to matrix perturbations. As a consequence, the approximations , even for small errors h, can be arbitrarily "far" from the exact normal pseudo-solution . Thus, the discrete versions of the problems (1) and (3) under consideration are generally ill-posed, and special regularization methods (see [18,19,20] and others) must be used to solve them. Special stable methods for the approximate determination of from perturbed data have also been developed (see, for example, [18,20,21] and others). We propose the following new approach.
In some practical cases, including ours, the SLAE matrix is known exactly, often in the form of an analytical expression (see Section 2). Let the singular value decomposition (SVD; see, for example, [22]) of this exact matrix be known (pre-computed): , where are orthogonal matrices of size and , respectively, and is a diagonal matrix of size containing the singular values of the matrix , ordered in non-increasing order:
Here, . In what follows, we will use the spectral condition number of the matrix : .
As already mentioned, the matrix can be ill-conditioned or even singular. We propose to replace it with a close matrix , , so that has a better (smaller) condition number, and use it instead of . Here, h is a given small level of matrix perturbation. This procedure can be illustrated by the following example.
Example 1. TSVD method with matrix conditioning improvement. Consider the SLAE matrix arising in the two-dimensional analogue of Problem 1:
where are uniform grids on the interval . For , and , the matrix has the condition number , i.e., it is extremely ill-conditioned. This is due to the specific (exponential) rate of decrease of the singular values of the matrix (see Figure 1a). By applying the TSVD method to the matrix, i.e., by replacing its singular values that satisfy the condition with zeros, it is possible to improve the condition number with an adequate choice of the value h. Thus, for , we obtain the matrix with the improved condition number . Accordingly, the stability of numerical solutions of SLAEs with such a matrix is improved. However, it is not clear how to find h constructively, since the estimate h of the admissible proximity of matrices is unknown in the considered formulation.
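Since the matrix (7) itself is not reproduced here, the following sketch illustrates the same truncation step on a generic severely ill-conditioned kernel matrix (a Gaussian kernel, whose singular values also decay roughly exponentially); the numbers differ from Example 1, but the mechanism is the same.

```python
import numpy as np

# Stand-in for the ill-conditioned matrix (7): a Gaussian-kernel matrix,
# chosen only for illustration; its singular values also decay roughly
# exponentially. The actual matrix (7) is defined in the paper.
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.1)

U, s, Vt = np.linalg.svd(A)
h = 1e-8                                 # chosen truncation level
s_h = np.where(s > h * s[0], s, 0.0)     # zero out sigma_k <= h * sigma_1
A_h = (U * s_h) @ Vt                     # TSVD-modified matrix

kept = s_h[s_h > 0.0]
cond_before = s[0] / s[-1]               # spectral condition number of A
cond_after = kept[0] / kept[-1]          # ... of A_h (nonzero part)
```

By construction the new condition number is below 1/h, while the perturbation of the matrix in the spectral norm does not exceed h times the largest singular value.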
Developing this approach, we propose a new algorithm for solving the inverse problems under consideration.
5. Algorithm for Improving the Condition Number of Problem Matrices
Consider a general SLAE , where and . According to Section 4, all norms of vectors and matrices are considered Euclidean. We will seek a normal pseudo-solution of this SLAE, i.e., its least-squares solution that has the minimum norm.
Due to possible ill-conditioning or degeneracy of the matrix , the regularization of the SLAE under consideration is necessary. One of the regularization options is to improve (reduce) the condition number of the matrix by varying it within certain perturbation limits. We will present the corresponding algorithm, assuming that and the exact right-hand side of the SLAE is nontrivial: .
The singular value decomposition of the matrix introduced above will be used: , as well as the function .
Algorithm for solving the SLAE with data .
Preliminary step: find the number This is the measure of inconsistency of the solved SLAE. Here, .
Step (1) Set the number . For each , we look for approximate matrices in the form , where The choice of the functions will be discussed below.
Step (2) Introduce the function , and solve the equation . We denote its solution as . The solvability of this equation will be considered below.
Step (3) Find the matrix and use it to calculate the approximate solution of the SLAE: .
Let us make the following assumptions about the functions : for all
(A) for ;
(B) ;
(C) The functions are left-continuous for ;
(D) The functions are monotonically non-increasing for .
The following properties of this algorithm were established in [23].
Theorem 2. Let conditions (A)–(D) be satisfied and, in addition, for each , the inequality holds. Then
(1) The function is monotonically nondecreasing for and is left-continuous at each point ;
(2) ;
(3) The equation has a generalized solution ;
(4) when ;
(5) The approximate solution of the SLAE converges to the normal pseudo-solution as .
Remark 1. In item (3), the equation with the monotone function is solved. Its generalized solution is a point for which the inequalities hold. An example of a solution to such an equation is shown in Figure 1b.
Theorem 3. Let be the rank of the matrix . Suppose that for , for each k, , and the numbers are such that . Then, for , the estimate holds.
Thus, the condition number of the matrix is less than the condition number of the matrix at least for “small” .
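The specific functions used in Steps (1)-(3) are given by the (elided) equations above, so the following sketch conveys only the control flow, under an assumed family of perturbed singular values sigma_k(h) = max(sigma_k, h) and with h selected by bisection on a monotone discrepancy equation; the paper's actual choices differ in detail.

```python
import numpy as np

def solve_with_improved_cond(A, y, delta, n_bisect=60):
    """Hypothetical sketch of the three-step algorithm. ASSUMED family of
    perturbed singular values sigma_k(h) = max(sigma_k, h); the paper's
    actual functions and discrepancy equation are different in detail.
    A is assumed to have full column rank in this sketch."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    def solve_at(h):
        s_h = np.maximum(s, h)            # Step (1): perturbed matrix A_h
        x = Vt.T @ ((U.T @ y) / s_h)      # Step (3): pseudo-solution for A_h
        return x, np.linalg.norm(A @ x - y)

    # Step (2): pick the largest h whose discrepancy stays below delta,
    # by bisection on the monotone residual function.
    lo, hi = 0.0, s[0]
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        if solve_at(mid)[1] < delta:
            lo = mid
        else:
            hi = mid
    return solve_at(lo)[0]
```

Raising the small singular values to the level h bounds the condition number of the perturbed matrix by sigma_1 / h, which is the conditioning improvement the algorithm formalizes.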
6. Specific Methods Implementing the Proposed Algorithm for Solving SLAE
For simplicity, we assume that the singular values of the matrix are simple (pairwise distinct): .
As a central example, we consider the method using minimal pseudoinverse matrices with condition-number improvement or, briefly, the MPMI method. In this method, for , where is the solution to the equation , and . Then
We also set . The properties of the function follow directly from the definition: (A) and (B) ; (C) is left-continuous for . Property (D) is also true; it follows from the growth of the function as (see [19,23,24]). Then, the function decreases as and equals zero as . Analyzing the asymptotics of the functions as , we can verify that . This means that the condition of Theorem 3 is satisfied with , and for all admissible k.
Thus, Theorems 2 and 3 are valid, guaranteeing the convergence of the method and the improvement of the condition number. In some cases, this number differs significantly from . For example, the case is theoretically possible. It is realized when the generalized solution of the equation is the discontinuity point of the function (see Figure 1b). Then, as in Theorem 3, , and in this case the condition number of the matrix used to solve the SLAE is improved by at least a factor of one and a half.
For the MPMI method, the influence of data errors on the accuracy of the approximate solution is investigated.
Theorem 4. Let the SLAE be solvable. Then, for , the following asymptotic estimate of the accuracy of the approximate solution by the MPMI method is valid:
The proof of this theorem follows from the properties of the minimal pseudoinverse matrix , proved in [24] (Chapter 5). Details of obtaining a similar accuracy estimate can also be found in [19]. As can be seen from the estimate, the accuracy of the solution depends significantly on the condition number of the matrix , which is reduced by the algorithm.
TSVD method with condition-number improvement (TSVDI). In this method, instead of , we use the matrix , in which , and its rank is found as a solution to the equation with a monotonically non-decreasing function . It can be verified that for sufficiently small . Thus, the inequality becomes an equality for small , and the TSVDI method does not improve the condition number.
Tikhonov regularization (TR). In this method, an approximate solution of the SLAE is sought in the form , where the parameter is chosen using one of the known methods (see, for example, [19,20,25] and others). In any case, when . Using the singular value decomposition of the matrix , we find that , where
Then, for , we obtain the following: since for .
Let us compare these methods by analyzing the improvement of the condition number of the matrix from Example 1. We will solve a model SLAE with an ill-conditioned matrix of the form (7) and an exact solution . The exact right-hand side of this SLAE is calculated as and is perturbed by a normally distributed random error with zero mean such that the inequality holds for the approximate right-hand side. The number introduced in this way allows us to estimate the error level in percent. To solve this SLAE with different error levels , the MPMI method is used and, for comparison, the TSVDI and TR methods with the regularization parameter chosen by the discrepancy principle [25].
Table 1 shows the calculation results: the accuracy of the obtained approximate solutions and the condition numbers , and .
The table shows that the MPMI method has the best accuracy for all considered levels of data perturbation. We especially note the fairly high accuracy of this method for large perturbations: . The table also clearly shows how much smaller the condition numbers of the matrices in the MPMI method are than those of the other methods and, especially, than the condition number of the original SLAE matrix: . For these reasons, we will use the MPMI method in further calculations.
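The perturbation model used in these experiments, a zero-mean normally distributed error scaled to a prescribed relative level, can be sketched as follows; the seed and the exact scaling convention are illustrative assumptions.

```python
import numpy as np

def perturb(y, delta, seed=0):
    """Return an approximate right-hand side y_delta with
    ||y_delta - y|| = delta * ||y||, using a zero-mean normal direction."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(y.shape)
    return y + delta * np.linalg.norm(y) * e / np.linalg.norm(e)
```

Multiplying delta by 100 then gives the error level in percent, as in Table 1.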
Note that the main time costs of all the methods considered are associated with the calculation of the SVD and with the procedure for selecting the regularization parameter. Accordingly, the complexity and limitations of each method are determined by the characteristics of the standard software implementing these procedures. We used standard Python packages (numpy 2.2.1, scipy 1.15.1 and numba 0.60.0). All calculations were performed on a PC with an Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz and 16 GB of RAM (without parallelization). The solution time for the specified SLAE for one realization of the data using the MPMI method was about 7 s.
7. Separate Solution of Inverse Problems of Gravimetry and Magnetometry for Real Geophysical Data
We consider Problems 1 and 2 in the discrete formulation (6), , with practical data Y on gravity and magnetic anomalies, and we solve these problems using the MPMI algorithm. The algorithm was applied to processing gravimetry and magnetometry data for the region of Kathu, Northern Cape, South Africa, with latitude and longitude ranges [°, °] and [°, °]. All source data, including relief data and grids, were taken from the WGM2012 GLOBAL MODEL [26] (gravity data) and WDMAM [27] (magnetic data) databases. The grids of the data and of the sought solutions were taken to be identical. The problems were solved for different depths H of the plane of the sought field sources. Accordingly, for the gravity and magnetic problems, the matrices linking the columns of unknowns and the data of these problems were calculated based on the relief, the given grids and the value of H. The matrices have the dimensions . In all calculations, it was assumed that the gravitational data and magnetic data were measured with a relative error of .
Inverse problem of gravimetry. Figure 2 shows the relief of the corresponding area and the data of the inverse problem of gravimetry . Figure 3 shows, for comparison, the data, the solution of the inverse gravimetry problem using the MPMI method for the depth km, and the solution of the same problem using the matrix inversion method. The latter solution is distinguished by unrealistically large values of the sought function . At the same time, the solution using the MPMI algorithm yields quite realistic values of .
Figure 4 shows the isolines of the data and their analogues calculated from the solution of the inverse problem. A comparison of these results is also given. The discrepancy between the exact and calculated data, and , is about , and their isolines are graphically close. Note that similar comparison results, as in Figure 4 and Figure 5, were also obtained for other depths H; for brevity, they are not presented.
However, we show, for comparison, the solutions of the inverse problem of gravimetry localized at different depths (Figure 5). A geological interpretation of these results is not our task here, but we note the increasing localization of the sources and the growth of their maximum values with increasing depth.
Inverse problem of magnetometry. Similar results of magnetometry data processing are shown in Figure 6, Figure 7, Figure 8 and Figure 9. In particular, Figure 9 presents a comparison of solutions of the inverse problem for different depths H. The discrepancy between the exact and calculated data, and , is about here. It turned out that the matrices of this inverse problem are well conditioned for the depths H under consideration. Therefore, the results obtained by the MPMI method and by direct matrix inversion differ little.
8. Joint Solution of Gravitational and Magnetic Problems
A fairly large number of methods for jointly solving inverse problems of this kind have been developed (see, for example, [4,5,8,16]). Formally, the inverse problems of gravimetry and magnetometry in a given region can be solved jointly by combining the systems of equations with matrices and for the unknown columns and , respectively, into a single SLAE with a column solution , a right-hand side and a matrix ; the new system can then be solved:
This is the approach we use to illustrate the MPMI algorithm. However, due to the heterogeneity of gravitational and magnetic measurements and data, scaling of the corresponding quantities is required. We used the following new scaled quantities: . Thus, the joint solution of the inverse problems of gravity and magnetic exploration is reduced to solving a SLAE of the form , and the sought quantities are calculated as . We solved the system (9) using the MPMI algorithm.
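As an illustration of this stacking, here is a minimal sketch; the norm-based scaling below is an assumption chosen to balance the two blocks, not necessarily the paper's exact scaled quantities.

```python
import numpy as np

def joint_system(Ag, Am, yg, ym):
    """Hypothetical sketch of assembling the joint SLAE (9): scale the
    gravity and magnetic blocks so their right-hand sides have comparable
    norms (the paper's exact scaling may differ), then stack them into a
    single block-diagonal system for the combined unknown column."""
    cg, cm = np.linalg.norm(yg), np.linalg.norm(ym)
    A = np.block([
        [Ag / cg, np.zeros((Ag.shape[0], Am.shape[1]))],
        [np.zeros((Am.shape[0], Ag.shape[1])), Am / cm],
    ])
    y = np.concatenate([yg / cg, ym / cm])
    return A, y
```

The combined system can then be handed to the MPMI algorithm; the joint regularization acts on the single stacked matrix rather than on the two blocks separately.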
The joint solution of the gravity and magnetic problem was carried out with the data for the region of Kathu, Northern Cape, South Africa, from Section 7. In this case, .
Let us see how much the solutions obtained here differ from the solutions found separately. We compare the solutions by calculating the deviations , where is the solution found by separate solving and is the solution found by joint solving. The deviations of the gravitational and magnetic solutions, and , are presented in Table 2 for different depths H. The ranks and condition numbers of the original matrices and of the corresponding matrices of the MPMI method are also given there.
The table shows a significant improvement in the condition number of the MPMI matrices compared to the original matrices . Also note the increase in and with increasing depth H.
We have applied the MPMI algorithm to process gravity and magnetometric data for other regions (Kursk Magnetic Anomaly, Boddington region, Australia, etc.). The results of these works will be published separately.
9. Discussion and Conclusions
The formulation of inverse problems of gravity and magnetic exploration proposed in this article guarantees the uniqueness of solutions to these problems. However, when discretized, the problems may lose this property, since the SLAEs corresponding to these discrete problems usually have ill-conditioned or even singular matrices. Therefore, regularizing algorithms must be used to solve them.
We propose a new approach to the regularization of the SLAE solution. The approach is based on replacing the ill-conditioned matrix of the inverse problem with a new matrix, close to the original one but having a better (smaller) condition number. The developed algorithm for improving the condition number of matrices has regularizing properties; this has been proven theoretically and confirmed in solving model problems. When comparing various modifications of the algorithm, it has been concluded that the best of them is the MPMI method, which uses minimal pseudoinverse matrices with improved condition numbers. It enables finding solutions to the posed inverse problems of gravity and magnetic exploration, considered both separately and jointly, that are stable with respect to data perturbations.
Using the MPMI algorithm, inverse problems were solved for the gravity and magnetic exploration data for the region of Kathu, Northern Cape, South Africa, both separately and jointly. The obtained separate and joint approximate solutions turned out to be quite close. Similar calculations were carried out for some other regions.
In the future, we intend to use the MPMI algorithm when processing data from larger datasets.