1. Introduction
The Cauchy problem for the Helmholtz equation is an important problem in mathematical physics with deep roots in acoustics and sound propagation. The Helmholtz equation itself is one of the fundamental equations of acoustics and has wide applications in various fields of physics, engineering, and the applied sciences.
The Helmholtz equation arises from the wave equation for oscillations in a bounded region of space, such as an acoustic chamber or cavity, and describes the behavior of time-harmonic sound waves in a medium. The main physical problem associated with the Helmholtz equation is to model the propagation of sound in various media and under various conditions. For example, in acoustic engineering, the continuation problem is used to analyze sound fields in rooms, wind tunnels, pipes, and other geometric configurations. In geophysics, it is used to model sound waves in the atmosphere, the ocean, and the earth's crust. In medicine, it is used to simulate sound waves inside the human body for the diagnosis and treatment of various diseases.
Inverse problems of various types are encountered in everyday life. Refs. [1,2,3] applied numerical methods to solve problems related to acoustic equations, with a special emphasis on problems of significant practical importance in the field of medical imaging.
One of the key problems associated with the Helmholtz equation is the continuation problem (i.e., the Cauchy problem), whose main task is to find a solution to the equation in regions different from the original one. In particular, this may be necessary when modeling the propagation of sound through layers of different densities or in cases where changes in the medium along the wave propagation path must be taken into account.
The Cauchy problem for elliptic equations is a classic example of an ill-posed problem, where small perturbations in the data can lead to significant changes in the solution. Therefore, iterative methods play a key role in stabilizing the solution process and ensuring the stability of the result.
One of the early studies devoted to the development of an iterative method for solving the Cauchy problem is described in Ref. [4]. The authors proposed a scheme that successively updates the approximations at each iteration, which gradually improves the accuracy of the solution. This approach was especially useful in stabilizing the solution and minimizing the impact of noise in the data. The work made a significant contribution to the numerical solution of ill-posed problems by demonstrating the effectiveness of the iterative approach.
In Ref. [5], an optimization method for solving the Cauchy problem based on the minimization of a functional that incorporates the problem's data and a regularization term is proposed. The use of numerical schemes in this method allowed the authors to stabilize the solution process and minimize the influence of errors. This approach is one of the first numerical methods for solving boundary inverse problems and demonstrates significant advantages in the context of ill-posed problems.
In Ref. [6], the iterative approach is developed further by proposing an alternating method for solving the inverse Cauchy problem. This method effectively stabilizes the solution by alternately adjusting the Dirichlet and Neumann data, which minimizes the impact of the ill-posed nature of the problem. The convergence of the method to the true solution is proved under certain conditions, which is confirmed by both theoretical analysis and numerical experiments.
One of the key problems in solving the Cauchy problem is the restoration of missing boundary data, which is especially important in applications involving wave processes such as acoustics and electromagnetic waves. Variational methods, such as those proposed in Ref. [7], allow the Cauchy problem to be reformulated in terms of functional minimization, making the problem more stable and amenable to numerical solution. This approach enables a solution to be obtained in the corresponding functional space, which improves the convergence and stability of the solution.
Regularization methods, as shown in Refs. [8,9], play an important role in stabilizing the solution to the Cauchy problem. Tikhonov regularization and truncation methods allow for effective smoothing of solutions, reducing the impact of noise and other disturbances in the data. In particular, a Fourier regularization method that uses Fourier series expansion of the data to filter out high-frequency components responsible for the instability of the solution is proposed in Ref. [10].
An important contribution to the solution of the problem of restoring missing boundary data is the optimal control method [11]. This method is formulated as a problem of minimizing a special functional that takes into account the deviation from the known data and the smoothness of the solution. The optimal control approximation method allows one to effectively solve the problem of data restoration, ensuring high accuracy of the solution. In Ref. [12], the quasi-reversibility method, which modifies the original Cauchy problem by introducing a regularization term, is considered. This method allows one to obtain stable and accurate approximations of the solution, which is especially important when working with ill-posed problems.
Recently, various approaches to solving this problem have been proposed, each with its own advantages and areas of application. In Ref. [13], a new method based on mixed quasi-reversibility in the H(div) space is presented. This approach takes into account both the partial differential equation and the boundary conditions, which provides a stable and convergent solution for a wide class of elliptic Cauchy problems. The method demonstrates its effectiveness in cases where high accuracy with complex boundary conditions is required.
In Ref. [14], the authors investigate methods for accelerating the modified alternating algorithm (MA) using conjugate gradient methods and Landweber iteration. The aim of the study is to evaluate the accuracy and stability of these methods under different levels of noise in the data. Numerical experiments show that the CGNE (conjugate gradient on normal equations) method provides more accurate and stable solutions compared to other methods, especially at low noise levels. However, the CGME (conjugate gradient for minimum error) method has shown sensitivity to the noise level, which limits its application under strong disturbances.
An interesting approach to solving the Cauchy problem is proposed in Ref. [15], where game theory is used. The problem is formulated as a Nash game in which the Cauchy data are distributed between two players. This approach makes it possible to minimize the functionals associated with the discrepancy between the Dirichlet and Neumann conditions, which ensures high efficiency and stability of data recovery, even in the presence of noise.
In Ref. [16], a solution to geometric inverse problems for the Cauchy–Stokes system is considered using Nash game theory. The problem is formulated as a three-player game, in which two players complete the missing data and the third detects inclusions. This approach allows one to solve the problem of detecting unknown cavities in a stationary viscous fluid, demonstrating high efficiency under complex boundary conditions and incomplete data.
These different approaches to solving the Cauchy problem for the Helmholtz equation highlight the importance of developing methods adapted to the specifics of the problem. Each of the proposed methods has its own advantages and areas of application, making them indispensable tools in numerical analysis and applied mathematics.
Solving the Cauchy problem for the Helmholtz equation at large wave numbers is a significant challenge in numerical analysis that requires accurate and robust methods. In Ref. [17], an adaptive grid construction method, which optimizes numerical computations by refining the grid in areas with high error and reducing the number of nodes where the error is minimal, is presented. This approach provides high accuracy in solving the Cauchy problem due to the efficient distribution of computational resources. In Ref. [18], this idea is developed by proposing a combination of adaptive grids with regularization methods such as Tikhonov's method. Adaptive grid algorithms can significantly reduce computational costs while providing accurate results even in problems with inhomogeneous data.
When it is necessary to solve the Cauchy problem for the Helmholtz equation at high wave numbers, traditional methods often encounter difficulties. In such conditions, adaptive grids and advanced algorithms, as shown in Ref. [19], demonstrate high numerical stability, especially in the presence of noise in the data. The advantage of adaptive methods is their ability to adapt to the specifics of the problem without the need for preliminary parameter adjustment, which makes them convenient and efficient. In Ref. [20], the authors also present an efficient algorithm for solving the Cauchy problem based on the relaxation of alternating iterations, which shows high accuracy and stability at various noise levels. This approach is especially effective at large wave numbers, where traditional methods can give unstable solutions. Thus, the use of adaptive grids in combination with modern algorithms for solving the Cauchy problem for the Helmholtz equation at high wave numbers provides high accuracy, stability, and efficiency of numerical solutions. These methods offer new possibilities for solving complex problems in various fields of science and technology.
The connection between the continuation problem and the boundary inverse problem shows how the physical aspects of the Helmholtz equation become key when the problem of recovering the unknown boundary conditions is solved.
The aim of the Cauchy problem is to find a solution to the Helmholtz equation in regions different from the original, which may be necessary, for example, for modeling the interaction of sound with various obstacles or changes in the medium along the path of wave propagation. When solving this problem, it is important to take into account the boundary conditions that determine the behavior of sound waves at the boundaries of different media or obstacles.
The inverse problem is solved by successively solving the direct and adjoint problems. The efficiency of this approach is determined by the speed and stability of the numerical solution of these problems. In Ref. [21], a new method for solving the direct problem for the two-dimensional Helmholtz equation, which outperforms traditional approaches in terms of accuracy and stability, is presented. The scaled boundary finite element method (SBFEM) demonstrates significant advantages in modeling complex physical phenomena. In Ref. [22], an adaptive scaled boundary finite element method for the topology optimization of structures, which makes it possible to carry out efficient modeling of complex geometries with dynamic loads, was proposed. This method provides a high convergence rate and accuracy, which is especially important when working with unstable problems.
Quadratic convergence of numerical solution methods plays a key role in ensuring high accuracy and computational efficiency. Such convergence significantly accelerates the achievement of the desired level of accuracy, which makes methods with quadratic convergence preferable for many problems. In Ref. [23], the authors propose a new iterative method for solving the two-dimensional Bratu problem, which demonstrates the ability to accurately calculate both branches of the solution using quasi-linearization and the Newton–Raphson–Kantorovich approximation. These methods, like the one in our study, provide quadratic convergence of numerical solutions, which is important for problems requiring high accuracy.
In Ref. [24], the stability of solutions to the inverse problem of source reconstruction in a medium with two layers separated by an uneven boundary is studied. The authors prove that, despite the complexity of the interface structure, it is possible to obtain stable solutions, which is important for applications in geophysics, where reconstruction of wave radiation sources in layered media is required.
In our work, we consider the Cauchy problem for the Helmholtz equation, which is also known as the continuation problem. The main attention is paid to the reduction of the continuation problem to a boundary inverse problem for a well-posed direct problem. We propose a generalized solution to the direct problem and conduct a detailed analysis of its stability. To solve the inverse problem, an optimization-based approach relying on gradient methods is used. Special attention is paid to a comparative analysis of the Landweber and Nesterov methods. The article describes in detail the process of calculating the gradient in discrete form. In addition, the formulation of the adjoint problem in discrete form is given, which allows us to build an algorithm for solving the inverse problem using gradient methods. A computational experiment for the boundary inverse problem is performed at various noise levels, and its results confirm the effectiveness of the proposed method. A graphical comparative analysis of the Landweber and Nesterov methods is presented, and the advantages and disadvantages of the gradient methods are demonstrated from the points of view of both convergence and computational efficiency.
2. Materials and Methods
First, we present the statement of the Cauchy problem. The original Cauchy problem for the Helmholtz equation in the domain Ω = (0,1) × (0,1) is written as problem (1)–(4), where the wave number k is a given constant. In the continuation problem, the task is to find the function u in the domain Ω from the given Cauchy data.
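For orientation, a typical formulation of such a continuation problem is sketched below; the placement of the Cauchy data on the side y = 0, the homogeneous lateral conditions, and the notation f, g, and q are illustrative assumptions rather than the exact statement (1)–(4).

% Illustrative continuation (Cauchy) problem for the Helmholtz equation
% in \Omega = (0,1) x (0,1); the data placement is an assumption.
\begin{align*}
  \Delta u + k^2 u &= 0,     && (x,y) \in \Omega, \\
  u(x,0)           &= f(x),  && x \in (0,1), \\
  u_y(x,0)         &= g(x),  && x \in (0,1), \\
  u(0,y) = u(1,y)  &= 0,     && y \in (0,1),
\end{align*}
% and the task is to recover u in all of \Omega, in particular the trace
% q(x) = u(x,1) on the inaccessible part of the boundary y = 1.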
The aim of the Cauchy problem for the Helmholtz equation is to reconstruct a solution to the equation in the domain using data given on a part of the boundary of the domain or in another part of space. This problem is often ill-posed in the Hadamard sense, which requires special methods to solve it.
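The instability can be illustrated by Hadamard's classical example for the Laplace equation (the case k = 0); the specific data below are chosen only for illustration.

% Hadamard's example: the Cauchy data tend to zero as n grows,
% while the corresponding harmonic solution blows up for any y > 0.
\begin{equation*}
  u_n(x,y) = \frac{1}{n}\,\sin(nx)\,\cosh(ny), \qquad
  u_n(x,0) = \frac{\sin(nx)}{n} \to 0, \qquad
  \partial_y u_n(x,0) = 0, \qquad
  \max_x |u_n(x,y)| \sim \frac{e^{ny}}{2n} \to \infty .
\end{equation*}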
The Cauchy problem for a partial differential equation, in particular for the Helmholtz equation, is an ill-posed problem: small disturbances in the data can lead to large changes in the solution. For such problems, it is important to study conditional stability, that is, to evaluate how sensitive the solution is to small changes in the data and under what conditions a stable solution can be ensured. For the Laplace equation (i.e., at k = 0), we can prove the following theorem on conditional stability.
Theorem 1. Let there exist a solution to problem (1)–(4), and let us consider in the domain the corresponding initial boundary value problem. Then, the following conditional stability estimate is valid.

The study of the conditional stability of the continuation problem allows us to obtain important results that can be used for the numerical solution of ill-posed problems. A priori estimates and regularization methods play a key role in ensuring the stability of the solution. Understanding the conditions under which the continuation problem is conditionally stable contributes to the development of effective numerical methods and their application in various fields of science and technology [25,26,27].
2.1. Direct and Inverse Problems
The ill-posed nature of the Cauchy problem creates significant difficulties when trying to find its numerical solution. An effective approach to solving this problem is to reduce the continuation problem to a boundary inverse problem. This allows us to use the methods developed for solving inverse problems to find a solution to the original problem. Let us consider the formulation of the direct and inverse problems.
In the direct problem, one must find the solution based on the given functions. In the inverse problem, it is necessary to find the unknown boundary data, taking into account additional information:
Definition. The function u will be considered a generalized (or weak) solution to the direct problem (6)–(8) if, for any test function, equality (14) is satisfied. A function satisfying conditions (6)–(8) will be called a test function.
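For orientation, a generic weak formulation of a Helmholtz boundary value problem is recalled below; it is obtained by multiplying the equation by a test function v and integrating by parts, and it only illustrates the general structure of identity (14), whose exact form depends on conditions (6)–(8). The right-hand side F is a generic source term introduced for illustration.

% Generic weak formulation (illustrative): multiply \Delta u + k^2 u = F by a
% test function v and integrate by parts over \Omega.
\begin{equation*}
  \int_{\Omega} \left( \nabla u \cdot \nabla v - k^2 u\, v \right) d\Omega
  = \int_{\partial\Omega} \frac{\partial u}{\partial n}\, v \, dS
  - \int_{\Omega} F\, v \, d\Omega
  \qquad \text{for all test functions } v.
\end{equation*}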
The direct problem for the Helmholtz equation involves finding a solution under given boundary conditions. The well-posedness of this problem is critical for many applications in science and technology. Below, we consider the well-posedness of the generalized solution of the direct problem for the Helmholtz equation.
Theorem 2. Under the corresponding assumptions on the data, the generalized solution to problem (6)–(8) exists, is unique, and satisfies the corresponding a priori estimate.

Proof of Theorem 2. Consider a function from the space of compactly supported, infinitely differentiable functions on the interval (0,1). Then, for each such function, problem (6)–(8) has a unique, infinitely differentiable solution, which we will use in what follows [28].
Consider the auxiliary problem:
Let us integrate this identity over the domain and, taking (17)–(19) into account, obtain (20). From the equalities obtained, it follows, by virtue of the Cauchy–Bunyakovsky inequality, that a pointwise inequality holds. Integrating the resulting inequality over Ω, we obtain (21). Comparing (21) with (20), we obtain an intermediate estimate. Now, we integrate the corresponding expression over Ω and, using boundary conditions (17)–(19), obtain (22). From (22), we obtain inequality (23).
Let us take the solution of the auxiliary problem (16)–(19) as a test function (by definition) and substitute it into (14). Replacing the corresponding functions in (14) and taking into account (23), we obtain the required estimate.
The theorem is proved. □
The well-posedness of the generalized solution of the direct problem for the Helmholtz equation is important for both theoretical and applied problems. The existence, uniqueness, and stability of the solution ensure the reliability of mathematical models and their application in various fields. Understanding these aspects helps us to effectively use functional analysis methods in solving problems of mathematical physics and engineering [29].
2.2. Formulation and Solution of the Optimization Problem
In this section, we study the formulation of the boundary inverse problem in operator form, as well as the representation of the optimization problem solved using the gradient method.
The operator of the inverse problem is defined through the solution to the direct problem (6)–(9), and the inverse problem is written in operator form as Equation (24).
Operator Equation (24) is transformed into an optimization problem: the cost functional is minimized using the Landweber gradient method, in which the descent parameter (step size) must be chosen appropriately.
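The concrete operator, functional, and iteration used in this work are defined by Equation (24) and the related formulas; the generic form below, written for a linear bounded operator A with descent parameter α, is given only for illustration.

% Generic operator form of a boundary inverse problem and the Landweber iteration
% (illustrative; \alpha denotes the descent parameter).
\begin{equation*}
  Aq = f, \qquad
  J(q) = \tfrac{1}{2}\,\lVert Aq - f \rVert^{2}, \qquad
  q_{n+1} = q_n - \alpha\, J'(q_n) = q_n - \alpha\, A^{*}\bigl(Aq_n - f\bigr).
\end{equation*}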
Gradient methods play a key role in solving optimization and numerical analysis problems. One such method is the Landweber method, a variant of gradient descent used to solve ill-posed problems, in particular inverse problems. It is based on an iterative procedure that finds a solution through successive improvement of an initial approximation. One of the key aspects of the method is its convergence, which is determined by the behavior of the functional at each step of the iterative process.
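As an illustration, a minimal Python sketch of the Landweber iteration for a discretized linear operator is given below; the matrix A, the data vector f, the step size alpha, and the small test system are hypothetical stand-ins, not the discretization used in this work.

import numpy as np

def landweber(A, f, alpha, n_iter=2000, q0=None):
    # Minimal sketch of the Landweber iteration
    #   q_{n+1} = q_n - alpha * A^T (A q_n - f)
    # for a discretized linear operator A (a hypothetical stand-in for the
    # operator of Equation (24)); alpha is the descent parameter.
    n = A.shape[1]
    q = np.zeros(n) if q0 is None else q0.copy()
    for _ in range(n_iter):
        residual = A @ q - f                 # apply the discrete forward problem
        q = q - alpha * (A.T @ residual)     # A.T plays the role of the adjoint operator
    return q

# Hypothetical usage: a small ill-conditioned system stands in for the inverse problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20)) @ np.diag(np.logspace(0, -3, 20))
q_true = rng.standard_normal(20)
f = A @ q_true + 1e-3 * rng.standard_normal(40)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2      # step below 1/||A||^2 keeps the iteration stable
q_rec = landweber(A, f, alpha)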
Theorem 3 (on convergence in the functional). Let the spaces involved be Hilbert spaces and let the operator be linear and bounded. Assume that the operator equation has an exact solution. Then, for any initial approximation, the sequence generated by the Landweber iteration converges in the functional, and the corresponding estimate is satisfied [27].

One of the effective approaches to solving continuation problems is the use of optimization methods, among which Nesterov's accelerated method occupies a special place due to its high convergence rate. Nesterov's method belongs to the family of gradient methods but has a significantly improved convergence rate compared to classical gradient methods. The main idea of the method is to use both the current and the previous gradient directions to adjust the next step.
To apply Nesterov's accelerated method to a boundary inverse problem, one must perform the following steps (a minimal code sketch is given after the list):
Initialization: Set the initial approximation and the initial parameters of the method. The optimization step and smoothing coefficients can be selected as initial parameters.
Main loop: at each iteration, calculate:
the gradient of the objective function;
the corrected (momentum) direction of movement, determined by the acceleration coefficient;
the updated current solution, obtained with the chosen optimization step.
Checking the stopping condition: the process continues until the convergence criterion is met.
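The sketch below illustrates these steps in Python for a generic smooth objective; the momentum rule (k − 1)/(k + 2), the quadratic test objective, and the step size are illustrative assumptions rather than the specific choices made in this article.

import numpy as np

def nesterov(grad, q0, step, n_iter=200):
    # Minimal sketch of Nesterov's accelerated gradient method.
    #   grad : callable returning the gradient of the objective at a point
    #   q0   : initial approximation
    #   step : optimization step (assumed <= 1/L for an L-smooth objective)
    q_prev = q0.copy()
    q = q0.copy()
    for k in range(1, n_iter + 1):
        beta = (k - 1.0) / (k + 2.0)          # acceleration (momentum) coefficient
        y = q + beta * (q - q_prev)           # corrected direction: extrapolated point
        q_prev, q = q, y - step * grad(y)     # gradient step taken at the extrapolated point
    return q

# Hypothetical usage on a quadratic objective J(q) = 0.5 * ||A q - f||^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
f = rng.standard_normal(30)
grad = lambda q: A.T @ (A @ q - f)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L with L = ||A^T A|| = ||A||^2
q_min = nesterov(grad, np.zeros(10), step)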
The use of the accelerated Nesterov method for solving boundary value inverse problems has a number of advantages:
Accelerated convergence: The Nesterov method achieves convergence faster than conventional gradient methods, which is especially important for tasks where gradient calculation can be expensive. In particular, it achieves a convergence rate of O(1/k²), where k is the number of iterations. This advantage is especially important in problems where traditional methods, such as the gradient descent method, have a slower convergence rate of O(1/k) [30].
Efficiency: The Nesterov method scales well to high-dimensional problems, which makes it applicable to real-world problems with a large number of unknowns. It is an advantageous choice in problems with large and complex data, where traditional methods may encounter difficulties with local minima and slow convergence; it allows one to search for the global minimum more efficiently, avoiding getting stuck in local extrema [31].
Stability: The method has better resistance to noise and data errors, which is critically important for inverse problems, which often suffer from ill-posedness and instability. Although the Nesterov method is sensitive to parameter selection and may require fine-tuning, it often shows better performance even in the presence of noise if the parameters are set correctly. Regularization and the correct choice of initial conditions can significantly improve the robustness of the method to noise [32].
The Nesterov acceleration scheme is of particular interest because it is not only extremely easy to implement, but Nesterov himself also proved that it generates a sequence of iterates q_k for which the values of the minimized functional converge to the optimal value at the rate O(1/k²) [30].
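In one common form of this classical estimate (stated here for an L-smooth convex functional J with minimizer q*; the exact constant given in [30] may differ), the bound reads:

% Classical accelerated-gradient bound (one common form; illustrative).
\begin{equation*}
  J(q_k) - J(q^{*}) \;\le\; \frac{2 L \,\lVert q_0 - q^{*} \rVert^{2}}{(k+1)^{2}} .
\end{equation*}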
The accelerated Nesterov method is a powerful tool for solving boundary value inverse problems. Its high convergence rate and noise resistance make it particularly attractive for applications in areas requiring accurate and rapid reconstruction of internal parameters of objects from boundary data. The introduction of this method into the practice of solving inverse problems opens up new prospects for a more efficient and reliable analysis of various physical phenomena.