Article

Numerical Solution of the Cauchy Problem for the Helmholtz Equation Using Nesterov’s Accelerated Method

by Syrym E. Kasenov 1, Aigerim M. Tleulesova 1,*, Ainur E. Sarsenbayeva 2,* and Almas N. Temirbekov 1

1 Faculty of Mechanics and Mathematics, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
2 Department of Mathematics, Mukhtar Auezov South Kazakhstan University, Shymkent 160012, Kazakhstan
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(17), 2618; https://doi.org/10.3390/math12172618
Submission received: 20 July 2024 / Revised: 10 August 2024 / Accepted: 21 August 2024 / Published: 23 August 2024
(This article belongs to the Section Applied Mathematics)

Abstract:
In this paper, the Cauchy problem for the Helmholtz equation, also known as the continuation problem, is considered. The continuation problem is reduced to a boundary inverse problem for a well-posed direct problem. A generalized solution to the direct problem is obtained and an estimate of its stability is given. The inverse problem is reduced to an optimization problem solved using the gradient method. The convergence with respect to the functional of the Landweber method is compared with that of the Nesterov method. The calculation of the gradient in discrete form, which is often used in numerical solutions of inverse problems, is described. The formulation of the conjugate problem in discrete form is presented. After calculating the gradient, an algorithm for solving the inverse problem using the Nesterov method is constructed. A computational experiment for the boundary inverse problem is carried out, and the results of a comparative analysis of the Landweber and Nesterov methods are presented in graphical form.

1. Introduction

The Cauchy problem for the Helmholtz equation is an important problem in mathematical physics that has deep roots in acoustics and sound propagation. This problem is one of the fundamental equations in acoustics and has wide applications in various fields of physics, engineering, and applied sciences.
The Helmholtz equation arises from the equation of wave oscillations in a bounded region of space, such as an acoustic chamber or cavity. It describes the behavior of sound waves in a medium and is related to the wave equation. The main physical problem associated with the Helmholtz equation is modeling the propagation of sound in various media and conditions. For example, in acoustic engineering, the continuation problem is used to analyze sound fields in rooms, wind tunnels, pipes, and other geometric configurations. In geophysics, it is used to model sound waves in the atmosphere, the ocean, and the Earth's crust. It is also used in medicine to simulate sound waves inside the human body for the diagnosis and treatment of various diseases.
Inverse problems of various types are encountered in everyday life. Refs. [1,2,3] applied numerical methods to solve problems related to acoustic equations, with a special emphasis on the problems of significant practical importance in the field of medical imaging.
One of the key problems associated with the Helmholtz equation is the continuation problem (i.e., the Cauchy problem), the main task of which is to find a solution to the equation in the regions different from the original one. In particular, this may be necessary in modeling the propagation of sound through layers of different densities or in cases where it is necessary to take into account changes in the medium along the wave propagation path.
The Cauchy problem for elliptic equations is a classic example of ill-posed problems, where small perturbations in the data can lead to significant changes in the solution. Therefore, iterative methods play a key role in stabilizing the solution and ensuring its stability.
One of the early studies devoted to the development of an iterative method for solving the Cauchy problem is described in Ref. [4]. The authors proposed a scheme that successively updates the approximations at each iteration, which gradually improves the accuracy of the solution. This approach was especially useful in stabilizing the solution and minimizing the impact of noise in the data. The work made a significant contribution to the numerical solution of ill-posed problems by demonstrating the effectiveness of the iterative approach.
In Ref. [5], an optimization method for solving the Cauchy problem based on the minimization of the functionals, including the problem’s data and a regularization term, is proposed. The use of numerical schemes in this method allowed the authors to stabilize the solution process and minimize the influence of errors. This approach is one of the first numerical methods for solving boundary inverse problems and demonstrates significant advantages in the context of ill-posed problems.
In Ref. [6], the iterative approach is developed by proposing an alternating method for solving the inverse Cauchy problem. This method effectively stabilizes the solution by alternate adjustment of the Dirichlet and Neumann data, which minimizes the impact of the ill-posed nature of the problem. The convergence of the method to the true solution is proved under certain conditions, which is confirmed by both theoretical analysis and numerical experiments.
One of the key problems in solving the Cauchy problem is the restoration of the missing boundary data, which is especially important in applications involving wave processes such as acoustics and electromagnetic waves. Variational methods, such as those proposed in Ref. [7], allow the Cauchy problem to be reformulated in terms of functional minimization, making the problem more stable and amenable to numerical solution. This approach enables us to obtain a solution to the problem in the corresponding functional space, which improves the convergence and stability of the solution.
Regularization methods, as shown in Refs. [8,9], play an important role in stabilizing the solution to the Cauchy problem. Tikhonov regularization and truncation methods allow for effective smoothing of solutions, reducing the impact of noise and other disturbances in the data. In particular, a Fourier regularization method that uses Fourier series expansion of the data to filter out high-frequency components responsible for the instability of the solution is proposed in Ref. [10].
An important contribution to the solution of the problem of restoring missing boundary data is the optimal control method [11]. This method is formulated as a problem of minimizing a special functional that takes into account the deviation from the known data and the smoothness of the solution. The optimal control approximation method allows one to effectively solve the problem of data restoration, ensuring high accuracy of the solution. In Ref. [12], the quasi-reversibility method, which modifies the original Cauchy problem by introducing a regularization term, is considered. This method allows one to obtain stable and accurate approximations of the solution, which is especially important when working with ill-posed problems.
Recently, various approaches to solving this problem have been proposed, each with its own advantages and areas of application. In Ref. [13], a new method based on mixed quasi-reversibility in the H(div) space is presented. This approach takes into account both the partial differential equation and the boundary conditions, which provides a stable and convergent solution for a wide class of elliptic Cauchy problems. This method demonstrates its effectiveness in cases where high accuracy is required under complex boundary conditions.
In Ref. [14], the authors investigate methods for accelerating the modified alternating algorithm (MA) using conjugate gradient methods and Landweber iteration. The aim of the study is to evaluate the accuracy and stability of these methods under different levels of noise in the data. Numerical experiments show that the CGNE (conjugate gradient on normal equations) method provides more accurate and stable solutions compared to other methods, especially at low noise levels. However, the CGME (conjugate gradient for minimum error) method has shown sensitivity to the noise level, which limits its application under strong disturbances.
An interesting approach to solving the Cauchy problem is proposed in Ref. [15], where the game theory is used. The problem is formulated as a Nash game in which the Cauchy data are distributed between two players. This approach makes it possible to minimize the functionals associated with the divergence of the Dirichlet and Neumann conditions, which ensures high efficiency and stability of data recovery, even in the presence of noise.
In Ref. [16], a solution to geometric inverse problems for the Cauchy–Stokes system is considered using the Nash game theory. The problem is formulated as a three-player game, where two players are to complement the data, and the third one is to detect inclusions. This approach allows one to solve the problems of detecting unknown cavities in a stationary viscous fluid, demonstrating high efficiency under complex boundary conditions and incomplete data.
These different approaches to solving the Cauchy problem for the Helmholtz equation highlight the importance of developing methods adapted to the specifics of the problem. Each of the proposed methods has its own advantages and areas of application, making them indispensable tools in numerical analysis and applied mathematics.
Solving the Cauchy problem for the Helmholtz equation at large wave numbers is a significant challenge in numerical analysis that requires accurate and robust methods. In Ref. [17], an adaptive grid construction method, which optimizes numerical computations by refining the grid in areas with high error and reducing the number of nodes where the error is minimal, is presented. This approach provides high accuracy in solving the Cauchy problem due to the efficient distribution of computational resources. In Ref. [18], this idea is developed by proposing a combination of adaptive grids with regularization methods such as Tikhonov’s method. Adaptive grid algorithms can significantly reduce computational costs while providing accurate results even in problems with inhomogeneous data.
When it is necessary to solve the Cauchy problem for the Helmholtz equation at high wavenumbers, traditional methods often encounter difficulties. In such conditions, adaptive grids and advanced algorithms, as shown in Ref. [19], demonstrate high numerical stability, especially in the presence of noise in the data. The advantage of adaptive methods is their ability to adapt to the specifics of the problem without the need for preliminary parameter adjustment, which makes them convenient and efficient. In Ref. [20], the authors also present an efficient algorithm for solving the Cauchy problem based on the relaxation of alternating iterations, which shows high accuracy and stability at various noise levels. This approach is especially effective at large wave numbers where traditional methods can give unstable solutions. Thus, the use of adaptive grids in combination with modern algorithms for solving the Cauchy problem for the Helmholtz equation at high wave numbers provides high accuracy, stability, and efficiency of numerical solutions. These methods offer new possibilities for solving complex problems in various fields of science and technology.
The connection between the continuation problem and the boundary inverse problem shows how the physical aspects of the Helmholtz equation become central when the task is to recover unknown boundary conditions.
The aim of the Cauchy problem is to find a solution to the Helmholtz equation in regions different from the original, which may be necessary, for example, for modeling the interaction of sound with various obstacles or changes in the medium along the path of wave propagation. When solving this problem, it is important to take into account the boundary conditions that determine the behavior of sound waves at the boundaries of different media or obstacles.
The inverse problem is solved by successively solving the direct and adjoint problems. The efficiency of this approach is determined by the speed and stability of the numerical solution of these problems. In Ref. [21], a new method for solving the direct problem of the two-dimensional Helmholtz equation, which outperforms traditional approaches in terms of accuracy and stability, is presented. The scaled boundary finite element method (SBFEM) demonstrates significant advantages in modeling complex physical phenomena. In Ref. [22], an adaptive scaled boundary finite element method for the topological optimization of structures was proposed, which makes it possible to efficiently model complex geometries with dynamic loads. This method provides a high convergence rate and accuracy, which is especially important when working with unstable problems.
Quadratic convergence of numerical solution methods plays a key role in ensuring high accuracy and computational efficiency. Such convergence significantly accelerates the achievement of the desired level of accuracy, which makes methods with quadratic convergence preferable for solving many problems. In Ref. [23], the authors propose a new iterative method for solving the two-dimensional Bratu problem, which demonstrates the ability to accurately calculate both branches of the solution to the problem using quasi-linearization and Newton–Raphson–Kantorovich approximation. These methods, similar to our study, provide quadratic convergence of numerical solutions, which is important for problems requiring high accuracy.
In Ref. [24], the stability of solutions to the inverse problem of source reconstruction in a medium with two layers separated by an uneven boundary is studied. The authors prove that, despite the complexity of the interface structure, it is possible to obtain stable solutions, which is important for applications in geophysics, where reconstruction of wave radiation sources in layered media is required.
In our work, we consider the Cauchy problem for the Helmholtz equation, which is also known as the continuation problem. The main attention is paid to the reduction of the continuation problem to a boundary inverse problem for a well-posed direct problem. We propose a generalized solution to the direct problem and conduct a detailed analysis of its stability. To solve the inverse problem, an optimization-based approach relying on the gradient method is used. Special attention is paid to a comparative analysis of the Landweber and Nesterov methods. The article describes in detail the process of calculating the gradient in discrete form. In addition, the formulation of the conjugate problem in discrete form is given, which allows us to build an algorithm for solving the inverse problem using gradient methods. This study performs a computational experiment for the boundary inverse problem at various noise levels, and its results confirm the effectiveness of the proposed method. A graphical comparative analysis of the Landweber and Nesterov methods is presented, and the advantages and disadvantages of the gradient methods are demonstrated in terms of both convergence and computational efficiency.

2. Materials and Methods

First, we present the statement of the Cauchy problem. The original Cauchy problem for the Helmholtz equation in the domain $\Omega = (0,1) \times (0,1)$ is written as:
$$u_{xx} + u_{yy} + k^2 u = 0, \quad (x, y) \in \Omega, \tag{1}$$
$$u_x(0, y) = 0, \quad y \in (0,1), \tag{2}$$
$$u(0, y) = f(y), \quad y \in (0,1), \tag{3}$$
$$u(x, 0) = u(x, 1) = 0, \quad x \in (0,1), \tag{4}$$
where $k$ is a given constant. In the continuation problem, the task is to find the function $u(x, y)$ in the domain $\Omega$ from the given $f(y)$.
The aim of the Cauchy problem for the Helmholtz equation is to reconstruct a solution to the equation in the domain using the data at the boundary of the domain or in the other part of space. This problem is often ill-posed in the Hadamard sense, which requires special methods to solve it.
The Cauchy problem for a partial differential equation, in particular for the Helmholtz equation, is an ill-posed problem. This means that small perturbations in the data can lead to large changes in the solution. For such problems, it is important to study conditional stability, that is, to evaluate how sensitive the solution is to small changes in the data and under what conditions a stable solution can be ensured. For the Laplace equation, i.e., for $k = 0$, we can prove the following theorem on conditional stability.
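To make this instability concrete, note that separated solutions of $u_{xx} + u_{yy} = 0$ with $u_x(0, y) = 0$ and $u(x, 0) = u(x, 1) = 0$ have the form $\cosh(n\pi x)\sin(n\pi y)$, so a perturbation of size $\varepsilon$ in the $n$-th Fourier mode of the data $f$ is amplified to $\varepsilon\cosh(n\pi)$ at $x = 1$. A minimal numerical sketch (assuming NumPy; illustrative, not from the paper):

```python
import numpy as np

# Separated solutions of u_xx + u_yy = 0 with u_x(0, y) = 0 and
# u(x, 0) = u(x, 1) = 0 have the form cosh(n*pi*x) * sin(n*pi*y),
# so a data perturbation eps*sin(n*pi*y) at x = 0 grows to
# eps*cosh(n*pi) at x = 1.

def amplification(n):
    """Growth factor of the n-th Fourier mode from x = 0 to x = 1."""
    return np.cosh(n * np.pi)

eps = 1e-6  # amplitude of a small perturbation in the Cauchy data f
for n in (1, 5, 10):
    print(n, eps * amplification(n))
```

Even a perturbation of amplitude $10^{-6}$ in the tenth mode dominates the continued solution, which is why regularizing iterative methods are required.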
Theorem 1. 
For $f \in C^2[0,1]$, let there exist a solution $u \in C^2(\bar{\Omega})$ to problem (1)–(4).
In the domain $\Omega = \{ (x, y) \in \mathbb{R}^2 : x \in (0,1),\ y \in (0,1) \}$, let us consider the initial boundary value problem:
$$u_{xx} + u_{yy} = 0, \quad (x, y) \in \Omega,$$
$$u_x(0, y) = 0, \quad y \in (0,1),$$
$$u(0, y) = f(y), \quad y \in (0,1),$$
$$u(x, 0) = u(x, 1) = 0, \quad x \in (0,1).$$
Then, the following conditional stability estimate is valid:
$$\int_0^1 u^2(x, y)\, dy \le \left( \int_0^1 f^2(y)\, dy \right)^{1-x} \left( \int_0^1 u^2(1, y)\, dy \right)^{x}. \tag{5}$$
The study of the conditional stability of the continuation problem allows us to obtain important results that can be used for the numerical solution of ill-posed problems. A priori estimates and regularization methods play a key role in ensuring the stability of the solution. Understanding the conditions under which the continuation problem is conditionally stable contributes to the development of effective numerical methods and their application in various fields of science and technology [25,26,27].

2.1. Direct and Inverse Problems

The ill-posed nature of the Cauchy problem creates significant difficulties when trying to find its numerical solution. An effective approach to solving this problem is to reduce the continuation problem to a boundary inverse problem. This allows us to use the methods developed for solving inverse problems to find a solution to the original problem. Let us consider the formulation of the direct and inverse problems.
$$u_{xx} + u_{yy} + k^2 u = 0, \quad (x, y) \in \Omega, \tag{6}$$
$$u_x(0, y) = 0, \quad y \in (0,1), \tag{7}$$
$$u(1, y) = q(y), \quad y \in (0,1), \tag{8}$$
$$u(x, 0) = u(x, 1) = 0, \quad x \in (0,1). \tag{9}$$
In the direct problem, one must find $u(x, y)$ for the given function $q(y)$. In the inverse problem, it is necessary to find $q(y) = u(1, y)$, taking into account the additional information
$$u(0, y) = f(y), \quad y \in (0,1). \tag{10}$$
Definition: A function $u(x, y) \in L_2(\Omega)$ is called a generalized (weak) solution to the direct problem (6)–(9) if, for any $\omega \in H^2(\Omega)$ such that
$$\omega_x(0, y) = 0, \quad y \in (0,1), \tag{11}$$
$$\omega(1, y) = 0, \quad y \in (0,1), \tag{12}$$
$$\omega(x, 0) = \omega(x, 1) = 0, \quad x \in (0,1), \tag{13}$$
the equality
$$\int_\Omega u \left( \omega_{xx} + \omega_{yy} + k^2 \omega \right) dx\, dy - \int_0^1 q(y)\, \omega_x(1, y)\, dy = 0 \tag{14}$$
is satisfied. A function $\omega \in H^2(\Omega)$ satisfying conditions (11)–(13) will be called a test function.
The direct problem for the Helmholtz equation involves finding a solution under given boundary conditions. The well-posedness of this problem is critical for many applications in science and technology. Below, we consider the well-posedness of the generalized solution of the direct problem for the Helmholtz equation.
Theorem 2. 
If $q \in L_2(0,1)$ and $-\sqrt{2} < k < \sqrt{2}$, then the generalized solution $u \in L_2(\Omega)$ to problem (6)–(9) exists, is unique, and satisfies the estimate
$$\| u \|_{L_2(\Omega)}^2 \le \| q \|_{L_2(0,1)}^2 \cdot \frac{4\sqrt{2}}{(2 - k^2)^2}. \tag{15}$$
Proof of Theorem 2. 
Consider a function $\bar{q} \in C_0^\infty(0,1)$. Here, $C_0^\infty(0,1)$ is the space of finite, infinitely differentiable functions with support in the interval $(0,1)$. Then, for each such function, problem (6)–(9) has a unique, infinitely differentiable solution, which we will further denote by $\bar{u}$ [28].
Consider the auxiliary problem:
$$\omega_{xx} + \omega_{yy} = \bar{u}, \quad (x, y) \in \Omega, \tag{16}$$
$$\omega_x(0, y) = 0, \quad y \in (0,1), \tag{17}$$
$$\omega(1, y) = 0, \quad y \in (0,1), \tag{18}$$
$$\omega(x, 0) = \omega(x, 1) = 0, \quad x \in (0,1). \tag{19}$$
Let us integrate the identity
$$\omega \cdot \bar{u} = \omega (\omega_{xx} + \omega_{yy}) = (\omega \cdot \omega_x)_x - \omega_x^2 + (\omega \cdot \omega_y)_y - \omega_y^2$$
over the domain $\Omega$; taking (17)–(19) into account, we obtain
$$\int_0^1 \!\! \int_0^1 \omega \bar{u}\, dx\, dy = \int_0^1 \omega_x \omega \Big|_0^1\, dy + \int_0^1 \omega_y \omega \Big|_0^1\, dx - \int_0^1 \!\! \int_0^1 (\omega_x^2 + \omega_y^2)\, dx\, dy.$$
It means that
$$\int_\Omega \omega \bar{u}\, dx\, dy = -\int_\Omega (\omega_x^2 + \omega_y^2)\, dx\, dy.$$
Hence,
$$\int_\Omega (\omega_x^2 + \omega_y^2)\, dx\, dy \le \| \omega \|_{L_2(\Omega)} \cdot \| \bar{u} \|_{L_2(\Omega)}. \tag{20}$$
From the equalities
$$0 = \omega(1, y) = \omega(x, y) + \int_x^1 \omega_\xi(\xi, y)\, d\xi,$$
$$\omega(x, y) = \omega(x, 0) + \int_0^y \omega_\eta(x, \eta)\, d\eta = \int_0^y \omega_\eta(x, \eta)\, d\eta,$$
it follows that
$$2\omega(x, y) = \int_0^y \omega_\eta(x, \eta)\, d\eta - \int_x^1 \omega_\xi(\xi, y)\, d\xi.$$
Hence,
$$|\omega(x, y)| \le \frac{1}{2} \left( \int_0^1 |\omega_\eta(x, \eta)|\, d\eta + \int_0^1 |\omega_\xi(\xi, y)|\, d\xi \right).$$
By virtue of the Cauchy–Bunyakovsky inequality,
$$\int_0^1 |\omega_\eta(x, \eta)|\, d\eta \le \left( \int_0^1 \omega_\eta^2(x, \eta)\, d\eta \right)^{1/2}, \qquad \int_0^1 |\omega_\xi(\xi, y)|\, d\xi \le \left( \int_0^1 \omega_\xi^2(\xi, y)\, d\xi \right)^{1/2}.$$
As $(a + b)^2 \le 2a^2 + 2b^2$, we obtain the inequality
$$\omega^2(x, y) \le \frac{1}{4} \left( \int_0^1 |\omega_\eta(x, \eta)|\, d\eta + \int_0^1 |\omega_\xi(\xi, y)|\, d\xi \right)^2 \le \frac{1}{2} \left( \int_0^1 \omega_\eta^2(x, \eta)\, d\eta + \int_0^1 \omega_\xi^2(\xi, y)\, d\xi \right).$$
Integrating the resulting inequality over $\Omega$, we obtain
$$\| \omega \|_{L_2(\Omega)}^2 \le \frac{1}{2} \int_\Omega (\omega_x^2 + \omega_y^2)\, dx\, dy. \tag{21}$$
Comparing (21) with (20), we obtain
$$\| \omega \|_{L_2(\Omega)}^2 \le \frac{1}{2} \int_\Omega (\omega_x^2 + \omega_y^2)\, dx\, dy \le \frac{1}{2} \| \omega \|_{L_2(\Omega)} \cdot \| \bar{u} \|_{L_2(\Omega)},$$
whence
$$\| \omega \|_{L_2(\Omega)} \le \frac{1}{2} \| \bar{u} \|_{L_2(\Omega)}, \qquad \int_\Omega (\omega_x^2 + \omega_y^2)\, dx\, dy \le \frac{1}{2} \| \bar{u} \|_{L_2(\Omega)}^2. \tag{22}$$
Now, we integrate the expression $\omega_x \bar{u} = \omega_x (\omega_{xx} + \omega_{yy})$ over $\Omega$; using boundary conditions (17)–(19), we obtain:
$$\int_\Omega \omega_x (\omega_{xx} + \omega_{yy})\, dx\, dy = \int_\Omega \left[ \frac{1}{2} (\omega_x^2)_x + (\omega_x \omega_y)_y - \frac{1}{2} (\omega_y^2)_x \right] dx\, dy$$
$$= \frac{1}{2} \int_0^1 \omega_x^2(1, y)\, dy - \frac{1}{2} \int_0^1 \omega_x^2(0, y)\, dy + \int_0^1 \omega_x \omega_y (x, 1)\, dx - \int_0^1 \omega_x \omega_y (x, 0)\, dx - \frac{1}{2} \int_0^1 \omega_y^2(1, y)\, dy + \frac{1}{2} \int_0^1 \omega_y^2(0, y)\, dy$$
$$= \frac{1}{2} \int_0^1 \omega_x^2(1, y)\, dy + \frac{1}{2} \int_0^1 \omega_y^2(0, y)\, dy.$$
Hence,
$$\int_0^1 \omega_x^2(1, y)\, dy \le 2 \left| \int_\Omega \omega_x \bar{u}\, dx\, dy \right| \le 2 \| \omega_x \|_{L_2(\Omega)} \cdot \| \bar{u} \|_{L_2(\Omega)}.$$
From (22), we obtain the inequality $\| \omega_x \|_{L_2(\Omega)}^2 \le \frac{1}{2} \| \bar{u} \|_{L_2(\Omega)}^2$. Hence,
$$\int_0^1 \omega_x^2(1, y)\, dy \le 2 \| \omega_x \|_{L_2(\Omega)} \cdot \| \bar{u} \|_{L_2(\Omega)} \le 2 \cdot \frac{1}{\sqrt{2}} \| \bar{u} \|_{L_2(\Omega)}^2 = \sqrt{2}\, \| \bar{u} \|_{L_2(\Omega)}^2. \tag{23}$$
Let us take the solution $\omega(x, y)$ of the auxiliary problem (16)–(19) as a test function and substitute it into (14). Replacing $q, u$ with $\bar{q}, \bar{u}$ in (14) and taking into account (23), we obtain
$$\int_\Omega (\bar{u}^2 + k^2 \omega \bar{u})\, dx\, dy \le \left( \int_0^1 \omega_x^2(1, y)\, dy \right)^{1/2} \| \bar{q} \|_{L_2(0,1)} \le \sqrt[4]{2}\, \| \bar{u} \|_{L_2(\Omega)} \cdot \| \bar{q} \|_{L_2(0,1)}.$$
This gives the estimate
$$\| \bar{u} \|_{L_2(\Omega)}^2 \le \| \bar{q} \|_{L_2(0,1)} \cdot \sqrt[4]{2}\, \| \bar{u} \|_{L_2(\Omega)} + k^2 \| \omega \|_{L_2(\Omega)} \cdot \| \bar{u} \|_{L_2(\Omega)} \le \| \bar{q} \|_{L_2(0,1)} \cdot \sqrt[4]{2}\, \| \bar{u} \|_{L_2(\Omega)} + \frac{k^2}{2} \| \bar{u} \|_{L_2(\Omega)}^2,$$
$$\left( 1 - \frac{k^2}{2} \right) \| \bar{u} \|_{L_2(\Omega)}^2 \le \| \bar{q} \|_{L_2(0,1)} \cdot \sqrt[4]{2}\, \| \bar{u} \|_{L_2(\Omega)},$$
$$\| \bar{u} \|_{L_2(\Omega)} \le \| \bar{q} \|_{L_2(0,1)} \cdot \frac{2 \sqrt[4]{2}}{2 - k^2}.$$
The estimate (15) for an arbitrary $q \in L_2(0,1)$ then follows by passing to the limit over a sequence $\bar{q} \to q$ in $L_2(0,1)$.
The theorem is proved. □
The well-posed nature of the generalized solution of the direct problem for the Helmholtz equation is important for theoretical and applied problems. The existence, uniqueness, and stability of the solution ensure the reliability of mathematical models and their application in various fields. Understanding these aspects helps us to effectively use functional analysis methods in solving problems in mathematical physics and engineering [29].

2.2. Formulation and Solution of the Optimization Problem

In this section, we study the formulation of the boundary inverse problem in operator form, as well as the representation of the optimization problem solved using the gradient method.
Operator A is defined as follows:
$$A: q(y) = u(1, y) \mapsto f(y) = u(0, y),$$
where $u(x, y)$ is a solution to the direct problem (6)–(9). The problem is then written in operator form as
$$Aq = f. \tag{24}$$
Operator Equation (24) is transformed into an optimization problem, and the functional
$$J(q^n) = \| A q^n - f \|^2 = \int_0^1 \left( u(0, y; q^n) - f(y) \right)^2 dy \tag{25}$$
is minimized using the Landweber gradient method,
$$q^{n+1} = q^n - \alpha J'(q^n), \tag{26}$$
where $\alpha$ is a descent parameter.
Gradient methods play a key role in solving optimization and numerical analysis problems. One such method is the Landweber method, which is a variant of gradient descent that is used to solve ill-posed problems, in particular, inverse problems. This method is based on the iterative procedure that finds a solution through successive improvement of the initial approximation. One of the key aspects of the method is its convergence, which is determined by the behavior of the functional at each step of the iterative process.
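For a discrete linear operator, iteration (26) can be sketched as follows. The matrix `A`, the data `f`, and the step size below are illustrative placeholders (a small well-conditioned matrix chosen for the demonstration), not the paper's actual discretization; NumPy is assumed:

```python
import numpy as np

def landweber(A, f, alpha, n_iter):
    """Landweber iteration q_{n+1} = q_n - alpha * J'(q_n) for
    J(q) = ||A q - f||^2, whose gradient is J'(q) = 2 A^T (A q - f)."""
    q = np.zeros(A.shape[1])
    for _ in range(n_iter):
        q = q - alpha * 2.0 * A.T @ (A @ q - f)
    return q

# Toy example (illustrative only): recover q from consistent data f = A q.
rng = np.random.default_rng(0)
A = np.eye(5) + 0.1 * rng.standard_normal((5, 5))
q_true = rng.standard_normal(5)
f = A @ q_true
q = landweber(A, f, alpha=0.1, n_iter=5000)
print(np.linalg.norm(A @ q - f))  # residual decreases with n_iter
```

A smaller step `alpha` slows the iteration but keeps it stable; this is the trade-off that the accelerated method of the next subsection is designed to ease.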
Theorem 3 
(on convergence in the functional). Let $Q$ and $F$ be Hilbert spaces and $A: Q \to F$ a bounded linear operator. Suppose that for some $f \in F$ there exists an exact solution $q_T$ of the equation $Aq = f$. Then, for any $q^0 \in Q$ and $\alpha \in (0, \| A \|^{-2})$, the sequence $q^n$ defined by the equalities
$$q^{n+1} = q^n - \alpha J'(q^n)$$
converges in the functional, and the estimate
$$J(q^n) \le \frac{\| q^0 - q_T \|^2}{n \alpha (1 - \alpha \| A \|^2)}$$
is satisfied [27].
One of the effective approaches to solving the continuation problems is the use of optimization methods, among which Nesterov’s accelerated method occupies a special place due to its high speed of convergence. Nesterov’s method belongs to the family of gradient methods but has a significantly improved convergence speed compared to classical gradient methods. The main idea of the method is to use both the current and previous gradient directions to adjust the next step.
To apply the accelerated Nesterov’s method to a boundary inverse problem, one must perform the following steps:
  • Initialization: Set the initial approximation $q^0$ and the initial parameters of the method. The optimization step and smoothing coefficients can be selected as initial parameters.
  • Main loop: for each iteration $k$, calculate:
    • the gradient of the objective function $J'(q^k)$;
    • the corrected direction of movement $v^k = q^k + \beta_k (q^k - q^{k-1})$, where $\beta_k$ is the acceleration coefficient;
    • the updated current solution $q^{k+1} = v^k - \alpha_k J'(v^k)$, where $\alpha_k$ is the optimization step.
  • Checking the stopping condition: the process continues until the convergence condition is reached.
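The steps above can be sketched for the same quadratic functional $J(q) = \| Aq - f \|^2$ as follows; the constant step and acceleration coefficient are illustrative simplifications (in practice $\alpha_k$ and $\beta_k$ are often updated per iteration), and the matrix `A` is a toy placeholder (NumPy assumed):

```python
import numpy as np

def nesterov(A, f, alpha, beta, n_iter):
    """Accelerated (Nesterov) iteration for J(q) = ||A q - f||^2:
    v_k = q_k + beta * (q_k - q_{k-1}),  q_{k+1} = v_k - alpha * J'(v_k)."""
    q_prev = q = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = q + beta * (q - q_prev)        # corrected direction of movement
        grad = 2.0 * A.T @ (A @ v - f)     # gradient J'(v)
        q_prev, q = q, v - alpha * grad    # update the current solution
    return q

# Toy example (illustrative only): recover q from consistent data f = A q.
rng = np.random.default_rng(0)
A = np.eye(5) + 0.1 * rng.standard_normal((5, 5))
q_true = rng.standard_normal(5)
f = A @ q_true
q = nesterov(A, f, alpha=0.1, beta=0.9, n_iter=500)
print(np.linalg.norm(q - q_true))
```

A common refinement is to generate $\beta_k$ from the Nesterov sequence $t_{k+1} = \big(1 + \sqrt{1 + 4 t_k^2}\big)/2$, $\beta_k = (t_k - 1)/t_{k+1}$; a constant $\beta$ is used here only for brevity.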
The use of the accelerated Nesterov method for solving boundary value inverse problems has a number of advantages:
Accelerated convergence: The Nesterov method achieves convergence faster than conventional gradient methods. This is especially important for tasks where gradient calculation can be expensive. In particular, it achieves a convergence rate of $O(1/k^2)$, where $k$ is the number of iterations. This advantage is especially important in problems where traditional methods, such as the gradient descent method, have a slower convergence of $O(1/k)$ [30].
Efficiency: The Nesterov method scales well for high-dimensional problems, which makes it applicable to real-world problems with a large number of unknowns. The Nesterov method is the advantageous choice in problems with large and complex data, where traditional methods may encounter problems of local minimum and slow convergence. It allows us to search for the global minimum more efficiently, avoiding getting stuck in local extremes [31].
Stability: The method has better resistance to noise and data errors, which is critically important for inverse problems, which often suffer from incorrectness and instability. Although the Nesterov method is sensitive to parameter selection and may require fine-tuning, it often shows better performance even in the presence of noise if the parameters are set correctly. Regularization and the correct choice of initial conditions can significantly improve the stability of the method to noise [32].
The Nesterov acceleration scheme is of particular interest because it is not only extremely easy to implement, but Nesterov himself proved that it generates a sequence of iterates $q^k$ for which [30]
$$J(q^k) - J(q_T) = O(k^{-2}).$$
The accelerated Nesterov method is a powerful tool for solving boundary value inverse problems. Its high convergence rate and noise resistance make it particularly attractive for applications in areas requiring accurate and rapid reconstruction of internal parameters of objects from boundary data. The introduction of this method into the practice of solving inverse problems opens up new prospects for a more efficient and reliable analysis of various physical phenomena.

3. Results

3.1. Gradient Calculation in Discrete Form

The numerical solution of inverse problems admits two main approaches: (1) first formulate the optimization problem and then discretize it [5], or (2) first discretize the problem and then construct and solve the optimization problem. In this paper, the second approach, i.e., the sequential "discretize, then optimize" scheme, is considered.
Direct discretization of equations before setting the optimization problem allows us to obtain more precise control over the discretization process. The methods such as finite differences can be applied directly to differential equations, which enables us to take into account problem-specific features such as boundary conditions or mesh features. This provides a more accurate representation of the original problem in discrete form.
Let us consider how to discretize the direct problem for the Helmholtz equation based on Ref. [33]. To proceed to the numerical solution of the problem, we construct a grid $\omega_h$ in the domain $\Omega$ with step $h = 1/N$, where $N$ is a positive integer. This grid consists of the points $\omega_h = \{ x_i = ih,\ y_j = jh;\ i = 0, \dots, N,\ j = 0, \dots, N \}$. Now, we can write the corresponding difference problem for the Helmholtz equation on this grid. Thus, the direct problem (6)–(9) in discrete form is written as follows:
$$\frac{1}{h^2} \left( u_{i+1,j} - 2u_{i,j} + u_{i-1,j} \right) + \frac{1}{h^2} \left( u_{i,j+1} - 2u_{i,j} + u_{i,j-1} \right) + k^2 u_{i,j} = 0, \tag{27}$$
$$u_{1,j} - u_{0,j} = 0, \tag{28}$$
$$u_{N,j} = q_j^n, \tag{29}$$
$$u_{i,0} = u_{i,N} = 0. \tag{30}$$
The objective functional can be presented in the following discrete form:
$$J(q_j^n) = \sum_{j=0}^{N-1} \left( u_{0,j}(q_j^n) - f_j \right)^2 \cdot h. \tag{31}$$
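The discrete direct problem (27)–(30) is a linear system in the grid values $u_{i,j}$. Below is a minimal sketch of assembling and solving it (dense assembly for brevity; the function names are illustrative, not the authors' implementation; NumPy assumed), together with a consistency check against the exact solution $u = \cosh(\lambda x)\sin(\pi y)$, $\lambda^2 = \pi^2 - k^2$, which satisfies (1)–(4) with $f(y) = \sin(\pi y)$:

```python
import numpy as np

def solve_direct(q, k, N):
    """Assemble and solve the discrete direct problem (27)-(30)
    on an (N+1) x (N+1) grid; returns the grid solution u[i, j]."""
    h = 1.0 / N
    n = (N + 1) ** 2
    ix = lambda i, j: i * (N + 1) + j
    A, b = np.zeros((n, n)), np.zeros(n)
    for i in range(N + 1):
        for j in range(N + 1):
            r = ix(i, j)
            if j in (0, N):                       # (30): u_{i,0} = u_{i,N} = 0
                A[r, r] = 1.0
            elif i == 0:                          # (28): u_{1,j} - u_{0,j} = 0
                A[r, ix(1, j)], A[r, ix(0, j)] = 1.0, -1.0
            elif i == N:                          # (29): u_{N,j} = q_j
                A[r, r], b[r] = 1.0, q[j]
            else:                                 # (27): five-point scheme
                A[r, r] = -4.0 / h**2 + k**2
                for p, s in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    A[r, ix(p, s)] = 1.0 / h**2
    return np.linalg.solve(A, b).reshape(N + 1, N + 1)

def objective(q, f, k, N):
    """Discrete functional (31): J = sum_{j<N} (u_{0,j} - f_j)^2 * h."""
    u = solve_direct(q, k, N)
    return np.sum((u[0, :N] - f[:N]) ** 2) / N

# Consistency check: boundary trace of the exact solution at x = 1.
k = 1.0
for N in (12, 24):
    y = np.linspace(0.0, 1.0, N + 1)
    lam = np.sqrt(np.pi**2 - k**2)
    q = np.cosh(lam) * np.sin(np.pi * y)
    u = solve_direct(q, k, N)
    print(N, np.max(np.abs(u[0] - np.sin(np.pi * y))))
```

The error decreases as the grid is refined; note that the one-sided approximation (28) of the Neumann condition is only first-order accurate, so the observed accuracy is $O(h)$ rather than $O(h^2)$.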
The increment of the functional is realized through the perturbation $q_j^n + \delta q_j^n$; hence, we introduce the notation
$$u_{i,j} \approx u(x_i, y_j;\, q_j^n), \qquad \tilde{u}_{i,j} \approx u(x_i, y_j;\, q_j^n + \delta q_j^n), \qquad \delta u_{i,j} = \tilde{u}_{i,j} - u_{i,j}. \tag{32}$$
Using notation (32), we calculate the increment of the objective functional $J(q^n)$:
$$J(q_j^n + \delta q_j^n) - J(q_j^n) = \sum_{j=0}^{N-1} \left( \tilde{u}_{0,j} - f_j \right)^2 \cdot h - \sum_{j=0}^{N-1} \left( u_{0,j} - f_j \right)^2 \cdot h = \sum_{j=0}^{N-1} \delta u_{0,j} \cdot 2 \left( u_{0,j} - f_j \right) \cdot h + o(\delta u). \tag{33}$$
Let us consider the formulation of the perturbed problem for problem (27)–(30):
$$\frac{1}{h^2} \left( \tilde{u}_{i+1,j} - 2\tilde{u}_{i,j} + \tilde{u}_{i-1,j} \right) + \frac{1}{h^2} \left( \tilde{u}_{i,j+1} - 2\tilde{u}_{i,j} + \tilde{u}_{i,j-1} \right) + k^2 \tilde{u}_{i,j} = 0, \tag{34}$$
$$\tilde{u}_{1,j} - \tilde{u}_{0,j} = 0, \tag{35}$$
$$\tilde{u}_{N,j} = q_j^n + \delta q_j^n, \tag{36}$$
$$\tilde{u}_{i,0} = \tilde{u}_{i,N} = 0. \tag{37}$$
To obtain the problem for $\delta u_{i,j}$, we subtract problem (27)–(30) from problem (34)–(37); taking into account (32), we obtain the following relations:
$$\frac{1}{h^2} \left( \delta u_{i+1,j} - 2\delta u_{i,j} + \delta u_{i-1,j} \right) + \frac{1}{h^2} \left( \delta u_{i,j+1} - 2\delta u_{i,j} + \delta u_{i,j-1} \right) + k^2 \delta u_{i,j} = 0, \tag{38}$$
$$\delta u_{1,j} - \delta u_{0,j} = 0, \tag{39}$$
$$\delta u_{N,j} = \delta q_j^n, \tag{40}$$
$$\delta u_{i,0} = \delta u_{i,N} = 0. \tag{41}$$
To derive the gradient of the objective functional, we use summation by parts:
$$(\Delta_i \upsilon, \omega)_i = \upsilon_N \omega_{N-1} - \upsilon_1 \omega_0 - (\upsilon_i, \Delta_{i-1} \omega)_i, \qquad (\Delta_{i-1} \upsilon, \omega)_i = \upsilon_{N-1} \omega_N - \upsilon_0 \omega_1 - (\upsilon_i, \Delta_i \omega)_i,$$
where
$$\Delta_i \upsilon = \upsilon_{i+1} - \upsilon_i, \qquad (\upsilon_i, \omega_i)_i = \sum_{i=1}^{N-1} \upsilon_i \omega_i.$$
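These discrete identities are straightforward to verify numerically; a minimal sketch checking the first formula for random grid functions (illustrative only, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12
v = rng.standard_normal(N + 1)  # grid function v_0, ..., v_N
w = rng.standard_normal(N + 1)  # grid function w_0, ..., w_N

# Left side: (Delta_i v, w)_i = sum_{i=1}^{N-1} (v_{i+1} - v_i) * w_i
lhs = sum((v[i + 1] - v[i]) * w[i] for i in range(1, N))

# Right side: v_N * w_{N-1} - v_1 * w_0 - (v_i, Delta_{i-1} w)_i
rhs = (v[N] * w[N - 1] - v[1] * w[0]
       - sum(v[i] * (w[i] - w[i - 1]) for i in range(1, N)))

print(abs(lhs - rhs))  # agrees up to rounding error
```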
Multiplying (38) by the conjugate grid function $\psi_{i,j}$ and summing over $i$ and $j$ from $1$ to $N-1$, we obtain
$$0 = I = h^2 \sum_{i=1}^{N-1} \sum_{j=1}^{N-1} \left[ \frac{1}{h^2} \left( \delta u_{i+1,j} - 2\delta u_{i,j} + \delta u_{i-1,j} \right) + \frac{1}{h^2} \left( \delta u_{i,j+1} - 2\delta u_{i,j} + \delta u_{i,j-1} \right) + k^2 \delta u_{i,j} \right] \psi_{i,j}$$
$$= \sum_{j=1}^{N-1} \left[ (\Delta_i \delta u_j, \psi_j)_i - (\Delta_{i-1} \delta u_j, \psi_j)_i \right] + \sum_{i=1}^{N-1} \left[ (\Delta_j \delta u_i, \psi_i)_j - (\Delta_{j-1} \delta u_i, \psi_i)_j \right] + k^2 h^2 \sum_{j=1}^{N-1} \sum_{i=1}^{N-1} \delta u_{i,j} \psi_{i,j}.$$
Let us transform the last expression using the formulas for summation by parts:
$$I = \sum_{j=1}^{N-1} \left[ \delta u_{N,j} \psi_{N-1,j} - \delta u_{1,j} \psi_{0,j} - (\delta u_{i,j}, \Delta_{i-1} \psi_j)_i - \delta u_{N-1,j} \psi_{N,j} + \delta u_{0,j} \psi_{1,j} + (\delta u_{i,j}, \Delta_i \psi_j)_i \right]$$
$$+ \sum_{i=1}^{N-1} \left[ \delta u_{i,N} \psi_{i,N-1} - \delta u_{i,1} \psi_{i,0} - (\delta u_{i,j}, \Delta_{j-1} \psi_i)_j - \delta u_{i,N-1} \psi_{i,N} + \delta u_{i,0} \psi_{i,1} + (\delta u_{i,j}, \Delta_j \psi_i)_j \right] + k^2 h^2 \sum_{j=1}^{N-1} \sum_{i=1}^{N-1} \delta u_{i,j} \psi_{i,j}.$$
Taking into account the boundary conditions (39)–(41), we obtain:
$$I = \sum_{j=1}^{N-1}\Bigl[\delta q_j^n\,\psi_{N-1,j} - \delta u_{N-1,j}\psi_{N,j} + \delta u_{0,j}\bigl(\psi_{1,j} - \psi_{0,j}\bigr) + \bigl(\delta u_{i,j}, \Delta_i\psi_j - \Delta_{i-1}\psi_j\bigr)\Bigr]$$
$$+ \sum_{i=1}^{N-1}\Bigl[-\delta u_{i,N-1}\psi_{i,N} - \delta u_{i,1}\psi_{i,0} + \bigl(\delta u_{i,j}, \Delta_j\psi_i - \Delta_{j-1}\psi_i\bigr)\Bigr] + k^2 h^2 \sum_{j=1}^{N-1}\sum_{i=1}^{N-1}\delta u_{i,j}\,\psi_{i,j}$$
$$= h^2\sum_{i=1}^{N-1}\sum_{j=1}^{N-1}\Bigl[\frac{1}{h^2}\bigl(\psi_{i+1,j} - 2\psi_{i,j} + \psi_{i-1,j}\bigr) + \frac{1}{h^2}\bigl(\psi_{i,j+1} - 2\psi_{i,j} + \psi_{i,j-1}\bigr) + k^2\psi_{i,j}\Bigr]\delta u_{i,j}$$
$$- \sum_{j=1}^{N-1}\delta u_{N-1,j}\psi_{N,j} - \sum_{i=1}^{N-1}\bigl(\delta u_{i,N-1}\psi_{i,N} + \delta u_{i,1}\psi_{i,0}\bigr) + h\sum_{j=1}^{N-1}\Bigl[\delta q_j^n\,\frac{\psi_{N-1,j}}{h} + \delta u_{0,j}\,\frac{\psi_{1,j} - \psi_{0,j}}{h}\Bigr].$$
Since this expression must vanish identically for an arbitrary perturbation δu, we require:
$$\frac{1}{h^2}\bigl(\psi_{i+1,j} - 2\psi_{i,j} + \psi_{i-1,j}\bigr) + \frac{1}{h^2}\bigl(\psi_{i,j+1} - 2\psi_{i,j} + \psi_{i,j-1}\bigr) + k^2\psi_{i,j} = 0,$$
$$\psi_{N,j} = 0,$$
$$\psi_{i,0} = \psi_{i,N} = 0,$$
$$0 = h\sum_{j=1}^{N-1}\Bigl[\delta q_j^n\,\frac{\psi_{N-1,j}}{h} + \delta u_{0,j}\,\frac{\psi_{1,j} - \psi_{0,j}}{h}\Bigr]. \tag{42}$$
Therefore, taking into account expression (33) and based on expression (42), we obtain:
$$\frac{\psi_{1,j} - \psi_{0,j}}{h} = 2\bigl(u_{0,j} - f_j\bigr), \qquad \bigl\langle \delta q^n, J'(q^n)\bigr\rangle = -h\sum_{j=1}^{N-1}\delta q_j^n\,\frac{\psi_{N-1,j}}{h}.$$
Note that the derivation of the gradient involves only the interior points j = 1, …, N − 1, excluding the two end-points. Therefore, the numerical solution of the inverse problem may not coincide with the exact one at the boundary nodes.
Hence, we can formulate the conjugate problem.
$$\frac{1}{h^2}\bigl(\psi_{i+1,j} - 2\psi_{i,j} + \psi_{i-1,j}\bigr) + \frac{1}{h^2}\bigl(\psi_{i,j+1} - 2\psi_{i,j} + \psi_{i,j-1}\bigr) + k^2\psi_{i,j} = 0, \tag{43}$$
$$\frac{\psi_{1,j} - \psi_{0,j}}{h} = 2\bigl(u_{0,j} - f_j\bigr), \tag{44}$$
$$\psi_{N,j} = 0, \tag{45}$$
$$\psi_{i,0} = \psi_{i,N} = 0. \tag{46}$$
This allows us to formulate and prove the following theorem.
Theorem 4. 
The functional J(q_j^n) at the point q_j^n has a Fréchet derivative, and the following equality holds:
$$\bigl(J'(q^n)\bigr)_j = -\frac{\psi_{N-1,j}}{h}. \tag{47}$$
Proof of Theorem 4. 
By definition [27], the Fréchet derivative of the functional satisfies
$$J(q^n + \delta q^n) - J(q^n) = \bigl\langle \delta q^n, J'(q^n)\bigr\rangle + o\bigl(\|\delta q^n\|\bigr).$$
From equality (33),
$$J(q_j^n + \delta q_j^n) - J(q_j^n) = \sum_{j=0}^{N-1} 2\bigl(u_{0,j} - f_j\bigr)\,\delta u_{0,j}\,h + o(\delta u),$$
and taking into account estimate (15), we come to the following conclusion:
$$o(\delta u) = o\bigl(\|\delta q^n\|\bigr).$$
Thus,
$$\bigl(J'(q^n)\bigr)_j = -\frac{\psi_{N-1,j}}{h},$$
where ψ i , j is a solution to the conjugate problem (43)–(46). The theorem is proved. □
Using the gradient functional, we can develop an algorithm that will enable us to numerically solve the inverse problem for the Helmholtz equation.

3.2. Algorithm for Solving a Boundary Inverse Problem Using Nesterov’s Method

To implement Nesterov’s method, we set the following parameters: λ_0 = 1, α_0 = 1/L, where L is the Lipschitz constant of the gradient.
  1. Choose the initial approximation q_j^0 and set p_j^0 = q_j^0.
  2. Solve numerically the direct problem (27)–(30) for p_j^0.
  3. Calculate the value of the functional J(p_j^0) using formula (31).
  4. If the value of the objective functional is not small enough, solve the conjugate problem (43)–(46).
  5. Calculate the gradient of the functional J′(p_j^0) using formula (47).
  6. Calculate the approximation q_j^1 = p_j^0 − α_0 J′(p_j^0).
  7. Assuming that q_j^n and q_j^{n−1} are known, calculate the parameters
$$\lambda_n = \frac{1 + \sqrt{1 + 4\lambda_{n-1}^2}}{2}, \qquad \gamma_{n-1} = \frac{1 - \lambda_{n-1}}{\lambda_n}.$$
  8. Calculate p_j^n = (1 − γ_{n−1}) q_j^n + γ_{n−1} q_j^{n−1}.
  9. Solve numerically the direct problem (27)–(30) for p_j^n.
  10. Calculate the value of the functional J(p_j^n).
  11. If the value of the objective functional is not small enough, solve the conjugate problem (43)–(46).
  12. Calculate the gradient of the functional J′(p_j^n) using formula (47).
  13. Calculate the next approximation q_j^{n+1} = p_j^n − α_n J′(p_j^n) and return to step 7.
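The steps above can be sketched compactly. The following Python sketch is our illustration, not the authors' code: the direct and conjugate solves of problems (27)–(30) and (43)–(46) are replaced by a generic gradient callback `grad`, so only the acceleration logic of steps 7–13 is shown, and the toy usage minimizes a small linear least-squares functional instead of the Helmholtz functional:

```python
import numpy as np

def nesterov(q0, grad, alpha, n_iter=500, tol=1e-9, J=None):
    """Nesterov's accelerated gradient method (steps 1-13 above).

    grad(q) returns J'(q); in the paper this requires solving the direct
    problem (27)-(30) and then the conjugate problem (43)-(46).
    """
    q_prev = q = np.asarray(q0, dtype=float)
    lam_prev = 1.0                        # lambda_0 = 1
    for _ in range(n_iter):
        lam = (1.0 + np.sqrt(1.0 + 4.0 * lam_prev ** 2)) / 2.0
        gamma = (1.0 - lam_prev) / lam    # gamma_{n-1}; equals 0 on the first step
        p = (1.0 - gamma) * q + gamma * q_prev
        if J is not None and J(p) < tol:  # stopping criterion J < eps
            break
        q_prev, q = q, p - alpha * grad(p)
        lam_prev = lam
    return q

# toy usage: minimize J(q) = ||A q - f||^2, whose gradient is 2 A^T (A q - f)
A = np.array([[2.0, 0.0], [0.0, 1.0]])
f = np.array([2.0, 3.0])
J = lambda q: float(np.sum((A @ q - f) ** 2))
grad = lambda q: 2.0 * A.T @ (A @ q - f)
q = nesterov(np.zeros(2), grad, alpha=0.1, J=J)  # alpha below 1/L, L = 2*lambda_max(A^T A)
```

The iterates approach the exact minimizer (1, 3) of the toy functional; for the boundary inverse problem, `grad` would be assembled from formula (47).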

4. Discussion

4.1. Numerical Solution of the Direct Problem

A direct (matrix) method is used to solve the direct and conjugate problems. To do this, we assemble the system of difference equations of the direct problem (27)–(30) into a single linear system [33]:
$$A_T X_T = B_T,$$
where A_T is a matrix of order (N + 1)^2, X_T = (u_{0,0}, u_{0,1}, …, u_{N,N})^T is the vector of unknowns, and B_T is the data vector (the boundary conditions of the direct problem). The resulting system of linear algebraic equations can be solved with the singular value decomposition (SVD) method.
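As a sketch of this step (an illustrative NumPy implementation, not the authors' code), the system can be solved through the singular value decomposition, discarding singular values below a tolerance — the usual safeguard when the discretized operator is poorly conditioned:

```python
import numpy as np

def svd_solve(A, b, rcond=1e-12):
    """Solve A x = b via the SVD A = U diag(s) V^T.

    Singular values below rcond * max(s) are treated as zero, which
    regularizes the solve when A is ill-conditioned.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cutoff = rcond * s.max()
    s_inv = np.where(s > cutoff, 1.0 / s, 0.0)  # pseudo-inverse of diag(s)
    return Vt.T @ (s_inv * (U.T @ b))

# sanity check on a small well-conditioned system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = svd_solve(A, b)
```

For the full (N + 1)^2 × (N + 1)^2 system, `A` would be the assembled difference matrix A_T and `b` the boundary-data vector B_T.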

4.2. Numerical Calculations for Solving the Inverse Problem

Let us consider a numerical solution to the boundary inverse problem for the Helmholtz equation using the Landweber and Nesterov methods. This problem is an important example of ill-posed problems that require the use of special regularization methods and numerical algorithms.
To solve the problem, two approaches are used: the Landweber method and the accelerated Nesterov method. A comparative analysis of the results obtained by these methods is presented for several experimental scenarios: without noise, with the addition of 1% noise, and with the addition of 5% noise.
Let us carry out numerical calculations for solving the inverse problem. We consider a solution in the domain Ω = (0, 1) × (0, 1) and choose the parameter k = 0.9. We set the exact boundary condition
$$q_{ex}(x, y) = 0.25\, e^{-\frac{(x - 1/4)^2 + (y - 1/4)^2}{0.1^2}} + 0.25\, e^{-\frac{(x - 3/4)^2 + (y - 3/4)^2}{0.1^2}}.$$
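For reference, the exact boundary function (read as two Gaussian bumps of height 0.25 centred at (1/4, 1/4) and (3/4, 3/4)) and a simple multiplicative noise model for the 1% and 5% experiments can be coded as follows; the noise model is our illustrative assumption, since the paper does not specify its exact form:

```python
import numpy as np

def q_exact(x, y):
    # two Gaussian bumps of height 0.25 and width 0.1
    return (0.25 * np.exp(-((x - 0.25) ** 2 + (y - 0.25) ** 2) / 0.1 ** 2)
            + 0.25 * np.exp(-((x - 0.75) ** 2 + (y - 0.75) ** 2) / 0.1 ** 2))

def add_noise(f, level, seed=0):
    # multiplicative noise: level = 0.01 for 1%, 0.05 for 5% (our assumption)
    rng = np.random.default_rng(seed)
    return f * (1.0 + level * rng.standard_normal(f.shape))

# sample the exact function on a uniform grid over [0, 1] x [0, 1]
xs = np.linspace(0.0, 1.0, 101)
xg, yg = np.meshgrid(xs, xs)
q = q_exact(xg, yg)
q_noisy = add_noise(q, 0.05)
```

The peak value of the sampled function is close to 0.25, attained at the two bump centres.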
As the criterion for stopping the iterations of the gradient method, we use the condition J(q^n) < ε [34].
For the Landweber method, the descent step parameter α should be chosen from the interval (0, 1/‖A‖²), where ‖A‖ is the norm of the problem operator [27]. This choice ensures the convergence of the method and control over the step size: if α is too large, the method may not converge, and if it is too small, the convergence will be too slow. Choosing the optimal value requires tuning and often depends on the specific problem and data. For this problem, α = 0.1.
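For comparison, a minimal Landweber sketch on a linear toy functional J(q) = ‖Aq − f‖² (our illustration; in the paper the gradient instead comes from the conjugate problem (43)–(46)) shows how the step bound enters:

```python
import numpy as np

def landweber(A, f, n_iter=2000):
    """Landweber iteration q_{n+1} = q_n - alpha * J'(q_n) for J(q) = ||Aq - f||^2."""
    alpha = 0.9 / np.linalg.norm(A, 2) ** 2      # step inside (0, 1/||A||^2)
    q = np.zeros(A.shape[1])
    for _ in range(n_iter):
        q = q - alpha * 2.0 * A.T @ (A @ q - f)  # J'(q) = 2 A^T (Aq - f)
    return q

A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = np.array([1.0, 1.0])
q = landweber(A, f)
```

With α inside the stated interval the residual contracts at every step, while a step beyond the bound makes the iteration diverge, which is the tuning trade-off described above.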
The accelerated Nesterov method is an improvement of the gradient method that is used to speed up convergence. The method uses an additional variable to account for previous steps, which helps it avoid getting stuck in local minima and improves convergence. However, the Nesterov method also has limitations. One of them is sensitivity to the choice of initial parameters, such as the optimization step and the acceleration coefficients: an incorrect choice of these parameters can slow down convergence or prevent it altogether. In addition, the method may be less robust to noise in the data, which requires additional measures. For this problem, the following parameters were chosen: λ_0 = 1, α = 0.01. These parameters, combined with the acceleration factor γ_{n−1}, provide a faster descent than the classical gradient method.

4.3. Results without Noise

In the scenario without noise, the minimum values of the functional were obtained with a different number of iterations for each method. Table 1 shows the results of the comparison of the methods.
Figure 1 and Figure 2 show graphs of the functional decreasing over the iterations. The value of the functional decreases to 10⁻⁹ after 20,344 iterations of the Landweber method and after 4400 iterations of the Nesterov method. The graph of the decrease of the functional obtained with the Nesterov method shows that this method, unlike the Landweber method, is able to escape local minima, which allows it to progress toward the global minimum at various stages of the iterative process.
Figure 3 and Figure 4 show the comparison of the reconstructed functions obtained by the Landweber and Nesterov methods with the exact solution. These graphs show that the Nesterov method provides a more accurate restoration. This method not only speeds up the process of finding a solution but also increases the accuracy of the reconstructed values, which makes it preferable for solving boundary inverse problems for the Helmholtz equation.

4.4. Results with 1% Noise

When adding 1% noise, both methods converge, but the Nesterov method demonstrates higher accuracy and lower iteration complexity. The results are presented in Table 2.

4.5. Results with 5% Noise

When adding 5% noise, the differences between the methods are also obvious. Table 3 shows the results.
The Nesterov method once again demonstrated its effectiveness, achieving a smaller error with a smaller number of iterations. Thus, with the noisier data, the Nesterov method provides a more stable and accurate solution.
Numerical experiments have shown that the Nesterov method outperforms the Landweber method both in accuracy and in the number of iterations required to achieve the minimum value of the functional (Figure 5 and Figure 6). This is especially evident in noisy conditions, where the Nesterov method demonstrates higher stability and a lower tendency to get stuck in the local minima. This makes it a preferred choice for solving boundary value inverse problems, such as the problem considered for the Helmholtz equation.
In comparison with the results of Ref. [20], the Nesterov method requires a larger number of iterations when calculating without noise, but the achieved accuracy is almost the same. In the presence of noise, the Nesterov method demonstrates superiority in accuracy. The comparative analysis shows that in Refs. [14,19] the accuracy is slightly higher, but the Nesterov method stands out with good results in terms of the number of iterations.

5. Conclusions

The discretization-then-optimization scheme offers many advantages for solving inverse problems: it gives precise control over the discretization process, reduces computational complexity, simplifies the numerical implementation, and leaves flexibility in the choice of optimization method. These advantages make the approach preferable for many practical applications in various fields of science and technology, helping to solve complex inverse problems more efficiently and reliably while ensuring high accuracy and robustness of the solutions. Nesterov’s accelerated method is a powerful tool for solving boundary inverse problems. Its high convergence speed and noise tolerance make it particularly attractive for applications that require accurate and rapid reconstruction of internal object parameters from boundary data, and its introduction into the practice of solving inverse problems opens up new prospects for more efficient and reliable analysis of various physical phenomena. Thus, the numerical results show that the Nesterov method not only converges faster to the minimum value of the functional but also provides a more accurate restoration of the desired function than the Landweber method.

Author Contributions

Methodology, S.E.K. and A.M.T.; software, A.E.S.; formal analysis, A.M.T. and A.N.T.; writing—original draft, A.E.S.; writing—review and editing, S.E.K. and A.N.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP19579325).

Data Availability Statement

The data are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shishlenin, M.; Kozelkov, A.; Novikov, N. Nonlinear Medical Ultrasound Tomography: 3D Modeling of Sound Wave Propagation in Human Tissues. Mathematics 2024, 12, 212. [Google Scholar] [CrossRef]
  2. Klyuchinskiy, D.; Novikov, N.; Shishlenin, M. Recovering density and speed of sound coefficients in the 2d hyperbolic system of acoustic equations of the first order by a finite number of observations. Mathematics 2021, 9, 199–211. [Google Scholar] [CrossRef]
  3. Novikov, N.; Shishlenin, M. Direct Method for Identification of Two Coefficients of Acoustic Equation. Mathematics 2023, 11, 3029. [Google Scholar] [CrossRef]
  4. Kozlov, V.A.; Maz’ya, V.G.; Fomin, A.V. An iterative method for solving the Cauchy problem for elliptic equations. Comput. Math. Phys. 1991, 31, 45–52. [Google Scholar]
  5. Kabanikhin, S.I.; Karchevsky, A.L. Optimizational method for solving the Cauchy problem for an elliptic equation. J. Inverse Ill-Posed Probl. 1995, 3, 21–46. [Google Scholar]
  6. Jourhmane, M.; Nachaoui, A. An alternating method for an inverse Cauchy problem. Numer. Algorithms 1999, 21, 247–260. [Google Scholar] [CrossRef]
  7. Belgacem, F.B.; El Fekih, H. On Cauchy’s problem: I. A variational Steklov–Poincaré theory. Inverse Probl. 2005, 21, 1915. [Google Scholar] [CrossRef]
  8. Azaïez, M.; Belgacem, F.B.; El Fekih, H. On Cauchy’s problem: II. Completion, regularization and approximation. Inverse Probl. 2006, 22, 1307. [Google Scholar] [CrossRef]
  9. Qin, H.H.; Wei, T. Two regularization methods for the Cauchy problems of the Helmholtz equation. Appl. Math. Model. 2010, 34, 947–967. [Google Scholar] [CrossRef]
  10. Fu, C.L.; Feng, X.L.; Qian, Z. The Fourier regularization for solving the Cauchy problem for the Helmholtz equation. Appl. Numer. Math. 2009, 59, 2625–2640. [Google Scholar] [CrossRef]
  11. Aboulaïch, R.; Abda, A.B.; Kallel, M. Missing boundary data reconstruction via an approximate optimal control. Inverse Probl. Imaging 2008, 2, 411–426. [Google Scholar]
  12. Qian, A.L.; Xiong, X.T.; Wu, Y.J. On a quasi-reversibility regularization method for a Cauchy problem of the Helmholtz equation. J. Comput. Appl. Math. 2010, 233, 1969–1979. [Google Scholar] [CrossRef]
  13. Dardé, J.; Hannukainen, A.; Hyvönen, N. An H(div)-based mixed quasi-reversibility method for solving elliptic Cauchy problems. SIAM J. Numer. Anal. 2013, 51, 2123–2148. [Google Scholar] [CrossRef]
  14. Berntsson, F.; Kozlov, V.A.; Mpinganzima, L.; Turesson, B.O. An accelerated alternating procedure for the Cauchy problem for the Helmholtz equation. Comput. Math. Appl. 2014, 68, 44–60. [Google Scholar] [CrossRef]
  15. Habbal, A.; Kallel, M. Neumann–Dirichlet Nash Strategies for the Solution of Elliptic Cauchy Problems. SIAM J. Control Optim. 2013, 51, 4066–4083. [Google Scholar] [CrossRef]
  16. Habbal, A.; Kallel, M.; Ouni, M. Nash strategies for the inverse inclusion Cauchy-Stokes problem. Inverse Probl. Imaging 2019, 13, 36. [Google Scholar] [CrossRef]
  17. Bergam, A.; Chakib, A.; Nachaoui, A.; Nachaoui, M. Adaptive mesh techniques based on a posteriori error estimates for an inverse Cauchy problem. Appl. Math. Comput. 2019, 346, 865–878. [Google Scholar] [CrossRef]
  18. Nachaoui, M.; Chakib, A.; Hilal, M.A. Some novel numerical techniques for an inverse Cauchy problem. J. Comput. Appl. Math. 2021, 381, 113030. [Google Scholar] [CrossRef]
  19. Berdawood, K.A.; Nachaoui, A.; Nachaoui, M. An accelerated alternating iterative algorithm for data completion problems connected with Helmholtz equation. Stat. Optim. Inf. Comput. 2023, 11, 2–21. [Google Scholar] [CrossRef]
  20. Berdawood, K.A.; Nachaoui, A.; Nachaoui, M.; Aboud, F. An effective relaxed alternating procedure for Cauchy problem connected with Helmholtz equation. Numer. Methods Partial Differ. Equ. 2023, 39, 1888–1914. [Google Scholar] [CrossRef]
  21. Li, B.; Cheng, L.; Deeks, A.J.; Zhao, M. A semi-analytical solution method for two-dimensional Helmholtz equation. Appl. Ocean Res. 2006, 28, 193–207. [Google Scholar] [CrossRef]
  22. Su, R.; Zhang, X.; Tangaramvong, S.; Song, C. Adaptive scaled boundary finite element method for two/three-dimensional structural topology optimization based on dynamic responses. Comput. Methods Appl. Mech. Eng. 2024, 425, 116966. [Google Scholar] [CrossRef]
  23. Temimi, H.; Ben-Romdhane, M.; Baccouch, M.; Musa, M.O. A two-branched numerical solution of the two-dimensional Bratu’s problem. Appl. Numer. Math. 2020, 153, 202–216. [Google Scholar] [CrossRef]
  24. Hu, G.; Xu, X.; Yuan, X.; Zhao, Y. Stability for the inverse source problem in a two-layered medium separated by rough interface. Inverse Probl. Imaging 2024, 18, 642–656. [Google Scholar] [CrossRef]
  25. Kasenov, S.E.; Urmashev, B.A.; Sarsenbaeva, A.E.; Tleulesova, A.M.; Temirbekov, A.N. Investigation of the well-posedness of the Cauchy problem for the Helmholtz equation. Bull. NEA RK Almaty Kazakhstan 2022, 4, 169–177. [Google Scholar] [CrossRef]
  26. Kasenov, S.; Nurseitova, A.; Nurseitov, D. A conditional stability estimate of continuation problem for the Helmholtz equation. AIP Conf. Proc. 2016, 1759, 020119. [Google Scholar]
  27. Kabanikhin, S.I. Inverse and Ill-Posed Problems: Theory and Applications; de Gruyter: Berlin, Germany, 2011. [Google Scholar]
  28. Ladyzhenskaya, O.A.; Uraltseva, N.N. Linear and Quasilinear Equations of Elliptic Type; Nauka: Moscow, Russia, 1973; 576p. (In Russian) [Google Scholar]
  29. Azimov, A.; Kasenov, S.; Nurseitov, D.; Serovajsky, S. Inverse problem for the Verhulst equation of limited population growth with discrete experiment data. AIP Conf. Proc. 2016, 1759, 2016. [Google Scholar]
  30. Nesterov, Y. A method for solving the convex programming problem with convergence rate O(1/k²). Dokl. Akad. Nauk SSSR 1983, 269, 543. [Google Scholar]
  31. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  32. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011, 40, 120–145. [Google Scholar] [CrossRef]
  33. Samarsky, A.A.; Gulin, A.V. Numerical Methods; Nauka: Moscow, Russia, 1989; 432p. (In Russian) [Google Scholar]
  34. Hamarik, U.; Palm, R. Comparison of stopping rules in conjugate gradient type methods for solving ill-posed problems. In Proceedings of the MMA2005 Proceedings: 10th International Conference Mathematical Modelling and Analysis, 2nd International Conference Computational Methods in Applied Mathematics, Trakai, Lithuania, 1–5 June 2005; pp. 285–291. [Google Scholar]
Figure 1. A graph showing a decrease in the functional J(qn) obtained using the Landweber method.
Figure 2. A graph showing a decrease in the functional J(qn) obtained using the Nesterov method.
Figure 3. Comparison of the exact function qex and the reconstructed function qn using the Landweber method.
Figure 4. Comparison of the exact function qex and the reconstructed function qn using the Nesterov method.
Figure 5. Comparison of the exact function qex and the reconstructed function qn using Nesterov and Landweber methods with 1% noise.
Figure 6. Comparison of the exact function qex and the reconstructed function qn using the Nesterov and Landweber methods with 5% noise.
Table 1. Comparative analysis of the methods without noise.
| Number of Iterations | ‖q_ex − q^n‖, Landweber Method | ‖q_ex − q^n‖, Nesterov Method |
|---|---|---|
| 0 | 1.50065 | 1.50065 |
| 100 | 0.476876 | 0.242637 |
| 4400 | 0.052419 | 0.0006319 |
| 20,344 | 0.021723 | — |
Table 2. Comparative analysis of the methods with 1% noise.
| Number of Iterations | ‖q_ex − q^n‖, Landweber Method | ‖q_ex − q^n‖, Nesterov Method |
|---|---|---|
| 0 | 6.75467 | 6.75467 |
| 100 | 0.442382 | 0.243775 |
| 456 | 0.303952 | 0.010445 |
| 10,638 | 0.191796 | — |
Table 3. Comparative analysis of the methods with 5% noise.
| Number of Iterations | ‖q_ex − q^n‖, Landweber Method | ‖q_ex − q^n‖, Nesterov Method |
|---|---|---|
| 0 | 7.50652 | 7.50652 |
| 100 | 1.614721 | 0.312367 |
| 346 | 0.803398 | 0.15323 |
| 2596 | 0.304825 | — |