Article

A General Mollification Regularization Method to Solve a Cauchy Problem for the Multi-Dimensional Modified Helmholtz Equation

College of Mathematics and Computer Science, Gannan Normal University, Ganzhou 341000, China
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(11), 1549; https://doi.org/10.3390/sym16111549
Submission received: 29 October 2024 / Revised: 17 November 2024 / Accepted: 18 November 2024 / Published: 19 November 2024
(This article belongs to the Section Mathematics)

Abstract
This paper considers a Cauchy problem for the multi-dimensional modified Helmholtz equation with inhomogeneous Dirichlet and Neumann data. The Cauchy problem is severely ill-posed, and a general mollification method is introduced to solve it. Both a priori and a posteriori choice strategies for the regularization parameter are proposed, and error estimates for the corresponding regularization solutions are presented. Finally, two numerical examples show the effectiveness of the general mollification regularization method.

1. Introduction

This paper aims to solve the Cauchy problem for a multi-dimensional modified Helmholtz equation with inhomogeneous Dirichlet and Neumann data:
$$\begin{cases} \Delta w(x,y) - k^2 w(x,y) = 0, & 0 < x \le 1,\ y\in\mathbb{R}^n,\\ w(0,y) = f(y), & y\in\mathbb{R}^n,\\ w_x(0,y) = g(y), & y\in\mathbb{R}^n, \end{cases} \tag{1}$$
where $\Delta = \frac{\partial^2}{\partial x^2} + \sum_{i=1}^{n}\frac{\partial^2}{\partial y_i^2}$ is the $(n+1)$-dimensional Laplace operator, $w_x$ is the partial derivative with respect to $x$, $k>0$ is the wave number, and the domain of $y$ is symmetric. More precisely, the Cauchy problem is to determine the solution $w(x,y)$ from the data $f(y)$ and $g(y)$ given in (1).
The modified Helmholtz equation appears in many fields of science and engineering, such as microwave tomography, ground-penetrating radar [1], the Debye–Hückel theory, the implicit marching scheme for the heat equation [2], the gravity gradient, and the intensity of the electrostatic field [3]. The Cauchy problem for the modified Helmholtz equation is ill-posed, i.e., the solution does not depend continuously on the boundary data [4]. Many regularization methods have been proposed to construct stable solutions for this Cauchy problem, such as the Tikhonov regularization method [5], the conjugate gradient method [6], the truncation method [7], the iteration method [8,9], the mollification method [10,11], and the adaptive Runge–Kutta method [12]. Among these methods, the mollification method constructs stable solutions by mollifying the disturbed data in (1). It has been widely used to solve the Cauchy problem for elliptic equations [13] and other ill-posed problems, e.g., the inverse heat conduction problem [14], the edge detection problem [15], and the inverse source problem of diffusion equations [16]. In the mollification method, the choice of the kernel function plays an important role in both theoretical analysis and numerical implementation. Common kernel functions include the Dirichlet kernel, the Poussin kernel, the Gaussian kernel, and the Weierstrass kernel [13].
The mollification method with the Dirichlet kernel was used to solve the Cauchy problem for the (modified) Helmholtz equation in [4,10,17]. In [10], the regularization parameter was chosen by an a priori strategy, and the optimal convergence rate of the regularization solution was shown to be $O(\delta^{1-x})$, where $\delta$ is the noise level of the given data. In [18], an a posteriori parameter choice strategy for the Dirichlet kernel was proposed based on Morozov's discrepancy principle; the convergence rate of the regularization solution was the same as the one given in [10]. The Cauchy problem for the (modified) Helmholtz equation was solved by the mollification method with the Poussin kernel in [11,19]. In [11], the optimal convergence rate of the regularization solution was $O(\delta^{\frac{1-x}{2}})$ for the a priori parameter choice strategy. In [20], the mollification method with the Gaussian kernel was proposed, and the optimal convergence rate of the regularization solution with an a priori parameter choice strategy was $O\big(\frac{1}{\ln\frac{1}{\delta}}\big)$. Based on the above work, we give a general framework of the mollification method for solving the Cauchy problem for the modified Helmholtz equation; all three of the above kernel functions are included in this framework. Both a priori and a posteriori choice strategies for the regularization parameter are considered, and error estimates for the regularization solutions are also given.
The rest of the paper is organized as follows. In Section 2, the ill-posedness of the Cauchy problem for the modified Helmholtz equation is analyzed. The general conditions that the kernel function should satisfy are given in Section 3. In Section 4, both the a priori and the a posteriori choice strategies of the regularization parameter are considered, and error estimates for the regularization solutions are discussed. In Section 5, two numerical examples are introduced to show the numerical validity of the proposed method. Finally, the discussion and conclusions are given in Sections 6 and 7.

2. Ill-Posedness Analysis of the Cauchy Problem

We assume that $f^\delta(y)$ and $g^\delta(y)$ are the measured data satisfying
$$\|f^\delta - f\| \le \delta, \qquad \|g^\delta - g\| \le \delta, \tag{2}$$
where $\|\cdot\|$ denotes the $L^2$-norm and $\delta>0$ represents the noise level. The ill-posedness of the Cauchy problem (1) can be explained in the frequency domain. The Fourier transform of $f(y)$ is
$$\hat{f}(\xi) := \frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^n} e^{-i\xi\cdot y} f(y)\,dy, \quad \xi\in\mathbb{R}^n,$$
and the inverse Fourier transform of $\hat f(\xi)$ is
$$f(y) := \frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^n} e^{i\xi\cdot y}\,\hat f(\xi)\,d\xi, \quad y\in\mathbb{R}^n.$$
In order to simplify the analysis of problem (1), we decompose it into two problems [10]:
$$\begin{cases} \Delta u(x,y) - k^2 u(x,y) = 0, & 0 < x \le 1,\ y\in\mathbb{R}^n,\\ u(0,y) = f(y), & y\in\mathbb{R}^n,\\ u_x(0,y) = 0, & y\in\mathbb{R}^n, \end{cases} \tag{3}$$
and
$$\begin{cases} \Delta v(x,y) - k^2 v(x,y) = 0, & 0 < x \le 1,\ y\in\mathbb{R}^n,\\ v(0,y) = 0, & y\in\mathbb{R}^n,\\ v_x(0,y) = g(y), & y\in\mathbb{R}^n. \end{cases} \tag{4}$$
Therefore, the solution of problem (1) is w ( x , y ) = u ( x , y ) + v ( x , y ) . Applying the Fourier transform to problems (3) and (4) for the variable y R n , we can obtain the following two problems in the frequency domain:
$$\begin{cases} \hat u_{xx}(x,\xi) + (i\xi)^2\hat u(x,\xi) - k^2\hat u(x,\xi) = 0, & 0 < x \le 1,\ \xi\in\mathbb{R}^n,\\ \hat u(0,\xi) = \hat f(\xi), & \xi\in\mathbb{R}^n,\\ \hat u_x(0,\xi) = 0, & \xi\in\mathbb{R}^n, \end{cases} \tag{5}$$
and
$$\begin{cases} \hat v_{xx}(x,\xi) + (i\xi)^2\hat v(x,\xi) - k^2\hat v(x,\xi) = 0, & 0 < x \le 1,\ \xi\in\mathbb{R}^n,\\ \hat v(0,\xi) = 0, & \xi\in\mathbb{R}^n,\\ \hat v_x(0,\xi) = \hat g(\xi), & \xi\in\mathbb{R}^n. \end{cases} \tag{6}$$
The solutions of problems (5) and (6) are
$$\hat u(x,\xi) = \cosh\big(x\sqrt{\xi^2+k^2}\big)\,\hat f(\xi)$$
and
$$\hat v(x,\xi) = \frac{\sinh\big(x\sqrt{\xi^2+k^2}\big)}{\sqrt{\xi^2+k^2}}\,\hat g(\xi),$$
respectively. Therefore, the solutions of (3) and (4) are
$$u(x,y) = \frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^n} e^{i\xi\cdot y}\cosh\big(x\sqrt{\xi^2+k^2}\big)\,\hat f(\xi)\,d\xi$$
and
$$v(x,y) = \frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^n} e^{i\xi\cdot y}\,\frac{\sinh\big(x\sqrt{\xi^2+k^2}\big)}{\sqrt{\xi^2+k^2}}\,\hat g(\xi)\,d\xi,$$
respectively.
The functions $\cosh\big(x\sqrt{\xi^2+k^2}\big)$ and $\frac{\sinh\left(x\sqrt{\xi^2+k^2}\right)}{\sqrt{\xi^2+k^2}}$ are unbounded and increase with exponential order as $|\xi|\to\infty$ [10]. Therefore, the exact data $\hat f(\xi)$ and $\hat g(\xi)$ must decay sharply for $u(x,\cdot)$ and $v(x,\cdot)$ to belong to $L^2(\mathbb{R}^n)$. The measured data $\hat f^\delta(\xi)$ and $\hat g^\delta(\xi)$, however, do not possess such a decay property; the high-frequency components of the noise are magnified by these unbounded factors, which leads to the ill-posedness of the Cauchy problem (1). To overcome this ill-posedness, we introduce a general framework of the mollification method in this paper.
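To see the scale of this magnification concretely, the following short Python sketch (an illustration we add here; the values of $x$, $k$, and $\xi$ are arbitrary) evaluates the amplification factor $\cosh\big(x\sqrt{\xi^2+k^2}\big)$ at a few frequencies:

    import numpy as np

    # A data error at frequency xi is multiplied by cosh(x*sqrt(xi^2 + k^2)),
    # which grows like exp(x*|xi|)/2 for large |xi|.
    x, k = 0.5, 1.0
    for xi in (5.0, 50.0, 500.0):
        print(f"|xi| = {xi:5.0f}: factor = {np.cosh(x * np.sqrt(xi**2 + k**2)):.2e}")
    # Prints roughly 6.44e+00, 3.62e+10, and 1.87e+108: even a tiny
    # high-frequency perturbation of the data destroys the solution.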
First, we define the convolution operator as
$$J_\mu f(x) := (Q_\mu * f)(x) = \int_{\mathbb{R}^n} Q_\mu(t)\,f(x-t)\,dt = \int_{\mathbb{R}^n} Q_\mu(x-t)\,f(t)\,dt.$$
The convolution theorem shows that
$$\widehat{(J_\mu f)}(\xi) = (2\pi)^{\frac{n}{2}}\,\hat Q_\mu(\xi)\,\hat f(\xi).$$
Second, we use the convolution data J μ f δ ( y ) and J μ g δ ( y ) instead of f δ ( y ) and g δ ( y ) to solve problems (3) and (4). Then, the regularization solutions u μ , δ ( x , y ) and v μ , δ ( x , y ) satisfy
$$\begin{cases} \Delta u_{\mu,\delta}(x,y) - k^2 u_{\mu,\delta}(x,y) = 0, & 0 < x \le 1,\ y\in\mathbb{R}^n,\\ u_{\mu,\delta}(0,y) = J_\mu f^\delta(y), & y\in\mathbb{R}^n,\\ (u_{\mu,\delta})_x(0,y) = 0, & y\in\mathbb{R}^n, \end{cases} \tag{7}$$
and
$$\begin{cases} \Delta v_{\mu,\delta}(x,y) - k^2 v_{\mu,\delta}(x,y) = 0, & 0 < x \le 1,\ y\in\mathbb{R}^n,\\ v_{\mu,\delta}(0,y) = 0, & y\in\mathbb{R}^n,\\ (v_{\mu,\delta})_x(0,y) = J_\mu g^\delta(y), & y\in\mathbb{R}^n. \end{cases} \tag{8}$$
Solving (7) and (8) in the frequency domain by the Fourier transform yields
$$\hat u_{\mu,\delta}(x,\xi) = \hat P_\mu(\xi)\cosh\big(x\sqrt{\xi^2+k^2}\big)\,\hat f^\delta(\xi)$$
and
$$\hat v_{\mu,\delta}(x,\xi) = \hat P_\mu(\xi)\,\frac{\sinh\big(x\sqrt{\xi^2+k^2}\big)}{\sqrt{\xi^2+k^2}}\,\hat g^\delta(\xi),$$
where $\hat P_\mu(\xi) = (2\pi)^{\frac{n}{2}}\hat Q_\mu(\xi)$ is the convolution kernel function in the frequency domain. Then we have
$$u_{\mu,\delta}(x,y) = \frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^n} e^{i\xi\cdot y}\,\hat P_\mu(\xi)\cosh\big(x\sqrt{\xi^2+k^2}\big)\,\hat f^\delta(\xi)\,d\xi \tag{9}$$
and
$$v_{\mu,\delta}(x,y) = \frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^n} e^{i\xi\cdot y}\,\hat P_\mu(\xi)\,\frac{\sinh\big(x\sqrt{\xi^2+k^2}\big)}{\sqrt{\xi^2+k^2}}\,\hat g^\delta(\xi)\,d\xi. \tag{10}$$
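On a uniform grid, formulas (9) and (10) can be evaluated directly with the fast Fourier transform. The following Python/NumPy sketch (our own minimal illustration of (9) with the Gaussian kernel of Section 3; the paper's experiments in Section 5 use MATLAB) shows the idea:

    import numpy as np

    def regularized_u(f_delta, L, x, k, mu):
        """Evaluate formula (9) on an N x N periodic grid of [-L, L]^2,
        using the Gaussian kernel P_mu(xi) = exp(-mu^2 |xi|^2 / 4)."""
        N = f_delta.shape[0]
        xi = 2 * np.pi * np.fft.fftfreq(N, d=2 * L / N)  # discrete frequencies
        XI1, XI2 = np.meshgrid(xi, xi, indexing="ij")
        xi2 = XI1**2 + XI2**2                            # |xi|^2
        mult = np.exp(-mu**2 * xi2 / 4) * np.cosh(x * np.sqrt(xi2 + k**2))
        return np.real(np.fft.ifft2(mult * np.fft.fft2(f_delta)))

Replacing the Gaussian factor with the other kernels of Section 3 gives the corresponding variants of the method.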

3. General Framework of the Convolution Kernel Function

In this section, we give the general conditions that the convolution kernel function $\hat P_\mu(\xi)$ should satisfy.
Definition 1.
The function $\hat P_\mu(\xi)$, $\xi\in\mathbb{R}^n$, $0<\mu<1$, is called a regularization convolution kernel function if it satisfies the following three conditions:
(1) $0 \le \hat P_\mu(\xi) \le 1$;
(2) $|1 - \hat P_\mu(\xi)| \le b\,\mu^2\xi^2$, where $b>0$ is a constant;
(3) $|\hat P_\mu(\xi)|\,e^{x|\xi|} \le e^{\frac{cx}{\mu^2}}$ for $0<x\le1$, where $0<c\le2$ is a constant.
In Definition 1, μ is the regularization parameter. Many kernel functions satisfy the above definition, and three cases are given as follows.
Case 1: The kernel function is $\hat P_\mu(\xi) = e^{-\frac{\mu^2\xi^2}{4}}$; it is the Fourier transform of the Gaussian kernel function $P_\mu(x) = \frac{1}{\mu}\,e^{-\frac{x^2}{4\mu^2}}$.
Case 2: The kernel function is
$$\hat P_\mu(\xi) = \begin{cases} 1, & |\xi| < \frac{1}{\mu},\\ 2 - \mu|\xi|, & \frac{1}{\mu} \le |\xi| \le \frac{2}{\mu},\\ 0, & |\xi| > \frac{2}{\mu}; \end{cases}$$
it is the Fourier transform of the Poussin kernel function
$$P_\mu(x) = \frac{\sin^2(\mu x) - \sin^2(\mu x/2)}{\mu\,\sin^2(x/2)}.$$
Case 3: The kernel function is $\hat P_\mu(\xi) = \chi_{[-\frac{1}{\mu},\,\frac{1}{\mu}]}(\xi)$; it is the Fourier transform of the Dirichlet kernel function $P_\mu(x) = \frac{1}{x}\sin\frac{x}{\mu}$.
Proof. 
Condition (1) obviously holds in all three cases. We only prove conditions (2) and (3) in the following.
In Case 1, condition (2) holds with $b=\frac{1}{4}$ since $|1-\hat P_\mu(\xi)| = \big|1 - e^{-\frac{\mu^2\xi^2}{4}}\big| \le \frac{\mu^2\xi^2}{4}$. Condition (3) holds with $c=1$ since
$$|\hat P_\mu(\xi)|\,e^{x|\xi|} = e^{x|\xi| - \frac{\mu^2\xi^2}{4}} \le e^{\left.\left(x|\xi| - \frac{\mu^2\xi^2}{4}\right)\right|_{|\xi| = \frac{2x}{\mu^2}}} = e^{\frac{x^2}{\mu^2}} \le e^{\frac{x}{\mu^2}}.$$
In Case 2, condition (2) holds with $b=1$. When $|\xi| < \frac{1}{\mu}$, we have
$$|\hat P_\mu(\xi)|\,e^{x|\xi|} = e^{x|\xi|} \le e^{\frac{x}{\mu}} \le e^{\frac{x}{\mu^2}}.$$
When $\frac{1}{\mu} \le |\xi| \le \frac{2}{\mu}$, we have $0 \le 2-\mu|\xi| \le 1$. Then,
$$|\hat P_\mu(\xi)|\,e^{x|\xi|} = (2-\mu|\xi|)\,e^{x|\xi|} \le e^{\frac{2x}{\mu}} \le e^{\frac{2x}{\mu^2}}.$$
When $|\xi| > \frac{2}{\mu}$, we have $\hat P_\mu(\xi)\,e^{x|\xi|} = 0$. Therefore, condition (3) holds with $c=2$.
In Case 3, condition (2) holds with $b=1$. When $\mu|\xi| \le 1$, we have
$$\hat P_\mu(\xi)\,e^{x|\xi|} = e^{x|\xi|} \le e^{\frac{x}{\mu}} \le e^{\frac{x}{\mu^2}}.$$
When $\mu|\xi| > 1$, we have $\hat P_\mu(\xi)\,e^{x|\xi|} = 0$. Therefore, condition (3) holds with $c=1$. □
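All three kernels are simple multipliers in the frequency domain. The sketch below (our own Python illustration, with the argument xi interpreted as $|\xi|$) implements $\hat P_\mu(\xi)$ for each case and can be plugged into an FFT-based solver such as the one sketched in Section 2:

    import numpy as np

    def gauss_hat(xi, mu):       # Case 1: b = 1/4, c = 1
        return np.exp(-mu**2 * xi**2 / 4)

    def poussin_hat(xi, mu):     # Case 2: b = 1, c = 2
        # Equals 1 for mu*|xi| < 1, decays linearly to 0 for 1 <= mu*|xi| <= 2.
        return np.clip(2.0 - mu * np.abs(xi), 0.0, 1.0)

    def dirichlet_hat(xi, mu):   # Case 3: b = 1, c = 1
        # Sharp cut-off: 1 on [-1/mu, 1/mu], 0 outside.
        return (mu * np.abs(xi) <= 1.0).astype(float)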

4. Parameter Choice Strategies and Error Estimates

We first present some auxiliary results given in [11].
Lemma 1.
When $0 < x \le 1$, the following inequalities hold:
(1) $\cosh\big(x\sqrt{\xi^2+k^2}\big) \le e^{x\sqrt{\xi^2+k^2}}$;
(2) $\frac{\sinh\left(x\sqrt{\xi^2+k^2}\right)}{\sqrt{\xi^2+k^2}} \le e^{x\sqrt{\xi^2+k^2}}$;
(3) $\frac{\sinh\left(x\sqrt{\xi^2+k^2}\right)}{\sqrt{\xi^2+k^2}} \le \cosh\big(x\sqrt{\xi^2+k^2}\big)$;
(4) $\frac{\cosh\left(x\sqrt{\xi^2+k^2}\right)}{\cosh\left(\sqrt{\xi^2+k^2}\right)} \le 2\,e^{-(1-x)\sqrt{\xi^2+k^2}}$;
(5) $\frac{\sinh\left(x\sqrt{\xi^2+k^2}\right)}{\sinh\left(\sqrt{\xi^2+k^2}\right)} \le e^{-(1-x)\sqrt{\xi^2+k^2}}$.

4.1. A Priori Parameter Choice Strategies and Error Estimates

We first consider the case $0 < x < 1$ in this subsection. We need the following a priori assumptions on $u(1,\cdot)$ and $v(1,\cdot)$:
$$\|u(1,\cdot)\| \le E, \qquad \|v(1,\cdot)\| \le E, \tag{11}$$
where E > 0 is a constant.
Theorem 1.
Let $\hat P_\mu(\xi)$ be a regularization convolution kernel function satisfying Definition 1, let $w(x,y)$ be the exact solution of (1) for $0<x<1$, and let $w_{\mu,\delta}(x,y) = u_{\mu,\delta}(x,y) + v_{\mu,\delta}(x,y)$ be the regularization solution. Assume that conditions (2) and (11) hold. Then, we obtain
$$\|w(x,\cdot) - w_{\mu,\delta}(x,\cdot)\| \le \frac{12bE}{(1-x)^2 e^2}\,\mu^2 + 2\delta\,e^{xk}\,e^{\frac{cx}{\mu^2}}.$$
Proof. 
By the Parseval’s identity and the triangle inequality, we have
u ( x , · ) u μ , δ ( x , · ) = u ^ ( x , · ) u ^ μ , δ ( x , · ) u ^ ( x , · ) u ^ μ , 0 ( x , · ) + u ^ μ , 0 ( x , · ) u ^ μ , δ ( x , · ) = ( 1 P ^ μ ( ξ ) ) cosh ( x ξ 2 + k 2 ) f ^ ( ξ ) + P ^ μ ( ξ ) cosh ( x ξ 2 + k 2 ) ( f ^ ( ξ ) f ^ δ ( ξ ) ) 1 P ^ μ ( ξ ) cosh ( x ξ 2 + k 2 ) cosh ( ξ 2 + k 2 ) u ^ ( 1 , ξ ) + sup ξ R n | P ^ μ ( ξ ) cosh ( x ξ 2 + k 2 ) | δ sup ξ R n | 1 P ^ μ ( ξ ) cosh ( x ξ 2 + k 2 ) cosh ( ξ 2 + k 2 ) | E + sup ξ R n | P ^ μ ( ξ ) e x ( | ξ | + k ) | δ sup ξ R n | 2 b μ 2 ξ 2 e ( 1 x ) | ξ | | E + sup ξ R n | P ^ μ ( ξ ) e x | ξ | | e x k δ 2 b E μ 2 sup ξ R n | ξ 2 e ( 1 x ) | ξ | | + δ e x k e c x μ 2 .
It is easy to show that
$$\sup_{\xi\in\mathbb{R}^n}\big|\xi^2 e^{-(1-x)|\xi|}\big| = \frac{4}{(1-x)^2 e^2}$$
(the supremum is attained at $|\xi| = \frac{2}{1-x}$),
and then
$$\|u(x,\cdot)-u_{\mu,\delta}(x,\cdot)\| \le \frac{8bE}{(1-x)^2 e^2}\,\mu^2 + \delta\,e^{xk}e^{\frac{cx}{\mu^2}}.$$
Similarly, we have
$$\begin{aligned}
\|v(x,\cdot)-v_{\mu,\delta}(x,\cdot)\| &= \|\hat v(x,\cdot)-\hat v_{\mu,\delta}(x,\cdot)\| \le \|\hat v(x,\cdot)-\hat v_{\mu,0}(x,\cdot)\| + \|\hat v_{\mu,0}(x,\cdot)-\hat v_{\mu,\delta}(x,\cdot)\|\\
&= \left\|(1-\hat P_\mu(\xi))\frac{\sinh\big(x\sqrt{\xi^2+k^2}\big)}{\sqrt{\xi^2+k^2}}\,\hat g(\xi)\right\| + \left\|\hat P_\mu(\xi)\frac{\sinh\big(x\sqrt{\xi^2+k^2}\big)}{\sqrt{\xi^2+k^2}}\big(\hat g(\xi)-\hat g^\delta(\xi)\big)\right\|\\
&\le \left\|(1-\hat P_\mu(\xi))\frac{\sinh\big(x\sqrt{\xi^2+k^2}\big)}{\sinh\big(\sqrt{\xi^2+k^2}\big)}\,\hat v(1,\xi)\right\| + \sup_{\xi\in\mathbb{R}^n}\left|\hat P_\mu(\xi)\frac{\sinh\big(x\sqrt{\xi^2+k^2}\big)}{\sqrt{\xi^2+k^2}}\right|\delta\\
&\le \sup_{\xi\in\mathbb{R}^n}\big|b\mu^2\xi^2 e^{-(1-x)|\xi|}\big|\,E + \sup_{\xi\in\mathbb{R}^n}\big|\hat P_\mu(\xi)\,e^{x|\xi|}\big|\,e^{xk}\delta\\
&\le bE\mu^2\sup_{\xi\in\mathbb{R}^n}\big|\xi^2 e^{-(1-x)|\xi|}\big| + \delta\,e^{xk}e^{\frac{cx}{\mu^2}} \le \frac{4bE}{(1-x)^2 e^2}\,\mu^2 + \delta\,e^{xk}e^{\frac{cx}{\mu^2}}.
\end{aligned}$$
Hence, we have
$$\|w(x,\cdot)-w_{\mu,\delta}(x,\cdot)\| \le \|u(x,\cdot)-u_{\mu,\delta}(x,\cdot)\| + \|v(x,\cdot)-v_{\mu,\delta}(x,\cdot)\| \le \frac{12bE}{(1-x)^2 e^2}\,\mu^2 + 2\delta\,e^{xk}e^{\frac{cx}{\mu^2}}.$$
Based on the error estimate of the regularization solution given in Theorem 1, we can obtain an a priori choice strategy for $\mu = \mu(\delta)$. □
Corollary 1.
Assume that the conditions of Theorem 1 hold. If we take $\mu = \sqrt{\frac{2}{\ln\frac{E}{\delta}}}$, then
$$\|w(x,\cdot)-w_{\mu,\delta}(x,\cdot)\| \le \frac{1}{\ln\frac{E}{\delta}}\left(\frac{24bE}{(1-x)^2 e^2} + o(1)\right), \quad \delta\to0.$$
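For example, with $E=1$ and $\delta=10^{-4}$, this rule gives $\mu = \sqrt{2/\ln 10^4} \approx 0.47$; decreasing the noise level shrinks the mollification parameter only logarithmically, which matches the logarithmic convergence rate in Corollary 1.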
Next, we consider the case $x=1$. We need a priori assumptions on $u(1,\cdot)$ and $v(1,\cdot)$ that are stronger than those in the case $0<x<1$, i.e.,
$$\|u(1,\cdot)\|_p \le E_p, \qquad \|v(1,\cdot)\|_p \le E_p. \tag{12}$$
Here, $E_p>0$ is a constant, and $\|\cdot\|_p$ denotes the $H^p$-norm in the Sobolev space
$$H^p(\mathbb{R}^n) := \big\{\, f\in L^2(\mathbb{R}^n)\ \big|\ \|f\|_p < \infty \,\big\},$$
where
$$\|f\|_p = \left(\int_{\mathbb{R}^n}(1+\xi^2)^p\,|\hat f(\xi)|^2\,d\xi\right)^{\frac{1}{2}}.$$
Theorem 2.
Let $\hat P_\mu(\xi)$ be a regularization convolution kernel function satisfying Definition 1, let $w(1,y)$ be the exact solution of (1) at $x=1$, and let $w_{\mu,\delta}(1,y) = u_{\mu,\delta}(1,y)+v_{\mu,\delta}(1,y)$ be the regularization solution. Assume that conditions (2) and (12) hold. Then, we obtain
$$\|w(1,\cdot)-w_{\mu,\delta}(1,\cdot)\| \le 2E_p\,b^{\frac{p}{p+2}}\mu^{\frac{2p}{p+2}} + 2\delta\,e^{k}e^{\frac{c}{\mu^2}}.$$
Proof. 
By the Parseval’s identity and the triangle inequality, we have
u ( 1 , · ) u μ , δ ( 1 , · ) = u ^ ( 1 , · ) u ^ μ , δ ( 1 , · ) u ^ ( 1 , · ) u ^ μ , 0 ( 1 , · ) + u ^ μ , 0 ( 1 , · ) u ^ μ , δ ( 1 , · ) = ( 1 P ^ μ ( ξ ) ) cosh ( ξ 2 + k 2 ) f ^ ( ξ ) + P ^ μ ( ξ ) cosh ( ξ 2 + k 2 ) ( f ^ ( ξ ) f ^ δ ( ξ ) ) 1 P ^ μ ( ξ ) ( 1 + ξ 2 ) p 2 ( 1 + ξ 2 ) p 2 u ^ ( 1 , ξ ) + sup ξ R n | P ^ μ ( ξ ) cosh ( ξ 2 + k 2 ) | δ E p sup ξ R n | 1 P ^ μ ( ξ ) ( 1 + ξ 2 ) p 2 | + δ e k e c μ 2 .
Let $A(\xi) := (1-\hat P_\mu(\xi))(1+\xi^2)^{-\frac{p}{2}}$ and $\xi_0>0$. When $|\xi|>\xi_0$, one can get
$$|A(\xi)| = \big|(1-\hat P_\mu(\xi))(1+\xi^2)^{-\frac{p}{2}}\big| < \frac{1}{|\xi|^p} < \frac{1}{\xi_0^p},$$
and when $|\xi|\le\xi_0$, one can get
$$|A(\xi)| \le b\mu^2\xi^2 \le b\mu^2\xi_0^2.$$
Balancing the two bounds via $\frac{1}{\xi_0^p} = b\mu^2\xi_0^2$ gives $\xi_0 = \left(\frac{1}{b\mu^2}\right)^{\frac{1}{p+2}}$; then
$$|A(\xi)| \le b\mu^2\left(\frac{1}{b\mu^2}\right)^{\frac{2}{p+2}} = b^{\frac{p}{p+2}}\mu^{\frac{2p}{p+2}}.$$
Hence, we have
$$\|u(1,\cdot)-u_{\mu,\delta}(1,\cdot)\| \le E_p\,b^{\frac{p}{p+2}}\mu^{\frac{2p}{p+2}} + \delta\,e^{k}e^{\frac{c}{\mu^2}}.$$
Similarly, we have
$$\begin{aligned}
\|v(1,\cdot)-v_{\mu,\delta}(1,\cdot)\| &= \|\hat v(1,\cdot)-\hat v_{\mu,\delta}(1,\cdot)\| \le \|\hat v(1,\cdot)-\hat v_{\mu,0}(1,\cdot)\| + \|\hat v_{\mu,0}(1,\cdot)-\hat v_{\mu,\delta}(1,\cdot)\|\\
&= \left\|(1-\hat P_\mu(\xi))\frac{\sinh\big(\sqrt{\xi^2+k^2}\big)}{\sqrt{\xi^2+k^2}}\,\hat g(\xi)\right\| + \left\|\hat P_\mu(\xi)\frac{\sinh\big(\sqrt{\xi^2+k^2}\big)}{\sqrt{\xi^2+k^2}}\big(\hat g(\xi)-\hat g^\delta(\xi)\big)\right\|\\
&\le \big\|(1-\hat P_\mu(\xi))(1+\xi^2)^{-\frac{p}{2}}(1+\xi^2)^{\frac{p}{2}}\hat v(1,\xi)\big\| + \sup_{\xi\in\mathbb{R}^n}\left|\hat P_\mu(\xi)\frac{\sinh\big(\sqrt{\xi^2+k^2}\big)}{\sqrt{\xi^2+k^2}}\right|\delta\\
&\le E_p\sup_{\xi\in\mathbb{R}^n}\big|(1-\hat P_\mu(\xi))(1+\xi^2)^{-\frac{p}{2}}\big| + \delta\,e^{k}e^{\frac{c}{\mu^2}} \le E_p\,b^{\frac{p}{p+2}}\mu^{\frac{2p}{p+2}} + \delta\,e^{k}e^{\frac{c}{\mu^2}}.
\end{aligned}$$
Hence, we have
$$\|w(1,\cdot)-w_{\mu,\delta}(1,\cdot)\| \le \|u(1,\cdot)-u_{\mu,\delta}(1,\cdot)\| + \|v(1,\cdot)-v_{\mu,\delta}(1,\cdot)\| \le 2E_p\,b^{\frac{p}{p+2}}\mu^{\frac{2p}{p+2}} + 2\delta\,e^{k}e^{\frac{c}{\mu^2}}.$$
Based on the error estimate of the regularization solution given in Theorem 2, we can obtain an a priori choice strategy for $\mu = \mu(\delta)$. □
Corollary 2.
Assume that the conditions of Theorem 2 hold. If we take $\mu = \sqrt{\frac{2}{\ln\frac{E_p}{\delta}}}$, then
$$\|w(1,\cdot)-w_{\mu,\delta}(1,\cdot)\| \le \left(\frac{1}{\ln\frac{E_p}{\delta}}\right)^{\frac{p}{p+2}}\left(2E_p(2b)^{\frac{p}{p+2}} + o(1)\right), \quad \delta\to0.$$

4.2. A Posteriori Parameter Choice Strategies and Error Estimates

Theorems 1 and 2 show that the a priori choice strategy of the regularization parameter μ = μ ( δ ) relies on the a priori assumptions of u ( 1 , · ) and v ( 1 , · ) . These assumptions are usually unknown in practice; therefore, the a posteriori parameter choice strategy should be considered. We consider the a posteriori choice strategy based on Morozov’s discrepancy principle [21] in this subsection. The regularization parameters μ 1 = μ 1 ( δ ) and μ 2 = μ 2 ( δ ) are determined by
$$d_1(\mu) := \big\|\hat P_\mu(\xi)\hat f^\delta(\xi) - \hat f^\delta(\xi)\big\| = \delta + \tau_1\Big(\ln\frac{\tau_2}{\delta}\Big)^{-1} \tag{13}$$
and
$$d_2(\mu) := \big\|\hat P_\mu(\xi)\hat g^\delta(\xi) - \hat g^\delta(\xi)\big\| = \delta + \tau_1\Big(\ln\frac{\tau_2}{\delta}\Big)^{-1}, \tag{14}$$
respectively. Here, $\tau_1$ and $\tau_2$ satisfy $0 < \tau_1\big(\ln\frac{\tau_2}{\delta}\big)^{-1} < \min\{\|f^\delta\|,\|g^\delta\|\} - \delta$. To ensure the solvability of Equations (13) and (14), we need the following lemma.
Lemma 2.
For given $\delta>0$, $d_1(\mu)$ and $d_2(\mu)$ satisfy the following properties:
(a) $d_1(\mu)$ and $d_2(\mu)$ are continuous functions;
(b) $\lim_{\mu\to0^+} d_1(\mu) = \lim_{\mu\to0^+} d_2(\mu) = 0$;
(c) $\lim_{\mu\to+\infty} d_1(\mu) = \|\hat f^\delta\|$ and $\lim_{\mu\to+\infty} d_2(\mu) = \|\hat g^\delta\|$;
(d) $d_1(\mu)$ and $d_2(\mu)$ are strictly increasing functions.
The proof of Lemma 2 is straightforward and is therefore omitted.
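Since $d_1(\mu)$ is continuous and strictly increasing from $0$ to $\|\hat f^\delta\|$ by Lemma 2, Equation (13) has a unique solution whenever its right-hand side lies in $(0, \|\hat f^\delta\|)$, and this solution can be located by bisection, as is done in Section 5. A minimal Python sketch of this step (our own discretization assumptions, with the Gaussian kernel; the norm is computed up to a fixed grid scaling factor) is:

    import numpy as np

    def d1(mu, f_hat_delta, xi2):
        """Discrepancy d_1(mu) = || P_mu(xi) f_hat_delta - f_hat_delta ||."""
        p_mu = np.exp(-mu**2 * xi2 / 4)   # Gaussian kernel of Case 1
        return np.linalg.norm((p_mu - 1.0) * f_hat_delta)

    def choose_mu(f_hat_delta, xi2, delta, tau1, tau2, iters=60):
        """Bisection for d_1(mu) = delta + tau1 / ln(tau2 / delta)."""
        target = delta + tau1 / np.log(tau2 / delta)
        lo, hi = 1e-8, 1e3                # d_1 is increasing in mu (Lemma 2)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if d1(mid, f_hat_delta, xi2) < target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)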
Lemma 3.
Let $\mu_1$ and $\mu_2$ be the solutions of (13) and (14), respectively. Then, we obtain
$$\big\|\hat P_{\mu_1}(\xi)\hat f^\delta(\xi)-\hat f(\xi)\big\| \le 2\delta + \tau_1\Big(\ln\frac{\tau_2}{\delta}\Big)^{-1}, \qquad \big\|\hat P_{\mu_2}(\xi)\hat g^\delta(\xi)-\hat g(\xi)\big\| \le 2\delta + \tau_1\Big(\ln\frac{\tau_2}{\delta}\Big)^{-1}.$$
Proof. 
By the triangle inequality, we have
$$\big\|\hat P_{\mu_1}(\xi)\hat f^\delta(\xi)-\hat f(\xi)\big\| = \big\|\hat P_{\mu_1}(\xi)\hat f^\delta(\xi)-\hat f^\delta(\xi)+\hat f^\delta(\xi)-\hat f(\xi)\big\| \le \big\|\hat P_{\mu_1}(\xi)\hat f^\delta(\xi)-\hat f^\delta(\xi)\big\| + \delta \le 2\delta + \tau_1\Big(\ln\frac{\tau_2}{\delta}\Big)^{-1}.$$
Similarly, one can get
$$\big\|\hat P_{\mu_2}(\xi)\hat g^\delta(\xi)-\hat g(\xi)\big\| \le 2\delta + \tau_1\Big(\ln\frac{\tau_2}{\delta}\Big)^{-1}.\ \square$$
Lemma 4.
Let $\mu_1$ and $\mu_2$ be the solutions of (13) and (14), respectively. Then, we obtain
$$\frac{1}{\mu_1^2} \le \frac{2bE}{\tau_1}\ln\frac{\tau_2}{\delta}, \qquad \frac{1}{\mu_2^2} \le \frac{4bE}{\tau_1}\ln\frac{\tau_2}{\delta}.$$
Proof. 
By the triangle inequality, we have
$$\begin{aligned}
\delta + \tau_1\Big(\ln\frac{\tau_2}{\delta}\Big)^{-1} &= \big\|(1-\hat P_{\mu_1}(\xi))\hat f^\delta(\xi)\big\| \le \big\|(1-\hat P_{\mu_1}(\xi))(\hat f^\delta(\xi)-\hat f(\xi))\big\| + \big\|(1-\hat P_{\mu_1}(\xi))\hat f(\xi)\big\|\\
&\le \delta + \left\|(1-\hat P_{\mu_1}(\xi))\frac{\hat u(1,\xi)}{\cosh\sqrt{\xi^2+k^2}}\right\| \le \delta + 2E\sup_{\xi\in\mathbb{R}^n} b\mu_1^2\xi^2 e^{-|\xi|} \le \delta + 2bE\mu_1^2.
\end{aligned}$$
Then,
$$\frac{1}{\mu_1^2} \le \frac{2bE}{\tau_1}\ln\frac{\tau_2}{\delta}.$$
Similarly, using $\frac{\sqrt{\xi^2+k^2}}{\sinh\sqrt{\xi^2+k^2}} = \frac{2\sqrt{\xi^2+k^2}\,e^{\sqrt{\xi^2+k^2}}}{e^{2\sqrt{\xi^2+k^2}}-1}$, one can get
$$\begin{aligned}
\delta + \tau_1\Big(\ln\frac{\tau_2}{\delta}\Big)^{-1} &= \big\|(1-\hat P_{\mu_2}(\xi))\hat g^\delta(\xi)\big\| \le \big\|(1-\hat P_{\mu_2}(\xi))(\hat g^\delta(\xi)-\hat g(\xi))\big\| + \big\|(1-\hat P_{\mu_2}(\xi))\hat g(\xi)\big\|\\
&\le \delta + \left\|(1-\hat P_{\mu_2}(\xi))\frac{\sqrt{\xi^2+k^2}}{\sinh\sqrt{\xi^2+k^2}}\,\hat v(1,\xi)\right\| \le \delta + 2bE\mu_2^2\sup_{\xi\in\mathbb{R}^n}\frac{(\xi^2+k^2)^{\frac{3}{2}}\,e^{\sqrt{\xi^2+k^2}}}{e^{2\sqrt{\xi^2+k^2}}-1} \le \delta + 4bE\mu_2^2,
\end{aligned}$$
and then
$$\frac{1}{\mu_2^2} \le \frac{4bE}{\tau_1}\ln\frac{\tau_2}{\delta}.\ \square$$
Theorem 3.
Let $\hat P_\mu(\xi)$ be a regularization convolution kernel function satisfying Definition 1, let $w(x,y)$ be the exact solution of (1) for $0<x<1$, and let $w_{\mu,\delta}(x,y) = u_{\mu_1,\delta}(x,y)+v_{\mu_2,\delta}(x,y)$ be the regularization solution, where $\mu_1$ and $\mu_2$ are the solutions of (13) and (14), respectively. Assume that conditions (2) and (11) hold, and that $\tau_1$ and $\tau_2$ satisfy
$$\tau_1 \ge 4bcE, \qquad \tau_2 > \delta\,e^{\frac{\tau_1}{\min\{\|f^\delta\|,\|g^\delta\|\}-\delta}}.$$
Then, we obtain
$$\|w(x,\cdot)-w_{\mu,\delta}(x,\cdot)\| \le 2\left(\frac{1}{\ln\frac{\tau_2}{\delta}}\right)^{1-x}(\tau_1+o(1))^{1-x}\,(2E+o(1))^{x}.$$
Proof. 
By Parseval’s identity, the Holder inequality, and Lemmas 3 and 4, we have
u ( x , · ) u μ 1 , δ ( x , · ) = u ^ ( x , · ) u ^ μ 1 , δ ( x , · ) = cosh ( x ξ 2 + k 2 ) f ^ ( ξ ) P ^ μ 1 ( ξ ) cosh ( x ξ 2 + k 2 ) f ^ δ ( ξ ) = cosh ( x ξ 2 + k 2 ) ( f ^ ( ξ ) P ^ μ 1 ( ξ ) f ^ δ ( ξ ) ) ( P ^ μ 1 ( ξ ) f ^ δ ( ξ ) f ^ ( ξ ) ) 1 x ( P ^ μ 1 ( ξ ) f ^ δ ( ξ ) f ^ ( ξ ) ) ( cosh ( x ξ 2 + k 2 ) ) 1 x x ( 2 δ + τ 1 ( l n τ 2 δ ) 1 ) 1 x ( P ^ μ 1 ( ξ ) ( f ^ δ ( ξ ) f ^ ( ξ ) ) ( cosh ( x ξ 2 + k 2 ) ) 1 x + ( 1 P ^ μ 1 ( ξ ) ) f ^ ( ξ ) ( cosh ( x ξ 2 + k 2 ) ) 1 x ) x ( 2 δ + τ 1 ( l n τ 2 δ ) 1 ) 1 x ( sup ξ R n | P ^ μ 1 ( ξ ) e ξ 2 + k 2 | δ + ( 1 P ^ μ 1 ( ξ ) ) e ξ 2 + k 2 u ^ ( 1 , · ) cosh ξ 2 + k 2 ) x ( 2 δ + τ 1 ( l n τ 2 δ ) 1 ) 1 x ( e k δ sup ξ R n | P ^ μ 1 ( ξ ) e | ξ | | + 2 E sup ξ R n | 1 P ^ μ 1 ( ξ ) | ) x ( 2 δ + τ 1 ( l n τ 2 δ ) 1 ) 1 x ( e k δ e c μ 1 2 + 2 E ) x ( 2 δ + τ 1 ( l n τ 2 δ ) 1 ) 1 x ( e k δ e 2 b c E τ 1 l n τ 2 δ + 2 E ) x = ( 2 δ + τ 1 ( l n τ 2 δ ) 1 ) 1 x ( e k τ 2 2 b c E τ 1 δ 1 2 b c E τ 1 + 2 E ) x = ( 1 l n τ 2 δ ) 1 x ( τ 1 + o ( 1 ) ) 1 x ( 2 E + o ( 1 ) ) x .
Similarly, one can get
$$\begin{aligned}
\|v(x,\cdot)-v_{\mu_2,\delta}(x,\cdot)\| &= \|\hat v(x,\cdot)-\hat v_{\mu_2,\delta}(x,\cdot)\| = \left\|\frac{\sinh\big(x\sqrt{\xi^2+k^2}\big)}{\sqrt{\xi^2+k^2}}\big(\hat g(\xi)-\hat P_{\mu_2}(\xi)\hat g^\delta(\xi)\big)\right\|\\
&\le \big\|\cosh\big(x\sqrt{\xi^2+k^2}\big)\big(\hat g(\xi)-\hat P_{\mu_2}(\xi)\hat g^\delta(\xi)\big)\big\|\\
&\le \big\|\hat P_{\mu_2}(\xi)\hat g^\delta(\xi)-\hat g(\xi)\big\|^{1-x}\,\Big\|\big(\hat P_{\mu_2}(\xi)\hat g^\delta(\xi)-\hat g(\xi)\big)\big(\cosh\big(x\sqrt{\xi^2+k^2}\big)\big)^{\frac{1}{x}}\Big\|^{x}\\
&\le \Big(2\delta+\tau_1\Big(\ln\frac{\tau_2}{\delta}\Big)^{-1}\Big)^{1-x}\Big(e^{k}\,\tau_2^{\frac{4bcE}{\tau_1}}\,\delta^{1-\frac{4bcE}{\tau_1}} + 2E\Big)^{x} \le \left(\frac{1}{\ln\frac{\tau_2}{\delta}}\right)^{1-x}(\tau_1+o(1))^{1-x}\,(2E+o(1))^{x}.
\end{aligned}$$
Hence, the regularization solution $w_{\mu,\delta}(x,y) = u_{\mu_1,\delta}(x,y)+v_{\mu_2,\delta}(x,y)$ satisfies
$$\|w(x,\cdot)-w_{\mu,\delta}(x,\cdot)\| \le 2\left(\frac{1}{\ln\frac{\tau_2}{\delta}}\right)^{1-x}(\tau_1+o(1))^{1-x}\,(2E+o(1))^{x}.\ \square$$

5. Numerical Experiments

In this section, we demonstrate the numerical validity of the proposed method with two examples. The numerical experiments are carried out in MATLAB R2016b. The steps are as follows: first, we choose the exact solution $u(x,y)$ or $v(x,y)$ and obtain the exact boundary data $f(y)=u(0,y)$ or $g(y)=v_x(0,y)$; second, we simulate the measured data $f^\delta(y)$ or $g^\delta(y)$ by adding random noise to $f(y)$ or $g(y)$; third, we compute the regularization solution $u_{\mu,\delta}(x,y)$ by (9) or $v_{\mu,\delta}(x,y)$ by (10).
Considering the case $n=2$, we set the domain of $y$ to $[-2\pi,2\pi]\times[-2\pi,2\pi]$. The measured data $f^\delta$ (and, analogously, $g^\delta$) are generated by
$$f^\delta = f + \epsilon\,\big(2\,\mathrm{randn}(\mathrm{size}(f)) - 1\big),$$
where
$$f = (f_{ij})_{n\times n} = \big(f(y_i,y_j)\big)_{n\times n}, \qquad y_i = -2\pi + \frac{4\pi(i-1)}{n-1}, \quad i=1,2,\ldots,n,$$
and the function "randn(·)" generates arrays of random numbers with mean $0$ and variance $\sigma^2=1$. The noise level is
$$\delta = \|f^\delta - f\| = \frac{4\pi}{n}\sqrt{\sum_{i,j=1}^{n}\big(f^\delta_{ij}-f_{ij}\big)^2}.$$
The relative error of the regularization solution $u_{\mu,\delta}$ is
$$\mathrm{rel}(u_{\mu,\delta}) = \frac{\|u_{\mu,\delta}(x,\cdot)-u(x,\cdot)\|}{\|u(x,\cdot)\|}.$$
We always take $n=129$ grid points in each direction in the numerical experiments. The a posteriori regularization parameter $\mu=\mu(\delta)$ is determined from (13) or (14) by the bisection method.
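For concreteness, the following Python/NumPy sketch mirrors this pipeline for Example 1 below (the original experiments use MATLAB R2016b; the periodic grid, the use of the known exact solution to set $E$, and the a priori rule of Corollary 1 are our own simplifications):

    import numpy as np

    # Periodic grid on [-2*pi, 2*pi)^2, slightly different from the paper's
    # grid y_i = -2*pi + 4*pi*(i-1)/(n-1), so that the FFT is exact.
    n, L, k, x, eps = 129, 2 * np.pi, 100.0, 0.2, 0.01
    y = -L + 2 * L * np.arange(n) / n
    Y1, Y2 = np.meshgrid(y, y, indexing="ij")

    # Steps 1-2: exact data of Example 1 and simulated noisy measurement.
    f = np.sin(Y1 / 2) * np.sin(Y2 / 2)
    f_delta = f + eps * np.random.randn(n, n)
    delta = (4 * np.pi / n) * np.linalg.norm(f_delta - f)

    # A priori parameter of Corollary 1, with E = ||u(1,.)|| computed from
    # the known exact solution (possible only in a synthetic test).
    E = (4 * np.pi / n) * np.linalg.norm(f) * np.cosh(np.sqrt(k**2 + 0.5))
    mu = np.sqrt(2.0 / np.log(E / delta))

    # Step 3: regularization solution (9) with the Gaussian kernel via FFT.
    xi = 2 * np.pi * np.fft.fftfreq(n, d=2 * L / n)
    XI1, XI2 = np.meshgrid(xi, xi, indexing="ij")
    xi2 = XI1**2 + XI2**2
    mult = np.exp(-mu**2 * xi2 / 4) * np.cosh(x * np.sqrt(xi2 + k**2))
    u_reg = np.real(np.fft.ifft2(mult * np.fft.fft2(f_delta)))

    # Relative error rel(u) against the exact solution of Example 1.
    u_exact = f * np.cosh(x * np.sqrt(k**2 + 0.5))
    print("rel(u) =", np.linalg.norm(u_reg - u_exact) / np.linalg.norm(u_exact))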
Example 1.
Let the exact solution of (3) be
$$u(x,y) = u(x,y_1,y_2) = \sin\frac{y_1}{2}\,\sin\frac{y_2}{2}\,\cosh\Big(x\sqrt{k^2+\tfrac{1}{2}}\Big),$$
where k is the wave number. Then, we have
$$f(y) = u(0,y) = \sin\frac{y_1}{2}\,\sin\frac{y_2}{2}.$$
Table 1, Table 2 and Table 3 show the relative errors $\mathrm{rel}(u_{\mu,\delta})$ of the regularization solution at $x=0.2$ for different $\epsilon$ and $k$ when the regularization parameter is chosen by the a priori and the a posteriori strategies. We take the regularization convolution kernel function $\hat P_\mu(\xi)$ to be the Gaussian, the Dirichlet, and the Poussin kernel function, respectively.
Table 1, Table 2 and Table 3 show the following four facts:
(1) The regularization solution $u_{\mu,\delta}(x,y)$ is stable for both the a priori and the a posteriori parameter choice strategies;
(2) The regularization solution $u_{\mu,\delta}(x,y)$ remains accurate even for large wave numbers $k$;
(3) The results of the a posteriori choice strategy are better than those of the a priori choice strategy;
(4) The Gaussian kernel function has a slight advantage over the other two kernel functions.
Therefore, the a posteriori parameter choice strategy and the Gaussian kernel function are used in the following experiments. When $\epsilon=0.01$ and $k=100$, the exact solution $u(x,y)$, the regularization solution $u_{\mu,\delta}(x,y)$, and the error $u_{\mu,\delta}(x,y)-u(x,y)$ at $x=0.2$, $x=0.5$, and $x=0.8$ are given in Figure 1, Figure 2 and Figure 3, respectively. The figures show that the regularization solution $u_{\mu,\delta}(x,y)$ approximates the exact solution $u(x,y)$ well, and the approximation is better for smaller $x$.
Example 2.
Let the exact solution of (4) be
$$v(x,y) = v(x,y_1,y_2) = \cos(y_1+y_2)\,\frac{\sinh\big(x\sqrt{k^2+1/2}\big)}{\sqrt{k^2+1/2}},$$
where k is the wave number. Then, we have
$$g(y) = v_x(0,y) = \cos(y_1+y_2).$$
Table 4, Table 5 and Table 6 show the relative errors $\mathrm{rel}(v_{\mu,\delta})$ of the regularization solution at $x=0.2$ for different $\epsilon$ and $k$. We take the regularization convolution kernel function $\hat P_\mu(\xi)$ to be the Gaussian, the Dirichlet, and the Poussin kernel function, respectively.
The results given in Table 4, Table 5 and Table 6 are similar to those of Table 1, Table 2 and Table 3; therefore, the a posteriori parameter choice strategy and the Gaussian kernel function are used next. When $\epsilon=0.01$ and $k=100$, the exact solution $v(x,y)$, the regularization solution $v_{\mu,\delta}(x,y)$, and the error $v_{\mu,\delta}(x,y)-v(x,y)$ at $x=0.2$, $x=0.5$, and $x=0.8$ are given in Figure 4, Figure 5 and Figure 6, respectively. The results are similar to those of Example 1.

6. Discussion

Numerical experiments show that the regularization solution is stable for both the a priori and the a posteriori parameter choice strategies. Compared with the a priori choice strategy, the a posteriori choice strategy gives better results. The numerical results also show that the Gaussian kernel function is superior to the Dirichlet and Poussin kernel functions. The graphs given in Examples 1 and 2 show that the approximation quality of the regularization solution is better for smaller $x\in(0,1)$.

7. Conclusions

In this paper, we propose a general framework of the mollification regularization method to solve the Cauchy problem for a multi-dimensional modified Helmholtz equation. The common conditions that the regularization convolution kernel function should satisfy are summarized; many kernel functions satisfy these conditions, such as the Dirichlet kernel, the Poussin kernel, and the Gaussian kernel. For the a priori choice strategy of the regularization parameter, we give the convergence rate of the regularization solution for $0<x<1$ and $x=1$, respectively. A new a posteriori choice strategy of the regularization parameter based on Morozov's discrepancy principle is introduced, and the convergence rate of the regularization solution is also given for $0<x<1$. Numerical experiments show that the a posteriori parameter choice strategy is better than the a priori strategy, and the Gaussian kernel function has a slight advantage over the other kernel functions.
For future research, we plan to continue our research on the a posteriori choice strategy of the regularization parameter and apply the general mollification regularization method to solve other ill-posed problems, e.g., the inverse heat conduction problem, the edge detection problem, the inverse source problem of diffusion equations, etc.

Author Contributions

Conceptualization, H.X. and D.Z.; methodology, H.X.; software, B.W.; validation, H.X. and B.W.; formal analysis, H.X.; investigation, B.W.; resources, D.Z.; data curation, B.W.; writing—original draft preparation, H.X.; writing—review and editing, D.Z.; visualization, B.W.; supervision, H.X.; project administration, D.Z.; funding acquisition, H.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 11661008), the Science and Technology Research Project of Jiangxi Provincial Department of Education (Grant No. GJJ211402), the Natural Science Foundation of Jiangxi Province (Grant No. 20224BAB201013), and the Gannan Normal University Graduate Student Innovation Fund Project (Grant No. YCX23A025).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nguyen, H.T.; Tran, Q.V.; Nguyen, V.T. Some remarks on a modified Helmholtz equation with inhomogeneous source. Appl. Math. Model. 2013, 37, 793–814. [Google Scholar] [CrossRef]
  2. Cheng, H.W.; Huang, J.F.; Leiterman, T.J. An adaptive fast solver for the modified Helmholtz equation in two dimensions. J. Comput. Phys. 2006, 211, 616–637. [Google Scholar] [CrossRef]
  3. Manoussakis, G. A new modified Helmholtz equation for the expression of the gravity gradient and the intensity of an electrostatic field in spherical harmonics. Mathematics 2023, 11, 4326. [Google Scholar] [CrossRef]
  4. Reginska, T.; Reginski, K. Approximate solution of a Cauchy problem for the Helmholtz equation. Inverse Probl. 2006, 22, 975–989. [Google Scholar] [CrossRef]
  5. Xiong, X.T.; Shi, W.X.; Fan, X.Y. Two numerical methods for a Cauchy problem for modified Helmholtz equation. Appl. Math. Model. 2011, 35, 4951–4964. [Google Scholar] [CrossRef]
  6. Marin, L.; Elliott, L.; Heggs, P.J.; Ingham, D.B.; Lesnic, D.; Wen, X. Conjugate gradient-boundary element solution to the Cauchy problem for Helmholtz-type equations. Comput. Mech. 2003, 31, 367–377. [Google Scholar] [CrossRef]
  7. Qin, H.H.; Wei, T. Quasi-reversibility and truncation methods to solve a Cauchy problem of the modified Helmholtz equation. Math. Comput. Simulat. 2009, 80, 352–366. [Google Scholar] [CrossRef]
  8. Cheng, H.; Zhu, P.; Gao, J. A regularization method for the cauchy problem of the modified Helmholtz equation. Math. Meth. Appl. Sci. 2015, 38, 3711–3719. [Google Scholar] [CrossRef]
  9. Chen, Y.G.; Yang, F.; Ding, Q. The Landweber iterative regularization method for solving the Cauchy problem of the modified Helmholtz equation. Symmetry 2022, 14, 1209. [Google Scholar] [CrossRef]
  10. Fu, C.L.; Feng, X.L.; Qian, Z. The Fourier regularization for solving the Cauchy problem for the Helmholtz equation. Appl. Numer. Math. 2009, 59, 2625–2640. [Google Scholar] [CrossRef]
  11. He, S.Q.; Feng, X.F. A regularization method to solve a Cauchy problem for the two-dimensional modified Helmholtz equation. Mathematics 2019, 7, 360. [Google Scholar] [CrossRef]
  12. Jday, F.; Omri, H. Adaptive Runge-Kutta regularization for a Cauchy problem of a modified Helmholtz equation. J. Inverse Ill-Posed Probl. 2023, 31, 351–374. [Google Scholar] [CrossRef]
  13. Hao, D.N. A mollification method for ill-posed problems. Numer. Math. 1994, 68, 469–506. [Google Scholar]
  14. Murio, D.A. The Mollification Method and the Numerical Solution of Ill-Posed Problems; Wiley-Interscience Publication: New York, NY, USA, 1993. [Google Scholar]
  15. Basu, M. Gaussian-based edge-detection methods—A survey. IEEE Trans Syst. Man Cybern. C 2002, 32, 252–260. [Google Scholar] [CrossRef]
  16. Yang, F.; Fu, C.L. A mollification regularization method for the inverse spatial-dependent heat source problem. J. Comput. Appl. Math. 2014, 255, 555–567. [Google Scholar] [CrossRef]
  17. He, S.Q.; Feng, X.F. A mollification regularization method with the Dirichlet Kernel for two Cauchy problems of three-dimensional Helmholtz equation. Int. J. Comput. Math. 2020, 97, 2320–2336. [Google Scholar] [CrossRef]
  18. Fu, C.L.; Ma, Y.J.; Zhang, Y.X.; Yang, F. A a posteriori regularization for the Cauchy problem for the Helmholtz equation with inhomogeneous Neumann data. Appl. Math. Model. 2015, 39, 4103–4120. [Google Scholar] [CrossRef]
  19. He, S.Q.; Di, C.N.; Yang, L. The mollification method based on a modified operator to the ill-posed problem for 3D Helmholtz equation with mixed boundary. Appl. Numer. Math. 2021, 160, 422–435. [Google Scholar] [CrossRef]
  20. Li, Z.P.; Xu, C.; Lan, M.; Qian, Z. A mollification method for a Cauchy problem for the Helmholtz equation. Int. J. Comput. Math. 2018, 95, 2256–2268. [Google Scholar] [CrossRef]
  21. Kirsch, A. An Introduction to the Mathematical Theory of Inverse Problems, 2nd ed.; Springer: New York, NY, USA, 2011. [Google Scholar]
Figure 1. The exact solution $u(x,y)$, the regularization solution $u_{\mu,\delta}(x,y)$, and the absolute value of the error $u_{\mu,\delta}(x,y)-u(x,y)$ at $x=0.2$ when $\epsilon=0.01$ and $k=100$.
Figure 2. The exact solution $u(x,y)$, the regularization solution $u_{\mu,\delta}(x,y)$, and the error $u_{\mu,\delta}(x,y)-u(x,y)$ at $x=0.5$ when $\epsilon=0.01$ and $k=100$.
Figure 3. The exact solution $u(x,y)$, the regularization solution $u_{\mu,\delta}(x,y)$, and the error $u_{\mu,\delta}(x,y)-u(x,y)$ at $x=0.8$ when $\epsilon=0.01$ and $k=100$.
Figure 4. The exact solution $v(x,y)$, the regularization solution $v_{\mu,\delta}(x,y)$, and the error $v_{\mu,\delta}(x,y)-v(x,y)$ at $x=0.2$ when $\epsilon=0.01$ and $k=100$.
Figure 5. The exact solution $v(x,y)$, the regularization solution $v_{\mu,\delta}(x,y)$, and the error $v_{\mu,\delta}(x,y)-v(x,y)$ at $x=0.5$ when $\epsilon=0.01$ and $k=100$.
Figure 6. The exact solution $v(x,y)$, the regularization solution $v_{\mu,\delta}(x,y)$, and the error $v_{\mu,\delta}(x,y)-v(x,y)$ at $x=0.8$ when $\epsilon=0.01$ and $k=100$.
Table 1. The relative errors $\mathrm{rel}(u_{\mu,\delta})$ of the regularization solution with the Gaussian kernel function at $x=0.2$ for different $\epsilon$ and $k$.

                            ϵ = 0.1     ϵ = 0.01    ϵ = 0.001   ϵ = 0.0001
  a priori       k = 1      0.233680    0.159344    0.117272    0.070912
                 k = 10     0.234835    0.159938    0.117797    0.071367
                 k = 100    0.235930    0.160133    0.117868    0.071860
  a posteriori   k = 1      0.112525    0.079810    0.031275    0.017466
                 k = 10     0.113050    0.080876    0.032780    0.017826
                 k = 100    0.113058    0.080909    0.034385    0.018324
Table 2. The relative errors $\mathrm{rel}(u_{\mu,\delta})$ of the regularization solution with the Dirichlet kernel function at $x=0.2$ for different $\epsilon$ and $k$.

                            ϵ = 0.1     ϵ = 0.01    ϵ = 0.001   ϵ = 0.0001
  a priori       k = 1      0.268808    0.199017    0.129024    0.085737
                 k = 10     0.271318    0.199420    0.129214    0.091286
                 k = 100    0.273179    0.199462    0.129992    0.091387
  a posteriori   k = 1      0.123046    0.085015    0.059284    0.024477
                 k = 10     0.127065    0.085118    0.059289    0.022827
                 k = 100    0.127246    0.085211    0.059296    0.022905
Table 3. The relative errors $\mathrm{rel}(u_{\mu,\delta})$ of the regularization solution with the Poussin kernel function at $x=0.2$ for different $\epsilon$ and $k$.

                            ϵ = 0.1     ϵ = 0.01    ϵ = 0.001   ϵ = 0.0001
  a priori       k = 1      0.259040    0.183093    0.125741    0.085630
                 k = 10     0.272195    0.183271    0.125906    0.087056
                 k = 100    0.281887    0.184352    0.126475    0.087284
  a posteriori   k = 1      0.113688    0.083249    0.039925    0.019824
                 k = 10     0.117872    0.083636    0.041173    0.022638
                 k = 100    0.119315    0.083901    0.042274    0.022715
Table 4. The relative errors $\mathrm{rel}(v_{\mu,\delta})$ of the regularization solution with the Gaussian kernel function at $x=0.2$ for different $\epsilon$ and $k$.

                            ϵ = 0.1     ϵ = 0.01    ϵ = 0.001   ϵ = 0.0001
  a priori       k = 1      0.346058    0.080304    0.042824    0.033628
                 k = 10     0.349183    0.082115    0.044944    0.035809
                 k = 100    0.351843    0.088390    0.052139    0.043055
  a posteriori   k = 1      0.117750    0.068838    0.035348    0.011900
                 k = 10     0.120404    0.071982    0.036363    0.013695
                 k = 100    0.126416    0.077805    0.044492    0.017960
Table 5. The relative errors $\mathrm{rel}(v_{\mu,\delta})$ of the regularization solution with the Dirichlet kernel function at $x=0.2$ for different $\epsilon$ and $k$.

                            ϵ = 0.1     ϵ = 0.01    ϵ = 0.001   ϵ = 0.0001
  a priori       k = 1      0.855277    0.128594    0.086578    0.071735
                 k = 10     0.855221    0.129458    0.085031    0.071980
                 k = 100    0.855772    0.129776    0.086976    0.072002
  a posteriori   k = 1      0.124472    0.086636    0.055169    0.017641
                 k = 10     0.127957    0.084905    0.061391    0.022857
                 k = 100    0.129509    0.088039    0.063265    0.025268
Table 6. The relative errors $\mathrm{rel}(v_{\mu,\delta})$ of the regularization solution with the Poussin kernel function at $x=0.2$ for different $\epsilon$ and $k$.

                            ϵ = 0.1     ϵ = 0.01    ϵ = 0.001   ϵ = 0.0001
  a priori       k = 1      0.741338    0.124538    0.052405    0.047151
                 k = 10     0.744302    0.126333    0.052473    0.047907
                 k = 100    0.744574    0.127423    0.052592    0.048278
  a posteriori   k = 1      0.123952    0.084619    0.051950    0.017559
                 k = 10     0.126644    0.086460    0.053411    0.018083
                 k = 100    0.129783    0.090114    0.059655    0.022334

