Article

Boundary Knot Neural Networks for the Inverse Cauchy Problem of the Helmholtz Equation

College of Mechanical and Electrical Engineering, Qingdao University, Qingdao 266071, China
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(18), 3029; https://doi.org/10.3390/math13183029
Submission received: 22 August 2025 / Revised: 12 September 2025 / Accepted: 14 September 2025 / Published: 19 September 2025

Abstract

The traditional boundary knot method (BKM) has certain advantages in solving Helmholtz equations, but it still faces the difficulty of solving the resulting ill-posed systems when dealing with inverse problems. This work proposes a novel deep learning framework, the boundary knot neural network (BKNN), for solving inverse Cauchy problems of the Helmholtz equation. The method begins by uniformly distributing collocation points on the physical boundary and then employs a fully connected neural network to approximate the source point coefficient vector in the BKM. The physical quantities in the computational domain can be expressed by the BKM formula, and the loss functions can be constructed from the accessible conditions on measurable boundaries. The optimal weights and biases are then obtained by training the fully connected neural network, and thus the source point coefficient vector can be recovered. As a machine learning-based meshless scheme, the BKNN eliminates tedious procedures such as meshing and numerical integration while handling inverse Cauchy problems with complex boundaries. More importantly, the method is itself an optimization algorithm that completely avoids the complex processing techniques required for ill-conditioned systems in traditional methods. Numerical experiments validate the efficacy of the proposed method, showcasing its superior performance over the traditional BKM for solving the Helmholtz equation's inverse Cauchy problems.

1. Introduction

The Helmholtz equation is ubiquitous in engineering and scientific applications. While forward problems defined with fully specified boundary condition data have been thoroughly investigated, practical scenarios often involve incomplete boundary information due to measurement limitations [1,2]. Technical challenges arise when portions of the boundary remain physically inaccessible, or when sensor intrusions perturb the system under study, yielding only approximate measurements. This results in an underdetermined system that requires supplementary data, either additional boundary condition data on accessible regions or interior point measurements, to uniquely reconstruct the unspecified boundary condition data. Such inverse formulations exhibit inherent ill-posedness, where minor data perturbations may induce substantial solution deviations. Numerous computational approaches address this inverse Cauchy problem [3,4,5,6,7], broadly categorized as iterative or direct methods. Both categories necessitate partial differential equation (PDE) discretization, typically implemented through finite difference methods (FDMs) [8,9,10,11], finite element methods (FEMs) [12,13], or boundary element methods (BEMs) [14,15]. Domain-discretization techniques (FDM/FEM) face significant meshing challenges for complex geometries, particularly in higher dimensions. The BEM alleviates this issue through dimensionality reduction, yet encounters implementation obstacles, including singular integral evaluations and non-trivial surface meshing for intricate shapes.
Recent advancements in computational techniques have witnessed significant development of meshless/meshfree methods [16,17,18], which aim to circumvent mesh generation and singular integration challenges while maintaining solution accuracy and efficiency for high-dimensional complex systems. Prominent among these methodologies are the radial basis function (RBF) method [19,20,21,22], the method of fundamental solutions (MFS) [23,24,25,26], and the singular boundary method (SBM) [27,28,29]. The RBF framework offers a versatile meshless strategy for solving general PDEs; however, its implementation requires the determination of optimal shape parameters, which remains a critical and widely debated issue. In contrast, the MFS employs fundamental solutions as basis functions and introduces geometric constraints by requiring an artificial fictitious boundary to avoid source singularities. Despite sustained research efforts, the appropriate positioning of this fictitious boundary persists as a non-trivial optimization problem, particularly for intricate three-dimensional structures. The SBM dispenses with the fictitious boundary but requires specific techniques to determine the source intensity factor, which increases the workload and affects accuracy [30,31,32]. As a novel computational alternative, the BKM [33,34,35,36,37,38,39,40] developed by Chen and Tanaka synergistically combines the merits of the MFS and RBF techniques. In contrast to conventional approaches, this methodology eliminates both the shape parameter dependence and the fictitious boundary dilemma through the innovative adoption of non-singular general solutions rather than singular fundamental solutions. Its advantages, including algorithmic simplicity, numerical accuracy, and the geometric flexibility inherent in its meshless formulation, have driven its successful implementation across diverse PDE systems, as evidenced by recent applications documented in [33,36,37]. At the same time, the BKM usually requires the introduction of regularization techniques to solve inverse Cauchy problems, thereby suppressing the instability of the solution and improving the stability and accuracy of the numerical solution. Despite the significant efforts made to address these issues, considerable scope for enhancement remains in terms of computational complexity, efficiency, and accuracy. The key step of the BKM in handling inverse Cauchy problems is the solution of the source point coefficient vector [41].
The physics-informed neural network (PINN) [21,42,43,44,45,46,47,48] is an emerging approach that integrates physical information with neural networks. By leveraging the strong function approximation capabilities of neural networks, PINN provides an innovative solution for solving complex problems. To fully exploit the strengths of both the PINN and the BKM, this paper introduces a novel machine learning-based semi-analytical meshless method, the BKNN. The method employs PINN to derive the optimal source point coefficient vector for the BKM. Four numerical examples demonstrate the validity of the proposed method. As a machine learning-based method that employs optimization algorithms to obtain numerical solutions, it does not need additional regularization techniques and is particularly well-suited for solving ill-posed problems.
This paper is structured as follows: Section 2 presents the inverse Cauchy problem associated with the Helmholtz equation studied. Section 3 elaborates on the basic principles of the BKM, PINN and BKNN used in this paper. Section 4 presents the numerical results and comparisons. Finally, Section 5 draws some conclusions.

2. Problem Statement

The Helmholtz equation finds widespread applications in various fields, including acoustics, electromagnetics, and fluid dynamics. The forward problem, defined by complete boundary conditions, has been the subject of extensive research in recent years. Unfortunately, technical challenges in measurement often prevent the complete acquisition of boundary condition data. For example, certain segments of the boundary might be impossible to measure directly. Furthermore, the insertion of measurement devices can perturb the system under investigation, resulting in unreliable data. The problem then becomes underdetermined. As shown in Figure 1, the portion of the boundary denoted as B1 is prescribed with two types of boundary condition data, Dirichlet and Neumann. In contrast, no boundary condition data are provided on the portion B2. This setup constitutes a Cauchy problem, which is well known for its ill-posedness. For simplicity, we consider the following inverse Cauchy problem of the two-dimensional Helmholtz equation:
$$\nabla^2 u(\mathbf{x}) + k^2 u(\mathbf{x}) = 0, \quad \mathbf{x} \in \Omega, \tag{1}$$
$$u(\mathbf{x}) = \bar{u}(\mathbf{x}), \quad \mathbf{x} \in B_1, \tag{2}$$
$$\frac{\partial u(\mathbf{x})}{\partial n} = \bar{q}(\mathbf{x}), \quad \mathbf{x} \in B_1, \tag{3}$$
where $\nabla^2 = \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2}$ is the Laplacian operator, $k$ is the wave number, $\bar{u}$ and $\bar{q}$ are the accessible boundary data at the boundary point $\mathbf{x} = (x_1, x_2)$, and $\Omega$ denotes the two-dimensional physical domain with boundary $\partial\Omega = B_1 \cup B_2$. In this study, we only consider the case where $k$ is a real number.
To address this problem, we assume that a total of $N = n_1 + n_2 + n_i$ nodes are distributed over the boundaries and the interior of the computational domain. Specifically, the nodes $\{\mathbf{x}_i\}_{i=1}^{n_1}$ are located on boundary B1, the nodes $\{\mathbf{x}_i\}_{i=n_1+1}^{n_1+n_2}$ are located on boundary B2, and the nodes $\{\mathbf{x}_i\}_{i=n_1+n_2+1}^{N}$ are placed inside the domain. The task is to approximate the physical quantities on boundary B2 and inside the computational domain using the accessible data on boundary B1. It should be noted that this constitutes an inverse Cauchy problem, which is often ill-posed, as small perturbations in the input data can result in significant deviations in the solution.
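As a concrete illustration of this node layout, the following MATLAB sketch distributes boundary and interior nodes on a unit disk; the disk is used here only as a stand-in geometry (the peanut-, triangle-, and star-shaped domains of Section 4 would require their own boundary parametrizations), and all variable names are illustrative.

```matlab
% Node layout for the inverse Cauchy problem on a unit disk (illustrative geometry).
n1 = 150;  n2 = 150;  ni = 200;              % accessible, inaccessible, interior node counts
theta = 2*pi*(0:n1+n2-1)'/(n1+n2);           % uniform distribution in the angular direction
xb  = [cos(theta), sin(theta)];              % all boundary nodes
xB1 = xb(1:n1, :);                           % nodes on the accessible boundary B1
xB2 = xb(n1+1:end, :);                       % nodes on the inaccessible boundary B2
rad = sqrt(rand(ni,1));  phi = 2*pi*rand(ni,1);
xin = [rad.*cos(phi), rad.*sin(phi)];        % interior nodes, roughly uniform over the disk
```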

3. Methodology

3.1. Physics-Informed Neural Networks

The PINN is a scientific machine learning framework that has been successfully applied to a wide range of problems, including boundary value problems, fractional equations, integro-differential equations, and stochastic PDEs. It utilizes the powerful function approximation capabilities of neural networks to provide an innovative approach to solving the inverse Cauchy problem. Unlike earlier neural network-based solvers, which primarily aimed to approximate solutions directly, the PINN is often employed to infer unknown parameters or fields in the governing equations by embedding physical laws into the learning process. The PINN approximates the solutions to boundary value problems (BVPs) by training neural networks to minimize a loss function that incorporates both data discrepancies and the residuals of the governing equations. By integrating physical constraints, the PINN enables the model to better capture the underlying structure of the solution. This integration of domain knowledge significantly improves the model’s interpretability and generalization capabilities compared to traditional neural networks.
The application of the PINN to various BVPs has become increasingly mature. Figure 2 illustrates the basic framework of the PINN for solving the inverse Cauchy problem of the Helmholtz equation given in Section 2. The framework comprises four primary elements: a deep neural network (DNN), automatic differentiation (AD), physical information, and a feedback learning mechanism.
To effectively guide the approximation of the solution, we define the following loss functions, which incorporate the given boundary condition data and the governing equation:
$$L_{DC} = \frac{1}{n_1}\sum_{i=1}^{n_1}\left[\hat{u}(\mathbf{x}_i) - \bar{u}(\mathbf{x}_i)\right]^2, \tag{4}$$
$$L_{NC} = \frac{1}{n_1}\sum_{i=1}^{n_1}\left[\frac{\partial \hat{u}(\mathbf{x}_i)}{\partial n} - \bar{q}(\mathbf{x}_i)\right]^2, \tag{5}$$
$$L_{GE} = \frac{1}{n_2+n_i}\sum_{i=n_1+1}^{N}\left[\nabla^2 \hat{u}(\mathbf{x}_i) + k^2 \hat{u}(\mathbf{x}_i)\right]^2, \tag{6}$$
where $L_{DC}$ and $L_{NC}$ correspond to the loss functions for the Dirichlet and Neumann boundary condition data, respectively, and $L_{GE}$ is the loss function for the governing equation. $\hat{u}(\mathbf{x}_i)$ denotes the numerical solution.
The physical information component integrates governing physical laws, including boundary condition data and governing equations, directly into the learning process. By incorporating these physical constraints into the loss function, the model ensures that its solutions inherently satisfy the underlying physics, significantly enhancing predictive accuracy and physical consistency. The feedback learning mechanism iteratively updates the model parameters, guided by the prediction errors. This process promotes convergence and accuracy while guaranteeing that solutions remain consistent with physical constraints and fundamental mathematical principles.
The DNN adopts a fully connected feed-forward architecture structurally organized into three distinct components: an input layer, multiple hidden layers, and an output layer. Within the hidden layers, each neuron applies a mathematical transformation that involves three key components: trainable weights, bias terms, and differentiable activation functions. The use of nonlinear activation functions endows the network with expressive power, enabling it to capture complex and highly nonlinear input-output relationships through hierarchical feature abstraction. This enhances the model’s ability to extract intricate patterns from data, which is crucial for accurately approximating the solutions to complex physical problems.
A portion of the hidden layer structure is illustrated in Figure 3. For each hidden layer, the relationship between the output vector Y and the input vector X can be expressed as follows:
$$\mathbf{Y} = \sigma(\boldsymbol{\omega} \mathbf{X} + \mathbf{g}), \tag{7}$$
where ω and g denote the trainable parameters of the network, namely the weights and biases. The function σ acts as an activation function, providing the necessary nonlinearity. Each neuron in the hidden layer is equipped with this function, allowing the network to capture nonlinear mappings.
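A minimal MATLAB sketch of this forward pass is given below; the layer sizes, the storage of parameters as cell arrays, and the function name are illustrative assumptions rather than the authors' implementation.

```matlab
function Y = forward_pass(X, W, g)
% One forward pass of a fully connected network, Equation (7) applied layer by layer.
% X : d-by-m matrix of m input points (one point per column)
% W : cell array of weight matrices, g : cell array of bias column vectors
    Y = X;
    for l = 1:numel(W)
        Z = W{l}*Y + g{l};        % affine map  omega*X + g  (bias broadcast over columns)
        if l < numel(W)
            Y = atan(Z);          % arctan activation in the hidden layers
        else
            Y = Z;                % linear output layer
        end
    end
end
```

For example, W = {randn(6,2), randn(6,6), randn(1,6)} and g = {randn(6,1), randn(6,1), randn(1,1)} define a 2-6-6-1 network mapping two input coordinates to a scalar output.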
The implementation of the PINN for solving the inverse Cauchy problem of the Helmholtz equation involves a series of crucial steps. First, a DNN is constructed to approximate the solution of the Helmholtz equation. A gradient-based optimization method is then employed to update the neural network parameters, and the iteration proceeds until the loss function falls below the preset threshold. Ultimately, the trained network yields a numerical solution to the inverse Cauchy problem of the Helmholtz equation. The primary workflow of the algorithm is outlined in Algorithm 1.
Algorithm 1: The PINN for solving the inverse Cauchy problem of the Helmholtz equation
1. Input:
  • Define governing equation, computational domain and boundary conditions.
  • Discretize the domain into a set of collocation and boundary nodes.
  • Define the neural network parameters including layers, neurons per layer, learning rate, activation function, and training epochs.
2. Initialization:
  • Initialize the neural network parameters ( ω , g ) .
3. PINN implementation:
  • Build a DNN to approximate the solution u ^ ( x ) .
  • Use the AD to calculate the derivatives of various orders involved in the loss function.
  • Construct a loss function incorporating boundary residuals and governing equation constraints.
  • Update the network parameters by minimizing the loss function.
  • Perform iterative optimization until the loss function satisfies the predefined tolerance criterion.
4. Output:
  • Use the trained network to provide numerical solutions within the domain and along the inaccessible boundary B2.
  • Present the computed physical quantities along with relevant visualizations.

3.2. Boundary Knot Method Using the Moore–Penrose Inverse

The BKM is an inherently meshless boundary-type numerical technique for solving the inverse Cauchy problem of the Helmholtz equation. Unlike traditional mesh-based methods, the BKM does not require discretization of the entire computational domain, which significantly simplifies the modeling process. Moreover, it avoids the need for complex numerical integration, as it provides direct numerical solutions. The BKM can be regarded as a variant of Kansa's method, wherein the radial basis functions are constructed from the non-singular general solution of the governing equation. This approach enables exponential convergence rates and mitigates the loss of accuracy commonly associated with conventional RBFs. We next present a concise overview of the approach.
The non-singular general solution of the homogeneous Helmholtz equation is given by
$$u^*(\mathbf{x}, \mathbf{s}) = J_0\left(k \left\|\mathbf{x} - \mathbf{s}\right\|_2\right), \tag{8}$$
where $J_0$ represents the zero-order Bessel function of the first kind, and $\mathbf{x}$ and $\mathbf{s}$ denote field points and source points, respectively.
By using the non-singular general solution in Equation (8) as the radial basis function, the numerical solution of Equation (1) can be represented by
$$u(\mathbf{x}) = \sum_{j=1}^{n_1+n_2} \alpha_j u^*(\mathbf{x}, \mathbf{s}_j), \tag{9}$$
where $n_1 + n_2$ denotes the total number of source points $\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_{n_1+n_2}$, and $\boldsymbol{\alpha} = \left[\alpha_1, \alpha_2, \ldots, \alpha_{n_1+n_2}\right]^{\mathrm{T}}$ is the source point coefficient vector.
By collocating Equations (2) and (3) at all collocation knots on the accessible boundary B1, the following system of equations is obtained,
$$\sum_{j=1}^{n_1+n_2} \alpha_j u^*(\mathbf{x}_i, \mathbf{s}_j) = \bar{u}(\mathbf{x}_i), \quad i = 1, \ldots, n_1, \; \mathbf{x}_i \in B_1, \tag{10}$$
$$\sum_{j=1}^{n_1+n_2} \alpha_j \frac{\partial u^*(\mathbf{x}_i, \mathbf{s}_j)}{\partial n} = \bar{q}(\mathbf{x}_i), \quad i = 1, \ldots, n_1, \; \mathbf{x}_i \in B_1. \tag{11}$$
Equations (10) and (11) can be written as the following $2 n_1 \times (n_1 + n_2)$ matrix system,
$$\mathbf{A}\boldsymbol{\alpha} = \mathbf{b}, \tag{12}$$
where $\mathbf{A}$ is the coefficient matrix derived from the non-singular general solutions, and $\mathbf{b} = \left[\bar{u}(\mathbf{x}_1), \ldots, \bar{u}(\mathbf{x}_{n_1}), \bar{q}(\mathbf{x}_1), \ldots, \bar{q}(\mathbf{x}_{n_1})\right]^{\mathrm{T}}$ denotes the vector formed by the accessible boundary values.
It should be noted that Equation (12) is a dense and ill-posed system, for which traditional solvers struggle to provide accurate solutions. An effective strategy is to use the Moore–Penrose inverse, which is particularly well suited to ill-posed or underdetermined systems. Accordingly, the vector $\boldsymbol{\alpha}$ in Equation (12) can be computed using the following expression,
$$\boldsymbol{\alpha} = \mathbf{A}^{+}\mathbf{b}, \tag{13}$$
where $\mathbf{A}^{+}$ is the Moore–Penrose inverse (MPI) of the matrix $\mathbf{A}$, which can be calculated using the MATLAB R2022b command "pinv($\mathbf{A}$, tol)", where tol is the singular value tolerance. With $\boldsymbol{\alpha}$ determined, the physical values at the inaccessible boundary nodes and interior points are evaluated by the following formulas,
$$\hat{u}(\mathbf{x}_i) = \sum_{j=1}^{n_1+n_2} \alpha_j u^*(\mathbf{x}_i, \mathbf{s}_j), \quad i = n_1+1, \ldots, N, \tag{14}$$
$$\frac{\partial \hat{u}(\mathbf{x}_i)}{\partial n} = \sum_{j=1}^{n_1+n_2} \alpha_j \frac{\partial u^*(\mathbf{x}_i, \mathbf{s}_j)}{\partial n}, \quad i = n_1+1, \ldots, n_1+n_2. \tag{15}$$
Algorithm 2 provides a detailed description of the BKM-MPI for solving inverse Cauchy problems of the Helmholtz equation.
Algorithm 2: The BKM-MPI for solving inverse Cauchy problems of the Helmholtz equation
1. Input:
  • Define governing equation, computational domain and boundary conditions.
  • Identify the non-singular general solution of the governing equation.
  • Discretize the domain into a set of collocation and boundary nodes (source points).
2. Calculate source point coefficient vector:
  • Construct matrix A using boundary and source point positions with non-singular general solutions.
  • Assemble vector b using accessible boundary conditions.
  • Solve the linear system A α = b using the MPI to obtain unknown coefficients α .
3. Output:
  • Use the computed coefficient vector in Equations (14) and (15) to provide numerical solutions within the domain and along the inaccessible boundary B2.
  • Present the computed physical quantities and any relevant graphical representations.
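To make the workflow concrete, the following self-contained MATLAB sketch applies the BKM-MPI to a unit disk with a 50% accessible boundary and the test solution u = sin(x1) + cos(x2) with k = 1 (the same solution as Example 1, but on a simpler stand-in geometry). The geometry, node counts, and tolerance are illustrative assumptions, not the authors' setup; the Neumann rows use the analytic derivative d J0(kr)/dr = −k J1(kr), so no numerical differentiation is needed.

```matlab
% BKM-MPI on a unit disk with a 50% accessible boundary (illustrative sketch).
k  = 1;  n1 = 150;  n2 = 150;
th = 2*pi*(0:n1+n2-1)'/(n1+n2);
s  = [cos(th), sin(th)];                       % boundary nodes, also used as source points
nv = s;                                        % outward unit normals on the unit circle
xB1 = s(1:n1,:);   nB1 = nv(1:n1,:);           % accessible part B1
xB2 = s(n1+1:end,:);                           % inaccessible part B2

uex = @(x) sin(x(:,1)) + cos(x(:,2));          % test solution (satisfies Eq. (1) for k = 1)
qex = @(x,n) cos(x(:,1)).*n(:,1) - sin(x(:,2)).*n(:,2);   % its normal derivative

% Assemble the 2*n1-by-(n1+n2) collocation matrix of Equations (10)-(11)
A = zeros(2*n1, n1+n2);
for j = 1:n1+n2
    dx = xB1(:,1) - s(j,1);   dy = xB1(:,2) - s(j,2);
    r  = sqrt(dx.^2 + dy.^2);
    A(1:n1, j)     = besselj(0, k*r);                       % Dirichlet rows
    drdn           = (dx.*nB1(:,1) + dy.*nB1(:,2)) ./ max(r, eps);
    A(n1+1:end, j) = -k*besselj(1, k*r).*drdn;              % Neumann rows: d/dr J0(kr) = -k J1(kr)
end
b     = [uex(xB1); qex(xB1, nB1)];             % accessible boundary data, Equation (12)
alpha = pinv(A, 1e-3)*b;                       % Moore-Penrose inverse, Equation (13)

% Reconstruct u on the inaccessible boundary B2 via Equation (14)
uB2 = zeros(n2,1);
for j = 1:n1+n2
    r   = sqrt((xB2(:,1)-s(j,1)).^2 + (xB2(:,2)-s(j,2)).^2);
    uB2 = uB2 + alpha(j)*besselj(0, k*r);
end
fprintf('MSE on B2: %.3e\n', mean((uB2 - uex(xB2)).^2));
```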

3.3. Boundary Knot Neural Networks

The BKM primarily characterizes the influence of source points on field points by employing non-singular general solutions. It determines the coefficient vector associated with each source point using boundary nodes and the corresponding boundary condition data. Once the coefficient vector is established, physical quantities at any point in the domain can be efficiently computed. The method has several advantages, including theoretical simplicity, high numerical accuracy, and ease of implementation. However, in the context of the inverse Cauchy problem, where the available boundary condition data are incomplete or uncertain, the matrix formed by the non-singular general solution may not be a square matrix. This can lead to instability in direct computations using the BKM.
To address this limitation, this paper integrates the PINN into the traditional BKM, resulting in a novel approach termed the BKNN. The stability of the BKNN in addressing ill-posed inverse problems is attributed to several key features. First, the network incorporates physics-informed constraints by embedding the boundary conditions on measurable boundaries into the loss function, providing an effective regularization that reduces sensitivity to noise and incomplete boundary data. Second, the meshless representation based on non-singular boundary kernels enables smooth, global approximations, avoiding numerical instabilities commonly encountered in traditional methods. Furthermore, as a machine learning-based approach, the BKNN inherently employs optimization and iterative solution mechanisms, which further enhance its ability to reliably resolve ill-posed problems. Together, these aspects allow the BKNN to generate accurate and physically consistent solutions without resorting to additional complex processing techniques.
Figure 4 illustrates the BKNN framework. The BKNN architecture consists of four main components: a deep neural network, a BKM part, a loss function based on boundary conditions and backpropagation algorithms. The neural networks take the source points as input and output the corresponding source point coefficient vector. By training the neural networks, the weights and biases in the BKNN are updated simultaneously. The source point coefficient vector is acquired once the loss converges to a predefined threshold.
By employing the BKNN, the proposed approach effectively circumvents the challenge of solving ill-conditioned systems in the traditional BKM. Moreover, the inherent flexibility of neural networks enables the method to adapt to complex geometries and boundary condition data, significantly mitigating the impact of boundary perturbations on the results. This adaptability allows the BKNN to be readily applied to a broad spectrum of problems without requiring significant changes to the underlying model. For three-dimensional problems, the two-dimensional non-singular general solution and coordinates should simply be replaced with their three-dimensional counterparts to perform the iterative calculations within the same framework.
Algorithm 3 lists the BKNN procedure for solving inverse Cauchy problems of the Helmholtz equation.
Algorithm 3: The BKNN for solving inverse Cauchy problems of the Helmholtz equation
1. Input:
  • Define computational domain and boundary conditions.
  • Identify the non-singular general solutions of the governing equation.
  • Discretize the domain into a set of boundary nodes (source points).
  • Define the neural network parameters including layers, neurons per layer, learning rate, activation function, and training epochs.
2. Initialization:
  • Initialize the neural network parameters ( ω , g ) .
3. BKNN implementation:
  • Build a DNN to evaluate the source point coefficient vector α .
  • Construct matrix A using boundary and source point positions with non-singular general solutions.
  • Approximate the solution u ^ ( x ) using the BKM formula.
  • Assemble vector b using accessible boundary conditions.
  • Construct a loss function incorporating boundary residuals.
  • Update the network parameters by minimizing the loss function.
  • Perform iterative optimization until the loss function satisfies the predefined tolerance criterion.
  • Obtain the optimal source point coefficient vector α .
4. Output:
  • Use the trained network to provide numerical solutions within the domain and along the inaccessible boundary B2.
  • Present the computed physical quantities and any relevant graphical representations.
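The following MATLAB sketch illustrates the core of Algorithm 3 under simplifying assumptions: the matrix A, the data vector b, the source points s, and the count n1 are assumed to be assembled exactly as in the BKM sketch of Section 3.2, and the 2-6-6-1 network, the parameter packing, and the optimizer settings are illustrative choices rather than the authors' exact configuration.

```matlab
% BKNN training loop (sketch): a small network maps each source point to its
% coefficient alpha_j, and fmincon minimizes the boundary residual loss.
A_D = A(1:n1,:);   A_N = A(n1+1:end,:);        % Dirichlet/Neumann blocks from the BKM sketch
layers = [2, 6, 6, 1];                         % input: source coordinates; output: alpha_j
np = 0;
for l = 1:numel(layers)-1
    np = np + layers(l+1)*layers(l) + layers(l+1);   % weights + biases per layer
end
p0 = 0.1*randn(np, 1);                         % initial trainable parameters

alpha_of = @(p) net_alpha(p, layers, s);       % coefficient vector produced by the network
lossfun  = @(p) mean((A_D*alpha_of(p) - b(1:n1)).^2) ...     % Dirichlet residual, cf. Eq. (4)
              + mean((A_N*alpha_of(p) - b(n1+1:end)).^2);    % Neumann residual, cf. Eq. (5)

opts = optimoptions('fmincon', 'Algorithm', 'sqp', 'MaxIterations', 8000, ...
    'MaxFunctionEvaluations', 16000, 'Display', 'iter');
p_opt = fmincon(lossfun, p0, [], [], [], [], [], [], [], opts);
alpha = alpha_of(p_opt);                       % optimal source point coefficient vector

function a = net_alpha(p, layers, s)
% Unpack the parameter vector into layer weights/biases and evaluate the
% network (arctan activations) at every source point.
    X = s';  idx = 0;
    for l = 1:numel(layers)-1
        nw = layers(l+1)*layers(l);
        W  = reshape(p(idx+1:idx+nw), layers(l+1), layers(l));  idx = idx + nw;
        g  = p(idx+1:idx+layers(l+1));                          idx = idx + layers(l+1);
        X  = W*X + g;
        if l < numel(layers)-1, X = atan(X); end
    end
    a = X';
end
```

Because the coefficients enter the boundary residuals linearly through the precomputed collocation matrix, no automatic differentiation of the basis functions is required during training, which is one reason the BKNN loss is cheaper to evaluate than a conventional PINN loss.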

4. Numerical Experiments

In this section, we validate the performance of the proposed BKNN by solving the inverse Cauchy problem for the Helmholtz equation. To assess the accuracy of the algorithm, this paper employs the absolute error (AE) and mean square error (MSE) as evaluation metrics:
$$AE = \left|\hat{u}(\mathbf{x}_i) - u(\mathbf{x}_i)\right|, \tag{16}$$
$$MSE = \frac{1}{n_2}\sum_{i=1}^{n_2}\left[\hat{u}(\mathbf{x}_i) - u(\mathbf{x}_i)\right]^2, \tag{17}$$
where $u(\mathbf{x}_i)$ and $\hat{u}(\mathbf{x}_i)$ denote the analytical and numerical solutions at the $i$-th test point, respectively.
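Both metrics are straightforward to evaluate once the numerical and analytical values at the test points are available; a minimal MATLAB sketch, assuming column vectors u_hat and u_exact of the same length, is

```matlab
AE  = abs(u_hat - u_exact);          % pointwise absolute error, Equation (16)
MSE = mean((u_hat - u_exact).^2);    % mean square error over the test points, Equation (17)
```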
In the following numerical experiments, the BKNN is optimized using the fmincon function in MATLAB with the sequential quadratic programming (SQP) algorithm. The optimization options are set as follows: HessianApproximation = lbfgs, MaxIterations = 8000, MaxFunctionEvaluations = 16,000, OptimalityTolerance = 1 × 10⁻¹⁵, FunctionTolerance = 1 × 10⁻²⁰, StepTolerance = 1 × 10⁻²⁰. The activation function employed in all experiments is the arctangent function, arctan(x).

4.1. Example 1

As the first example, we consider an inverse Cauchy problem for the Helmholtz equation on a peanut-shaped domain, as shown in Figure 5. The analytical solution for this problem is given by
$$u(\mathbf{x}) = \sin(x_1) + \cos(x_2). \tag{18}$$
In this example, the wave number is k = 1, and the inaccessible boundary segment B2 constitutes 50% of the entire boundary. The proposed method employs a neural network architecture with 6 hidden layers, each containing 6 neurons. A total of 300 boundary nodes are uniformly distributed along the boundary in the angular direction, with 150 nodes designated as accessible and the remaining 150 as inaccessible. To evaluate the method's sensitivity to noise, 1%, 3%, and 5% noise levels are added to the data on the accessible boundary. The resulting impact on solution accuracy is then quantitatively analyzed.
Figure 6 presents a comparison between the analytical and numerical solutions on the inaccessible boundary segment B2 under noise levels of 1%, 3% and 5%. The results clearly demonstrate a convergent trend in solution accuracy as the noise level decreases. Notably, even under a relatively high noise level of 5%, the proposed method remains capable of obtaining reliable and accurate numerical solutions. This indicates the accuracy and robustness of the proposed approach in effectively addressing this inverse problem, even in the presence of significant measurement noise.
As illustrated in Figure 7, the BKNN solutions under a 3% noise level in the computational domain exhibit strong agreement with the analytical solutions, demonstrating the precision of the presented scheme for complex geometries. Figure 8 presents the AE distributions of both the BKM-MPI (tol = 0.001) and the BKNN across the computational domain under the same noise level. The comparative analysis clearly indicates that the BKNN consistently achieves higher solution accuracy than the BKM-MPI, even under significant noise perturbations, thereby highlighting the superior robustness and reliability of the proposed method.
Table 1 reports the MSEs of the three methods (traditional BKM, BKM-MPI with varying tolerance levels, and the proposed BKNN) under different noise levels (1%, 3%, and 5%). It is evident that the traditional BKM fails to obtain stable solutions across all noise scenarios, reflecting its sensitivity to data perturbations and limited robustness in practical settings. The BKM-MPI demonstrates improved numerical stability, with performance dependent on the specified tolerance. At a tolerance of 10⁻⁴, the BKM-MPI achieves its best result under the 1% noise level, yielding an MSE of 9.06 × 10⁻⁵; however, its accuracy significantly deteriorates as noise increases, reaching an MSE of 5.85 × 10⁻³ at the 5% noise level. Although reducing the tolerance improves accuracy to some extent, the method remains sensitive to noise, particularly at higher levels. In contrast, the proposed BKNN exhibits consistently superior performance under all noise conditions. These results highlight the robustness and accuracy of BKNN in solving inverse problems, even in the presence of substantial measurement noise. The consistent outperformance of BKNN over the BKM-MPI approach further underscores its effectiveness and reliability in practical noisy environments.

4.2. Example 2

This example investigates the impact of varying proportions of inaccessible boundary segments on the inverse Cauchy problem for the Helmholtz equation within a triangle-shaped domain. A total of 300 boundary nodes are uniformly distributed along the domain boundary in the angular direction. The accessible boundary segments are systematically modified by adjusting the number of accessible boundary nodes, and measurement noise is added to simulate uncertainty in practical data acquisition. Figure 9 depicts four different ratios of accessible boundary length relative to the total boundary: 30%, 40%, 50%, and 60%. The analytical solution is defined by Equation (19) with the wave number k = 2 .
$$u(\mathbf{x}) = x_1 \sin(2 x_2). \tag{19}$$
In Figure 10, a quantitative comparison is presented between the analytical solution and the BKNN-reconstructed solutions for both u and its normal derivative ∂u/∂n on the accessible boundary with a ratio of 30%, under noise levels of 1%, 2%, and 3%. The close agreement between the curves, particularly for lower noise levels, indicates the strong robustness and accuracy of the method even with partial and noisy boundary data. In addition, the method exhibits convergence as the noise level decreases.
Figure 11 provides a spatial visualization of the reconstruction performance. Subplots (a) and (b) show the analytical and numerical solutions, respectively, while (c) illustrates the AE distribution on a logarithmic scale. The error is predominantly concentrated near the inaccessible boundary regions, which is expected due to limited input data. Nonetheless, the overall low AE values confirm the method’s effectiveness in recovering interior field values from incomplete boundary observations.
Table 2 presents the MSEs of the BKNN for solving this problem with a 1% noise level under different accessible boundary ratios. Table 3 lists the MSEs of the BKNN for solving this problem with a 30% accessible boundary ratio under different numbers of boundary nodes. As evidenced by Table 2 and Table 3, the algorithm exhibits a general reduction in error with both increased accessible boundary length and greater numbers of boundary nodes. This trend further substantiates the effectiveness and precision of the proposed method for inverse Cauchy problems governed by the Helmholtz equation.

4.3. Example 3

This example considers a star-shaped computational domain shown in Figure 12. The problem configuration features an inaccessible boundary segment (B2) constituting 50% of the total boundary length. The analytical solution with the wave number k = 2 is
$$u(\mathbf{x}) = \sin(x_1)\sin(x_2). \tag{20}$$
In the calculation, a total of 300 boundary nodes are uniformly distributed along the boundary in the angular direction, with 150 designated as accessible boundary nodes and 150 as inaccessible boundary nodes. A 3% noise level is added to the accessible boundary data.
To investigate the impact of the BKNN architecture on numerical accuracy in solving inverse Cauchy problems, Figure 13 compares the numerical and analytical solutions obtained using networks of varying depths, with the number of neurons per layer fixed at six. Conversely, Figure 14 compares the solutions obtained with different neuron counts per layer, while keeping the network depth fixed at six layers. Specifically, when the number of layers is fixed at six, the effect of varying the number of neurons per layer from three to six on computational accuracy is evaluated. Similarly, with six neurons per layer, the impact of network depth, ranging from three to twelve layers, is assessed. As illustrated in the figures, the network architecture with six hidden layers and six neurons per layer exhibits the best performance among all tested configurations.
Using six network layers and six neurons per layer, Figure 15 compares the AE of the numerical solutions on the inaccessible boundary obtained by the conventional PINN and the proposed method. It can be seen that the BKNN yields significantly higher accuracy than the traditional PINN. This improvement is primarily attributed to the incorporation of semi-analytical basis functions in the BKNN framework, which greatly enhances computational precision.
Figure 16 illustrates the analytical solution, the numerical solution obtained using the BKNN method, and the corresponding AE for a network with six layers and six neurons per layer. As observed, the analytical and numerical solutions exhibit excellent agreement. The AE is minimal across the domain, indicating that the method provides highly accurate results. These results confirm the high accuracy and reliability of the proposed method for addressing this problem.
Table 4 lists the MSE and computational time of PINN and BKNN with a fixed neuron count of six per layer while varying the number of network layers. Table 5 reports the corresponding results for a fixed network depth of six layers with varying numbers of neurons per layer. It is first observed that the computational time increases sharply with the number of network layers, whereas changes in the number of neurons per layer have only a minor effect on the computational time. Compared to the BKNN, the PINN exhibits substantially higher computational cost. Moreover, despite both methods achieving acceptable numerical performance, the accuracy obtained by the PINN is significantly lower than that of the BKNN.

4.4. Example 4

The last example examines a computational domain with an irregular boundary, as illustrated in Figure 1. Notably, 50% of the total boundary length corresponds to an inaccessible boundary segment (B2). The analytical solution, corresponding to the specified wave number k = 1 , is given by
$$u(\mathbf{x}) = \cos\left(0.5\, x_1 + \sqrt{k^2 - 0.5^2}\, x_2\right). \tag{21}$$
In the calculation, a total of 300 boundary nodes are uniformly distributed along the boundary in the angular direction, with 150 nodes assigned to the accessible boundary and 150 to the inaccessible boundary. To evaluate the stability of the BKNN, noise levels of 1%, 2%, and 3% are introduced to the accessible boundary conditions.
Figure 17 shows a comparison between the analytical and numerical solutions on the inaccessible boundary segment under the three noise levels. The results clearly indicate that the proposed method can provide reliable and accurate numerical solutions even for irregular domains, thereby confirming the computational accuracy and robustness of the BKNN in handling such complex geometries.
Figure 18 depicts the analytical solution, the numerical solution obtained using the BKNN, and the corresponding AE within the irregular domain. As shown, the analytical and numerical solutions exhibit excellent agreement, with the AE remaining minimal throughout the domain. Overall, the findings illustrate that the BKNN effectively captures the solution behavior in complex domains, providing a robust and precise computational tool for the inverse Cauchy problem of Helmholtz equation with an irregular boundary.

5. Conclusions

In this work, we propose a novel deep learning-based meshless framework, termed the BKNN, for solving inverse Cauchy problems of the Helmholtz equation. By integrating the BKM with the PINN, the BKNN effectively addresses the ill-posedness commonly encountered in traditional inverse problem solvers. The framework employs a fully connected neural network to approximate the source point coefficient vector in the BKM formulation, enabling a direct and data-driven representation of the solution through boundary information alone. Without requiring domain meshing or numerical integration, the BKNN constructs loss functions based solely on the known boundary conditions and optimizes the network parameters to recover accurate physical quantities within the domain. Compared to conventional methods, the BKNN offers several advantages: (1) it eliminates the need for meshing and complex numerical integration, (2) it bypasses ill-conditioning issues by reformulating the problem as a neural network optimization task, and (3) it handles irregular geometries and limited boundary data with enhanced robustness.
This study provides a promising direction for extending semi-analytical meshless methods through intelligent, data-driven modeling, with potential applications to a broader class of inverse and ill-posed problems in physics and engineering. It should be noted that the proposed method can be directly applied to high-dimensional problems and complex geometrical boundaries, although its computational efficiency is relatively reduced compared to two-dimensional cases with simple geometries. Furthermore, this scheme can also be extended to solve inverse Cauchy problems of other PDEs, as long as their non-singular general solutions are available. The solution procedure remains identical, requiring only the substitution of the corresponding non-singular general solution. The aforementioned extensions will be explored in subsequent studies.

Author Contributions

Conceptualization, R.W. and F.W.; methodology, F.W.; software, L.Q.; validation, R.W., X.L., F.W. and L.Q.; formal analysis, X.L.; investigation, F.W.; data curation, R.W.; writing—original draft preparation, R.W.; writing—review and editing, F.W. and L.Q.; visualization, X.L.; supervision, F.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Shandong Province (Nos. ZR2023YQ005, ZR2023QA013) and the National Natural Science Foundation of China (Nos. 12472203, 12302263).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank the editor and the referees for their valuable comments and suggestions, which greatly improved the quality of this paper.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Gryazin, Y.A.; Klibanov, M.V.; Lucas, T.R. Two numerical methods for an inverse problem for the 2-D Helmholtz equation. J. Comput. Phys. 2003, 184, 122–148.
  2. Jin, B.; Zheng, Y. A meshless method for some inverse problems associated with the Helmholtz equation. Comput. Methods Appl. Mech. Eng. 2006, 195, 2270–2288.
  3. Aboud, F.; Jameel, I.T.; Hasan, A.F.; Mostafa, B.K.; Nachaoui, A. Polynomial approximation of an inverse Cauchy problem for Helmholtz type equations. Adv. Math. Models Appl. 2022, 7, 306–322.
  4. Belgacem, F.B. Why is the Cauchy problem severely ill-posed? Inverse Probl. 2007, 23, 823.
  5. Choulli, M. Applications of Elliptic Carleman Inequalities to Cauchy and Inverse Problems; Springer: Berlin/Heidelberg, Germany, 2016; Volume 5.
  6. Jourhmane, M.; Nachaoui, A. An alternating method for an inverse Cauchy problem. Numer. Algorithms 1999, 21, 247–260.
  7. Liu, C.-S. A modified collocation Trefftz method for the inverse Cauchy problem of Laplace equation. Eng. Anal. Bound. Elem. 2008, 32, 778–785.
  8. Ahsan, M.; Hussain, I.; Ahmad, M. A finite-difference and Haar wavelets hybrid collocation technique for non-linear inverse Cauchy problems. Appl. Math. Sci. Eng. 2022, 30, 121–140.
  9. Fan, C.-M.; Li, P.-W.; Yeih, W. Generalized finite difference method for solving two-dimensional inverse Cauchy problems. Inverse Probl. Sci. Eng. 2015, 23, 737–759.
  10. Hua, Q.; Gu, Y.; Qu, W.; Chen, W.; Zhang, C. A meshless generalized finite difference method for inverse Cauchy problems associated with three-dimensional inhomogeneous Helmholtz-type equations. Eng. Anal. Bound. Elem. 2017, 82, 162–171.
  11. Li, P.-W.; Fu, Z.-J.; Gu, Y.; Song, L. The generalized finite difference method for the inverse Cauchy problem in two-dimensional isotropic linear elasticity. Int. J. Solids Struct. 2019, 174, 69–84.
  12. Meethal, R.E.; Kodakkal, A.; Khalil, M.; Ghantasala, A.; Obst, B.; Bletzinger, K.-U.; Wüchner, R. Finite element method-enhanced neural network for forward and inverse problems. Adv. Model. Simul. Eng. Sci. 2023, 10, 6.
  13. Nicholson, D.W. On finite element analysis of an inverse problem in elasticity. Inverse Probl. Sci. Eng. 2012, 20, 735–748.
  14. Delvare, F.; Cimetière, A.; Pons, F. An iterative boundary element method for Cauchy inverse problems. Comput. Mech. 2002, 28, 291–302.
  15. Marin, L. Boundary element–minimal error method for the Cauchy problem associated with Helmholtz-type equations. Comput. Mech. 2009, 44, 205–219.
  16. Qiu, L.; Wang, F.; Qu, W.; Lin, J.; Gu, Y.; Qin, Q.H. A hybrid collocation method for long-time simulation of heat conduction in anisotropic functionally graded materials. Int. J. Numer. Methods Eng. 2025, 126, e70002.
  17. Qiu, L.; Wang, Y.; Gu, Y.; Qin, Q.-H.; Wang, F. Adaptive physics-informed neural networks for dynamic coupled thermo-mechanical problems in large-size-ratio functionally graded materials. Appl. Math. Model. 2025, 140, 115906.
  18. Wei, X.; Rao, C.; Chen, S.; Luo, W. Numerical simulation of anti-plane wave propagation in heterogeneous media. Appl. Math. Lett. 2023, 135, 108436.
  19. Chan, H.-F.; Fan, C.-M. The local radial basis function collocation method for solving two-dimensional inverse Cauchy problems. Numer. Heat Transf. Part B Fundam. 2013, 63, 284–303.
  20. Li, J. A radial basis meshless method for solving inverse boundary value problems. Commun. Numer. Methods Eng. 2004, 20, 51–61.
  21. Mostajeran, F.; Hosseini, S.M. Radial basis function neural network (RBFNN) approximation of Cauchy inverse problems of the Laplace equation. Comput. Math. Appl. 2023, 141, 129–144.
  22. Yang, J.P.; Chen, Y.-C. Gradient enhanced localized radial basis collocation method for inverse analysis of Cauchy problems. Int. J. Appl. Mech. 2020, 12, 2050107.
  23. Al-Mahdawi, H.K. Development of a numerical method for solving the inverse Cauchy problem for the heat equation. Bull. South Ural State Univ. Ser. Comput. Math. Softw. Eng. 2019, 8, 22–31.
  24. Liu, C.-S.; Wang, F. An energy method of fundamental solutions for solving the inverse Cauchy problems of the Laplace equation. Comput. Math. Appl. 2018, 75, 4405–4413.
  25. Young, D.-L.; Tsai, C.; Chen, C.; Fan, C.-M. The method of fundamental solutions and condition number analysis for inverse problems of Laplace equation. Comput. Math. Appl. 2008, 55, 1189–1200.
  26. Wang, F.; Li, X.; Liu, H.; Qiu, L.; Yue, X. An adaptive method of fundamental solutions using physics-informed neural networks. Eng. Anal. Bound. Elem. 2025, 178, 106295.
  27. Gu, Y.; Chen, W.; Fu, Z.-J. Singular boundary method for inverse heat conduction problems in general anisotropic media. Inverse Probl. Sci. Eng. 2014, 22, 889–909.
  28. Cheng, S.; Wang, F.; Li, P.-W.; Qu, W. Singular boundary method for 2D and 3D acoustic design sensitivity analysis. Comput. Math. Appl. 2022, 119, 371–386.
  29. Wei, X.; Luo, W. 2.5D singular boundary method for acoustic wave propagation. Appl. Math. Lett. 2021, 112, 106760.
  30. Sun, L.; Chen, W.; Cheng, A.H.-D. Evaluating the origin intensity factor in the singular boundary method for three-dimensional Dirichlet problems. Adv. Appl. Math. Mech. 2017, 9, 1289–1311.
  31. Wang, F.; Chen, W. Accurate empirical formulas for the evaluation of origin intensity factor in singular boundary method using time-dependent diffusion fundamental solution. Int. J. Heat Mass Transf. 2016, 103, 360–369.
  32. Wei, X.; Chen, W.; Sun, L.; Chen, B. A simple accurate formula evaluating origin intensity factor in singular boundary method for two-dimensional potential problems with Dirichlet boundary. Eng. Anal. Bound. Elem. 2015, 58, 151–165.
  33. Dehghan, M.; Mohammadi, V. The boundary knot method for the solution of two-dimensional advection reaction-diffusion and Brusselator equations. Int. J. Numer. Methods Heat Fluid Flow 2021, 31, 106–133.
  34. Hon, Y.; Chen, W. Boundary knot method for 2D and 3D Helmholtz and convection–diffusion problems under complicated geometry. Int. J. Numer. Methods Eng. 2003, 56, 1931–1948.
  35. Jin, B.; Zheng, Y. Boundary knot method for some inverse problems associated with the Helmholtz equation. Int. J. Numer. Methods Eng. 2005, 62, 1636–1651.
  36. Lei, M.; Liu, L.; Chen, C.; Zhao, W. The enhanced boundary knot method with fictitious sources for solving Helmholtz-type equations. Int. J. Comput. Math. 2023, 100, 1500–1511.
  37. Liu, L.; Lei, M.; Yue, J.H.; Niu, R.P. Modified boundary knot method for multi-dimensional harmonic-type equations. Int. J. Comput. Methods 2024, 21, 2341004.
  38. Wang, F.; Chen, W.; Jiang, X. Investigation of regularized techniques for boundary knot method. Int. J. Numer. Methods Biomed. Eng. 2010, 26, 1868–1877.
  39. Wang, F.; Ling, L.; Chen, W. Effective condition number for boundary knot method. Comput. Mater. Contin. (CMC) 2009, 12, 57.
  40. Zhang, Q.; Ji, Z.; Sun, L. An improved localized boundary knot method for 3D acoustic problems. Appl. Math. Lett. 2024, 149, 108900.
  41. Golub, G.; Kahan, W. Calculating the singular values and pseudo-inverse of a matrix. J. Soc. Ind. Appl. Math. Ser. B Numer. Anal. 1965, 2, 205–224.
  42. Cai, S.; Mao, Z.; Wang, Z.; Yin, M.; Karniadakis, G.E. Physics-informed neural networks (PINNs) for fluid mechanics: A review. Acta Mech. Sin. 2021, 37, 1727–1738.
  43. Cai, S.; Wang, Z.; Wang, S.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks for heat transfer problems. J. Heat Transf. 2021, 143, 060801.
  44. Hu, H.; Qi, L.; Chao, X. Physics-informed neural networks (PINN) for computational solid mechanics: Numerical frameworks and applications. Thin-Walled Struct. 2024, 205, 112495.
  45. Li, Y.; Hu, X. An artificial neural network approximation for Cauchy inverse problems. arXiv 2020, arXiv:2001.01413.
  46. McClenny, L.D.; Braga-Neto, U.M. Self-adaptive physics-informed neural networks. J. Comput. Phys. 2023, 474, 111722.
  47. Nechita, M. Solving ill-posed Helmholtz problems with physics-informed neural networks. J. Numer. Anal. Approx. Theory 2023, 52, 90–101.
  48. Zhang, B.; Wu, G.; Gu, Y.; Wang, X.; Wang, F. Multi-domain physics-informed neural network for solving forward and inverse problems of steady-state heat conduction in multilayer media. Phys. Fluids 2022, 34, 116116.
Figure 1. Schematic diagram of the computational domain.
Figure 2. The PINN framework for solving the inverse Cauchy problem of the Helmholtz equation.
Figure 3. Schematic of a neuron model in hidden layer.
Figure 4. The BKNN framework for solving the inverse Cauchy problem of Helmholtz equation.
Figure 5. Schematic diagram of the peanut-shaped domain.
Figure 6. Comparisons between the analytical solution and the BKNN solutions with different noise levels: (a) u; (b) ∂u/∂n.
Figure 7. Solution distributions in the computational domain: (a) analytical solution and (b) BKNN solution.
Figure 8. The AE distributions in the computational domain with 3% noise level: (a) BKM-MPI (tol = 0.001) and (b) BKNN.
Figure 9. Geometry of the triangular domain and boundary segmentation for different accessible boundary ratios: (a–d) correspond to 30%, 40%, 50%, and 60%, respectively.
Figure 10. Comparison between analytical and BKNN-reconstructed solutions under different noise levels (1–3%) with 30% accessible boundary: (a) u; (b) ∂u/∂n.
Figure 11. Comparison of analytical and BKNN-reconstructed solutions in the domain with 30% accessible boundary: (a) Analytical solution, (b) BKNN solution, (c) AE.
Figure 12. Schematic diagram of the star-shaped domain.
Figure 13. Comparison between the analytical solution and BKNN solutions with different network layers: (a) u; (b) ∂u/∂n.
Figure 14. Comparison between the analytical solution and the BKNN solutions with varying numbers of neurons: (a) u; (b) ∂u/∂n.
Figure 15. Comparison of AE distributions obtained by the PINN and the BKNN: (a) u; (b) ∂u/∂n.
Figure 16. Comparison of analytical and BKNN-reconstructed solutions in the domain using a network with six layers and six neurons per layer: (a) Analytical solution, (b) BKNN solution, (c) AE.
Figure 17. Comparisons between the analytical solution and the BKNN solutions in the irregular domain: (a) u; (b) ∂u/∂n.
Figure 18. Comparison of analytical and BKNN-reconstructed solutions in the irregular domain with 3% noise level: (a) Analytical solution, (b) BKNN solution, (c) AE.
Table 1. The MSE comparison of the traditional BKM, BKM-MPI, and BKNN under different noise levels.

| Noise | Traditional BKM | BKM-MPI (tol = 0.01) | BKM-MPI (tol = 0.03) | BKM-MPI (tol = 0.05) | BKNN |
|---|---|---|---|---|---|
| 1% | failed | 9.08 × 10⁻⁴ | 2.14 × 10⁻⁴ | 9.06 × 10⁻⁵ | 2.03 × 10⁻⁵ |
| 3% | failed | 1.29 × 10⁻³ | 2.68 × 10⁻⁴ | 3.06 × 10⁻³ | 1.22 × 10⁻⁴ |
| 5% | failed | 1.47 × 10⁻³ | 1.11 × 10⁻³ | 5.85 × 10⁻³ | 5.55 × 10⁻⁴ |
Table 2. The MSEs of BKNN with a 1% noise level under different accessible boundary ratios.

| Accessible Boundary Ratio | 30% | 40% | 50% | 60% |
|---|---|---|---|---|
| MSE | 1.42 × 10⁻³ | 5.09 × 10⁻⁴ | 2.44 × 10⁻⁴ | 3.30 × 10⁻⁵ |
Table 3. The MSEs of BKNN with a 30% accessible boundary under different numbers of boundary nodes.

| Number of Boundary Nodes | 60 | 90 | 120 | 150 |
|---|---|---|---|---|
| MSE | 1.62 × 10⁻³ | 1.42 × 10⁻³ | 8.71 × 10⁻⁴ | 5.53 × 10⁻⁴ |
Table 4. MSE and time of the different neural network layers by the PINN and the BKNN.

| Number of Network Layers | PINN MSE | PINN Time (s) | BKNN MSE | BKNN Time (s) |
|---|---|---|---|---|
| 3 | 9.79 × 10⁻⁵ | 46.64 | 8.62 × 10⁻⁵ | 33.69 |
| 6 | 1.14 × 10⁻⁵ | 83.44 | 1.04 × 10⁻⁶ | 49.98 |
| 9 | 7.03 × 10⁻⁵ | 147.52 | 2.25 × 10⁻⁵ | 64.89 |
| 12 | 9.94 × 10⁻⁴ | 199.67 | 1.07 × 10⁻⁴ | 83.10 |
Table 5. MSE and time of the different neurons by the PINN and the BKNN.

| Number of Neurons | PINN MSE | PINN Time (s) | BKNN MSE | BKNN Time (s) |
|---|---|---|---|---|
| 3 | 8.95 × 10⁻³ | 82.17 | 1.38 × 10⁻³ | 43.37 |
| 4 | 7.44 × 10⁻⁴ | 83.45 | 1.71 × 10⁻⁴ | 47.79 |
| 5 | 1.32 × 10⁻⁴ | 82.17 | 2.45 × 10⁻⁵ | 47.37 |
| 6 | 1.14 × 10⁻⁵ | 83.44 | 1.04 × 10⁻⁶ | 49.98 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
