Article

Initial Boundary Value Problem for the Coupled Kundu Equations on the Half-Line

1 College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
2 Department of Fundamental Course, Shandong University of Science and Technology, Taian 271019, China
* Author to whom correspondence should be addressed.
Axioms 2024, 13(9), 579; https://doi.org/10.3390/axioms13090579
Submission received: 28 June 2024 / Revised: 16 August 2024 / Accepted: 23 August 2024 / Published: 26 August 2024
(This article belongs to the Topic Advances in Nonlinear Dynamics: Methods and Applications)

Abstract

In this article, the coupled Kundu equations are analyzed on the half-line using the Fokas unified method. We formulate a Riemann–Hilbert (RH) problem whose solution yields a representation of the potential functions of the coupled Kundu equations. The jump matrix is obtained from the spectral matrix, which is determined by the initial value data and the boundary value data. The findings indicate that these spectral functions are not mutually independent: they are interdependent, adhere to a global relation, and are connected by a compatibility condition.

1. Introduction

In 1967, Gardner et al. [1] put forward a method for analyzing the initial value problem with decaying initial data in their study of the KdV equation. This approach is the inverse scattering method, also referred to as the non-linear Fourier transform method [2,3]. While the inverse scattering method is a valuable technique for solving integrable equations, its applicability may be limited for certain non-linear equations due to variations in the initial boundary values [4,5]. In practice, the solution of a non-linear equation is determined by initial boundary value conditions rather than by initial value conditions alone, in which case the inverse scattering method is not directly applicable. Therefore, a novel approach is needed for analyzing initial boundary value problems. In 1997, Fokas [6] introduced a new approach, based on the Riemann–Hilbert problem, for tackling initial boundary value problems of integrable non-linear partial differential equations. Fokas et al. [7,8] extended the inverse scattering method so that it analyzes boundary value conditions as well as initial value conditions, just as the inverse scattering method analyzes initial value conditions. The Fokas method transforms the initial boundary value problem into a Riemann–Hilbert problem in the complex plane of the spectral parameter. However, in this analysis the boundary values are interdependent: they cannot all be prescribed freely, since changing any one of them changes the others. Therefore, in order to formulate the corresponding Riemann–Hilbert problem accurately, it is necessary to consider the admissible combinations of initial and boundary values and to choose an appropriate set accordingly. After nearly 25 years of development, Fokas et al.
[7,8], Boutet de Monvel et al. [9,10,11], Lenells [12,13], Lenells and Fokas [14,15], Fan [16], Xu and Fan [17,18], Chen et al. [19,20], Zhao et al. [21], Zhang et al. [22,23], Li and Xia [24], Wen et al. [25], and Hu et al. [26,27,28] have studied and extended the Fokas method.
In this study, we mainly discuss the coupled Kundu equations [29]:
$$i q_y + q_{xx} + 2i(2\beta-1)\, q r\, q_x + i(4\beta-1)\, q^2 r_x + \beta(4\beta-1)\, q^3 r^2 = 0,$$
$$i r_y - r_{xx} + 2i(2\beta-1)\, q r\, r_x + i(4\beta-1)\, r^2 q_x - \beta(4\beta-1)\, r^3 q^2 = 0,$$
where $q$ and $r$ are complex-valued functions, and $\beta$ is a constant. Supposing $r = -q^*$, Equation (1) can be written as
$$i q_y + q_{xx} - 2i(2\beta-1)|q|^2 q_x - i(4\beta-1)\, q^2 q^*_x + \beta(4\beta-1)|q|^4 q = 0,$$
where q * denotes the complex conjugate of q. When β = 0 , we can obtain the Kaup–Newell equation [30]:
$$i q_y + q_{xx} + 2i|q|^2 q_x + i q^2 q^*_x = 0.$$
When $\beta = \frac{1}{4}$, we obtain the Chen–Lee–Liu equation [23]:
$$i q_y + q_{xx} + i|q|^2 q_x = 0.$$
When $\beta = \frac{1}{2}$, we obtain the Gerdjikov–Ivanov equation [31]:
$$i q_y + q_{xx} - i q^2 q^*_x + \frac{1}{2}|q|^4 q = 0.$$
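These three reductions can be checked mechanically. The following snippet is not part of the original paper; it is a sketch assuming the signed form of the reduced equation, with $q$, its conjugate, and their derivatives represented by independent symbols:

```python
import sympy as sp

# Treat q, its conjugate qc, and their derivatives as independent symbols;
# this suffices for checking how the beta-dependent coefficients specialize.
qy, qxx, q, qc, qx, qcx, beta = sp.symbols('qy qxx q qc qx qcx beta')
I = sp.I
absq2 = q * qc  # stands in for |q|^2

# Left-hand side of the reduced Kundu equation (signed reconstruction)
kundu = (I*qy + qxx - 2*I*(2*beta - 1)*absq2*qx
         - I*(4*beta - 1)*q**2*qcx + beta*(4*beta - 1)*absq2**2*q)

# beta = 0: Kaup-Newell,  i q_y + q_xx + 2i|q|^2 q_x + i q^2 q*_x = 0
kaup_newell = I*qy + qxx + 2*I*absq2*qx + I*q**2*qcx
assert sp.simplify(kundu.subs(beta, 0) - kaup_newell) == 0

# beta = 1/4: Chen-Lee-Liu,  i q_y + q_xx + i|q|^2 q_x = 0
chen_lee_liu = I*qy + qxx + I*absq2*qx
assert sp.simplify(kundu.subs(beta, sp.Rational(1, 4)) - chen_lee_liu) == 0

# beta = 1/2: Gerdjikov-Ivanov,  i q_y + q_xx - i q^2 q*_x + (1/2)|q|^4 q = 0
gerdjikov_ivanov = I*qy + qxx - I*q**2*qcx + sp.Rational(1, 2)*absq2**2*q
assert sp.simplify(kundu.subs(beta, sp.Rational(1, 2)) - gerdjikov_ivanov) == 0

print("all three reductions check out")
```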
The above three equations have been studied using the Fokas method; however, the Fokas method has yet to be applied in the study of the coupled Kundu equations. Therefore, we intend to use the Fokas method to analyze the coupled Kundu equations.
In Figure 1, the sides $\{-\infty < x \le 0,\ y = L\}$, $\{x = 0,\ 0 \le y \le L\}$, and $\{-\infty < x \le 0,\ y = 0\}$ are called sides $(\mathrm{I})$, $(\mathrm{II})$, and $(\mathrm{III})$, respectively.
According to [29], we assume the existence of a solution $q(x,y)$, $r(x,y)$ of the coupled Kundu equations. We proceed to define the initial values $\{q_0(x), r_0(x)\}$ as follows:
$$q_0(x) = q(x,0), \quad -\infty < x < 0,$$
$$r_0(x) = r(x,0), \quad -\infty < x < 0,$$
the Dirichlet boundary values as
$$h_0(y) = q(0,y), \quad 0 < y < L,$$
$$g_0(y) = r(0,y), \quad 0 < y < L,$$
and the Neumann boundary values as
$$h_1(y) = q_x(0,y), \quad 0 < y < L,$$
$$g_1(y) = r_x(0,y), \quad 0 < y < L.$$
The solution of the equation can then be represented via a matrix Riemann–Hilbert problem in the complex plane.
The applications of the coupled Kundu equations span various fields, including non-linear optics and plasma physics. In the field of non-linear optics [32,33,34], there is significant interest in studying the transmission of femtosecond optical pulses, particularly in communication and flow control routing systems. In this context, it has become imperative to employ models based on the coupled Kundu equations. In optical fibers, the coupled Kundu equations can be applied as a model for long-range transmission systems with high data rates in single-mode fibers, specifically for the propagation of optical solitons. In plasma physics [35,36], the coupled Kundu equations can be used to study the stability of finite-amplitude circular-polarized waves propagating parallel to magnetic fields, enabling the creation of a model for an Alfvén wave that propagates parallel to the surrounding magnetic field. In this model, q represents the perturbation in the transverse magnetic field, while x and y correspondingly denote spatial and temporal coordinates.
Lenells [12,13] conducted an analysis of the initial boundary value problem and the Riemann–Hilbert problem on the half-line. In this study, we utilize the Fokas method to extend Lenells' analysis of the initial boundary value problem to the interval $(-\infty, 0]$. The arrangement of this paper is outlined below. In Section 2, we provide conclusions related to the coupled Kundu equations, and partition the characteristic function to discuss the relationship between each partition. In Section 3, the jump matrix, with explicit $(x,y)$-dependence, is expressed in terms of the spectral functions $\{s(\eta), t(\eta)\}$ and $\{S(\eta), T(\eta)\}$, which are defined using the initial data (7), (8) and the boundary data (9)–(12); the jump occurs across the contour $\{\operatorname{Im}\eta^4 = 0\}$. The spectral functions exhibit interdependence and are connected by a consistency requirement known as the global relation. In Section 4, the main content of this paper is summarized, and future work is detailed.

2. Basic Riemann–Hilbert Problem

In this section, we present the Lax pairs of equations and the blocks located in the complex plane, along with the jump matrix that connects these distinct blocks.

2.1. Formulas and Symbols

  • The Pauli matrices are written as $\mu_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$, $\mu_+ = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, and $\mu_- = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$.
  • The matrices $S$ and $T$ are $2\times 2$ matrices, and $[S,T]$ denotes their commutator; that is, $[S,T] = ST - TS$.
  • The matrix commutator operator is $\hat{\mu}_3$, defined by $\hat{\mu}_3 S = [\mu_3, S]$, so that $e^{\hat{\mu}_3} S = e^{\mu_3}\, S\, e^{-\mu_3}$.
  • The complex conjugate of a function is represented by an overline, for example, $\overline{s(\eta)}$.
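The identity $e^{\hat{\mu}_3} S = e^{\mu_3} S e^{-\mu_3}$ can be confirmed numerically; the sketch below (an illustration with a hypothetical test matrix, not taken from the paper) sums the exponential series $\sum_k \hat{\mu}_3^k S / k!$ and compares it with the conjugation formula:

```python
import numpy as np

# mu3 = diag(1, -1) and a generic (hypothetical) test matrix S
mu3 = np.diag([1.0, -1.0])
S = np.array([[0.3 + 0.1j, 1.2 - 0.4j], [0.7j, -0.5 + 0.2j]])

def ad_mu3(A):
    """The commutator operator mu3-hat: A -> [mu3, A]."""
    return mu3 @ A - A @ mu3

# exp(mu3-hat) S from the exponential series sum_k (mu3-hat)^k S / k!
series = np.zeros_like(S)
term = S.copy()
for k in range(40):
    series += term
    term = ad_mu3(term) / (k + 1)

# Conjugation formula: e^{mu3} S e^{-mu3}; here e^{mu3} = diag(e, 1/e)
e_mu3 = np.diag([np.e, 1.0 / np.e])
e_mu3_inv = np.diag([1.0 / np.e, np.e])
conj = e_mu3 @ S @ e_mu3_inv
assert np.allclose(series, conj)
print("e^{mu3-hat} S equals e^{mu3} S e^{-mu3}")
```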

2.2. Lax Pair

Lax pairs for the coupled Kundu equations have been obtained in [29]. The coupled Kundu equations satisfy the following Lax pair conditions for arbitrary $\eta \in \mathbb{C}$:
$$\partial_x\Psi(x,y;\eta) = F(x,y;\eta)\,\Psi(x,y;\eta), \qquad \partial_y\Psi(x,y;\eta) = G(x,y;\eta)\,\Psi(x,y;\eta),$$
where
$$F(x,y;\eta) = \eta\left(-i\eta\mu_3 + q\mu_+ + r\mu_-\right) - i\beta\, q r\, \mu_3,$$
$$G(x,y;\eta) = 2\eta^2\left[-i\mu_3\eta^2 + (q\mu_+ + r\mu_-)\eta - \tfrac{i}{2}\, q r\, \mu_3\right] + \eta\left[(1-2\beta)\, q r\, (q\mu_+ + r\mu_-) + i(q_x\mu_+ - r_x\mu_-)\right] + \left[\left(4\beta^2 i - \tfrac{3}{2}\beta i\right) q^2 r^2 - \beta(q r_x - q_x r)\right]\mu_3.$$
In particular, q and r are arbitrary complex functions of x and y, while β is a constant. According to [29], we can obtain the Lax pair equation
$$\psi_x + i\eta^2\mu_3\psi = \left[\eta Q - i\beta Q^2\mu_3\right]\psi,$$
$$\psi_y + 2i\eta^4\mu_3\psi = \left[2\eta^3 Q - i\eta^2 Q^2\mu_3 - \eta(2\beta-1)Q^3 - i\eta Q_x\mu_3 + P\right]\psi,$$
where
$$Q = q\mu_+ + r\mu_-, \quad P = \left(4\beta^2 i - \tfrac{3}{2}\beta i\right)Q^4\mu_3 - \beta[Q, Q_x], \quad Q = \begin{pmatrix} 0 & q \\ r & 0 \end{pmatrix}, \quad Q_x = \begin{pmatrix} 0 & q_x \\ r_x & 0 \end{pmatrix}.$$
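The matrix algebra used repeatedly below, $Q^2 = qr\,I$, $Q^3 = qr\,Q$, and $[Q, Q_x] = (qr_x - q_x r)\mu_3$ (which is why $P$ is proportional to $\mu_3$), can be verified symbolically. A short check, added as a convenience:

```python
import sympy as sp

q, r, qx, rx = sp.symbols('q r q_x r_x')
mu3 = sp.Matrix([[1, 0], [0, -1]])
Q  = sp.Matrix([[0, q],  [r,  0]])
Qx = sp.Matrix([[0, qx], [rx, 0]])

# Q^2 = qr I and Q^3 = qr Q: powers of Q stay in the span of {I, Q}
assert sp.simplify(Q**2 - q*r*sp.eye(2)) == sp.zeros(2)
assert sp.simplify(Q**3 - q*r*Q) == sp.zeros(2)

# [Q, Q_x] = (q r_x - q_x r) mu3: the commutator is diagonal,
# which is why the matrix P is proportional to mu3
comm = Q*Qx - Qx*Q
assert sp.simplify(comm - (q*rx - qx*r)*mu3) == sp.zeros(2)
print("Q^2, Q^3 and [Q, Q_x] identities verified")
```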
Assuming
$$\psi = \Psi\, e^{-i(\eta^2 x + 2\eta^4 y)\mu_3}, \quad -\infty < x \le 0, \quad 0 \le y \le L,$$
we obtain the corresponding Lax pair
$$\Psi_x + i\eta^2[\mu_3, \Psi] = \left[\eta Q - i\beta Q^2\mu_3\right]\Psi,$$
$$\Psi_y + 2i\eta^4[\mu_3, \Psi] = \left[2\eta^3 Q - i\eta^2 Q^2\mu_3 - \eta(2\beta-1)Q^3 - i\eta Q_x\mu_3 + P\right]\Psi.$$
The Lax pair can be expressed as
$$d\left(e^{i(\eta^2 x + 2\eta^4 y)\hat{\mu}_3}\psi(x,y;\eta)\right) = e^{i(\eta^2 x + 2\eta^4 y)\hat{\mu}_3}\, V(x,y;\eta)\,\psi, \quad -\infty < x \le 0, \quad 0 \le y \le L,$$
where
$$V(x,y;\eta) = V_1(x,y;\eta)\,dx + V_2(x,y;\eta)\,dy,$$
$$V_1(x,y;\eta) = \eta Q - i\beta Q^2\mu_3, \qquad V_2(x,y;\eta) = 2\eta^3 Q - i\eta^2 Q^2\mu_3 - \eta\left(i Q_x\mu_3 + (2\beta-1)Q^3\right) + P.$$
To solve the inverse spectral problem using the Riemann–Hilbert method, our objective is to obtain a spectral solution that approaches the identity matrix as $\eta \to \infty$. As $\psi$ does not satisfy this property, we transform (19) to obtain the anticipated asymptotic behavior using the Lenells method [12,13].

2.3. Spectral Analysis and Asymptotic Analysis

Assume that the following is the form of a solution to (19):
$$\psi(x,y;\eta) = \Omega_0 + \frac{\Omega_1}{\eta} + \frac{\Omega_2}{\eta^2} + \frac{\Omega_3}{\eta^3} + O\!\left(\frac{1}{\eta^4}\right), \quad \eta \to \infty,$$
where $\Omega_0$, $\Omega_1$, $\Omega_2$, $\Omega_3$ are independent of $\eta$. Substituting this expansion into the first expression in (16), we obtain
$$O(\eta^2):\ i[\mu_3, \Omega_0] = 0, \qquad O(\eta):\ i[\mu_3, \Omega_1] = Q\Omega_0, \qquad O(1):\ \Omega_{0x} + i[\mu_3, \Omega_2] = Q\Omega_1 - i\beta Q^2\mu_3\Omega_0.$$
$\Omega_0$ is therefore a diagonal matrix. Let $\Omega_0 = \begin{pmatrix} \Omega_0^{11} & 0 \\ 0 & \Omega_0^{22} \end{pmatrix}$. Utilizing the $O(\eta)$ relation, we obtain the expression
$$\Omega_1^{(o)} = \begin{pmatrix} 0 & -\frac{i}{2}\, q\,\Omega_0^{22} \\ \frac{i}{2}\, r\,\Omega_0^{11} & 0 \end{pmatrix},$$
where Ω 1 ( o ) is the off-diagonal part of Ω 1 . Utilizing O ( 1 ) , we obtain
$$\Omega_{0x} = \left(\tfrac{1}{2} - \beta\right) i\, q r\, \mu_3\, \Omega_0.$$
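The coefficients just obtained can be cross-checked symbolically. The sketch below is a consistency check, assuming the off-diagonal entries of $\Omega_1$ are $-\tfrac{i}{2}q\,\Omega_0^{22}$ and $\tfrac{i}{2}r\,\Omega_0^{11}$; it confirms the $O(\eta)$ relation and that the diagonal part of $Q\Omega_1 - i\beta Q^2\mu_3\Omega_0$ equals $(\tfrac{1}{2}-\beta)iqr\,\mu_3\Omega_0$:

```python
import sympy as sp

q, r, beta, w11, w22 = sp.symbols('q r beta Omega11 Omega22')
mu3 = sp.Matrix([[1, 0], [0, -1]])
Q = sp.Matrix([[0, q], [r, 0]])
Om0 = sp.Matrix([[w11, 0], [0, w22]])

# Off-diagonal part of Omega_1 as read off from the O(eta) relation
Om1 = sp.Matrix([[0, -sp.I/2*q*w22], [sp.I/2*r*w11, 0]])

# O(eta):  i [mu3, Omega_1] = Q Omega_0
assert sp.simplify(sp.I*(mu3*Om1 - Om1*mu3) - Q*Om0) == sp.zeros(2)

# Diagonal part of the O(1) right-hand side Q Omega_1 - i beta Q^2 mu3 Omega_0...
rhs = Q*Om1 - sp.I*beta*Q**2*mu3*Om0
diag_rhs = sp.Matrix([[rhs[0, 0], 0], [0, rhs[1, 1]]])
# ...equals (1/2 - beta) i q r mu3 Omega_0, i.e. the equation for Omega_0x
target = (sp.Rational(1, 2) - beta)*sp.I*q*r*mu3*Om0
assert sp.simplify(diag_rhs - target) == sp.zeros(2)
print("expansion coefficients consistent")
```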
Substituting the previously mentioned expansion into the second equation of (14), we arrive at
$$O(\eta^4):\ 2i[\mu_3, \Omega_0] = 0, \qquad O(\eta^3):\ 2i[\mu_3, \Omega_1] = 2Q\Omega_0, \qquad O(\eta^2):\ 2i[\mu_3, \Omega_2] = 2Q\Omega_1 - iQ^2\mu_3\Omega_0,$$
$$O(\eta):\ 2i[\mu_3, \Omega_3] = -\left((2\beta-1)Q^3 + iQ_x\mu_3\right)\Omega_0 + 2Q\Omega_2 - iQ^2\mu_3\Omega_1,$$
$$O(1):\ \Omega_{0y} = 2Q\Omega_3 - iQ^2\mu_3\Omega_2 - \left((2\beta-1)Q^3 + iQ_x\mu_3\right)\Omega_1 + \left(\left(4\beta^2 i - \tfrac{3}{2}\beta i\right)Q^4\mu_3 - \beta[Q, Q_x]\right)\Omega_0.$$
The following relation can be obtained from the $O(\eta)$ terms:
$$2Q\Omega_3^{(o)} - iQ^2\Omega_2^{(d)}\mu_3 = \tfrac{1}{2}Q^3\Omega_1^{(o)} - \tfrac{i}{2}(2\beta-1)Q^4\Omega_0\mu_3 + \tfrac{1}{2}QQ_x\Omega_0,$$
in which the off-diagonal component of $\Omega_3$ is denoted by $\Omega_3^{(o)}$ and the diagonal part of $\Omega_2$ is denoted by $\Omega_2^{(d)}$. Utilizing (21) together with the $O(1)$ relation, we can derive the desired result
$$\Omega_{0y} = \left[\left(-\tfrac{5}{2}\beta + \tfrac{1}{2} + 4\beta^2\right) i\, q^2 r^2 + \left(\tfrac{1}{2} - \beta\right)\left(q r_x - q_x r\right)\right]\mu_3\,\Omega_0.$$
Using (20) and (22), we obtain the following definition
$$\Omega_0(x,y) = \exp\left(i\int_{(x_0,y_0)}^{(x,y)} \Delta\,\mu_3\right),$$
where $\Delta(x,y) = \Delta_1(x,y)\,dx + \Delta_2(x,y)\,dy$, $\Delta_1(x,y) = \left(\tfrac{1}{2}-\beta\right) q r$, $\Delta_2(x,y) = \left(-\tfrac{5}{2}\beta + \tfrac{1}{2} + 4\beta^2\right) q^2 r^2 - \left(\tfrac{1}{2}-\beta\right) i\left(q r_x - q_x r\right)$, and $(x_0,y_0) \in D$. To facilitate computations, we choose $(x_0,y_0) = (0,0)$.
As the value of the integral (23) remains unaffected by the chosen path, we can introduce a new function γ ( x , y ; η ) in the following manner:
$$\psi(x,y;\eta) = e^{i\int_{(0,0)}^{(x,y)}\Delta\,\hat{\mu}_3}\,\gamma(x,y;\eta)\,\Omega_0(x,y), \quad -\infty < x \le 0, \quad 0 \le y \le L.$$
Then,
$$\gamma(x,y;\eta) = I + O\!\left(\frac{1}{\eta}\right), \quad \eta \to \infty.$$
After performing a straightforward computation, we obtain from the Lax pair (19)
$$d\left(e^{i(\eta^2 x + 2\eta^4 y)\hat{\mu}_3}\gamma(x,y;\eta)\right) = W(x,y;\eta), \quad \eta \in \mathbb{C},$$
where
$$W(x,y;\eta) = e^{i(\eta^2 x + 2\eta^4 y)\hat{\mu}_3}\, H(x,y;\eta)\,\gamma(x,y;\eta),$$
$$H(x,y;\eta) = H_1(x,y;\eta)\,dx + H_2(x,y;\eta)\,dy = e^{i\int_{(0,0)}^{(x,y)}\Delta\,\hat{\mu}_3}\left(V(x,y;\eta) - i\Delta\mu_3\right).$$
Given the definition of V ( x , y ; η ) and Δ , it is possible to derive both H 1 and H 2 :
$$H_1(x,y;\eta) = \begin{pmatrix} -\frac{i}{2}\, q r & \eta q\, e^{2i\int_{(0,0)}^{(x,y)}\Delta} \\ \eta r\, e^{-2i\int_{(0,0)}^{(x,y)}\Delta} & \frac{i}{2}\, q r \end{pmatrix},$$
$$H_2(x,y;\eta) = \begin{pmatrix} -i\eta^2 q r + \left(\beta - \frac{1}{2}\right) i\, q^2 r^2 - \frac{1}{2}(q r_x - q_x r) & \left(2\eta^3 q + \eta\left(i q_x - (2\beta-1) q^2 r\right)\right) e^{2i\int_{(0,0)}^{(x,y)}\Delta} \\ \left(2\eta^3 r - \eta\left(i r_x + (2\beta-1) q r^2\right)\right) e^{-2i\int_{(0,0)}^{(x,y)}\Delta} & i\eta^2 q r - \left(\beta - \frac{1}{2}\right) i\, q^2 r^2 + \frac{1}{2}(q r_x - q_x r) \end{pmatrix}.$$
The expression for γ ( x , y ; η ) in (25) can be derived as
$$\gamma_x + i\eta^2[\mu_3, \gamma] = H_1\gamma, \qquad \gamma_y + 2i\eta^4[\mu_3, \gamma] = H_2\gamma,$$
where $-\infty < x \le 0$, $0 \le y \le L$, $\eta \in \mathbb{C}$.

2.4. Eigenfunctions and Their Relations

We examine the scenario where the domain $\Omega$ is $\{-\infty < x \le 0,\ 0 \le y \le L\}$, with $L$ representing a fixed positive value. According to [6], we define three solutions of (26): the $\gamma_k(x,y;\eta)$ are $2\times 2$ matrix-valued functions determined by
$$\gamma_k(x,y;\eta) = I + \int_{(x_k,y_k)}^{(x,y)} e^{-i(\eta^2 x + 2\eta^4 y)\hat{\mu}_3}\, W(\xi, \tau; \eta), \quad -\infty < x < 0, \quad 0 < y < L,$$
where $I$ is the identity matrix; that is, $I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.
As shown in Figure 2, the integral is taken along a continuous curve connecting $(x_k, y_k)$ and $(x,y)$, with base points $(x_1,y_1) = (-\infty, y)$, $(x_2,y_2) = (0,0)$, $(x_3,y_3) = (0,L)$.
The value of the integral remains unaffected by the chosen integration path, according to the principles of mathematical analysis, while γ 1 , γ 2 , and γ 3 are mutually independent. The functions γ 1 , γ 2 , and γ 3 are determined based on various trajectories in the complex η -plane. Revisiting the concept introduced in [6], we choose three distinct routes that lead to the coordinates ( x , y ) , as depicted in Figure 3.
Based on the above analysis, we obtain
$$\gamma_1(x,y;\eta) = I + \int_{-\infty}^{x} e^{i\eta^2(\xi - x)\hat{\mu}_3}(H_1\gamma_1)(\xi, y; \eta)\,d\xi,$$
$$\gamma_2(x,y;\eta) = I - \int_{x}^{0} e^{i\eta^2(\xi - x)\hat{\mu}_3}(H_1\gamma_2)(\xi, y; \eta)\,d\xi + e^{i\eta^2 x\hat{\mu}_3}\int_{0}^{y} e^{2i\eta^4(\tau - y)\hat{\mu}_3}(H_2\gamma_2)(0, \tau; \eta)\,d\tau,$$
$$\gamma_3(x,y;\eta) = I - \int_{x}^{0} e^{i\eta^2(\xi - x)\hat{\mu}_3}(H_1\gamma_3)(\xi, y; \eta)\,d\xi - e^{i\eta^2 x\hat{\mu}_3}\int_{y}^{L} e^{2i\eta^4(\tau - y)\hat{\mu}_3}(H_2\gamma_3)(0, \tau; \eta)\,d\tau.$$
Depending on the path we choose, we obtain the following inequalities:
$$(x_1,y_1) \to (x,y):\ -\infty < \xi < x; \qquad (x_2,y_2) \to (x,y):\ x < \xi < 0,\ 0 < \tau < y; \qquad (x_3,y_3) \to (x,y):\ x < \xi < 0,\ y < \tau < L.$$
As we have discovered, the exponentials $e^{-2i\eta^2(\xi - x)}$ and $e^{-4i\eta^4(\tau - y)}$ are involved in the first column of the matrix (28). Utilizing the aforementioned inequalities, it can be determined where these exponentials are bounded. Consequently, we are able to divide the complex $\eta$-plane into eight distinct regions.
$$\gamma_1^{(1)}(x,y;\eta):\ (x_1,y_1) \to (x,y):\ \{\operatorname{Im}\eta^2 \ge 0\},$$
$$\gamma_2^{(1)}(x,y;\eta):\ (x_2,y_2) \to (x,y):\ \{\operatorname{Im}\eta^2 \le 0\} \cap \{\operatorname{Im}\eta^4 \ge 0\},$$
$$\gamma_3^{(1)}(x,y;\eta):\ (x_3,y_3) \to (x,y):\ \{\operatorname{Im}\eta^2 \le 0\} \cap \{\operatorname{Im}\eta^4 \le 0\}.$$
The exponentials in the second column are the reciprocals of those in the first column of (28). Thus, the boundedness regions for the second column are the opposite of those for the first column; the second column is bounded in
$$\gamma_1^{(2)}(x,y;\eta):\ (x_1,y_1) \to (x,y):\ \{\operatorname{Im}\eta^2 \le 0\},$$
$$\gamma_2^{(2)}(x,y;\eta):\ (x_2,y_2) \to (x,y):\ \{\operatorname{Im}\eta^2 \ge 0\} \cap \{\operatorname{Im}\eta^4 \le 0\},$$
$$\gamma_3^{(2)}(x,y;\eta):\ (x_3,y_3) \to (x,y):\ \{\operatorname{Im}\eta^2 \ge 0\} \cap \{\operatorname{Im}\eta^4 \ge 0\}.$$
Next, we acquire
$$\gamma_1(x,y;\eta) = \left(\gamma_1^{\Omega_1\Omega_2}(x,y;\eta),\ \gamma_1^{\Omega_3\Omega_4}(x,y;\eta)\right), \quad \gamma_2(x,y;\eta) = \left(\gamma_2^{\Omega_3}(x,y;\eta),\ \gamma_2^{\Omega_2}(x,y;\eta)\right), \quad \gamma_3(x,y;\eta) = \left(\gamma_3^{\Omega_4}(x,y;\eta),\ \gamma_3^{\Omega_1}(x,y;\eta)\right).$$
The $\gamma_k$ functions act as the essential eigenfunctions necessary for formulating a Riemann–Hilbert problem in the complex $\eta$-plane. We introduce the sets $\Omega_i$ ($i = 1,2,3,4$) in the complex $\eta$-plane as $\Omega_i = \epsilon_i \cup (-\epsilon_i)$, where $\epsilon_i = \left\{\eta \in \mathbb{C} \,\middle|\, k\pi + \frac{i-1}{4}\pi < \operatorname{Arg}\eta < k\pi + \frac{i}{4}\pi\right\}$, $-\epsilon_i = \{\eta \in \mathbb{C} \mid -\eta \in \epsilon_i\}$, $i = 1,2,3,4$, $k = 0, \pm 1, \pm 2, \ldots$ (see Figure 4).
However, the functions γ 2 ( 0 , y ; η ) , γ 3 ( 0 , y ; η ) , γ 2 ( x , 0 ; η ) , γ 3 ( x , L ; η ) , γ 2 ( 0 , L ; η ) , and γ 3 ( 0 , 0 ; η ) are bounded in larger domains:
$$\gamma_2(0,y;\eta) = \left(\gamma_2^{\Omega_1\Omega_3}(0,y;\eta),\ \gamma_2^{\Omega_2\Omega_4}(0,y;\eta)\right), \qquad \gamma_3(0,y;\eta) = \left(\gamma_3^{\Omega_2\Omega_4}(0,y;\eta),\ \gamma_3^{\Omega_1\Omega_3}(0,y;\eta)\right),$$
$$\gamma_2(x,0;\eta) = \left(\gamma_2^{\Omega_3\Omega_4}(x,0;\eta),\ \gamma_2^{\Omega_1\Omega_2}(x,0;\eta)\right), \qquad \gamma_3(x,L;\eta) = \left(\gamma_3^{\Omega_3\Omega_4}(x,L;\eta),\ \gamma_3^{\Omega_1\Omega_2}(x,L;\eta)\right),$$
$$\gamma_2(0,L;\eta) = \left(\gamma_2^{\Omega_1\Omega_3}(0,L;\eta),\ \gamma_2^{\Omega_2\Omega_4}(0,L;\eta)\right), \qquad \gamma_3(0,0;\eta) = \left(\gamma_3^{\Omega_2\Omega_4}(0,0;\eta),\ \gamma_3^{\Omega_1\Omega_3}(0,0;\eta)\right).$$
The $\gamma_k$ satisfy $\gamma_k(x,y;\eta) = I + O\!\left(\frac{1}{\eta}\right)$ as $\eta \to \infty$, $k = 1, 2, 3$.

2.5. The Spectral Functions and Their Propositions

Analyzing the distinct regions $\Omega_i$ ($i = 1,2,3,4$) and calculating the jump matrices connecting them, one can derive the Riemann–Hilbert problem. The jump matrix can be determined from two spectral functions, $c(\eta)$ and $C(\eta)$, both of which are $2\times 2$ matrices. Their structure is as follows:
$$\gamma_1(x,y;\eta) = \gamma_2(x,y;\eta)\, e^{-i(\eta^2 x + 2\eta^4 y)\hat{\mu}_3}\, c(\eta),$$
$$\gamma_3(x,y;\eta) = \gamma_2(x,y;\eta)\, e^{-i(\eta^2 x + 2\eta^4 y)\hat{\mu}_3}\, C(\eta).$$
Calculating (32) at ( x , y ) = ( 0 , 0 ) , one obtains
c ( η ) = γ 1 ( 0 , 0 ; η ) .
Meanwhile, calculating (33) at $(x,y) = (0,0)$,
C ( η ) = γ 3 ( 0 , 0 ; η ) .
When (33) is evaluated at $(x,y) = (0,L)$, it is implied that
$$C(\eta) = \left(e^{2i\eta^4 L\hat{\mu}_3}\gamma_2(0,L;\eta)\right)^{-1}, \qquad C(\eta)^{-1} = e^{2i\eta^4 L\hat{\mu}_3}\gamma_2(0,L;\eta).$$
Furthermore, (32) and (33) imply
$$\gamma_3(x,y;\eta) = \gamma_1(x,y;\eta)\, e^{-i(\eta^2 x + 2\eta^4 y)\hat{\mu}_3}\left((c(\eta))^{-1} C(\eta)\right),$$
which will result in the global relation.
Then, evaluating the function γ 1 ( x , 0 , η ) at x = 0 and the function γ 2 ( 0 , y , η ) at y = L , we can obtain the functions c ( η ) and C ( η ) . Moreover, these functions, denoted as γ k ( x , y ; η ) (where k = 1 ,   2 ,   3 ), adhere to the subsequent formulas:
$$\gamma_1(x,0;\eta) = I + \int_{-\infty}^{x} e^{i\eta^2(\xi - x)\hat{\mu}_3}(H_1\gamma_1)(\xi, 0; \eta)\,d\xi, \qquad \gamma_2(0,y;\eta) = I + \int_{0}^{y} e^{2i\eta^4(\tau - y)\hat{\mu}_3}(H_2\gamma_2)(0, \tau; \eta)\,d\tau,$$
$$\gamma_2(x,0;\eta) = I - \int_{x}^{0} e^{i\eta^2(\xi - x)\hat{\mu}_3}(H_1\gamma_2)(\xi, 0; \eta)\,d\xi, \qquad \gamma_3(0,y;\eta) = I - \int_{y}^{L} e^{2i\eta^4(\tau - y)\hat{\mu}_3}(H_2\gamma_3)(0, \tau; \eta)\,d\tau.$$
We evaluate $H_1$ at $y = 0$, with $q_0(x) = q(x,0)$ and $r_0(x) = r(x,0)$ the initial values of $q(x,y)$ and $r(x,y)$, and we evaluate $H_2$ at $x = 0$, with $h_0(y) = q(0,y)$, $g_0(y) = r(0,y)$, $h_1(y) = q_x(0,y)$, and $g_1(y) = r_x(0,y)$ the boundary values of $q(x,y)$ and $r(x,y)$. Considering these values, we can obtain
$$H_1(x,0;\eta) = \begin{pmatrix} -\frac{i}{2}\, q_0 r_0 & \eta q_0\, e^{2i\int_0^x \left(\frac{1}{2}-\beta\right) q_0 r_0\, d\xi} \\ \eta r_0\, e^{-2i\int_0^x \left(\frac{1}{2}-\beta\right) q_0 r_0\, d\xi} & \frac{i}{2}\, q_0 r_0 \end{pmatrix},$$
$$H_2(0,y;\eta) = \begin{pmatrix} H_2^{11} & H_2^{12} \\ H_2^{21} & H_2^{22} \end{pmatrix},$$
  • $H_2^{11} = -i\eta^2 h_0 g_0 + \left(\beta - \frac{1}{2}\right) i\, h_0^2 g_0^2 - \frac{1}{2}(h_0 g_1 - h_1 g_0)$,
  • $H_2^{12} = \left(2\eta^3 h_0 + \eta\left(i h_1 - (2\beta-1) h_0^2 g_0\right)\right) e^{2i\int_0^y \Delta_2(0,\tau)\,d\tau}$,
  • $H_2^{21} = \left(2\eta^3 g_0 - \eta\left(i g_1 + (2\beta-1) h_0 g_0^2\right)\right) e^{-2i\int_0^y \Delta_2(0,\tau)\,d\tau}$,
  • $H_2^{22} = i\eta^2 h_0 g_0 - \left(\beta - \frac{1}{2}\right) i\, h_0^2 g_0^2 + \frac{1}{2}(h_0 g_1 - h_1 g_0)$,
and $\Delta_2(0,y) = \left(-\frac{5}{2}\beta + \frac{1}{2} + 4\beta^2\right) h_0^2 g_0^2 - \left(\frac{1}{2}-\beta\right) i\, (h_0 g_1 - h_1 g_0)$.
The calculation of $H_1(x,0;\eta)$ and $H_2(0,y;\eta)$ depends solely on the functions $q_0(x)$, $r_0(x)$, $h_0(y)$, $h_1(y)$, $g_0(y)$, and $g_1(y)$. As a result, the initial data determine $c(\eta)$ through the integral representation (24), while the boundary data $h_0(y)$, $h_1(y)$, $g_0(y)$, and $g_1(y)$ determine $C(\eta)$ through (24). The subsequent statement outlines the analytical characteristics of the matrices $\gamma_k(x,y;\eta)$ ($k = 1,2,3$) obtained from (24).
Proposition 1.
For $k = 1,2,3$, the functions $\gamma(x,y;\eta) = \gamma_k(x,y;\eta)$ satisfy the symmetry relations
$$\gamma_{11}(x,y;\eta) = \overline{\gamma_{22}(x,y;\bar{\eta})}, \qquad \gamma_{12}(x,y;\eta) = \overline{\gamma_{21}(x,y;\bar{\eta})},$$
as well as
$$\gamma_{11}(x,y;-\eta) = \gamma_{11}(x,y;\eta), \qquad \gamma_{12}(x,y;-\eta) = -\gamma_{12}(x,y;\eta),$$
$$\gamma_{21}(x,y;-\eta) = -\gamma_{21}(x,y;\eta), \qquad \gamma_{22}(x,y;-\eta) = \gamma_{22}(x,y;\eta).$$
Proof. 
To establish (39), we employ the following notation: for a $2\times 2$ matrix $S = \begin{pmatrix} s_{11} & s_{12} \\ s_{21} & s_{22} \end{pmatrix}$, the $2\times 2$ matrix $GS$ is defined as $GS = \begin{pmatrix} \bar{s}_{22} & \bar{s}_{21} \\ \bar{s}_{12} & \bar{s}_{11} \end{pmatrix}$. This operation has the characteristic that $G(ST) = (GS)(GT)$ for any $2\times 2$ matrices $S$ and $T$; specifically, $Ge^S = e^{GS}$. When applying $G$ to (26) concerning $\gamma$,
$$d\left(e^{i(\eta^2 x + 2\eta^4 y)\hat{\mu}_3}\gamma(x,y;\eta)\right) = e^{i(\eta^2 x + 2\eta^4 y)\hat{\mu}_3}\left(H(x,y;\eta)\,\gamma(x,y;\eta)\right),$$
we acquire
$$d\left(e^{i(\bar{\eta}^2 x + 2\bar{\eta}^4 y)\hat{\mu}_3}(G\gamma)(x,y;\eta)\right) = e^{i(\bar{\eta}^2 x + 2\bar{\eta}^4 y)\hat{\mu}_3}\left((GH)(x,y;\eta)\,(G\gamma)(x,y;\eta)\right).$$
Given that G Q = Q , according to the definition (19) of V, we have
$$GV(x,y;\eta) = GV_1(x,y;\eta)\,dx + GV_2(x,y;\eta)\,dy,$$
$$GV_1(x,y;\eta) = \bar{\eta} Q - i\beta Q^2\mu_3,$$
$$GV_2(x,y;\eta) = 2\bar{\eta}^3 Q - i\bar{\eta}^2 Q^2\mu_3 - \bar{\eta}\left(i Q_x\mu_3 + (2\beta-1)Q^3\right) + P.$$
It is evident, from this expression, that $GV(\bar{\eta}) = V(\eta)$, thus indicating
$$(GH)(\bar{\eta}) = e^{i\int_{(0,0)}^{(x,y)}\Delta\,\hat{\mu}_3}\left((GV)(\bar{\eta}) - i\Delta\mu_3\right) = H(\eta).$$
Therefore, substituting $\eta$ with $\bar{\eta}$ in (42), we deduce that $(G\gamma)(x,y;\bar{\eta})$ and $\gamma(x,y;\eta)$ satisfy the same equation. If $\gamma$ is equal to $\gamma_k$, where $k$ takes the values 1, 2, and 3, then $\gamma$ meets the initial condition $\gamma(x_k, y_k; \eta) = I$ for all defined values of $\eta$; consequently, the function $(G\gamma)(x_k, y_k; \bar{\eta}) = GI = I$ fulfills the identical initial condition. By uniqueness, $(G\gamma)(x,y;\bar{\eta})$ and $\gamma(x,y;\eta)$ coincide, as stated in (39).
To illustrate (40), for a $2\times 2$ matrix $S$ we introduce the $2\times 2$ matrix $MS$:
$$MS = \begin{pmatrix} s_{11} & -s_{12} \\ -s_{21} & s_{22} \end{pmatrix}, \quad \text{where } S = \begin{pmatrix} s_{11} & s_{12} \\ s_{21} & s_{22} \end{pmatrix}.$$
Similar to $G$, this operation also exhibits the characteristic that $M(ST) = (MS)(MT)$ for any matrices $S$ and $T$ of size $2\times 2$, $M(e^S) = e^{MS}$, and $M(i\mu_3) = i\mu_3$. Furthermore, given that $MQ = -Q$, it can be readily verified that $(MH)(-\eta) = H(\eta)$. The argument that led to (39) can then be applied to derive (40). □
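Both operations used in the proof are multiplicative, which is what allows them to be pushed through the Lax equations. A quick numerical illustration (with randomly generated matrices standing in for the eigenfunctions):

```python
import numpy as np

rng = np.random.default_rng(0)

def G(S):
    """G S = [[conj(s22), conj(s21)], [conj(s12), conj(s11)]]."""
    return np.conj(S[::-1, ::-1])

def M(S):
    """M S flips the sign of the off-diagonal entries."""
    return np.array([[S[0, 0], -S[0, 1]], [-S[1, 0], S[1, 1]]])

S = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
T = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))

# Multiplicativity: G(ST) = (GS)(GT) and M(ST) = (MS)(MT)
assert np.allclose(G(S @ T), G(S) @ G(T))
assert np.allclose(M(S @ T), M(S) @ M(T))

# M fixes i*mu3, as used in the proof
mu3 = np.diag([1.0, -1.0])
assert np.allclose(M(1j*mu3), 1j*mu3)
print("G and M are multiplicative")
```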
Proposition 2.
The matrix functions
$$\gamma_k(x,y;\eta) = \left(\gamma_k^{(1)}(x,y;\eta),\ \gamma_k^{(2)}(x,y;\eta)\right) = \begin{pmatrix} \gamma_k^{11} & \gamma_k^{12} \\ \gamma_k^{21} & \gamma_k^{22} \end{pmatrix}, \quad k = 1,2,3,$$
exhibit the subsequent analytical characteristics:
  • (1) $\det\gamma_k(x,y;\eta) = 1$, $k = 1,2,3$.
  • (2) The function $\gamma_1^{(1)}(x,y;\eta)$ is analytic in $\{\operatorname{Im}\eta^2 \ge 0\}$, and $\lim_{\eta\to\infty}\gamma_1^{(1)}(x,y;\eta) = (1,0)^T$, $\eta \in \{\operatorname{Im}\eta^2 \ge 0\}$.
  • (3) The function $\gamma_1^{(2)}(x,y;\eta)$ is analytic in $\{\operatorname{Im}\eta^2 \le 0\}$, and $\lim_{\eta\to\infty}\gamma_1^{(2)}(x,y;\eta) = (0,1)^T$, $\eta \in \{\operatorname{Im}\eta^2 \le 0\}$.
  • (4) The function $\gamma_2^{(1)}(x,y;\eta)$ is analytic in $\{\operatorname{Im}\eta^2 \le 0\} \cap \{\operatorname{Im}\eta^4 \ge 0\}$, and $\lim_{\eta\to\infty}\gamma_2^{(1)}(x,y;\eta) = (1,0)^T$ there.
  • (5) The function $\gamma_2^{(2)}(x,y;\eta)$ is analytic in $\{\operatorname{Im}\eta^2 \ge 0\} \cap \{\operatorname{Im}\eta^4 \le 0\}$, and $\lim_{\eta\to\infty}\gamma_2^{(2)}(x,y;\eta) = (0,1)^T$ there.
  • (6) The function $\gamma_3^{(1)}(x,y;\eta)$ is analytic in $\{\operatorname{Im}\eta^2 \le 0\} \cap \{\operatorname{Im}\eta^4 \le 0\}$, and $\lim_{\eta\to\infty}\gamma_3^{(1)}(x,y;\eta) = (1,0)^T$ there.
  • (7) The function $\gamma_3^{(2)}(x,y;\eta)$ is analytic in $\{\operatorname{Im}\eta^2 \ge 0\} \cap \{\operatorname{Im}\eta^4 \ge 0\}$, and $\lim_{\eta\to\infty}\gamma_3^{(2)}(x,y;\eta) = (0,1)^T$ there.
Proof. 
From $\gamma(x,y;\eta) = I + O\!\left(\frac{1}{\eta}\right)$, $\eta \to \infty$, we know that, as $\eta \to \infty$, $\gamma(x,y;\eta)$ tends towards the identity matrix. Therefore, the first column of the matrix tends towards $(1,0)^T$, and the second column tends towards $(0,1)^T$. □
Proposition 3.
The matrix functions $c(\eta)$ and $C(\eta)$ can be written as
$$c(\eta) = \begin{pmatrix} \overline{s(\bar{\eta})} & t(\eta) \\ \overline{t(\bar{\eta})} & s(\eta) \end{pmatrix}, \qquad C(\eta) = \begin{pmatrix} \overline{S(\bar{\eta})} & T(\eta) \\ \overline{T(\bar{\eta})} & S(\eta) \end{pmatrix}.$$
The definitions of $\gamma_1$ and $\gamma_2$ imply the following representations of $c(\eta)$ and $C(\eta)$:
$$c(\eta) = \gamma_1(0,0;\eta) = I + \int_{-\infty}^{0} e^{i\eta^2\xi\hat{\mu}_3}(H_1\gamma_1)(\xi, 0; \eta)\,d\xi,$$
$$C^{-1}(\eta) = e^{2i\eta^4 L\hat{\mu}_3}\gamma_2(0,L;\eta) = I + \int_{0}^{L} e^{2i\eta^4\tau\hat{\mu}_3}(H_2\gamma_2)(0, \tau; \eta)\,d\tau.$$
The following properties can be obtained according to the determinant conditions
  • (1)
    $$\begin{pmatrix} \overline{s(\bar{\eta})} \\ \overline{t(\bar{\eta})} \end{pmatrix} = \gamma_1^{(1)}(0,0;\eta) = \begin{pmatrix} \gamma_1^{11}(0,0;\eta) \\ \gamma_1^{21}(0,0;\eta) \end{pmatrix}, \qquad \begin{pmatrix} t(\eta) \\ s(\eta) \end{pmatrix} = \gamma_1^{(2)}(0,0;\eta) = \begin{pmatrix} \gamma_1^{12}(0,0;\eta) \\ \gamma_1^{22}(0,0;\eta) \end{pmatrix},$$
    $$\begin{pmatrix} S(\eta) \\ -\overline{T(\bar{\eta})}\, e^{4i\eta^4 L} \end{pmatrix} = \gamma_2^{(1)}(0,L;\eta) = \begin{pmatrix} \gamma_2^{11}(0,L;\eta) \\ \gamma_2^{21}(0,L;\eta) \end{pmatrix}, \qquad \begin{pmatrix} -e^{-4i\eta^4 L}\, T(\eta) \\ \overline{S(\bar{\eta})} \end{pmatrix} = \gamma_2^{(2)}(0,L;\eta) = \begin{pmatrix} \gamma_2^{12}(0,L;\eta) \\ \gamma_2^{22}(0,L;\eta) \end{pmatrix}.$$
  • (2)
    $$\partial_x\gamma_1^{(2)}(x,0;\eta) + 2i\eta^2\sigma\,\gamma_1^{(2)}(x,0;\eta) = H_1(x,0;\eta)\,\gamma_1^{(2)}(x,0;\eta), \quad \eta \in \Omega_3 \cup \Omega_4, \quad -\infty < x < 0,$$
    $$\partial_y\gamma_2^{(2)}(0,y;\eta) + 4i\eta^4\sigma\,\gamma_2^{(2)}(0,y;\eta) = H_2(0,y;\eta)\,\gamma_2^{(2)}(0,y;\eta), \quad \eta \in \Omega_2 \cup \Omega_4, \quad 0 < y < L,$$
    where $\sigma = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$.
  • (3)
    $$\det c(\eta) = \det C(\eta) = 1,$$
    $$s(-\eta) = s(\eta), \quad t(-\eta) = -t(\eta), \quad S(-\eta) = S(\eta), \quad T(-\eta) = -T(\eta),$$
    $$s(\eta)\overline{s(\bar{\eta})} - t(\eta)\overline{t(\bar{\eta})} = 1, \qquad S(\eta)\overline{S(\bar{\eta})} - T(\eta)\overline{T(\bar{\eta})} = 1,$$
    $$s(\eta) = 1 + O\!\left(\tfrac{1}{\eta}\right), \quad t(\eta) = O\!\left(\tfrac{1}{\eta}\right), \quad \eta \to \infty, \quad \operatorname{Im}\eta^2 \le 0,$$
    $$S(\eta) = 1 + O\!\left(\tfrac{1}{\eta}\right), \quad T(\eta) = O\!\left(\tfrac{1}{\eta}\right), \quad \eta \to \infty, \quad \operatorname{Im}\eta^4 \le 0.$$
Proof. 
All of these properties can be derived from the analytic and bounded nature of γ 1 ( x , 0 ; η ) and γ 2 ( 0 , y ; η ) , as well as the requirement for a unit determinant and the asymptotic behavior of these eigenfunctions at η . □
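For instance, the unit-determinant condition is exactly the stated relation between $s$ and $t$. Abbreviating the values $s(\eta)$, $t(\eta)$, $\overline{s(\bar\eta)}$, $\overline{t(\bar\eta)}$ at a fixed $\eta$ by independent symbols, a minimal symbolic check reads:

```python
import sympy as sp

# s, t and the "bar" values conj(s(conj eta)), conj(t(conj eta))
# at one fixed eta, treated as independent symbols
s, t, sb, tb = sp.symbols('s t s_b t_b')

c = sp.Matrix([[sb, t], [tb, s]])
# det c(eta) = 1 is precisely  s(eta)*conj(s(bar eta)) - t(eta)*conj(t(bar eta)) = 1
assert sp.expand(c.det() - (s*sb - t*tb)) == 0

# The adjugate form of c(eta)^{-1}, which enters the global relation c^{-1} C
adj = sp.Matrix([[s, -t], [-tb, sb]])
assert sp.simplify(c.inv()*(s*sb - t*tb) - adj) == sp.zeros(2)
print("determinant relation verified")
```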

2.6. Jump Matrix

We demonstrate that the spectral functions are not independent of one another but exhibit a global relationship. The Riemann–Hilbert problem associated with the coupled Kundu equations can be derived following [29]; to this end, (32), (33) and (37) need to be represented in terms of jump matrices.
Let us assume, for the sake of simplicity, that
$$\theta(\eta) = \eta^2 x + 2\eta^4 y; \qquad \varphi(\eta) = \overline{s(\bar{\eta})}\, S(\eta) - \overline{t(\bar{\eta})}\, T(\eta); \qquad \varpi(\eta) = s(\eta)T(\eta) - S(\eta)t(\eta); \qquad d(\eta) = \overline{s(\bar{\eta})}\,\varpi(\eta) + t(\eta)\,\varphi(\eta).$$
Consider the function N ( x , y ; η ) defined as follows
$$N_+(x,y;\eta) = \left(\gamma_3^{\Omega_1}(x,y;\eta),\ \frac{\gamma_1^{\Omega_1\Omega_2}(x,y;\eta)}{\varphi(\eta)}\right), \quad \eta \in \Omega_1, \qquad N_-(x,y;\eta) = \left(\gamma_2^{\Omega_2}(x,y;\eta),\ \frac{\gamma_1^{\Omega_1\Omega_2}(x,y;\eta)}{\overline{s(\bar{\eta})}}\right), \quad \eta \in \Omega_2,$$
$$N_+(x,y;\eta) = \left(\frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\eta)}{s(\eta)},\ \gamma_2^{\Omega_3}(x,y;\eta)\right), \quad \eta \in \Omega_3, \qquad N_-(x,y;\eta) = \left(\frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\eta)}{\overline{\varphi(\bar{\eta})}},\ \gamma_3^{\Omega_4}(x,y;\eta)\right), \quad \eta \in \Omega_4.$$
These definitions show that
$$\det N(x,y;\eta) = 1, \qquad N(x,y;\eta) = I + O\!\left(\frac{1}{\eta}\right), \quad \eta \to \infty.$$
Theorem 1.
Assume that $q(x,y)$ is a smooth function and that $N(x,y;\eta)$ is given by (45). Then $N$ satisfies the jump condition
$$N_+(x,y;\eta) = N_-(x,y;\eta)\, G(x,y;\eta), \quad \eta^4 \in \mathbb{R},$$
where
$$G(x,y;\eta) = \begin{cases} G_1(x,y;\eta), & \operatorname{Arg}\eta^2 = 0, \\ G_2(x,y;\eta), & \operatorname{Arg}\eta^2 = \frac{\pi}{2}, \\ G_3(x,y;\eta), & \operatorname{Arg}\eta^2 = \pi, \\ G_4(x,y;\eta), & \operatorname{Arg}\eta^2 = \frac{3\pi}{2}, \end{cases}$$
and
$$G_1(x,y;\eta) = \begin{pmatrix} 1 & -\dfrac{\overline{\varpi(\bar{\eta})}}{\varphi(\eta)}\, e^{2i\theta(\eta)} \\ \dfrac{\varpi(\eta)}{\overline{\varphi(\bar{\eta})}}\, e^{-2i\theta(\eta)} & \dfrac{1}{\varphi(\eta)\,\overline{\varphi(\bar{\eta})}} \end{pmatrix}, \qquad G_2(x,y;\eta) = \begin{pmatrix} \dfrac{\varphi(\eta)}{\overline{s(\bar{\eta})}} & 0 \\ d(\eta)\, e^{-2i\theta(\eta)} & \dfrac{\overline{s(\bar{\eta})}}{\varphi(\eta)} \end{pmatrix},$$
$$G_3(x,y;\eta) = \begin{pmatrix} \dfrac{1}{s(\eta)\,\overline{s(\bar{\eta})}} & -\dfrac{\overline{t(\bar{\eta})}}{\overline{s(\bar{\eta})}}\, e^{2i\theta(\eta)} \\ \dfrac{t(\eta)}{s(\eta)}\, e^{-2i\theta(\eta)} & 1 \end{pmatrix}, \qquad G_4(x,y;\eta) = \begin{pmatrix} \dfrac{\overline{\varphi(\bar{\eta})}}{s(\eta)} & -\overline{d(\bar{\eta})}\, e^{2i\theta(\eta)} \\ 0 & \dfrac{s(\eta)}{\overline{\varphi(\bar{\eta})}} \end{pmatrix}.$$
Proof. 
We rewrite (32), (33), (37) and (46) in the following manner, expressing the relationships among the eigenfunctions and deriving the jump condition.
$$\gamma_1^{\Omega_1\Omega_2} = \overline{s(\bar{\eta})}\,\gamma_2^{\Omega_3} + \overline{t(\bar{\eta})}\, e^{2i\theta(\eta)}\,\gamma_2^{\Omega_2}, \qquad \gamma_1^{\Omega_3\Omega_4} = t(\eta)\, e^{-2i\theta(\eta)}\,\gamma_2^{\Omega_3} + s(\eta)\,\gamma_2^{\Omega_2},$$
$$\gamma_3^{\Omega_4} = \overline{S(\bar{\eta})}\,\gamma_2^{\Omega_3} + \overline{T(\bar{\eta})}\, e^{2i\theta(\eta)}\,\gamma_2^{\Omega_2}, \qquad \gamma_3^{\Omega_1} = T(\eta)\, e^{-2i\theta(\eta)}\,\gamma_2^{\Omega_3} + S(\eta)\,\gamma_2^{\Omega_2},$$
$$\gamma_3^{\Omega_4} = \left(s(\eta)\overline{S(\bar{\eta})} - t(\eta)\overline{T(\bar{\eta})}\right)\gamma_1^{\Omega_1\Omega_2} + e^{2i\theta(\eta)}\left(\overline{s(\bar{\eta})}\,\overline{T(\bar{\eta})} - \overline{t(\bar{\eta})}\,\overline{S(\bar{\eta})}\right)\gamma_1^{\Omega_3\Omega_4},$$
$$\gamma_3^{\Omega_1} = e^{-2i\theta(\eta)}\left(s(\eta)T(\eta) - S(\eta)t(\eta)\right)\gamma_1^{\Omega_1\Omega_2} + \left(\overline{s(\bar{\eta})}\, S(\eta) - \overline{t(\bar{\eta})}\, T(\eta)\right)\gamma_1^{\Omega_3\Omega_4}.$$
Utilizing (48)–(50) in conjunction with the relationships among different boundaries, it becomes feasible to deduce the jump matrices G i ( x , y ; η ) ( i = 1 , 2 , 3 , 4 . ) .
$$\left(\gamma_3^{\Omega_1},\ \frac{\gamma_1^{\Omega_1\Omega_2}}{\varphi(\eta)}\right) = \left(\frac{\gamma_1^{\Omega_3\Omega_4}}{\overline{\varphi(\bar{\eta})}},\ \gamma_3^{\Omega_4}\right) G_1(x,y;\eta), \qquad \left(\gamma_3^{\Omega_1},\ \frac{\gamma_1^{\Omega_1\Omega_2}}{\varphi(\eta)}\right) = \left(\gamma_2^{\Omega_2},\ \frac{\gamma_1^{\Omega_1\Omega_2}}{\overline{s(\bar{\eta})}}\right) G_2(x,y;\eta),$$
$$\left(\frac{\gamma_1^{\Omega_3\Omega_4}}{s(\eta)},\ \gamma_2^{\Omega_3}\right) = \left(\gamma_2^{\Omega_2},\ \frac{\gamma_1^{\Omega_1\Omega_2}}{\overline{s(\bar{\eta})}}\right) G_3(x,y;\eta), \qquad \left(\frac{\gamma_1^{\Omega_3\Omega_4}}{s(\eta)},\ \gamma_2^{\Omega_3}\right) = \left(\frac{\gamma_1^{\Omega_3\Omega_4}}{\overline{\varphi(\bar{\eta})}},\ \gamma_3^{\Omega_4}\right) G_4(x,y;\eta).$$
This completes the proof. □
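A useful sanity check on the jump matrices is that each has unit determinant, as required for $\det N_+ = \det N_-$. Abbreviating $\varphi$, $\varpi$ and their bar-evaluations by independent symbols and writing $E = e^{2i\theta(\eta)}$, $\det G_1$ reduces to $(1 + \varpi\overline{\varpi(\bar\eta)})/(\varphi\overline{\varphi(\bar\eta)})$, which equals 1 by $\varphi(\eta)\overline{\varphi(\bar\eta)} - \varpi(\eta)\overline{\varpi(\bar\eta)} = \det\left(c^{-1}C\right) = 1$, while $\det G_2 = 1$ holds identically. A sketch:

```python
import sympy as sp

# phi(eta), varpi(eta), their values conj(phi(conj eta)), conj(varpi(conj eta)),
# and the phase E = exp(2*i*theta(eta)), treated as independent symbols
ph, phb, vp, vpb, E = sp.symbols('phi phi_b varpi varpi_b E', nonzero=True)

G1 = sp.Matrix([[1, -vpb/ph*E], [vp/phb/E, 1/(ph*phb)]])
detG1 = sp.together(G1.det())
# det G1 = (1 + varpi*varpi_b)/(phi*phi_b); it equals 1 when phi*phi_b - varpi*varpi_b = 1
assert sp.simplify(detG1 - (1 + vp*vpb)/(ph*phb)) == 0

# G2 is triangular with reciprocal diagonal entries, so det G2 = 1 identically
sb, d = sp.symbols('s_b d', nonzero=True)
G2 = sp.Matrix([[ph/sb, 0], [d/E, sb/ph]])
assert sp.simplify(G2.det() - 1) == 0
print("det G1 and det G2 consistent with unimodularity")
```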
Hypothesis 1.
Regarding the functions φ ( η ) and s ( η ) , we postulate the following
  • (1) $\varphi(\eta)$ has $2\nu$ simple zeros $\{\zeta_j\}_{j=1}^{2\nu}$ ($2\nu = 2\nu_1 + 2\nu_2$). We assume that $\zeta_j$ ($j = 1, 2, \ldots, 2\nu_1$) lie in $\Omega_1 \cup \Omega_2$, and $\bar{\zeta}_j$ ($j = 2\nu_1+1, 2\nu_1+2, \ldots, 2\nu$) lie in $\Omega_3 \cup \Omega_4$.
  • (2) $s(\eta)$ has $2\mu$ simple zeros $\{\varepsilon_j\}_{j=1}^{2\mu}$ ($2\mu = 2\mu_1 + 2\mu_2$). We assume that $\varepsilon_j$ ($j = 1, 2, \ldots, 2\mu_1$) lie in $\Omega_3 \cup \Omega_4$, and $\bar{\varepsilon}_j$ ($j = 2\mu_1+1, 2\mu_1+2, \ldots, 2\mu$) lie in $\Omega_1 \cup \Omega_2$.
  • (3) None of the simple zeros of $\varphi(\eta)$ coincides with a simple zero of $s(\eta)$.
Here, $[N(x,y;\eta)]_1$ denotes the first column and $[N(x,y;\eta)]_2$ the second column of the solution $N(x,y;\eta)$ of the Riemann–Hilbert problem. These columns form the basis of the subsequent propositions; (32), (33) and (37) can be used to calculate the residues of $N(x,y;\eta)$ at the corresponding points. We write $\dot{s}(\eta) = \frac{ds}{d\eta}$ and $\dot{\varphi}(\eta) = \frac{d\varphi}{d\eta}$.
Proposition 4.
  • (1) $\operatorname{Res}\left\{[N(x,y;\eta)]_2, \zeta_j\right\} = \dfrac{e^{2i\theta(\zeta_j)}}{\dot{\varphi}(\zeta_j)\,\varpi(\zeta_j)}\,[N(x,y;\zeta_j)]_1$, $j = 1, 2, \ldots, 2\nu_1$.
  • (2) $\operatorname{Res}\left\{[N(x,y;\eta)]_1, \bar{\zeta}_j\right\} = \dfrac{e^{-2i\theta(\bar{\zeta}_j)}}{\overline{\dot{\varphi}(\zeta_j)}\,\overline{\varpi(\zeta_j)}}\,[N(x,y;\bar{\zeta}_j)]_2$, $j = 2\nu_1+1, 2\nu_1+2, \ldots, 2\nu$.
  • (3) $\operatorname{Res}\left\{[N(x,y;\eta)]_2, \bar{\varepsilon}_j\right\} = \dfrac{e^{2i\theta(\bar{\varepsilon}_j)}\,\overline{t(\varepsilon_j)}}{\overline{\dot{s}(\varepsilon_j)}}\,[N(x,y;\bar{\varepsilon}_j)]_1$, $j = 2\mu_1+1, 2\mu_1+2, \ldots, 2\mu$.
  • (4) $\operatorname{Res}\left\{[N(x,y;\eta)]_1, \varepsilon_j\right\} = \dfrac{e^{-2i\theta(\varepsilon_j)}\, t(\varepsilon_j)}{\dot{s}(\varepsilon_j)}\,[N(x,y;\varepsilon_j)]_2$, $j = 1, 2, \ldots, 2\mu_1$.
Proof. 
Consider $\left(\gamma_3^{\Omega_1},\ \gamma_1^{\Omega_1\Omega_2}/\varphi(\eta)\right)$ and take into account that the zeros $\zeta_j$ ($j = 1, 2, \ldots, 2\nu_1$) of $\varphi(\eta)$ are the poles of $\gamma_1^{\Omega_1\Omega_2}/\varphi(\eta)$. Then, we consider
$$\operatorname{Res}\left\{\frac{\gamma_1^{\Omega_1\Omega_2}(x,y;\eta)}{\varphi(\eta)}, \zeta_j\right\} = \lim_{\eta\to\zeta_j}(\eta - \zeta_j)\frac{\gamma_1^{\Omega_1\Omega_2}(x,y;\eta)}{\varphi(\eta)} = \frac{\gamma_1^{\Omega_1\Omega_2}(x,y;\zeta_j)}{\dot{\varphi}(\zeta_j)}.$$
Inputting η = ζ j into the second part of (50) generates
$$\gamma_1^{\Omega_1\Omega_2}(x,y;\zeta_j) = \frac{\gamma_3^{\Omega_1}(x,y;\zeta_j)}{\varpi(\zeta_j)}\, e^{2i\theta(\zeta_j)}.$$
Furthermore,
R e s { γ 1 Ω 1 Ω 2 ( x , y ; η ) φ ( η ) , ζ j } = e 2 i θ ( ζ j ) γ 3 Ω 1 ( x , y ; ζ j ) φ ˙ ( ζ j ) ϖ ( ζ j ) .
This is identical to (1).
Let $N = \left(\frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\eta)}{\overline{\varphi(\bar\eta)}}, \gamma_3^{\Omega_4}(x,y;\eta)\right)$ and take into account that the zeros $\bar\zeta_j$ ($j = 2\nu_1+1, 2\nu_1+2, \dots, 2\nu$) of $\overline{\varphi(\bar\eta)}$ are the poles of $\frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\eta)}{\overline{\varphi(\bar\eta)}}$. Then, we consider
$$\operatorname{Res}\left\{\frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\eta)}{\overline{\varphi(\bar\eta)}}, \bar\zeta_j\right\} = \lim_{\eta\to\bar\zeta_j}(\eta-\bar\zeta_j)\frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\eta)}{\overline{\varphi(\bar\eta)}} = \frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\bar\zeta_j)}{\overline{\dot{\varphi}(\bar\zeta_j)}}.$$
Inserting $\eta = \bar\zeta_j$ into the first part of (50) gives
$$\gamma_1^{\Omega_3\Omega_4}(x,y;\bar\zeta_j) = \frac{\gamma_3^{\Omega_4}(x,y;\bar\zeta_j)}{\overline{\varpi(\bar\zeta_j)}}\,e^{2i\theta(\bar\zeta_j)}.$$
Furthermore,
$$\operatorname{Res}\left\{\frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\eta)}{\overline{\varphi(\bar\eta)}}, \bar\zeta_j\right\} = \frac{e^{2i\theta(\bar\zeta_j)}\,\gamma_3^{\Omega_4}(x,y;\bar\zeta_j)}{\overline{\dot{\varphi}(\bar\zeta_j)}\,\overline{\varpi(\bar\zeta_j)}}.$$
This establishes (2).
Let $N = \left(\gamma_2^{\Omega_2}(x,y;\eta), \frac{\gamma_1^{\Omega_1\Omega_2}(x,y;\eta)}{\overline{s(\bar\eta)}}\right)$ and take into account that the zeros $\bar\varepsilon_j$ ($j = 2\mu_1+1, 2\mu_1+2, \dots, 2\mu$) of $\overline{s(\bar\eta)}$ are the poles of $\frac{\gamma_1^{\Omega_1\Omega_2}(x,y;\eta)}{\overline{s(\bar\eta)}}$. Then, we consider
$$\operatorname{Res}\left\{\frac{\gamma_1^{\Omega_1\Omega_2}(x,y;\eta)}{\overline{s(\bar\eta)}}, \bar\varepsilon_j\right\} = \lim_{\eta\to\bar\varepsilon_j}(\eta-\bar\varepsilon_j)\frac{\gamma_1^{\Omega_1\Omega_2}(x,y;\eta)}{\overline{s(\bar\eta)}} = \frac{\gamma_1^{\Omega_1\Omega_2}(x,y;\bar\varepsilon_j)}{\overline{\dot{s}(\bar\varepsilon_j)}}.$$
Inserting $\eta = \bar\varepsilon_j$ into the first part of (48) gives
$$\gamma_1^{\Omega_1\Omega_2}(x,y;\bar\varepsilon_j) = \gamma_2^{\Omega_2}(x,y;\bar\varepsilon_j)\,\overline{t(\bar\varepsilon_j)}\,e^{2i\theta(\bar\varepsilon_j)}.$$
Furthermore,
$$\operatorname{Res}\left\{\frac{\gamma_1^{\Omega_1\Omega_2}(x,y;\eta)}{\overline{s(\bar\eta)}}, \bar\varepsilon_j\right\} = e^{2i\theta(\bar\varepsilon_j)}\frac{\overline{t(\bar\varepsilon_j)}}{\overline{\dot{s}(\bar\varepsilon_j)}}\,\gamma_2^{\Omega_2}(x,y;\bar\varepsilon_j).$$
This establishes (3).
Let $N = \left(\frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\eta)}{s(\eta)}, \gamma_2^{\Omega_3}(x,y;\eta)\right)$ and take into account that the zeros $\varepsilon_j$ ($j = 1, 2, \dots, 2\mu_1$) of $s(\eta)$ are the poles of $\frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\eta)}{s(\eta)}$. Then, we consider
$$\operatorname{Res}\left\{\frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\eta)}{s(\eta)}, \varepsilon_j\right\} = \lim_{\eta\to\varepsilon_j}(\eta-\varepsilon_j)\frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\eta)}{s(\eta)} = \frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\varepsilon_j)}{\dot{s}(\varepsilon_j)}.$$
Inserting $\eta = \varepsilon_j$ into the second part of (48) gives
$$\gamma_1^{\Omega_3\Omega_4}(x,y;\varepsilon_j) = \gamma_2^{\Omega_3}(x,y;\varepsilon_j)\,t(\varepsilon_j)\,e^{2i\theta(\varepsilon_j)}.$$
Furthermore,
$$\operatorname{Res}\left\{\frac{\gamma_1^{\Omega_3\Omega_4}(x,y;\eta)}{s(\eta)}, \varepsilon_j\right\} = e^{2i\theta(\varepsilon_j)}\frac{t(\varepsilon_j)}{\dot{s}(\varepsilon_j)}\,\gamma_2^{\Omega_3}(x,y;\varepsilon_j).$$
This establishes (4). □
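Every residue computation in the proof above rests on the same elementary fact: at a simple zero $z_0$ of $g$, $\operatorname{Res}\{f/g, z_0\} = f(z_0)/\dot{g}(z_0)$. The following symbolic check is illustrative only (the model functions are arbitrary and not taken from the paper):

```python
from sympy import symbols, exp, diff, residue, simplify, E

z = symbols('z')

# Model function f/g, with g having a simple zero at z0 = 1.
f = exp(z)
g = z**2 - 1

# Residue computed directly by SymPy ...
direct = residue(f / g, z, 1)

# ... and by the formula used in the proof: Res{f/g, z0} = f(z0)/g'(z0).
formula = (f / diff(g, z)).subs(z, 1)

assert simplify(direct - formula) == 0
assert formula == E / 2
```

The same one-line formula is what turns each limit $\lim_{\eta\to\zeta_j}(\eta-\zeta_j)\,\gamma/\varphi$ in the proof into a ratio involving $\dot{\varphi}(\zeta_j)$.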

2.7. The Inverse Problem

Equation (46) for the jump relation is equivalent to
$$N_+(x,y;\eta) - N_-(x,y;\eta) = N_-(x,y;\eta)\tilde{G}(x,y;\eta),$$
where $\tilde{G}(x,y;\eta) = G(x,y;\eta) - I$. The asymptotic expansion of the function is derived using Proposition 2 and the asymptotic conditions stated in (29):
$$N(x,y;\eta) = I + \frac{\bar{N}(x,y)}{\eta} + O\!\left(\frac{1}{\eta^2}\right), \quad \eta\to\infty, \quad \eta\in\mathbb{C}\setminus\Gamma,$$
where $\Gamma = \{\eta\in\mathbb{C} : \eta^4\in\mathbb{R}\}$. Applying (52) and (53), one obtains
$$N(x,y;\eta) = I + \frac{1}{2\pi i}\int_{\Gamma}\frac{N_-(x,y;\eta')\tilde{G}(x,y;\eta')}{\eta'-\eta}\,d\eta', \quad \eta\in\mathbb{C}\setminus\Gamma,$$
and then
$$\bar{N}(x,y) = -\frac{1}{2\pi i}\int_{\Gamma}N_-(x,y;\eta')\tilde{G}(x,y;\eta')\,d\eta'.$$
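The coefficient $\bar{N}$ is isolated by expanding the Cauchy kernel for large $\eta$. The sketch below is illustrative only ($\eta'$ is treated as a fixed point of $\Gamma$, and the overall sign conventions in the printed formulas are ambiguous); it confirms that the expansion of the kernel begins at order $1/\eta$ with coefficient $-1$:

```python
from sympy import symbols, oo

eta, etap = symbols('eta etap')

# Large-eta expansion of the Cauchy kernel 1/(eta' - eta):
kernel = 1 / (etap - eta)
expansion = kernel.series(eta, oo, 3).removeO()

# The 1/eta coefficient is -1 and the 1/eta**2 coefficient is -eta',
# which is what lets N-bar be read off from the jump integral.
assert expansion.coeff(eta, -1) == -1
assert expansion.coeff(eta, -2) == -etap
```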
The potential function $q(x,y)$ is to be recovered from the spectral functions $\gamma_k$, $k = 1, 2, 3$. We shall rebuild the potential function $q(x,y)$. As demonstrated in Section 2.2,
$$\Omega_1^{(o)} = \begin{pmatrix} 0 & \frac{i}{2}q\,\Omega_0^{22} \\ \frac{i}{2}r\,\Omega_0^{11} & 0 \end{pmatrix}.$$
As $\eta\to\infty$, the function $\psi(x,y;\eta)$ satisfying (19) can be expanded as $\Omega_0 + \frac{\Omega_1}{\eta} + \frac{\Omega_2}{\eta^2} + \frac{\Omega_3}{\eta^3} + O\!\left(\frac{1}{\eta^4}\right)$. Consequently,
$$q(x,y) = 2i\,g_{12}(x,y), \qquad r(x,y) = 2i\,g_{21}(x,y).$$
The entries $g_{12}(x,y)$ and $g_{21}(x,y)$ are computed from the spectral functions $\gamma_k$ ($k = 1, 2, 3$) according to
$$g_{12}(x,y) = \lim_{\eta\to\infty}(\eta\,N(x,y;\eta))_{12}, \qquad g_{21}(x,y) = \lim_{\eta\to\infty}(\eta\,N(x,y;\eta))_{21}.$$
Therefore, we obtain the following results:
$$q(x,y) = 2i\lim_{\eta\to\infty}(\eta\,N(x,y;\eta))_{12}, \qquad r(x,y) = 2i\lim_{\eta\to\infty}(\eta\,N(x,y;\eta))_{21}.$$
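The reconstruction formulas above amount to reading off the $1/\eta$ coefficient of the asymptotic expansion of $N$. A minimal sketch, in which the coefficients $c$ and $d$ are hypothetical placeholders rather than quantities from the paper:

```python
from sympy import symbols, I, limit, oo, simplify

eta = symbols('eta')
c, d = symbols('c d')  # hypothetical 1/eta and 1/eta**2 coefficients

# Model the (1,2) entry of N(x,y;eta) = I + Nbar/eta + O(1/eta**2):
N12 = c / eta + d / eta**2

# The recipe q = 2i * lim_{eta->oo} eta * N12 extracts the 1/eta coefficient:
q = 2 * I * limit(eta * N12, eta, oo)

assert simplify(q - 2 * I * c) == 0
```

The multiplication by $\eta$ kills the identity part and the higher-order terms, leaving exactly the entry of $\bar{N}$ that carries the potential.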

3. Definition and Properties of Spectral Functions and Riemann–Hilbert Problem

In this section, we examine the spectral functions and Riemann–Hilbert problem.

3.1. Characteristics of Spectral Functions

Definition 1.
(Regarding $s(\eta)$ and $t(\eta)$). We suppose that $q_0(x) = q(x,0)$ and $r_0(x) = r(x,0)$, and define the map
$$\mathbb{S} : \{q_0(x)\} \to \{s(\eta), t(\eta)\},$$
where $s(\eta)$ and $t(\eta)$ are spectral functions,
$$\begin{pmatrix} t(\eta) \\ s(\eta) \end{pmatrix} = \gamma_1^{(2)}(x,0;\eta) = \begin{pmatrix} \gamma_1^{12}(x,0;\eta) \\ \gamma_1^{22}(x,0;\eta) \end{pmatrix}, \quad \operatorname{Im}\eta^2 \le 0,$$
and
$$\gamma_1(x,0;\eta) = I + \int_{x}^{\infty} e^{i\eta^2(\xi-x)\hat{\mu}_3}(H_1\gamma_1)(\xi,0;\eta)\,d\xi.$$
$H_1(x,0;\eta)$ is expressed as a function of $q(x,0)$:
$$H_1(x,0;\eta) = \begin{pmatrix} \frac{i}{2}q_0r_0 & \eta\,q_0\,e^{\int_x^0(1-2\beta)i\,q_0r_0\,d\xi} \\ \eta\,r_0\,e^{-\int_x^0(1-2\beta)i\,q_0r_0\,d\xi} & -\frac{i}{2}q_0r_0 \end{pmatrix}.$$
Proposition 5.
The significant properties of s ( η ) and t ( η ) are as follows:
  • (1) $s(\eta)$ and $t(\eta)$ are analytic for $\operatorname{Im}\eta^2 \le 0$.
  • (2) $s(\eta) = 1 + O\!\left(\frac{1}{\eta}\right)$, $t(\eta) = O\!\left(\frac{1}{\eta}\right)$ as $\eta\to\infty$, $\operatorname{Im}\eta^2 \le 0$.
  • (3) The inverse map $\mathbb{Q} : \{s(\eta), t(\eta)\} \to \{q_0(x)\}$, with $\mathbb{S}^{-1} = \mathbb{Q}$, is given by
    $$q_0(x) = 2i\lim_{\eta\to\infty}(\eta\,N^{(x)}(x,\eta))_{12}, \qquad r_0(x) = 2i\lim_{\eta\to\infty}(\eta\,N^{(x)}(x,\eta))_{21},$$
    where $N^{(x)}(x,\eta)$ fulfills the subsequent Riemann–Hilbert problem (see Theorem 2).
Proof. 
Properties (1)–(3) follow from the discussion in Proposition 3. □
Theorem 2.
Let
$$N_-^{(x)}(x,\eta) = \left(\gamma_2^{\Omega_3\Omega_4}(x,\eta), \frac{\gamma_1^{\Omega_3\Omega_4}(x,\eta)}{s(\eta)}\right), \quad \operatorname{Im}\eta^2 \le 0,\ \eta\in\Omega_3\cup\Omega_4,$$
$$N_+^{(x)}(x,\eta) = \left(\frac{\gamma_1^{\Omega_1\Omega_2}(x,\eta)}{\overline{s(\bar\eta)}}, \gamma_2^{\Omega_1\Omega_2}(x,\eta)\right), \quad \operatorname{Im}\eta^2 \ge 0,\ \eta\in\Omega_1\cup\Omega_2.$$
Then, $N^{(x)}(x,\eta)$ fulfills the subsequent Riemann–Hilbert problem:
  • (1) $N^{(x)}(x,\eta) = \begin{cases} N_-^{(x)}(x,\eta), & \operatorname{Im}\eta^2 \le 0, \\ N_+^{(x)}(x,\eta), & \operatorname{Im}\eta^2 \ge 0, \end{cases}$ is a piecewise analytic function.
  • (2) $N_+^{(x)}(x,\eta) = N_-^{(x)}(x,\eta)G^{(x)}(x,\eta)$, $\eta^2\in\mathbb{R}$, where
    $$G^{(x)}(x,\eta) = \begin{pmatrix} \dfrac{1}{s(\eta)\overline{s(\bar\eta)}} & -\dfrac{t(\eta)}{s(\eta)}e^{-2i\eta^2x} \\ \dfrac{\overline{t(\bar\eta)}}{\overline{s(\bar\eta)}}e^{2i\eta^2x} & 1 \end{pmatrix}, \quad \eta^2\in\mathbb{R}.$$
  • (3) $N^{(x)}(x,\eta) = I + O\!\left(\frac{1}{\eta}\right)$, $\eta\to\infty$.
  • (4) $s(\eta)$ contains $2\mu$ simple zeros $\{\varepsilon_j\}_{j=1}^{2\mu}$ ($2\mu = 2\mu_1+2\mu_2$); we assume that $\varepsilon_j$ ($j = 1, 2, \dots, 2\mu_1$) lie in $\Omega_1\cup\Omega_2$ and $\bar\varepsilon_j$ ($j = 2\mu_1+1, 2\mu_1+2, \dots, 2\mu$) lie in $\Omega_3\cup\Omega_4$.
  • (5) $[N_-^{(x)}(x,\eta)]_2$ has simple poles at $\eta = \bar\varepsilon_j$, $j = 2\mu_1+1, 2\mu_1+2, \dots, 2\mu$; $[N_+^{(x)}(x,\eta)]_1$ has simple poles at $\eta = \varepsilon_j$, $j = 1, 2, \dots, 2\mu_1$.
Then, the expressions for the residue conditions are
$$\operatorname{Res}\{[N^{(x)}(x,\eta)]_2, \bar\varepsilon_j\} = e^{-2i\bar\varepsilon_j^2x}\frac{t(\bar\varepsilon_j)}{\dot{s}(\bar\varepsilon_j)}\,[N^{(x)}(x,\bar\varepsilon_j)]_1, \quad j = 2\mu_1+1, 2\mu_1+2, \dots, 2\mu,$$
$$\operatorname{Res}\{[N^{(x)}(x,\eta)]_1, \varepsilon_j\} = e^{2i\varepsilon_j^2x}\frac{\overline{t(\bar\varepsilon_j)}}{\overline{\dot{s}(\bar\varepsilon_j)}}\,[N^{(x)}(x,\varepsilon_j)]_2, \quad j = 1, 2, \dots, 2\mu_1.$$
Proof. 
After inserting $y = 0$ into (48), we derive the system
$$\begin{cases} \gamma_1^{\Omega_1\Omega_2} = \overline{s(\bar\eta)}\,\gamma_2^{\Omega_3\Omega_4} + \overline{t(\bar\eta)}\,e^{2i\eta^2x}\gamma_2^{\Omega_1\Omega_2}, \\ \gamma_1^{\Omega_3\Omega_4} = t(\eta)\,e^{-2i\eta^2x}\gamma_2^{\Omega_3\Omega_4} + s(\eta)\,\gamma_2^{\Omega_1\Omega_2}. \end{cases}$$
A direct computation shows that $N^{(x)}(x,\eta)$ satisfies the stated jump condition.
We analyze $[N_+^{(x)}(x,\eta)]_1$:
$$\operatorname{Res}\left\{\frac{\gamma_1^{\Omega_1\Omega_2}(x,\eta)}{\overline{s(\bar\eta)}}, \varepsilon_j\right\} = \lim_{\eta\to\varepsilon_j}(\eta-\varepsilon_j)\frac{\gamma_1^{\Omega_1\Omega_2}(x,\eta)}{\overline{s(\bar\eta)}} = \frac{\gamma_1^{\Omega_1\Omega_2}(x,\varepsilon_j)}{\overline{\dot{s}(\bar\varepsilon_j)}}.$$
Setting $\eta = \varepsilon_j$ in the first equation of (64), we deduce
$$\gamma_1^{\Omega_1\Omega_2}(x,\varepsilon_j) = e^{2i\varepsilon_j^2x}\,\overline{t(\bar\varepsilon_j)}\,\gamma_2^{\Omega_1\Omega_2}(x,\varepsilon_j).$$
Furthermore,
$$\operatorname{Res}\left\{\frac{\gamma_1^{\Omega_1\Omega_2}(x,\eta)}{\overline{s(\bar\eta)}}, \varepsilon_j\right\} = e^{2i\varepsilon_j^2x}\frac{\overline{t(\bar\varepsilon_j)}}{\overline{\dot{s}(\bar\varepsilon_j)}}\,\gamma_2^{\Omega_1\Omega_2}(x,\varepsilon_j), \quad j = 1, 2, \dots, 2\mu_1.$$
This coincides with (63). The validity of Formula (62) can be demonstrated using the same approach. □
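As a consistency check, the jump matrix $G^{(x)}$ should be unimodular. The sketch below is illustrative only: it assumes the determinant relation $s(\eta)\overline{s(\bar\eta)} - t(\eta)\overline{t(\bar\eta)} = 1$ of the spectral matrix and a particular placement of the exponent and coefficient signs (which are ambiguous in the printed formulas); conjugated quantities are treated as independent symbols.

```python
from sympy import symbols, Matrix, exp, I, simplify

x = symbols('x', real=True)
eta = symbols('eta', real=True)
# Stand-ins for s(eta), conj s(conj eta), t(eta), conj t(conj eta):
s, sb, t, tb = symbols('s sb t tb')

theta = 2 * I * eta**2 * x

# Reconstructed jump matrix G^(x) (sign conventions are our assumption):
G = Matrix([[1 / (s * sb), -(t / s) * exp(-theta)],
            [(tb / sb) * exp(theta), 1]])

# Impose the determinant relation s*sb - t*tb = 1, i.e. sb = (1 + t*tb)/s:
det = simplify(G.det().subs(sb, (1 + t * tb) / s))
assert det == 1  # the jump matrix is unimodular
```

Unimodularity of the jump is what allows the Riemann–Hilbert problem to preserve $\det N = 1$ across the contour.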
Definition 2
(Regarding $S(\eta)$ and $T(\eta)$). Take the boundary values $h_0(y)$ and $h_1(y)$, and define the map
$$\tilde{\mathbb{S}} : \{h_0(y), h_1(y)\} \to \{S(\eta), T(\eta)\},$$
where
$$\begin{pmatrix} T(\eta) \\ S(\eta) \end{pmatrix} = \gamma_3^{(2)}(0,\eta) = \begin{pmatrix} \gamma_3^{12}(0,\eta) \\ \gamma_3^{22}(0,\eta) \end{pmatrix}, \quad \operatorname{Im}\eta^4 \ge 0,$$
and
$$\gamma_3(0,\eta) = I - \int_{y}^{L} e^{2i\eta^4(\tau-L)\hat{\sigma}_3}(H_2\gamma_3)(\tau,\eta)\,d\tau.$$
$H_2(0,y;\eta)$ is given by
$$H_2(0,y;\eta) = \begin{pmatrix} H_2^{11}(0,y;\eta) & H_2^{12}(0,y;\eta) \\ H_2^{21}(0,y;\eta) & H_2^{22}(0,y;\eta) \end{pmatrix},$$
with
  • $H_2^{11}(0,y;\eta) = i\eta^2h_0g_0 + \left(\beta-\frac{1}{2}\right)ih_0^2g_0^2 - \frac{1}{2}(h_0g_1 - h_1g_0)$,
  • $H_2^{12}(0,y;\eta) = \left(2\eta^3h_0 - \eta\left(ih_1 + (2\beta-1)h_0^2g_0\right)\right)e^{2i\int_0^y\Delta_2(0,\tau)\,d\tau}$,
  • $H_2^{21}(0,y;\eta) = \left(2\eta^3g_0 - \eta\left(ig_1 + (2\beta-1)h_0g_0^2\right)\right)e^{-2i\int_0^y\Delta_2(0,\tau)\,d\tau}$,
  • $H_2^{22}(0,y;\eta) = -i\eta^2h_0g_0 - \left(\beta-\frac{1}{2}\right)ih_0^2g_0^2 + \frac{1}{2}(h_0g_1 - h_1g_0)$,
where $\Delta_2(0,\tau) = \left(\frac{5}{2}\beta + \frac{1}{2} + 4\beta^2\right)h_0^2g_0^2 - (1-2\beta)i(h_0g_1 - h_1g_0)$.
Proposition 6.
The following are the significant properties of S ( η ) and T ( η ) .
  • (1) $S(\eta)$ and $T(\eta)$ are analytic for $\operatorname{Im}\eta^4 \ge 0$.
  • (2) $S(\eta) = 1 + O\!\left(\frac{1}{\eta}\right)$, $T(\eta) = O\!\left(\frac{1}{\eta}\right)$ as $\eta\to\infty$, $\operatorname{Im}\eta^4 \ge 0$.
  • (3) The inverse map $\tilde{\mathbb{Q}} : \{S(\eta), T(\eta)\} \to \{h_0(y), h_1(y)\}$, with $\tilde{\mathbb{S}}^{-1} = \tilde{\mathbb{Q}}$, is given by
    $$h_0(y) = 2i\lim_{\eta\to\infty}(\eta\,N^{(y)}(y,\eta))_{12}, \qquad g_0(y) = 2i\lim_{\eta\to\infty}(\eta\,N^{(y)}(y,\eta))_{21},$$
    where $N^{(y)}(y,\eta)$ fulfills the subsequent Riemann–Hilbert problem (see Theorem 3).
Proof. 
Properties (1)–(3) follow from the discussion in Proposition 3. □
Theorem 3.
Let
$$N_+^{(y)}(y,\eta) = \left(\frac{\gamma_2^{\Omega_1\Omega_3}(y,\eta)}{S(\eta)}, \gamma_3^{\Omega_1\Omega_3}(y,\eta)\right), \quad \operatorname{Im}\eta^4 \ge 0,\ \eta\in\Omega_1\cup\Omega_3,$$
$$N_-^{(y)}(y,\eta) = \left(\gamma_3^{\Omega_2\Omega_4}(y,\eta), \frac{\gamma_2^{\Omega_2\Omega_4}(y,\eta)}{\overline{S(\bar\eta)}}\right), \quad \operatorname{Im}\eta^4 \le 0,\ \eta\in\Omega_2\cup\Omega_4.$$
Then, $N^{(y)}(y,\eta)$ fulfills the subsequent Riemann–Hilbert problem.
  • (1) $N^{(y)}(y,\eta) = \begin{cases} N_+^{(y)}(y,\eta), & \operatorname{Im}\eta^4 \ge 0, \\ N_-^{(y)}(y,\eta), & \operatorname{Im}\eta^4 \le 0, \end{cases}$ is a piecewise analytic function.
  • (2) $N_+^{(y)}(y,\eta) = N_-^{(y)}(y,\eta)G^{(y)}(y,\eta)$, $\eta^4\in\mathbb{R}$, where
    $$G^{(y)}(y,\eta) = \begin{pmatrix} \dfrac{1}{S(\eta)\overline{S(\bar\eta)}} & \dfrac{T(\eta)}{\overline{S(\bar\eta)}}e^{4i\eta^4y} \\ -\dfrac{\overline{T(\bar\eta)}}{S(\eta)}e^{-4i\eta^4y} & 1 \end{pmatrix}, \quad \eta^4\in\mathbb{R}.$$
  • (3) $N^{(y)}(y,\eta) = I + O\!\left(\frac{1}{\eta}\right)$, $\eta\to\infty$.
  • (4) $S(\eta)$ contains $2k$ simple zeros $\{\vartheta_j\}_{j=1}^{2k}$ ($2k = 2k_1+2k_2$); we assume that $\vartheta_j$ ($j = 1, 2, \dots, 2k_1$) lie in $\Omega_1\cup\Omega_3$ and $\bar\vartheta_j$ ($j = 2k_1+1, 2k_1+2, \dots, 2k$) lie in $\Omega_2\cup\Omega_4$.
  • (5) $[N_+^{(y)}(y,\eta)]_1$ has simple poles at $\eta = \vartheta_j$, $j = 1, 2, \dots, 2k_1$; $[N_-^{(y)}(y,\eta)]_2$ has simple poles at $\eta = \bar\vartheta_j$, $j = 2k_1+1, 2k_1+2, \dots, 2k$.
Then, the residue conditions are
$$\operatorname{Res}\{[N^{(y)}(y,\eta)]_1, \vartheta_j\} = \frac{e^{-4i\vartheta_j^4y}}{\dot{S}(\vartheta_j)T(\vartheta_j)}\,[N^{(y)}(y,\vartheta_j)]_2, \quad j = 1, 2, \dots, 2k_1,$$
$$\operatorname{Res}\{[N^{(y)}(y,\eta)]_2, \bar\vartheta_j\} = \frac{e^{4i\bar\vartheta_j^4y}}{\overline{\dot{S}(\bar\vartheta_j)}\,\overline{T(\bar\vartheta_j)}}\,[N^{(y)}(y,\bar\vartheta_j)]_1, \quad j = 2k_1+1, 2k_1+2, \dots, 2k.$$
Proof. 
After inserting $x = 0$ into (49), we derive the system
$$\begin{cases} \gamma_3^{\Omega_2\Omega_4} = \overline{S(\bar\eta)}\,\gamma_2^{\Omega_1\Omega_3} + \overline{T(\bar\eta)}\,e^{-4i\eta^4y}\gamma_2^{\Omega_2\Omega_4}, \\ \gamma_3^{\Omega_1\Omega_3} = T(\eta)\,e^{4i\eta^4y}\gamma_2^{\Omega_1\Omega_3} + S(\eta)\,\gamma_2^{\Omega_2\Omega_4}. \end{cases}$$
A direct computation shows that $N^{(y)}(y,\eta)$ satisfies the stated jump condition.
We analyze $[N_+^{(y)}(y,\eta)]_1$:
$$\operatorname{Res}\left\{\frac{\gamma_2^{\Omega_1\Omega_3}(y,\eta)}{S(\eta)}, \vartheta_j\right\} = \lim_{\eta\to\vartheta_j}(\eta-\vartheta_j)\frac{\gamma_2^{\Omega_1\Omega_3}(y,\eta)}{S(\eta)} = \frac{\gamma_2^{\Omega_1\Omega_3}(y,\vartheta_j)}{\dot{S}(\vartheta_j)}.$$
Setting $\eta = \vartheta_j$ in the second equation of (71), we deduce
$$\gamma_2^{\Omega_1\Omega_3}(y,\vartheta_j) = \frac{e^{-4i\vartheta_j^4y}}{T(\vartheta_j)}\,\gamma_3^{\Omega_1\Omega_3}(y,\vartheta_j).$$
Furthermore,
$$\operatorname{Res}\left\{\frac{\gamma_2^{\Omega_1\Omega_3}(y,\eta)}{S(\eta)}, \vartheta_j\right\} = \frac{e^{-4i\vartheta_j^4y}}{\dot{S}(\vartheta_j)T(\vartheta_j)}\,[N^{(y)}(y,\vartheta_j)]_2, \quad j = 1, 2, \dots, 2k_1.$$
This coincides with (69). The validity of Formula (70) can be demonstrated using the same approach. □
Definition 3
(Regarding $\varphi(\eta)$ and $\varpi(\eta)$). We suppose that $q_L(x) = q(x,L)$ and $r_L(x) = r(x,L)$. Additionally, we consider the spectral functions
$$\varphi(\eta) = \overline{s(\bar\eta)}S(\eta) - \overline{t(\bar\eta)}T(\eta), \qquad \varpi(\eta) = s(\eta)T(\eta) - t(\eta)S(\eta).$$
We define the map
$$\tilde{\tilde{\mathbb{S}}} : \{q_L(x), r_L(x)\} \to \{\varphi(\eta), \varpi(\eta)\},$$
where
$$\begin{pmatrix} \varpi(\eta) \\ \varphi(\eta) \end{pmatrix} = \gamma_1^{(2)}(0,L;\eta) = \begin{pmatrix} \gamma_1^{12}(0,L;\eta) \\ \gamma_1^{22}(0,L;\eta) \end{pmatrix}, \quad \operatorname{Im}\eta^2 \ge 0,$$
and
$$\gamma_1(x,L;\eta) = I - \int_{x}^{0} e^{i\eta^2(\xi-x)\hat{\mu}_3}(H_1\gamma_1)(\xi,L;\eta)\,d\xi.$$
$H_1(x,L;\eta)$ is given by
$$H_1(x,L;\eta) = \begin{pmatrix} \frac{i}{2}q_Lr_L & \eta\,q_L\,e^{\int_x^0(1-2\beta)i\,q_Lr_L\,d\xi} \\ \eta\,r_L\,e^{-\int_x^0(1-2\beta)i\,q_Lr_L\,d\xi} & -\frac{i}{2}q_Lr_L \end{pmatrix}.$$
Proposition 7.
The significant properties of φ ( η ) and ϖ ( η ) are as follows:
  • (1) $\varphi(\eta)$ and $\varpi(\eta)$ are analytic for $\operatorname{Im}\eta^2 \ge 0$;
  • (2) $\varphi(\eta) = 1 + O\!\left(\frac{1}{\eta}\right)$, $\varpi(\eta) = O\!\left(\frac{1}{\eta}\right)$ as $\eta\to\infty$, $\operatorname{Im}\eta^2 \ge 0$;
  • (3) $\varphi(\eta)\overline{\varphi(\bar\eta)} - \varpi(\eta)\overline{\varpi(\bar\eta)} = 1$, $\eta^2\in\mathbb{R}$;
  • (4) $\varphi(-\eta) = \varphi(\eta)$, $\varpi(-\eta) = -\varpi(\eta)$, $\operatorname{Im}\eta^2 \ge 0$;
  • (5) The inverse map $\tilde{\tilde{\mathbb{Q}}} : \{\varphi(\eta), \varpi(\eta)\} \to \{q_L(x), r_L(x)\}$, with $\tilde{\tilde{\mathbb{S}}}^{-1} = \tilde{\tilde{\mathbb{Q}}}$, is given by
    $$q_L(x) = 2i\lim_{\eta\to\infty}(\eta\,N^{(L)}(x,\eta))_{12}, \qquad r_L(x) = 2i\lim_{\eta\to\infty}(\eta\,N^{(L)}(x,\eta))_{21},$$
    where $N^{(L)}(x,\eta)$ fulfills the subsequent Riemann–Hilbert problem (see Theorem 4).
Proof. 
Properties (1)–(5) follow from the discussion in Proposition 3. □
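Property (3) of Proposition 7 can also be checked symbolically from Definition 3. The sketch below is illustrative: it assumes the determinant relations $s(\eta)\overline{s(\bar\eta)} - t(\eta)\overline{t(\bar\eta)} = 1$ and $S(\eta)\overline{S(\bar\eta)} - T(\eta)\overline{T(\bar\eta)} = 1$ (unimodularity of the spectral matrices) and the minus signs in the definitions of $\varphi$ and $\varpi$, treating each function and its Schwarz conjugate as independent symbols.

```python
from sympy import symbols, expand, simplify

# Spectral functions and their Schwarz conjugates as independent symbols:
s, sb, t, tb, S, Sb, T, Tb = symbols('s sb t tb S Sb T Tb')

# Definition 3 (with the sign conventions assumed here):
phi = sb * S - tb * T
phib = s * Sb - t * Tb          # Schwarz conjugate of phi
varpi = s * T - t * S
varpib = sb * Tb - tb * Sb      # Schwarz conjugate of varpi

# The combination factors into the two determinant relations:
lhs = expand(phi * phib - varpi * varpib)
assert lhs == expand((s * sb - t * tb) * (S * Sb - T * Tb))

# With s*sb - t*tb = 1 and S*Sb - T*Tb = 1, property (3) follows:
value = lhs.subs({sb: (1 + t * tb) / s, Sb: (1 + T * Tb) / S})
assert simplify(value) == 1
```

The factorization makes the dependence of property (3) on the two unimodularity relations explicit.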
Theorem 4.
Let
$$N_-^{(L)}(x,\eta) = \left(\frac{\gamma_1^{\Omega_3\Omega_4}(x,\eta)}{\overline{\varphi(\bar\eta)}}, \gamma_3^{\Omega_3\Omega_4}(x,\eta)\right), \quad \operatorname{Im}\eta^2 \le 0,\ \eta\in\Omega_3\cup\Omega_4,$$
$$N_+^{(L)}(x,\eta) = \left(\gamma_3^{\Omega_1\Omega_2}(x,\eta), \frac{\gamma_1^{\Omega_1\Omega_2}(x,\eta)}{\varphi(\eta)}\right), \quad \operatorname{Im}\eta^2 \ge 0,\ \eta\in\Omega_1\cup\Omega_2.$$
Then, $N^{(L)}(x,\eta)$ fulfills the subsequent Riemann–Hilbert problem.
  • (1) $N^{(L)}(x,\eta) = \begin{cases} N_-^{(L)}(x,\eta), & \operatorname{Im}\eta^2 \le 0, \\ N_+^{(L)}(x,\eta), & \operatorname{Im}\eta^2 \ge 0, \end{cases}$ is a piecewise analytic function.
  • (2) $N_+^{(L)}(x,\eta) = N_-^{(L)}(x,\eta)G^{(L)}(x,\eta)$, $\eta^2\in\mathbb{R}$, where
    $$G^{(L)}(x,\eta) = \begin{pmatrix} 1 & -\dfrac{\overline{\varpi(\bar\eta)}}{\varphi(\eta)}e^{-2i(\eta^2x+2\eta^4L)} \\ \dfrac{\varpi(\eta)}{\overline{\varphi(\bar\eta)}}e^{2i(\eta^2x+2\eta^4L)} & \dfrac{1}{\varphi(\eta)\overline{\varphi(\bar\eta)}} \end{pmatrix}, \quad \eta^2\in\mathbb{R}.$$
  • (3) $N^{(L)}(x,\eta) = I + O\!\left(\frac{1}{\eta}\right)$, $\eta\to\infty$.
  • (4) $\varphi(\eta)$ contains $2\nu$ simple zeros $\{\zeta_j\}_{j=1}^{2\nu}$ ($2\nu = 2\nu_1+2\nu_2$); we assume that $\zeta_j$ ($j = 1, 2, \dots, 2\nu_1$) lie in $\Omega_1\cup\Omega_2$ and $\bar\zeta_j$ ($j = 2\nu_1+1, 2\nu_1+2, \dots, 2\nu$) lie in $\Omega_3\cup\Omega_4$.
  • (5) $[N_+^{(L)}(x,\eta)]_2$ has simple poles at $\eta = \zeta_j$, $j = 1, 2, \dots, 2\nu_1$; $[N_-^{(L)}(x,\eta)]_1$ has simple poles at $\eta = \bar\zeta_j$, $j = 2\nu_1+1, 2\nu_1+2, \dots, 2\nu$.
Then, the residue conditions are
$$\operatorname{Res}\{[N^{(L)}(x,\eta)]_2, \zeta_j\} = \frac{e^{-2i(\zeta_j^2x+2\zeta_j^4L)}}{\dot{\varphi}(\zeta_j)\varpi(\zeta_j)}\,[N^{(L)}(x,\zeta_j)]_1, \quad j = 1, 2, \dots, 2\nu_1,$$
$$\operatorname{Res}\{[N^{(L)}(x,\eta)]_1, \bar\zeta_j\} = \frac{e^{2i(\bar\zeta_j^2x+2\bar\zeta_j^4L)}}{\overline{\dot{\varphi}(\bar\zeta_j)}\,\overline{\varpi(\bar\zeta_j)}}\,[N^{(L)}(x,\bar\zeta_j)]_2, \quad j = 2\nu_1+1, 2\nu_1+2, \dots, 2\nu.$$
Proof. 
After inserting $y = L$ into (50), we derive the system
$$\begin{cases} \gamma_3^{\Omega_3\Omega_4} = \overline{\varphi(\bar\eta)}\,\gamma_1^{\Omega_1\Omega_2} + e^{-2i(\eta^2x+2\eta^4L)}\,\overline{\varpi(\bar\eta)}\,\gamma_1^{\Omega_3\Omega_4}, \\ \gamma_3^{\Omega_1\Omega_2} = e^{2i(\eta^2x+2\eta^4L)}\,\varpi(\eta)\,\gamma_1^{\Omega_1\Omega_2} + \varphi(\eta)\,\gamma_1^{\Omega_3\Omega_4}. \end{cases}$$
A direct computation shows that $N^{(L)}(x,\eta)$ satisfies the stated jump condition.
We analyze $[N_+^{(L)}(x,\eta)]_2$:
$$\operatorname{Res}\left\{\frac{\gamma_1^{\Omega_1\Omega_2}(x,\eta)}{\varphi(\eta)}, \zeta_j\right\} = \lim_{\eta\to\zeta_j}(\eta-\zeta_j)\frac{\gamma_1^{\Omega_1\Omega_2}(x,\eta)}{\varphi(\eta)} = \frac{\gamma_1^{\Omega_1\Omega_2}(x,\zeta_j)}{\dot{\varphi}(\zeta_j)}.$$
Setting $\eta = \zeta_j$ in the second equation of (78), we deduce
$$\gamma_1^{\Omega_1\Omega_2}(x,\zeta_j) = \frac{e^{-2i(\zeta_j^2x+2\zeta_j^4L)}}{\varpi(\zeta_j)}\,\gamma_3^{\Omega_1\Omega_2}(x,\zeta_j).$$
Furthermore,
$$\operatorname{Res}\left\{\frac{\gamma_1^{\Omega_1\Omega_2}(x,\eta)}{\varphi(\eta)}, \zeta_j\right\} = \frac{e^{-2i(\zeta_j^2x+2\zeta_j^4L)}}{\dot{\varphi}(\zeta_j)\varpi(\zeta_j)}\,[N^{(L)}(x,\zeta_j)]_1, \quad j = 1, 2, \dots, 2\nu_1.$$
This coincides with (76). The validity of Formula (77) can be demonstrated using the same approach. □

3.2. Riemann–Hilbert Problem

Theorem 5.
The matrix functions $c(\eta)$ and $C(\eta)$ are defined by $s(\eta)$, $t(\eta)$ and by $S(\eta)$, $T(\eta)$, respectively, where $s(\eta)$, $t(\eta)$, $S(\eta)$, and $T(\eta)$ are the spectral functions generated by $q_0(x)$, $h_0(y)$, and $h_1(y)$ in Definitions 1 and 2. Assume that $s(\eta)$, $t(\eta)$, $S(\eta)$, and $T(\eta)$ satisfy the global relation
$$s(\eta)T(\eta) - t(\eta)S(\eta) = e^{4i\eta^4L}a^{+}(\eta), \quad \operatorname{Im}\eta^2 \ge 0,$$
where $a^{+}(\eta)$ is an entire function, $c(\eta) = \gamma_1(0,0;\eta)$, and $C(\eta) = C(L,\eta) = (e^{2i\eta^4L}\gamma_2(0,L;\eta))^{-1}$. As $L\to\infty$, the global relation reduces to $s(\eta)T(\eta) - t(\eta)S(\eta) = 0$. $N(x,y;\eta)$ conforms to the following Riemann–Hilbert problem.
  • $N(x,y;\eta)$ is a sectionally analytic function with unit determinant.
  • $N(x,y;\eta)$ meets the jump condition
    $$N_+(x,y;\eta) = N_-(x,y;\eta)G(x,y;\eta), \quad \eta^4\in\mathbb{R}.$$
  • $[N(x,y;\eta)]_1$ has simple poles at $\eta = \bar\varepsilon_j$, $j = 2\mu_1+1, 2\mu_1+2, \dots, 2\mu$, and at $\eta = \bar\zeta_j$, $j = 2\nu_1+1, 2\nu_1+2, \dots, 2\nu$. Additionally, $[N(x,y;\eta)]_2$ has simple poles at $\eta = \varepsilon_j$, $j = 1, 2, \dots, 2\mu_1$, and at $\eta = \zeta_j$, $j = 1, 2, \dots, 2\nu_1$.
  • $N(x,y;\eta) = I + O\!\left(\frac{1}{\eta}\right)$, $\eta\to\infty$.
  • $N(x,y;\eta)$ fulfills the residue conditions of Proposition 4.
Then, the function $N(x,y;\eta)$ exists and is unique, and the solutions $q(x,y)$ and $r(x,y)$ of (1) and (2) are expressed by
$$q(x,y) = 2i\lim_{\eta\to\infty}(\eta\,N(x,y;\eta))_{12}, \qquad r(x,y) = 2i\lim_{\eta\to\infty}(\eta\,N(x,y;\eta))_{21}.$$
Proof. 
If $s(\eta)$ and $S(\eta)$ have no zeros, the $2\times2$ function $N(x,y;\eta)$ satisfies a non-singular Riemann–Hilbert problem. Using the relationship between the jump matrix $G(x,y;\eta)$ and the symmetry conditions, one can show that this problem has a global solution. The case in which $s(\eta)$ and $S(\eta)$ have a finite number of zeros can be reduced to an equivalent problem with no zeros by introducing an algebraic system of equations, which is always solvable. □
Lemma 1
(Vanishing lemma). The Riemann–Hilbert problem of Theorem 5 has only the zero solution when the boundary condition $N(x,y;\eta)\to0$ ($\eta\to\infty$) is imposed.
Proof. 
Define
$$\varkappa_+(\eta) = N_+(\eta)\,\overline{N_-(\bar\eta)}^{T}, \quad \operatorname{Im}\eta^4 \ge 0, \qquad \varkappa_-(\eta) = N_-(\eta)\,\overline{N_+(\bar\eta)}^{T}, \quad \operatorname{Im}\eta^4 \le 0,$$
where the dependence on $x$ and $y$ is suppressed. Using the symmetry relations, we can infer that
$$\overline{J_1(\bar\eta)}^{T} = J_1(\eta), \qquad \overline{J_2(\bar\eta)}^{T} = J_4(\eta), \qquad \overline{J_3(\bar\eta)}^{T} = J_3(\eta).$$
Then,
$$\varkappa_+(\eta) = N_-(\eta)\,G(\eta)\,\overline{N_-(\bar\eta)}^{T}, \qquad \varkappa_-(\eta) = N_-(\eta)\,\overline{G(\bar\eta)}^{T}\,\overline{N_-(\bar\eta)}^{T}, \qquad \eta^4\in\mathbb{R},$$
and $\varkappa_+(\eta) = \varkappa_-(\eta)$ for $\eta^4\in\mathbb{R}$, in accordance with (81) and (82). Consequently, $\varkappa_+(\eta)$ and $\varkappa_-(\eta)$ define an entire function, which vanishes at infinity and is therefore identically zero. The matrix $G(i\iota)$ ($\iota\in\mathbb{R}$) is positive definite. Since $\varkappa(\eta)$ vanishes identically on $i\mathbb{R}$, we have
$$N_+(i\iota)\,G(i\iota)\,\overline{N_+(i\iota)}^{T} = 0, \quad \iota\in\mathbb{R}.$$
Positive definiteness then gives $N_+(i\iota) = 0$ for $\iota\in\mathbb{R}$. Therefore, $N_+(\eta)$ and $N_-(\eta)$ vanish identically. □
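The final step of the lemma uses a standard linear-algebra fact: if $A$ is Hermitian positive definite and $v^{*}Av = 0$, then $v = 0$, because the quadratic form is bounded below by $\lambda_{\min}\lVert v\rVert^2$. An illustrative numerical sketch (the matrix below is arbitrary, not the jump matrix of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an arbitrary Hermitian positive definite matrix A = B*B + 2I.
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
A = B.conj().T @ B + 2 * np.eye(2)

eigvals = np.linalg.eigvalsh(A)   # sorted ascending, all positive
assert np.all(eigvals > 0)

# For any vector v, v* A v >= lambda_min * |v|^2, so it vanishes only at v = 0.
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
quad = np.real(v.conj() @ A @ v)
assert quad >= eigvals[0] * np.linalg.norm(v)**2 - 1e-9
```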
Proposition 8.
The functions $q(x,y)$ and $r(x,y)$ given by (80) satisfy the coupled Kundu equations.
Proof. 
This can be shown by the dressing method: if $N(x,y;\eta)$ solves the above Riemann–Hilbert problem and $q(x,y)$ and $r(x,y)$ are expressed in terms of $N(x,y;\eta)$ through (80), then $q(x,y)$ and $r(x,y)$ satisfy the Lax pair of the coupled Kundu equations. As a result, $q(x,y)$ and $r(x,y)$ are solutions of the coupled Kundu equations. □
Moreover, when $y = 0$, we have $q(x,0) = q_0(x)$.
Proof. 
Observing (47) at $y = 0$, the jump matrix can be decomposed into the $2\times2$ matrices
$$G_1(x,0;\eta) = \begin{pmatrix} 1 & -\dfrac{\overline{\varpi(\bar\eta)}}{\varphi(\eta)}e^{-2i\eta^2x} \\ \dfrac{\varpi(\eta)}{\overline{\varphi(\bar\eta)}}e^{2i\eta^2x} & \dfrac{1}{\varphi(\eta)\overline{\varphi(\bar\eta)}} \end{pmatrix}, \qquad G_2(x,0;\eta) = \begin{pmatrix} \dfrac{\varphi(\eta)}{\overline{s(\bar\eta)}} & 0 \\ d(\eta)e^{2i\eta^2x} & \dfrac{\overline{s(\bar\eta)}}{\varphi(\eta)} \end{pmatrix},$$
$$G_3(x,0;\eta) = \begin{pmatrix} \dfrac{1}{s(\eta)\overline{s(\bar\eta)}} & \dfrac{\overline{t(\bar\eta)}}{\overline{s(\bar\eta)}}e^{2i\eta^2x} \\ -\dfrac{t(\eta)}{s(\eta)}e^{-2i\eta^2x} & 1 \end{pmatrix}, \qquad G_4(x,0;\eta) = \begin{pmatrix} \dfrac{\overline{\varphi(\bar\eta)}}{s(\eta)} & \overline{d(\bar\eta)}e^{-2i\eta^2x} \\ 0 & \dfrac{s(\eta)}{\overline{\varphi(\bar\eta)}} \end{pmatrix}.$$
After calculation, the result at $y = 0$ is compared with (47). We observe that $N_-$ on $\Omega_2$ becomes $N_+$, and $N_+$ on $\Omega_3$ becomes $N_-$.
For the original relation $N_{+2}(x,0,\eta) = N_{-2}(x,0,\eta)G_4(x,0,\eta)$, the present relation is $N^{(x)}(x,0,\eta) = N(x,0,\eta)G_4(x,0,\eta)$. For the original relations $N_{+1}(x,0,\eta) = N_{-2}(x,0,\eta)G_2(x,0,\eta)$ and $N_{-2}(x,0,\eta) = N_{+1}(x,0,\eta)G_2^{-1}(x,0,\eta)$, the present relation is $N^{(x)}(x,0,\eta) = N(x,0,\eta)G_2^{-1}(x,0,\eta)$. The other regions remain unchanged, which leads to the following definition.
Define
$$N^{(x)}(x,0,\eta) = \begin{cases} N(x,0;\eta), & \eta\in\Omega_1\cup\Omega_4, \\ N(x,0;\eta)G_4(x,0;\eta), & \eta\in\Omega_3, \\ N(x,0;\eta)(G_2(x,0;\eta))^{-1}, & \eta\in\Omega_2. \end{cases}$$
Then, we set
$$N_+^{(x)}(x,0,\eta) = \left(\gamma_3^{\Omega_1\Omega_2}(x,0,\eta), \frac{\gamma_1^{\Omega_1\Omega_2}(x,0,\eta)}{\varphi(\eta)}\right), \quad \eta\in\Omega_1\cup\Omega_2,$$
$$N_-^{(x)}(x,0,\eta) = \left(\frac{\gamma_1^{\Omega_3\Omega_4}(x,0,\eta)}{\overline{\varphi(\bar\eta)}}, \gamma_3^{\Omega_3\Omega_4}(x,0,\eta)\right), \quad \eta\in\Omega_3\cup\Omega_4,$$
$$N_+^{(x)}(x,0,\eta) = \left(\gamma_2^{\Omega_1\Omega_2}(x,0,\eta), \frac{\gamma_1^{\Omega_1\Omega_2}(x,0,\eta)}{\overline{s(\bar\eta)}}\right), \quad \eta\in\Omega_1\cup\Omega_2,$$
$$N_-^{(x)}(x,0,\eta) = \left(\frac{\gamma_1^{\Omega_3\Omega_4}(x,0,\eta)}{s(\eta)}, \gamma_2^{\Omega_3\Omega_4}(x,0,\eta)\right), \quad \eta\in\Omega_3\cup\Omega_4.$$
Let
$$N_+^{(x)}(x,0,\eta) = \left(\gamma_3^{\Omega_1\Omega_2}(x,0,\eta), \frac{\gamma_1^{\Omega_1\Omega_2}(x,0,\eta)}{\varphi(\eta)}\right), \quad \eta\in\Omega_1\cup\Omega_2,$$
$$N_-^{(x)}(x,0,\eta) = \left(\frac{\gamma_1^{\Omega_3\Omega_4}(x,0,\eta)}{\overline{\varphi(\bar\eta)}}, \gamma_3^{\Omega_3\Omega_4}(x,0,\eta)\right), \quad \eta\in\Omega_3\cup\Omega_4,$$
with
$$N_+^{(x)}(x,0,\eta) = N_-^{(x)}(x,0,\eta)G_1^{(x)}(x,0,\eta),$$
and
$$G_1^{(x)}(x,0;\eta) = \begin{pmatrix} 1 & -\dfrac{\overline{\varpi(\bar\eta)}}{\varphi(\eta)}e^{-2i\eta^2x} \\ \dfrac{\varpi(\eta)}{\overline{\varphi(\bar\eta)}}e^{2i\eta^2x} & \dfrac{1}{\varphi(\eta)\overline{\varphi(\bar\eta)}} \end{pmatrix}.$$
Let
$$N_+^{(x)}(x,0,\eta) = \left(\gamma_2^{\Omega_1\Omega_2}(x,0,\eta), \frac{\gamma_1^{\Omega_1\Omega_2}(x,0,\eta)}{\overline{s(\bar\eta)}}\right), \quad \eta\in\Omega_1\cup\Omega_2,$$
$$N_-^{(x)}(x,0,\eta) = \left(\frac{\gamma_1^{\Omega_3\Omega_4}(x,0,\eta)}{s(\eta)}, \gamma_2^{\Omega_3\Omega_4}(x,0,\eta)\right), \quad \eta\in\Omega_3\cup\Omega_4,$$
with
$$N_+^{(x)}(x,0,\eta) = N_-^{(x)}(x,0,\eta)(G_3^{(x)}(x,0,\eta))^{-1},$$
and
$$(G_3^{(x)}(x,0,\eta))^{-1} = \begin{pmatrix} 1 & -\dfrac{\overline{t(\bar\eta)}}{\overline{s(\bar\eta)}}e^{2i\eta^2x} \\ \dfrac{t(\eta)}{s(\eta)}e^{-2i\eta^2x} & \dfrac{1}{s(\eta)\overline{s(\bar\eta)}} \end{pmatrix},$$
where N ( x ) ( x , 0 , η ) satisfies
  • (1)
    $$N_+^{(x)}(x,0,\eta) = N_-^{(x)}(x,0,\eta)G_1^{(x)}(x,0,\eta), \quad \eta\in\mathbb{R},$$
    $$N_+^{(x)}(x,0,\eta) = N_-^{(x)}(x,0,\eta)(G_3^{(x)}(x,0,\eta))^{-1}, \quad \eta\in i\mathbb{R}.$$
  • (2) $N^{(x)}(x,0,\eta) = I + O\!\left(\frac{1}{\eta}\right)$, $\eta\to\infty$.
According to Proposition 5,
$$q_0(x) = 2i\lim_{\eta\to\infty}(\eta\,N^{(x)}(x,\eta))_{12}.$$
Comparing this with the evaluation of (80) at $y = 0$, we infer that $q_0(x) = q(x,0)$. □
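The same unimodularity check applied to $G^{(x)}$ earlier also applies to the factor $G_1$. The sketch below is illustrative only: it assumes property (3) of Proposition 7, $\varphi(\eta)\overline{\varphi(\bar\eta)} - \varpi(\eta)\overline{\varpi(\bar\eta)} = 1$, and our sign conventions, with conjugated quantities treated as independent symbols.

```python
from sympy import symbols, Matrix, exp, I, simplify

x = symbols('x', real=True)
eta = symbols('eta', real=True)
# Stand-ins for phi(eta), conj phi(conj eta), varpi(eta), conj varpi(conj eta):
phi, phib, w, wb = symbols('phi phib w wb')

theta = 2 * I * eta**2 * x
G1 = Matrix([[1, -(wb / phi) * exp(-theta)],
             [(w / phib) * exp(theta), 1 / (phi * phib)]])

# Impose phi*phib - w*wb = 1, i.e. phib = (1 + w*wb)/phi:
det = simplify(G1.det().subs(phib, (1 + w * wb) / phi))
assert det == 1  # the factor G1 is unimodular
```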
Moreover, when $x = 0$, we have $q(0,y) = h_0(y)$ and $q_x(0,y) = h_1(y)$.
Proof. 
Let $N^{(0,y)}(0,y,\eta)$ be defined by
$$N^{(0,y)}(0,y,\eta) = N(0,y,\eta)L(y,\eta),$$
where $L(y,\eta)$ is given by $L^{(1)}(y,\eta)$, $L^{(2)}(y,\eta)$, $L^{(3)}(y,\eta)$, $L^{(4)}(y,\eta)$ for $\eta\in\Omega_1$, $\eta\in\Omega_2$, $\eta\in\Omega_3$, $\eta\in\Omega_4$, respectively, and
$$N_+^{(y)}(0,y,\eta) = \left(\gamma_3^{\Omega_1\Omega_3}(0,y,\eta), \frac{\gamma_1^{\Omega_1\Omega_2}(0,y,\eta)}{\varphi(\eta)}\right), \quad \eta\in\Omega_1,$$
$$N_-^{(y)}(0,y,\eta) = \left(\gamma_2^{\Omega_2\Omega_4}(0,y,\eta), \frac{\gamma_1^{\Omega_1\Omega_2}(0,y,\eta)}{\overline{s(\bar\eta)}}\right), \quad \eta\in\Omega_2,$$
$$N_+^{(y)}(0,y,\eta) = \left(\frac{\gamma_1^{\Omega_3\Omega_4}(0,y,\eta)}{s(\eta)}, \gamma_2^{\Omega_1\Omega_3}(0,y,\eta)\right), \quad \eta\in\Omega_3,$$
$$N_-^{(y)}(0,y,\eta) = \left(\frac{\gamma_1^{\Omega_3\Omega_4}(0,y,\eta)}{\overline{\varphi(\bar\eta)}}, \gamma_3^{\Omega_2\Omega_4}(0,y,\eta)\right), \quad \eta\in\Omega_4.$$
When y = 0 , we compare (96) with (47) and find that the region does not change.
Let
$$N_{+1}^{(y)} = N_{+1}(0,y,\eta)L^{(1)}(y,\eta), \qquad N_{+2}^{(y)} = N_{+2}(0,y,\eta)L^{(3)}(y,\eta),$$
$$N_{-1}^{(y)} = N_{-1}(0,y,\eta)L^{(2)}(y,\eta), \qquad N_{-2}^{(y)} = N_{-2}(0,y,\eta)L^{(4)}(y,\eta).$$
As an example, we derive two of these formulas; the remaining two are similar. From
$$N_{+1}^{(y)} = N_{-1}^{(y)}G^{(y)}(\eta), \qquad N_{+1}(0,y,\eta)L^{(1)}(y,\eta) = N_{-1}(0,y,\eta)L^{(2)}(y,\eta)G^{(0,y)}(y,\eta),$$
and using $N_{+1}(0,y,\eta) = N_{-1}(0,y,\eta)G_2(0,y,\eta)$ in the above equations, we obtain
$$G_2(0,y;\eta)L^{(1)}(y,\eta) = L^{(2)}(y,\eta)G^{(0,y)}(y,\eta).$$
From
$$N_{+1}^{(y)} = N_{-2}^{(y)}G^{(y)}(\eta), \qquad N_{+1}(0,y,\eta)L^{(1)}(y,\eta) = N_{-2}(0,y,\eta)L^{(4)}(y,\eta)G^{(0,y)}(y,\eta),$$
and using $N_{+1}(0,y,\eta) = N_{-2}(0,y,\eta)G_1(0,y,\eta)$ in the above equations, we obtain
$$G_1(0,y;\eta)L^{(1)}(y,\eta) = L^{(4)}(y,\eta)G^{(0,y)}(y,\eta).$$
Note that the boundary requirements (46) and (47) are still fulfilled by $N(0,y;\eta)$. Assume that the matrices $L^{(1)}$ and $L^{(2)}$ are continuous for $\operatorname{Im}\eta^2 > 0$, while $L^{(3)}$ and $L^{(4)}$ are continuous for $\operatorname{Im}\eta^2 < 0$. As $\eta\to\infty$, the matrix $L$ tends to the identity matrix. Consequently, $L$ satisfies the relations
$$G_2(0,y;\eta)L^{(1)}(y,\eta) = L^{(2)}(y,\eta)G^{(0,y)}(y,\eta), \qquad G_1(0,y;\eta)L^{(1)}(y,\eta) = L^{(4)}(y,\eta)G^{(0,y)}(y,\eta),$$
$$G_3(0,y;\eta)L^{(3)}(y,\eta) = L^{(2)}(y,\eta)G^{(0,y)}(y,\eta), \qquad G_4(0,y;\eta)L^{(3)}(y,\eta) = L^{(4)}(y,\eta)G^{(0,y)}(y,\eta).$$
From the above analysis, the matrices $L^{(1)}(y,\eta)$, $L^{(2)}(y,\eta)$, $L^{(3)}(y,\eta)$, and $L^{(4)}(y,\eta)$ can be derived:
$$L^{(1)}(y,\eta) = \begin{pmatrix} 0 & \dfrac{S(\eta)}{\varphi(\eta)} \\ \dfrac{\varphi(\eta)}{S(\eta)} & a^{+}(\eta)e^{4i\eta^4(L-y)} \end{pmatrix}, \qquad L^{(2)}(y,\eta) = \begin{pmatrix} 0 & \dfrac{1}{d(\eta)} \\ d(\eta) & t(\eta)\overline{S(\bar\eta)}e^{4i\eta^4y} \end{pmatrix},$$
$$L^{(3)}(y,\eta) = \begin{pmatrix} \overline{t(\bar\eta)}S(\eta)e^{-4i\eta^4y} & \overline{d(\bar\eta)} \\ \dfrac{1}{\overline{d(\bar\eta)}} & 0 \end{pmatrix}, \qquad L^{(4)}(y,\eta) = \begin{pmatrix} \overline{a^{+}(\bar\eta)}e^{-4i\eta^4(L-y)} & \dfrac{\overline{\varphi(\bar\eta)}}{\overline{S(\bar\eta)}} \\ \dfrac{\overline{S(\bar\eta)}}{\overline{\varphi(\bar\eta)}} & 0 \end{pmatrix}.$$
Based on our calculations, the matrices $L^{(j)}(y,\eta)$ ($j = 1, 2, 3, 4$) satisfy the jump conditions (102). It can be verified that the transformation (95) replaces the residue conditions of Proposition 4 with the residue conditions of Theorem 3, in a manner analogous to the proof of $q(x,0) = q_0(x)$. □

4. Conclusions and Remarks

In this article, we thoroughly examined the coupled Kundu equations on the half-line using the Fokas method. First, by employing spectral and asymptotic analyses, we partitioned the complex plane of the spectral parameter. Subsequently, through rigorous calculations, we determined the jump matrices between the different regions and ultimately expressed the solution of the equations in terms of a Riemann–Hilbert problem. The exact solution of the equations, including the exact single-soliton solution together with a 3D model diagram, can be found in [29]. The Fokas method can also be employed to discuss the coupled Kundu equations on a finite interval; our future research will concentrate on this aspect.

Author Contributions

Conceptualization, J.H.; methodology, J.H.; formal analysis, J.H.; writing—original draft preparation, J.H.; writing—review and editing, J.H.; funding acquisition, J.H. and N.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Nos. 11805114; 1197050803) and the SDUST Research Fund (No. 2018TDJH101).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors would like to thank the reviewers and the editor for their valuable comments for improving the original manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gardner, C.S.; Greene, J.M.; Kruskal, M.D.; Miura, R.M. Korteweg-de Vries Equation and Generalizations. VI. Methods for Exact Solution. Commun. Pure Appl. Math. 1974, 27, 97–133. [Google Scholar] [CrossRef]
  2. Fokas, A.; Zakharov, V. Important Developments in Soliton Theory; Springer: Berlin, Germany, 1993. [Google Scholar]
  3. Fokas, A. On a Class of Physically Important Integrable Equations. Physica D 1995, 87, 145–150. [Google Scholar] [CrossRef]
  4. Dong, H.; Zhang, Y.; Zhang, X. The New Integrable Symplectic Map and the Symmetry of Integrable Nonlinear Lattice Equation. Commun. Nonlinear Sci. 2016, 36, 354–365. [Google Scholar] [CrossRef]
  5. Fang, Y.; Dong, H.; Hou, Y.; Kong, Y. Frobenius Integrable Decompositions Of Nonlinear Evolution Equations with Modified Term. Appl. Math. Comput. 2014, 226, 435–440. [Google Scholar] [CrossRef]
  6. Fokas, A. A Unified Transform Method for Solving Linear and Certain Nonlinear PDEs. Proc. R. Soc. Lond. Ser. A-Math. Phys. Eng. Sci. 1997, 453, 1411–1443. [Google Scholar] [CrossRef]
  7. Fokas, A.; Its, A.; Sung, L.Y. The Nonlinear Schrödinger Equation on the Half-Line. Nonlinearity 2005, 18, 1771–1822. [Google Scholar] [CrossRef]
  8. Fokas, A.; Its, A. The Nonlinear Schrödinger Equation on the Interval. J. Phys. A 2004, 37, 6091–6114. [Google Scholar] [CrossRef]
  9. Boutet de Monvel, A.; Shepelsky, D. The mKdV Equation on the Half-Line. J. Inst. Math. Jussieu 2004, 3, 139–164. [Google Scholar] [CrossRef]
  10. Boutet de Monvel, A.; Shepelsky, D. Initial Boundary Value Problem for the mKdV Equation on a Finite Interval. Ann. Inst. Fourier 2004, 54, 1477–1495. [Google Scholar] [CrossRef]
  11. Boutet de Monvel, A.; Shepelsky, D. Long Time Asymptotics of the Camassa-Holm Equation on the Half-Line. Ann. Inst. Fourier 2009, 59. [Google Scholar]
  12. Lenells, J. The Derivative Nonlinear Schrödinger Equation on the Half-Line. Physica D 2008, 237, 3008–3019. [Google Scholar] [CrossRef]
  13. Lenells, J. An Integrable Generalization of the Sine-Gordon Equation on the Half-Line. IMA J. Appl. Math. 2011, 76, 554–572. [Google Scholar] [CrossRef]
  14. Lenells, J.; Fokas, A. On a Novel Integrable Generalization of the Nonlinear Schrödinger Equation. Nonlinearity 2009, 22, 709–722. [Google Scholar] [CrossRef]
  15. Lenells, J.; Fokas, A. An Integrable Generalization of the Nonlinear Schrödinger Equation on the Half-Line and Solitons. Inverse Probl. 2009, 25, 1–32. [Google Scholar] [CrossRef]
  16. Fan, E. A Family of Completely Integrable Multi-Hamiltonian Systems Explicitly Related to some Celebrated Equations. J. Math. Phys. 2001, 42, 95–99. [Google Scholar] [CrossRef]
  17. Xu, J.; Fan, E. A Riemann-Hilbert Approach to the Initial-Boundary Problem for Derivative Nonlinear Schrödinger Equation. Acta Math. Sci. 2014, 34, 973–994. [Google Scholar] [CrossRef]
  18. Xu, J.; Fan, E. Initial-Boundary Value Problem for Integrable Nonlinear Evolution Equation with 3 × 3 Lax Pairs on the Interval. Stud. Appl. Math. 2016, 136, 321–354.
  19. Chen, M.; Chen, Y.; Fan, E. The Riemann-Hilbert Analysis to the Pollaczek-Jacobi Type Orthogonal Polynomials. Stud. Appl. Math. 2019, 143, 42–80.
  20. Chen, M.; Fan, E.; He, J. Riemann-Hilbert Approach and the Soliton Solutions of the Discrete MKdV Equations. Chaos Soliton Fract. 2023, 168, 113209.
  21. Zhao, P.; Fan, E. A Riemann-Hilbert Method to Algebro-Geometric Solutions of the Korteweg-de Vries Equation. Physica D 2023, 454, 133879.
  22. Zhang, N.; Xia, T.; Hu, B. A Riemann-Hilbert Approach to the Complex Sharma-Tasso-Olver Equation on the Half Line. Commun. Theor. Phys. 2017, 68, 580.
  23. Zhang, N.; Xia, T.; Fan, E. A Riemann-Hilbert Approach to the Chen-Lee-Liu Equation on the Half Line. Acta Math. Sci. 2018, 34, 493–515.
  24. Li, J.; Xia, T. Application of the Riemann-Hilbert Approach to the Derivative Nonlinear Schrödinger Hierarchy. Int. J. Mod. Phys. B 2023, 37, 2350115.
  25. Wen, L.; Zhang, N.; Fan, E. N-Soliton Solution of the Kundu-Type Equation via Riemann-Hilbert Approach. Acta Math. Sci. 2020, 40, 113–126.
  26. Hu, B.; Lin, J.; Zhang, L. On the Riemann-Hilbert Problem for the Integrable Three-Coupled Hirota System with a 4 × 4 Matrix Lax Pair. Appl. Math. Comput. 2022, 428, 127202.
  27. Hu, B.; Zhang, L.; Xia, T. On the Riemann-Hilbert Problem of a Generalized Derivative Nonlinear Schrödinger Equation. Commun. Theor. Phys. 2021, 73, 015002.
  28. Hu, B.; Zhang, L.; Xia, T.; Zhang, N. On the Riemann-Hilbert Problem of the Kundu Equation. Appl. Math. Comput. 2020, 381, 125262.
  29. Tao, M.; Dong, H. N-Soliton Solutions of the Coupled Kundu Equations Based on the Riemann-Hilbert Method. Math. Probl. Eng. 2019, 3, 1–10.
  30. Kundu, A.; Strampp, W.; Oevel, W. Gauge Transformations of Constrained KP Flows: New Integrable Hierarchies. J. Math. Phys. 1995, 36, 2972–2984.
  31. Luo, J.; Fan, E. Dbar Dressing Method for the Coupled Gerdjikov-Ivanov Equation. Appl. Math. Lett. 2020, 110, 106589.
  32. Kodama, Y. Optical Solitons in a Monomode Fiber. J. Stat. Phys. 1985, 35, 597–614.
  33. Pinar, A.; Muslum, O.; Aydin, S.; Mustafa, B.; Sebahat, E. Optical Solitons of Stochastic Perturbed Radhakrishnan-Kundu-Lakshmanan Model with Kerr Law of Self-Phase-Modulation. Mod. Phys. Lett. B 2024, 38, 2450122.
  34. Monika, N.; Shubham, K.; Sachin, K. Dynamical Forms of Various Optical Soliton Solutions and Other Solitons for the New Schrödinger Equation in Optical Fibers Using Two Distinct Efficient Approaches. Mod. Phys. Lett. B 2024, 38, 2450087.
  35. Mjolhus, E. On the Modulational Instability of Hydromagnetic Waves Parallel to the Magnetic Field. J. Plasma Phys. 1976, 16, 321–334.
  36. Han, M.; Garlow, J.; Du, K.; Cheong, S.; Zhu, Y. Chirality Reversal of Magnetic Solitons in Chiral Cr1/3TaS2. Appl. Phys. Lett. 2023, 123, 022405.
Figure 1. The (x, y)-domain.
Figure 2. Three different points in the (x, y)-region.
Figure 3. γ1, γ2, and γ3 in the (x, y)-domain.
Figure 4. The η-plane is divided into sets Ωj, where j = 1, 2, 3, 4.

Share and Cite

Hu, J.; Zhang, N. Initial Boundary Value Problem for the Coupled Kundu Equations on the Half-Line. Axioms 2024, 13, 579. https://doi.org/10.3390/axioms13090579