Article

Applications of the R-Matrix Method in Integrable Systems

1 School of Mathematics and Information Sciences, Weifang University, Weifang 261061, China
2 School of Mathematics, China University of Mining and Technology, Xuzhou 221116, China
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(9), 1623; https://doi.org/10.3390/sym15091623
Submission received: 29 June 2023 / Revised: 30 July 2023 / Accepted: 1 August 2023 / Published: 23 August 2023
(This article belongs to the Section Mathematics)

Abstract:
Based on work related to the R-matrix theory, we first abstract the Lax pairs proposed by Blaszak and Sergyeyev into a unified form. Then, a generalized zero-curvature equation expressed by the Poisson bracket is exhibited. As an application of this theory, a generalized (2+1)-dimensional integrable system is obtained, from which a resulting generalized Davey–Stewartson (DS) equation and a generalized Pavlov equation (gPe) are further obtained. Via the use of a nonisospectral zero-curvature-type equation, some (3+1) -dimensional integrable systems are produced. Next, we investigate the recursion operator of the gPe using an approach under the framework of the R-matrix theory. Furthermore, a type of solution for the resulting linearized equation of the gPe is produced by using its conserved densities. In addition, by applying a nonisospectral Lax pair, a (3+1)-dimensional integrable system is generated and reduced to a Boussinesq-type equation in which the recursion operators and the linearization are produced by using a Lie symmetry analysis; the resulting invertible mappings are presented as well. Finally, a Bäcklund transformation of the Boussinesq-type equation is constructed, which can be used to generate some exact solutions.
PACS:
05.45.Yv; 02.30.Jr; 02.30.Ik

1. Introduction

The R-matrix approach has two important applications. One is to systematically construct consistent Lax pairs (L, B) that generate dispersionless integrable systems; the other is to systematically construct an infinite hierarchy of commuting symmetries for a given dispersionless system. First, we recall some basic facts about the R-matrix formalism [1,2].
Let g be a Lie algebra (in general, infinite-dimensional). The Lie bracket $[\cdot,\cdot]$ defines the adjoint action of g on g: $\mathrm{ad}_a b = [a, b]$.
Definition 1
([3]). An R-structure is a Lie algebra g equipped with a linear map $R: g \to g$ (called the R-matrix) such that the bracket
$[a, b]_R := [Ra, b] + [a, Rb], \quad a, b \in g,$
is another Lie product on g. The skew symmetry of (1) is obvious.
Lemma 1
([3]). A sufficient condition for R to be an R-matrix is
$[R(a), R(b)] - R([a, b]_R) = -\alpha [a, b],$
where α is some real number. Equation (2) is called the Yang–Baxter equation. It can be verified that a sufficient condition for the Jacobi identity to hold is the Yang–Baxter equation for R. How do we find such an R? Assume that the Lie algebra g can be split into a direct sum of Lie subalgebras $g_+$ and $g_-$, that is,
$g = g_+ \oplus g_-, \quad [g_\pm, g_\pm] \subset g_\pm, \quad g_+ \cap g_- = \{0\}.$
Denoting the projections onto these subalgebras by $P_\pm$, it is easy to verify that
$R = \tfrac{1}{2}(P_+ - P_-) = P_+ - \tfrac{1}{2}$
solves Equation (2) when $\alpha = \tfrac{1}{4}$. Hence, it defines an R-structure on g.
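As a quick sanity check of this splitting construction (an illustration added here, not part of the original derivation), the following numerical sketch takes g = gl(2, R), splits it into the lower-triangular and strictly upper-triangular subalgebras, sets R = ½(P₊ − P₋), and verifies the modified Yang–Baxter identity (2) with α = 1/4 on randomly chosen matrices.

```python
import numpy as np

def bracket(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

def P_plus(a):
    """Projection onto the lower-triangular subalgebra g_+."""
    return np.tril(a)

def P_minus(a):
    """Projection onto the strictly upper-triangular subalgebra g_-."""
    return np.triu(a, k=1)

def R(a):
    """R-matrix R = (P_+ - P_-)/2 associated with the splitting."""
    return 0.5 * (P_plus(a) - P_minus(a))

def bracket_R(a, b):
    """Deformed bracket [a, b]_R = [Ra, b] + [a, Rb]."""
    return bracket(R(a), b) + bracket(a, R(b))

rng = np.random.default_rng(0)
a, b = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))

# Modified Yang-Baxter identity (2): [Ra, Rb] - R([a, b]_R) = -(1/4) [a, b].
lhs = bracket(R(a), R(b)) - R(bracket_R(a, b))
rhs = -0.25 * bracket(a, b)
print(np.allclose(lhs, rhs))  # expected: True
```

The same check passes for any splitting of a matrix Lie algebra into complementary subalgebras, which is exactly the situation exploited below.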
Let $L_i \in g$, $i \in \mathbb{N}$. We consider the associated hierarchies of flows (Lax hierarchies):
$(L_n)_{t_r} = [R L_r, L_n], \quad r, n \in \mathbb{N}.$
Suppose that R commutes with all derivatives $\partial_{t_n}$, that is,
$(RL)_{t_n} = R L_{t_n}, \quad n \in \mathbb{N},$
and obeys the classical modified Yang–Baxter equation in (2). One can verify that the following conditions are equivalent:
Lemma 2
([4]). (i) The zero-curvature equations
$(RL_r)_{t_s} - (RL_s)_{t_r} + [RL_r, RL_s] = 0, \quad r, s \in \mathbb{N}$
hold true;
(ii) All L i commute in g:
$[L_i, L_j] = 0, \quad i, j \in \mathbb{N}.$
Consider an $L \in g$ and its associated Lax hierarchy, which extends the systems by adding an extra independent variable y:
$L_{t_r} = [RL_r, L] + (RL_r)_y, \quad r \in \mathbb{N}.$
Suppose that $L_i \in g$, $i \in \mathbb{N}$, are such that Lemma 2 holds for all $r, s \in \mathbb{N}$ and the R-matrix satisfies (4). Then, the flows in (7) commute. Via the so-called Lax–Novikov equation
$[L_j, L] + (L_j)_y = 0, \quad j \in \mathbb{N},$
and noting $B_i = P_+ L_i$, Equations (3), (5), and (7) take the following forms:
$(L_s)_{t_r} = [B_r, L_s], \quad r, s \in \mathbb{N},$
$(B_r)_{t_s} - (B_s)_{t_r} + [B_r, B_s] = 0,$
$L_{t_r} = [B_r, L] + (B_r)_y, \quad r \in \mathbb{N}.$
The usual approach for constructing a commutative subalgebra spanned by L i , whose existence ensures commutativity of the flows in (3) and (7), is as follows:
The commutative subalgebra is generated by rational powers of a given element L g when the Lie algebra in question is a Poisson algebra that obeys the Leibniz rule:
[ a , b c ] = [ a , b ] c + b [ a , c ] .
However, in the (3+1)-dimensional setting, this construction no longer works, since the Leibniz rule is not required to hold. This is the case for (3+1)-dimensional dispersionless systems, where the Lie algebra is a Jacobi algebra. Hence, instead of explicitly constructing commuting $L_i$, as in [4], the zero-curvature constraints in (5) are imposed on chosen elements $L_i \in g$, $i \in \mathbb{N}$.
For the (3+1)-dimensional case, we consider a commutative and associative algebra A of formal series in p:
$A = \Big\{ f = \sum_i u_i p^i \Big\}$
with ordinary dot multiplication:
$f_1 \cdot f_2 \equiv f_1 f_2, \quad f_1, f_2 \in A.$
The coefficients u i of these series are assumed to be smooth functions of x , y , z and time t. The Jacobi structure on A is induced by the contact bracket:
$[f_1, f_2] \equiv \{f_1, f_2\}_C = [f_1, f_2]_{p,x} + [f_1, f_2]_{0,z} - p\,[f_1, f_2]_{p,z},$
where $[f_i, f_j]_{\alpha,\beta} = \dfrac{\partial f_i}{\partial \alpha}\dfrac{\partial f_j}{\partial \beta} - \dfrac{\partial f_j}{\partial \alpha}\dfrac{\partial f_i}{\partial \beta}$ for $\alpha \in \{p, 0\}$, $\beta \in \{x, z\}$, with $\partial/\partial 0$ understood as the identity operator (so that $[f_1, f_2]_{0,z} = f_1 \partial_z f_2 - f_2 \partial_z f_1$). We call the algebra $g = (A, \{\cdot, \cdot\}_C)$ the Jacobi algebra.
The flow in (8) can be regarded as the compatibility condition of the Lax pair:
$L_s \psi = \lambda \psi, \quad \psi_{t_r} = B_r \psi.$
Similarly, the flows in (9) and (10) can be regarded as having the following Lax pairs, respectively:
$\psi_{t_s} = B_s \psi, \quad \psi_{t_r} = B_r \psi,$
$\psi_y = L \psi, \quad \psi_{t_r} = B_r \psi.$
The Lax pairs (13) and (15) can be abstracted as follows:
$E = L(p, u), \quad \psi_t = B(p, u),$
$\psi_y = L(p, u), \quad \psi_t = B(p, u),$
where p can be taken as some function of $x, y, z$. For example, if $p = \psi_x$, (16) and (17) become (3) and (4) in [4,5], and $L(p, u)$, $B(p, u)$ are polynomials in p, $u = (u_1, \ldots, u_n)$. Setting $L(p, u) = \psi_z \bar{L}(p, u)$, $B(p, u) = \psi_z \bar{B}(p, u)$, and taking $p = \psi_x/\psi_z$, (17) turns into the following form:
$\psi_y/\psi_z = \bar{L}(\psi_x/\psi_z, u), \quad \psi_t/\psi_z = \bar{B}(\psi_x/\psi_z, u),$
which can be used to generate (3+1)-dimensional integrable systems. Therefore, by applying the compatibility conditions of the Lax pairs in (16)–(18), some (1+1)-, (2+1)-, and (3+1)-dimensional integrable hierarchies can be produced. In addition, the Lax pairs (16) and (17) can be expressed by Poisson brackets. For a 2n-dimensional symplectic manifold M and any $H, F \in \mathcal{F}(M)$, the associative algebra of smooth functions on M, the Poisson bracket is defined as
$\{H, F\}_P = X_H(F) = \sum_{i=1}^{n} \left( \frac{\partial H}{\partial p_i} \frac{\partial}{\partial x_i} - \frac{\partial H}{\partial x_i} \frac{\partial}{\partial p_i} \right) F,$
where ( x i , p i ) are local coordinates of M, called the Darboux coordinates. Thus, the Lax pairs (13) and (15) can be written as
$X_L(\phi) = \{L, \phi\}_P = 0, \quad \phi_t = X_B(\phi) = \{B, \phi\}_P,$
where ϕ = ϕ ( x , t , p ) .
ϕ y = X L ( ϕ ) = { L , ϕ } P , ϕ t = X B ( ϕ ) = { B , ϕ } P ,
where ϕ = ϕ ( x , y , t , p ) .
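For concreteness, the canonical Poisson bracket defined above is easy to implement symbolically. The short sketch below (our illustration, restricted to one degree of freedom, with the sample functions chosen arbitrarily) verifies its skew symmetry and the Jacobi identity, the two properties that make the smooth functions on M a Poisson algebra.

```python
import sympy as sp

x, p = sp.symbols('x p')

def poisson(H, F):
    """Canonical Poisson bracket {H, F}_P = H_p F_x - H_x F_p (one degree of freedom)."""
    return sp.diff(H, p) * sp.diff(F, x) - sp.diff(H, x) * sp.diff(F, p)

# Sample smooth functions on phase space (chosen only for illustration).
f = x**2 * p + sp.sin(x)
g = p**3 + x * p
h = sp.exp(x) * p**2

# Skew symmetry: {f, g}_P + {g, f}_P = 0.
print(sp.simplify(poisson(f, g) + poisson(g, f)))        # 0

# Jacobi identity: {f, {g, h}} + {g, {h, f}} + {h, {f, g}} = 0.
jacobi = poisson(f, poisson(g, h)) + poisson(g, poisson(h, f)) + poisson(h, poisson(f, g))
print(sp.simplify(jacobi))                                # 0
```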
Using the results from contact geometry, the two kinds of linear nonisospectral Lax pairs in (3+1)-dimensions that generalize (20) are as follows: The first one replaces the Poisson bracket { · , · } P with the contact bracket in (12) and gives us the Lax pair of the following form:
ϕ y = { L , ϕ } C , ϕ t = { B , ϕ } C ,
where, now, ϕ = ϕ ( x , y , z , t , p ) .
The second one replaces the Hamiltonian vector field $X_H$ with its contact counterpart $X_H$, and we have
χ y = X L ( χ ) , χ t = X B ( χ ) ,
where χ = χ ( x , y , z , t , p ) . The Lax pair of the form (22) is called a linear contact Lax pair, where
$X_H(\cdot) = \{H, \cdot\}_C + \{1, H\}_C\,\cdot = H_p\,\partial_x + (H - pH_p)\,\partial_z - (H_x - pH_z)\,\partial_p.$
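The following sympy sketch (a consistency check we add, with the functions H and g chosen arbitrarily as Laurent polynomials in p) confirms symbolically that the contact vector field $X_H$ written above and the contact bracket (12) indeed satisfy the relation $X_H(g) = \{H, g\}_C + g\{1, H\}_C$.

```python
import sympy as sp

x, z, p = sp.symbols('x z p')

def contact_bracket(f, g):
    """Contact (Jacobi) bracket (12):
    {f, g}_C = f_p g_x - g_p f_x + f g_z - g f_z - p (f_p g_z - g_p f_z)."""
    return (sp.diff(f, p) * sp.diff(g, x) - sp.diff(g, p) * sp.diff(f, x)
            + f * sp.diff(g, z) - g * sp.diff(f, z)
            - p * (sp.diff(f, p) * sp.diff(g, z) - sp.diff(g, p) * sp.diff(f, z)))

def X(H, g):
    """Contact vector field X_H = H_p d/dx + (H - p H_p) d/dz - (H_x - p H_z) d/dp, applied to g."""
    return (sp.diff(H, p) * sp.diff(g, x)
            + (H - p * sp.diff(H, p)) * sp.diff(g, z)
            - (sp.diff(H, x) - p * sp.diff(H, z)) * sp.diff(g, p))

# Sample elements of the Jacobi algebra: Laurent polynomials in p with (x, z)-dependent coefficients.
H = x * p**2 + sp.sin(z) * p + x * z
g = p**3 + sp.exp(x) / p + z**2

# Relation stated above: X_H(g) = {H, g}_C + g * {1, H}_C.
print(sp.simplify(X(H, g) - contact_bracket(H, g) - g * contact_bracket(sp.Integer(1), H)))  # 0
```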
It is easy to verify that the compatibility condition of (22) presents
$\{L_t - B_y + \{L, B\}_C,\ \phi\}_C = 0,$
where L = L ( p , u ) , B = B ( p , u ) , which gives rise to the following zero-curvature type equation due to ϕ being arbitrary:
$L_t - B_y + \{L, B\}_C = 0.$
In this paper, we apply the Lax pair in (16), taking p = ψ, together with the R-matrix method to generate a new (2+1)-dimensional integrable system, from which a generalized system of the DS equation is obtained based on a matrix associative algebra; we call it a generalized DS hierarchy. By reducing the integrable hierarchy, we obtain linear and nonlinear scalar (2+1)-dimensional equations. With the help of the nonisospectral zero-curvature-type equation in (23), a kind of integrable hierarchy is obtained, which can be reduced to several (3+1)-dimensional integrable systems; the recursion operators and linearizations of some of them are generated by using a Lie-group analysis.

2. A Generalized DS Hierarchy and Its Reductions

In this section, we mainly focus on deriving a new generalized DS integrable hierarchy by choosing a new Lax pair with matrix-function coefficients.
Consider a linear Lax pair
$\psi_y = (Mp + N + Kp^{-1})\psi, \quad \psi_t = (Bp^2 + Cp + D)\psi,$
where $M = \begin{pmatrix} \alpha & 0 \\ 0 & -\alpha \end{pmatrix}$, $N = \begin{pmatrix} 0 & u \\ v & 0 \end{pmatrix}$, $K = \begin{pmatrix} k_1 & k_2 \\ k_3 & k_4 \end{pmatrix}$; B, C, and D are all matrices to be determined; α is a real constant independent of x, y, and t; and u and v are functions of x, y, and t. Actually, the Lax pair in (24) can be written in the form (16):
$L\psi = \lambda\psi, \quad L = \partial_y - Mp - N - Kp^{-1}, \qquad \psi_t = (Bp^2 + Cp + D)\psi =: \mathcal{B}\psi,$
the compatibility of which is exactly the Lax equation:
$L_t + [L, \mathcal{B}] = 0.$
It is easy to calculate that (26) admits the following equation system:
B M = M B B = β M , β = c o n s t , M C N B + B N + C M = 0 , C y M C p M D N C + 2 B N p K + B K + D M + C N = 0 , N t M D p N D K C + B N p p + D y + 2 B K p + C N x C K + D N = 0 , K t + C K p + D K + B K p p = 0 , C p = D .
Taking p = x , the first equation above leads to B = β M . Noting C = c 1 c 2 c 3 c 4 , D = d 1 d 2 d 3 d 4 , then the last equation becomes
D = C x .
From the second equation in (27), we have
C = β N .
The third equation in (27) gives that
β u y + α β u x 2 α d 2 ( 1 + α β ) k 2 = 0 ,
β v y α β v x + 2 α d 3 ( 1 α β ) k 3 = 0 ,
( 1 + α β ) k 1 = 0 k 1 = 0 ; ( 1 α β ) k 4 = 0 k 4 = 0 .
And,
d 2 = 1 2 α [ β u y + α β u x ( 1 + α β ) k 2 ] ,
d 3 = 1 2 α [ β v y + α β v x + ( 1 α β ) k 3 ] .
The fourth equation in (27) admits, by using (29), that
u t α d 2 x u d 4 + α β u x x + d 2 y + 2 α β k 2 x + d 1 u = 0 ,
v t + α d 3 x v d 1 α β v x x 2 α β k 3 x + d 4 v + d 3 y = 0 ,
α d 1 x u d 3 β k 2 v + β u v x β u k 3 + d 2 v + d 1 y = 0 ,
α d 4 x v d 2 β k 3 u + β v u x β v k 2 + d 3 u + d 4 y = 0 .
The last two equations in (27) can be written as
d 2 k 3 + β u k 3 x = 0 , β v k 2 x + d 3 k 2 = 0 , k 2 t + d 1 k 2 + α β k 2 x x = 0 , k 2 t + d 4 k 3 α β k 3 x x = 0 , d 1 = d 4 = 0 , d 2 = β u x , d 3 = β v x ,
which is equivalent to
k 3 = c 1 u , c 1 > 0 ; k 2 = c 2 v , c 2 > 0 , ( 1 v ) t = α β ( 1 v ) x x , ( 1 u ) t = α β ( 1 v ) x x .
Hence, the matrix K = 0 c 2 v c 1 u 0 , c 1 , c 2 > 0 . In terms of (32)–(35) and by the use of (37), one infers that
u t 1 2 α β u x x β 2 α u y y 1 2 ( 1 + 5 α β ) c 2 ( 1 v ) x + 1 2 α ( 1 + α β ) c 2 ( 1 v ) y = 0 , v t + 1 2 α β v x x + β 2 α v y y 1 2 ( 1 5 α β ) c 1 ( 1 u ) x + 1 2 α ( 1 α β ) c 1 ( 1 u ) y = 0 , β ( u v ) y α β ( u v ) x + c 1 + c 2 + α β ( c 2 c 1 ) + 2 α β ( c 1 + c 2 ) = 0 , β ( u v ) y α β ( u v ) x + ( 2 α β 1 ) ( c 1 + c 2 ) + α β ( c 1 c 2 ) = 0 .
Let β = 2 i α ; then, (38) becomes
u t i α 2 u x x i u y y 1 2 ( 1 + 10 i α 2 ) c 2 ( 1 v ) x + 1 2 α ( 1 + 2 i α 2 ) c 2 ( 1 v ) y = 0 , v t + i α 2 v x x + i v y y 1 2 ( 1 10 i α 2 ) c 1 ( 1 u ) x + 1 2 α ( 1 2 i α 2 ) c 1 ( 1 u ) y = 0 , 2 i α ( u v ) y 2 i α 2 ( u v ) x + ( 4 i α 2 + 1 ) ( c 1 + c 2 ) + 2 i α 2 ( c 2 c 1 ) = 0 , 2 i α ( u v ) y 2 i α 2 ( u v ) x + ( 4 i α 2 1 ) ( c 1 + c 2 ) + 2 i α 2 ( c 1 c 2 ) = 0 .
Set v = u * and c 1 = c 2 ; then, (39) reduces to
u t i α 2 u x x i u y y 1 2 ( 1 + 10 i α 2 ) c 1 ( 1 u * ) x + 1 2 α ( 1 + 2 i α 2 ) c 1 ( 1 u * ) y = 0 , | u | x 2 = 4 i α | u | y 2 ,
which is a rational DS-type equation. Setting the matrix K = 0 , similar to the discussion in Ref. [6], we can obtain the DS equation:
$iu_t + u_{xx} + \alpha^2 u_{yy} - 2k\alpha^2 |u|^2 u + Su = 0,$
where S satisfies
$\alpha^2 S_{xx} - S_{yy} - 4k(|u|^2)_{xx} = 0,$
where α and k are constants.
Remark 1.
In this section, given an explicit expression of the first equation in (24), we can determine the second linear spectral expression in (24) by virtue of the Lax equation. Obviously, by choosing a different linear spectral problem (24), we can generate various integrable systems. For example, we consider the following modified linear spectral problem, which is simpler than (24) (scalar functions rather than matrices), obtained by setting $p = \lambda$, $M = 1$, $N = u_x$, $K = v$:
ψ y = ( λ u x + v λ 1 ) ψ x + α ψ x .
By letting p = λ , B = 1 , C = u x , D = w , one obtains
ψ t = ( λ 2 λ u x + w ) ψ x .
The compatibility condition of the Lax pair (42) and (43) leads to
u x t + ( α u x u y ) u x x α v x + u x ( u x y α u x x ) + α ( α u x x u y y ) = w y , v t + ( α u x u y ) v ( α u x u y ) = 0 .
Taking w = u y , v = 0 , (44) reduces to
$u_{xt} - u_y u_{xx} + u_x u_{xy} + \alpha(\alpha u_{xx} - u_{xy}) + u_{yy} = 0.$
Setting α = 0 , (45) again reduces to
$u_{xt} - u_y u_{xx} + u_x u_{xy} + u_{yy} = 0,$
which is the Pavlov equation. Hence, Equation (45) is known as a generalized Pavlov equation (gPe). In what follows, we want to deduce the recursion operator of the generalized Pavlov equation in the setting of Ref. [5]. First, we recall some preliminaries. Consider a system of m PDEs
F I = 0 , I = 1 , 2 , , m
in d independent variables x i ( i = 1 , 2 , , d ) for an unknown N-component vector function u = ( u 1 , , u N ) T . A total derivative with respect to x i reads
D x j = x j + α = 1 N i 1 , , i n = 0 u i 1 i j 1 ( i j + 1 ) i j + 1 i n α u i 1 i n α ,
where $u^{\alpha}_{i_1 \ldots i_n} = \partial^{\,i_1 + \cdots + i_n} u^{\alpha} / \partial (x^1)^{i_1} \cdots \partial (x^n)^{i_n}$ and $u^{\alpha}_{0 \ldots 0} \equiv u^{\alpha}$.
For a local N-component vector function U, it is a symmetry for the system (46) if and only if U satisfies the linearized version of this system, namely, $\ell_F(U) = 0$, where
$\ell_F = \sum_{\alpha=1}^{N} \sum_{i_1, \ldots, i_n = 0}^{\infty} \frac{\partial F}{\partial u^{\alpha}_{i_1 \ldots i_n}}\, D_{x^1}^{i_1} \cdots D_{x^n}^{i_n}.$
Denoting
$A_i = A_{i0} + \sum_{j=1}^{d} A_{ij} D_{x^j}, \quad B_i = B_{i0} + \sum_{j=1}^{d} B_{ij} D_{x^j}, \quad i = 1, 2,$
$L = L_0 + \sum_{k=1}^{d} L_k D_{x^k}, \quad M = M_0 + \sum_{k=1}^{d} M_k D_{x^k},$
where $A_{ij} = A_{ij}(x, u)$, $B_{ij} = B_{ij}(x, u)$, and $L_k = L_k(x, u)$, $k = 0, 1, \ldots, d$, are $N \times m$ matrices, and $M_k = M_k(x, u)$ are $m \times N$ matrices. In Ref. [7], three propositions are presented as follows:
Proposition 1.
For the system (24), suppose that
(i) 
[ A 1 , A 2 ] = 0 , [ B 1 , B 2 ] = 0 ;
(ii) 
$A_1 B_2 - A_2 B_1 = L\, \ell_F$;
(iii) 
$\ell_F = M\, (B_1 A_2 - B_2 A_1)$;
(iv) 
There are $p, q \in \{1, \ldots, d\}$, $p \neq q$, such that we can express $D_{x^p} \tilde{U}$ and $D_{x^q} \tilde{U}$ from the relations
A i ( U ˜ ) = B i ( U ) , i = 1 , 2 ,
and then, (47) defines a recursion operator for (24), i.e., whenever U is a symmetry for (24), so is U ˜ defined by (47).
Proposition 2.
For the system (24), suppose that
(i) 
[ A 1 , A 2 ] = 0 , [ B 1 , B 2 ] = 0 ,
(ii) 
$\ell_F^{+} = L\, (B_1 A_2 - B_2 A_1)$,
(iii) 
$A_1 B_2 - A_2 B_1 = M\, \ell_F^{+}$,
(iv) 
There exist $p, q \in \{1, \ldots, d\}$, $p \neq q$, such that we can express $D_{x^p} \tilde{\gamma}$ and $D_{x^q} \tilde{\gamma}$ from the relation
A i ( γ ˜ ) = B i ( γ ) , i = 1 , 2 ,
and then, (48) defines an adjoint recursion operator for (24), i.e., whenever γ is a cosymmetry for (24), then so is γ ˜ defined by (48).
Remark 2.
A so-called cosymmetry γ is a quantity dual to a symmetry; it satisfies the system $\ell_F^{+}(\gamma) = 0$.
Proposition 3.
Under the assumptions of Propositions 1 and 2, the operators $L_i = \lambda A_i - B_i$, $i = 1, 2$, where λ is a spectral parameter, satisfy $[L_1, L_2] = 0$ and constitute a Lax pair for (24).
According to the above known basic facts, it is easy to find the Lax pair of (45) as follows:
L 1 = ψ y + ( α + λ + u x ) ψ x = 0 , L 2 = ψ t λ ψ y + ( α λ u y ) ψ x = 0 .
For a nonlocal symmetry for (45) with the form
Φ = Φ ( x ¯ , u ) ,
where x ¯ = ( x , y , t ) , we require that there exist operators φ i that are linear in λ and such that
φ i Φ = 0 , i = 1 , 2 .
Then, one should extract A i and B i based on Proposition 1 and φ i . How do we seek such operators? For Equation (45), starting with its Lax pair in (49), we have
λ ψ x = ψ y u x ψ x + α ψ x ,
λ ψ y = ψ t + ( α λ u y ) ψ x .
It follows that
φ 1 = λ D x D y u x D x + u x x + α D x ,
φ 2 = λ D y + D t + α λ D x u y D x + u x y .
Thus, one obtains that
A 1 = D x , B 1 = D y + u x D x u x x α ,
A 2 = D y + α D x , B 2 = D t + u y D x u x y .
Applying the Gateaux derivative $\frac{d}{d\epsilon}\big|_{\epsilon=0} f(u + \epsilon\sigma) = f'(u)\sigma$, the linearized equation of the gPe (45) reads
$\sigma_{xt} - u_y \sigma_{xx} - u_{xx} \sigma_y + \sigma_x u_{xy} + u_x \sigma_{xy} + \alpha(\alpha \sigma_{xx} - \sigma_{xy}) + \sigma_{yy} = 0.$
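This linearization can be reproduced mechanically. The sympy sketch below (an illustration we add, using Equation (45) exactly in the form displayed above) computes the Gateaux derivative of the left-hand side of (45) and confirms that it agrees with the linearized equation just given.

```python
import sympy as sp

x, y, t, eps, alpha = sp.symbols('x y t epsilon alpha')
u = sp.Function('u')(x, y, t)
sigma = sp.Function('sigma')(x, y, t)

def F(v):
    """Left-hand side of the generalized Pavlov equation (45) as written above."""
    return (sp.diff(v, x, t) - sp.diff(v, y) * sp.diff(v, x, 2)
            + sp.diff(v, x) * sp.diff(v, x, y)
            + alpha * (alpha * sp.diff(v, x, 2) - sp.diff(v, x, y))
            + sp.diff(v, y, 2))

# Gateaux derivative d/d(eps) F(u + eps*sigma) evaluated at eps = 0.
linearized = sp.diff(F(u + eps * sigma), eps).subs(eps, 0)

# Linearized equation of (45) as displayed above.
expected = (sp.diff(sigma, x, t) - sp.diff(u, y) * sp.diff(sigma, x, 2)
            - sp.diff(u, x, 2) * sp.diff(sigma, y)
            + sp.diff(sigma, x) * sp.diff(u, x, y) + sp.diff(u, x) * sp.diff(sigma, x, y)
            + alpha * (alpha * sp.diff(sigma, x, 2) - sp.diff(sigma, x, y))
            + sp.diff(sigma, y, 2))

print(sp.simplify(linearized - expected))  # 0
```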
Hence, the recursion operator for Equation (45) is obtained by using Proposition 1 and Proposition 3:
σ ˜ x = σ y + u x σ x u x x σ α σ , σ ˜ y = σ t + u y σ x u x y σ α σ x ,
which maps a (possibly nonlocal) symmetry σ to a new symmetry σ̃. This is the recursion operator found in [7], rewritten as a Bäcklund auto-transformation for (50).
In the following, we consider some solutions of (50). It is easy to see that the conjugate equation of (50) is given by
$\psi_{xt} - \partial_x^2(u_y \psi) + \partial_y(u_{xx} \psi) - \partial_x(u_{xy} \psi) + \partial_x \partial_y(u_x \psi) + \alpha^2 \psi_{xx} - \alpha \psi_{xy} + \psi_{yy} = 0.$
It can be verified that if h n are conserved densities of Equation (50), then the variation
$\dfrac{\delta h_n}{\delta u} =: \psi_n$
is a solution of Equation (52). Therefore, assuming that ψ is a solution of (52), we can verify that
σ = ψ e g ( ψ ) d ψ
is a solution of Equation (50), where
g ( ψ ) = ψ u x x ψ y u x y ψ x ψ x ψ t + ( α 2 u y ) ψ x 2 + ( u x α ) ψ x ψ y + ψ y 2 d ψ .
In fact, supposing that σ = f ( ψ ) is a solution to Equation (50), we put it into Equation (45) and calculate that
f ( ψ ) f ( ψ ) = u x x ψ y u x y ψ x ψ x ψ t + ( α 2 u y ) ψ x 2 + ( u x α ) ψ x ψ y + ψ y 2 ,
which implies that (53) holds true.

3. A Linear Nonisospectral Lax Pair and Applications

In this section, we apply the linear nonisospectral Lax pair (22) from contact geometry to the generation of (3+1)-dimensional integrable systems; these are known as nonisospectral integrable systems because the operator $X_H$ contains the derivative $H_p$, which indicates that the function H depends on the parameter p.
According to the discussion of the R-matrix method in [4], we consider the following general Lax functions:
$L = L_n = u_n p^n + u_{n-1} p^{n-1} + \cdots + u_0 + u_{-1} p^{-1} + \cdots, \quad n > 0, \qquad B = P_+ L_n = v_n p^n + v_{n-1} p^{n-1} + \cdots + v_0.$
A special case of the above Lax functions is chosen as
$L = u_2 p^2 + u_1 p + u_0 + u_{-1} p^{-1}, \qquad B = v_0 + v_1 p + v_2 p^2 + v_3 p^3.$
The corresponding nonisospectral Lax pair reads
ϕ y = X L ( ϕ ) Y ( B ) ϕ = ( u 2 p 2 + u 0 ) ϕ z + ( 2 u 2 p + u 1 u 1 p 2 ) ϕ x + ( u 2 z p 2 u 1 z p u 0 z u 1 , z ) ϕ + [ u 2 z p 3 u 2 x p 2 + u 1 z p 2 + ( u 0 z u 1 x ) p + u 1 , z u 0 x u 1 , x p 1 ] ϕ p ,
ϕ t = X B ( ϕ ) Y ( B ) ϕ = ( v 0 2 v 2 p 2 2 v 3 p 3 ) ϕ z + ( v 1 + 2 v 2 p + 3 v 3 p 2 ) ϕ x + ( v 0 z v 1 z p v 2 z p 2 v 3 z p 3 ) ϕ + [ v 3 z p 4 + ( v 2 z v 3 x ) p 3 + ( v 1 z v 2 x ) p 2 + ( v 0 z v 1 x ) p v 0 x ] ϕ p .
The compatibility condition of (55) and (56) leads to the following equations with (3+1) dimensions:
u 2 v 3 z + 4 v 3 u 2 z = 0 ,
2 u 2 v 3 x 3 v 3 u 2 x u 2 v 2 z + 2 v 3 u 1 z + v 2 u 2 z = 0 ,
v 3 y + 2 u 2 v 2 x + u 1 v 3 x 2 u 2 x v 2 3 v 3 u 1 x u 2 v 1 z + v 2 u 1 z + 2 v 3 u 0 z + u 0 v 3 z = 0 ,
u 2 t v 2 y + 2 v 1 x u 2 + u 1 v 2 x 2 v 2 u 1 x v 1 u 2 x 3 v 3 u 0 x u 2 u 0 z + 2 u 1 v 3 z + v 2 u 0 z + 2 v 3 u 1 , z + u 0 v 2 z v 0 u 2 z = 0 ,
u 1 t v 1 y + 2 u 2 v 0 x + u 1 v 1 x u 1 v 3 x v 1 u 1 x 2 v 2 u 0 x 3 v 3 u 1 , x + 2 u 1 v 2 z + v 2 u 1 , z + u 0 v 1 z v 0 u 1 z = 0 ,
u 0 t v 0 y + u 1 v 0 x u 1 v 2 x v 1 u 0 x 2 v 2 u 1 , x + 2 u 1 v 1 z + u 0 v 0 z v 0 u 0 z = 0 ,
u 1 , t u 1 v 1 x v 1 u 1 , x + 2 u 1 v 0 z v 0 u 1 , z = 0 ,
u 1 v 0 x = 0 .
In order to recognize what the system of equations in (57)–(63) is, we now consider special cases. Taking $u_{-1} = v_3 = 0$, $v_2 = u_2$, $v_1 = u_1$, $v_0 = u_0$, we obtain a (3+1)-dimensional integrable system:
u 0 t = u 1 u 0 x , u 1 t = u 1 y + 2 u 2 u 0 x u 0 u 1 z , u 2 t = u 2 y .
When u 2 = 0 , (64) reduces to
u 0 t = u 1 u 0 x , u 1 t = u 1 y u 0 u 1 z
Denoting u 0 = w , (65) can be transformed to a (3+1)-dimensional nonlinear equation:
w t t w x = w t ( w x t + w w x z w x y ) + w x ( w t y w w t z ) ,
which can be written as
( ln w t w x ) t = ( ln w t w x ) y w ( ln w t w x ) z .
This is a (3+1)-dimensional rational equation that looks beautiful! When u 2 = 1 , (64) becomes the following (3+1)-dimensional integrable system:
u 0 t = u 1 u 0 x , u 1 t = u 1 y + 2 u 0 x u 0 u 1 z .
It is easy to see that
u 0 t t = u 1 t u 0 x + u 1 u 0 x t = u 1 y u 0 x + 2 u 0 x 2 u 0 u 0 x u 1 z + u 1 u 1 x u 0 x + u 1 2 u 0 x x = u 1 y u 0 x + 2 u 0 x 2 u 0 u 0 x u 1 z + u 1 u 0 x t .
Setting u 1 = 1 , the above equation can be reduced to
u 0 t t = u 0 x x + 2 u 0 x 2 ,
which is called a Boussinesq-type equation; the reason why it has this name will be explained later.

4. The Recursion Operators and Linearizations

In this section, we apply a Lie-group analysis to discuss the recursion operators and the linearizations of Equation (68). Such a method is not only suitable for Equation (68) but also for other associated integrable equations or integrable systems.
For convenience, we rewrite (68) as follows:
u t t = u x x + 2 u x 2 .
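Before turning to the Lie-group machinery, it is worth noting that (69) can also be treated directly by standard numerical methods. The sketch below (purely illustrative; the scheme, grid parameters, initial data, and boundary treatment are our own choices and are not discussed in the text) advances u_tt = u_xx + 2u_x² with an explicit leapfrog scheme.

```python
import numpy as np

# Grid and time step (illustrative choices; dt < dx keeps the linear wave part stable).
nx, dx, dt, nt = 401, 0.05, 0.02, 500
x = np.linspace(0.0, (nx - 1) * dx, nx)

def rhs(u):
    """Right-hand side u_xx + 2*u_x**2 of Equation (69), by central differences."""
    out = np.zeros_like(u)
    u_xx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u_x = (u[2:] - u[:-2]) / (2.0 * dx)
    out[1:-1] = u_xx + 2.0 * u_x**2
    return out

# Initial data: a small localized bump at rest (an arbitrary illustrative choice).
u_prev = 0.1 * np.exp(-((x - x.mean())) ** 2)
u_curr = u_prev + 0.5 * dt**2 * rhs(u_prev)      # first step, using u_t(x, 0) = 0

for _ in range(nt):
    u_next = 2.0 * u_curr - u_prev + dt**2 * rhs(u_curr)
    u_next[0], u_next[-1] = 0.0, 0.0             # simple fixed (Dirichlet) ends
    u_prev, u_curr = u_curr, u_next

print(float(np.max(np.abs(u_curr))))             # amplitude of the evolved profile
```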
Firstly, we need to recall some basic facts for transforming nonlinear PDEs into linear PDEs using Lie groups (see [8]).
Lemma 1 (necessary conditions for the existence of an invertible mapping): If there exists an invertible transformation μ that maps a given nonlinear system of PDEs R{x, u} to a linear system of PDEs S{z, w}, then
(i)
The mapping must be a point transformation of the form
$z_j = \phi_j(x, u), \quad j = 1, 2, \ldots, n, \qquad w^\gamma = \psi^\gamma(x, u), \quad \gamma = 1, 2, \ldots, m;$
(ii)
R { x , u } must admit an infinite-parameter Lie group of point transformations having infinitesimal generator
$X = \xi_i(x, u)\,\frac{\partial}{\partial x_i} + \eta^\gamma(x, u)\,\frac{\partial}{\partial u^\gamma},$
with ξ i ( x , u ) , η γ ( x , u ) characterized by
$\xi_i(x, u) = \sum_{\sigma=1}^{m} \alpha_i^{\sigma}(x, u)\, F^{\sigma}(X(x, u)),$
$\eta^{\nu}(x, u) = \sum_{\sigma=1}^{m} \beta^{\nu\sigma}(x, u)\, F^{\sigma}(X(x, u)),$
where $\alpha_i^{\sigma}(x, u)$, $\beta^{\nu\sigma}(x, u)$, $i = 1, \ldots, n$; $\nu = 1, \ldots, m$; $\sigma = 1, \ldots, m$, are some functions of (x, u) and $F = (F^1, \ldots, F^m)$ is an arbitrary solution of some linear system of PDEs
L [ X ] F = 0 ,
with L [ X ] representing a linear differential operator depending on independent variables X = ( X 1 ( x , u ) , , X n ( x , u ) ) .
Lemma 3
([8]). (Sufficient conditions for the existence of an invertible mapping): Let a given nonlinear system of PDEs R { x , u } admit an infinitesimal generator (71) for which the coefficients are of the forms in (72) and (73), with F being an arbitrary solution of a linear system (74) with specific independent variables:
X ( x , u ) = ( X 1 ( x , u ) , , X n ( x , u ) ) .
If the linear homogeneous system of m first-order PDEs for scalar Φ
α i σ ( x , u ) Φ x i + β ν σ ( x , u ) Φ u ν = 0 , σ = 1 , , m ,
has n functionally independent solutions
X 1 ( x , u ) , , X n ( x , u )
and the linear system of m 2 first-order PDEs is
α i σ ( x , u ) ψ γ x i + β ν σ ( x , u ) ψ γ u ν = δ γ σ ,
where δ γ σ is the Kronecker symbol, γ , σ = 1 , , m has a solution
ψ = ( ψ 1 ( x , u ) , , ψ m ( x , u ) ) ,
then the invertible mapping μ given by
z j = ϕ j ( x , u ) = X j ( x , u ) , j = 1 , 2 , , n , w γ = ψ γ ( x , u ) , γ = 1 , 2 , , m
transforms R { x , u } into a linear system of PDEs S { z , w } ,
L ( z ) w = g ( z )
for some nonhomogeneous term g ( z ) .
Proposition 4.
The Boussinesq-type equation in (69) has the following linearizations:
$\frac{\partial^2 F}{\partial t^2} - \frac{\partial^2 F}{\partial x^2} = 0, \quad \mathrm{if}\ F = F(x, t),$
( u x x 2 u x ) F u + 2 F t 2 + 2 u t 2 F t u + u t 2 2 F u 2 = 0 , i f F = F ( t , u ) .
Proof. 
Assume that Equation (69) has the Lie–Bäcklund symmetry
U = F ( x , t , u ) u ;
then its prolongation U ( 2 ) given by
U ( 2 ) = U + ( D x F ) u x + ( D x 2 F ) u x x + ( D t 2 F ) u t t
acts on Equation (69) and leads to
( D t 2 D x 2 2 u x D x ) F ( x , t , u ) = 0 ,
which means that
( u x x 2 u x ) F u + 2 F t 2 + 2 u t 2 F t u + u t 2 2 F u 2 2 F x 2 2 u x 2 F x u u x 2 2 F u 2 = 0 ,
If F = F ( x , t ) , then (82) reduces to a linear partial differential equation:
$\frac{\partial^2 F}{\partial t^2} - \frac{\partial^2 F}{\partial x^2} = 0,$
which is the standard linear wave equation.
If F = F ( t , u ) , then (82) becomes a linear equation:
( u x x 2 u x ) F u + 2 F t 2 + 2 u t 2 F t u + u t 2 2 F u 2 = 0 ,
which is just Equation (80).
The proof is completed. □
In addition, we can also obtain other linearizations of (69) from (82). For example, if F = F ( u ) , then (82) has the following reduction:
( u x x 2 u x ) F u + u t 2 2 F u 2 u x 2 2 F u 2 = 0 ,
In terms of (81) and Theorem 5.2.4.-4 (see [8]), the characteristic function W = F leads to
ξ j = W u j , η = u i W u i W , η j ( 1 ) = W x j u j W u , j = 1 , , m ; i = 1 , , n .
For Equation (83), $\xi_1 = 0$, $\xi_2 = 0$, $\eta = F$, $X_1 = t$, $X_2 = x$. From Lemma 1, we have
$\alpha_1(x, t, u) = \alpha_2(x, t, u) = 0; \quad \eta = F \;\Rightarrow\; \beta = 1.$
Equation (76) can be written as
$\frac{\partial \psi_1}{\partial u} = 1 \;\Longrightarrow\; \psi_1 = u.$
Thus, we obtain an invertible mapping between Equation (69) and Equation (83) as follows:
z 1 = X 1 = t , z 2 = X 2 = x , w = ψ 1 = u .
The resulting linearized equation reads that
$\frac{\partial^2 w}{\partial z_1^2} = \frac{\partial^2 w}{\partial z_2^2}.$
Therefore, as long as some exact solutions of (87) are obtained, the solutions of (69) could also be known.
Next, we consider an invertible mapping between Equation (69) and its linearized Equation (84). According to (86), we find that
ξ 1 = ξ 2 = 0 , η = F .
Solving Equation (75) yields that
X 1 = t , X 2 = u .
Equation (76) becomes $\psi_{1u} = 1$, which has a solution $\psi_1 = u$. Hence, an invertible mapping can be given by
z 1 = t , z 2 = u , w = u .
The resulting linearization of Equation (69) is shown to be
w z 2 + 2 w z 1 2 + 2 u z 1 2 w z 1 z 2 + u z 1 2 2 w z 2 2 = 0
under the constraint $u_{xx} = 2u_x$. For Equation (85), we can similarly discuss the invertible mapping between (69) and (85); here, we omit it.
In what follows, we consider a possible invertible mapping of the nonlinear Equation (69) and its linearizations by means of contact transformation. Noting x = x 1 , t = x 2 , (69) can be written as
u 22 = u 11 + 2 ( u 1 ) 2 .
Assuming that (88) has a Lie–Bäcklund transformation
F ( x , u , u 1 ) u ,
where x = ( x , t ) , u 1 = ( u x , u t ) , we introduce two lemmas.
Lemma 4
([8]). If there exists an invertible transformation μ that maps a given nonlinear scalar PDE R { x , u } to a linear scalar PDE S { z , w } ; then,
(i) 
The mapping μ has the form
z j = ϕ j ( x , u , u 1 ) ,
w = ψ ( x , u , u 1 ) ,
w j = ψ j ( x , u , u 1 ) , j = 1 , , n ;
(ii) 
R { x , u } admits the infinitesimal generator
$X = \xi_i(x, u, u_1)\,\frac{\partial}{\partial x_i} + \eta(x, u, u_1)\,\frac{\partial}{\partial u} + \eta_i^{(1)}(x, u, u_1)\,\frac{\partial}{\partial u_i},$
with ξ i , η , η i ( 1 ) given by
ξ i = α i F + α i j H j , η = β F + β j H j , η i ( 1 ) = λ i F + λ i j H j ,
where F = F ( x , u , u 1 ) is an arbitrary solution of some linear PDE
L ( X ) F = 0 ,
$X = (X_1(x, u, u_1), \ldots, X_n(x, u, u_1)),$
H j = H j ( x , u , u 1 ) = F x j , j = 1 , , n .
Lemma 5
([8]). Let a given nonlinear scalar PDE (m = 1) R{x, u} admit a generator (89) with coefficients of the form in (90). Suppose that the following four conditions hold:
(i) 
α i Φ x i + β Φ u + λ i Φ u i = 0 , α i j Φ x i + β j Φ u + λ i j Φ u i = 0 , j = 1 , , n
has
X 1 ( x , u , u 1 ) , , X n ( x , u , u 1 )
as n functionally independent solutions;
(ii) 
The following equations
α i ϕ x i + β ϕ u + λ i ϕ u i = 1 , α i j ϕ x i + β j ϕ u + λ i j ϕ u i = 0 , j = 1 , , n
have a solution ψ ( x , u , u 1 ) ;
(iii) 
The linear equations
α i ϕ j x i + β ϕ j u + λ i ϕ j u i = 0 , α i k ϕ j x i + β k ϕ j u + λ i k ϕ j u i = δ k j , j , k = 1 , , n
have n functionally independent solutions
$\psi^{(1)}(x, u, u_1) = (\psi_1(x, u, u_1), \ldots, \psi_n(x, u, u_1));$
(iv) 
( z , w , w 1 ) = ( X ( x , u , u 1 ) , ψ ( x , u , u 1 ) , ψ 1 ( x , u , u 1 ) ) define a contact transformation.
Then, the invertible mapping μ given by
z j = ϕ j = X j , w = ψ , w j = ψ j , j = 1 , , n .
transforms R{x, u} into a linear PDE S{z, w}:
L ( z ) w = g ( z ) ,
for some nonhomogeneous term g(z).
In the following, we consider some linearizations of Equation (69) by using the above Lemmas 3 and 4 via the contact transformations. For later convenience, we copy (88) as follows:
u 22 = u 11 + 2 u 1 2 .
Assume that (94) admits a Lie–Bäcklund symmetry
U = F ( x 1 , x 2 , u , u 1 , u 2 ) u ;
then, one infers that
D 1 F = F x 1 + u 1 F u + u 11 F u 1 + u 12 F u 2 ,
D 1 2 F = 2 F x 1 2 + 2 u 1 2 F x 1 u + 2 u 11 2 F x 1 u 1 + 2 u 12 2 F x 1 u 2 + u 11 F u + u i 2 2 F u 2 + 2 u 1 u 11 2 F u u 1 + 2 u 1 u 12 2 F u u 2 + u 111 F u 1 + u 11 2 2 F u 1 2 + 2 u 11 u 12 2 F u 1 u 2 + u 112 F u 2 + u 12 2 2 F u 2 2 ,
D 2 2 F = 2 F x 2 2 + 2 u 2 2 F x 2 u + 2 u 12 2 F x 2 u 1 + 2 u 22 2 F x 2 u 2 + u 22 F u + u 2 2 2 F u 2 + 2 u 2 u 12 2 F u u 1 + 2 u 2 u 22 2 F u u 2 + u 122 F u 1 + u 12 2 2 F u 1 2 + 2 u 12 u 22 2 F u 1 u 2 + u 222 F u 2 + u 22 2 2 F u 2 2 .
Thus, the linearization of (94) presents that
2 F x 2 2 + 2 u 2 2 F x 2 u + 2 u 12 2 F x 2 u 1 + 2 u 22 2 F x 2 u 2 + ( u 2 2 u 1 2 u 12 2 ) 2 F u 2 + ( 2 u 2 u 12 2 u 1 u 11 ) 2 F u u 1 + ( 2 u 2 u 22 2 u 1 u 12 ) 2 F u u 2 + u 12 2 2 F u 1 2 + ( 2 u 12 u 22 2 u 11 u 12 ) 2 F u 1 u 2 + ( u 22 2 u 11 2 ) 2 F u 2 2 2 F x 1 2 2 u 1 2 F x 1 u 2 u 11 2 F x 1 u 1 2 u 12 2 F x 1 u 2 4 u 1 F x 1 2 u 1 2 F u = 0 .
Proposition 5.
Assume F = F ( x 2 , u ) ; then, (95) reduces to a linearized equation as follows:
2 F x 2 2 + 2 u 12 2 F x 2 u 1 + ( u 12 2 u 11 2 ) 2 F u 1 2 = 0 .
An invertible mapping between Equation (69), i.e., Equation (94), and (96) can be established. In fact, in terms of (86), one has that, if W = F,
ξ 1 = F u 1 , ξ 2 = 0 , η = u 1 F u 1 + F ,
η 1 ( 1 ) = F x 1 , η 2 ( 1 ) = F x 2 + u 2 F u = F x 2 .
According to Lemma 4, we have
α 1 = α 2 = α 12 = α 21 = α 22 = 0 , α 11 = 1 , β = 1 , β 1 = u 1 , β 2 = 0 , λ 1 = λ 2 = λ 11 = λ 12 = λ 21 = 0 , λ 22 = 1 .
Again applying Lemmas 3 and 4, the function ψ satisfies
$\psi_u = 1, \quad \psi_{x_1} + u_1 \psi_u = 0, \quad \psi_{u_2} = 0,$
which has a solution $\psi = u - x_1 u_1$.
In addition, (93) admits that
$\psi_1 = -x_1, \quad \psi_2 = u_2.$
Hence, we obtain the invertible mapping
$z_1 = u_1, \quad z_2 = x_2, \quad w = u - x_1 u_1, \quad w_1 = -x_1, \quad w_2 = u_2$
between (69) and the following linearization equation:
2 w z 2 2 + 2 u x 1 z 2 2 w z 1 z 2 + u x 1 z 2 2 2 w z 1 2 u x 1 x 1 2 2 w z 1 2 = 0 ,
where $x_1$ can be regarded as a free variable and $u(x_1, z_2)$ can be regarded as a parameter function. When u is independent of $x_1$, Equation (97) reduces to $\frac{\partial^2 w}{\partial z_2^2} = 0$.

5. Bäcklund Transformations and Invariant Solutions of Equation (69)

In this section, we investigate the Bäcklund transformation of Equation (69) via an undetermined method such as (100) (see below). Given seed solutions, we can apply the Bäcklund transformation to deduce other exact solutions of integrable equations.
It is well known that the Boussinesq equation reads
u t t + α u x x + β ( u 2 ) x x + γ u 4 x = 0 ,
while the Boussinesq-type Equation (69) can be written as
u t t u x x + ( u 2 ) x x 2 u u x x = 0 .
Compared with (98), we find that the nonlinear term $uu_{xx}$ in (99) differs from the linear term $u_{4x}$ in (98). Defining $\deg(u) = 2$ and $\deg(x) = -1$ (so that each derivative $\partial_x$ adds one to the degree), we see that $\deg(uu_{xx}) = 2 + (2 + 2) = 6 = 2 + 4 = \deg(u_{4x})$, which indicates that (99) shares a number of structural properties with (98). Because the Boussinesq Equation (98) has a Bäcklund transformation and conservation laws (see [9]), we guess that the Boussinesq-type Equation (99) may also have a Bäcklund transformation. In what follows, we follow this approach to look for such a property.
Setting u x = w , Equation (99) becomes
w t t = w x x + 2 w w x = : F ( w ) .
Let a Bäcklund transformation of (100) be as follows:
u t = α u x + f ( u , v ) , v t = β v x + g ( u , v ) .
Noting $u = w + \bar{w}$, $v = w - \bar{w}$, that is,
$w = \tfrac{1}{2}(u + v), \quad \bar{w} = \tfrac{1}{2}(u - v),$
both w and $\bar{w}$ satisfy Equation (100), i.e.,
$w_{tt} = F(w), \quad \bar{w}_{tt} = F(\bar{w}),$
which implies that
$u_{tt} = F\!\left(\tfrac{u+v}{2}\right) + F\!\left(\tfrac{u-v}{2}\right), \quad v_{tt} = F\!\left(\tfrac{u+v}{2}\right) - F\!\left(\tfrac{u-v}{2}\right).$
Via calculation, one infers from (102) that
u t t = u x x + u u x + v v x , v t t = v x x + v u x + u v x .
From (101), we have
u t t = α 2 u x x + 2 α f u u x + ( α f v + β f v ) v x + f u f + f v g , v t t = β 2 v x x + ( α + β ) g u u x + 2 β g v v x + g u f + g v g .
Comparing (103) with (104) yields that
α 2 = 1 , 2 α f u = u , ( α + β ) f v = v , f u f + f v g = 0 , β 2 = 1 , ( α + β ) g u = v , 2 β g v = u , g u f + g v g = 0 .
Taking α = β = 1 , we find that
f = 1 4 ( u 2 + v 2 ) + σ , g = 1 2 u v + δ ,
with a constraint condition
u 3 + 3 u v 2 + 4 σ u + 4 δ v = 0 , v 3 + 3 u 2 v + 4 σ v + 4 δ u = 0 ,
where σ and δ are constants. Thus, we obtain the Bäcklund transformation of Equation (100) with parameters σ and δ :
u t = u x + 1 4 ( u 2 + v 2 ) + σ , v t = v x + 1 2 u v + δ ,
along with Constraint (105).
Let $\bar{w} = 0$; then $u = v = w$. Hence, when $\sigma = \delta$, (106) becomes
w t = w x + 1 2 w 2 + σ .
Moreover, from (105), we obtain
$2w^3 + 4\sigma w = 0 \;\Longrightarrow\; w^3 + 2\sigma w = 0 \;\Longrightarrow\; w = 0 \ \text{or} \ w^2 + 2\sigma = 0.$
Therefore, when $w \neq 0$, (107) becomes
$w_t = w_x,$
which indicates that this construction yields only constant solutions of Equation (100).
Remark 3.
When $\bar{w} \neq 0$, for example, $\bar{w} = f(ax + bt) =: f(\xi)$, we can compute that
$\bar{w} = \dfrac{a^2 - b^2}{a\,\xi}.$
Hence,
$u = w + \dfrac{a^2 - b^2}{a\,\xi}, \quad v = w - \dfrac{a^2 - b^2}{a\,\xi}.$
Inserting (109) into (106) and (105), we can obtain a new solution w by using the above similar calculations; here, we omit them.
In what follows, we discuss invariant solutions of (100). Firstly, we write Equation (100) in the form of a conservation law:
$\frac{\partial}{\partial t}(w_t) = \frac{\partial}{\partial x}(w_x + w^2).$
Next, we introduce a variable v that satisfies
v x = w t , v t = w x + w 2 .
Using Maple, the infinitesimal generators of (110) are given by
$X_1 = x\frac{\partial}{\partial x} + t\frac{\partial}{\partial t} - w\frac{\partial}{\partial w} - v\frac{\partial}{\partial v},$
$X_2 = \frac{\partial}{\partial x}, \quad X_3 = \frac{\partial}{\partial t}, \quad X_4 = \frac{\partial}{\partial v}.$
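The scaling generator $X_1$ can be checked directly. The sympy sketch below (our verification; the particular solution of the potential system used as a seed is an illustrative choice) confirms that the finite transformation generated by $X_1$, namely x → λx, t → λt, w → λ⁻¹w, v → λ⁻¹v, maps solutions of $v_x = w_t$, $v_t = w_x + w^2$ to solutions.

```python
import sympy as sp

x, t, lam = sp.symbols('x t lambda', positive=True)

def residuals(w, v):
    """Residuals of the potential system (110): v_x - w_t and v_t - w_x - w**2."""
    return (sp.simplify(sp.diff(v, x) - sp.diff(w, t)),
            sp.simplify(sp.diff(v, t) - sp.diff(w, x) - w**2))

# A particular exact solution of the potential system (an illustrative choice):
# w = -3/(x + 2t + 1), v = -6/(x + 2t + 1).
w0 = -3 / (x + 2 * t + 1)
v0 = -6 / (x + 2 * t + 1)
print(residuals(w0, v0))   # (0, 0): it solves the system

# Finite scaling transformation generated by X_1: x -> lam*x, t -> lam*t, w -> w/lam, v -> v/lam,
# i.e. the transformed solution is w1(x, t) = w0(x/lam, t/lam)/lam (and similarly for v).
w1 = w0.subs({x: x / lam, t: t / lam}) / lam
v1 = v0.subs({x: x / lam, t: t / lam}) / lam
print(residuals(w1, v1))   # (0, 0): the transformed pair is again a solution
```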
For the vector field $X_1$, an invariant ξ is obtained as $\xi = x/t$ by solving the equation $\frac{dx}{x} = \frac{dt}{t}$.
The characteristic equation corresponding to X 1 reads as
$\frac{dx}{x} = -\frac{dw}{w} = -\frac{dv}{v},$
which has invariant functions that satisfy
x w = f ( ξ ) , x v = g ( ξ ) ,
where f(ξ) and g(ξ) are smooth functions satisfying the following ODEs with variable coefficients:
$-g(\xi) + \xi g'(\xi) + \xi^2 f'(\xi) = 0, \qquad -f(\xi) + \xi f'(\xi) + f^2(\xi) + \xi^2 g'(\xi) = 0.$
Differentiating the first equation in (111) with respect to ξ gives that
$g''(\xi) = -2f'(\xi) - \xi f''(\xi).$
Again integrating (112) with respect to ξ leads to
g ( ξ ) = 3 f ( ξ ) ξ f ( ξ ) ,
where the integration constant is taken to be zero. Similarly, we differentiate the second equation in (111) with respect to ξ and obtain the ODE:
ξ f ( ξ ) + 2 f ( ξ ) f ( ξ ) + 2 ξ g ( ξ ) + ξ 2 g ( ξ ) = 0 .
Substituting (112) and (113) into (114), one infers that
( ξ ξ 3 ) f ( ξ ) + 2 f ( ξ ) f ( ξ ) 4 ξ 2 f ( ξ ) 6 ξ f ( ξ ) = 0 .
As long as some solution of (115) is obtained, the corresponding solution of Equation (100) can also be presented.
Remark 4.
How do we solve Equation (115) with variable coefficients? A feasible way may be seeking its series solutions. Concerning this problem, we would like to discuss it in another paper.
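As a hint of how such a series approach could proceed, the sketch below is purely illustrative: it substitutes a truncated power series into a variable-coefficient ODE and solves for the coefficients recursively. The Airy-type equation f''(ξ) = ξ f(ξ) is used only as a stand-in; to treat Equation (115) one would replace the expression `ode` by the left-hand side of (115).

```python
import sympy as sp

xi = sp.symbols('xi')
N = 8                                     # truncation order of the power series
a = sp.symbols(f'a0:{N + 1}')             # unknown coefficients a_0, ..., a_N

# Truncated ansatz f(xi) = sum_k a_k * xi**k.
f = sum(a[k] * xi**k for k in range(N + 1))

# Stand-in variable-coefficient ODE (illustration only): f''(xi) - xi*f(xi) = 0.
ode = sp.diff(f, xi, 2) - xi * f

# Require the coefficient of each power xi**k (k = 0, ..., N-2) to vanish,
# and express the higher coefficients through the free data a_0, a_1.
eqs = [sp.Eq(sp.expand(ode).coeff(xi, k), 0) for k in range(N - 1)]
print(sp.solve(eqs, a[2:]))
```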

6. Conclusions

In this paper, we utilized isospectral and nonisospectral Lax pairs based on the R-matrix theory to generate some new (2+1)- and (3+1)-dimensional integrable systems, which can be reduced to the generalized DS equation and the Pavlov equation, as well as to the Boussinesq-type equation. Some properties of these reduced equations, including recursion operators, linearized equations, invertible mappings, and Bäcklund transformations, were obtained. Recently, we developed an approach for generating nonisospectral integrable hierarchies of evolution equations (see [10]). By applying this method, a series of integrable systems and some of their properties were produced [11,12,13]. Ma [14] and Qiao [15] also presented effective ways of generating nonisospectral integrable hierarchies. However, there is a difference: the evolution of the spectral parameter λ in [14,15] takes the form $\lambda_t = a(t)\lambda^n$, while $\lambda_t$ in [11,12,13] is expressed as a polynomial in λ. For further methods of generating integrable systems, we refer the reader to Refs. [16,17,18,19,20,21,22,23].

Author Contributions

Resources, Y.Z.; writing—original draft preparation, H.Z.; writing—review and editing, B.F. All authors have contributed to all aspects of this manuscript and have reviewed its final draft. All authors read and approved the final manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 11971475).

Data Availability Statement

No datasets were generated or analyzed during the current study.

Acknowledgments

The authors would like to thank anonymous referees for their valuable comments and helpful suggestions that improved the quality of our paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Blaszak, M. Classical R-matrices on Poisson algebras and related dispersionless systems. Phys. Lett. A 2002, 297, 191–195. [Google Scholar]
  2. Blaszak, M.; Szablikowski, B. Classical R-matrix theory of dispersionless systems I. J. Phys. A Math. Gen. 2001, 35, 225–259. [Google Scholar]
  3. Blaszak, M.; Szablikowski, B. Classical R-matrix theory of dispersionless system II. J. Phys. A Math. Gen. 2002, 35, 10325–10344. [Google Scholar]
  4. Blaszak, M.; Sergyeyev, A. Contact Lax pairs and associated (3+1)-dimensional integrable dispersionless systems. arXiv 2019, arXiv:1901.05181v1. [Google Scholar]
  5. Sergyeyev, A. New integrable (3+1)-dimensional systems and contact geometry. arXiv 2017, arXiv:1401.2122v5. [Google Scholar]
  6. Li, Y.S. Soliton and Integrable System; Shanghai Scientific and Technological Education Publishing House: Shanghai, China, 1999. (In Chinese) [Google Scholar]
  7. Sergyeyev, A. A simple construction of recursion operators for multidimensional dispersionless integrable systems. arXiv 2017, arXiv:1501.01955v4. [Google Scholar]
  8. Bluman, G.W.; Kumei, S. Symmetries and Differential Equations; Springer: New York, NY, USA, 1989. [Google Scholar]
  9. Tu, G.Z. The Bäcklund transformations and conservation laws of the Boussinesq equation. Acta Math. Appl. Sin. 1981, 4, 63–68. [Google Scholar]
  10. Zhang, Y.F.; Mei, J.Q.; Guan, H.Y. A method for generating isospectral and nonisospectral hierarchies of equations as well as symmetries. Geom. Phys. 2020, 147, 103538. [Google Scholar]
  11. Zhang, Y.F.; Zhang, X.Z. A scheme for generating nonisospectral integrable hierarchies and its related applications. Acta Math. Sin. Engl. Ser. 2021, 37, 707–730. [Google Scholar]
  12. Zhang, Y.F.; Wang, H.F.; Bai, N. Schemes for generating different nonlinear Schrödinger integrable equations and their some properties. Acta Math. Appl. Engl. Ser. 2022, 38, 579–600. [Google Scholar]
  13. Zhao, S.Y.; Zhang, Y.F.; Zhou, J.; Zhang, H.Y. Coverings and nonlocal symmetries as well as fundamental solutions of nonlinear equations derived from the nonisospectral AKNS hierarchy. Commun. Nonlinear Sci. Numer. Simulat. 2022, 14, 106622. [Google Scholar]
  14. Ma, W.X. An approach for constructing nonisospectral hierarchies of evolution equations. J. Phys. A Math. Gen. 1992, 25, L719. [Google Scholar]
  15. Qiao, Z.J. New hierarchies of isospectral and non-isospectral integrable NLEEs derived from the Harry-Dym spectral problem. Physica A 1998, 252, 377. [Google Scholar]
  16. Ablowitz, M.J.; Segur, H. Solitons and the Inverse Scattering Transform; SIAM: Philadelphia, PA, USA, 1981. [Google Scholar]
  17. Ablowitz, M.J.; Chakravarty, S.; Halburd, R.G. Integrable systems and reductions of the self-dual Yang–Mills equations. J. Math. Phys. 2003, 44, 3147. [Google Scholar]
  18. Newell, A.C. Solitons in Mathematics and Physics; SIAM: Philadelphia, PA, USA, 1985. [Google Scholar]
  19. Tu, G.Z. The trace identity, a powerful tool for constructing the Hamiltonian structure of integrable systems. J. Math. Phys. 1989, 30, 330–338. [Google Scholar]
  20. Ma, W.X. A new hierarchy of Liouville integrable generalized Hamiltonian equations and its reduction. Chin. J. Contemp. Math. 1992, 13, 79–89. [Google Scholar]
  21. Ma, W.X. A hierarchy of Liouville integrable finite-dimensional Hamiltonian systems. Appl. Math. Mech. 1992, 13, 369. [Google Scholar]
  22. Hu, X.B. A powerful approach to generate new integrable systems. J. Phys. A 1994, 27, 2497. [Google Scholar] [CrossRef]
  23. Zhang, Y.F.; Liu, Y.Y.; Liu, J.G.; Feng, B.L. A New Non-isospectral Integrable Hierarchy and Some Associated Symmetries. J. Math. Res. Appl. 2023. [Google Scholar] [CrossRef]