On the Two-Variable Analogue Matrix of Bessel Polynomials and Their Properties

1 College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, China
2 Department of Mathematics, Faculty of Science, Al-Azhar University, Assiut 71524, Egypt
3 Department of Mathematics, College of Science, King Khalid University, P.O. Box 9004, Abha 61413, Saudi Arabia
4 Department of Mathematical Science, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 105862, Riyadh 11656, Saudi Arabia
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Axioms 2024, 13(3), 202; https://doi.org/10.3390/axioms13030202
Submission received: 6 February 2024 / Revised: 7 March 2024 / Accepted: 9 March 2024 / Published: 17 March 2024
(This article belongs to the Special Issue Research in Special Functions)

Abstract:
In this paper, we study a two-variable extension of matrix Bessel polynomials. We first introduce the matrix Bessel polynomials in two variables and derive differential formulas and recurrence relations for them. We then present integral formulas for the extended matrix Bessel polynomials. Finally, we introduce the Laplace–Carson transform of the two-variable matrix Bessel polynomial analogue.

1. Introduction

Special functions play a crucial role across many disciplines of modern mathematics, and they have proven invaluable in probability theory, computer science, mathematical physics, engineering, and many other areas (see [1,2]).
Bessel polynomials are important because they occur naturally in seemingly unrelated settings. For example, they appear in the solution of the wave equation in spherical polar coordinates (see [3]), in network synthesis and design (see [4]), in the analysis of the Student t-distribution (as shown in [5]), and in the development of matrix approaches for solving differential equations of multiple, including fractional, orders. In addition, Bessel polynomials play a role in the representation of the energy spectrum functions for a family of isotropic turbulence fields (see [6]).
In 1949, Krall and Frink [3] presented an article on what they termed Bessel polynomials. In that work, they introduced the elementary Bessel polynomials as
$$Y_\epsilon(x) = \sum_{s=0}^{\epsilon}\binom{\epsilon}{s}\binom{\epsilon+s}{s}\, s!\left(\frac{x}{2}\right)^{s} = {}_2F_0\!\left(-\epsilon,\ \epsilon+1;\ -;\ -\frac{x}{2}\right),$$
where ${}_2F_0$ is the generalized hypergeometric function with two numerator parameters and no denominator parameter,
$${}_2F_0\!\left(-\epsilon,\ \epsilon+1;\ -;\ z\right) = \sum_{s=0}^{\epsilon}\frac{(-\epsilon)_s\,(\epsilon+1)_s}{s!}\, z^{s}.$$
The generalized Bessel polynomials, denoted $Y_\epsilon(\mu,\nu;x)$ and extending the Bessel polynomials $Y_\epsilon$ by introducing two parameters, are defined, following Krall and Frink [3], as
$$Y_\epsilon(\mu,\nu;x) = \sum_{k=0}^{\epsilon}\binom{\epsilon}{k}(\epsilon+\mu-1)_k\left(\frac{x}{\nu}\right)^{k} = {}_2F_0\!\left(-\epsilon,\ \mu+\epsilon-1;\ -;\ -\frac{x}{\nu}\right),$$
where $(\mu)_k$ is the Pochhammer symbol, defined by
$$(\mu)_k = \mu(\mu+1)\cdots(\mu+k-1) = \frac{\Gamma(\mu+k)}{\Gamma(\mu)},\quad k\ge 1,\qquad (\mu)_0 = 1.$$
They also obtained certain recurrence relations between these polynomials and a generating function for the Bessel polynomials proper, and provided some of their properties, including orthogonality and their connections with Bessel functions.
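As a concrete illustration of the terminating series above, the following sketch (ours, not from the paper; the function name is hypothetical) evaluates the Krall–Frink Bessel polynomials directly:

```python
from math import comb, factorial

def bessel_poly(n, x):
    """Krall-Frink Bessel polynomial y_n(x) = sum_{s=0}^{n} C(n,s) C(n+s,s) s! (x/2)^s."""
    return sum(comb(n, s) * comb(n + s, s) * factorial(s) * (x / 2) ** s
               for s in range(n + 1))

# The first few polynomials are y_0(x) = 1, y_1(x) = 1 + x, y_2(x) = 1 + 3x + 3x^2.
```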
Recently, numerous approaches have been employed to study these polynomials, demonstrating their utility in various research fields (see, e.g., [7,8,9,10,11]).
The Laplace–Carson transform of a function $G(s)$, $s\ge 0$, is defined as [12]
$$\mathcal{L}\{G(s)\} = p\int_0^{\infty} G(s)\, e^{-ps}\, ds = g(p),$$
where $\mathcal{L}$ is the Laplace–Carson transform operator.
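The extra factor $p$ is what distinguishes the Laplace–Carson transform from the ordinary Laplace transform. A quick numerical sanity check (the quadrature routine and test functions below are ours, not the paper's): for $G(s)=s$ the transform is $p\cdot(1/p^2)=1/p$, and for $G(s)=1$ it is identically $1$.

```python
import math

def laplace_carson(G, p, upper=40.0, n=100000):
    """Laplace-Carson transform L{G}(p) = p * integral_0^inf G(s) e^{-ps} ds,
    approximated with the trapezoidal rule on the truncated interval [0, upper]."""
    h = upper / n
    total = 0.5 * (G(0.0) + G(upper) * math.exp(-p * upper))
    for k in range(1, n):
        s = k * h
        total += G(s) * math.exp(-p * s)
    return p * total * h

# laplace_carson(lambda s: s, 2.0) is close to 1/p = 0.5,
# and laplace_carson(lambda s: 1.0, 3.0) is close to 1.
```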
The field of generalized special matrix functions has undergone significant development in recent years. This interest is due to various reasons. Focusing on practical applications, one realizes that the use of new classes of special matrix functions in certain physical problems has led to solutions that are difficult to obtain using traditional analytical and numerical methods. In this context, Hermite, Chebyshev, Jacobi, Laguerre, and Gegenbauer matrix polynomials have been introduced and studied (see, e.g., [13,14,15,16,17]).
The objective of this paper is to present a novel two-variable analogue, denoted as  Y ϵ ( θ , ϑ ) ( z , w ) , and to derive specific outcomes related to the two-variable matrix Bessel polynomials  Y ϵ ( θ , ϑ ) ( z , w ) . Additionally, we explore applications involving the Laplace–Carson transform of functions.
The structure of this article is outlined as follows: In Section 2, we give a brief introduction to certain matrix functions that are important for the development of the article. In Section 3, we introduce a novel extension of the Bessel matrix polynomials, denoted $Y_\epsilon^{(\theta,\vartheta)}(z,w)$, and state several theorems on recurrence relations and the differentiation formula for this extension. Section 4 deals with various integral formulas for the Bessel matrix polynomials $Y_\epsilon^{(\theta,\vartheta)}(z,w)$. In Section 5, we give theorems on the Laplace–Carson transform of functions containing the new extension of the matrix Bessel polynomials. The concluding remarks and future work are presented in Section 6.

2. Some Definitions and Notations

Let $\mathbb{C}^r$ be the r-dimensional complex vector space and $\mathbb{C}^{r\times r}$ denote the set of all square complex matrices of order r. A matrix $\theta$ in $\mathbb{C}^{r\times r}$ is said to be positive stable if $\mathrm{Re}(\lambda) > 0$ for all $\lambda\in\sigma(\theta)$, where $\sigma(\theta)$ is the set of all eigenvalues of $\theta$. If $\theta_n,\theta_{n-1},\dots,\theta_0$ are matrices in $\mathbb{C}^{r\times r}$ with $\theta_n\neq O$, then
$$P_n(z) = \theta_n z^{n} + \theta_{n-1} z^{n-1} + \theta_{n-2} z^{n-2} + \cdots + \theta_0$$
is called a matrix polynomial in z of degree n.
In the matrix complex space  C r × r , we use I to represent the identity matrix and  O  for the zero matrix.
The spectrum of a matrix $\theta$ in $\mathbb{C}^{r\times r}$ is the set of all its eigenvalues, denoted $\sigma(\theta)$. If $g(z)$ and $h(z)$ are holomorphic functions defined on an open set $D\subseteq\mathbb{C}$ and $\theta$ is a matrix in $\mathbb{C}^{r\times r}$ with $\sigma(\theta)\subset D$, then the commutative property $g(\theta)h(\theta) = h(\theta)g(\theta)$ holds ([18,19]).
Furthermore, if  θ  is a matrix in  C r × r  with  σ ( θ ) D  and  θ ϑ = ϑ θ , where  ϑ  is a different matrix, then the commutative property  g ( θ ) h ( ϑ ) = h ( ϑ ) g ( θ )  is satisfied.
If $\theta$ is a positive stable matrix in $\mathbb{C}^{r\times r}$, then the gamma matrix function $\Gamma(\theta)$ is defined as in the references (see, e.g., [19,20,21,22,23,24]):
$$\Gamma(\theta) = \int_0^{\infty} t^{\theta-I} e^{-t}\, dt,\qquad\text{where}\qquad t^{\theta-I} = e^{(\theta-I)\ln t}.$$
If $\theta$ and $\vartheta$ are positive stable matrices in $\mathbb{C}^{r\times r}$, the beta matrix function is given by (see, e.g., [19,20,21,22,23,24]):
$$\beta(\theta,\vartheta) = \int_0^1 t^{\theta-I}(1-t)^{\vartheta-I}\, dt.$$
Also, if $\theta$, $\vartheta$, and $\theta+\vartheta$ are positive stable matrices in $\mathbb{C}^{r\times r}$ with $\theta\vartheta = \vartheta\theta$, then (see [18,19,20])
$$\beta(\theta,\vartheta) = \Gamma(\theta)\,\Gamma(\vartheta)\,\Gamma^{-1}(\theta+\vartheta).$$
If $\theta$ is a matrix in $\mathbb{C}^{r\times r}$ such that
$$\theta + \epsilon I\ \text{is invertible for every integer}\ \epsilon\ge 0,$$
then the Pochhammer matrix symbol is given by (see [19]):
$$(\theta)_\epsilon = \theta(\theta+I)(\theta+2I)\cdots\big(\theta+(\epsilon-1)I\big),\quad \epsilon\ge 1,\qquad (\theta)_0 = I.$$
The extended Pochhammer matrix symbol satisfies the following property (see [25]):
$$(\theta)_{n+\epsilon} = (\theta)_\epsilon\,(\theta+\epsilon I)_n.$$
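The matrix Pochhammer symbol and the property above can be checked numerically; the sketch below is illustrative (the example matrix is ours, chosen positive stable and upper triangular):

```python
import numpy as np

def pochhammer(M, k):
    """Matrix Pochhammer symbol (M)_k = M (M + I) ... (M + (k-1)I), with (M)_0 = I."""
    r = M.shape[0]
    P = np.eye(r)
    for j in range(k):
        P = P @ (M + j * np.eye(r))
    return P

theta = np.array([[2.0, 1.0], [0.0, 3.0]])  # positive stable (eigenvalues 2 and 3)
n, eps = 2, 3
lhs = pochhammer(theta, n + eps)                                   # (theta)_{n+eps}
rhs = pochhammer(theta, eps) @ pochhammer(theta + eps * np.eye(2), n)  # (theta)_eps (theta+eps I)_n
```

All factors here are polynomials in the same matrix, so they commute and the two sides agree exactly.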
Let $\theta$ and $\vartheta$ be commuting matrices in $\mathbb{C}^{r\times r}$ satisfying condition (9). For any natural number $\epsilon\ge 0$, the $\epsilon$-th generalized matrix Bessel polynomial $Y_\epsilon(\theta,\vartheta;w)$ is defined by [26]
$$Y_\epsilon(\theta,\vartheta;w) = \sum_{s=0}^{\epsilon}\binom{\epsilon}{s}\big(\theta+(\epsilon-1)I\big)_s\left(w\vartheta^{-1}\right)^{s},$$
where $\binom{\epsilon}{s}$ is a binomial coefficient. This matrix polynomial is a solution of the matrix differential equation
$$w^{2}\, Y_\epsilon''(\theta,\vartheta;w) + (\theta w+\vartheta)\, Y_\epsilon'(\theta,\vartheta;w) = \epsilon\big(\theta+(\epsilon-1)I\big)\, Y_\epsilon(\theta,\vartheta;w).$$
Using the hypergeometric matrix series, the generalized Bessel matrix polynomials may be written as
$$Y_\epsilon(\theta,\vartheta;w) = {}_2F_0\!\big(-\epsilon I,\ \theta+(\epsilon-1)I;\ -;\ -w\vartheta^{-1}\big).$$
The Laguerre matrix polynomials are defined in [16] by
$$L_\epsilon^{\theta}(z) = \sum_{s=0}^{\epsilon}\frac{(-1)^s}{s!\,(\epsilon-s)!}\,(\theta+I)_\epsilon\left[(\theta+I)_s\right]^{-1} z^{s}.$$
The Whittaker matrix function is defined in [26] by
$$W_{\theta,\,\theta-(\epsilon+\frac{1}{2})I}(z) = e^{-z/2}\, z^{\theta}\sum_{k=0}^{\epsilon}\frac{(-\epsilon I)_k\big((\epsilon+1)I-2\theta\big)_k}{k!}\left(-\frac{1}{z}\right)^{k} = e^{-z/2}\, z^{\theta}\ {}_2F_0\!\Big(-\epsilon I,\ (\epsilon+1)I-2\theta;\ -;\ -\frac{1}{z}\Big).$$
We also see that the Whittaker matrix functions and the Laguerre matrix polynomials are fundamental components of the generalized Bessel matrix polynomials. In fact, we have
$$Y_\epsilon(\theta,\vartheta;w) = \epsilon!\left(-w\vartheta^{-1}\right)^{\epsilon} L_\epsilon^{\,I-\theta-2\epsilon I}\!\left(\vartheta w^{-1}\right)$$
and
$$Y_\epsilon(\theta,\vartheta;w) = e^{\vartheta/2w}\left(w\vartheta^{-1}\right)^{I-\theta/2}\, W_{I-\theta/2,\ (\theta-I)/2+\epsilon I}\!\left(\vartheta w^{-1}\right).$$
An immediate consequence of (12) is the integral representation
$$Y_\epsilon(\theta,\vartheta;w) = \Gamma^{-1}\big(\theta+(\epsilon-1)I\big)\int_0^{\infty} t^{\theta+(\epsilon-2)I}\ {}_1F_0\!\big(-\epsilon I;\ -;\ -t w\vartheta^{-1}\big)\, e^{-t}\, dt = \Gamma^{-1}\big(\theta+(\epsilon-1)I\big)\int_0^{\infty} t^{\theta+(\epsilon-2)I}\left(I + t w\vartheta^{-1}\right)^{\epsilon} e^{-t}\, dt.$$
The generalized Bessel matrix polynomials are orthogonal on the unit circle with respect to the weight matrix function (cf. [26])
$$\rho(w) = \frac{1}{2\pi i}\sum_{n=0}^{\infty}\Gamma(\theta)\,\Gamma^{-1}\big(\theta+(n-1)I\big)\left(-\frac{\vartheta}{w}\right)^{n},$$
which satisfies the associated nonhomogeneous matrix equation
$$\big(w^{2}\rho(w)\big)' = (\theta w+\vartheta)\,\rho(w) - \frac{1}{2\pi i}\,(\theta-I)(\theta-2I)\, w.$$
For $n\neq\epsilon$, we have
$$\oint_C \rho(w)\, Y_\epsilon(\theta,\vartheta;w)\, Y_n(\theta,\vartheta;w)\, dw = 0.$$
We shall also need a few properties of the Pochhammer symbol in the present study.
Lemma 1
([6,27]). Let $\theta$ be a positive stable matrix in $\mathbb{C}^{r\times r}$, and let m, s, and r be non-negative integers with $s\le m$ and $r\le m$. Then:
1. $(-mI)_{m-s} = (-1)^{m-s}\,\dfrac{m!}{s!}\, I$,
2. $(m)_{r+s} = (m)_r\,(m+r)_s$,
3. $(m-s)! = \dfrac{(-1)^s\, m!}{(-m)_s}$,
4. $(-mI-\theta)_{m-r} = (-1)^{m-r}\,\Gamma\big(\theta+(m+1)I\big)\,\Gamma^{-1}\big(\theta+(r+1)I\big)$,
5. $\Gamma\big(\theta+(m+1)I\big)\,\Gamma^{-1}\big(\theta+(s+1)I\big) = (\theta+I)_m\left[(\theta+I)_s\right]^{-1}$,
6. $(\theta+I)_m\left[(\theta+I)_{m-s-r}\right]^{-1} = (-1)^{r+s}\,(-\theta-mI)_{r+s}$,
7. $(\theta)_{m-s} = (-1)^s\,(\theta)_m\left[\big((1-m)I-\theta\big)_s\right]^{-1}$,
8. $\Gamma(\theta-mI) = (-1)^m\,\Gamma(\theta)\left[(I-\theta)_m\right]^{-1}$.
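These identities can be sanity-checked in the scalar ($1\times 1$) case, where each matrix reduces to a number. The quick sketch below (ours, not from the paper) tests several of the items at concrete values:

```python
from math import factorial, gamma

def poch(a, k):
    """Scalar Pochhammer symbol (a)_k = a (a+1) ... (a+k-1)."""
    p = 1.0
    for j in range(k):
        p *= a + j
    return p

m, s, r, a = 6, 2, 3, 2.5
assert poch(-m, m - s) == (-1) ** (m - s) * factorial(m) / factorial(s)            # item 1
assert poch(m, r + s) == poch(m, r) * poch(m + r, s)                               # item 2
assert factorial(m - s) == (-1) ** s * factorial(m) / poch(-m, s)                  # item 3
assert abs(poch(a, m - s) - (-1) ** s * poch(a, m) / poch((1 - m) - a, s)) < 1e-9  # item 7
assert abs(gamma(a - 2) - gamma(a) / poch(1 - a, 2)) < 1e-9                        # item 8, m = 2
```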

3. The Matrix Bessel Polynomial  Y ϵ ( θ , ϑ ) ( z , w )

In this section, we introduce the matrix Bessel polynomial $Y_\epsilon^{(\theta,\vartheta)}(z,w)$ of two variables and discuss some of its basic properties.
Definition 1.
Let $\theta$ and $\vartheta$ be positive stable matrices in $\mathbb{C}^{r\times r}$ satisfying condition (9). The matrix Bessel polynomial of two variables $Y_\epsilon^{(\theta,\vartheta)}(z,w)$ is given by
$$Y_\epsilon^{(\theta,\vartheta)}(z,w) = (\theta+I)_\epsilon(\vartheta+I)_\epsilon\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}(-\epsilon I)_{v+r}\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\frac{z^{v}}{v!}\frac{w^{r}}{r!}.$$
Remark 1.
The following lists some particular results on the matrix Bessel polynomials:
1. Replacing $\theta$ by $(1-2\epsilon)I-\theta$ and $\vartheta$ by $(1-2\epsilon)I-\vartheta$ in (27), we obtain
$$Y_\epsilon^{((1-2\epsilon)I-\theta,\,(1-2\epsilon)I-\vartheta)}(z,w) = \big((2-2\epsilon)I-\theta\big)_\epsilon\big((2-2\epsilon)I-\vartheta\big)_\epsilon\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}(-\epsilon I)_{v+r}\left[\big((2-2\epsilon)I-\theta\big)_v\right]^{-1}\left[\big((2-2\epsilon)I-\vartheta\big)_r\right]^{-1}\frac{z^{v}}{v!}\frac{w^{r}}{r!}.$$
2. Also, from (27), we can deduce the symmetry
$$Y_\epsilon^{(\theta,\vartheta)}(z,w) = Y_\epsilon^{(\vartheta,\theta)}(w,z).$$
3. If we put $w = 0$ in (27), we find that
$$Y_\epsilon^{(\theta,\vartheta)}(z,0) = (-z)^{\epsilon}\,(\vartheta+I)_\epsilon\ Y_\epsilon\big((1-2\epsilon)I-\theta,\ I;\ 1/z\big).$$
4. Putting $w = 0$ and $\vartheta = O$ in (27), we obtain the relation
$$Y_\epsilon^{(\theta,O)}(z,0) = (-z)^{\epsilon}\,\epsilon!\ Y_\epsilon\big((1-2\epsilon)I-\theta,\ I;\ 1/z\big).$$
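Definition 1 can be checked numerically by direct summation. The sketch below is illustrative (the function names and the commuting test matrices are ours, not the paper's); it verifies the symmetry in the $(z,w)$ and $(\theta,\vartheta)$ pairs together with the easily hand-computed cases $\epsilon = 0, 1$:

```python
import numpy as np
from math import factorial

def poch(M, k):
    """Matrix Pochhammer symbol (M)_k."""
    r = M.shape[0]
    P = np.eye(r)
    for j in range(k):
        P = P @ (M + j * np.eye(r))
    return P

def Y(eps, theta, vartheta, z, w):
    """Two-variable matrix Bessel polynomial of Definition 1, by direct summation."""
    r = theta.shape[0]
    I = np.eye(r)
    S = np.zeros((r, r))
    for v in range(eps + 1):
        for s in range(eps - v + 1):
            S = S + (poch(-eps * I, v + s)
                     @ np.linalg.inv(poch(theta + I, v))
                     @ np.linalg.inv(poch(vartheta + I, s))
                     * (z ** v / factorial(v)) * (w ** s / factorial(s)))
    return poch(theta + I, eps) @ poch(vartheta + I, eps) @ S

I2 = np.eye(2)
theta = np.array([[2.0, 1.0], [0.0, 3.0]])
vartheta = 0.5 * theta + 0.5 * I2  # a polynomial in theta, so the two matrices commute
```

Building `vartheta` as a polynomial in `theta` guarantees the commutativity that the definition assumes.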
Theorem 1.
Let $\theta$ and $\vartheta$ be commuting matrices in $\mathbb{C}^{r\times r}$ satisfying condition (9), and let $\epsilon$, s, and r be non-negative integers. Then we have
$$Y_\epsilon^{(\theta,\vartheta)}(z,w) = (\theta+I)_\epsilon\,(-w)^{\epsilon}\sum_{r=0}^{\epsilon}\sum_{s=0}^{\epsilon-r}(-\epsilon)_{r+s}\,(-\epsilon I-\vartheta)_{r+s}\left[(\theta+I)_r\right]^{-1}\left(\frac{z}{w}\right)^{r}(-w)^{-s}\,\frac{1}{r!\, s!}.$$
Proof. 
Substitute $s = \epsilon - v - r$ in the right-hand side of (32). By Lemma 1,
$$(-\epsilon)_{\epsilon-v} = (-1)^{\epsilon-v}\,\frac{\epsilon!}{v!},\qquad (-\epsilon I-\vartheta)_{\epsilon-v} = (-1)^{\epsilon+v}\,(\vartheta+I)_\epsilon\left[(\vartheta+I)_v\right]^{-1},\qquad \frac{1}{(\epsilon-v-r)!} = \frac{(-1)^r\,\big(-(\epsilon-v)\big)_r}{(\epsilon-v)!},$$
while $(-w)^{\epsilon}(z/w)^{r}(-w)^{-s} = (-1)^{\epsilon+s}\, z^{r} w^{v}$. Collecting all signs, every factor of $(-1)$ cancels, and using $\dfrac{\epsilon!}{v!\,(\epsilon-v)!}=(-1)^v\dfrac{(-\epsilon)_v}{v!}$ together with $(-\epsilon)_v\,(v-\epsilon)_r = (-\epsilon)_{v+r}$, the double sum rearranges to
$$(\theta+I)_\epsilon(\vartheta+I)_\epsilon\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}(-\epsilon I)_{v+r}\left[(\vartheta+I)_v\right]^{-1}\left[(\theta+I)_r\right]^{-1}\frac{w^{v}}{v!}\frac{z^{r}}{r!} = Y_\epsilon^{(\theta,\vartheta)}(z,w),$$
where the last equality is the definition (27) after relabeling the summation indices. This completes the proof. □
Theorem 2.
Let $\theta$ and $\vartheta$ be commuting matrices in $\mathbb{C}^{r\times r}$ satisfying condition (9), and let $\epsilon$, p, and q be non-negative integers with $p+q\le\epsilon$. Then we have
$$\frac{\partial^{p+q}}{\partial z^{p}\,\partial w^{q}}\, Y_\epsilon^{(\theta,\vartheta)}(z,w) = (-1)^{p+q}\,(-\epsilon I)_{p+q}\,(-\epsilon I-\theta)_q\,(-\epsilon I-\vartheta)_p\ Y_{\epsilon-p-q}^{(\theta+pI,\,\vartheta+qI)}(z,w).$$
Proof. 
Differentiating the definition (27) p times with respect to z and q times with respect to w gives
$$\frac{\partial^{p+q}}{\partial z^{p}\,\partial w^{q}}\, Y_\epsilon^{(\theta,\vartheta)}(z,w) = (\theta+I)_\epsilon(\vartheta+I)_\epsilon\sum_{v=p}^{\epsilon}\sum_{r=q}^{\epsilon-v}(-\epsilon I)_{v+r}\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\frac{z^{v-p}}{(v-p)!}\frac{w^{r-q}}{(r-q)!}.$$
Shifting $v\to v+p$ and $r\to r+q$ and using (20), that is, $(-\epsilon I)_{v+r+p+q} = (-\epsilon I)_{p+q}\big(-(\epsilon-p-q)I\big)_{v+r}$, $(\theta+I)_{v+p} = (\theta+I)_p\big(\theta+(p+1)I\big)_v$, and $(\vartheta+I)_{r+q} = (\vartheta+I)_q\big(\vartheta+(q+1)I\big)_r$, we obtain
$$\frac{\partial^{p+q}}{\partial z^{p}\,\partial w^{q}}\, Y_\epsilon^{(\theta,\vartheta)}(z,w) = (-\epsilon I)_{p+q}\left[(\theta+I)_p\right]^{-1}\left[(\vartheta+I)_q\right]^{-1}(\theta+I)_\epsilon(\vartheta+I)_\epsilon\left[\big(\theta+(p+1)I\big)_{\epsilon-p-q}\right]^{-1}\left[\big(\vartheta+(q+1)I\big)_{\epsilon-p-q}\right]^{-1} Y_{\epsilon-p-q}^{(\theta+pI,\,\vartheta+qI)}(z,w).$$
Finally, since $(\theta+I)_\epsilon = (\theta+I)_p\big(\theta+(p+1)I\big)_{\epsilon-p-q}\big(\theta+(\epsilon-q+1)I\big)_q$ and $(\vartheta+I)_\epsilon = (\vartheta+I)_q\big(\vartheta+(q+1)I\big)_{\epsilon-p-q}\big(\vartheta+(\epsilon-p+1)I\big)_p$, together with $\big(\theta+(\epsilon-q+1)I\big)_q = (-1)^q(-\epsilon I-\theta)_q$ and $\big(\vartheta+(\epsilon-p+1)I\big)_p = (-1)^p(-\epsilon I-\vartheta)_p$, the coefficient reduces to $(-1)^{p+q}(-\epsilon I)_{p+q}(-\epsilon I-\theta)_q(-\epsilon I-\vartheta)_p$, and this finishes the proof. □

Recurrence Relation of  Y ϵ ( θ , ϑ ) ( z , w )

A recurrence relation of  Y ϵ ( θ , ϑ ) ( z , w )  will be stated in the next theorem.
Theorem 3.
Let $\theta$ and $\vartheta$ be positive stable commuting matrices in $\mathbb{C}^{r\times r}$ satisfying condition (9), and let $\epsilon\ge 1$. Then we obtain
$$Y_\epsilon^{(\theta,\vartheta)}(z,w) = (\theta+\epsilon I)(\vartheta+\epsilon I)\, Y_{\epsilon-1}^{(\theta,\vartheta)}(z,w) - z(\vartheta+\epsilon I)\, Y_{\epsilon-1}^{(\theta+I,\vartheta)}(z,w) - w(\theta+\epsilon I)\, Y_{\epsilon-1}^{(\theta,\vartheta+I)}(z,w).$$
Proof. 
Expanding both sides in double series via (27), it suffices to show that the coefficients of $z^{v} w^{r}$ agree on both sides of Equation (34). On the left-hand side, the coefficient is
$$(\theta+I)_\epsilon(\vartheta+I)_\epsilon\,(-\epsilon I)_{v+r}\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\frac{1}{v!\, r!}.$$
On the right-hand side, the three terms contribute the coefficients of $z^{v} w^{r}$, $z^{v-1} w^{r}$, and $z^{v} w^{r-1}$ of the respective shifted polynomials. Using $(\theta+\epsilon I)(\theta+I)_{\epsilon-1} = (\theta+I)_\epsilon$, $(\vartheta+\epsilon I)(\vartheta+I)_{\epsilon-1} = (\vartheta+I)_\epsilon$, and $(\theta+2I)_{\epsilon-1}\left[(\theta+2I)_{v-1}\right]^{-1} = (\theta+I)_\epsilon\left[(\theta+I)_v\right]^{-1}$ (similarly with $\vartheta$), this coefficient equals
$$(\theta+I)_\epsilon(\vartheta+I)_\epsilon\Big[\big(-(\epsilon-1)I\big)_{v+r} - (v+r)\big(-(\epsilon-1)I\big)_{v+r-1}\Big]\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\frac{1}{v!\, r!}.$$
Since
$$\big(-(\epsilon-1)I\big)_{v+r} - (v+r)\big(-(\epsilon-1)I\big)_{v+r-1} = \big(-(\epsilon-1)I\big)_{v+r-1}\big((v+r-\epsilon)I - (v+r)I\big) = -\epsilon\,\big(-(\epsilon-1)I\big)_{v+r-1} = (-\epsilon I)_{v+r},$$
the coefficients of $z^{v} w^{r}$ coincide on both sides of Equation (34), and this completes the proof. □
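The recurrence of Theorem 3 can be spot-checked numerically in the $1\times 1$ (scalar) case, where all matrices reduce to numbers. The following sketch (ours, not the paper's) verifies it for several values of $\epsilon$:

```python
from math import factorial

def poch(a, k):
    """Scalar Pochhammer symbol (a)_k."""
    p = 1.0
    for j in range(k):
        p *= a + j
    return p

def Y(eps, a, b, z, w):
    """Scalar (1x1) case of the two-variable matrix Bessel polynomial of Definition 1."""
    total = 0.0
    for v in range(eps + 1):
        for r in range(eps - v + 1):
            total += (poch(-eps, v + r) / (poch(a + 1, v) * poch(b + 1, r))
                      * z ** v / factorial(v) * w ** r / factorial(r))
    return poch(a + 1, eps) * poch(b + 1, eps) * total

a, b, z, w = 1.7, 2.3, 0.6, -1.1
for eps in range(1, 6):
    lhs = Y(eps, a, b, z, w)
    rhs = ((a + eps) * (b + eps) * Y(eps - 1, a, b, z, w)
           - z * (b + eps) * Y(eps - 1, a + 1, b, z, w)
           - w * (a + eps) * Y(eps - 1, a, b + 1, z, w))
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
```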

4. Some Integrals Involving the Matrix Bessel Polynomial  Y ϵ ( θ , ϑ ) ( z , w )

In this section, our attention is directed towards presenting integral representations for the matrix Bessel polynomial $Y_\epsilon^{(\theta,\vartheta)}(z,w)$ in the form of theorems.
Theorem 4.
Let $\theta$, $\vartheta$, and R be positive stable commuting matrices in $\mathbb{C}^{r\times r}$ satisfying condition (9), and let $a, b > 0$. Then we obtain
$$\int_0^{\infty} e^{-t}\, t^{R-I}\, Y_\epsilon^{(\theta,\vartheta)}\!\left(\frac{a}{tz},\frac{b}{tw}\right) dt = \Gamma(R)\,(\theta+I)_\epsilon(\vartheta+I)_\epsilon\sum_{p=0}^{\epsilon}\left[(I-R)_p\right]^{-1}\left[(\theta+I)_p\right]^{-1}(-\epsilon I)_p\,\frac{(-a/z)^{p}}{p!}\sum_{r=0}^{p}\left[(\vartheta+I)_r\right]^{-1}(-pI)_r\,(-\theta-pI)_r\,\frac{(bz/aw)^{r}}{r!}.$$
Proof. 
Using the definition (27) in the left-hand side, we obtain
$$(\theta+I)_\epsilon(\vartheta+I)_\epsilon\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}(-\epsilon I)_{v+r}\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\frac{(a/z)^{v}}{v!}\frac{(b/w)^{r}}{r!}\int_0^{\infty} e^{-t}\, t^{R-(v+r+1)I}\, dt,$$
where the integral equals $\Gamma\big(R-(v+r)I\big) = (-1)^{v+r}\,\Gamma(R)\left[(I-R)_{v+r}\right]^{-1}$ by Lemma 1. Substituting $p = v+r$, rearranging the terms, and using Lemma 1 again, namely $\left[(\theta+I)_{p-r}\right]^{-1} = (-1)^{r}(-\theta-pI)_r\left[(\theta+I)_p\right]^{-1}$ and $\dfrac{1}{(p-r)!} = \dfrac{(-1)^{r}(-p)_r}{p!}$, we arrive at
$$\Gamma(R)\,(\theta+I)_\epsilon(\vartheta+I)_\epsilon\sum_{p=0}^{\epsilon}(-\epsilon I)_p\left[(I-R)_p\right]^{-1}\left[(\theta+I)_p\right]^{-1}\frac{(-a/z)^{p}}{p!}\sum_{r=0}^{p}(-pI)_r\,(-\theta-pI)_r\left[(\vartheta+I)_r\right]^{-1}\frac{(bz/aw)^{r}}{r!}.$$
This concludes the proof. □
Theorem 5.
Let $\theta$, $\vartheta$, R, and K be commuting matrices in $\mathbb{C}^{r\times r}$ satisfying condition (9), with R and K positive stable, and let $a, b > 0$. Then we obtain
$$\int_0^1 t^{R-I}(1-t)^{K-I}\, Y_\epsilon^{(\theta,\vartheta)}\!\left(\frac{zt}{a},\frac{wt}{b}\right) dt = (\theta+I)_\epsilon(\vartheta+I)_\epsilon\,\beta(R,K)\sum_{p=0}^{\epsilon}\left[(R+K)_p\right]^{-1}\left[(\theta+I)_p\right]^{-1}(-\epsilon I)_p\,(R)_p\,\frac{(z/a)^{p}}{p!}\sum_{r=0}^{p}\left[(\vartheta+I)_r\right]^{-1}(-pI)_r\,(-\theta-pI)_r\,\frac{(aw/bz)^{r}}{r!}.$$
Proof. 
Using (27) in the left-hand side of (39), we find that
$$(\theta+I)_\epsilon(\vartheta+I)_\epsilon\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}(-\epsilon I)_{v+r}\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\frac{(z/a)^{v}}{v!}\frac{(w/b)^{r}}{r!}\int_0^1 t^{R+(v+r-1)I}(1-t)^{K-I}\, dt,$$
where the integral equals $\Gamma\big(R+(v+r)I\big)\,\Gamma(K)\,\Gamma^{-1}\big(R+K+(v+r)I\big) = \beta(R,K)\,(R)_{v+r}\left[(R+K)_{v+r}\right]^{-1}$. Putting $v+r = p$ and rearranging the terms exactly as in the proof of Theorem 4, we have
$$(\theta+I)_\epsilon(\vartheta+I)_\epsilon\,\beta(R,K)\sum_{p=0}^{\epsilon}(-\epsilon I)_p\,(R)_p\left[(R+K)_p\right]^{-1}\left[(\theta+I)_p\right]^{-1}\frac{(z/a)^{p}}{p!}\sum_{r=0}^{p}(-pI)_r\,(-\theta-pI)_r\left[(\vartheta+I)_r\right]^{-1}\frac{(aw/bz)^{r}}{r!}.$$
This completes the proof. □
Theorem 6.
Let $\theta$ and $\vartheta$ be positive stable matrices in $\mathbb{C}^{r\times r}$ such that $\theta\vartheta = \vartheta\theta$ and $\vartheta-\theta$ is positive stable, satisfying condition (9), and let $a, b > 0$. Then the matrix Bessel polynomials satisfy the integral representation
$$\int_0^1 z^{\theta}(1-z)^{\vartheta-\theta-I}\, Y_\epsilon^{(\theta,\vartheta)}\!\left(\frac{z}{a},\frac{w}{b}\right) dz = \Gamma\big(\theta+(\epsilon+1)I\big)\,\Gamma^{-1}\big(\vartheta+(\epsilon+1)I\big)\,\Gamma(\vartheta-\theta)\ Y_\epsilon^{(\vartheta,\vartheta)}\!\left(\frac{1}{a},\frac{w}{b}\right).$$
Proof. 
Using (27) in the left-hand side of (40), we obtain
$$(\theta+I)_\epsilon(\vartheta+I)_\epsilon\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}(-\epsilon I)_{v+r}\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\frac{(1/a)^{v}}{v!}\frac{(w/b)^{r}}{r!}\int_0^1 z^{\theta+vI}(1-z)^{\vartheta-\theta-I}\, dz,$$
where the integral equals $\Gamma\big(\theta+(v+1)I\big)\,\Gamma(\vartheta-\theta)\,\Gamma^{-1}\big(\vartheta+(v+1)I\big) = \Gamma(\theta+I)\,\Gamma(\vartheta-\theta)\,\Gamma^{-1}(\vartheta+I)\,(\theta+I)_v\left[(\vartheta+I)_v\right]^{-1}$. The factor $(\theta+I)_v$ cancels $\left[(\theta+I)_v\right]^{-1}$, so the double series becomes that of $Y_\epsilon^{(\vartheta,\vartheta)}(1/a, w/b)$ up to the factor $(\theta+I)_\epsilon\left[(\vartheta+I)_\epsilon\right]^{-1}$, and collecting constants gives
$$\Gamma\big(\theta+(\epsilon+1)I\big)\,\Gamma^{-1}\big(\vartheta+(\epsilon+1)I\big)\,\Gamma(\vartheta-\theta)\ Y_\epsilon^{(\vartheta,\vartheta)}\!\left(\frac{1}{a},\frac{w}{b}\right).$$
This leads to the assertion (40). □
Theorem 7.
Let $\theta$ and $\vartheta$ be positive stable matrices in $\mathbb{C}^{r\times r}$ such that $\theta\vartheta = \vartheta\theta$, satisfying condition (9), let $(1-2\epsilon)I-\vartheta$ be positive stable, and let $a, b > 0$. Then we have
$$\int_0^{t} z^{\theta}(t-z)^{-\vartheta-2\epsilon I}\, Y_\epsilon^{(\theta,\vartheta)}\!\left(\frac{z}{a},\frac{w}{b}\right) dz = t^{\theta-\vartheta+(1-2\epsilon)I}\,\Gamma\big(\theta+(\epsilon+1)I\big)\,\Gamma^{-1}\big(\theta-\vartheta+(2-\epsilon)I\big)\,\Gamma\big((1-2\epsilon)I-\vartheta\big)\ Y_\epsilon^{(\theta-\vartheta+(1-2\epsilon)I,\ \vartheta)}\!\left(\frac{t}{a},\frac{w}{b}\right).$$
Proof. 
Using (27) in the left-hand side of (41), we obtain
$$(\theta+I)_\epsilon(\vartheta+I)_\epsilon\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}(-\epsilon I)_{v+r}\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\frac{(1/a)^{v}}{v!}\frac{(w/b)^{r}}{r!}\int_0^{t} z^{\theta+vI}(t-z)^{-\vartheta-2\epsilon I}\, dz.$$
Substituting $z = tu$ turns the integral into
$$t^{\theta-\vartheta+(v-2\epsilon+1)I}\int_0^1 u^{\theta+vI}(1-u)^{-\vartheta-2\epsilon I}\, du = t^{\theta-\vartheta+(v-2\epsilon+1)I}\,\Gamma\big(\theta+(v+1)I\big)\,\Gamma\big((1-2\epsilon)I-\vartheta\big)\,\Gamma^{-1}\big(\theta-\vartheta+(v-2\epsilon+2)I\big).$$
Writing $\Gamma\big(\theta+(v+1)I\big) = \Gamma(\theta+I)\,(\theta+I)_v$ and $\Gamma^{-1}\big(\theta-\vartheta+(v-2\epsilon+2)I\big) = \left[\big(\theta-\vartheta+(2-2\epsilon)I\big)_v\right]^{-1}\Gamma^{-1}\big(\theta-\vartheta+(2-2\epsilon)I\big)$, the factor $(\theta+I)_v$ cancels $\left[(\theta+I)_v\right]^{-1}$ and the remaining double series is that of $Y_\epsilon^{(\theta-\vartheta+(1-2\epsilon)I,\,\vartheta)}(t/a, w/b)$. Collecting constants yields
$$t^{\theta-\vartheta+(1-2\epsilon)I}\,\Gamma\big(\theta+(\epsilon+1)I\big)\,\Gamma^{-1}\big(\theta-\vartheta+(2-\epsilon)I\big)\,\Gamma\big((1-2\epsilon)I-\vartheta\big)\ Y_\epsilon^{(\theta-\vartheta+(1-2\epsilon)I,\ \vartheta)}\!\left(\frac{t}{a},\frac{w}{b}\right),$$
and this completes the proof. □

5. The Laplace–Carson Matrix Transform

In this section, we introduce the Laplace–Carson transform of the extended matrix Bessel polynomial. Initially, we provide the definition of the Laplace–Carson transform for matrix functions.
Definition 2.
Let $f(z,w)$ be a matrix function defined for $z, w\ge 0$ with values in $\mathbb{C}^{r\times r}$. The two-dimensional Laplace–Carson transform is given by
$$F(u,s) = \mathcal{L}\big\{f(z,w) : (z,w)\to(u,s)\big\} = us\int_0^{\infty}\!\!\int_0^{\infty} e^{-uz-sw}\, f(z,w)\, dz\, dw,$$
provided the integral on the right-hand side exists.
Theorem 8.
Let $\theta$ and $\vartheta$ be commuting matrices in $\mathbb{C}^{r\times r}$ satisfying condition (9), and let $\mathrm{Re}(u) > 0$ and $\mathrm{Re}(s) > 0$. Then we obtain the Laplace–Carson transform
$$\mathcal{L}\left\{\left(\frac{z}{a}\right)^{\theta}\left(\frac{w}{b}\right)^{\vartheta} Y_\epsilon^{(\theta,\vartheta)}\!\left(\frac{z}{a},\frac{w}{b}\right) : (z,w)\to(u,s)\right\} = \Gamma\big(\theta+(\epsilon+1)I\big)\,\Gamma\big(\vartheta+(\epsilon+1)I\big)\,(au)^{-\theta}(bs)^{-\vartheta}\left(1-\frac{1}{au}-\frac{1}{bs}\right)^{\epsilon}.$$
Proof. 
By using Definition 2, we find that
$$\mathrm{L.H.S.} = us\, K\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}(-\epsilon I)_{v+r}\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\frac{(1/a)^{\theta+vI}}{v!}\frac{(1/b)^{\vartheta+rI}}{r!}\int_0^{\infty} e^{-uz}\, z^{\theta+vI}\, dz\int_0^{\infty} e^{-sw}\, w^{\vartheta+rI}\, dw,$$
where $K = (\theta+I)_\epsilon(\vartheta+I)_\epsilon$. Putting $uz = t$ and $sw = n$, the integrals evaluate to $\Gamma\big(\theta+(v+1)I\big)\, u^{-\theta-(v+1)I}$ and $\Gamma\big(\vartheta+(r+1)I\big)\, s^{-\vartheta-(r+1)I}$. Since $\Gamma\big(\theta+(v+1)I\big) = \Gamma(\theta+I)(\theta+I)_v$ cancels $\left[(\theta+I)_v\right]^{-1}$ (similarly for $\vartheta$), we get
$$\mathrm{L.H.S.} = K\,\Gamma(\theta+I)\,\Gamma(\vartheta+I)\,(au)^{-\theta}(bs)^{-\vartheta}\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}(-\epsilon I)_{v+r}\frac{(1/au)^{v}}{v!}\frac{(1/bs)^{r}}{r!} = \Gamma\big(\theta+(\epsilon+1)I\big)\,\Gamma\big(\vartheta+(\epsilon+1)I\big)\,(au)^{-\theta}(bs)^{-\vartheta}\left(1-\frac{1}{au}-\frac{1}{bs}\right)^{\epsilon},$$
which is the required proof. □
Theorem 9.
Let $\theta$ and $\vartheta$ be commuting matrices in $\mathbb{C}^{r\times r}$ satisfying condition (9). Then we obtain
$$\mathcal{L}\left\{Y_\epsilon^{(\theta,\vartheta)}\!\left(\frac{z}{a},\frac{w}{b}\right) : (z,w)\to(u,s)\right\} = A\; F_2\!\left(-\epsilon I,\ I,\ I;\ \theta+I,\ \vartheta+I;\ \frac{1}{au},\ \frac{1}{bs}\right),$$
where $A = (\theta+I)_\epsilon(\vartheta+I)_\epsilon$ and $F_2$ is the second Appell matrix function, defined in [28] as
$$F_2(\theta,\vartheta,\vartheta';R,R';z,w) = \sum_{v=0}^{\infty}\sum_{r=0}^{\infty}(\theta)_{v+r}(\vartheta)_v(\vartheta')_r\left[(R)_v\right]^{-1}\left[(R')_r\right]^{-1}\frac{z^{v}}{v!}\frac{w^{r}}{r!}.$$
Proof. 
Using Definition 2, we have
$$\mathrm{L.H.S.} = us\, A\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}(-\epsilon I)_{v+r}\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\frac{(1/a)^{v}}{v!}\frac{(1/b)^{r}}{r!}\int_0^{\infty} e^{-uz}\, z^{v}\, dz\int_0^{\infty} e^{-sw}\, w^{r}\, dw.$$
Since the integrals equal $\Gamma(v+1)/u^{v+1} = v!/u^{v+1}$ and $r!/s^{r+1}$, and $(1)_v = v!$, $(1)_r = r!$, this reduces to
$$A\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}(-\epsilon I)_{v+r}(1)_v(1)_r\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\frac{(1/au)^{v}}{v!}\frac{(1/bs)^{r}}{r!} = A\; F_2\!\left(-\epsilon I,\ I,\ I;\ \theta+I,\ \vartheta+I;\ \frac{1}{au},\ \frac{1}{bs}\right),$$
which is the required proof. □
Theorem 10.
Let $\theta$ and $\vartheta$ be commuting matrices in $\mathbb{C}^{r\times r}$ satisfying condition (9). Then we obtain
$$\mathcal{L}\left\{Y_\epsilon^{(\theta,\vartheta)}\!\left(\frac{z}{a},\frac{1}{b}\right)\sin\sqrt{zw} : (z,w)\to(u,s)\right\} = \frac{2\pi\, us\, A}{(4us+1)^{3/2}}\;\psi_1\!\left(-\epsilon I,\ \tfrac{3}{2}I;\ \theta+I,\ \vartheta+I;\ \frac{4s}{a(4us+1)},\ \frac{1}{b}\right),$$
where $A = (\theta+I)_\epsilon(\vartheta+I)_\epsilon$, and the Humbert matrix function $\psi_1(\theta,\vartheta;R,R';z,w)$ of two complex variables is given in [29] as
$$\psi_1(\theta,\vartheta;R,R';z,w) = \sum_{v=0}^{\infty}\sum_{r=0}^{\infty}(\theta)_{v+r}(\vartheta)_v\left[(R)_v\right]^{-1}\left[(R')_r\right]^{-1}\frac{z^{v}}{v!}\frac{w^{r}}{r!}.$$
Proof. 
Using Definition 2 and the series $\sin\sqrt{zw} = \sum_{k=0}^{\infty}\frac{(-1)^k}{\Gamma(2k+2)}(zw)^{k+\frac{1}{2}}$, we get
$$\mathrm{L.H.S.} = us\, A\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}\sum_{k=0}^{\infty}(-\epsilon I)_{v+r}\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\frac{(1/a)^{v}}{v!}\frac{(1/b)^{r}}{r!}\,\frac{(-1)^k}{\Gamma(2k+2)}\,\frac{\Gamma(v+k+\frac{3}{2})\,\Gamma(k+\frac{3}{2})}{u^{\, v+k+\frac{3}{2}}\, s^{\, k+\frac{3}{2}}}.$$
By the duplication formula for the gamma function, $\frac{(-1)^k\,\Gamma(k+\frac{3}{2})}{\Gamma(2k+2)} = \frac{(-1)^k\sqrt{\pi}}{2^{2k+1}\, k!}$, and summing over k with $\sum_{k\ge 0}\big(v+\tfrac{3}{2}\big)_k\frac{(-1/4us)^k}{k!} = \left(1+\frac{1}{4us}\right)^{-v-\frac{3}{2}}$, together with $\Gamma\big(v+\tfrac{3}{2}\big) = \frac{\sqrt{\pi}}{2}\big(\tfrac{3}{2}\big)_v$, we obtain
$$\mathrm{L.H.S.} = \frac{\pi}{4}\, A\,(us)^{-\frac{1}{2}}\left(\frac{4us}{4us+1}\right)^{\frac{3}{2}}\sum_{v=0}^{\epsilon}\sum_{r=0}^{\epsilon-v}(-\epsilon I)_{v+r}\big(\tfrac{3}{2}\big)_v\left[(\theta+I)_v\right]^{-1}\left[(\vartheta+I)_r\right]^{-1}\left(\frac{4s}{a(4us+1)}\right)^{v}\frac{1}{v!}\,\frac{(1/b)^{r}}{r!} = \frac{2\pi\, us\, A}{(4us+1)^{3/2}}\;\psi_1\!\left(-\epsilon I,\ \tfrac{3}{2}I;\ \theta+I,\ \vartheta+I;\ \frac{4s}{a(4us+1)},\ \frac{1}{b}\right).$$
This concludes the proof of Theorem 10. □

6. Conclusions

This article examined a two-variable counterpart of matrix Bessel polynomials and investigated differential formulas and recurrence relations associated with them. Several integral formulas for this new extension of matrix Bessel polynomials were also presented, along with the Laplace–Carson transform of the two-variable matrix Bessel polynomial analogue. Future research efforts could be devoted to unveiling further properties of these polynomials, including extended and generalized forms as well as additional integral representations. The analysis of these facets may contribute to a deeper understanding of the polynomials and their behavior.

Author Contributions

Methodology and conceptualization, A.B. and M.N.; data curation and writing—original draft, G.A. and M.Z.; investigation and visualization, A.B. and S.H.; validation, writing—review and editing, and funding acquisition, M.Z. and G.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the King Khalid University, Grant RGP 2/414/44 and Princess Nourah bint Abdulrahman University Researchers Supporting Project number PNURSP2024R45, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

There are no data associated with this study.

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through large group Research Project under grant number RGP 2/414/44 and Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R45), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare there are no conflicts of interest.

References

1. Srivastava, H.M.; Agarwal, P.; Jain, S. Generating functions for the generalized Gauss hypergeometric functions. Appl. Math. Comput. 2014, 247, 348–352.
2. Agarwal, P.; Dragomir, S.S.; Jleli, M.; Samet, B. Advances in Mathematical Inequalities and Applications; Trends in Mathematics; Birkhäuser: Basel, Switzerland, 2019.
3. Krall, H.L.; Frink, O. A new class of orthogonal polynomials: The Bessel polynomials. Trans. Amer. Math. Soc. 1949, 65, 100–115.
4. Galvez, F.; Dehesa, J.S. Some open problems of generalized Bessel polynomials. J. Phys. A Math. Gen. 1984, 17, 2759–2766.
5. Berg, C.; Vignat, C. Linearization coefficients of Bessel polynomials and properties of Student t-distributions. Constr. Approx. 2008, 27, 15–32.
6. Izadi, M.; Srivastava, H.M. A novel matrix technique for multi-order pantograph differential equations of fractional order. Proc. Roy. Soc. Lond. Ser. A Math. Phys. Engrg. Sci. 2021, 477, 20210321.
7. Izadi, M.; Cattani, C. Generalized Bessel polynomial for multi-order fractional differential equations. Symmetry 2020, 12, 1260.
8. Tcheutia, D.D. Nonnegative linearization coefficients of the generalized Bessel polynomials. Ramanujan J. 2019, 48, 217–231.
9. Altomare, M.; Costabile, F. A new determinant form of Bessel polynomials and applications. Math. Comput. Simul. 2017, 141, 16–23.
10. Abdalla, M.; Abul-Ez, M.; Morais, J. On the construction of generalized monogenic Bessel polynomials. Math. Meth. Appl. Sci. 2018, 40, 9335–9348.
11. Hamza, A.M. Properties of Bessel Polynomials. Ph.D. Thesis, Loughborough University, Loughborough, UK, 1974.
12. Chauhan, R.; Kumar, N.; Aggarwal, S. Dualities between Laplace–Carson transform and some useful integral transforms. Int. J. Innov. Technol. Explor. Eng. 2019, 8, 1654–1659.
13. Batahan, R.S. A new extension of Hermite matrix polynomials and its applications. Linear Algebra Appl. 2006, 419, 82–92.
14. Defez, E.; Jódar, L. Chebyshev matrix polynomials and second order matrix differential equations. Utilitas Math. 2002, 61, 107–123.
15. Defez, E.; Jódar, L.; Law, A. Jacobi matrix differential equation, polynomial solutions, and their properties. Comput. Math. Appl. 2004, 48, 789–803.
16. Jódar, L.; Sastre, J. On the Laguerre matrix polynomials. Utilitas Math. 1998, 53, 37–48.
17. Shehata, A. A new extension of Gegenbauer matrix polynomials and their properties. Bull. Int. Math. Virtual Inst. 2012, 2, 29–42.
18. Bakhet, A.; Zayed, M. Incomplete exponential type of R-matrix functions and their properties. AIMS Math. 2023, 8, 26081–26095.
19. Jódar, L.; Cortés, J.C. On the hypergeometric matrix function. J. Comput. Appl. Math. 1998, 99, 205–217.
20. Jódar, L.; Cortés, J.C. Some properties of the Gamma and Beta matrix functions. Appl. Math. Lett. 1998, 11, 89–93.
21. Goyal, R.; Agarwal, P.; Oros, G.I.; Jain, S. Extended Beta and Gamma matrix functions via 2-parameter Mittag-Leffler matrix function. Mathematics 2022, 10, 892.
22. Cuchta, T.; Grow, D.; Wintz, N. Discrete matrix hypergeometric functions. J. Math. Anal. Appl. 2023, 518, 126716.
23. Khammash, G.S.; Agarwal, P.; Choi, J. Extended k-Gamma and k-Beta functions of matrix arguments. Mathematics 2020, 8, 1715.
24. Cuchta, T.; Grow, D.; Wintz, N. Divergence criteria for matrix generalized hypergeometric series. Proc. Am. Math. Soc. 2022, 150, 1235–1240.
25. Bakhet, A.; Jiao, Y.; He, F.L. On the Wright hypergeometric matrix functions and their fractional calculus. Integral Transform. Spec. Funct. 2019, 30, 138–156.
26. Kishka, Z.M.G.; Shehata, A.; Abul-Dahab, M. The generalized Bessel matrix polynomials. J. Math. Comput. Sci. 2012, 2, 305–316.
27. Abul-Dahab, M.A.; Abul-Ez, M.; Kishka, Z.; Constales, D. Reverse generalized Bessel matrix differential equation, polynomial solutions, and their properties. Math. Meth. Appl. Sci. 2015, 38, 1005–1013.
28. Batahan, R.S.; Metwally, M.S. Differential and integral operators on Appell's matrix function. Andal. Soc. Appl. Sci. 2009, 3, 7–25.
29. Rida, S.Z.; Abul-Dahab, M.; Saleem, M.A.; Mohammed, M.T. On Humbert matrix function Ψ1(A, B; C, C′; z, w) of two complex variables under differential operator. Int. J. Ind. Math. 2010, 32, 167–179.

