
A New Efficient Method for Absolute Value Equations

1 School of Mathematics and Statistics, Anyang Normal University, Anyang 455002, China
2 Department of Mathematics, Abdul Wali Khan University Mardan, Mardan 23200, Pakistan
3 College of Sciences, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(15), 3356; https://doi.org/10.3390/math11153356
Submission received: 25 June 2023 / Revised: 24 July 2023 / Accepted: 28 July 2023 / Published: 31 July 2023

Abstract

In this paper, a two-step method is considered, with the generalized Newton method as the predictor step and the three-point Newton–Cotes formula as the corrector step. The proposed method's convergence is discussed in detail. The method is very simple and therefore well suited to large systems. In the numerical analysis, we consider a beam equation, transform it into a system of absolute value equations and then solve it with the proposed method. Numerical experiments show that our method is more accurate and faster than existing methods.

1. Introduction

Consider an absolute value equation (AVE) of the form

Ax − |x| = b, (1)

where A ∈ R^{n×n}, x, b ∈ R^n and |·| denotes the componentwise absolute value. The AVE

Ax + B|x| = b (2)

is the generalized form of Equation (1) for B ∈ R^{n×n}, which was first presented by Rohn [1]. The AVE Equation (1) has many applications in pure and applied sciences [2]. It is difficult to find the exact solution of Equation (1) because of the absolute value of x. For some works on this aspect, we refer to [3,4,5]. Many iterative methods have been proposed to study the AVE Equation (1), for example [6,7,8,9,10,11,12,13,14,15].
Nowadays, two-step techniques are very popular for solving the AVE Equation (1). Liu [16,17] presented two-step iterative methods to solve AVEs. Khan et al. [18] suggested a new method based on the generalized Newton technique and Simpson's rule for solving AVEs. Shi et al. [19] developed a two-step Newton-type method with linear convergence for AVEs. Noor et al. [20] suggested minimization techniques for AVEs and discussed their convergence under suitable conditions. In [21], a two-step Gauss quadrature method was suggested for solving AVEs. When the coefficient matrix A in the AVE Equation (1) has Toeplitz structure, Gu et al. [22] suggested the nonlinear CSCS-like method and the Picard–CSCS method for solving this problem.
In this paper, the Newton–Cotes open method, combined with the generalized Newton technique [23], is suggested to solve Equation (1). This new method is straightforward and very effective. The proposed method's convergence is proved under the condition that ‖A^{−1}‖ < 1/10 in Section 3. To demonstrate its effectiveness, we consider several examples in Section 4. The main aim of this new method is to obtain the solution of (1) in a few iterations with good accuracy. The new method successfully solves large systems of AVEs; in most cases, it requires just one iteration to find the approximate solution of Equation (1) with accuracy up to 10^{−13}. The following notations are used. Let sign(x) be the vector with entries −1, 0, 1 according to the signs of the corresponding entries of x. The generalized Jacobian ∂|x| of |x|, based on a subgradient [24,25] of the entries of |x|, is the diagonal matrix D given by

D(x) = ∂|x| = diag(sign(x)). (3)

svd(A) denotes the n singular values of A; ‖A‖ = λ^{1/2} is the 2-norm of A, where λ is the largest eigenvalue of A^T A in absolute value; ‖x‖ = (x^T x)^{1/2} is the 2-norm of the vector x. For more detail, see [26].
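For concreteness, the diagonal matrix D(x) of Equation (3) can be formed in a few lines. The following is a sketch in Python/NumPy (the helper name `D` is ours, not from the paper):

```python
import numpy as np

def D(x):
    # Generalized Jacobian of |x|: diagonal matrix of the signs of x,
    # with diagonal entries -1, 0, or 1 (Equation (3)).
    return np.diag(np.sign(x))

print(D(np.array([-2.0, 0.0, 3.5])))
```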

2. Proposed Method

We develop a new two-step (NTS) method for the AVE Equation (1) in this section. Let

J(x) = Ax − |x| − b. (4)

Then, the generalized Jacobian J′(x) is given by:

J′(x) = ∂(J(x)) = A − D(x). (5)

Consider the predictor step:

γ_k = (A − D(x_k))^{−1} b. (6)
Let v be the solution of Equation (1), so that J(v) = 0. To construct the corrector step, we proceed as follows:

∫_u^v J′(x) dx = J(v) − J(u) = −J(u). (7)

Now, using the three-point Newton–Cotes open formula, we have

∫_u^v J′(x) dx = (1/3)[2J′((3u + v)/4) − J′((u + v)/2) + 2J′((u + 3v)/4)](v − u). (8)

From Equations (7) and (8), we have

−J(u) = (1/3)[2J′((3u + v)/4) − J′((u + v)/2) + 2J′((u + 3v)/4)](v − u). (9)

Thus,

v = u − 3[2J′((3u + v)/4) − J′((u + v)/2) + 2J′((u + 3v)/4)]^{−1} J(u). (10)
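As a quick sanity check on the quadrature rule used above (not part of the paper; the function name `milne` is ours), the three-point open Newton–Cotes (Milne) formula is exact for polynomials up to degree three:

```python
def milne(f, u, v):
    # Three-point open Newton-Cotes (Milne) rule on [u, v]:
    # int_u^v f dx ~= (v - u)/3 * (2 f((3u+v)/4) - f((u+v)/2) + 2 f((u+3v)/4))
    return (v - u) / 3.0 * (2*f((3*u + v)/4) - f((u + v)/2) + 2*f((u + 3*v)/4))

# Exact for cubics: int_0^2 x^3 dx = 4.
print(milne(lambda x: x**3, 0.0, 2.0))
```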
From Equation (10), the NTS method can be written as Algorithm 1:

Algorithm 1: NTS Method
1: Choose x_0 ∈ R^n.
2: For k = 0, 1, 2, …, calculate γ_k = (A − D(x_k))^{−1} b.
3: Using Step 2, calculate
   x_{k+1} = x_k − 3[2J′((3x_k + γ_k)/4) − J′((x_k + γ_k)/2) + 2J′((x_k + 3γ_k)/4)]^{−1} J(x_k).
4: If ‖x_{k+1} − x_k‖ < tol, then stop. Otherwise, go to Step 2.
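A minimal NumPy sketch of Algorithm 1 follows. This is our own translation, not the authors' Matlab code: the names `nts`, `tol` and the small test problem are illustrative, and the test matrix is taken diagonally dominant (in the spirit of Example 3 below) so that the linear solves are well posed:

```python
import numpy as np

def nts(A, b, x0, tol=1e-12, max_iter=100):
    """Sketch of the NTS method for the AVE  A x - |x| = b."""
    D = lambda z: np.diag(np.sign(z))
    J = lambda z: A @ z - np.abs(z) - b            # J(x) = Ax - |x| - b
    Jp = lambda z: A - D(z)                        # J'(x) = A - D(x)
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        g = np.linalg.solve(A - D(x), b)           # predictor: gamma_k
        M = 2*Jp((3*x + g)/4) - Jp((x + g)/2) + 2*Jp((x + 3*g)/4)
        x_new = x - 3*np.linalg.solve(M, J(x))     # corrector step
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# Small test: A = tridiag(-1, 8, -1), known solution v, b = Av - |v|.
n = 5
A = 8*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
v = np.array([1.0, -1.0, 1.0, -1.0, 1.0])
b = A @ v - np.abs(v)
x, k = nts(A, b, np.zeros(n))
print(k, np.linalg.norm(A @ x - np.abs(x) - b))
```

For this well-conditioned A, the predictor already lands in the correct sign pattern, so the corrector recovers v essentially in one step.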

3. Convergence

Now, we examine the convergence of the NTS method. The predictor step

γ_k = (A − D(x_k))^{−1} b

is well defined; see Lemma 2 of [23]. To prove that

2J′((3x_k + γ_k)/4) − J′((x_k + γ_k)/2) + 2J′((x_k + 3γ_k)/4)

is nonsingular, we first set

φ_k = (3x_k + γ_k)/4,  δ_k = (x_k + γ_k)/2,  τ_k = (x_k + 3γ_k)/4.

Now

2J′(φ_k) − J′(δ_k) + 2J′(τ_k) = 2(A − D(φ_k)) − (A − D(δ_k)) + 2(A − D(τ_k)) = 3A − 2D(φ_k) + D(δ_k) − 2D(τ_k),

where D(φ_k), D(δ_k) and D(τ_k) are the diagonal matrices defined in Equation (3).
Lemma 1.
If svd(A) > 1, then (3A − 2D(φ_k) + D(δ_k) − 2D(τ_k))^{−1} exists for any diagonal matrices D(φ_k), D(δ_k), D(τ_k) defined as in Equation (3).
Proof. 
If 3A − 2D(φ_k) + D(δ_k) − 2D(τ_k) is singular, then

(3A − 2D(φ_k) + D(δ_k) − 2D(τ_k)) u = 0

for some u ≠ 0, that is, 3Au = (2D(φ_k) − D(δ_k) + 2D(τ_k)) u. As svd(A) > 1, using Lemma 1 of [23], we have

u^T u < u^T A^T A u = (1/9) u^T (2D(φ_k) − D(δ_k) + 2D(τ_k))^T (2D(φ_k) − D(δ_k) + 2D(τ_k)) u
      = (1/9) u^T [4D(φ_k)D(φ_k) − 4D(φ_k)D(δ_k) + 8D(φ_k)D(τ_k) + D(δ_k)D(δ_k) − 4D(δ_k)D(τ_k) + 4D(τ_k)D(τ_k)] u
      ≤ (1/9) u^T (9) u = u^T u,

which is a contradiction; hence 3A − 2D(φ_k) + D(δ_k) − 2D(τ_k) is nonsingular.   □
Lemma 2.
If svd(A) > 1, then the sequence generated by the NTS method is well defined and bounded, with an accumulation point x̃ such that

x̃ = x̃ − 3[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)]^{−1} J(x̃),

or, equivalently,

[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)] x̃ = [2J′(φ_k) − J′(δ_k) + 2J′(τ_k)] x̃ − 3J(x̃).

Hence, there exists an accumulation point x̃ with

(A − D̃) x̃ = b,

for some diagonal matrix D̃ whose diagonal entries are 0 or ±1, depending on whether the corresponding component of x̃ is zero, positive, or negative, as defined in Equation (3).
Proof. 
The proof is the same as that given in [23] and is therefore omitted.   □
Theorem 1.
If ‖[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)]^{−1}‖ < 1/9, then the NTS method converges to a solution v of Equation (1).
Proof. 
Consider

x_{k+1} − v = x_k − 3[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)]^{−1} J(x_k) − v
            = x_k − v − 3[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)]^{−1} J(x_k).

It is seen that

[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)](x_{k+1} − v) = [2J′(φ_k) − J′(δ_k) + 2J′(τ_k)](x_k − v) − 3J(x_k). (18)
As v is the solution of Equation (1),

J(v) = Av − |v| − b = 0. (19)

From Equations (18) and (19), we have

[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)](x_{k+1} − v)
= [2J′(φ_k) − J′(δ_k) + 2J′(τ_k)](x_k − v) − 3J(x_k) + 3J(v)
= [2J′(φ_k) − J′(δ_k) + 2J′(τ_k)](x_k − v) − 3[J(x_k) − J(v)]
= [2J′(φ_k) − J′(δ_k) + 2J′(τ_k)](x_k − v) − 3[A x_k − |x_k| − A v + |v|]
= ([2J′(φ_k) − J′(δ_k) + 2J′(τ_k)] − 3A)(x_k − v) + 3(|x_k| − |v|)
= (−2D(φ_k) + D(δ_k) − 2D(τ_k))(x_k − v) + 3(|x_k| − |v|).
It follows that

x_{k+1} − v = [2J′(φ_k) − J′(δ_k) + 2J′(τ_k)]^{−1} [(−2D(φ_k) + D(δ_k) − 2D(τ_k))(x_k − v) + 3(|x_k| − |v|)].

Taking norms, this leads to

‖x_{k+1} − v‖ ≤ ‖[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)]^{−1}‖ ‖(−2D(φ_k) + D(δ_k) − 2D(τ_k))(x_k − v) + 3(|x_k| − |v|)‖
             ≤ ‖[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)]^{−1}‖ (‖−2D(φ_k) + D(δ_k) − 2D(τ_k)‖ ‖x_k − v‖ + 3‖|x_k| − |v|‖). (20)
Since D(φ_k), D(δ_k) and D(τ_k) are diagonal matrices whose entries lie in {−1, 0, 1}, we have

‖−2D(φ_k) + D(δ_k) − 2D(τ_k)‖ ≤ 3. (21)

We also use the Lipschitz continuity of the absolute value (see Lemma 5 of [23]), that is,

‖|x_k| − |v|‖ ≤ 2‖x_k − v‖. (22)
From Equations (20)–(22), we have

‖x_{k+1} − v‖ ≤ ‖[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)]^{−1}‖ (3‖x_k − v‖ + 6‖x_k − v‖)
             = 9 ‖[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)]^{−1}‖ ‖x_k − v‖ < ‖x_k − v‖. (23)

In Equation (23), the assumption ‖[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)]^{−1}‖ < 1/9 is used. Hence {x_k} converges linearly to the solution of Equation (1).   □
Lemma 3.
Let ‖A^{−1}‖ < 1/10 and let D(φ_k), D(δ_k), D(τ_k) be nonzero. Then, for any b, the NTS method converges to the unique solution of Equation (1) from any initial guess x_0 ∈ R^n.
Proof. 
Since ‖A^{−1}‖ < 1/10, Equation (1) is uniquely solvable for any b; see ([2], Proposition 4). Since A^{−1} exists, by Lemma 2.3.2 of [26], we have

‖[2J′(φ_k) − J′(δ_k) + 2J′(τ_k)]^{−1}‖ = ‖[3A − (2D(φ_k) − D(δ_k) + 2D(τ_k))]^{−1}‖
≤ ‖(3A)^{−1}‖ ‖2D(φ_k) − D(δ_k) + 2D(τ_k)‖ / (1 − ‖(3A)^{−1}‖ ‖2D(φ_k) − D(δ_k) + 2D(τ_k)‖)
≤ ((1/3)‖A^{−1}‖ · 3) / (1 − (1/3)‖A^{−1}‖ · 3) = ‖A^{−1}‖ / (1 − ‖A^{−1}‖) < (1/10)/(1 − 1/10) = 1/9.

Hence, by Theorem 1, the NTS method converges to the unique solution of Equation (1).   □

4. Numerical Result

In this section, several examples are presented to demonstrate the efficiency of the suggested method. We use Matlab R2021a on a Core(TM) i5 @ 1.70 GHz machine. The CPU time in seconds, number of iterations and 2-norm of the residual are denoted by CPU, K and RES, respectively.
Example 1
([9]). Consider

A = tridiag(−1.5, 4, 0.5) ∈ R^{s×s}, x ∈ R^s and b = (1, 2, …, s)^T.
A comparison of the NTS method with the MSOR-like method [9], generalized Newton method (GNM) [23] and RIM [11] is given in Table 1.
Table 1 shows that the NTS method finds the solution of Equation (1) very quickly. The RES of the NTS method shows that the new method is more accurate than all the methods stated in Table 1.
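This test problem can be reproduced as follows. The sketch is ours: the helper name `example1` is illustrative, and we read tridiag(−1.5, 4, 0.5) as subdiagonal, diagonal and superdiagonal values, respectively:

```python
import numpy as np

def example1(s):
    # tridiag(-1.5, 4, 0.5): -1.5 below the diagonal, 4 on it, 0.5 above it.
    A = (4.0*np.eye(s)
         - 1.5*np.eye(s, k=-1)      # subdiagonal
         + 0.5*np.eye(s, k=1))      # superdiagonal
    b = np.arange(1.0, s + 1.0)     # b = (1, 2, ..., s)^T
    return A, b

A, b = example1(4)
print(A)
print(b)
```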
Example 2
([16]). Consider

A = round(s × (eye(s, s) − 0.02 × (2 × rand(s, s) − 1))).

Choose a random x ∈ R^s and set b = Ax − |x|.
We compare the NTS method with INM [7], the GQ method [21] and TSI [16] in Table 2.
It is clear that the NTS method converges in one iteration in most cases, while the other methods require at least two iterations to reach the given accuracy.
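Assuming the MATLAB semantics of `eye`, `rand` and `round` in the construction above, the data can be generated like this (a sketch; the seed and the name `example2` are ours):

```python
import numpy as np

def example2(s, seed=0):
    rng = np.random.default_rng(seed)
    # A = round(s * (eye(s) - 0.02 * (2*rand(s, s) - 1))), as in the paper.
    A = np.round(s * (np.eye(s) - 0.02 * (2*rng.random((s, s)) - 1)))
    x = 2*rng.random(s) - 1          # random exact solution
    b = A @ x - np.abs(x)            # right-hand side b = Ax - |x|
    return A, b, x
```

For moderate s the perturbation 0.02·s stays below 0.5, so the rounding leaves A diagonally dominant with diagonal entries equal to s.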
Example 3
([7]). Let

A = tridiag(−1, 8, −1) ∈ R^{s×s}, b = Ae − e for e = (1, 1, …, 1)^T ∈ R^s,

where the initial vector is taken from [7].
We compare the NTS method with GGS [8], MGS [6] and Method II [7].
As seen in Table 3, the suggested method approximates the solution of Equation (1) in just one iteration. The residual shows that the NTS method is very accurate.
Example 4.
Consider the beam equation of the form

d²x/dr² − |x| = Sx/(EM) + q r (r − L)/(2EM),

with boundary conditions

x(0) = 0, x(L) = 0,

where L = 120 in. is the length of the beam, E = 3 × 10^7 lb/in.² is the modulus of elasticity, q = 100 lb/ft is the intensity of the uniform load, S = 1000 lb is the stress at the ends and M = 625 in.⁴ is the central moment of inertia.
We use the finite difference method (FDM) to discretize this equation. A comparison of the NTS method with the solution obtained by Maple is illustrated in Figure 1.
Figure 1 shows the effectiveness and accuracy of the NTS method. Clearly, the deflection of the beam is maximum at the center.
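A hypothetical sketch of the finite-difference assembly follows. It assumes the reading of the beam equation given above, uses the standard central difference for x'', converts q from lb/ft to lb/in., and collects the interior unknowns into the AVE form Ax − |x| = b; all names are ours:

```python
import numpy as np

def beam_ave(n, L=120.0, E=3e7, S=1000.0, q=100.0/12.0, M=625.0):
    # Interior grid r_i = i*h, i = 1..n, with h = L/(n+1); x(0) = x(L) = 0.
    h = L / (n + 1)
    r = h * np.arange(1, n + 1)
    # (x_{i-1} - 2x_i + x_{i+1})/h^2 - |x_i| = (S/(E*M)) x_i + q r_i (r_i - L)/(2EM)
    # rearranged as A x - |x| = b with A tridiagonal:
    A = ((-2.0/h**2 - S/(E*M)) * np.eye(n)
         + (1.0/h**2) * (np.eye(n, k=1) + np.eye(n, k=-1)))
    b = q * r * (r - L) / (2 * E * M)
    return A, b, r

A, b, r = beam_ave(59)   # n = 59 gives h = 2 for L = 120, as in Figure 1
```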
Example 5
([20]). Consider an AVE whose coefficient matrix is given by

a_{ii} = 4s, a_{i,i+1} = a_{i+1,i} = s, a_{ij} = 0.5 otherwise, i = 1, 2, …, s.
Choose a constant vector b, and the initial guess is taken from [20]. A comparison of the NTS method with MM [20] and MMSGP [1] is presented in Table 4.
We observe that the NTS method is very successful for solving Equation (1). Furthermore, the NTS method remains very consistent as s increases (large systems), whereas the other two methods need more iterations.

5. Conclusions

In this paper, we have used a two-step method for AVEs. In this new method, the three-point Newton–Cotes open formula is taken as the corrector step, while the generalized Newton method is taken as the predictor. The convergence of the NTS method is proved in Section 3, and Theorem 1 establishes the linear convergence of the proposed method. Comparisons show that this method is very accurate and converges in just one iteration in most cases. In the future, this idea can be used to solve generalized AVEs and also to find all solutions of AVEs.

Author Contributions

The idea of the present paper came from J.I. and S.M.G.; P.G., J.I., S.M.G., M.A. and R.K.A. wrote and completed the calculations; L.S. checked all the results. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Key Scientific Research Projects of Universities in Henan Province under Grant 22A110005.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to extend their sincere appreciation to Researchers Supporting Project number RSPD2023R802 KSU, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Rohn, J. A theorem of the alternatives for the equation Ax + B|x| = b. Linear Multilinear Algebra 2004, 52, 421–426.
  2. Mangasarian, O.L.; Meyer, R.R. Absolute value equations. Linear Algebra Appl. 2006, 419, 359–367.
  3. Mansoori, A.; Eshaghnezhad, M.; Effati, S. An efficient neural network model for solving the absolute value equations. IEEE Trans. Circuits Syst. II Express Briefs 2017, 65, 391–395.
  4. Chen, C.; Yang, Y.; Yu, D.; Han, D. An inverse-free dynamical system for solving the absolute value equations. Appl. Numer. Math. 2021, 168, 170–181.
  5. Chen, C.; Yu, D.; Han, D. Exact and inexact Douglas–Rachford splitting methods for solving large-scale sparse absolute value equations. IMA J. Numer. Anal. 2023, 43, 1036–1060.
  6. Ali, R. Numerical solution of the absolute value equation using modified iteration methods. Comput. Math. Methods 2022, 2022, 2828457.
  7. Ali, R.; Khan, I.; Ali, A.; Mohamed, A. Two new generalized iteration methods for solving absolute value equations using M-matrix. AIMS Math. 2022, 7, 8176–8187.
  8. Edalatpour, V.; Hezari, D.; Salkuyeh, D.K. A generalization of the Gauss–Seidel iteration method for solving absolute value equations. Appl. Math. Comput. 2017, 293, 156–167.
  9. Huang, B.; Li, W. A modified SOR-like method for absolute value equations associated with second order cones. J. Comput. Appl. Math. 2022, 400, 113745.
  10. Mansoori, A.; Erfanian, M. A dynamic model to solve the absolute value equations. J. Comput. Appl. Math. 2018, 333, 28–35.
  11. Noor, M.A.; Iqbal, J.; Al-Said, E. Residual iterative method for solving absolute value equations. Abstr. Appl. Anal. 2012, 2012, 406232.
  12. Salkuyeh, D.K. The Picard-HSS iteration method for absolute value equations. Optim. Lett. 2014, 8, 2191–2202.
  13. Abdallah, L.; Haddou, M.; Migot, T. Solving absolute value equation using complementarity and smoothing functions. J. Comput. Appl. Math. 2018, 327, 196–207.
  14. Yu, Z.; Li, L.; Yuan, Y. A modified multivariate spectral gradient algorithm for solving absolute value equations. Appl. Math. Lett. 2021, 21, 107461.
  15. Zhang, Y.; Yu, D.; Yuan, Y. On the alternative SOR-like iteration method for solving absolute value equations. Symmetry 2023, 15, 589.
  16. Feng, J.; Liu, S. An improved generalized Newton method for absolute value equations. SpringerPlus 2016, 5, 1042.
  17. Feng, J.; Liu, S. A new two-step iterative method for solving absolute value equations. J. Inequal. Appl. 2019, 2019, 39.
  18. Khan, A.; Iqbal, J.; Akgul, A.; Ali, R.; Du, Y.; Hussain, A.; Nisar, K.S.; Vijayakumar, V. A Newton-type technique for solving absolute value equations. Alex. Eng. J. 2023, 64, 291–296.
  19. Shi, L.; Iqbal, J.; Arif, M.; Khan, A. A two-step Newton-type method for solving system of absolute value equations. Math. Probl. Eng. 2020, 2020, 2798080.
  20. Noor, M.A.; Iqbal, J.; Khattri, S.; Al-Said, E. A new iterative method for solving absolute value equations. Int. J. Phys. Sci. 2011, 6, 1793–1797.
  21. Shi, L.; Iqbal, J.; Raiz, F.; Arif, M. Gauss quadrature method for absolute value equations. Mathematics 2023, 11, 2069.
  22. Gu, X.-M.; Huang, T.-Z.; Li, H.-B.; Wang, S.-F.; Li, L. Two CSCS-based iteration methods for solving absolute value equations. J. Appl. Anal. Comput. 2017, 7, 1336–1356.
  23. Mangasarian, O.L. A generalized Newton method for absolute value equations. Optim. Lett. 2009, 3, 101–108.
  24. Polyak, B.T. Introduction to Optimization; Optimization Software Inc., Publications Division: New York, NY, USA, 1987.
  25. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970.
  26. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA; London, UK, 1970.
Figure 1. Deflection of the beam for h = 2 (step size).
Table 1. NTS method versus the MSOR-like method, GNM and RIM.

Method      s      1000        2000        3000        4000        5000        6000
RIM         K      24          25          25          25          25          25
            CPU    7.084206    54.430295   150.798374  321.604186  581.212038  912.840059
            RES    7.6844e-07  4.9891e-07  6.3532e-07  7.6121e-07  8.8041e-07  9.9454e-07
MSOR-like   K      30          31          32          32          33          33
            CPU    0.0067390   0.0095621   0.0215634   0.0541456   0.0570134   0.0791257
            RES    5.5241e-07  7.0154e-07  5.8684e-07  9.0198e-07  5.6562e-07  7.4395e-07
GNM         K      5           5           5           5           5           5
            CPU    0.0059651   0.0073330   0.0115038   0.0330345   0.0551818   0.0783684
            RES    3.1777e-10  7.8326e-09  2.6922e-10  3.7473e-09  8.3891e-09  5.8502e-08
NTS method  K      1           1           1           1           1           2
            CPU    0.0018160   0.0034100   0.0187710   0.0326425   0.0315390   0.0692520
            RES    9.6317e-12  2.3697e-11  4.1777e-11  6.2756e-11  8.2081e-11  5.9998e-11
Table 2. Numerical results for Example 2.

Method      s      200         400         600         800         1000
TSI         K      3           3           3           4           4
            RES    7.6320e-12  9.0622e-12  1.9329e-11  4.0817e-11  7.1917e-11
            CPU    0.031619    0.120520    0.325910    0.836490    1.004850
INM         K      3           3           3           4           4
            RES    2.1320e-12  6.6512e-12  3.0321e-11  2.0629e-11  8.0150e-11
            CPU    0.012851    0.098124    0.156810    0.638421    0.982314
GQ method   K      2           2           2           2           2
            RES    2.1415e-12  4.4320e-12  1.0515e-11  1.9235e-11  2.8104e-11
            CPU    0.013145    0.038734    0.162439    0.204578    0.276701
NTS method  K      1           1           1           1           2
            RES    1.0637e-12  4.0165e-12  1.0430e-11  2.0644e-11  2.1660e-11
            CPU    0.012832    0.071124    0.153001    0.201356    0.274165
Table 3. Comparison of the NTS method with GGS, MGS and Method II.

Method      s      1000        2000         3000        4000         5000
GGS         K      11          11           11          11           11
            RES    2.4156e-09  2.7231e-09   3.1872e-09  3.2167e-09   3.4538e-09
            CPU    0.514656    1.045221     1.153442    1.843198     5.652411
MGS         K      7           8            8           8            8
            RES    6.7056e-09  7.30285e-10  7.6382e-10  9.57640e-10  8.52425e-10
            CPU    0.215240    0.912429     0.916788    1.503518     4.514201
Method II   K      6           6            6           6            6
            RES    3.6218e-08  5.1286e-08   6.2720e-08  7.2409e-08   8.0154e-08
            CPU    0.238352    0.541264     0.961534    1.453189     2.109724
NTS method  K      1           1            1           1            1
            RES    4.9774e-15  7.0304e-15   8.6069e-15  9.9363e-15   1.1107e-14
            CPU    0.204974    0.321184     0.462869    0.819503     1.721235
Table 4. The numerical results for Example 5.

       MMSGP                        MM                          NTS method
s      K    CPU       RES           K   CPU       RES           K   CPU        RES
2      24   0.005129  5.6800e-07    2   0.029965  1.2079e-12    1   0.0032321  1.7763e-15
4      37   0.008701  9.7485e-07    4   0.027864  5.5011e-08    1   0.008621   3.5527e-15
8      45   0.009217  5.5254e-07    6   0.045387  6.9779e-08    1   0.004120   2.9296e-14
16     66   0.012458  5.8865e-07    7   0.356930  2.0736e-08    1   0.006156   8.4072e-14
32     55   0.031597  8.2514e-07    8   0.033277  4.9218e-08    1   0.005108   2.1645e-13
64     86   0.085621  7.6463e-07    9   0.185753  9.0520e-09    1   0.008120   1.0088e-12
128    90   0.521056  6.3326e-07    9   0.452394  1.7912e-08    1   0.362162   2.2822e-12

Share and Cite

Guo, P.; Iqbal, J.; Ghufran, S.M.; Arif, M.; Alhefthi, R.K.; Shi, L. A New Efficient Method for Absolute Value Equations. Mathematics 2023, 11, 3356. https://doi.org/10.3390/math11153356
