Article

A New Finite-Difference Method for Nonlinear Absolute Value Equations

Peng Wang, Yujing Zhang and Detong Zhu
1 Key Laboratory of Data Science and Smart Education, Ministry of Education, Hainan Normal University, Haikou 570203, China
2 Key Laboratory of Computational Science and Application of Hainan Province, Haikou 571158, China
3 Mathematics and Statistics College, Hainan Normal University, Haikou 570203, China
4 Mathematics and Science College, Shanghai Normal University, Shanghai 200234, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(5), 862; https://doi.org/10.3390/math13050862
Submission received: 5 February 2025 / Revised: 27 February 2025 / Accepted: 3 March 2025 / Published: 5 March 2025

Abstract

In this paper, we propose a new finite-difference method for nonconvex absolute value equations. A nonsmooth unconstrained optimization problem equivalent to the absolute value equations is considered. A finite-difference technique is used to construct the linear programming subproblems that provide the search direction, so the algorithm avoids computing the gradients and Hessian matrices of the problem. A new finite-difference parameter correction technique is used to ensure the monotonic descent of the objective function. The convergence of the algorithm is analyzed, and numerical experiments are reported, indicating its effectiveness in comparison with state-of-the-art methods for absolute value equations.

1. Introduction

1.1. Problem Description and Motivation

In this paper, we propose a new finite-difference method for solving nonconvex absolute value equations of the following form:
$$A x + B|x| = d, \qquad (1)$$
where $A, B \in \mathbb{R}^{m \times n}$ are $m \times n$ matrices, and $x \in \mathbb{R}^n$ and $d \in \mathbb{R}^m$ are $n$-dimensional and $m$-dimensional column vectors, respectively. Problem (1) arises frequently in bimatrix games, contact problems, and linear and convex quadratic programming [1,2,3]. Moreover, the importance of developing numerical algorithms and theoretical frameworks for absolute value equations is not limited to their theoretical interest; they also have a wide range of potential applications and considerable economic value. Therefore, addressing the challenges posed by absolute value equations is valuable both in theory and in practice.
In order to obtain a solution of Problem (1), we consider solving the following optimization problem:
$$\min_{x \in \mathbb{R}^n} f(x) = \sum_{i=1}^m \big| A_i x + B_i |x| - d_i \big|, \qquad (2)$$
where $A_i$ and $B_i$ are the $i$th rows of the matrices $A$ and $B$, respectively, and $d_i$ is the $i$th component of $d$. It is obvious that a point solves Problem (1) if and only if it is an optimal solution of Problem (2) with objective value zero. Hence, we consider a new algorithm to solve Problem (2). However, Problem (2) is a nonsmooth problem; thus, it is difficult to obtain its optimal solution using traditional smooth optimization algorithms. In this paper, we introduce a nonsmooth optimization algorithm to obtain the solution of Problem (2).
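As a concrete illustration of the objective in (2), the following NumPy sketch evaluates $f(x)$ for given data; the paper's experiments were implemented in MATLAB, so this Python version and the small $2 \times 2$ system used to exercise it are purely hypothetical.

```python
import numpy as np

def ave_objective(x, A, B, d):
    """Residual objective f(x) = sum_i |A_i x + B_i |x| - d_i| of Problem (2)."""
    r = A @ x + B @ np.abs(x) - d          # componentwise residual of Problem (1)
    return np.abs(r).sum()

# Tiny hypothetical 2x2 system whose right-hand side is built from a known point,
# so that the objective vanishes exactly at a solution of Problem (1).
A = np.array([[3.0, 1.0], [1.0, 4.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
x_star = np.array([1.0, -2.0])
d = A @ x_star + B @ np.abs(x_star)
print(ave_objective(x_star, A, B, d))      # prints 0.0
```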
Nonlinear absolute value equations (AVEs) already have a wide spectrum of applications in, for example, economics, engineering, transportation science and mathematical programming, especially mixed integer linear programming problems. Such problems are difficult to handle because they are both nonlinear and nonsmooth. The general form of (1) was first presented by Rohn [4]. Then, Mangasarian [1,5,6] promoted it in a more general context. In many cases, AVE problems are equivalent to linear complementarity problems (LCPs) (see [7,8]). Some studies introduced several theoretical results to guarantee the uniqueness of the solution [9,10,11,12]. Furthermore, if an AVE problem is linked to an LCP, then it can also be associated with nonlinear complementarity problems (NCPs), taking profit from the huge amount of literature concerning the existence, uniqueness and numerical resolution of NCPs (see [13]).
In order to solve AVE problems, various numerical algorithms can be used. Generally speaking, there are three categories: iterative linear algebra based methods, semi-smooth Newton-like methods and smoothing methods. The aforementioned methods can solve AVE problems, but assumptions such as $P_0$-matrices and $P$-matrices are needed [14]. In addition, Caccetta et al. [15] proposed a smoothing Newton method for AVEs. They showed that the method has a global and quadratic convergence rate when the singular values of $A$ exceed 1. Li [16] introduced a preconditioned accelerated over-relaxation iterative method, coupled with a preconditioning technique, for solving absolute value equations. Studies have shown that the convergence rate of the preconditioned accelerated over-relaxation iterative method is better than that of the accelerated over-relaxation iterative method. Yong [17] proposed a particle swarm optimization (PSO) method based on an aggregate function for solving AVE problems. They used an aggregate function to replace the absolute value function, so that the nonsmooth AVEs could be solved by solving smooth nonlinear equations. Moosaei et al. [18] proposed a new algorithm for solving NP-hard absolute value equations in which the singular values of $A$ exceed 1. Some randomly generated problems were solved by the proposed methods and other known methods, and the comparison results showed that these methods were more effective than other known methods. Wu and Li [19] introduced a special shift splitting iteration method for solving nonlinear absolute value equations. They showed that the special shift splitting iteration method is absolutely convergent under proper conditions. Yong [20] also introduced an iterative method for solving AVE problems and showed that the sequence generated by the algorithm converges to a solution of the AVEs after finitely many iterations. Finally, this method was applied to solve a two-point boundary value problem. Ketabchi and Moosaei [21] created an algorithm to compute the minimum norm solution of the absolute value equation (AVE) in a special case. They pointed out that AVEs can be reduced to an unconstrained minimization problem with a once-differentiable convex objective function by using an exterior penalty method; hence, a quasi-Newton method was used for solving this problem. Ref. [22] studied the optimum correction of the absolute value equation by making minimal changes in the coefficient matrix and the right-hand side using the $l_2$ norm; a genetic algorithm could then be used for obtaining the solution of this problem. Moreover, many methods used for solving absolute value equations were also used to solve partial differential equations (PDEs); for example, Liu et al. [23] introduced the bilinear neural network method for solving partial differential equations, and Taylor et al. [24] introduced a residual neural network method for solving partial differential equations. These methods are also useful for solving AVE problems.

1.2. Contribution of This Paper

In order to solve the nonlinear absolute value equation (Equation (1)), a new finite-difference method is introduced in this paper. The algorithm’s contributions are as follows:
  • The finite-difference method is proposed for finding the solution of Problem (1). We prove that the sequence generated by the algorithm converges to the solution of Problem (1).
  • The proposed algorithm avoids computing gradient and Hessian information when forming the subproblem that yields the search direction; only the finite-difference technique is used to construct the linear programming subproblem.
  • Unconstrained nonsmooth optimization problems are established, and their optimal solutions are guaranteed to be solutions of the absolute value equations. A new finite-difference parameter correction technique is used to ensure the monotonic descent of the objective function of the unconstrained nonsmooth optimization problem.
  • In contrast to general smooth optimization algorithms for solving absolute value equations, $P_0$-matrix and $P$-matrix assumptions on Problem (1) are not needed in this paper for the convergence of the algorithm.
This paper is organized into six sections. The finite-difference linear programming subproblem is introduced in Section 2. The finite-difference algorithm is introduced in Section 3. We prove that the sequence generated by the algorithm converges to the solution of Problem (1) in Section 4. Numerical experiments illustrating the practical performance of the algorithm are reported in Section 5. The conclusions of this paper are outlined in Section 6.

2. The Linear Programming Subproblem

In this paper, we consider a new algorithm for obtaining the solution of Problem (1). First, we consider building linear programming subproblems using information about the gradient of the objective function $f(x)$. However, gradient information is not available, since $f$ is nonsmooth. Hence, finite differences are used to approximate the gradient of the objective function $f(x)$. The finite difference relies on an appropriate finite-difference parameter, $t$.
We define the finite-difference interval as suggested in [25]. The $i$th components of the forward and backward finite differences that approximate the gradient of $f(x)$ at $x$ are defined by
$$[g^+(x)]_i = f(x + t e_i) - f(x), \qquad (3)$$
$$[g^-(x)]_i = f(x - t e_i) - f(x), \qquad (4)$$
where $e_i$, $i \in \{1, 2, \ldots, n\}$, is the $n$-dimensional unit vector whose $i$th element is 1.
Using (3) and (4), we define $g^+(x)$ and $g^-(x)$ as follows:
$$g^+(x) = [g_1^+(x), g_2^+(x), \ldots, g_n^+(x)]^T, \qquad (5)$$
$$g^-(x) = [g_1^-(x), g_2^-(x), \ldots, g_n^-(x)]^T. \qquad (6)$$
Hence, the finite-difference gradient is given as follows:
$$g(x) = g^+(x) - g^-(x). \qquad (7)$$
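For illustration, a minimal NumPy sketch of the finite-difference vectors in (3)–(7); the function and variable names are ours, and the loop is the straightforward $2n$-evaluation implementation rather than anything prescribed by the paper.

```python
import numpy as np

def fd_vectors(f, x, t):
    """Forward/backward differences (3)-(4), stacked as in (5)-(6), and g = g+ - g- as in (7)."""
    n = x.size
    fx = f(x)
    g_plus = np.empty(n)
    g_minus = np.empty(n)
    for i in range(n):
        e_i = np.zeros(n)
        e_i[i] = 1.0                        # i-th unit vector
        g_plus[i] = f(x + t * e_i) - fx     # [g^+(x)]_i
        g_minus[i] = f(x - t * e_i) - fx    # [g^-(x)]_i
    return g_plus, g_minus, g_plus - g_minus
```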
We hope that the algorithm can generate a search direction relying on (7). In order to achieve this goal, i.e., in order to obtain the search direction for Problem (1), we present the following subproblem:
$$\begin{aligned} \min_{\lambda,\,\mu}\ \ & g_k^T(\lambda - \mu) = [g_k^+]^T \lambda + [g_k^-]^T \mu \\ \text{s.t.}\ \ & \sum_{i=1}^n (\lambda_i + \mu_i) = 1, \\ & |\lambda_i| \le \Delta_k, \quad |\mu_i| \le \Delta_k, \quad i = 1, 2, \ldots, n, \end{aligned} \qquad (8)$$
where $\Delta_k > 0$, $\lambda \in \mathbb{R}^n$, $\mu \in \mathbb{R}^n$, $g_k = g(x_k)$, $g_k^+ = g^+(x_k)$, $g_k^- = g^-(x_k)$, and $g(x)$, $g^+(x)$ and $g^-(x)$ are generated by (7), (5) and (6), respectively.
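Subproblem (8) is a small linear program in the stacked variable $(\lambda, \mu)$. A sketch using SciPy's linprog is given below; the solver choice and function names are our own, and returning $d = \lambda - \mu$ anticipates Step 7 of Algorithm 1.

```python
import numpy as np
from scipy.optimize import linprog

def solve_subproblem(g_plus, g_minus, delta):
    """Solve the LP subproblem (8) and return the search direction d = lambda - mu."""
    n = g_plus.size
    c = np.concatenate([g_plus, g_minus])   # objective  [g^+]^T lambda + [g^-]^T mu
    A_eq = np.ones((1, 2 * n))              # sum_i (lambda_i + mu_i) = 1
    b_eq = np.array([1.0])
    bounds = [(-delta, delta)] * (2 * n)    # |lambda_i| <= Delta_k, |mu_i| <= Delta_k
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    lam, mu = res.x[:n], res.x[n:]
    return lam - mu
```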

3. Algorithm

In this section, we introduce a new finite-difference method for finding the optimal solution of Problem (2), and thereby obtain the solution of Problem (1). The design of the algorithm is mainly inspired by the idea of a multi-directional search. The basic idea of a multi-directional search is to explore each given direction until a direction that causes the objective function to decrease is found. Our algorithm instead utilizes the theory of bases of vector spaces from linear algebra and constructs a linear programming model to avoid a one-by-one search.
Remark 1.
In Step 6 of the algorithm, we control the finite-difference parameter $t_k$ such that it also serves as the step size along the search direction $d_k$.
Remark 2.
Algorithm 1 may obtain a local optimal solution of Problem (2), since Problem (2) is not a convex optimization problem, and this local solution need not be a solution of Problem (1). From Figure 1, we can see that if we expand the finite-difference parameter $t_k$, solving Subproblem (8) will generate a new descent direction until the algorithm produces a solution to Problem (1). Hence, the parameter correction plan in Steps 3, 4 and 5 of Algorithm 1 is necessary for finding the solution to Problem (1).
Algorithm 1: FDM
Input: Given an initial iterate $x_0 \in \mathbb{R}^n$ and the constants $t_0 \in (0, 1)$, $\Delta_0 > 0$, $0 < \sigma < 1 < \chi$ and $\epsilon > 0$. Set $k = 0$.
Main Steps:
1. Calculate the approximate gradient $g_k = g(x_k)$ using (7).
2. If $|f(x_k)| \le \epsilon$, then stop; $x_k$ is a solution of Problem (1).
3. If $\min\{[g_k^+]_1, \ldots, [g_k^+]_n, [g_k^-]_1, \ldots, [g_k^-]_n\} > 0$, then let $t_k^1 = \sigma t_k$; otherwise, go to Step 7.
  If $t_k^1 > \epsilon$, compute $g_k^1 = \min\{[g_k^+]_1^1, \ldots, [g_k^+]_n^1, [g_k^-]_1^1, \ldots, [g_k^-]_n^1\}$, where $[g_k^+]_i^1 = f(x_k + t_k^1 e_i) - f(x_k)$ and $[g_k^-]_i^1 = f(x_k - t_k^1 e_i) - f(x_k)$.
  Else, set $g_k^1 = (1, \ldots, 1, 1, \ldots, 1)$ and go to Step 4.
4. Let $t_k^2 = \chi t_k$ and compute $g_k^2 = \min\{[g_k^+]_1^2, \ldots, [g_k^+]_n^2, [g_k^-]_1^2, \ldots, [g_k^-]_n^2\}$, where $[g_k^+]_i^2 = f(x_k + t_k^2 e_i) - f(x_k)$ and $[g_k^-]_i^2 = f(x_k - t_k^2 e_i) - f(x_k)$.
5. If $\min\{g_k^1, g_k^2\} \ge 0$, then go to Step 3.
6. If $g_k^1 \le g_k^2$, then set $t_k = t_k^1$; else, set $t_k = t_k^2$. Go to Step 7.
7. Solve Subproblem (8) and obtain the search direction $d_k = \lambda_k - \mu_k$, where $\lambda_k = \lambda(x_k)$ and $\mu_k = \mu(x_k)$.
8. If $f(x_k + t_k d_k) \le \sum_{i=1}^n [\lambda_k]_i f(x_k + t e_i) + \sum_{i=1}^n [\mu_k]_i f(x_k - t e_i)$, then let $x_{k+1} = x_k + t_k d_k$.
  Else, let $d_k = \pi_k e_{j_0}$ and $x_{k+1} = x_k + d_k$ such that $f(x_k + \pi_k e_{j_0}) - f(x_k) = f(x_k + d_k) - f(x_k) = \min\{ f(x_k + t_k e_1) - f(x_k), \ldots, f(x_k + t_k e_n) - f(x_k), f(x_k - t_k e_1) - f(x_k), \ldots, f(x_k - t_k e_n) - f(x_k) \}$.
 Set $k = k + 1$, and go to Step 1.
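To make the control flow concrete, the sketch below strings together Steps 1, 2, 7 and 8 of Algorithm 1 in Python, reusing the illustrative helpers fd_vectors and solve_subproblem from Section 2. It deliberately simplifies the method: the parameter-correction Steps 3–6 are replaced by a crude rule that enlarges $t$ when no decrease is found, and the acceptance test of Step 8 is replaced by a plain decrease test, so this is a rough approximation of Algorithm 1, not a faithful implementation.

```python
import numpy as np

def fdm_sketch(f, x0, t0=0.5, delta0=5.0, chi=2.0, eps=1e-8, max_iter=1000):
    """Condensed approximation of Algorithm 1 (FDM); see the caveats in the text above."""
    x, t, delta = np.asarray(x0, dtype=float), t0, delta0
    n = x.size
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= eps:                              # Step 2: stop at a solution of (1)
            return x
        g_plus, g_minus, _ = fd_vectors(f, x, t)        # Step 1: finite differences (3)-(7)
        d = solve_subproblem(g_plus, g_minus, delta)    # Step 7: LP direction from (8)
        if f(x + t * d) < fx:                           # simplified acceptance test (cf. Step 8)
            x = x + t * d
            continue
        trials = np.concatenate([g_plus, g_minus])      # f(x +/- t e_i) - f(x)
        j = int(np.argmin(trials))
        if trials[j] < 0:                               # best coordinate step (else-branch of Step 8)
            step = np.zeros(n)
            step[j % n] = t if j < n else -t
            x = x + step
        else:
            t *= chi                                    # crude stand-in for Steps 3-6: enlarge t
    return x
```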
Figure 1 shows how the parameter $t_k$ is selected in Algorithm 1 so that Algorithm 1 can obtain the solution to Problem (1). The radius of the dashed circle in the four subgraphs in Figure 1 represents the size of the parameter $t_k$. These four subgraphs show that, as the radius increases (i.e., as $t_k$ increases), the dashed circle can reach points with function values lower than that at the center point (the current iterate $x_k$). This indicates that, by increasing the value of the parameter $t_k$, Algorithm 1 can further reduce the objective function and ultimately obtain the solution to Problem (1).

4. Convergence Analysis

In this section, we provide the global convergence analysis of Algorithm 1. Similar to the convergence proofs in other studies, the convergence proof in this paper establishes two points. Firstly, it proves that, under the general assumptions used in other studies, the algorithm reduces the objective function of Problem (2) at each iteration before termination. Secondly, it proves that a solution to Problem (1) is obtained when Algorithm 1 terminates. We require the model to satisfy the following assumption:
Assumption 1.
We define the level set as
$$L(x_0) = \{\, x \in \mathbb{R}^n : f(x) \le f(x_0) \,\}.$$
Suppose that the level set is bounded.
Next, we will prove that any limit point of the sequence $\{x_k\}$ generated by Algorithm 1 is a solution to Problem (1). First, we show that the sequence generated by Algorithm 1 is such that the objective function of Problem (2) is monotonically decreasing.
Lemma 1.
Under Assumption 1, the sequence $\{x_k\}$ generated by Algorithm 1 is such that the objective function of Problem (2) is monotonically decreasing, i.e., for any $k$ and $k+1$, we have that
$$f(x_{k+1}) < f(x_k).$$
Proof. 
According to Step 3 of Algorithm 1, if there exists $j_0 \in \{1, 2, \ldots, n\}$ such that $[g_k^+]_{j_0} < 0$ or $[g_k^-]_{j_0} < 0$, then we let $\lambda_{j_0} = 1$ or $\mu_{j_0} = 1$ and $\lambda_i = 0$, $i \ne j_0$, or $\mu_i = 0$, $i \ne j_0$. It is obvious that either $[\lambda_k^T, \mu_k^T] = [0, \ldots, [\lambda_k]_{j_0}, 0, \ldots, 0]$ or $[\lambda_k^T, \mu_k^T] = [0, \ldots, [\mu_k]_{j_0}, 0, \ldots, 0]$ is a feasible solution of Subproblem (8), and we have that
$$g_k^T(\lambda_k - \mu_k) = [g_k^+]^T \lambda_k + [g_k^-]^T \mu_k = [g_k^+]_{j_0} < 0, \qquad (9)$$
or
$$g_k^T(\lambda_k - \mu_k) = [g_k^+]^T \lambda_k + [g_k^-]^T \mu_k = [g_k^-]_{j_0} < 0. \qquad (10)$$
If $\min\{[g_k^+]_1, \ldots, [g_k^+]_n, [g_k^-]_1, \ldots, [g_k^-]_n\} > 0$, then, according to Steps 3–6 of Algorithm 1, there exists $t_k > 0$ such that $\min\{[g_k^+]_1, \ldots, [g_k^+]_n, [g_k^-]_1, \ldots, [g_k^-]_n\} < 0$, and we can argue as in (9) and (10). Therefore, we have
$$g_k^T(\lambda_k - \mu_k) < 0.$$
According to the definition of $g_k$, we obtain
$$g_k^T(\lambda_k - \mu_k) = \sum_{i=1}^n [\lambda_k]_i [g_k^+]_i + \sum_{i=1}^n [\mu_k]_i [g_k^-]_i = \sum_{i=1}^n [\lambda_k]_i \big[ f(x_k + t e_i) - f(x_k) \big] + \sum_{i=1}^n [\mu_k]_i \big[ f(x_k - t e_i) - f(x_k) \big].$$
If $f(x_k + t_k d_k) \le \sum_{i=1}^n [\lambda_k]_i f(x_k + t e_i) + \sum_{i=1}^n [\mu_k]_i f(x_k - t e_i)$, then, by $\sum_{i=1}^n (\lambda_i + \mu_i) = 1$, we have
$$\begin{aligned} f(x_k + t_k d_k) - f(x_k) &= f(x_k + t_k d_k) - \Big( \sum_{i=1}^n [\lambda_k]_i + \sum_{i=1}^n [\mu_k]_i \Big) f(x_k) \\ &\le \sum_{i=1}^n [\lambda_k]_i f(x_k + t e_i) + \sum_{i=1}^n [\mu_k]_i f(x_k - t e_i) - \Big( \sum_{i=1}^n [\lambda_k]_i + \sum_{i=1}^n [\mu_k]_i \Big) f(x_k) \\ &= \sum_{i=1}^n [\lambda_k]_i \big( f(x_k + t e_i) - f(x_k) \big) + \sum_{i=1}^n [\mu_k]_i \big( f(x_k - t e_i) - f(x_k) \big) \\ &= g_k^T(\lambda_k - \mu_k) < 0. \end{aligned} \qquad (12)$$
From (12), we have $f(x_k + t_k d_k) - f(x_k) < 0$, i.e., $f(x_k + t_k d_k) < f(x_k)$, which means that the objective function is monotonically decreasing.
If $f(x_k + t_k d_k) > \sum_{i=1}^n [\lambda_k]_i f(x_k + t e_i) + \sum_{i=1}^n [\mu_k]_i f(x_k - t e_i)$, then, according to Step 8 of Algorithm 1, we have
$$\begin{aligned} f(x_{k+1}) - f(x_k) &= f(x_k + d_k) - f(x_k) = f(x_k + \pi_k e_{j_0}) - f(x_k) \\ &= \min\{ f(x_k + t_k e_1) - f(x_k), \ldots, f(x_k + t_k e_n) - f(x_k), \\ &\qquad\ f(x_k - t_k e_1) - f(x_k), \ldots, f(x_k - t_k e_n) - f(x_k) \} < 0. \end{aligned} \qquad (13)$$
Combining (12) and (13), we have that
$$f(x_{k+1}) - f(x_k) < 0,$$
which means that
$$f(x_{k+1}) < f(x_k),$$
and the objective function of Problem (2) is monotonically decreasing. □
The next theorem shows the convergence of Algorithm 1, i.e., the sequence generated by Algorithm 1 converges to the solution of Problem (1).
Theorem 1.
Under Assumption 1, the sequence generated by Algorithm 1 converges to the solution of Problem (1), i.e., we have
$$\lim_{k \to \infty} f(x_k) = 0.$$
Proof. 
For any index $N \in \{1, 2, \ldots\}$, we have that
$$f(x_N) - f(x_0) = \sum_{k=0}^{N-1} \big( f(x_{k+1}) - f(x_k) \big).$$
Hence,
$$f(x_N) = \sum_{k=0}^{N-1} \big( f(x_{k+1}) - f(x_k) \big) + f(x_0).$$
By Lemma 1, we obtain
$$f(x_{k+1}) - f(x_k) < 0$$
for every $k$ before termination. Suppose, to the contrary, that $\lim_{k \to \infty} f(x_k) \ne 0$; then the algorithm never terminates, and the strict decreases accumulate in the identity above, so that $f(x_N) \to -\infty$ as $N \to +\infty$. This contradicts the boundedness of $f(x)$ on the level set in Assumption 1 (note also that $f(x) \ge 0$ by its definition). Hence, we have
$$\lim_{k \to \infty} f(x_k) = 0,$$
which means that the sequence generated by Algorithm 1 converges to the solution of Problem (1). □

5. Numerical Results

In this section, we use actual examples to test the effectiveness of Algorithm 1. In order to assess the computational efficiency of Algorithm 1, we compare its practical performance with that of the following solvers:
  • SM [26]: The nonlinear absolute value equations can be restated as nonlinear complementarity problems and solved efficiently using smoothing regularization techniques. SM is a smoothing method for solving nonlinear complementarity problems. It uses a softmax function that approximates the nonsmooth parts of the problem; the main idea is to approximate the complementarity condition via the limit
    $$\max_{i \in \{1, \ldots, d\}} x_i = \lim_{r \to 0^+} r \log \sum_{i=1}^d e^{x_i / r},$$
    which is widely used in many optimization problems (see the sketch after this list).
  • IP [27]: The interior-point method introduced by Haddou, Migot and Omer in 2019 with a full Newton step for monotone linear complementarity problems. The specificity of the method is to compute the Newton step using a modified system similar to that introduced by Darvay in Stud. Univ. Babeş-Bolyai Ser. Inform. 47:15–26, 2017. The method considers a general family of smooth concave functions in the Newton system instead of the square root, and it possesses the best-known upper-bound complexity.
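For reference, a short NumPy illustration of the smooth max (log-sum-exp) approximation used by SM; the shift by the maximum is only for numerical stability and is our own addition.

```python
import numpy as np

def smooth_max(x, r):
    """Log-sum-exp approximation r*log(sum_i exp(x_i/r)) of max_i x_i."""
    x = np.asarray(x, dtype=float)
    m = x.max()                              # shift for numerical stability
    return m + r * np.log(np.exp((x - m) / r).sum())

x = np.array([1.0, 3.0, -2.0])
for r in (1.0, 0.1, 0.01):
    print(r, smooth_max(x, r))               # tends to max(x) = 3 as r -> 0+
```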
In order to test the efficiency of our algorithm, we chose five nonlinear absolute value equation examples from [28,29,30]. The specific forms of the examples are as follows:
Example 1.
We solve the system $F(x) - |x| = b$, where $F(x) = Ax$, $A = \operatorname{tridiag}(1, 4, 1) \in \mathbb{R}^{m \times m}$, $x \in \mathbb{R}^m$, $b = Ax - |x|$ and $x = (1, 2, 1, 2, \ldots, 1, 2)^T \in \mathbb{R}^m$.
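A small NumPy helper that assembles the data of Example 1 as printed above (signs and entries taken verbatim from the text; the paper's own experiments use MATLAB, so this is only an illustrative sketch):

```python
import numpy as np

def example1(m):
    """Example 1 data: A = tridiag(1, 4, 1) in R^{m x m}, x alternating 1, 2, b = Ax - |x|."""
    A = 4.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
    x = np.tile([1.0, 2.0], m // 2 + 1)[:m]
    b = A @ x - np.abs(x)
    return A, b, x

A, b, x = example1(10)                        # the smallest dimension listed in Table 1
```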
Example 2.
We solve the system $F(x) - |x| = b$, where $F(x): \mathbb{R}^3 \to \mathbb{R}^3$ is defined by
$$F(x) = \begin{pmatrix} 2x_1^2 - 2x_2 + x_2^3 \\ x_3 + 3x_2 + 2x_3 \\ 2x_3^3 - 3 \end{pmatrix},$$
and we consider, respectively, $b_1 = (1, 5, 10)^T$, $b_2 = (9, 100, 10)^T$ and $b_3 = (200, 0, 900)^T$.
Example 3.
We solve the system $F(x) - |x| = b$, where $F(x): \mathbb{R}^4 \to \mathbb{R}^4$ is defined by
$$F(x) = \begin{pmatrix} 3x_1^2 + x_1 + 2x_1 x_2 + 2x_2^2 + x_3 + 3x_4 \\ 2x_1^2 + x_1 + x_2^2 + x_2 + 10x_3 + 2x_4 \\ 3x_1^2 + x_1 x_2 + 2x_2^2 + 3x_3 + 9x_4 \\ x_1^2 + 3x_2^2 + 2x_3 + 4x_4 \end{pmatrix},$$
and we consider, respectively, $b_1 = (10, 10, 12, 0)^T$, $b_2 = (20, 100, 12, 1)^T$ and $b_3 = (200, 10, 5, 5)^T$.
Example 4.
Assume that $m$ is a predetermined positive integer and that $n = m^2$; moreover, suppose that $B = I$ and $A = M + I$, where
$$M = \begin{pmatrix} S & 0.5I & 0 & \cdots & 0 & 0 \\ 1.5I & S & 0.5I & \cdots & 0 & 0 \\ 0 & 1.5I & S & \ddots & 0 & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & & S & 0.5I \\ 0 & 0 & \cdots & 0 & 1.5I & S \end{pmatrix} \in \mathbb{R}^{n \times n},$$
with
$$S = \begin{pmatrix} 4 & 0.5 & 0 & \cdots & 0 & 0 \\ 1.5 & 4 & 0.5 & \cdots & 0 & 0 \\ 0 & 1.5 & 4 & \ddots & 0 & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & & 4 & 0.5 \\ 0 & 0 & \cdots & 0 & 1.5 & 4 \end{pmatrix} \in \mathbb{R}^{m \times m},$$
and $b = Ax - |x|$, in which $x = (1, 2, 1, 2, \ldots, 1, 2)^T$.
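For concreteness, a hypothetical NumPy construction of the block-tridiagonal data of Example 4 via Kronecker products, with the entries and signs taken as printed above; it is a sketch only and not the paper's MATLAB code.

```python
import numpy as np

def example4(m):
    """Example 4 data: n = m^2, M block-tridiagonal with S, 0.5I, 1.5I, A = M + I, b = Ax - |x|."""
    I = np.eye(m)
    S = 4.0 * I + 0.5 * np.eye(m, k=1) + 1.5 * np.eye(m, k=-1)   # inner tridiagonal block
    U = np.eye(m, k=1)                       # marks the block superdiagonal
    L = np.eye(m, k=-1)                      # marks the block subdiagonal
    M = np.kron(I, S) + np.kron(U, 0.5 * I) + np.kron(L, 1.5 * I)
    n = m * m
    A = M + np.eye(n)
    x = np.tile([1.0, 2.0], n // 2 + 1)[:n]
    b = A @ x - np.abs(x)
    return A, b, x
```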
Example 5.
Let $m$ be a prescribed positive integer and $n = m^2$. Consider Problem (1) with $A = \operatorname{tridiag}(I, S, I) \in \mathbb{R}^{n \times n}$, where $S = \operatorname{tridiag}(1, 8, 1)$ and $B = I$.
In order to test the computational efficiency of Algorithm 1, different dimensions were selected for Examples 1, 4, and 5 for the calculations. Details are outlined in Table 1.
Before solving Examples 1–5, we introduce the parameter values used in the actual computations of the algorithm proposed in this paper: $\Delta_0 = 5$, $\sigma = 0.5$ and $\chi = 2$.
In order to solve the test problems in Table 1 with the algorithm in this paper, we wrote a computer program in MATLAB (R2014a). The computer we used was a ThinkPad T480 (CPU: i7-8850U; clock frequency: 2.0 GHz; memory: 16 GB).
In order to draw a comparison diagram of the results of the three algorithms, we used the performance profiles proposed by Dolan and Moré [31] to measure the computational efficiency of the different algorithms. The specific formulas are outlined as follows:
$$r_{p,s} = \frac{\tau_{p,s}}{\min\{\tau_{p,u} : u \in S\}}, \qquad (15)$$
where $S$ denotes the set of algorithms. Let $P$ denote the problem set, $n_s = |S|$ and $n_p = |P|$. For $t \ge 1$, let
$$\rho_s(t) = \frac{1}{n_p}\,\operatorname{size}\{\, p \in P : r_{p,s} \le t \,\}, \qquad (16)$$
where $\rho_s(t)$ represents the efficiency of each solver $s$; here $\tau_{p,s}$ is the performance measure (e.g., the number of iterations or the CPU time) of solver $s$ on problem $p$.
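A compact NumPy sketch of the performance profile defined by (15) and (16); the array layout (problems by solvers) is our own convention.

```python
import numpy as np

def performance_profile(T, ts):
    """Dolan-More profile rho_s(t): T[p, s] holds tau_{p,s}; ts are thresholds t >= 1."""
    best = T.min(axis=1, keepdims=True)      # min_u tau_{p,u} for every problem p
    R = T / best                             # performance ratios r_{p,s} from (15)
    return np.array([[np.mean(R[:, s] <= t)  # rho_s(t) from (16)
                      for s in range(T.shape[1])] for t in ts])

# e.g., two solvers on three problems, evaluated at thresholds 1..4
rho = performance_profile(np.array([[10., 12.], [5., 20.], [7., 7.]]), np.linspace(1, 4, 4))
```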
We chose Examples 1–5; the calculation settings for Examples 1, 4 and 5 are shown in Table 1. In the actual calculations, we randomly selected 50 different initial points and used the three algorithms to solve Examples 1–5. The average number of iterations and the average CPU time taken by the three algorithms are shown in Figure 2 and Figure 3, respectively, for different termination accuracies $\epsilon$, i.e., $\epsilon = 10^{-3}$, $\epsilon = 10^{-5}$, $\epsilon = 10^{-8}$ and $\epsilon = 10^{-10}$.
Figure 2 shows the results of using Algorithm 1, SM and IP to solve Examples 1–5. From the subgraph on the left side of the first row, it can be seen that when we chose the termination accuracy $\epsilon = 10^{-3}$, our algorithm had a high computational efficiency in all cases of Examples 1–5. According to (15) and (16), the speed at which the red curve, which represents our algorithm, tends to 1 is relatively fast. At the initial position, the red curve reaches 0.7 ($\rho_s(t) = 0.7$), whereas the green curve only reaches 0.45 ($\rho_s(t) = 0.45$) and the blue curve is close to zero. This means that our algorithm solves more of the problems with fewer iterations than the other methods. Meanwhile, the other subgraphs show that, as the termination accuracy is tightened, the initial values of the red curve remain much higher than those of the other two algorithms, and the speed at which it approaches 1 is also the fastest among the three algorithms. From the comparison of the three methods, it can be seen that our algorithm has a much higher computational efficiency than the other two algorithms, which means that it is more efficient in solving the test problems.
From Figure 3, we can see that, as for the other two algorithms, the CPU time of Algorithm 1 increases as the termination accuracy is tightened, while its relative computational efficiency is maintained. From the subgraph on the left of the first row, it can be seen that when the termination accuracy is set to $\epsilon = 10^{-3}$, the value of the red curve at its initial position is much higher than those of the green and blue curves, and the red curve tends towards 1 at a faster rate. This means that our algorithm makes more effective use of function evaluations than the SM method represented by the green curve. In particular, the CPU time of Algorithm 1 is much smaller than that of the interior-point algorithm, which corresponds to the blue curve. Similarly, the other subgraphs show that, although the termination accuracy is tightened, the red curve corresponding to Algorithm 1 still has significant advantages in terms of its initial value and the rate at which it tends towards 1. This indicates that Algorithm 1 has a higher computational efficiency than the other two algorithms when solving Examples 1–5.

6. Concluding Remarks

This paper proposes a new finite-difference method for obtaining a solution to absolute value equations. The finite-difference technique is used to avoid computing the gradients and Hessian matrices of the problem, and the search direction is obtained by solving a linear programming subproblem. In the numerical section, we chose five absolute value equations and solved them with different dimensions and termination accuracies; the finite-difference parameter is important for our algorithm. We compared our algorithm with SM and IP, and the number of iterations and the CPU time were reported. It can be seen from the comparison results that our algorithm can successfully solve absolute value equations. In comparison with the other algorithms, it can also be seen that our algorithm has advantages in both the number of iterations and the CPU time.
With the advent of the big data era, large-scale absolute value equation problems will inevitably arise. Therefore, inspired by the bilinear neural network method and the residual network method, we will study more practical and efficient neural network and residual network methods for solving large-scale absolute value equation systems to meet practical needs.

Author Contributions

Formal analysis, P.W. and D.Z.; Writing—review & editing, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the National Natural Science Foundation (11371253), the Hainan Natural Science Foundation (120MS029) and the Scientific Research Projects of Higher Learning Institutions in Hainan (Hnky2022-23) for their support.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mangasarian, O.L. Absolute value programming. Comput. Optim. Appl. 2007, 36, 43–53. [Google Scholar] [CrossRef]
  2. Mangasarian, O.L. Equilibrium points of bimatrix games. J. Soc. Ind. Appl. Math. 1964, 12, 778–780. [Google Scholar] [CrossRef]
  3. Okeke, G.A.; Abbas, M. A solution of delay differential equations via Picard–Krasnoselskii hybrid iterative process. Arab. J. Math. 2017, 6, 21–29. [Google Scholar] [CrossRef]
  4. Rohn, J. A theorem of the alternatives for the equation Ax + B|x| = b. Linear Multilinear Algebra 2004, 52, 421–426. [Google Scholar] [CrossRef]
  5. Mangasarian, O.L. Absolute value equations via concave minimization. Optim. Lett. 2007, 1, 1–8. [Google Scholar] [CrossRef]
  6. Mangasarian, O.L. A generalized Newton method for absolute value equations. Optim. Lett. 2009, 3, 101–108. [Google Scholar] [CrossRef]
  7. Mangasarian, O.L. Linear complementarity as absolute value equation solution. Optim. Lett. 2014, 8, 1529–1534. [Google Scholar] [CrossRef]
  8. Prokopyev, O. On equivalent reformulations for absolute value equations. Comput. Optim. Appl. 2009, 44, 363–372. [Google Scholar] [CrossRef]
  9. Lotfi, T.; Veiseh, H. A note on unique solvability of the absolute value equation. J. Lin. Top. Alg. 2013, 2, 77–81. [Google Scholar]
  10. Mangasarian, O.L.; Meyer, R.R. Absolute value equations. Linear Algebra Appl. 2006, 419, 359–367. [Google Scholar] [CrossRef]
  11. Rohn, J. On unique solvability of the absolute value equation. Optim. Lett. 2009, 3, 603–606. [Google Scholar] [CrossRef]
  12. Rohn, J.; Hooshyarbakhsh, V.; Farhadsefat, R. An iterative method for solving absolute value equations and sufficient conditions for unique solvability. Optim. Lett. 2014, 8, 35–44. [Google Scholar] [CrossRef]
  13. Hu, S.L.; Huang, Z.H. A note on absolute value equations. Optim. Lett. 2010, 4, 417–424. [Google Scholar] [CrossRef]
  14. Abdallah, L.; Haddou, M.; Migot, T. Solving absolute value equation using complementarity and smoothing functions. J. Comput. Appl. Math. 2018, 327, 196–207. [Google Scholar] [CrossRef]
  15. Caccetta, L.; Qu, B.; Zhou, G.L. A globally and quadratically convergent method for absolute value equations. Comput. Optim. Appl. 2011, 48, 45–58. [Google Scholar] [CrossRef]
  16. Li, C.X. A preconditioned AOR iterative method for the absolute value equations. Int. J. Comput. Methods 2017, 14, 1750016. [Google Scholar] [CrossRef]
  17. Yong, L. Particle swarm optimization for absolute value equations. J. Comput. Informat. Syst. 2010, 6, 2359–2366. [Google Scholar]
  18. Moosaei, H.; Ketabchi, S.; Noor, M.A.; Iqbald, J.; Hooshyarbakhshe, V. Some techniques for solving absolute value equations. Appl. Math. Comput. 2015, 268, 696–705. [Google Scholar] [CrossRef]
  19. Wu, S.; Li, C. A special shift splitting iteration method for absolute value equation. Aims Math. 2020, 5, 5171–5183. [Google Scholar] [CrossRef]
  20. Yong, L. Iteration method for absolute value equation and applications in two-point boundary value problem of linear differential equation. J. Interdiscip. Math. 2014, 18, 355–374. [Google Scholar] [CrossRef]
  21. Ketabchi, S.; Moosaei, H. Minimum norm solution to the absolute value equation in the convex case. J. Optim. Theory Appl. 2012, 154, 1080–1087. [Google Scholar] [CrossRef]
  22. Ketabchi, S.; Moosaei, H.; Fallahi, S. Optimal error correction of the absolute value equation using a genetic algorithm. Math. Comput. Model. 2013, 57, 2339–2342. [Google Scholar] [CrossRef]
  23. Liu, J.G.; Zhu, W.H.; Wu, Y.K.; Jin, G.H. Application of multivariate bilinear neural network method to fractional partial differential equations. Results Phys. 2023, 47, 106341. [Google Scholar] [CrossRef]
  24. Taylor, J.M.; Pardo, D.; Muga, I. A deep Fourier residual method for solving PDEs using neural networks. Comput. Methods Appl. Mech. Eng. 2023, 405, 115850. [Google Scholar] [CrossRef]
  25. Rohn, J. An algorithm for solving the absolute value equations. Electron. J. Linear Algebra 2009, 18, 589–599. [Google Scholar] [CrossRef]
  26. Nesterov, Y. Smooth minimization of nonsmooth functions. Math. Program. Ser. A 2005, 103, 127–152. [Google Scholar] [CrossRef]
  27. Haddou, M.; Migot, T.; Omer, J. A generalized direction in interior point method for monotone linear complementarity problems. Optim. Lett. 2019, 13, 35–53. [Google Scholar] [CrossRef]
  28. Alcantara, J.H.; Chen, J.-S. A new class of neural networks for NCPs using smooth perturbations of the natural residual function. J. Comput. Appl. Math. 2022, 407, 114092. [Google Scholar] [CrossRef]
  29. Fakharzadeh, A.J.; Shams, N.N. An Efficient Algorithm for Solving Absolute Value Equations. J. Math. Ext. 2021, 15, 1–23. [Google Scholar]
  30. Kojima, M.; Shindo, S. Extension of Newton and quasi-Newton methods to systems of PC1 equations. J. Oper. Res. Soc. Jpn. 1986, 29, 352–374. [Google Scholar]
  31. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of selecting the parameter $t_k$ in Algorithm 1.
Figure 2. Comparison of the number of iterations of Algorithm 1, SM and IP. Left of the first row: $\epsilon = 10^{-3}$; right of the first row: $\epsilon = 10^{-5}$; left of the second row: $\epsilon = 10^{-8}$; right of the second row: $\epsilon = 10^{-10}$.
Figure 3. Comparison of the CPU time of Algorithm 1, SM and IP. Left of the first row: $\epsilon = 10^{-3}$; right of the first row: $\epsilon = 10^{-5}$; left of the second row: $\epsilon = 10^{-8}$; right of the second row: $\epsilon = 10^{-10}$.
Table 1. Details of Examples 1, 4 and 5.

Example      Dimension of Vector b    Error 1      Error 2      Error 3      Error 4
Example 1    d = 10                   $10^{-3}$    $10^{-5}$    $10^{-8}$    $10^{-10}$
             d = 50                   $10^{-3}$    $10^{-5}$    $10^{-8}$    $10^{-10}$
             d = 100                  $10^{-3}$    $10^{-5}$    $10^{-8}$    $10^{-10}$
             d = 200                  $10^{-3}$    $10^{-5}$    $10^{-8}$    $10^{-10}$
Example 4    d = 100                  $10^{-3}$    $10^{-5}$    $10^{-8}$    $10^{-10}$
             d = 500                  $10^{-3}$    $10^{-5}$    $10^{-8}$    $10^{-10}$
             d = 1000                 $10^{-3}$    $10^{-5}$    $10^{-8}$    $10^{-10}$
             d = 2000                 $10^{-3}$    $10^{-5}$    $10^{-8}$    $10^{-10}$
Example 5    d = 100                  $10^{-3}$    $10^{-5}$    $10^{-8}$    $10^{-10}$
             d = 500                  $10^{-3}$    $10^{-5}$    $10^{-8}$    $10^{-10}$
             d = 1000                 $10^{-3}$    $10^{-5}$    $10^{-8}$    $10^{-10}$
             d = 2000                 $10^{-3}$    $10^{-5}$    $10^{-8}$    $10^{-10}$
