Article

Memory-Accelerating Methods for One-Step Iterative Schemes with Lie Symmetry Method Solving Nonlinear Boundary-Value Problem

1 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 Department of Mechanical Engineering, National United University, Miaoli 360302, Taiwan
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(1), 120; https://doi.org/10.3390/sym16010120
Submission received: 19 December 2023 / Revised: 11 January 2024 / Accepted: 15 January 2024 / Published: 19 January 2024

Abstract

In this paper, some one-step iterative schemes with memory-accelerating methods are proposed to update three critical values $f'(r)$, $f''(r)$, and $f'''(r)$ of a nonlinear equation $f(x) = 0$, with $r$ being its simple root. We can achieve high values of the efficiency index (E.I.) over the bound $2^{2/3} = 1.587$ with three function evaluations and over the bound $2^{1/2} = 1.414$ with two function evaluations. The third-degree Newton interpolatory polynomial is derived to update these critical values per iteration. We introduce relaxation factors into the Džunić method and its variant, which are updated to render fourth-order convergence by the memory-accelerating technique. We developed six types of optimal one-step iterative schemes with the memory-accelerating method, rendering fourth-order convergence or even higher, whose original versions are second-order schemes without memory and without specific optimal values of the parameters. We evaluated the performance of these one-step iterative schemes by the computed order of convergence (COC) and the E.I. in numerical tests. A Lie symmetry method to solve a second-order nonlinear boundary-value problem with high efficiency and high accuracy was developed.

1. Introduction

An elementary, yet very important problem is solving a nonlinear equation $f(x) = 0$. Given an initial guess $x_0$, suppose that it is quite close to the real root $r$ with $f(r) = 0$; we can approximate the nonlinear equation by
$$f(x_0) + f'(x_0)(x - x_0) = 0.$$
When $f'(x_0) \neq 0$, solving this equation for $x$ yields
$$x = x_0 - \frac{f(x_0)}{f'(x_0)}.$$
Along this line of thinking,
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, \ldots$$
is coined as the Newton method (NM), which exhibits quadratic convergence. Since then, many studies have appeared, and this work continues to the present day; different fourth-order methods have been used to modify the Newton method with the aim of solving nonlinear equations more quickly and stably [1,2,3,4,5,6,7]. In general, the fourth-order iterative methods are two-step methods with at least three function evaluations. Kung and Traub conjectured that a multi-step iterative scheme without memory based on $m$ function evaluations has an optimal convergence order of $p = 2^{m-1}$. When the fourth-order iterative method is optimal, the bound of the efficiency index (E.I.) is 1.587.
When a one-step iterative scheme with two function evaluations is considered, like the NM, its optimal order is two, and the E.I. reduces to 1.414. From this aspect, the multi-step iterative scheme is superior to the one-step iterative scheme with multiple function evaluations. In the local convergence analysis of an iterative scheme for solving nonlinear equations near the root $r$, three critical values $f'(r)$, $f''(r)$, and $f'''(r)$ and their ratios are dominant; they appear as the first three Taylor coefficients. In many iterative schemes, the accelerating parameter and the optimal parameter are determined by these critical values. But the root $r$ is itself an unknown constant, such that the precise values of $f'(r)$, $f''(r)$, and $f'''(r)$ are not available. The primary goal of many memory methods is to develop a powerful updating technique, based on the memory of the previous values of the variables, to quickly obtain $f'(r)$, $f''(r)$, and $f'''(r)$ step-by-step; these values are used in several one-step iterative schemes developed in this paper to achieve high values of the computed order of convergence (COC) and the E.I. with the memory-accelerating technique.
Traub [8] was the first to develop a memory-dependent accelerating method from Steffensen's iterative scheme; given $x_0$ and $\gamma_0$:
$$w_n = x_n + \gamma_n f(x_n), \quad x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n]}, \quad \gamma_{n+1} = -\frac{1}{f[x_n, x_{n+1}]}.$$
With this modification, by taking the memory of $(x_n, w_n)$ into account, the computational order of convergence is raised from 2 to at least 2.414. Iterative methods that use information from the current and previous iterations are methods with memory. In Equation (4), while $x_n$ is a step variable, $w_n$ is an adjoint variable, which does not have an iteration of its own. The role of $w_n$ is different from that of $y_n$ in a two-step iterative scheme. Later, we will introduce a supplementary variable, which merely provides an extra datum used in the data interpolation; its role is different from those of $x_n$ and $w_n$.
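To make the mechanics of Equation (4) concrete, the following minimal Python sketch runs Traub's memory-accelerated Steffensen scheme; the test function, starting point, initial $\gamma_0$, and the function name are our own illustrative choices, not taken from the paper.

```python
def traub_with_memory(f, x0, gamma0=0.01, tol=1e-12, max_iter=50):
    """Steffensen-Traub iteration with memory, Equation (4): the
    accelerating parameter gamma is refreshed each step from the
    divided difference f[x_n, x_{n+1}] of the stored iterates."""
    x, gamma = x0, gamma0
    fx = f(x)
    for _ in range(max_iter):
        w = x + gamma * fx                      # adjoint variable w_n
        x_new = x - fx * (x - w) / (fx - f(w))  # x_n - f(x_n)/f[x_n, w_n]
        fx_new = f(x_new)
        if abs(x_new - x) < tol:
            return x_new
        gamma = (x_new - x) / (fx - fx_new)     # = -1/f[x_n, x_{n+1}]
        x, fx = x_new, fx_new
    return x

# illustrative run on f(x) = x^3 + 4x^2 - 10 (simple root near 1.3652)
print(traub_with_memory(lambda t: t**3 + 4*t**2 - 10, 1.0))
```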
In 2013, Džunić [9] proposed a modification of Steffensen's and Traub's iterative schemes by introducing two parameters $\gamma$ and $p$ in
$$w_n = x_n + \gamma f(x_n), \quad x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n] + pf(w_n)}.$$
The error equation was derived as
$$e_{n+1} = [1 + \gamma f'(r)](a_2 + p)e_n^2 + O(e_n^3),$$
where $a_2 = f''(r)/[2f'(r)]$ is a ratio of the second and first Taylor coefficients. Taking $\gamma = -1/f'(r)$ is sufficient for the vanishing of the second-order error term; hence, there is freedom to assign the value of the accelerating parameter $p$.
Only a few papers have been concerned with one-step iterative schemes with memory [9,10,11]. In this paper, we develop several memory-dependent one-step iterative methods for the high-performance solution of nonlinear equations. The methodology involves introducing a free parameter and a combination function, which are then optimized to raise the order of convergence; this is original and highly novel. The strategy of updating the values of the parameters with the memory-accelerating method can significantly speed up the convergence and raise the value of the E.I. toward a limit bound.
The novelties involved in the paper are as follows:
  • Introducing a free parameter into an existing or newly created model of the iterative scheme.
  • Inserting a combination function into two iterative schemes; the parameter or combination function is then optimized to raise the order of convergence.
  • Several ideas presented here are novel and have not yet appeared in the literature; they can promote the development of fourth-order one-step iterative schemes while saving computational cost.
  • For the application of the derivative-free one-step iterative schemes, we developed a powerful Lie symmetry method to solve a second-order nonlinear boundary-value problem.
The rest of the paper proceeds as follows. In Section 2, we introduce two basic one-step iterative schemes, which are the starting point and motivate the development of the accelerating techniques to update the critical parameters $f'(r)$ and $f''(r)$ appearing in the third-order iterative schemes. Section 3 gives a detailed local convergence analysis of these two basic one-step iterative schemes; a new concept of the optimal combination of these two schemes is given in Theorem 3. In Section 4, some numerical experiments are carried out with the computed order of convergence (COC) to evaluate the performance in comparison to some fourth-order optimal two-step iterative schemes. In Section 5, we introduce the updating techniques for the three critical parameters $f'(r)$, $f''(r)$, and $f'''(r)$ by using the memory-updating technique and the third-degree Newton interpolation polynomial. The idea of the supplementary variable is introduced, and the result is the first memory-accelerating technique. In Section 6, we first derive a new third-order iterative scheme as a variant of Džunić's method; then, the updating techniques for two parameters and three parameters by the memory methods are developed; the second memory-accelerating technique is obtained. In Section 7, we improve Džunić's method and propose new optimal combination methods; three memory-accelerating techniques are developed. In Section 8, we introduce a relaxation factor into Džunić's method, and the optimal value of the relaxation factor is derived; the sixth memory-accelerating technique is developed. As a practical application, a Lie symmetry method is developed in Section 9 to solve a second-order boundary-value problem. Finally, we conclude with the achievements in Section 10.

2. Preliminaries

Mathematically speaking, $f(x) = 0$ is equivalent to
$$xf'(x) - b_0xf(x) = xf'(x) - (1 + b_0x)f(x),$$
where the constant $b_0$ is to be determined. If $x_n$ is known, we can find the next $x_{n+1}$ by solving
$$[f'(x_n) - b_0f(x_n)]x_{n+1} = x_nf'(x_n) - b_0x_nf(x_n) - f(x_n).$$
Upon viewing the terms including $x$ as carrying the same coefficient on both sides and dividing by $f'(x_n) - b_0f(x_n)$, we can obtain
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) - b_0f(x_n)},$$
which includes a parameter $b_0 \neq 0 \in \mathbb{R}$ to be assigned. The above iterative scheme was developed in [12] as a one-step continuation Newton-like method; it is referred to as Wu's method [12]. The iterative scheme (9) was used by Lee et al. [13], Zafar et al. [14], and Thangkhenpau et al. [15] as the first step in multi-step iterative schemes for finding multiple zeros. Recently, Singh and Singh [16] and also Singh and Argyros [17] gave a detailed dynamical analysis of the continuous Newton-like method (9).
As a variant of Equation (9), we consider
$$x_{n+1} = x_n - \frac{f(x_n)}{a_0 + b_0f(x_n)},$$
which is obtained from the equivalent form of $f(x) = 0$:
$$a_0x + b_0xf(x) = a_0x - (1 - b_0x)f(x).$$
The iterative scheme (10) includes two parameters $a_0, b_0 \neq 0 \in \mathbb{R}$ to be assigned; it is referred to as Liu's method [18].
In 2021, Liu et al. [19] verified that the iterative scheme (10) is in general of first-order convergence; however, if we take $a_0 = f'(r)$, where $r$ is a simple root of $f(x) = 0$, the order rises to two; moreover, if we take $a_0 = f'(r)$ and $b_0 = f''(r)/(2f'(r))$, the order further rises to three. This technique is akin to the method of using accelerating parameters to speed up the convergence, whose optimal values can be determined by the convergence analysis.
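The following short Python sketch illustrates Wu's method (9) and Liu's method (10) side by side; purely for illustration, the optimal parameters $a_0$ and $b_0$ are evaluated at the (normally unknown) root, as is assumed in the convergence analysis.

```python
f   = lambda x: x**3 + 4*x**2 - 10
fp  = lambda x: 3*x**2 + 8*x       # f'
fpp = lambda x: 6*x + 8            # f''

r  = 1.3652300134140969            # known simple root, used for illustration only
b0 = fpp(r) / (2*fp(r))            # optimal b0 = a2 = f''(r)/(2 f'(r))
a0 = fp(r)                         # optimal a0 = f'(r)

x = y = 1.0
for n in range(5):
    x = x - f(x) / (fp(x) - b0*f(x))   # Wu's method, Equation (9)
    y = y - f(y) / (a0 + b0*f(y))      # Liu's method, Equation (10)
    print(n, abs(x - r), abs(y - r))   # both errors shrink cubically
```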
The memory methods reuse the information from previous iterations; they are not required to evaluate the function and its derivative at any new points, but they are required to store this information. The so-called R-convergence order of a memory method increases, and, at the same time, the E.I. may exceed that of the corresponding method without memory. Džunić [9] earlier developed an efficient two-parameter method for solving nonlinear equations by using the memory technique and Newton polynomial interpolation to determine the accelerating parameters. For the progress of memory methods with accelerating parameters in multi-step iterative schemes, one can refer to [20,21,22,23,24,25,26,27,28,29].

3. Convergence Analysis

Compared to the Newton method in Equation (3), Equation (9) is still applicable when $f'(x_n) = 0$. But, in this situation, the Newton method fails, which restricts its practical application. The iterative scheme (9) has some remarkable advantages over the Newton method [30]. It is interesting to study the convergence behaviors of the iterative scheme (9) and of its variant, the iterative scheme (10).
Wang and Liu [31] proposed a two-step iterative scheme as an extension of Equation (9):
$$y_n = x_n - \frac{f(x_n)}{f'(x_n) - \alpha f(x_n)}, \quad x_{n+1} = y_n - \frac{f(y_n)}{f'(x_n) - \alpha f(x_n)},$$
where $\alpha \in [-1, 1]$ is a parameter. The error formula was proven to be
$$e_{n+1} = (\alpha^2 - 3a_2\alpha + 2a_2^2)e_n^3.$$
The iterative scheme (12) is not optimal: with three function evaluations, it leads to third-order, not fourth-order, convergence. This motivated us to further investigate the local convergence properties of the iterative schemes (9) and (10). A new idea of the optimal combination of the iterative schemes (9) and (10) is introduced, such that a one-step optimal fourth-order iterative scheme can be achieved, which is better than the two-step iterative scheme (12).
Theorem 1.
The iterative scheme (9) for solving $f(x) = 0$ has third-order convergence if
$$b_0 = a_2 = \frac{f''(r)}{2f'(r)}.$$
Proof. 
Let
$$e_n = x_n - r,$$
which is small when $x_n$ and $r$ are sufficiently close. Writing Equation (15) for $n + 1$ and subtracting yield
$$e_{n+1} = e_n + x_{n+1} - x_n.$$
As usual, we have
$$f(x_n) = f'(r)[e_n + a_2e_n^2 + a_3e_n^3 + a_4e_n^4 + \cdots], \quad a_n := \frac{f^{(n)}(r)}{n!f'(r)}, \ n = 2, \ldots,$$
$$f'(x_n) = f'(r)[1 + 2a_2e_n + 3a_3e_n^2 + 4a_4e_n^3 + \cdots].$$
Then, using Equation (14), we have
$$\frac{f(x_n)}{f'(x_n) - b_0f(x_n)} = \frac{e_n + a_2e_n^2 + a_3e_n^3 + a_4e_n^4 + \cdots}{1 + (2a_2 - b_0)e_n + (3a_3 - b_0a_2)e_n^2 + (4a_4 - b_0a_3)e_n^3 + \cdots} = e_n + C_3e_n^3 + C_4e_n^4 + \cdots,$$
where
$$C_3 = (b_0 - 2a_2)^2 + b_0a_2 - 2a_3 + a_2(b_0 - 2a_2) = b_0^2 - 2b_0a_2 + 2a_2^2 - 2a_3 = a_2^2 - 2a_3,$$
$$C_4 = a_3(b_0 - 2a_2) + a_2(b_0a_2 - 3a_3) + a_2(b_0 - 2a_2)^2 + b_0a_3 - 3a_4 + 2(b_0 - 2a_2)(b_0a_2 - 3a_3).$$
Inserting Equation (19) into Equation (9) and using Equation (16), we can obtain
$$e_{n+1} = e_n - e_n - C_3e_n^3 - C_4e_n^4 + \cdots = -C_3e_n^3 - C_4e_n^4 + \cdots = (2a_3 - a_2^2)e_n^3 + O(e_n^4).$$
This ends the proof of Theorem 1.    □
Theorem 2.
If
$$a_0 = f'(r), \quad b_0 = a_2 = \frac{f''(r)}{2f'(r)},$$
the iterative scheme (10) is cubically convergent.
Proof. 
It can be proven similarly by inserting Equation (17) into Equation (10):
$$\frac{f(x_n)}{a_0 + b_0f(x_n)} = \frac{e_n + a_2e_n^2 + a_3e_n^3 + a_4e_n^4 + \cdots}{1 + b_0e_n + b_0a_2e_n^2 + b_0a_3e_n^3 + \cdots} = e_n + D_3e_n^3 + D_4e_n^4 + \cdots,$$
where we use Equation (22), for which
$$D_3 = a_3 - 2a_2b_0 + b_0^2 = a_3 - a_2^2, \quad D_4 = 3b_0^2a_2 - b_0a_2^2 - 2b_0a_3 + a_4.$$
Inserting Equation (23) into Equation (10) and using Equation (16), we can obtain
$$e_{n+1} = e_n - e_n - D_3e_n^3 - D_4e_n^4 + \cdots = -D_3e_n^3 - D_4e_n^4 + \cdots = (a_2^2 - a_3)e_n^3 + O(e_n^4).$$
This ends the proof of Theorem 2.    □
Not every combination of two iterative schemes of the same order can yield a new iterative scheme whose convergence order is raised by one. The conditions for the success of the combination are that, in the two error equations of the schemes, the coefficients preceding $a_2^2$ must not be the same and, at the same time, the coefficients preceding $a_3$ must not be the same.
Theorem 3.
As a combination of the iterative schemes (9) and (10) with a function $Q$, the iterative scheme
$$x_{n+1} = x_n - Q\frac{f(x_n)}{f'(x_n) - b_0f(x_n)} - (1 - Q)\frac{f(x_n)}{a_0 + b_0f(x_n)}$$
has fourth-order convergence if
$$a_0 = f'(r), \quad b_0 = \frac{f''(r)}{2f'(r)},$$
$$Q = \frac{D_3}{D_3 - C_3} = \frac{a_3 - a_2^2}{3a_3 - 2a_2^2} = \frac{1}{2} + \frac{f'(r)f'''(r)}{6[f''(r)^2 - f'(r)f'''(r)]}.$$
Proof. 
Inserting Equations (17) and (18) into Equation (26) leads to
$$Q\frac{f(x_n)}{f'(x_n) - b_0f(x_n)} + (1 - Q)\frac{f(x_n)}{a_0 + b_0f(x_n)} = \frac{Q(e_n + a_2e_n^2 + a_3e_n^3 + a_4e_n^4 + \cdots)}{1 + (2a_2 - b_0)e_n + (3a_3 - b_0a_2)e_n^2 + (4a_4 - b_0a_3)e_n^3 + \cdots} + \frac{(1 - Q)(e_n + a_2e_n^2 + a_3e_n^3 + a_4e_n^4 + \cdots)}{1 + b_0e_n + b_0a_2e_n^2 + b_0a_3e_n^3 + \cdots} = Qe_n + QC_3e_n^3 + QC_4e_n^4 + (1 - Q)e_n + (1 - Q)D_3e_n^3 + (1 - Q)D_4e_n^4 + \cdots = e_n + QC_4e_n^4 + (1 - Q)D_4e_n^4 + \cdots,$$
where we use Equations (27) and (28). Inserting Equation (29) into Equation (26) and using Equation (16), we can obtain
$$e_{n+1} = e_n - e_n - QC_4e_n^4 - (1 - Q)D_4e_n^4 + \cdots = -[QC_4 + (1 - Q)D_4]e_n^4 + \cdots.$$
This completes the proof of Theorem 3.    □

4. Numerical Experiments

For the purposes of comparison, we list the fourth-order iterative schemes developed by Chun [3]:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} - \left[1 + 2\frac{f(y_n)}{f(x_n)} + \frac{f^2(y_n)}{f^2(x_n)}\right]\frac{f(y_n)}{f'(x_n)},$$
by King [4]:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} - \frac{f(x_n) + \beta f(y_n)}{f(x_n) + (\beta - 2)f(y_n)}\,\frac{f(y_n)}{f'(x_n)},$$
where $\beta \in \mathbb{R}$, and by Chun and Ham [1]:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} - \frac{4f^2(x_n) + 6f(x_n)f(y_n) + 3f^2(y_n)}{4f^2(x_n) - 2f(x_n)f(y_n) - f^2(y_n)}\,\frac{f(y_n)}{f'(x_n)}.$$
Equations (31)–(33) are fourth-order optimal iterative schemes with $\mathrm{E.I.} = \sqrt[3]{4} = 1.587$. However, the E.I. of the iterative scheme (26) can be larger, as shown in Section 5. Furthermore, the iterative scheme (26) is a single-step scheme, rather than the two-step schemes of Equations (31)–(33).
The convergence criteria are given by
$$|x_{n+1} - x_n| < \varepsilon \quad \text{and} \quad |f(x_{n+1})| < \varepsilon,$$
where $\varepsilon = 10^{-15}$ is fixed for all tests. In [32], the numerically computed order of convergence (COC) is approximated by
$$\mathrm{COC} := \frac{\ln|(x_{n+1} - r)/(x_n - r)|}{\ln|(x_n - r)/(x_{n-1} - r)|},$$
where $r$ is a solution of $f(x) = 0$.
The iterative schemes in Equations (9), (10), and (26) are named, respectively, Algorithm 1, Algorithm 2, and Algorithm 3. We consider a simple case, $f(x) = x^3 - x = 0$, and specify how to calculate the COC. Starting from $x_0 = 1.5$, the NM achieves the root $r = 1$ within seven iterations, Algorithms 1 and 2 within five iterations, and Algorithm 3 within four iterations. For each triple of the data of $x$, we can compute the COC by Equation (35). The triple $(x_{n+1}, x_n, x_{n-1})$ comprises the last three values of $x$ before convergence, and we set the convergence value of $x$ to be $r$ with $f(r) = 0$. If $\ln|(x_{n+1} - r)/(x_n - r)|$ is not computable due to $x_{n+1} = r$, we can shift the triple back to $(x_n, x_{n-1}, x_{n-2})$, and so on.
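A minimal Python sketch of this procedure is given below: a COC routine following Equation (35), with the backward shift of the triple, applied to Algorithm 3 (Equation (26)) on $f(x) = x^3 - x$. The optimal parameters are formed from $f'(r)$, $f''(r)$, and $f'''(r)$ at the known root, purely for illustration; the function names are ours.

```python
import math

def coc(xs, r):
    """Computed order of convergence, Equation (35); shifts the triple
    back when a logarithm is undefined (iterate equal to the root)."""
    for k in range(len(xs) - 1, 1, -1):
        e2, e1, e0 = xs[k] - r, xs[k-1] - r, xs[k-2] - r
        if e2 != 0.0 and e1 != 0.0 and e0 != 0.0:
            return math.log(abs(e2/e1)) / math.log(abs(e1/e0))
    return float('nan')

# Algorithm 3 (Equation (26)) on f(x) = x^3 - x, root r = 1, x0 = 1.5;
# a0, b0, Q use f'(1) = 2, f''(1) = 6, f'''(1) = 6 (illustration only)
f, fp, r = (lambda x: x**3 - x), (lambda x: 3*x**2 - 1), 1.0
a0, b0 = 2.0, 6.0/(2*2.0)                   # f'(r) and f''(r)/(2 f'(r))
Q = 0.5 + 2.0*6.0/(6*(6.0**2 - 2.0*6.0))    # Equation (28)

xs = [1.5]
for _ in range(6):
    x = xs[-1]
    xs.append(x - Q*f(x)/(fp(x) - b0*f(x)) - (1-Q)*f(x)/(a0 + b0*f(x)))
print(coc(xs, r))   # a value above 4, in line with Table 1
```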
From the data in Table 1, we take the COC of the NM to be 1.999, of Algorithm 1 to be 3.046, of Algorithm 2 to be 2.968, and of Algorithm 3 to be 4.899. As expected, both Algorithms 1 and 2 have nearly third-order convergence; Algorithm 3, with COC $= 4.899$, even exceeds the theoretical fourth order.
The test examples are given by
$$g_1(x) = x^3 + 4x^2 - 10,$$
$$g_2(x) = x^2 - e^x - 3x + 2,$$
$$g_3(x) = (x - 1)^3 - 2,$$
$$g_4(x) = (x + 2)e^x - 1,$$
$$g_5(x) = \sin^2x - x^2 + 1.$$
The corresponding solutions are, respectively, $r_1 = 1.3652300134$, $r_2 = 0.2575302854$, $r_3 = 2.2599210499$, $r_4 = -0.442854401002$, and $r_5 = 1.4044916482$.
Algorithm 1 was tested by Wu [12,30], and Algorithm 2 was tested by Liu et al. [19]. Because Algorithm 3 is a new iterative scheme, we tested it on the above examples. In Table 2, for the different functions, we list the number of iterations (NI) obtained by the presently developed Algorithm 3, compared to the NM, the method of Jarratt [33] (JM), the method of Traub–Ostrowski [8] (TM), the method of King [4] (KM) with $\beta = 3$, and the method of Chun and Ham [1] (CM).
Algorithm 3 theoretically has fourth-order convergence with the optimal values of the parameters. For the first example, $g_1(x) = 0$, Algorithm 3 converges faster than the other fourth-order iterative schemes. For the other examples, Algorithm 3 is much better than the NM and is competitive with the other fourth-order iterative schemes JM, TM, KM, and CM.

5. Updating Three Parameters by Memory Method

In order to achieve the fourth-order convergence of the iterative scheme in Equation (26), the values of $f'(r)$, $f''(r)$, and $f'''(r)$ must be known a priori; however, these values are unknown because the root $r$ is an unknown constant.
Therefore, we introduce a supplementary variable predicted by the Newton scheme, to be used in the data interpolation:
$$w_n = x_n - \frac{f(x_n)}{f'(x_n)}.$$
Based on the updated data of $x_{n+1}$ and $w_{n+1} = x_{n+1} - f(x_{n+1})/f'(x_{n+1})$ and the previous data of $(x_n, w_n)$, we can construct a third-degree Newton interpolatory polynomial:
$$N_3(x) = f(x_n) + f[x_n, w_n](x - x_n) + f[x_n, w_n, x_{n+1}](x - x_n)(x - w_n) + f[x_n, w_n, x_{n+1}, w_{n+1}](x - x_n)(x - w_n)(x - x_{n+1}),$$
where
$$f[x_n, w_n] = \frac{f(x_n) - f(w_n)}{x_n - w_n}, \quad f[x_n, w_n, x_{n+1}] = \frac{f[x_n, w_n] - f[w_n, x_{n+1}]}{x_n - x_{n+1}}, \quad f[x_n, w_n, x_{n+1}, w_{n+1}] = \frac{f[x_n, w_n, x_{n+1}] - f[w_n, x_{n+1}, w_{n+1}]}{x_n - w_{n+1}}.$$
It is easy to derive
$$N_3'(x) = f[x_n, w_n] + f[x_n, w_n, x_{n+1}](2x - x_n - w_n) + f[x_n, w_n, x_{n+1}, w_{n+1}][(x - w_n)(x - x_{n+1}) + (x - x_n)(x - x_{n+1}) + (x - x_n)(x - w_n)],$$
$$N_3''(x) = 2f[x_n, w_n, x_{n+1}] + f[x_n, w_n, x_{n+1}, w_{n+1}](6x - 2x_n - 2w_n - 2x_{n+1}),$$
$$N_3'''(x) = 6f[x_n, w_n, x_{n+1}, w_{n+1}].$$
We update the values of $A = f'(r)$, $B = f''(r)/[2f'(r)]$, and $Q$ in Equation (28) by
$$A_{n+1} = N_3'(w_{n+1}), \quad B_{n+1} = \frac{N_3''(w_{n+1})}{2N_3'(w_{n+1})},$$
$$Q_{n+1} = \frac{1}{2} + \frac{N_3'(w_{n+1})N_3'''(w_{n+1})}{6\{[N_3''(w_{n+1})]^2 - N_3'(w_{n+1})N_3'''(w_{n+1})\}}.$$
Now, we have the first memory-accelerating technique for the iterative scheme (26) in Theorem 3, which reads as (i) giving $A_0$, $B_0$, $Q_0$, and $w_0 = x_0 - f(x_0)/f'(x_0)$ and (ii) performing, for $n = 0, 1, \ldots$:
$$x_{n+1} = x_n - Q_n\frac{f(x_n)}{f'(x_n) - B_nf(x_n)} - (1 - Q_n)\frac{f(x_n)}{A_n + B_nf(x_n)},$$
$$w_{n+1} = x_{n+1} - \frac{f(x_{n+1})}{f'(x_{n+1})},$$
$$A_{n+1} = f[x_n, w_n] + f[x_n, w_n, x_{n+1}](2w_{n+1} - x_n - w_n) + f[x_n, w_n, x_{n+1}, w_{n+1}][(w_{n+1} - w_n)(w_{n+1} - x_{n+1}) + (w_{n+1} - x_n)(w_{n+1} - x_{n+1}) + (w_{n+1} - x_n)(w_{n+1} - w_n)],$$
$$D_{n+1} = 2f[x_n, w_n, x_{n+1}] + f[x_n, w_n, x_{n+1}, w_{n+1}](6w_{n+1} - 2x_n - 2w_n - 2x_{n+1}), \quad E_{n+1} = 6f[x_n, w_n, x_{n+1}, w_{n+1}], \quad B_{n+1} = \frac{D_{n+1}}{2A_{n+1}},$$
$$Q_{n+1} = \frac{1}{2} + \frac{A_{n+1}E_{n+1}}{6[D_{n+1}^2 - A_{n+1}E_{n+1}]}.$$
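A compact Python sketch of this first memory-accelerating technique, under the updating formulas (47)–(49) stated above, might read as follows; the function name is ours, and the usage line takes the $g_1$ data of Table 3.

```python
def first_memory_scheme(f, fp, x0, A0, B0, Q0, tol=1e-15, max_iter=20):
    """First memory-accelerating technique: scheme (47) with A, B, Q
    refreshed each step from N3', N3'', N3''' (Equations (48) and (49))."""
    dd  = lambda a, b: (f(a) - f(b)) / (a - b)
    dd3 = lambda a, b, c: (dd(a, b) - dd(b, c)) / (a - c)
    x, A, B, Q = x0, A0, B0, Q0
    w = x - f(x)/fp(x)                        # supplementary variable, Eq. (41)
    for _ in range(max_iter):
        x1 = x - Q*f(x)/(fp(x) - B*f(x)) - (1 - Q)*f(x)/(A + B*f(x))
        if abs(x1 - x) < tol or f(x1) == 0.0:
            return x1
        w1 = x1 - f(x1)/fp(x1)
        f_xwx  = dd3(x, w, x1)
        f_xwxw = (f_xwx - dd3(w, x1, w1)) / (x - w1)
        A = (dd(x, w) + f_xwx*(2*w1 - x - w)              # = N3'(w_{n+1})
             + f_xwxw*((w1 - w)*(w1 - x1) + (w1 - x)*(w1 - x1) + (w1 - x)*(w1 - w)))
        D = 2*f_xwx + f_xwxw*(6*w1 - 2*x - 2*w - 2*x1)    # = N3''(w_{n+1})
        E = 6*f_xwxw                                      # = N3'''(w_{n+1})
        B = D/(2*A)
        Q = 0.5 + A*E/(6*(D*D - A*E))                     # Eq. (49)
        x, w = x1, w1
    return x

g1  = lambda x: x**3 + 4*x**2 - 10
g1p = lambda x: 3*x**2 + 8*x
print(first_memory_scheme(g1, g1p, 1.3, 17.09, 0.48, -0.1))  # Table 3 data
```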
Some numerical tests of the first memory-accelerating technique are listed in Table 3, from which we can observe that the values of the COC are very large. Because the differential term $f'(x_n)$ is included, the E.I. $=$ (COC)$^{1/3}$ was computed. All E.I.s are greater than the E.I. $= 1.587$ of the optimal fourth-order iterative scheme without memory with the same number of function evaluations. The last four E.I.s are also greater than the E.I. $= 1.682$ of the optimal eighth-order iterative scheme without memory with four function evaluations.

6. A New Fourth-Order Iterative Scheme with Memory Updating

Below, we will develop memory-accelerating techniques that avoid the differential term $f'(x_n)$. The test examples are changed to
$$f_1(x) = x^3 + 4x^2 - 10,$$
$$f_2(x) = (x - 1)(x^6 + 1/x^6 + 4)\sin(x^2),$$
$$f_3(x) = (x - 1)^3 - 2,$$
$$f_4(x) = (x + 2)e^x - 1,$$
$$f_5(x) = \sin^2x - x^2 + 1.$$
The corresponding solutions are, respectively, $r_1 = 1.3652300134$, $r_2 = 1$, $r_3 = 2.2599210499$, $r_4 = -0.442854401002$, and $r_5 = 1.4044916482$.
In order to compare the numerical results with those given in [9], a new function $f_2$ is added in Equation (51). For consistency, the other four functions are renamed $f_1$, $f_3$, $f_4$, and $f_5$ to replace $g_1$, $g_3$, $g_4$, and $g_5$ appearing in Equations (36) and (38)–(40).
In this section, a new one-step iterative scheme with the aid of a supplementary variable and a relaxation factor is introduced, and a detailed convergence analysis is derived. Then, we accelerate the introduced parameters by the memory-updating technique.

6.1. A New Third-Order Result

We address the following iterative scheme:
$$w_n = x_n + \gamma f(x_n), \quad x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n] + pf(x_n)},$$
which is a novel one-step iterative scheme with $w_n$ an adjoint variable. Equation (55) is a variant of Džunić's method [9], in which $f[x_n, w_n] + pf(w_n)$ appears in the denominator rather than $f[x_n, w_n] + pf(x_n)$.
To derive the convergence order of Equation (55), let
$$f(x_n) = F(e_n)f'(r) = f'(r)[e_n + a_2e_n^2 + a_3e_n^3 + a_4e_n^4 + \cdots],$$
$$f'(x_n) = G(e_n)f'(r) = f'(r)[1 + 2a_2e_n + 3a_3e_n^2 + 4a_4e_n^3 + \cdots],$$
$$f''(x_n) = 2H(e_n)f'(r); \quad H(e_n) = a_2 + 3a_3e_n + 6a_4e_n^2 + \cdots.$$
Theorem 4.
The iterative scheme (55) has third-order convergence:
$$e_{n+1} = -[(1 + 3\eta)a_3 + a_2^2]e_n^3 + O(e_n^4),$$
where $\eta \in (-1, 1]$ is a relaxation factor, and
$$\gamma = -\frac{1 + \eta}{f'(r)}, \quad p = \eta a_2.$$
If we take
$$\eta = -\frac{1}{3} - \frac{a_2^2}{3a_3} = -\frac{1}{3} - \frac{f''(r)^2}{2f'(r)f'''(r)},$$
then the order of convergence is further increased to four.
Proof. 
Using the Taylor series yields
$$f(w_n) - f(x_n) = f'(x_n)(w_n - x_n) + \frac{1}{2}f''(x_n)(w_n - x_n)^2 + \cdots,$$
where
$$w_n - x_n = \gamma f(x_n) = \gamma f'(r)F(e_n).$$
Hence, it follows from Equations (57) and (58) that
$$f[x_n, w_n] = f'(r)G(e_n) + \gamma f'(r)^2F(e_n)H(e_n).$$
Inserting Equations (56) and (64) into the second formula of Equation (55), we have
$$e_{n+1} = e_n - \frac{F(e_n)}{G(e_n) + \gamma f'(r)F(e_n)H(e_n) + pF(e_n)}.$$
Through some operations, we can obtain
$$G(e_n) + \gamma f'(r)F(e_n)H(e_n) + pF(e_n) = 1 + 2a_2e_n + 3a_3e_n^2 + 4a_4e_n^3 + \cdots + \gamma f'(r)(e_n + a_2e_n^2 + a_3e_n^3 + \cdots)(a_2 + 3a_3e_n + 6a_4e_n^2 + \cdots) + p(e_n + a_2e_n^2 + a_3e_n^3 + \cdots),$$
which can be written as
$$G(e_n) + \gamma f'(r)F(e_n)H(e_n) + pF(e_n) = 1 + P_1e_n + P_2e_n^2 + P_3e_n^3 + \cdots,$$
where
$$P_1 = 2a_2 + a_2\gamma f'(r) + p, \quad P_2 = 3a_3 + (a_2^2 + 3a_3)\gamma f'(r) + pa_2, \quad P_3 = 4a_4 + \gamma f'(r)(6a_4 + 4a_2a_3) + pa_3.$$
Then, Equation (65) is further reduced to
$$e_{n+1} = e_n - \frac{F(e_n)}{G(e_n) + \gamma f'(r)F(e_n)H(e_n) + pF(e_n)} = e_n - (e_n + a_2e_n^2 + a_3e_n^3 + \cdots)[1 - P_1e_n - P_2e_n^2 - P_3e_n^3 + (P_1e_n + P_2e_n^2 + P_3e_n^3)^2 - \cdots] = (P_1 - a_2)e_n^2 + (P_2 + a_2P_1 - a_3 - P_1^2)e_n^3 + (P_3 + a_2P_2 + a_3P_1 - 2P_1P_2 - a_4)e_n^4 + \cdots.$$
Letting $P_1 - a_2 = 0$ for the vanishing of $e_n^2$, we can derive a relation between $\gamma$ and $p$:
$$a_2[1 + \gamma f'(r)] + p = 0.$$
Taking $1 + \gamma f'(r) = -\eta$, we prove Equation (60). By using $P_1 = a_2$ and Equation (68), the coefficient preceding $e_n^3$ in Equation (69) can be simplified to
$$(P_2 + a_2P_1 - a_3 - P_1^2)e_n^3 = (P_2 - a_3)e_n^3 = [3a_3 + (a_2^2 + 3a_3)\gamma f'(r) + pa_2 - a_3]e_n^3,$$
which, further using $\gamma f'(r) = -(1 + \eta)$ and $p = \eta a_2$, reduces to
$$(P_2 + a_2P_1 - a_3 - P_1^2)e_n^3 = [2a_3 - (1 + \eta)(a_2^2 + 3a_3) + \eta a_2^2]e_n^3 = -[(1 + 3\eta)a_3 + a_2^2]e_n^3.$$
If Equation (61) is satisfied, the error equation becomes $e_{n+1} = O(e_n^4)$. This completes the proof of Theorem 4.    □
Theorem 4 gives us a clue to achieving a fourth-order iterative scheme (55) if $\gamma$ and $p$ are given by Equations (60) and (61). However, there are three unknown parameters, $f'(r)$, $f''(r)$, and $f'''(r)$. We will apply the memory-accelerating technique to adapt the values of $\gamma$ and $p$ in Equation (55) per iteration. The accuracy of this memory-dependent adaption technique depends on how many current and previous data are taken into account. Similarly to what was performed in [9], we could further estimate the lower bound of the convergence order of such a memory-accelerating method upon specifying the updating technique for $\gamma$ and $p$. Instead of deriving the formulas for this kind of estimation, we use the numerical values of the COC to display the numerical performance.

6.2. Updating Two Parameters

To mimic the memory-updating procedure in Section 5, we can obtain a memory method for the iterative scheme in Equation (55) by (i) giving $A_0$, $B_0$, $\eta$, and $w_0 = x_0 - (1 + \eta)f(x_0)/A_0$ and (ii) performing, for $n = 0, 1, \ldots$:
$$x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n] + \eta B_nf(x_n)},$$
$$A_{n+1} = f[x_n, w_n] + f[x_n, w_n, x_{n+1}](2x_{n+1} - x_n - w_n),$$
$$w_{n+1} = x_{n+1} - \frac{(1 + \eta)f(x_{n+1})}{A_{n+1}},$$
$$C_{n+1} = f[x_n, w_n] + f[x_n, w_n, x_{n+1}](2w_{n+1} - x_n - w_n) + f[x_n, w_n, x_{n+1}, w_{n+1}][(w_{n+1} - w_n)(w_{n+1} - x_{n+1}) + (w_{n+1} - x_n)(w_{n+1} - x_{n+1}) + (w_{n+1} - x_n)(w_{n+1} - w_n)],$$
$$D_{n+1} = 2f[x_n, w_n, x_{n+1}] + f[x_n, w_n, x_{n+1}, w_{n+1}](6w_{n+1} - 2x_n - 2w_n - 2x_{n+1}), \quad B_{n+1} = \frac{D_{n+1}}{2C_{n+1}}.$$
In the above iterative scheme, $\eta$ is a given constant parameter.
To demonstrate the usefulness of the above iterative scheme, we consider the solution of $f_1(x) = 0$. We fix $x_0 = 1.3$, $A_0 = 17.09$, and $B_0 = 0.4798$ and vary the value of $\eta$ in Table 4. Because only two function evaluations are required, the E.I. $=$ (COC)$^{1/2}$ was computed.

6.3. Updating Three Parameters

Table 4 reveals that there is an optimal value of $\eta$ for which the COC and E.I. are the best. Indeed, by setting the coefficient preceding $e_n^3$ to zero in Theorem 4, we can truly obtain a fourth-order iterative scheme, whose $\eta$ is determined by Equation (61).
Thus, we have the second memory-accelerating technique for the iterative scheme (55) in Theorem 4 using $\eta$ in Equation (61), which reads as (i) giving $A_0$, $B_0$, $\eta_0$, and $w_0 = x_0 - (1 + \eta_0)f(x_0)/A_0$ and (ii) performing, for $n = 0, 1, \ldots$:
$$x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n] + \eta_nB_nf(x_n)},$$
$$A_{n+1} = f[x_n, w_n] + f[x_n, w_n, x_{n+1}](2x_{n+1} - x_n - w_n),$$
$$w_{n+1} = x_{n+1} - \frac{(1 + \eta_n)f(x_{n+1})}{A_{n+1}},$$
$$C_{n+1} = f[x_n, w_n] + f[x_n, w_n, x_{n+1}](2w_{n+1} - x_n - w_n) + f[x_n, w_n, x_{n+1}, w_{n+1}][(w_{n+1} - w_n)(w_{n+1} - x_{n+1}) + (w_{n+1} - x_n)(w_{n+1} - x_{n+1}) + (w_{n+1} - x_n)(w_{n+1} - w_n)],$$
$$D_{n+1} = 2f[x_n, w_n, x_{n+1}] + f[x_n, w_n, x_{n+1}, w_{n+1}](6w_{n+1} - 2x_n - 2w_n - 2x_{n+1}),$$
$$E_{n+1} = 6f[x_n, w_n, x_{n+1}, w_{n+1}], \quad B_{n+1} = \frac{D_{n+1}}{2C_{n+1}},$$
$$\eta_{n+1} = -\frac{1}{3} - \frac{2A_{n+1}B_{n+1}^2}{E_{n+1}}.$$
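A derivative-free Python sketch of the second memory-accelerating technique (77)–(81) is given below; the $\eta$ update follows Equation (81) as reconstructed here, the function name is ours, and the usage line takes the $f_1$ data of Table 5.

```python
def second_memory_scheme(f, x0, A0, B0, eta0, tol=1e-12, max_iter=20):
    """Second memory-accelerating technique, Equations (77)-(81):
    derivative-free, with A ~ f'(r), B ~ a2, and relaxation factor eta."""
    dd  = lambda a, b: (f(a) - f(b)) / (a - b)
    dd3 = lambda a, b, c: (dd(a, b) - dd(b, c)) / (a - c)
    x, A, B, eta = x0, A0, B0, eta0
    w = x - (1 + eta)*f(x)/A
    for _ in range(max_iter):
        x1 = x - f(x)/(dd(x, w) + eta*B*f(x))            # Eq. (77)
        if abs(x1 - x) < tol or f(x1) == 0.0:
            return x1
        A = dd(x, w) + dd3(x, w, x1)*(2*x1 - x - w)      # Eq. (78)
        w1 = x1 - (1 + eta)*f(x1)/A                      # Eq. (79)
        f_xwx  = dd3(x, w, x1)
        f_xwxw = (f_xwx - dd3(w, x1, w1)) / (x - w1)
        C = (dd(x, w) + f_xwx*(2*w1 - x - w)
             + f_xwxw*((w1 - w)*(w1 - x1) + (w1 - x)*(w1 - x1) + (w1 - x)*(w1 - w)))
        D = 2*f_xwx + f_xwxw*(6*w1 - 2*x - 2*w - 2*x1)
        E = 6*f_xwxw                                     # Eq. (80)
        B = D/(2*C)
        eta = -1.0/3.0 - 2*A*B*B/E                       # Eq. (81), as reconstructed
        x, w = x1, w1
    return x

f1 = lambda x: x**3 + 4*x**2 - 10
print(second_memory_scheme(f1, 1.0, 5.25, 0.9048, -0.3))  # Table 5 data
```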
Some numerical tests of the second memory-accelerating technique are listed in Table 5. For the equation $f_2(x) = 0$, the obtained COC $= 3.535$ is slightly larger than the COC $= 3.48$ obtained in [9].

7. Improvement of Džunić's Method and Optimal Combination

In this section, we improve Džunić's method [9] by deriving the optimal parameters and their accelerating techniques. The ideas of two optimal combinations, of Džunić's and Wu's methods and of Džunić's and Liu's methods, are introduced. Then, three new one-step iterative schemes with memory-updating techniques are developed.

7.1. Improvement of Džunić's Memory Method

As was performed in [9], $\gamma$ and $p$ in Equation (5) were taken to be $\gamma = -1/f'(r)$ and $p = -a_2$. To guarantee the third-order convergence of Džunić's method, $1 + \gamma f'(r) = 0$ is sufficient, as shown in Equation (6). Therefore, there exists the freedom to choose the value of $p$.
Theorem 5.
For solving $f(x) = 0$, the iterative scheme (5) has third-order convergence:
$$e_{n+1} = -[a_3 + pa_2 + a_2^2]e_n^3 + O(e_n^4),$$
if $\gamma = -1/f'(r)$. If we take, furthermore,
$$p = -a_2 - \frac{a_3}{a_2} = -\frac{f''(r)}{2f'(r)} - \frac{f'''(r)}{3f''(r)},$$
then Equation (5) has fourth-order convergence:
$$e_{n+1} = O(e_n^4).$$
Proof. 
By Equations (62) and (63) and Equations (56)–(58), we have
$$f(w_n) = f'(r)F(e_n) + \gamma f'(r)^2F(e_n)G(e_n) + \gamma^2f'(r)^3F^2(e_n)H(e_n).$$
At the same time, Equation (65) is modified to
$$e_{n+1} = e_n - \frac{F(e_n)}{G(e_n) + \gamma f'(r)F(e_n)[H(e_n) + pG(e_n)] + pF(e_n) + p\gamma^2f'(r)^2H(e_n)F^2(e_n)}.$$
Since we are not interested in the details of the fourth-order error equation, we write
$$G(e_n) + \gamma f'(r)F(e_n)H(e_n) + p\gamma f'(r)F(e_n)G(e_n) + pF(e_n) + p\gamma^2f'(r)^2H(e_n)F^2(e_n) = 1 + P_1e_n + P_2e_n^2 + \cdots,$$
where
$$P_1 = 2a_2 + a_2\gamma f'(r) + p[1 + \gamma f'(r)], \quad P_2 = 3a_3 + (a_2^2 + 3a_3)\gamma f'(r) + pa_2 + pa_2[3\gamma f'(r) + \gamma^2f'(r)^2].$$
Then, Equation (86) is reduced to
$$e_{n+1} = e_n - (e_n + a_2e_n^2 + a_3e_n^3 + \cdots)[1 - P_1e_n - P_2e_n^2 - P_3e_n^3 + (P_1e_n + P_2e_n^2 + P_3e_n^3)^2 - \cdots] = (P_1 - a_2)e_n^2 + (P_2 + a_2P_1 - a_3 - P_1^2)e_n^3 + O(e_n^4),$$
where $P_1 - a_2 = (a_2 + p)[1 + \gamma f'(r)]$ was derived in [9].
Letting $P_1 = a_2$ for the vanishing of $e_n^2$, we can derive
$$P_2 + a_2P_1 - a_3 - P_1^2 = 2a_3 + (a_2^2 + 3a_3)\gamma f'(r) + pa_2[1 + 3\gamma f'(r) + \gamma^2f'(r)^2] = -a_3 - a_2^2 - pa_2$$
if we take $\gamma f'(r) = -1$. By using Equation (83), the coefficient preceding $e_n^3$ is reduced to zero. This completes the proof of Theorem 5.    □
The third memory-accelerating technique for the iterative scheme (5) in Theorem 5 using $p$ in Equation (83) reads as (i) giving $A_0$, $p_0$, and $w_0 = x_0 - f(x_0)/A_0$ and (ii) performing, for $n = 0, 1, \ldots$:
$$x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n] + p_nf(w_n)},$$
$$A_{n+1} = f[x_n, w_n] + f[x_n, w_n, x_{n+1}](2x_{n+1} - x_n - w_n),$$
$$w_{n+1} = x_{n+1} - \frac{f(x_{n+1})}{A_{n+1}},$$
$$C_{n+1}, D_{n+1}, E_{n+1} \ \text{computed by Equation (80)},$$
$$p_{n+1} = -\frac{D_{n+1}}{2C_{n+1}} - \frac{E_{n+1}}{3D_{n+1}}.$$
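The third technique admits an analogous Python sketch; the $p$ update implements Equation (95), the function name is ours, and the starting values in the usage line follow Table 6.

```python
def third_memory_scheme(f, x0, A0, p0, tol=1e-12, max_iter=20):
    """Third memory-accelerating technique, Equations (91)-(95):
    Dzunic's scheme with gamma ~ -1/f'(r) and the optimal p of Eq. (83)."""
    dd  = lambda a, b: (f(a) - f(b)) / (a - b)
    dd3 = lambda a, b, c: (dd(a, b) - dd(b, c)) / (a - c)
    x, A, p = x0, A0, p0
    w = x - f(x)/A
    for _ in range(max_iter):
        x1 = x - f(x)/(dd(x, w) + p*f(w))                # Eq. (91)
        if abs(x1 - x) < tol or f(x1) == 0.0:
            return x1
        A = dd(x, w) + dd3(x, w, x1)*(2*x1 - x - w)      # Eq. (92)
        w1 = x1 - f(x1)/A                                # Eq. (93)
        f_xwx  = dd3(x, w, x1)
        f_xwxw = (f_xwx - dd3(w, x1, w1)) / (x - w1)
        C = (dd(x, w) + f_xwx*(2*w1 - x - w)
             + f_xwxw*((w1 - w)*(w1 - x1) + (w1 - x)*(w1 - x1) + (w1 - x)*(w1 - w)))
        D = 2*f_xwx + f_xwxw*(6*w1 - 2*x - 2*w - 2*x1)   # C, D, E per Eq. (80)
        E = 6*f_xwxw
        p = -D/(2*C) - E/(3*D)                           # Eq. (95)
        x, w = x1, w1
    return x

f1 = lambda x: x**3 + 4*x**2 - 10
print(third_memory_scheme(f1, 1.0, 5.25, -0.3))          # Table 6 data
```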
Some numerical tests of the third memory-accelerating technique are listed in Table 6. We can notice that, for the equation $f_2(x) = 0$, the obtained COC $= 4.022$ is larger than the COC $= 3.48$ obtained in [9].

7.2. Optimal Combination of Džunić's and Wu's Iterative Methods

Theorem 6.
If
$$\gamma = -\frac{1}{f'(r)}, \quad p = a_2, \quad b_0 = a_2,$$
then the combination of the iterative schemes (5) and (9),
$$w_n = x_n + \gamma f(x_n), \quad x_{n+1} = x_n - Q\frac{f(x_n)}{f[x_n, w_n] + pf(w_n)} - (1 - Q)\frac{f(x_n)}{f'(x_n) - b_0f(x_n)},$$
has fourth-order convergence, where
$$Q = \frac{2a_3 - a_2^2}{3a_3 + a_2^2}.$$
Proof. 
By Equations (20), (21), and (82), we have
$$e_{n+1} = \{(1 - Q)(2a_3 - a_2^2) - Q[a_3 + pa_2 + a_2^2]\}e_n^3 + O(e_n^4).$$
If we take $p = a_2$ and $Q$ by Equation (97), then Equation (98) becomes
$$e_{n+1} = O(e_n^4).$$
This ends the proof of Theorem 6.    □
The fourth memory-accelerating technique for the iterative scheme (96) in Theorem 6 reads as (i) giving $A_0$, $B_0$, $Q_0$, and $w_0 = x_0 - f(x_0)/A_0$ and (ii) performing, for $n = 0, 1, \ldots$:
$$x_{n+1} = x_n - Q_n\frac{f(x_n)}{f[x_n, w_n] + B_nf(w_n)} - (1 - Q_n)\frac{f(x_n)}{f'(x_n) - B_nf(x_n)},$$
$$A_{n+1} = f[x_n, w_n] + f[x_n, w_n, x_{n+1}](2x_{n+1} - x_n - w_n),$$
$$w_{n+1} = x_{n+1} - \frac{f(x_{n+1})}{A_{n+1}},$$
$$C_{n+1}, D_{n+1}, E_{n+1} \ \text{computed by Equation (80)}, \quad B_{n+1} = \frac{D_{n+1}}{2C_{n+1}}, \quad G_{n+1} = \frac{E_{n+1}}{6C_{n+1}},$$
$$Q_{n+1} = \frac{2G_{n+1} - B_{n+1}^2}{3G_{n+1} + B_{n+1}^2}.$$
Some numerical tests of the fourth memory-accelerating technique are listed in Table 7, which shows that the values of the COC are high; this technique, however, needs the differential term $f'(x_n)$. We can notice that, for the equation $f_2(x) = 0$, the obtained COC $= 6.643$ is much larger than the COC $= 3.48$ obtained in [9].

7.3. Optimal Combination of Džunić's and Liu's Iterative Methods

Theorem 7.
If
$$\gamma = -\frac{1}{f'(r)}, \quad p = -a_2, \quad a_0 = f'(r), \quad b_0 = a_2,$$
then the combination of the iterative schemes (5) and (10),
$$w_n = x_n + \gamma f(x_n), \quad x_{n+1} = x_n - Q\frac{f(x_n)}{f[x_n, w_n] + pf(w_n)} - (1 - Q)\frac{f(x_n)}{f'(r) + b_0f(x_n)},$$
has fourth-order convergence, where
$$Q = 1 - \frac{a_3}{a_2^2}.$$
Proof. 
By Equations (24) and (82), we have
$$e_{n+1} = \{(1 - Q)(a_2^2 - a_3) - Q[a_3 + pa_2 + a_2^2]\}e_n^3 + O(e_n^4).$$
If we take $p = -a_2$ and $Q$ by Equation (106), then Equation (107) becomes
$$e_{n+1} = O(e_n^4).$$
This ends the proof of Theorem 7.    □
The fifth memory-accelerating technique for the iterative scheme (105) in Theorem 7 reads as (i) giving $A_0 = C_0$, $B_0$, $Q_0$, and $w_0 = x_0 - f(x_0)/A_0$ and (ii) performing, for $n = 0, 1, \ldots$:
$$x_{n+1} = x_n - Q_n\frac{f(x_n)}{f[x_n, w_n] - B_nf(w_n)} - (1 - Q_n)\frac{f(x_n)}{C_n + B_nf(x_n)},$$
$$A_{n+1} = f[x_n, w_n] + f[x_n, w_n, x_{n+1}](2x_{n+1} - x_n - w_n),$$
$$w_{n+1} = x_{n+1} - \frac{f(x_{n+1})}{A_{n+1}},$$
$$C_{n+1}, D_{n+1}, E_{n+1} \ \text{computed by Equation (80)}, \quad B_{n+1} = \frac{D_{n+1}}{2C_{n+1}},$$
$$Q_{n+1} = 1 - \frac{E_{n+1}}{D_{n+1}^2}.$$
Some numerical tests of the fifth memory-accelerating technique are listed in Table 8. We can notice that, for the equation $f_2(x) = 0$, the obtained COC $= 5.013$ is larger than the COC $= 3.48$ obtained in [9].

8. Modification of Džunić's Method

As was performed in [9], $\gamma$ and $p$ in Equation (5) were taken to be $\gamma = -1/f'(r)$ and $p = -a_2$. To guarantee the third-order convergence of Džunić's method, $p = -a_2$ is sufficient, as shown in Equation (6). Therefore, there exists the freedom to choose the value of $\gamma$. We propose a modification of Džunić's method:
$$w_n = x_n - \beta\frac{f(x_n)}{f'(r)}, \quad x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n] + pf(w_n)};$$
that is, we take $p = -a_2$ and $\gamma f'(r) = -\beta$, where $\beta$ is a relaxation factor to be determined for increasing the order of convergence. If we take $\beta = 1$, Džunić's method is recovered. The present modification is different from those analyzed in Section 7.1 and Section 7.2, where $p$ is a free parameter.
Theorem 8.
For solving $f(x) = 0$, the iterative scheme (113) with $p = -a_2$ has third-order convergence:
$$e_{n+1} = [(2 - 3\beta)a_3 - a_2^2(\beta^2 - 2\beta + 1)]e_n^3 + O(e_n^4).$$
If we take, furthermore,
$$\beta = \frac{2 - 3q + \sqrt{9q^2 - 4q}}{2}, \quad q = \frac{a_3}{a_2^2},$$
then Equation (113) has fourth-order convergence:
$$e_{n+1} = O(e_n^4).$$
Proof. 
Let $\beta = 1 + \eta$. It follows from Equations (89) and (90) that
$$e_{n+1} = (P_2 + a_2P_1 - a_3 - P_1^2)e_n^3 + O(e_n^4) = (P_2 - a_3)e_n^3 + O(e_n^4),$$
$$P_2 - a_3 = 2a_3 + (a_2^2 + 3a_3)\gamma f'(r) - a_2^2[1 + 3\gamma f'(r) + \gamma^2f'(r)^2] = (2 - 3\beta)a_3 - a_2^2[\beta^2 - 2\beta + 1],$$
where $P_1 = a_2$, $p = -a_2$, and $\gamma f'(r) = -\beta$ were inserted. For the vanishing of $e_n^3$, we need to solve
$$\beta^2 + \left(\frac{3a_3}{a_2^2} - 2\right)\beta + 1 - \frac{2a_3}{a_2^2} = 0,$$
whose solution is given by Equation (115).    □
The sixth memory-accelerating technique for the iterative scheme (113) in Theorem 8 reads as (i) giving $A_0$, $B_0$, $\beta_0$, and $w_0 = x_0 - \beta_0f(x_0)/A_0$ and (ii) performing, for $n = 0, 1, \ldots$:
$$x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n] - B_nf(w_n)},$$
$$A_{n+1} = f[x_n, w_n] + f[x_n, w_n, x_{n+1}](2x_{n+1} - x_n - w_n),$$
$$w_{n+1} = x_{n+1} - \beta_{n+1}\frac{f(x_{n+1})}{A_{n+1}},$$
$$C_{n+1}, D_{n+1}, E_{n+1} \ \text{computed by Equation (80)}, \quad B_{n+1} = \frac{D_{n+1}}{2C_{n+1}}, \quad q_{n+1} = \frac{E_{n+1}}{D_{n+1}^2},$$
$$\beta_{n+1} = \frac{2 - 3q_{n+1} + \sqrt{9q_{n+1}^2 - 4q_{n+1}}}{2}.$$
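A Python sketch of the sixth technique follows; the guard on the square root of $9q^2 - 4q$ in Equation (124), which can turn negative for $0 < q < 4/9$, is our own safeguard and not part of the scheme, and the function name is ours.

```python
import math

def sixth_memory_scheme(f, x0, A0, B0, beta0, tol=1e-12, max_iter=20):
    """Sixth memory-accelerating technique, Equations (120)-(124):
    modified Dzunic scheme with relaxation factor beta and p = -a2."""
    dd  = lambda a, b: (f(a) - f(b)) / (a - b)
    dd3 = lambda a, b, c: (dd(a, b) - dd(b, c)) / (a - c)
    x, A, B, beta = x0, A0, B0, beta0
    w = x - beta*f(x)/A
    for _ in range(max_iter):
        x1 = x - f(x)/(dd(x, w) - B*f(w))                # Eq. (120)
        if abs(x1 - x) < tol or f(x1) == 0.0:
            return x1
        A = dd(x, w) + dd3(x, w, x1)*(2*x1 - x - w)      # Eq. (121)
        w_tmp = x1 - beta*f(x1)/A                        # provisional node
        f_xwx  = dd3(x, w, x1)
        f_xwxw = (f_xwx - dd3(w, x1, w_tmp)) / (x - w_tmp)
        C = (dd(x, w) + f_xwx*(2*w_tmp - x - w)
             + f_xwxw*((w_tmp - w)*(w_tmp - x1) + (w_tmp - x)*(w_tmp - x1) + (w_tmp - x)*(w_tmp - w)))
        D = 2*f_xwx + f_xwxw*(6*w_tmp - 2*x - 2*w - 2*x1)
        E = 6*f_xwxw                                     # Eq. (80)
        B = D/(2*C)
        q = E/(D*D)                                      # Eq. (123)
        disc = max(9*q*q - 4*q, 0.0)                     # guard: keep beta real
        beta = (2 - 3*q + math.sqrt(disc))/2             # Eq. (124)
        w = x1 - beta*f(x1)/A                            # Eq. (122)
        x = x1
    return x

f1 = lambda x: x**3 + 4*x**2 - 10
print(sixth_memory_scheme(f1, 1.2, 17.09, 0.48, 2.0))    # Table 9 data
```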
Some numerical tests of the sixth memory-accelerating technique are listed in Table 9. For the equation $f_2(x) = 0$, the COC $= 8.526$ is much larger than the COC $= 3.48$ obtained in [9].
In Equation (113), if we take $\beta = 1$, then Džunić's method is recovered. In Table 10, the values of the COC are compared, which shows that the COC obtained by the sixth memory-accelerating technique is larger than that obtained by Džunić's memory method.
Notice that, by using the suggested initial values of $x_0 = 1.3$, $A_0 = 10$, and $B_0 = 0.1$ given in [9] and using
$$\mathrm{COC} := \frac{\ln|f(x_{n+1})/f(x_n)|}{\ln|f(x_n)/f(x_{n-1})|}$$
instead of Equation (35), we can obtain the COC $= 3.538$ for $f_2(x) = 0$, which is close to the lower bound of 3.56 derived in [9]. However, using $A_0 = 15.63$ and $B_0 = 0.077$, the COC $= 4.990$ is obtained by Equation (35), and the COC $= 3.895$ by Equation (124). Most papers have used Equation (35) to compute the COC. In any case, the lower bound derived in [9] may underestimate the true value of the COC. As shown in Table 10, all COCs obtained by Džunić's memory method are greater than 3.56.

9. A Lie Symmetry Method

As a practical application of the proposed iterative schemes, we develop a Lie symmetry method based on the Lie group $SL(2, \mathbb{R})$ to solve a second-order nonlinear boundary-value problem. This Lie symmetry method was first developed in [34] for computing the eigenvalues of the generalized Sturm–Liouville problem.
Let
$$u''(y) = \frac{3}{2}u^2(y), \quad y \in (0, 1),$$
$$u(0) = 4, \quad u(1) = 1,$$
whose exact solution is
$$u(y) = \frac{4}{(y + 1)^2}.$$
In the conventional shooting method, an unknown initial slope $u'(0) = x$ is assumed and Equation (125) is integrated with the initial conditions $u(0) = 4$ and $u'(0) = x$, which results in an implicit equation $f(x) = u(1, x) - 1 = 0$ to be solved.
From Equation (125), a nonlinear system consisting of two first-order ordinary differential equations follows:
$$\frac{d}{dy}\begin{bmatrix} u(y) \\ u'(y) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ \frac{3u(y)}{2} & 0 \end{bmatrix}\begin{bmatrix} u(y) \\ u'(y) \end{bmatrix}.$$
Let
$$\mathbf{A} := \begin{bmatrix} 0 & 1 \\ \frac{3u(y)}{2} & 0 \end{bmatrix}$$
be the coefficient matrix. The Lie symmetry system in Equation (128) admits the Lie group symmetry $SL(2, \mathbb{R})$, the two-dimensional real special linear group, because $\mathrm{tr}\,\mathbf{A} = 0$.
By using the closure property of the Lie group, there exists a $\mathbf{G} \in SL(2, \mathbb{R})$ such that the following mapping holds:
$$\begin{bmatrix} u(1) \\ u'(1) \end{bmatrix} = \mathbf{G}\begin{bmatrix} u(0) \\ u'(0) \end{bmatrix},$$
where
$$\mathbf{G}(x) = \exp[\mathbf{A}(\hat{u})],$$
$$\hat{u}(x) = xu(0) + (1 - x)u(1) = 3x + 1,$$
and $x \in (0, 1)$ is an unknown weighting factor to be determined.
We can derive
$$\mathbf{G} = \begin{bmatrix} \cosh\sqrt{\frac{3\hat{u}}{2}} & \sqrt{\frac{2}{3\hat{u}}}\sinh\sqrt{\frac{3\hat{u}}{2}} \\ \sqrt{\frac{3\hat{u}}{2}}\sinh\sqrt{\frac{3\hat{u}}{2}} & \cosh\sqrt{\frac{3\hat{u}}{2}} \end{bmatrix}.$$
Then, it follows from Equations (130) and (133) that
$$u(1) = \cosh\sqrt{\frac{3\hat{u}}{2}}\,u(0) + \sqrt{\frac{2}{3\hat{u}}}\sinh\sqrt{\frac{3\hat{u}}{2}}\,u'(0),$$
$$u'(1) = \sqrt{\frac{3\hat{u}}{2}}\sinh\sqrt{\frac{3\hat{u}}{2}}\,u(0) + \cosh\sqrt{\frac{3\hat{u}}{2}}\,u'(0).$$
Since $u(0) = 4$ and $u(1) = 1$ are given, we can obtain $u'(0)$ from Equations (132) and (134):
$$u'(0) = \frac{1 - 4\cosh\sqrt{\frac{3\hat{u}}{2}}}{\sqrt{\frac{2}{3\hat{u}}}\sinh\sqrt{\frac{3\hat{u}}{2}}} = \frac{1 - 4\cosh\sqrt{\frac{3(3x+1)}{2}}}{\sqrt{\frac{2}{3(3x+1)}}\sinh\sqrt{\frac{3(3x+1)}{2}}}.$$
It is interesting that the unknown slope $u'(0)$ can be derived explicitly in Equation (136) by using the Lie symmetry method; this is more powerful than the traditional shooting method, from which no such explicit formula for $u'(0)$ can be obtained.
Now, we apply the fourth-order Runge–Kutta method to integrate Equation (125) with the initial conditions $u(0) = 4$ and $u'(0)$ given in Equation (136) in terms of $x$. The right-end value must satisfy $u(1) - 1 = 0$, which is an implicit function of $x$. We take $N = 2000$ steps in the Runge–Kutta method and fix the initial guess $x_0 = 0.5$. In Table 11, we compare the NI, the error of $u'(0)$, and the maximum error of $u$ obtained by comparison with Equation (127). The weighting factor $x = 0.614814522263784$ is obtained, such that $u'(0)$ computed from Equation (136) is very close to the exact value $u'(0) = -8$.
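The whole solution chain of this section, i.e., the slope formula (136), the RK4 integration of Equation (125), and a derivative-free iteration on the weighting factor $x$, can be sketched in Python as follows; the Traub-type update of Equation (4) here stands in for the paper's memory-accelerating schemes, and the function names, step count, and tolerance are our own illustrative choices.

```python
import math

def rk4_u1(v0, N=2000):
    """Integrate u'' = (3/2) u^2 on [0, 1] by classical RK4 with N steps,
    starting from u(0) = 4, u'(0) = v0; returns u(1)."""
    h, u, v = 1.0/N, 4.0, v0
    for _ in range(N):
        k1u, k1v = v,            1.5*u**2
        k2u, k2v = v + h*k1v/2,  1.5*(u + h*k1u/2)**2
        k3u, k3v = v + h*k2v/2,  1.5*(u + h*k2u/2)**2
        k4u, k4v = v + h*k3v,    1.5*(u + h*k3u)**2
        u += h*(k1u + 2*k2u + 2*k3u + k4u)/6
        v += h*(k1v + 2*k2v + 2*k3v + k4v)/6
    return u

def slope(x):
    """Initial slope u'(0) from the SL(2,R) mapping, Equation (136)."""
    s = math.sqrt(1.5*(3*x + 1))          # sqrt(3*u_hat/2), u_hat = 3x + 1
    return (1 - 4*math.cosh(s))*s/math.sinh(s)

def f(x):                                 # implicit equation f(x) = u(1, x) - 1
    return rk4_u1(slope(x)) - 1.0

# derivative-free Traub-type iteration (Equation (4)) on the weighting factor
x, gamma = 0.5, 0.01
fx = f(x)
for _ in range(30):
    w = x + gamma*fx
    x_new = x - fx*(x - w)/(fx - f(w))
    fx_new = f(x_new)
    if abs(x_new - x) < 1e-14:
        x = x_new
        break
    gamma = (x_new - x)/(fx - fx_new)
    x, fx = x_new, fx_new
print(x, slope(x))   # x near 0.6148..., u'(0) near -8
```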
Instead of the Dirichlet boundary conditions in Equation (126), we consider mixed-type boundary conditions:
$$u(0) = 4, \quad u'(1) = -1.$$
Now, from Equation (135), it follows that
$$u'(0) = -\frac{1 + 4\sqrt{\frac{3(3x+1)}{2}}\sinh\sqrt{\frac{3(3x+1)}{2}}}{\cosh\sqrt{\frac{3(3x+1)}{2}}}.$$
In Table 12, we compare the NI, the error of $u'(0)$, and the maximum error of $u$. The weighting factor $x = 0.5602528690788923$ is obtained, such that $u'(0)$ computed from Equation (138) is very close to the exact value $u'(0) = -8$.

10. Conclusions

In this paper, we addressed five one-step iterative methods: A (Wu's method), B (Liu's method), C (a novel method), D (Džunić's method), and E (a modification of Džunić's method). Without specific values of the parameters and without memory, they all have second-order convergence; when specific optimal values of the parameters are used, they have third-order convergence. The three critical values $f'(r)$, $f''(r)$, and $f'''(r)$ are the parameters in $a_2$ and $a_3$, which are crucial for achieving good performance of the designed iterative scheme, such that the coefficients (involving $a_2$ and $a_3$) preceding $e_n^2$ and $e_n^3$ are zeros. We introduced a combination function, which is determined by raising the order of convergence. The optimal combination of A and B can generate a fourth-order one-step iterative scheme. When the values of the parameters and the combination function were obtained by a memory-accelerating method with the third-degree Newton polynomial interpolating the previous and current data, we obtained the first memory-accelerating technique to realize a fourth-order one-step iterative scheme.
In the novel method C, a relaxation factor appears. Using the memory-accelerating method to update the values of the relaxation factor and the other parameters, we obtained the second memory-accelerating technique to realize the fourth-order convergence of the derived novel one-step iterative scheme.
We mathematically improved Džunić's method to an iterative scheme with fourth-order convergence, and the third memory-accelerating technique was developed to realize a fourth-order one-step iterative scheme based on Džunić's memory method.
The optimal combination of A and D generated the fourth memory-accelerating technique, realizing a fourth-order one-step iterative scheme based on an optimal combination function between Džunić's and Wu's methods. The optimal combination of B and D generated the fifth memory-accelerating technique, realizing a fourth-order one-step iterative scheme based on an optimal combination function between Džunić's and Liu's methods.
In E, we finally introduced a relaxation factor into Džunić's method, which was optimized to fourth-order convergence by the sixth memory-accelerating technique.
In the first and fourth memory-accelerating techniques, three evaluations of the function and its derivative were required. In contrast, the second, third, fifth, and sixth memory-accelerating techniques needed two evaluations of the function. Numerical tests confirmed that these fourth-order one-step iterative schemes perform very well, with high values of the COC and E.I. Among them, the fifth memory-accelerating technique was the best one, with COC $> 5.028$ and E.I. $> 2.413$ for all testing examples. Recall that the efficiency index of the optimal fourth-order two-step iterative scheme with three function evaluations and without memory is E.I. $= \sqrt[3]{4} = 1.587$.
As an application of the derivative-free one-step iterative schemes with the second, third, fifth, and sixth memory-accelerating techniques, a second-order nonlinear boundary-value problem was solved by the Lie symmetry method. It is remarkable that the Lie symmetry method can derive the unknown initial slope as an explicit formula in the weighting factor $x$, whose implicit nonlinear equation $f(x) = 0$ can be solved with high efficiency and high accuracy.
The basic iterative schemes in Equations (9) and (10) are applicable to finding the roots of highly nonlinear equations, for instance $f_3(x) = (x - 1)^3 - 2 = 0$ with the root $r = 2^{1/3} + 1$, which can be treated well by the proposed accelerated one-step iterative schemes. As for systems of nonlinear equations, more studies are needed to extend the presented accelerating techniques.

Author Contributions

Conceptualization, C.-S.L.; Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L. and C.-W.C.; Validation, C.-S.L. and C.-W.C.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L., C.-W.C. and C.-L.K.; Resources, C.-S.L., C.-W.C. and C.-L.K.; Data curation, C.-S.L., C.-W.C. and C.-L.K.; Writing—original draft, C.-S.L.; Writing—review & editing, C.-W.C.; Visualization, C.-S.L., C.-W.C. and C.-L.K.; Supervision, C.-S.L. and C.-W.C.; Project administration, C.-W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding authors. The data are not publicly available due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chun, C.; Ham, Y. Some fourth-order modifications of Newton’s method. Appl. Math. Comput. 2008, 197, 654–658. [Google Scholar] [CrossRef]
  2. Noor, M.A.; Noor, K.I.; Waseem, M. Fourth-order iterative methods for solving nonlinear equations. Int. J. Appl. Math. Eng. Sci. 2010, 4, 43–52. [Google Scholar]
  3. Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
  4. King, R. A family of fourth-order iterative methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  5. Li, S. Fourth-order iterative method without calculating the higher derivatives for nonlinear equation. J. Algorithms Comput. Technol. 2019, 13. [Google Scholar] [CrossRef]
  6. Chun, C. Certain improvements of Chebyshev-Halley methods with accelerated fourth-order convergence. Appl. Math. Comput. 2007, 189, 597–601. [Google Scholar] [CrossRef]
  7. Kuo, J.; Li, Y.; Wang, X. Fourth-order iterative methods free from second derivative. Appl. Math. Comput. 2007, 184, 880–885. [Google Scholar]
  8. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: New York, NY, USA, 1964. [Google Scholar]
  9. Džunić, J. On efficient two-parameter methods for solving nonlinear equations. Numer. Algorithms 2013, 63, 549–569. [Google Scholar]
  10. Haghani, F.K. A modified Steffensen's method with memory for nonlinear equations. Int. J. Math. Model. Comput. 2015, 5, 41–48. [Google Scholar]
  11. Khdhr, F.W.; Saeed, R.K.; Soleymani, F. Improving the computational efficiency of a variant of Steffensen’s method for nonlinear equations. Mathematics 2019, 7, 306. [Google Scholar] [CrossRef]
  12. Wu, X.Y. A new continuation Newton-like method and its deformation. Appl. Math. Comput. 2000, 112, 75–78. [Google Scholar] [CrossRef]
  13. Lee, M.Y.; Kim, Y.I.; Magreñán, Á.A. On the dynamics of tri-parametric family of optimal fourth-order multiple-zero finders with a weight function of the principal mth root of a function-function ratio. Appl. Math. Comput. 2017, 315, 564–590. [Google Scholar]
  14. Zafar, F.; Cordero, A.; Torregrosa, J.R. Stability analysis of a family of optimal fourth-order methods for multiple roots. Numer. Algorithms 2019, 81, 947–981. [Google Scholar] [CrossRef]
  15. Thangkhenpau, G.; Panday, S.; Mittal, S.K.; Jäntschi, L. Novel parametric families of with and without memory iterative methods for multiple roots of nonlinear equations. Mathematics 2023, 14, 2036. [Google Scholar] [CrossRef]
  16. Singh, M.K.; Singh, A.K. A derivative free globally convergent method and its deformations. Arab. J. Math. 2021, 10, 481–496. [Google Scholar] [CrossRef]
  17. Singh, M.K.; Argyros, I.K. The dynamics of a continuous Newton-like method. Mathematics 2022, 10, 3602. [Google Scholar] [CrossRef]
  18. Liu, C.S. A new splitting technique for solving nonlinear equations by an iterative scheme. J. Math. Res. 2020, 12, 40–48. [Google Scholar] [CrossRef]
  19. Liu, C.S.; Hong, H.K.; Lee, T.L. A splitting method to solve a single nonlinear equation with derivative-free iterative schemes. Math. Comput. Simul. 2021, 190, 837–847. [Google Scholar] [CrossRef]
  20. Džunić, J.; Petković, M.S. On generalized biparametric multipoint root finding methods with memory. J. Comput. Appl. Math. 2014, 255, 362–375. [Google Scholar]
  21. Wang, X.; Zhang, T. A new family of Newton-type iterative methods with and without memory for solving nonlinear equations. Calcolo 2014, 51, 1–15. [Google Scholar] [CrossRef]
  22. Cordero, A.; Lotfi, T.; Bakhtiari, P.; Torregrosa, J.R. An efficient two-parameter family with memory for nonlinear equations. Numer. Algorithms 2015, 68, 323–335. [Google Scholar] [CrossRef]
  23. Chanu, W.H.; Panday, S.; Thangkhenpau, G. Development of optimal iterative methods with their applications and basins of attraction. Symmetry 2022, 14, 2020. [Google Scholar] [CrossRef]
  24. Lotfi, T.; Soleymani, F.; Noori, Z.; Kiliçman, A.; Haghani, F.K. Efficient iterative methods with and without memory possessing high efficiency indices. Discret. Dyn. Nat. Soc. 2014, 2014, 912796. [Google Scholar] [CrossRef]
  25. Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameter. J. Comput. Appl. Math. 2018, 330, 710–720. [Google Scholar] [CrossRef]
  26. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Dynamics of iterative families with memory based on weight functions procedure. Appl. Math. Comput. 2019, 354, 286–298. [Google Scholar] [CrossRef]
  27. Torkashvand, V.; Kazemi, M.; Moccari, M. Structure of a family of three-step with-memory methods for solving nonlinear equations and their dynamics. Math. Anal. Convex Optim. 2021, 2, 119–137. [Google Scholar]
  28. Sharma, E.; Panday, S.; Mittal, S.K.; Joit, D.M.; Pruteanu, L.L.; Jäntschi, L. Derivative-free families of with- and without-memory iterative methods for solving nonlinear equations and their engineering applications. Mathematics 2023, 14, 4512. [Google Scholar] [CrossRef]
  29. Thangkhenpau, G.; Panday, S.; Bolundut, L.C.; Jäntschi, L. Efficient families of multi-point iterative methods and their self-acceleration with memory for solving nonlinear equations. Symmetry 2023, 15, 1546. [Google Scholar] [CrossRef]
  30. Wu, X.Y. Newton-like method with some remarks. Appl. Math. Comput. 2007, 118, 433–439. [Google Scholar] [CrossRef]
  31. Wang, H.; Liu, H. Note on a cubically convergent Newton-type method under weak conditions. Acta Appl. Math. 2010, 110, 725–735. [Google Scholar] [CrossRef]
  32. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  33. Argyros, I.K.; Chen, D.; Qian, Q. The Jarratt method in Banach space setting. J. Comput. Appl. Math. 1994, 51, 103–106. [Google Scholar] [CrossRef]
  34. Liu, C.S. Computing the eigenvalues of the generalized Sturm-Liouville problems based on the Lie-group SL(2, R ). J. Comput. Appl. Math. 2012, 236, 4547–4560. [Google Scholar] [CrossRef]
Table 1. The comparison of the COCs computed by different methods. ×: undefined.

| n | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|
| NM | 1.594 | 1.841 | 1.978 | 1.999 | × | × |
| Algorithm 1 | 2.779 | 3.046 | × | × | | |
| Algorithm 2 | 2.634 | 2.968 | × | × | | |
| Algorithm 3 | 4.899 | × | × | | | |
Table 2. The comparison of different methods for the number of iterations.

| Functions | $x_0$ | NM | JM | TM | KM | CM | Algorithm 3 |
|---|---|---|---|---|---|---|---|
| $g_1$ | −0.3 | 55 | 46 | 46 | 49 | 9 | 5 |
| $g_2$ | 0 | 5 | 3 | 3 | 3 | 3 | 3 |
| $g_3$ | 3 | 7 | 4 | 4 | 4 | 4 | 4 |
| $g_4$ | 3.5 | 11 | 6 | 6 | 7 | 7 | 5 |
| $g_5$ | 1 | 7 | 4 | 4 | 8 | 4 | 4 |
Table 3. The NI, COC, and E.I. for the method of updating three parameters, A, B, and Q, in the first memory-accelerating technique.

| Functions | $x_0$ | $[A_0, B_0]$ | $Q_0$ | NI | COC | E.I. = (COC)$^{1/3}$ |
|---|---|---|---|---|---|---|
| $g_1$ | 1.3 | [17.09, 0.48] | −0.1 | 3 | 4.207 | 1.614 |
| $g_2$ | 0.2 | [−3.94, −0.027] | −0.5 | 3 | 5.433 | 1.758 |
| $g_3$ | 2.1 | [3.99, 0.87] | 0 | 3 | 5.090 | 1.720 |
| $g_4$ | −0.5 | [2.503, 0.627] | 0.3 | 3 | 6.095 | 1.827 |
| $g_5$ | 1.3 | [−1.926, 0.927] | 0 | 3 | 6.795 | 1.894 |
Table 4. The NI, COC, and E.I. for the method of updating two parameters, A and B.

| $\eta$ | −0.9 | −0.6 | −0.3 | −0.2 | 0.1 | 0.3 | 0.6 | 0.9 |
|---|---|---|---|---|---|---|---|---|
| NI | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| COC | 3.413 | 3.114 | 3.233 | 3.443 | 3.081 | 2.974 | 2.870 | 2.795 |
| E.I. = (COC)$^{1/2}$ | 1.847 | 1.765 | 1.798 | 1.855 | 1.755 | 1.724 | 1.694 | 1.672 |
Table 5. The NI, COC, and E.I. for the method of updating three parameters, A, B, and $\eta$, in the second memory-accelerating technique.

| Functions | $x_0$ | $[A_0, B_0]$ | $\eta_0$ | NI | COC | E.I. = (COC)$^{1/2}$ |
|---|---|---|---|---|---|---|
| $f_1$ | 1 | [5.25, 0.9048] | −0.3 | 4 | 4.680 | 2.163 |
| $f_2$ | 0.9 | [14.44, 0.33] | 4 | 4 | 3.535 | 1.880 |
| $f_3$ | 2 | [3.25, 0.231] | −0.3 | 4 | 3.992 | 1.998 |
| $f_4$ | −0.5 | [6.13, 0.54] | 10 | 4 | 4.237 | 2.058 |
| $f_5$ | 1.5 | [0.548, 1.318] | 0.3 | 5 | 3.610 | 1.900 |
Table 6. The NI, COC, and E.I. for the method of updating two parameters, A and p, in the third memory-accelerating technique.

| Functions | $x_0$ | $A_0$ | $p_0$ | NI | COC | E.I. = (COC)$^{1/2}$ |
|---|---|---|---|---|---|---|
| $f_1$ | 1 | 5.25 | −0.3 | 3 | 5.169 | 2.274 |
| $f_2$ | 1.3 | 4.44 | −0.1 | 4 | 4.022 | 2.005 |
| $f_3$ | 2 | 5.11 | −3 | 3 | 4.126 | 2.031 |
| $f_4$ | −0.5 | 4.83 | −10 | 3 | 7.227 | 2.688 |
| $f_5$ | 1.3 | −1.926 | −10 | 3 | 6.945 | 2.635 |
Table 7. The NI, COC, and E.I. for the method of updating three parameters, A, B, and Q, in the fourth memory-accelerating technique.

| Functions | $x_0$ | $A_0$ | $B_0$ | $Q_0$ | NI | COC | E.I. = (COC)$^{1/3}$ |
|---|---|---|---|---|---|---|---|
| $f_1$ | 1.2 | 23.75 | 0.39 | 5 | 3 | 5.156 | 1.728 |
| $f_2$ | 1.4 | −1.97 | 8.34 | 0.5 | 5 | 6.643 | 1.880 |
| $f_3$ | 2.2 | 3.25 | −0.23 | 2 | 3 | 9.127 | 2.090 |
| $f_4$ | −0.5 | 4.83 | 0.61 | 0.5 | 3 | 7.517 | 1.959 |
| $f_5$ | 1.3 | −3.733 | 0.475 | 0.1 | 3 | 7.091 | 1.921 |
Table 8. The NI, COC, and E.I. for the method of updating three parameters, A, B, and Q, in the fifth memory-accelerating technique.

| Functions | $x_0$ | $A_0$ | $B_0$ | $Q_0$ | NI | COC | E.I. = (COC)$^{1/2}$ |
|---|---|---|---|---|---|---|---|
| $f_1$ | 1.2 | 9.99 | 0.67 | 2 | 3 | 5.028 | 2.242 |
| $f_2$ | 1.3 | 5.064 | 69,683 | 1 | 6 | 5.013 | 2.239 |
| $f_3$ | 2.2 | 3.25 | −0.231 | 2 | 3 | 9.127 | 3.021 |
| $f_4$ | −0.5 | 4.83 | 0.61 | 5 | 3 | 7.343 | 2.742 |
| $f_5$ | 1.3 | −3.733 | 0.475 | 0.1 | 3 | 7.091 | 2.663 |
Table 9. The NI, COC, and E.I. for the method of updating three parameters, A, B, and $\beta$, in the sixth memory-accelerating technique.

| Functions | $x_0$ | $A_0$ | $B_0$ | $\beta_0$ | NI | COC | E.I. = (COC)$^{1/2}$ |
|---|---|---|---|---|---|---|---|
| $f_1$ | 1.2 | 17.09 | 0.48 | 2 | 3 | 5.028 | 2.242 |
| $f_2$ | 1.3 | −13.79 | 0.721 | 2 | 5 | 8.526 | 2.920 |
| $f_3$ | 2.2 | 3.25 | −0.23 | 2 | 3 | 7.879 | 2.807 |
| $f_4$ | −0.5 | 22.29 | 0.394 | 1 | 3 | 5.502 | 2.346 |
| $f_5$ | 1.3 | −3.733 | 0.475 | 1 | 3 | 5.081 | 2.254 |
Table 10. The values of the COC for Džunić's memory method and the sixth memory-accelerating technique.

| Functions | $f_1$ | $f_2$ | $f_3$ | $f_4$ | $f_5$ |
|---|---|---|---|---|---|
| Džunić's method | 4.819 | 4.990 | 5.852 | 4.496 | 4.469 |
| Equation (113) | 5.028 | 8.526 | 7.879 | 5.502 | 5.081 |
Table 11. For Equations (125) and (126), comparing the performances of the second, third, fifth, and sixth memory-accelerating techniques in the solution of a second-order nonlinear boundary-value problem by the Lie symmetry method.

| Methods | Second | Third | Fifth | Sixth |
|---|---|---|---|---|
| NI | 4 | 3 | 3 | 4 |
| Error of $u'(0)$ | $2.736 \times 10^{-14}$ | $2.736 \times 10^{-14}$ | $2.736 \times 10^{-14}$ | $2.736 \times 10^{-14}$ |
| Maximum error of $u$ | $3.997 \times 10^{-14}$ | $3.997 \times 10^{-14}$ | $3.997 \times 10^{-14}$ | $3.997 \times 10^{-14}$ |
Table 12. For Equations (125) and (137), comparing the performances of the second, third, fifth, and sixth memory-accelerating techniques in the solution of a second-order nonlinear boundary-value problem by the Lie symmetry method.

| Methods | Second | Third | Fifth | Sixth |
|---|---|---|---|---|
| NI | 5 | 3 | 3 | 3 |
| Error of $u'(0)$ | $2.665 \times 10^{-13}$ | $2.647 \times 10^{-13}$ | $2.647 \times 10^{-13}$ | $2.647 \times 10^{-13}$ |
| Maximum error of $u$ | $4.352 \times 10^{-14}$ | $4.397 \times 10^{-14}$ | $4.397 \times 10^{-14}$ | $4.397 \times 10^{-14}$ |