Article

A Modified Structured Spectral HS Method for Nonlinear Least Squares Problems and Applications in Robot Arm Control

by Rabiu Bashir Yunus 1,2,*, Nooraini Zainuddin 1, Hanita Daud 1, Ramani Kannan 3, Samsul Ariffin Abdul Karim 4,5,6 and Mahmoud Muhammad Yahaya 7

1 Department of Fundamental and Applied Sciences, Faculty of Science and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar 32610, Perak Darul Ridzuan, Malaysia
2 Department of Mathematics, Kano University of Science and Technology, Wudil 713101, Nigeria
3 Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, Bandar Seri Iskandar 32610, Perak Darul Ridzuan, Malaysia
4 Software Engineering Programme, Faculty of Computing and Informatics, Universiti Malaysia Sabah, Jalan UMS, Kota Kinabalu 88400, Sabah, Malaysia
5 Data Technologies and Applications (DaTA) Research Lab, Faculty of Computing and Informatics, Universiti Malaysia Sabah, Jalan UMS, Kota Kinabalu 88400, Sabah, Malaysia
6 Creative Advanced Machine Intelligence (CAMI) Research Centre, Universiti Malaysia Sabah, Jalan UMS, Kota Kinabalu 88400, Sabah, Malaysia
7 Department of Mathematics, Faculty of Science, King Mongkut's University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thung Khru, Bangkok 10140, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(14), 3215; https://doi.org/10.3390/math11143215
Submission received: 30 May 2023 / Revised: 19 June 2023 / Accepted: 21 June 2023 / Published: 21 July 2023
(This article belongs to the Special Issue Numerical Analysis and Modeling)

Abstract

This paper proposes a modification to the Hestenes–Stiefel (HS) method by devising a spectral parameter using a modified secant relation to solve nonlinear least-squares problems. Notably, the proposed method differs from existing approaches in its implementation: it does not require a safeguarding strategy, and its Hessian approximation remains positive definite throughout the iteration process. Numerical experiments were conducted on a range of test problems, with 120 instances, to demonstrate the efficacy of the proposed algorithm by comparing it with existing techniques in the literature. The results obtained validate the effectiveness of the proposed method in terms of the standard metrics of comparison. Additionally, the proposed approach is applied to motion-control problems in a robotic model, yielding favorable outcomes in terms of the robot's motion characteristics.

1. Introduction

Most classical iterative algorithms for handling nonlinear least-squares (NLS) problems demand computation of the gradient along with the Hessian matrix of the objective function. However, computing and storing the entire Hessian matrix is time-consuming, because the exact second derivatives of the residual functions are not accessible at an affordable cost [1]. Two classes of iterative methods are used for solving NLS problems: general unconstrained optimization methods, which include Newton's and quasi-Newton methods, and special iterative methods that take the problem's structure into account, which include the Gauss–Newton, Levenberg–Marquardt, and structured quasi-Newton methods [2]. The NLS problems form a category of unconstrained optimization problems determined by the following equation:
$$\min_{x \in \mathbb{R}^n} f(x), \qquad f(x) = \frac{1}{2}\|C(x)\|^2 = \frac{1}{2}\sum_{i=1}^{m} C_i(x)^2, \tag{1}$$
where $C(x) = \left(C_1(x), C_2(x), \ldots, C_m(x)\right)^T$ and each residual $C_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, 2, \ldots, m$, is a twice continuously differentiable function. $J(x) \in \mathbb{R}^{m \times n}$ stands for the Jacobian of the residual function $C(x)$, while $g(x)$ and $\nabla^2 f(x)$ represent the gradient and Hessian of the objective function $f$, respectively. The gradient and Hessian of the NLS problem stated in Equation (1) have unique structures, which are determined by the following equations:
$$g(x) := \sum_{i=1}^{m} C_i(x)\,\nabla C_i(x) = J(x)^T C(x), \tag{2}$$
$$\nabla^2 f(x) := \sum_{i=1}^{m} \nabla C_i(x)\,\nabla C_i(x)^T + \sum_{i=1}^{m} C_i(x)\,\nabla^2 C_i(x) = J(x)^T J(x) + T(x), \tag{3}$$
where $J(x) = C'(x)$ is the Jacobian matrix of the residual function and $T(x) = \sum_{i=1}^{m} C_i(x) S_i(x)$; here, $C_i(x)$ is the $i$th component of the residual vector $C(x)$ and $S_i(x)$ is the Hessian matrix of $C_i(x)$.
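To make the structure in Equations (2) and (3) concrete, the following sketch evaluates the structured gradient and the Gauss–Newton term of the Hessian for a small residual function; the two-residual toy problem, the helper names, and the finite-difference check are our illustrative assumptions, not part of the paper.

```python
import numpy as np

def residual(x):
    # Toy residual C: R^2 -> R^2 (illustrative choice, not from the paper)
    return np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**2])

def jacobian(x):
    # Hand-coded Jacobian J(x) of the residual above
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, -2.0 * x[1]]])

x = np.array([0.5, 0.3])
C, J = residual(x), jacobian(x)

f = 0.5 * C @ C          # f(x) = (1/2)||C(x)||^2, Equation (1)
g = J.T @ C              # g(x) = J(x)^T C(x), Equation (2)
gauss_newton = J.T @ J   # first (matrix) term of the Hessian in Equation (3)

# Forward-difference check of the structured gradient
eps = 1e-7
g_fd = np.array([(0.5 * np.sum(residual(x + eps * e)**2) - f) / eps
                 for e in np.eye(2)])
print(np.allclose(g, g_fd, atol=1e-5))  # True
```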
Due to the cost associated with calculating the entire Hessian matrix, certain methods have been developed that utilize only first-derivative information; the Levenberg–Marquardt approach and the Gauss–Newton method are well-known examples [3]. However, both tend to perform poorly on non-zero residual problems [4]. In [5], using the weighted Frobenius norm, a universal diagonal updating scheme for diagonal quasi-Newton (DQN) techniques was built through least-change weak secant diagonal updates. Furthermore, based on the structure of the NLS problem, several improvements have been recorded in quasi-Newton algorithms; the reader may refer to the surveys provided in [4,6] for detailed information about NLS algorithms. When iteratively addressing unconstrained optimization problems, conjugate-gradient (CG) methods are particularly well-known for higher-dimensional problems [7,8], due to their minimal memory usage and straightforwardness [9]. Some of the classical CG methods include Hestenes–Stiefel (HS) [10], Polak–Ribière–Polyak (PRP) [11], and Fletcher–Reeves (FR) [12], with their formulas stated as follows:
$$\beta_k^{HS} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, \tag{4}$$

$$\beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}, \tag{5}$$

$$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \tag{6}$$
where $y_{k-1} = g_k - g_{k-1}$ and $\|\cdot\|$ stands for the Euclidean norm of vectors. The CG approaches outlined above inspired researchers to develop several CG variations for unconstrained optimization problems; for more information, see [13,14,15,16,17,18,19].
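As a quick illustration, the three classical coefficients in Equations (4)–(6) can be computed directly from the current and previous gradients and the previous direction; the function name and the random test vectors below are placeholders of ours.

```python
import numpy as np

def cg_betas(g_new, g_old, d_old):
    """Classical CG coefficients of Equations (4)-(6)."""
    y = g_new - g_old                               # y_{k-1} = g_k - g_{k-1}
    beta_hs = (g_new @ y) / (d_old @ y)             # Hestenes-Stiefel (4)
    beta_prp = (g_new @ y) / (g_old @ g_old)        # Polak-Ribiere-Polyak (5)
    beta_fr = (g_new @ g_new) / (g_old @ g_old)     # Fletcher-Reeves (6)
    return beta_hs, beta_prp, beta_fr

rng = np.random.default_rng(0)
g1, g0, d0 = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
print(cg_betas(g1, g0, d0))
```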
Recently, CG algorithms have attracted attention for solving nonlinear least-squares problems. For instance, Kobayashi et al. [20] developed a nonlinear CG method by fusing the structured secant relation with the Dai–Liao concept. Later, Dehghani and Mahdavi-Amiri [21] proposed a scaled CG approach employing a modified structured secant relation to address NLS problems. A diagonal-based estimation using the first and second terms of the Hessian matrix was put forth by Mohammad and Santos [22], although their search direction demanded a safeguarding strategy in order to satisfy the sufficient descent condition. To overcome some of the problems encountered in [22], Yahaya et al. [23] developed structured quasi-Newton-based methods in which the second term is approximated using a higher-order Taylor series expansion. In subsequent efforts, Muhammad and Waziri [24] suggested a couple of BB-like algorithms for handling NLS problems, and, later, Yahaya et al. [25] proposed a modified version improving the numerical performance of [24]. However, both algorithms demand several safeguarding techniques, especially when the spectral parameter is non-positive at some iteration points.
The objective of this research was to devise a modified CG-like method that utilizes a spectral parameter to improve the approximation of the Hessian matrix of the objective function. This is achieved by incorporating modified secant updates in the second part of the Hessian matrix. The motivation behind this work was the expectation that our matrix-free approach, with the inclusion of the spectral parameter update, could capture more information from the objective function and eliminate the need for a safeguarding strategy, unlike the method proposed in [25]. Below is an overview of the key contributions of this study:
  • A new structured spectral quasi-Newton method was designed by devising a spectral parameter and combining the modified Hestenes–Stiefel formula with the modified secant equation.
  • Using a non-monotone line search, the suggested method was shown to be globally convergent under some conditions.
  • Numerical experiments were conducted to compare the performance of the proposed method with other methods found in the literature.
  • The proposed method was applied to solve the motion-control problem of a three-degrees-of-freedom robotic arm, showcasing its practicality.
The remaining sections are organized as follows: Section 2 describes the proposed method and its algorithm. Section 3 verifies the convergence properties of the proposed algorithm under standard conditions. In Section 4, the effectiveness of the proposed method is tested through numerical experiments comparing it with other methods in the literature. Section 5 applies the new algorithm to a robotic problem with three degrees of freedom. Section 6 presents limitations and directions for future work. Finally, Section 7 presents the conclusions drawn from these comparisons.

2. Motivation and Formulation of the Modified SSHS Method

To approximate the Hessian of (1), we derive a structured spectral quasi-Newton approximation. Consider Equation (3) at a specific iteration, say $k$, for $k > 0$:
$$T(x_{k-1}) = \sum_{i=1}^{m} C_i(x_{k-1})\,\nabla^2 C_i(x_{k-1}). \tag{7}$$
For the matrix $T(x_k)$ to satisfy the following secant equation, we construct a structured vector $\hat{\Omega}_{k-1}$:
$$T(x_k)\,s_{k-1} = \hat{\Omega}_{k-1}, \tag{8}$$
where $s_{k-1} = x_k - x_{k-1}$. Utilizing and simplifying the Taylor series expansion of $\nabla C_i$ at $x_{k-1}$ yields the following equation:
$$\nabla C_i(x_{k-1}) \approx \nabla C_i(x_k) - \nabla^2 C_i(x_k)^T s_{k-1}, \quad i = 1, 2, \ldots, m. \tag{9}$$
Therefore, pre-multiplying (9) by $C_i(x_k)$ and simplifying yields the following equation:
$$C_i(x_k)\,\nabla C_i(x_{k-1}) \approx C_i(x_k)\,\nabla C_i(x_k) - C_i(x_k)\,\nabla^2 C_i(x_k)^T s_{k-1}, \quad i = 1, 2, \ldots, m, \tag{10}$$
and Equation (10) becomes
$$C_i(x_k)\,\nabla^2 C_i(x_k)^T s_{k-1} \approx C_i(x_k)\,\nabla C_i(x_k) - C_i(x_k)\,\nabla C_i(x_{k-1}), \quad i = 1, 2, \ldots, m. \tag{11}$$
Post-multiplying Equation (7) by $s_{k-1}$ and summing Equation (11) over $i$, we obtain
$$T(x_k)\,s_{k-1} \approx \sum_{i=1}^{m}\left[C_i(x_k)\,\nabla C_i(x_k) - C_i(x_k)\,\nabla C_i(x_{k-1})\right] = \left(J(x_k) - J(x_{k-1})\right)^T C(x_k). \tag{12}$$
Post-multiplying the first term of (3) by $s_{k-1}$ and adding (12), we obtain
$$\nabla^2 f(x_k)\,s_{k-1} \approx J(x_k)^T J(x_k)\,s_{k-1} + \left(J(x_k) - J(x_{k-1})\right)^T C(x_k). \tag{13}$$
Assume that $P_k \approx \nabla^2 f(x_k)$, such that
$$P_k\,s_{k-1} \approx \hat{\Omega}_{k-1}, \tag{14}$$
where $P_k$ is some $n \times n$ symmetric positive definite matrix obeying the quasi-Newton Equation (14) and $s_{k-1} = x_k - x_{k-1} = \alpha_{k-1} d_{k-1}$. We obtain $\hat{\Omega}_{k-1}$ from Equation (13) as follows:
$$\hat{\Omega}_{k-1} = J(x_k)^T J(x_k)\,s_{k-1} + \left(J(x_k) - J(x_{k-1})\right)^T C(x_k). \tag{15}$$
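In code, the structured vector of Equation (15) needs only the two most recent Jacobians, the current residual, and the step; a minimal sketch (the function name is ours):

```python
import numpy as np

def omega_hat(J_new, J_old, C_new, s):
    """Structured secant vector of Equation (15):
    Omega_hat = J(x_k)^T J(x_k) s_{k-1} + (J(x_k) - J(x_{k-1}))^T C(x_k).
    Only matrix-vector products are formed; no n x n matrix is stored."""
    return J_new.T @ (J_new @ s) + (J_new - J_old).T @ C_new
```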
Taking inspiration from [25], the modified method is designed to solve Equation (1) by incorporating the modified secant relation (15) into the CG formula, and we determined the search direction in the following manner:
$$d_k = \begin{cases} -g_k, & k = 0, \\ -\lambda_k g_k + \beta_k^{SSHS} d_{k-1}, & k \geq 1. \end{cases} \tag{16}$$
The CG coefficient was defined as follows:
$$\beta_k^{SSHS} = \max\left\{\frac{g_k^T \hat{\Omega}_{k-1}}{d_{k-1}^T \hat{\Omega}_{k-1}},\; 0\right\}, \tag{17}$$
where β k S S H S is the new CG coefficient, which we named the structured spectral Hestenes–Stiefel method (SSHS). The spectral parameter is defined as:
$$\lambda_k = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T \hat{\Omega}_{k-1}}, \tag{18}$$
where $\lambda_k$ is the spectral parameter and $\hat{\Omega}_{k-1}$ is the modified secant vector defined in (15). One of the advantages of this method is its CG search direction, which ensures fast convergence and efficient memory usage, unlike the methods discussed in [24,25], where the direction follows the steepest-descent method. These methods require storing the entire history of gradient vectors, which can be memory-intensive for large-scale problems. We employed the non-monotone line search developed by Zhang and Hager [26] to guarantee the global convergence of the proposed algorithm. Below is a description of the non-monotone line search. If the direction $d_k$ is sufficiently descending, the selection of the step length $\alpha$ fulfills the following Armijo-type non-monotone line-search condition:
$$f(x_k + \alpha d_k) \leq V_k + \delta \alpha g_k^T d_k, \quad \delta \in (0, 1), \quad g_k = J_k^T C_k, \tag{19}$$
where
$$V_0 = f(x_0), \quad V_{k+1} = \frac{n_k Q_k V_k + f(x_{k+1})}{Q_{k+1}}, \quad Q_0 = 1, \quad Q_{k+1} = n_k Q_k + 1, \tag{20}$$
and $J_k = J(x_k)$, $C_k = C(x_k)$. The parameter $n_k$ regulates the degree of non-monotonicity. The non-monotone line search shown in Equation (19) simplifies to a Wolfe- or Armijo-type line search if $n_k = 0$ for every $k$.
Remark 1
([26]). The sequence $V_k$ lies between $f(x_k)$ and $D_k$, where
$$D_k = \frac{1}{k+1}\sum_{i=0}^{k} f(x_i), \quad k \geq 0. \tag{21}$$
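A small helper makes the bookkeeping of Equations (19)–(21) explicit: given the current pair $(V_k, Q_k)$ and the accepted $f(x_{k+1})$, it returns the updated pair. This is a sketch of the Zhang–Hager update with names chosen by us.

```python
def update_nonmonotone(V, Q, f_new, eta):
    """Zhang-Hager averages of Equation (20):
    Q_{k+1} = eta_k * Q_k + 1,
    V_{k+1} = (eta_k * Q_k * V_k + f(x_{k+1})) / Q_{k+1}."""
    Q_new = eta * Q + 1.0
    V_new = (eta * Q * V + f_new) / Q_new
    return V_new, Q_new

# With eta = 0 this reduces to V_{k+1} = f(x_{k+1}) (monotone Armijo);
# with eta = 1, V_k becomes the running average D_k of Remark 1.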
Now, the sequential steps for solving Equation (1) are described in Algorithm 1 as follows:
Algorithm 1: Structured Spectral Hestenes–Stiefel (SSHS)
Inputs: Given $x_0 \in \mathbb{R}^n$, $0 < n_{min} \leq n_{max}$, $\delta \in (0, 1)$, and $k_{max}$. Let $\epsilon > 0$ be the tolerance. Set $k \leftarrow 0$, $V_0 = f_0$, $\lambda_0 = 1$, $Q_0 = 1$.
Step 1: Compute $C_k$, $f_k$, and $g_k$.
Step 2: While $\|g_k\| > \epsilon$ and $k \leq k_{max}$, perform
Step 3: Compute $d_k$ using (16).
    Initialize $\alpha = 1$;
    while $f(x_k + \alpha d_k) > V_k + \delta \alpha g_k^T d_k$, carry out
        $\alpha = \alpha / 2$.
    End
    Compute the scalars $\beta_k^{SSHS}$ and $\lambda_k$ using (17) and (18), i.e.,
        $\beta_k^{SSHS} = \max\left\{\frac{g_k^T \hat{\Omega}_{k-1}}{d_{k-1}^T \hat{\Omega}_{k-1}},\, 0\right\}$;  $\lambda_k = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T \hat{\Omega}_{k-1}}$.
    Compute the next iterate $x_{k+1} = x_k + \alpha_k d_k$.
    Update $n = \min\{\max\{\lambda_k, n_{min}\}, n_{max}\}$.
    Choose $n_k \in [n_{min}, n_{max}]$ and compute $Q_{k+1}$ and $V_{k+1}$ using Equation (20).
    Set $k \leftarrow k + 1$.
End
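The following is a minimal, self-contained Python sketch of Algorithm 1 under our reading of the paper: backtracking by halving, $\beta_k^{SSHS}$ and $\lambda_k$ from Equations (17) and (18), and a fixed non-monotonicity parameter standing in for $n_k$. These simplifications are ours, and the bounds $t_{min}$, $t_{max}$ used in the experiments of Section 4 are omitted.

```python
import numpy as np

def sshs(residual, jacobian, x0, eps=1e-4, k_max=1000, delta=1e-4, eta=0.5):
    """Sketch of Algorithm 1 (SSHS). residual: x -> C(x) in R^m,
    jacobian: x -> J(x) in R^{m x n}. eta plays the role of n_k."""
    x = np.asarray(x0, dtype=float)
    C, J = residual(x), jacobian(x)
    f = 0.5 * C @ C
    g = J.T @ C
    V, Q = f, 1.0
    d = -g                                   # d_0 = -g_0, Equation (16)
    for k in range(k_max):
        if np.linalg.norm(g) <= eps:
            break
        # Non-monotone Armijo backtracking, Equation (19)
        alpha = 1.0
        while 0.5 * np.sum(residual(x + alpha * d)**2) > V + delta * alpha * (g @ d):
            alpha *= 0.5
        s = alpha * d
        x_new = x + s
        C_new, J_new = residual(x_new), jacobian(x_new)
        f_new = 0.5 * C_new @ C_new
        g_new = J_new.T @ C_new
        # Structured secant vector, Equation (15)
        omega = J_new.T @ (J_new @ s) + (J_new - J).T @ C_new
        # Spectral parameter (18) and CG coefficient (17)
        lam = (s @ s) / (s @ omega)
        beta = max((g_new @ omega) / (d @ omega), 0.0)
        d = -lam * g_new + beta * d          # Equation (16)
        # Zhang-Hager averages, Equation (20)
        Q_new = eta * Q + 1.0
        V = (eta * Q * V + f_new) / Q_new
        Q = Q_new
        x, C, J, f, g = x_new, C_new, J_new, f_new, g_new
    return x

# Example with the toy residual/jacobian from the sketch in Section 1:
# x_star = sshs(residual, jacobian, np.array([0.5, 0.3]))
```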

3. Convergence Analysis

This section presents the convergence analysis of the modified SSHS method. To begin, we introduce some valuable assumptions.
Assumption 1.
(a) The level set $L = \{x : f(x) \leq f(x_0)\}$ is bounded, where $x_0$ denotes the starting point of Algorithm 1.
(b) There are non-negative constants $c_1$, $c_2$, $c_3$, $d_1$, $d_2$, $d_3$ such that
$$\|J(x) - J(y)\| \leq c_1 \|x - y\|, \quad \text{for all } x, y \in L, \tag{22}$$
$$\|C(x) - C(y)\| \leq c_2 \|x - y\|, \quad \text{for all } x, y \in L. \tag{23}$$
It is obvious that (22) and (23) imply
$$|f(x) - f(y)| \leq c_3 \|x - y\|, \quad \text{for all } x, y \in L, \tag{24}$$
and
$$\|C(x)\| \leq d_1, \quad \|J(x)\| \leq d_2, \quad \|g(x)\| \leq d_3, \quad \forall x \in L. \tag{25}$$
Assumption 2.
On an open convex set that contains the level set $L$, the gradient $g(x) = J(x)^T C(x)$ of problem (1) is uniformly continuous.
Lemma 1
([25]). Let $f$ satisfy Assumption 1 and let $\hat{\Omega}_{k-1}$ be defined by (15). Then, a constant $M \in (0, 1)$ exists such that
$$\|\hat{\Omega}_{k-1}\| \leq M \|s_{k-1}\|, \quad \forall k \geq 0. \tag{26}$$
Proof of Lemma 1.
Keep in mind that $\hat{\Omega}_{k-1}$ is defined in (15); hence,
$$\|\hat{\Omega}_{k-1}\| = \left\|J(x_k)^T J(x_k)\,s_{k-1} + \left(J(x_k) - J(x_{k-1})\right)^T C(x_k)\right\|.$$
Applying the triangle inequality, we obtain
$$\|\hat{\Omega}_{k-1}\| \leq \|J(x_k)\|^2 \|s_{k-1}\| + \|J(x_k) - J(x_{k-1})\| \|C(x_k)\|.$$
By using the inequalities in Assumption 1, we obtain
$$\|\hat{\Omega}_{k-1}\| \leq d_2^2 \|s_{k-1}\| + c_1 \|x_k - x_{k-1}\| \|C(x_k)\|.$$
By using the inequality in Equation (25), we obtain
$$\|\hat{\Omega}_{k-1}\| \leq d_2^2 \|s_{k-1}\| + c_1 d_1 \|s_{k-1}\| = \left(d_2^2 + c_1 d_1\right) \|s_{k-1}\|.$$
Hence, by setting $M := d_2^2 + c_1 d_1$, the inequality (26) holds. □
Lemma 2.
Assume that Algorithm 1 generated the sequence $\{x_k\}$ and the search directions $\{d_k\}$. Then, there exist two positive constants $m_1$ and $m_2$ such that the following inequalities hold for every $k \geq 0$:
(a) $g_k^T d_k \leq -m_1 \|g_k\|^2$,
(b) $\|d_k\| \leq m_2 \|g_k\|$.
Proof of Lemma 2.
To show the inequality in (a), if the direction $d_k$ is defined by (16), then
$$g_k^T d_k = g_k^T\left(-\lambda_k g_k + \beta_k^{SSHS} d_{k-1}\right) = -\frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T \hat{\Omega}_{k-1}} \|g_k\|^2 + \frac{g_k^T \hat{\Omega}_{k-1}}{d_{k-1}^T \hat{\Omega}_{k-1}}\, g_k^T d_{k-1}$$
$$\leq -\frac{\|s_{k-1}\|^2}{\|s_{k-1}\| \|\hat{\Omega}_{k-1}\|} \|g_k\|^2 + \frac{\|g_k\|^2 \|\hat{\Omega}_{k-1}\| \|d_{k-1}\|}{\|\hat{\Omega}_{k-1}\| \|d_{k-1}\|} = -\frac{\|s_{k-1}\|}{\|\hat{\Omega}_{k-1}\|} \|g_k\|^2 + \|g_k\|^2$$
$$\leq -\left(\frac{1}{M} - 1\right) \|g_k\|^2.$$
The result in (a) follows by setting $m_1 := \frac{1}{M} - 1$, which is positive since $M \in (0, 1)$. To show the inequality in (b), we proceed as follows:
$$\|d_k\| = \left\|-\lambda_k g_k + \beta_k^{SSHS} d_{k-1}\right\| \leq \frac{\|s_{k-1}\|^2}{\|s_{k-1}\| \|\hat{\Omega}_{k-1}\|} \|g_k\| + \frac{\|g_k\| \|\hat{\Omega}_{k-1}\| \|d_{k-1}\|}{\|\hat{\Omega}_{k-1}\| \|d_{k-1}\|} \leq \left(\frac{1}{M} + 1\right) \|g_k\|.$$
The result follows by letting $m_2 := 1 + \frac{1}{M}$, and the proof is completed. □
Lemma 3.
Let $\{x_k\}$ be the iteration sequence produced by Algorithm 1 with $g_k \neq 0$, and let Assumption 2 hold. Moreover, using Lemma 2, where $d_k$ was shown to be a descent direction, there exists a small positive step length $\alpha^*$ such that
$$f(x_k + \alpha^* d_k) \leq V_k + \delta \alpha^* g_k^T d_k$$
holds, indicating a well-defined line search.
Proof of Lemma 3.
To show that we can always find a small enough step length $\alpha^*$, we suppose, by way of contradiction, that
$$f(x_k + \alpha_j d_k) > V_k + \delta \alpha_j g_k^T d_k, \quad \alpha_j > 0, \tag{27}$$
holds, with $\lim_{j \to \infty} \alpha_j = 0$.
However, since $f(x_k) = \frac{1}{2}\|C(x_k)\|^2$ as defined in (1), this implies that
$$f(x_k) = \frac{1}{2}\|C(x_k)\|^2 \geq 0, \quad \forall k.$$
In addition, as stated in Remark 1, $V_k$ is between $V_{k-1}$ and $f(x_k)$; i.e., it is a convex combination of $f(x_k)$ and $V_{k-1}$. Further, $V_0 = f(x_0)$ at $k = 0$; then, we obtain $V_k \geq 0$ for all $k$.
Using the boundedness of $d_k$ and Lemma 2(a), (27) becomes
$$f(x_k + \alpha_j d_k) > V_k - \delta \alpha_j m_1 \|g_k\|^2;$$
thus, using $\lim_{j \to \infty} \alpha_j = 0$, we obtain
$$f(x_k) \geq V_k. \tag{28}$$
However,
$$V_k = \frac{n_{k-1} Q_{k-1} V_{k-1} + f(x_k)}{n_{k-1} Q_{k-1} + 1},$$
indicating that $V_k$ is a convex combination of $f(x_k)$ and $V_{k-1}$; therefore, combining this statement with (28) yields $f(x_k) \geq V_k$; hence, this means that $n_{k-1} = 0$, because $Q_{k-1}$ and $V_{k-1}$ cannot be zero.
Thus, the non-monotone line search becomes monotone; i.e., Equation (27) becomes $f(x_k + \alpha_j d_k) > f(x_k) + \delta \alpha_j g_k^T d_k$, indicating that
$$\frac{f(x_k + \alpha_j d_k) - f(x_k)}{\alpha_j} > \delta g_k^T d_k.$$
Now, taking the limit as $j \to \infty$ and using the uniform continuity of the gradient, we obtain $g_k^T d_k \geq \delta g_k^T d_k$. However, since $g_k^T d_k < 0$, this clearly indicates that $\delta \geq 1$, which is a contradiction. This completes the proof. □
Theorem 1
([26]). Suppose $f(x)$ is given by Equation (1), and also that Assumptions 1 and 2 hold. Then, the sequence $\{x_k\}$ produced by SSHS is included in the level set $L$ and
$$\liminf_{k \to \infty} \|g_k\| = 0. \tag{29}$$
Furthermore, if $n_{max} < 1$, then
$$\lim_{k \to \infty} \|g_k\| = 0. \tag{30}$$
Proof of Theorem 1.
The proof can easily be obtained from [26], with slight modification. □

4. Numerical Experiments and Discussion

This section presents the outcomes of the numerical experiments on the performance of our newly proposed SSHS algorithm for handling NLS problems. We verified the effectiveness of the proposed approach in contrast with the structured spectral gradient method (SSGM1) [24] and the alternative structured spectral algorithm (ASSA1) [25], which are derived based on spectral update parameters. The set of test problems consists of the 15 large-scale benchmark NLS functions described in Table 1. We considered eight distinct dimensions for each of the problems: 1000, 3000, 5000, 7000, 9000, 11,000, 13,000, and 15,000. For more detail on the problems considered, see Table 1.
The experiment was carried out on a computer with an Intel CORE-i7-3537U processor operating at 2.00 GHz with 8 GB RAM, using MATLAB R2022a. The proposed algorithm, ASSA1, and SSGM1 were implemented with the parameters listed below.
  • SSHS algorithm: $t_{min} = 10^{-30}$, $t_{max} = 10^{30}$, $\delta = 10^{-4}$, $n_{min} = 0.1$, $n_{max} = 0.85$, and tolerance $\epsilon = 10^{-4}$.
  • SSGM1 and ASSA1: these approaches used the same settings as in [24,25], respectively, with no changes.
Furthermore, during the execution process, the stopping criterion $\|g_k\| \leq 10^{-4}$ was employed. A failure was reported using '#' if either of the following scenarios occurred:
(i) there were more than 1000 iterations, or
(ii) there were more than 5000 function evaluations.
To assess the SSHS method's effectiveness, we considered the following metrics for comparison: (a) the number of iterations (NI); (b) the number of function evaluations (NF); (c) the CPU time taken (TIME); (d) the number of gradient evaluations (NG); and (e) the function value (FV). The results of the experiment for these metrics are reported in Table 2a–c.
Considering the results obtained in Table 2a–c, it is evident that each algorithm is competitive. Interestingly, the ASSA1 algorithm failed to solve problems P2, P10, P11, and P14, and SSGM1 failed to solve problems P1, P2, P3, P5, P6, P10, P11, P12, P13, P14, and P15. All the test problems considered in this experiment were successfully solved by the novel SSHS algorithm. The results reported in Table 2a–c are encapsulated in Figure 1a–c using the performance profile of Dolan and Moré [30]. In terms of NI, NF, and TIME, these figures demonstrate that the new SSHS algorithm performs better than the ASSA1 and SSGM1 algorithms; the effectiveness and efficiency of the new SSHS algorithm are thus evident. Our method demonstrates slow convergence on problems P2, P10, and P14, but the other methods, in comparison, failed to solve these problems altogether. The comparison between these algorithms was carried out using the Dolan and Moré [30] performance profile. First, we provide a summarized description of this tool: it involves a performance ratio that compares a solver's ($s$) performance on problem $p$ with the best result obtained by any solver for this problem, as provided by [31]:
$$r_{p,s} = \frac{\tau_{p,s}}{\min\{\tau_{p,s} : s \in S\}}. \tag{31}$$
Suppose we select a parameter $r_M$ such that $r_M \geq r_{p,s}$ is satisfied for all pairs of $p$ and $s$. This condition implies that $r_{p,s} = r_M$ holds true only when solver $s$ fails to solve problem $p$. Although we might be interested in the performance of solver $s$ on individual problems, our goal is to obtain a comprehensive evaluation of the solver's overall performance, as defined by
$$\rho_s(\tau) = \frac{1}{n_p}\,\mathrm{size}\left\{p \in P : r_{p,s} \leq \tau\right\}. \tag{32}$$
Therefore, $\rho_s(\tau)$ is the probability, for solver $s \in S$, that the performance ratio $r_{p,s}$ is within a factor $\tau \in \mathbb{R}$ of the best possible ratio [31]. Moreover, the best solver has high values of $\rho(\tau)$ and is visible in the figure's upper-right corner.
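Under the above definitions, the profile reduces to a few lines of NumPy. The sketch below assumes a cost matrix T with T[i, j] the measure (e.g., CPU time) of solver j on problem i, with np.inf marking failures; the function name and the example data are ours.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile [30].
    T: (n_problems, n_solvers) costs, np.inf for failures.
    Returns rho with rho[t, j] = fraction of problems on which
    r_{p,s} = T[p, j] / min_s T[p, s] is at most taus[t]."""
    best = T.min(axis=1, keepdims=True)   # best solver per problem
    ratios = T / best                     # performance ratios r_{p,s}
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Example with 3 problems and 2 solvers ('#' failures become np.inf)
T = np.array([[1.0, 2.0], [3.0, np.inf], [2.0, 2.0]])
print(performance_profile(T, taus=[1.0, 2.0, 4.0]))
```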
The results shown in Figure 1d,e illustrate the performance profiles of the SSHS, ASSA1, and SSGM1 techniques, as measured by the number of gradient evaluations and the function value, respectively. Furthermore, as Figure 1e shows the comparative errors of SSHS and the other methods, the SSHS approach converges to a good approximation of the solution with relatively little error. Based on the findings shown in Table 2a–c and Figure 1a–e, we conclude that the suggested method is more successful than the compared methods for handling complicated nonlinear least-squares problems.

5. Application in Robotic Arm Motion Control

In this section, the proposed method (Algorithm 1) is applied to the real-time motion-control problem of a three-degrees-of-freedom planar robotic manipulator. This problem involves optimizing a time-varying nonlinear system. The majority of unconstrained optimization methods, such as [32,33], were specifically designed for addressing static nonlinear optimization problems. As a result, they may not possess the capability to effectively handle time-varying nonlinear optimization (TVNO) problems [34]. The fundamental distinction between TVNO problems and static nonlinear optimization problems is that TVNO problems undergo changes over time. This difference highlights the crucial role played by the time derivative in obtaining precise real-time solutions for TVNO problems. The description of the problem begins with a discrete-time kinematic representation at the position level, as mentioned in [34,35]:
$$C(\theta_k) = c_k,$$
where the joint angle vector $\theta_k \in \mathbb{R}^3$ is the parameter and $C(\theta_k)$ denotes the kinematic mapping function with a familiar structure that can be formulated as follows:
$$C(\theta) = \begin{pmatrix} l_1 \cos\theta_1 + l_2 \cos(\theta_1 + \theta_2) + l_3 \cos(\theta_1 + \theta_2 + \theta_3) \\ l_1 \sin\theta_1 + l_2 \sin(\theta_1 + \theta_2) + l_3 \sin(\theta_1 + \theta_2 + \theta_3) \end{pmatrix}.$$
The function $C(\cdot)$ is used to map the active joint displacements $\theta_k \in \mathbb{R}^3$ to the position and orientation of the robot's end-effector or any part of the robot. The length of each link is denoted by $l_i$ (for $i = 1, 2, 3$). In the context of robotic motion control, $C(\theta)$ represents the end-effector position vector. Let $c(t_k) \in \mathbb{R}^2$ be the preferred path vector at a specific moment $t_k$. Then, a least-squares problem is formulated and addressed at each instant $t_k \in [0, t_f]$, described as follows:
$$\min_{\theta \in \mathbb{R}^3} \frac{1}{2}\left\|C(\theta) - c(t_k)\right\|^2, \tag{33}$$
where the required Lissajous curve route at time   t k , as described in [33], is expressed as follows:
$$c(t_k)^1 = \begin{pmatrix} 1.5 + 0.2\sin(2t_k) \\ \frac{\sqrt{3}}{2} + 0.2\sin(t_k) \end{pmatrix}, \tag{34}$$

$$c(t_k)^2 = \begin{pmatrix} 1.5 + 0.2\sin(4t_k) \\ \frac{\sqrt{3}}{2} + 0.2\sin(3t_k) \end{pmatrix}. \tag{35}$$
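For reference, the forward-kinematic map of the 3-DOF planar arm and the two target paths of Equations (34) and (35) translate directly into code; unit link lengths are an assumption of ours, since the $l_i$ values are not stated here.

```python
import numpy as np

L1 = L2 = L3 = 1.0   # link lengths l_1, l_2, l_3 (assumed values)

def forward_kinematics(theta):
    """End-effector position C(theta) of the 3-DOF planar arm."""
    a1 = theta[0]
    a2 = theta[0] + theta[1]
    a3 = theta[0] + theta[1] + theta[2]
    return np.array([L1*np.cos(a1) + L2*np.cos(a2) + L3*np.cos(a3),
                     L1*np.sin(a1) + L2*np.sin(a2) + L3*np.sin(a3)])

def lissajous_1(t):
    # Desired path c(t)^1, Equation (34)
    return np.array([1.5 + 0.2*np.sin(2*t), np.sqrt(3)/2 + 0.2*np.sin(t)])

def lissajous_2(t):
    # Desired path c(t)^2, Equation (35)
    return np.array([1.5 + 0.2*np.sin(4*t), np.sqrt(3)/2 + 0.2*np.sin(3*t)])
```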
Equation (33) presented above has a similar structure to that of Equation (1). Consequently, the SSHS algorithm can be applied to compute its solution. The steps for solving the robotic motion-control problem are outlined in Algorithm 2.
Algorithm 2: SSHS for solving (33)
Inputs: initialize the parameters $t_0$, $\theta(t_0)$, $t_{max}$, $g$, and $K_{max}$.
for $k = 1 : K_{max}$ perform
    $t_k = k \cdot g$;
    Evaluate $c(t_k)$ using (34) or (35).
    Compute $\theta(t_0) \leftarrow$ SSHS$(\theta(t_0), c(t_k))$ from Algorithm 1.
    Set $\theta_{new} = [\theta_{new}; \theta(t_0)]$;
end for
Output: $\theta_{new}$
Remark 2.
The Lissajous curve $c(t_k)$ depends on the following scenarios: (i) if $c(t_k) = c(t_k)^1$, we track the Lissajous curve route given by Equation (34), and (ii) if $c(t_k) = c(t_k)^2$, we track the Lissajous curve route given by Equation (35).
The following parameters were defined as part of the SSHS algorithm's implementation for the model: the initial time instant $t = 0$ with $\theta(t_0) = (0, \pi/3, \pi/2)^T$. In addition, we chose a maximum time of 10 s.
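A sketch of the tracking loop of Algorithm 2 follows, assuming the kinematics and path functions from the previous sketch; the SSHS solver can be swapped in for any NLS solver, and here SciPy's least_squares stands in for SSHS so the snippet runs on its own.

```python
import numpy as np
from scipy.optimize import least_squares

def track_path(kinematics, path, theta0, t_max=10.0, gap=0.01):
    """Algorithm 2 sketch: a warm-started NLS solve at each sampling time.
    kinematics: theta -> end-effector position; path: t -> desired position;
    gap plays the role of the sampling parameter g."""
    theta = np.asarray(theta0, dtype=float)
    trajectory = []
    for k in range(1, int(t_max / gap) + 1):
        t_k = k * gap
        # Solve min (1/2)||C(theta) - c(t_k)||^2 of Equation (33),
        # warm-started at the previous solution; least_squares stands
        # in for SSHS here.
        sol = least_squares(lambda th: kinematics(th) - path(t_k), theta)
        theta = sol.x
        trajectory.append(theta.copy())
    return np.array(trajectory)

# Usage with the functions from the previous sketch:
# traj = track_path(forward_kinematics, lissajous_1,
#                   theta0=[0.0, np.pi/3, np.pi/2])
```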
Upon careful examination of the depicted figures showcasing the outcomes, the effective utilization of the SSHS algorithm in solving Equation (33) becomes evident. Figure 2a–d provides graphical representations giving a clear visual illustration of the findings. Specifically, Figure 2a emphasizes the successful synthesis of the robot trajectories: the figure vividly portrays the model of the robot's end-effector closely adhering to the desired trajectory. This indicates that the SSHS algorithm effectively produced trajectories aligned closely with the intended path, demonstrating its ability to generate accurate and precise results. Additionally, Figure 2c,d provides compelling evidence of a residual error approaching $10^{-6}$. The residual error measures the discrepancy between the desired and achieved outcomes; in this case, the near-$10^{-6}$ residual error signifies an exceptionally high level of precision achieved by the SSHS algorithm. This result further supports the effectiveness of the algorithm in obtaining accurate and reliable solutions to Equation (33).
Furthermore, the outputs for the second Lissajous curve, Equation (35), are depicted in Figure 3. Figure 3e provides a visual representation of the successful synthesis of robot trajectories accomplished by the SSHS algorithm; it is evident from this figure that the robot effectively tracks the movement through its end-effector. Additionally, Figure 3g,h presents a residual error of approximately $10^{-6}$, thereby confirming the convergence of the SSHS algorithm. This convergence assures the accuracy and effectiveness of the algorithm in achieving the desired outcomes.

6. Limitations and Direction for Future Research

Our method demonstrates slow convergence, particularly in problems P2, P10, and P14. Our future research aims to address this limitation. One avenue we will explore is extending the method to a three-term spectral conjugate gradient approach by leveraging the conjugacy condition discussed in [34]. This extension will allow us to apply the new method to tackle more intricate robotic motion-control problems because, in a three-term approach, we have three subspaces: the previous gradient, the previous direction, and the previous structured modified secant relation. Furthermore, we are intrigued by the potential of utilizing the concepts presented in this paper to solve systems of nonlinear equations and their applications in data fitting and imaging problems. This opens exciting possibilities for employing our method in various domains where accurate modeling and optimization are crucial. By expanding the scope of our research, we hope to contribute to advancements in these areas and to unlock new opportunities for practical applications.

7. Conclusions

In this research paper, a modified structured spectral Hestenes–Stiefel method was proposed for solving nonlinear least-squares problems. The algorithm was developed to satisfy the modified secant equation by introducing a CG-like parameter $\beta_k$ that approximates the action of the Hessian of the objective function on a vector. The implemented algorithm does not require the construction or storage of matrices during the iteration phase, making it suitable for large-scale problems. Additionally, a safeguarding strategy, as used in [25], is not necessary. Numerical experiments were conducted on benchmark test functions to demonstrate the effectiveness of the novel approach. It was found that the SSHS method outperformed the ASSA1 and SSGM1 methods on the five metrics used in the numerical results. Finally, the SSHS algorithm was applied to address motion-control problems in a three-degrees-of-freedom robot, which indicated the applicability of the suggested method.

Author Contributions

Conceptualization, R.B.Y.; methodology, R.B.Y., N.Z. and M.M.Y.; software, R.B.Y. and M.M.Y.; validation, H.D., R.K. and S.A.A.K.; formal analysis, R.B.Y., N.Z. and M.M.Y.; visualization, R.B.Y. and N.Z.; supervision, N.Z., H.D., R.K. and S.A.A.K.; writing—original draft preparation, R.B.Y., N.Z. and R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by a YUTP grant under the cost center 015LC0-315, Universiti Teknologi PETRONAS, Malaysia.

Data Availability Statement

Not applicable.

Acknowledgments

The authors express their gratitude to the anonymous referees and the editor for their attentive review of this paper, their valuable suggestions, and their comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, H.; Lam, W.-H.; Chan, S.-C. On the convergence analysis of cubic regularized and the incremental version in the application of large-scale problems. IEEE Access 2019, 7, 114042–114059.
  2. Yabe, H. Factorized quasi-Newton methods for nonlinear least squares problems. Math. Program. 1991, 51, 75–100.
  3. Awwal, A.M.; Mohammad, H.; Yahaya, M.M.; Muhammadu, A.B.; Ishaku, A. A New Algorithm with Structured Diagonal Hessian Approximation for Solving Nonlinear Least Squares Problems and Application to Robotic Motion Control. Thai J. Math. 2021, 19, 924–941.
  4. Mohammad, H.; Waziri, M.Y.; Santos, S.A. A brief survey of methods for solving nonlinear least-squares problems. Numer. Algebra Control Optim. 2019, 9, 1–13.
  5. Leong, W.J.; Enshaei, S.; Kek, S.L. Diagonal quasi-Newton methods via least change updating principle with weighted Frobenius norm. Numer. Algorithms 2021, 86, 1225–1241.
  6. Yuan, Y.X. Recent advances in numerical methods for nonlinear equations and nonlinear least squares. Numer. Algebra Control Optim. 2011, 1, 15–34.
  7. Malik, M.; Mamat, M.; Abas, S.S.; Sulaiman, I.M.; Sukono, F. A new coefficient of the conjugate gradient method with the sufficient descent condition and global convergence properties. Eng. Lett. 2020, 28, 704–714.
  8. Sulaiman, I.M.; Mamat, M. A new conjugate gradient method with descent properties and its application to regression analysis. J. Numer. Anal. Ind. Appl. Math. 2020, 14, 25–39.
  9. Sulaiman, I.M.; Awwal, A.M.; Malik, M.; Pakkaranang, N.; Panyanak, B. A Derivative-Free MZPRP Projection Method for Convex Constrained Nonlinear Equations and Its Application in Compressive Sensing. Mathematics 2022, 10, 2884.
  10. Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 1952, 49, 409–436.
  11. Polak, E.; Ribière, G. Note sur la convergence de méthodes de directions conjuguées. ESAIM Math. Model. Numer. Anal. 1969, 3, 35–43.
  12. Fletcher, R.; Reeves, C.M. Function minimization by conjugate gradients. Comput. J. 1964, 7, 149–154.
  13. Aji, S.; Kumam, P.; Awwal, A.M.; Yahaya, M.M.; Sitthithakerngkiet, K. An efficient DY-type spectral conjugate gradient method for system of nonlinear monotone equations with application in signal recovery. AIMS Math. 2021, 6, 8078–8106.
  14. Min, L. A derivative-free PRP method for solving large-scale nonlinear systems of equations and its global convergence. Optim. Methods Softw. 2014, 29, 503–514.
  15. Waziri, M.Y.; Sabi'u, J. A derivative-free conjugate gradient method and its global convergence for solving symmetric nonlinear equations. Int. J. Math. Math. Sci. 2015, 2015, 961487.
  16. Awwal, A.M.; Kumam, P.; Abubakar, A.B. Spectral modified Polak–Ribière–Polyak projection conjugate gradient method for solving monotone systems of nonlinear equations. Appl. Math. Comput. 2019, 362, 124514.
  17. Awwal, A.M.; Wang, L.; Kumam, P.; Mohammad, H.; Watthayu, W. A projection Hestenes–Stiefel method with spectral parameter for nonlinear monotone equations and signal processing. Math. Comput. Appl. 2020, 25, 27.
  18. Rivaie, M.; Mamat, M.; Abashar, A. A new class of nonlinear conjugate gradient coefficients with exact and inexact line searches. Appl. Math. Comput. 2015, 268, 1152–1163.
  19. Yunus, R.B.; Kamfa, K.; Mohammed, S.I.; Mamat, M. A Novel Three Term Conjugate Gradient Method for Unconstrained Optimization via Shifted Variable Metric Approach with Application. In Intelligent Systems Modeling and Simulation II: Machine Learning, Neural Networks, Efficient Numerical Algorithm and Statistical Methods; Springer International Publishing: Cham, Switzerland, 2022; pp. 581–596.
  20. Kobayashi, M.; Narushima, Y.; Yabe, H. Nonlinear conjugate gradient methods with structured secant condition for nonlinear least squares problems. J. Comput. Appl. Math. 2010, 234, 375–397.
  21. Dehghani, R.; Mahdavi-Amiri, N. Scaled nonlinear conjugate gradient methods for nonlinear least squares problems. Numer. Algorithms 2018, 82, 1–20.
  22. Mohammad, H.; Santos, S.A. A structured diagonal Hessian approximation method with evaluation complexity analysis for nonlinear least squares. Comput. Appl. Math. 2018, 37, 6619–6653.
  23. Yahaya, M.M.; Kumam, P.; Awwal, A.M.; Aji, S. A structured quasi-Newton algorithm with nonmonotone search strategy for structured NLS problems and its application in robotic motion control. J. Comput. Appl. Math. 2021, 395, 113582.
  24. Muhammad, H.; Waziri, M.Y. Structured two-point step size gradient methods for nonlinear least squares. J. Optim. Theory Appl. 2019, 181, 298–317.
  25. Yahaya, M.M.; Kumam, P.; Awwal, A.M.; Aji, S. Alternative structured spectral gradient algorithms for solving nonlinear least-squares problems. Heliyon 2021, 7, e07499.
  26. Zhang, H.; Hager, W.W. A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 2004, 14, 1043–1056.
  27. La Cruz, W.; Martínez, J.M.; Raydan, M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 2006, 75, 1429–1448.
  28. Moré, J.J.; Garbow, B.S.; Hillstrom, K.E. Testing unconstrained optimization software. ACM Trans. Math. Softw. (TOMS) 1981, 7, 17–41.
  29. Lukšan, L.; Vlcek, J. Test Problems for Unconstrained Optimization; Technical Report No. 897; Academy of Sciences of the Czech Republic, Institute of Computer Science: Prague, Czech Republic, 2003; Volume 1.
  30. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
  31. Yunus, R.B.; Mamat, M.; Abashar, A.; Rivaie, M.; Salleh, Z.; Zakaria, Z.A. The convergence properties of a new kind of conjugate gradient method for unconstrained optimization. Appl. Math. Sci. 2015, 9, 1845–1856.
  32. Birgin, E.G.; Martínez, J.M. A spectral conjugate gradient method for unconstrained optimization. Appl. Math. Optim. 2001, 43, 117–128.
  33. Narushima, Y.; Yabe, H. Conjugate gradient methods based on secant conditions that generate descent search directions for unconstrained optimization. J. Comput. Appl. Math. 2012, 236, 4303–4317.
  34. Zhang, Y.; He, L.; Hu, C.; Guo, J.; Li, J.; Shi, Y. General four-step discrete-time zeroing and derivative dynamics applied to time-varying nonlinear optimization. J. Comput. Appl. Math. 2019, 347, 314–329.
  35. Yahaya, M.M.; Kumam, P.; Awwal, A.M.; Chaipunya, P.; Aji, S.; Salisu, S. New Generalized Quasi-Newton Algorithm Based on Structured Diagonal Hessian Approximation for Solving Nonlinear Least-Squares Problems with Application to 3DOF Planar Robot Arm Manipulator. IEEE Access 2022, 10, 10816–10826.
Figure 1. This figure shows (a) comparison results for SSHS, ASSA1, and SSGM1 in terms of number of iterations, (b) comparison results for SSHS, ASSA1, and SSGM1 in terms of function evaluations, (c) comparison results for SSHS, ASSA1, and SSGM1 in terms of CPU time, (d) comparison results for SSHS, ASSA1, and SSGM1 in terms of gradient evaluations, and (e) comparison results for SSHS, ASSA1, and SSGM1 in terms of error of the zero-residual.
Figure 2. This figure describes (a) synthesized robot trajectories of Lissajous curve $c(t_k)^1$, (b) end-effector trajectory and desired path of Lissajous curve $c(t_k)^1$, (c) tracking residual error of Lissajous curve $c(t_k)^1$ on the x-axis, and (d) tracking residual error of Lissajous curve $c(t_k)^1$ on the y-axis.
Figure 3. This figure describes (e) synthesized robot trajectories of Lissajous curve $c(t_k)^2$, (f) end-effector trajectory and desired path of Lissajous curve $c(t_k)^2$, (g) tracking residual error of Lissajous curve $c(t_k)^2$ on the x-axis, and (h) tracking residual error of Lissajous curve $c(t_k)^2$ on the y-axis.
Table 1. List of benchmark test functions with reference and initial points.

| Problem | Function Name | Initial Point | Reference |
|---|---|---|---|
| P1 | Penalty function I | $(1/3, 1/3, \ldots, 1/3)^T$ | [27] |
| P2 | Variably dimensioned | $(1 - 1/n, 1 - 2/n, \ldots, 0)^T$ | [27] |
| P3 | Trigonometric function | $(1/n, 1/n, \ldots, 1/n)^T$ | [28] |
| P4 | Discrete boundary value | $\left(\frac{1}{n+1}\left(\frac{1}{n+1} - 1\right), \ldots, \frac{n}{n+1}\left(\frac{n}{n+1} - 1\right)\right)^T$ | [28] |
| P5 | Linear full rank function | $(1, 1, \ldots, 1)^T$ | [28] |
| P6 | Linear rank-1 function | $(1, 1, \ldots, 1)^T$ | [28] |
| P7 | Problem 202 | $(2, 2, \ldots, 2)^T$ | [29] |
| P8 | Problem 206 | $(1/n, 1/n, \ldots, 1/n)^T$ | [29] |
| P9 | Exponential function I | $(0.5, 0.5, \ldots, 0.5)^T$ | [27] |
| P10 | Singular function 2 | $(1, 1, \ldots, 1)^T$ | [27] |
| P11 | Singular Broyden | $(1, 1, \ldots, 1)^T$ | [29] |
| P12 | Broyden tridiagonal function | $(1, 1, \ldots, 1)^T$ | [27] |
| P13 | Function 27 | $(100, 1/n^2, \ldots, 1/n^2)^T$ | [27] |
| P14 | Linear rank 2 | $(1/n, 1/n, \ldots, 1/n)^T$ | [27] |
| P15 | Zero Jacobian function | for $i = 1$: 100 n 100 n; for $i \in (2, n)$: 1000 n 500 60 n 2 | [27] |
Table 2. Numerical results of the SSHS, ASSA1, and SSGM1 algorithms on large-scale problems 1–15 in terms of NI, NF, TIME, NG, and FV. The symbol '#' denotes failure to solve a problem.

(a)

| Problem | Dimension | SSHS (NI/NF/TIME/NG/FV) | ASSA1 (NI/NF/TIME/NG/FV) | SSGM1 (NI/NF/TIME/NG/FV) |
|---|---|---|---|---|
| P1 | 1000 | 2/4/0.011358/10/0.03612 | 3/5/0.012429/13/0.004316 | #/#/#/#/# |
| P1 | 3000 | 2/4/0.013405/10/0.04654 | 3/5/0.022454/10/0.048782 | #/#/#/#/# |
| P1 | 5000 | 2/4/0.020036/10/0.05777 | 3/5/0.058253/10/0.048677 | #/#/#/#/# |
| P1 | 7000 | 3/4/0.015607/10/0.06985 | 3/5/0.047486/10/0.048491 | #/#/#/#/# |
| P1 | 9000 | 3/4/0.02476/10/0.08278 | 3/5/0.026428/10/0.048244 | #/#/#/#/# |
| P1 | 11,000 | 3/5/0.017349/7/3.73 × 10^−1 | 4/5/0.014667/7/0.37321 | #/#/#/#/# |
| P1 | 13,000 | 3/5/0.014613/7/3.78 × 10^−1 | 4/5/0.014776/7/0.3783 | #/#/#/#/# |
| P1 | 15,000 | 3/5/0.055171/7/3.83 × 10^−1 | 4/5/0.028917/7/0.38325 | #/#/#/#/# |
| P2 | 1000 | 29/513/0.064544/88/2.52 × 10^−19 | #/#/#/#/# | #/#/#/#/# |
| P2 | 3000 | 33/820/0.21943/100/1.37 × 10^−19 | #/#/#/#/# | #/#/#/#/# |
| P2 | 5000 | 38/987/0.38171/115/1.49 × 10^−19 | #/#/#/#/# | #/#/#/#/# |
| P2 | 7000 | 37/1065/0.57431/112/2.22 × 10^−19 | #/#/#/#/# | #/#/#/#/# |
| P2 | 9000 | 29/778/0.76929/88/1.55 × 10^−19 | #/#/#/#/# | #/#/#/#/# |
| P2 | 11,000 | 55/1379/1.5488/166/2.40 × 10^−17 | #/#/#/#/# | #/#/#/#/# |
| P2 | 13,000 | 46/1066/1.3319/139/1.85 × 10^−19 | #/#/#/#/# | #/#/#/#/# |
| P2 | 15,000 | 44/1032/1.3834/133/1.50 × 10^−18 | #/#/#/#/# | #/#/#/#/# |
| P3 | 1000 | 16/34/0.076156/106/4.72 × 10^−7 | 20/40/0.033097/61/3.04 × 10^−7 | #/#/#/#/# |
| P3 | 3000 | 18/38/0.2165/169/1.02 × 10^−5 | 18/42/0.063206/55/1.26 × 10^−5 | #/#/#/#/# |
| P3 | 5000 | 18/38/0.85469/175/5.00 × 10^−7 | 23/48/0.14509/70/5.32 × 10^−6 | #/#/#/#/# |
| P3 | 7000 | 19/41/0.64686/145/1.02 × 10^−6 | 23/49/0.2846/70/8.52 × 10^−6 | #/#/#/#/# |
| P3 | 9000 | 20/42/0.86614/163/6.64 × 10^−6 | 21/48/0.40443/64/6.26 × 10^−6 | #/#/#/#/# |
| P3 | 11,000 | 21/43/0.86565/148/8.79 × 10^−6 | 24/51/0.40143/73/5.87 × 10^−6 | #/#/#/#/# |
| P3 | 13,000 | 21/44/0.95234/166/6.50 × 10^−7 | 22/50/0.3723/67/1.06 × 10^−5 | #/#/#/#/# |
| P3 | 15,000 | 21/44/1.1937/154/4.34 × 10^−6 | 25/53/0.51278/76/1.12 × 10^−6 | #/#/#/#/# |
| P4 | 1000 | 6/5/0.030069/7/3.30 × 10^−8 | 10/14/0.024906/13/4.44 × 10^−8 | 21/23/0.075393/34/4.28 × 10^−8 |
| P4 | 3000 | 6/5/0.033187/9/9.24 × 10^−9 | 12/15/0.01883/15/9.74 × 10^−9 | 22/25/0.065956/34/1.15 × 10^−8 |
| P4 | 5000 | 5/7/0.029342/9/6.03 × 10^−9 | 13/15/0.039985/27/4.22 × 10^−9 | 23/26/0.0204/48/6.21 × 10^−9 |
| P4 | 7000 | 6/5/0.025398/8/3.08 × 10^−9 | 13/16/0.03135/28/2.78 × 10^−9 | 23/26/0.02165/51/3.17 × 10^−9 |
| P4 | 9000 | 5/5/0.028674/11/2.51 × 10^−9 | 15/18/0.037032/29/2.51 × 10^−9 | 25/31/0.03047/54/2.51 × 10^−9 |
| P4 | 11,000 | 5/5/0.037101/12/1.68 × 10^−9 | 15/18/0.025466/31/1.68 × 10^−9 | 27/35/0.035078/61/1.68 × 10^−9 |
| P4 | 13,000 | 5/5/0.038462/13/1.20 × 10^−9 | 16/21/0.036502/32/1.20 × 10^−9 | 29/37/0.030798/64/1.20 × 10^−9 |
| P4 | 15,000 | 5/8/0.041678/16/9.03 × 10^−10 | 17/23/0.053914/31/9.03 × 10^−10 | 31/43/0.038641/76/9.03 × 10^−10 |
| P5 | 1000 | 2/3/0.014686/7/5.00 × 10^−1 | 4/5/0.004563/7/0.50018 | #/#/#/#/# |
| P5 | 3000 | 2/3/0.010132/7/5.00 × 10^−1 | 4/5/0.006245/7/0.50029 | #/#/#/#/# |
| P5 | 5000 | 2/3/0.021818/8/5.00 × 10^−1 | 5/5/0.012006/7/0.50029 | #/#/#/#/# |
| P5 | 7000 | 3/3/0.054611/10/5.00 × 10^−1 | 5/7/0.015672/7/0.50029 | #/#/#/#/# |
| P5 | 9000 | 3/3/0.034913/10/5.00 × 10^−1 | 5/7/0.012716/7/0.50022 | #/#/#/#/# |
| P5 | 11,000 | 2/3/0.03168/7/5.00 × 10^−1 | 5/7/0.01547/7/0.50018 | #/#/#/#/# |
| P5 | 13,000 | 3/3/0.064686/10/5.00 × 10^−1 | 5/7/0.015508/7/0.50015 | #/#/#/#/# |
| P5 | 15,000 | 2/3/0.038598/7/5.00 × 10^−1 | 5/7/0.017924/7/0.50013 | #/#/#/#/# |
| P6 | 1000 | 12/59/0.011921/7/1.26 × 10^+2 | 14/61/0.010744/7/125.5626 | #/#/#/#/# |
| P6 | 3000 | 12/69/0.018244/7/3.76 × 10^+2 | 14/71/0.023509/7/375.5625 | #/#/#/#/# |
| P6 | 5000 | 12/73/0.047106/7/6.26 × 10^+2 | 14/75/0.034067/7/625.5625 | #/#/#/#/# |
| P6 | 7000 | 12/76/0.041661/7/8.76 × 10^+2 | 14/78/0.040503/7/875.5625 | #/#/#/#/# |
| P6 | 9000 | 14/79/0.185367/10/1.13 × 10^+3 | 15/161/0.082234/13/1125.563 | #/#/#/#/# |
| P6 | 11,000 | 14/88/0.23688/31/1375.563 | 17/234/0.10084/49/1375.563 | #/#/#/#/# |
| P6 | 13,000 | 15/82/0.20939/10/1625.563 | 17/193/0.092382/22/1625.563 | #/#/#/#/# |
| P6 | 15,000 | 15/84/0.16251/10/1875.563 | 17/158/0.10326/13/1875.563 | #/#/#/#/# |

(b)

| Problem | Dimension | SSHS (NI/NF/TIME/NG/FV) | ASSA1 (NI/NF/TIME/NG/FV) | SSGM1 (NI/NF/TIME/NG/FV) |
|---|---|---|---|---|
| P7 | 1000 | 4/5/0.006943/14/1.95 × 10^−14 | 5/6/0.005326/16/1.70 × 10^−9 | 6/7/0.027251/19/8.90 × 10^−11 |
| P7 | 3000 | 4/5/0.01982/14/2.29 × 10^−14 | 5/6/0.00863/16/5.16 × 10^−9 | 6/7/0.014115/19/5.17 × 10^−8 |
| P7 | 5000 | 4/5/0.021647/14/3.09 × 10^−14 | 5/6/0.049918/16/8.62 × 10^−9 | 6/8/0.015656/19/3.55 × 10^−11 |
| P7 | 7000 | 4/5/0.033112/14/3.95 × 10^−14 | 5/6/0.014603/16/1.21 × 10^−8 | 5/7/0.019737/19/1.10 × 10^−5 |
| P7 | 9000 | 4/5/0.049036/14/4.82 × 10^−14 | 5/6/0.078329/16/1.55 × 10^−8 | 5/8/0.03029/19/1.57 × 10^−6 |
| P7 | 11,000 | 4/5/0.056278/14/5.70 × 10^−14 | 5/6/0.029075/16/1.90 × 10^−8 | 6/9/0.036986/21/6.53 × 10^−9 |
| P7 | 13,000 | 4/5/0.056559/14/6.58 × 10^−14 | 5/6/0.04135/16/2.25 × 10^−8 | 7/8/0.043513/21/2.11 × 10^−6 |
| P7 | 15,000 | 4/5/0.067435/14/7.47 × 10^−14 | 5/6/0.045155/16/2.59 × 10^−8 | 7/8/0.03234/21/8.21 × 10^−6 |
| P8 | 1000 | 4/6/0.032058/4/6.60 × 10^−8 | 10/8/0.01048/8/8.63 × 10^−8 | 12/15/0.014342/14/5.02 × 10^−9 |
| P8 | 3000 | 6/6/0.032955/5/1.84 × 10^−8 | 14/14/0.02164/13/1.94 × 10^−8 | 15/20/0.014741/16/3.36 × 10^−9 |
| P8 | 5000 | 7/9/0.022651/5/1.21 × 10^−8 | 14/15/0.019703/10/8.42 × 10^−9 | 11/23/0.024378/13/2.40 × 10^−9 |
| P8 | 7000 | 8/9/0.021604/7/6.15 × 10^−9 | 15/17/0.013349/11/5.55 × 10^−9 | 19/27/0.025558/24/1.81 × 10^−9 |
| P8 | 9000 | 9/12/0.041471/7/5.02 × 10^−9 | 17/18/0.034446/13/5.02 × 10^−9 | #/#/#/#/# |
| P8 | 11,000 | 10/18/0.018886/13/3.36 × 10^−9 | 19/22/0.015397/17/3.36 × 10^−9 | #/#/#/#/# |
| P8 | 13,000 | 10/19/0.036625/14/2.40 × 10^−9 | 25/29/0.03634/23/2.40 × 10^−9 | #/#/#/#/# |
| P8 | 15,000 | 11/19/0.019079/14/1.81 × 10^−9 | 32/36/0.018665/31/1.81 × 10^−9 | #/#/#/#/# |
| P9 | 1000 | 2/3/0.013945/4/1.20 × 10^−6 | 3/4/0.005307/5/4.04 × 10^−6 | 3/4/0.022246/5/5.39 × 10^−7 |
| P9 | 3000 | 2/3/0.003967/4/4.07 × 10^−6 | 3/4/0.005192/5/4.07 × 10^−6 | 3/4/0.009165/5/4.07 × 10^−6 |
| P9 | 5000 | 2/3/0.008789/4/2.44 × 10^−6 | 3/4/0.009438/5/2.44 × 10^−6 | 3/4/0.013304/5/2.44 × 10^−6 |
| P9 | 7000 | 2/3/0.038487/4/1.75 × 10^−6 | 3/4/0.005782/5/1.75 × 10^−6 | 3/4/0.005297/5/1.75 × 10^−6 |
| P9 | 9000 | 2/3/0.011686/4/1.36 × 10^−6 | 3/4/0.009688/5/1.36 × 10^−6 | 3/4/0.007591/5/1.36 × 10^−6 |
| P9 | 11,000 | 2/3/0.004065/4/3.79 × 10^−6 | 3/4/0.004049/5/3.79 × 10^−6 | 3/4/0.004384/5/3.79 × 10^−6 |
| P9 | 13,000 | 2/3/0.001971/4/3.21 × 10^−6 | 3/4/0.003526/5/3.21 × 10^−6 | 3/4/0.003947/6/3.21 × 10^−6 |
| P9 | 15,000 | 4/5/0.004646/7/2.78 × 10^−6 | 5/5/0.02421/7/2.78 × 10^−6 | 5/5/0.003753/7/2.78 × 10^−6 |
| P10 | 1000 | 25/517/0.19922/76/2.5275 | #/#/#/#/# | #/#/#/#/# |
| P10 | 3000 | 31/619/0.63201/94/1.0418 | #/#/#/#/# | #/#/#/#/# |
| P10 | 5000 | 35/329/0.53717/106/36.0174 | #/#/#/#/# | #/#/#/#/# |
| P10 | 7000 | 45/921/1.5763/136/1922.027 | #/#/#/#/# | #/#/#/#/# |
| P10 | 9000 | 47/1012/2.3182/142/127.9268 | #/#/#/#/# | #/#/#/#/# |
| P10 | 11,000 | 49/973/2.9744/148/4778.505 | #/#/#/#/# | #/#/#/#/# |
| P10 | 13,000 | 34/925/2.7605/103/4778.505 | #/#/#/#/# | #/#/#/#/# |
| P10 | 15,000 | 41/739/2.644/124/9932.661 | #/#/#/#/# | #/#/#/#/# |
| P11 | 1000 | 3/139/0.095251/10/515 | #/#/#/#/# | #/#/#/#/# |
| P11 | 3000 | 3/138/0.10986/10/1515 | #/#/#/#/# | #/#/#/#/# |
| P11 | 5000 | 4/229/0.32759/13/2515 | #/#/#/#/# | #/#/#/#/# |
| P11 | 7000 | 4/229/0.37494/13/3515 | #/#/#/#/# | #/#/#/#/# |
| P11 | 9000 | 5/362/0.99275/16/4515 | #/#/#/#/# | #/#/#/#/# |
| P11 | 11,000 | 5/362/1.1212/16/5515 | #/#/#/#/# | #/#/#/#/# |
| P11 | 13,000 | 5/362/1.1222/16/6515 | #/#/#/#/# | #/#/#/#/# |
| P11 | 15,000 | 5/362/1.2672/16/7515 | #/#/#/#/# | #/#/#/#/# |
| P12 | 1000 | 131/224/0.15723/394/1.1629 | 147/410/0.12585/442/1.1629 | #/#/#/#/# |
| P12 | 3000 | 66/95/0.23273/199/0.70391 | 82/235/0.16029/247/0.88148 | #/#/#/#/# |
| P12 | 5000 | 77/106/0.36204/232/0.70391 | 77/227/0.35681/232/0.83612 | #/#/#/#/# |
| P12 | 7000 | 65/91/0.76641/196/0.70391 | 141/431/0.35773/424/0.97255 | #/#/#/#/# |
| P12 | 9000 | 67/101/0.81341/202/0.83612 | 99/317/0.49921/298/0.97255 | #/#/#/#/# |
| P12 | 11,000 | 88/263/0.91256/457/0.83612 | 152/282/0.82151/265/1.2744 | #/#/#/#/# |
| P12 | 13,000 | 110/183/1.9178/331/0.94217 | 191/599/1.303/574/1.4545 | #/#/#/#/# |
| P12 | 15,000 | 142/388/1.6325/649/0.94217 | 206/467/2.0717/427/1.4545 | #/#/#/#/# |

(c)

| Problem | Dimension | SSHS (NI/NF/TIME/NG/FV) | ASSA1 (NI/NF/TIME/NG/FV) | SSGM1 (NI/NF/TIME/NG/FV) |
|---|---|---|---|---|
| P13 | 1000 | 9/32/0.036114/28/5.91 × 10^−8 | 17/45/0.022348/52/6.37 × 10^−7 | #/#/#/#/# |
| P13 | 3000 | 9/32/0.030503/28/5.89 × 10^−8 | 17/45/0.029784/52/6.37 × 10^−7 | #/#/#/#/# |
| P13 | 5000 | 9/32/0.043114/28/5.89 × 10^−8 | 17/45/0.097405/52/6.37 × 10^−7 | #/#/#/#/# |
| P13 | 7000 | 9/32/0.071886/28/5.89 × 10^−8 | 17/45/0.11269/52/6.37 × 10^−7 | #/#/#/#/# |
| P13 | 9000 | 9/32/0.097217/28/5.89 × 10^−8 | 17/45/0.10614/52/6.37 × 10^−7 | #/#/#/#/# |
| P13 | 11,000 | 9/32/0.10795/28/5.89 × 10^−8 | 17/45/0.1248/52/6.37 × 10^−7 | #/#/#/#/# |
| P13 | 13,000 | 9/32/0.14875/28/5.89 × 10^−8 | 17/45/0.15258/52/6.37 × 10^−7 | #/#/#/#/# |
| P13 | 15,000 | 9/32/0.13755/28/5.89 × 10^−8 | 17/45/0.28263/52/6.37 × 10^−7 | #/#/#/#/# |
| P14 | 1000 | 19/594/0.2163/58/418461 | #/#/#/#/# | #/#/#/#/# |
| P14 | 3000 | 19/629/0.69794/58/7.25 × 10^+8 | #/#/#/#/# | #/#/#/#/# |
| P14 | 5000 | 28/1091/0.59477/85/3.39 × 10^+8 | #/#/#/#/# | #/#/#/#/# |
| P14 | 7000 | 34/1094/0.85736/92/1.00 × 10^+10 | #/#/#/#/# | #/#/#/#/# |
| P14 | 9000 | 40/1585/1.1221/121/1.05 × 10^+10 | #/#/#/#/# | #/#/#/#/# |
| P14 | 11,000 | 40/1589/1.1221/121/1.05 × 10^+10 | #/#/#/#/# | #/#/#/#/# |
| P14 | 13,000 | 41/1743/1.3302/124/9.02 × 10^+9 | #/#/#/#/# | #/#/#/#/# |
| P14 | 15,000 | 46/1847/1.4432/139/3.28 × 10^+9 | #/#/#/#/# | #/#/#/#/# |
| P15 | 1000 | 8/27/0.039893/25/1.42 × 10^−10 | 16/31/0.020794/49/2.45 × 10^−7 | #/#/#/#/# |
| P15 | 3000 | 9/28/0.019763/28/6.66 × 10^−7 | 17/32/0.033332/52/4.65 × 10^−7 | #/#/#/#/# |
| P15 | 5000 | 9/28/0.043256/28/3.17 × 10^−8 | 17/32/0.067191/52/2.19 × 10^−7 | #/#/#/#/# |
| P15 | 7000 | 9/29/0.074132/28/2.25 × 10^−6 | 17/32/0.090807/52/3.04 × 10^−7 | #/#/#/#/# |
| P15 | 9000 | 11/29/0.099276/34/5.55 × 10^−7 | 17/32/0.12987/52/3.62 × 10^−7 | #/#/#/#/# |
| P15 | 11,000 | 11/29/0.13154/34/3.25 × 10^−7 | 17/32/0.14986/52/4.03 × 10^−7 | #/#/#/#/# |
| P15 | 13,000 | 12/30/0.1364/37/1.50 × 10^−7 | 18/32/0.1706/54/4.34 × 10^−7 | #/#/#/#/# |
| P15 | 15,000 | 13/32/0.15359/40/1.10 × 10^−6 | 21/35/0.15893/64/4.57 × 10^−7 | #/#/#/#/# |
