Article

Hybrid Second-Order Iterative Algorithm for Orthogonal Projection onto a Parametric Surface

1 College of Data Science and Information Engineering, Guizhou Minzu University, Guiyang 550025, China
2 School of Mathematics and Computer Science, Yichun University, Yichun 336000, China
3 Center for Economic Research, Shandong University, Jinan 250100, China
4 Department of Science, Taiyuan Institute of Technology, Taiyuan 030008, China
5 Center for the World Ethnic Studies, Guizhou Minzu University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Symmetry 2017, 9(8), 146; https://doi.org/10.3390/sym9080146
Submission received: 15 June 2017 / Revised: 21 July 2017 / Accepted: 28 July 2017 / Published: 5 August 2017

Abstract: To compute the minimum distance between a point and a parametric surface, three well-known first-order algorithms have been proposed by Hartmann (1999), Hoschek et al. (1993) and Hu et al. (2000) (hereafter, the First-Order method). In this paper, we prove the method's first-order convergence and its independence of the initial value. We also give some numerical examples to illustrate that it converges faster than the existing methods. For some special cases where the First-Order method does not converge, we combine it with Newton's second-order iterative method to present a hybrid second-order algorithm. Because our method essentially exploits hybrid iteration, it achieves second-order convergence, is faster than the existing methods, and is independent of the initial value. Some numerical examples confirm our conclusion.

1. Introduction

In this paper, we discuss how to compute the minimum distance between a point and a parametric surface, and to return the nearest point (footpoint) on the surface as well as its corresponding parameter, which is also called the point projection problem (or the point inversion problem) of a parametric surface. It is a very interesting problem in geometric modeling, computer graphics and computer vision [1]. Both projection and inversion are essential for interactively selecting curves and surfaces [1,2], for the curve fitting problem [1,2], for reconstructing surfaces [3,4,5] and for projecting a space curve onto a surface for surface curve design [6,7]. It is also a key issue in the ICP (iterative closest point) algorithm for shape registration [8]. Mortenson (1985) [9] turns the projection problem into finding the root of a polynomial, then finds the root by using the Newton–Raphson method. Zhou et al. (1993) [10] present an algorithm for computation of the stationary points of the squared distance functions between two point sets. The problem is reformulated in terms of solution of n polynomial equations with n variables expressed in the tensor product Bernstein basis. Johnson and Cohen (2005) [11] present a robust search for distance extrema from a point to a curve or a surface. The robustness comes from using geometric operations with tangent cones rather than numerical methods to find all local extrema. Limaien and Trochu (1995) [12] compute the orthogonal projection of a point onto parametric curves and surfaces by constructing an auxiliary function and finding its zeros. Polak et al. (2003) [13] present a new feedback precision-adjustment rule with a smoothing technique and standard unconstrained minimization algorithms in the solution of finite minimax problems. Patrikalakis and Maekawa (2001) [14] reduce the distance function problem to solving systems of nonlinear polynomial equations. Based on Ma et al. 
[1], Selimovic (2006) [15] presents improved algorithms for the projection of points on NURBS curves and surfaces. Cohen et al. (1980) [16] provide classical subdivision algorithms which have been widely applied in computer-aided geometric design, computer graphics, and numerical analysis. Based on the subdividing concept [16], Piegl and Tiller (1995) [17] present an algorithm for point projection on NURBS surfaces by subdividing a NURBS surface into quadrilaterals, projecting the test point onto the closest quadrilateral, and then recovering the parameter from the closest quadrilateral. Liu et al. (2009) [18] propose a local surface approximation technique—torus patch approximation—and prove that the approximation torus patch and the original surface are second-order osculating. By using the tangent line and the rectifying plane, Li et al. (2013) [19] present an algorithm for finding the intersection between two spatial parametric curves. Hu et al. (2005) [8] use curvature iteration information for solving the projection problem. Scholars (Ku-Jin Kim (2003) [20], Li et al. (2004) [21], Chen et al. (2010) [22], Chen et al. (2009) [23], Bharath Ram Sundar et al. (2014) [24]) have fully analyzed and discussed the intersection curve between two surfaces, the minimum distance between two curves, the minimum distance between curve and surface and the minimum distance between two surfaces. By clipping circle technology, Chen et al. (2008) [25] provide a method for computing the minimum distance between a point and a NURBS curve. Then, based on a clipping sphere strategy, Chen et al. (2009) [26] propose a method for computing the minimum distance between a point and a clamped B-spline surface. Analogously to [25,26], based on an efficient culling technique that eliminates redundant curves and surfaces which obviously contain no projection from the given point, Young-Taek Oh et al.
(2012) [27] present an efficient algorithm for projecting a given point to its closest point on a family of freeform curves and surfaces. Song et al. (2011) [7] propose an algorithm for calculating the orthogonal projection of parametric curves onto B-spline surfaces. It uses a second-order tracing method to construct a polyline to approximate the pre-image curve of the orthogonal projection curve in the parametric domain of the base surface. Regarding the projection problem, Kwanghee Ko et al. (2014) [28] give a detailed review of the literature before 2014. To sum up, those algorithms employ various techniques such as turning the problem into finding the root of a system of nonlinear equations, geometric methods, subdivision methods and circular clipping algorithms. It is well known that there are three classical first-order algorithms for computing the minimum distance between a point and a parametric surface [29,30,31]. However, convergence of the First-Order method has not been proved in those works. In this paper, we contribute in two aspects. Firstly, we prove the method's first-order convergence and its independence of the initial value. We also give some numerical examples to illustrate that it converges faster than the existing methods. Secondly, for some special cases where the First-Order method does not converge, we combine it with Newton's second-order iterative method to present a hybrid second-order algorithm. Because our method essentially exploits hybrid iteration, it achieves second-order convergence, is faster than the existing methods, and is independent of the initial value. Some numerical examples confirm our conclusion.
The rest of this paper is organized as follows. Section 2 presents a convergence analysis for the First-Order method for orthogonal projection onto a parametric surface. In Section 3, some numerical examples illustrate that it converges faster than the existing methods. In Section 4, for some special cases where the First-Order method is not convergent, an improved hybrid second-order algorithm is presented. Convergence analysis and experimental results for the hybrid second-order algorithm are also presented in this section. Finally, Section 5 concludes the paper.

2. Convergence Analysis for the First-Order Method

In this section, we prove that the method defined by (5) is first-order convergent and that its convergence is independent of the initial value.
Assume a regular parametric surface $s(u, v) = (f_1(u, v), f_2(u, v), f_3(u, v))$ and a test point $p = (p_1, p_2, p_3)$. The first-order geometric iteration [29,30,31] to compute the footpoint q of the test point p is the following. Projecting the test point p onto the tangent plane of the parametric surface $s(u, v)$ at $(u, v) = (u_n, v_n)$ yields a point q determined by $s(u_n, v_n)$ and the partial derivatives $s_u(u_n, v_n)$, $s_v(u_n, v_n)$ (see the following Formulas (2) and (3)). The footpoint can be approximately expressed in the following way
$$ q = s(u_n, v_n) + s_u(u_n, v_n)\,\Delta u + s_v(u_n, v_n)\,\Delta v, \tag{1} $$
where
$$ s_u(u_n, v_n) = \left( \frac{\partial f_1(u_n, v_n)}{\partial u}, \frac{\partial f_2(u_n, v_n)}{\partial u}, \frac{\partial f_3(u_n, v_n)}{\partial u} \right) = \big( f_{1u}(u_n, v_n), f_{2u}(u_n, v_n), f_{3u}(u_n, v_n) \big), \tag{2} $$
$$ s_v(u_n, v_n) = \left( \frac{\partial f_1(u_n, v_n)}{\partial v}, \frac{\partial f_2(u_n, v_n)}{\partial v}, \frac{\partial f_3(u_n, v_n)}{\partial v} \right) = \big( f_{1v}(u_n, v_n), f_{2v}(u_n, v_n), f_{3v}(u_n, v_n) \big). \tag{3} $$
Taking the scalar product of (1) with $s_u(u_n, v_n)$ and $s_v(u_n, v_n)$, respectively, we obtain
$$ \begin{aligned} \langle s_u(u_n, v_n), s_u(u_n, v_n)\rangle\,\Delta u + \langle s_u(u_n, v_n), s_v(u_n, v_n)\rangle\,\Delta v &= \langle q - s(u_n, v_n), s_u(u_n, v_n)\rangle, \\ \langle s_u(u_n, v_n), s_v(u_n, v_n)\rangle\,\Delta u + \langle s_v(u_n, v_n), s_v(u_n, v_n)\rangle\,\Delta v &= \langle q - s(u_n, v_n), s_v(u_n, v_n)\rangle, \end{aligned} \tag{4} $$
where $\langle x, y \rangle$ denotes the scalar product of vectors $x, y \in \mathbb{R}^3$, and $\|x\|$ denotes the norm of a vector $x$. The solution of the regular linear system (4) is
$$ \Delta u = \frac{\langle s_v, s_v\rangle \langle s_q, s_u\rangle - \langle s_u, s_v\rangle \langle s_q, s_v\rangle}{\langle s_u, s_u\rangle \langle s_v, s_v\rangle - \langle s_u, s_v\rangle^2}, \qquad \Delta v = \frac{\langle s_u, s_u\rangle \langle s_q, s_v\rangle - \langle s_u, s_v\rangle \langle s_q, s_u\rangle}{\langle s_u, s_u\rangle \langle s_v, s_v\rangle - \langle s_u, s_v\rangle^2}, \tag{5} $$
where $\langle s_u, s_u\rangle = \langle s_u(u_n, v_n), s_u(u_n, v_n)\rangle$, $\langle s_u, s_v\rangle = \langle s_u(u_n, v_n), s_v(u_n, v_n)\rangle$, $\langle s_v, s_v\rangle = \langle s_v(u_n, v_n), s_v(u_n, v_n)\rangle$, $\langle s_q, s_u\rangle = \langle q - s(u_n, v_n), s_u(u_n, v_n)\rangle$, $\langle s_q, s_v\rangle = \langle q - s(u_n, v_n), s_v(u_n, v_n)\rangle$.
We update $u_n, v_n$ by adding $\Delta u, \Delta v$, and repeat the above procedure until $|\Delta u| < \varepsilon$ and $|\Delta v| < \varepsilon$, where $\varepsilon$ is a given tolerance. This is the first-order geometric iteration method in [29,30,31] (see Figure 1). Furthermore, convergence of the First-Order method is independent of the initial value.
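The project-and-correct procedure above can be sketched in a few lines of Python (a sketch only; the function name `first_order_projection` and the callable-based interface are our own choices, not from the paper):

```python
import numpy as np

def first_order_projection(s, su, sv, p, u0, v0, eps=1e-10, max_iter=100):
    """First-Order geometric iteration, following the scheme of [29,30,31].

    s, su, sv: callables returning the surface point and its partial
    derivatives at (u, v) as arrays; p: the test point.
    """
    u, v = u0, v0
    for _ in range(max_iter):
        S, Su, Sv = s(u, v), su(u, v), sv(u, v)
        # Footpoint q: orthogonal projection of p onto the tangent plane at (u, v).
        n = np.cross(Su, Sv)
        q = p - np.dot(p - S, n) / np.dot(n, n) * n
        # Normal equations (4); solving them gives the corrections of Formula (5).
        G = np.array([[np.dot(Su, Su), np.dot(Su, Sv)],
                      [np.dot(Su, Sv), np.dot(Sv, Sv)]])
        b = np.array([np.dot(q - S, Su), np.dot(q - S, Sv)])
        du, dv = np.linalg.solve(G, b)
        u, v = u + du, v + dv
        if abs(du) < eps and abs(dv) < eps:
            break
    return u, v
```

For a plane the tangent plane coincides with the surface, so one correction already lands on the footpoint; for curved surfaces the project-and-correct step is repeated until the corrections fall below the tolerance.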
Theorem 1.
The method defined by (5) is first-order convergent. Convergence of the iterative Formula (5) is independent of the initial value.
Proof. 
We first derive the expression of the footpoint q. Assume that the parameter $(\alpha, \beta)$ is such that the test point $p = (p_1, p_2, p_3)$ orthogonally projects onto the parametric surface $s(u, v) = (f_1(u, v), f_2(u, v), f_3(u, v))$ at $s(\alpha, \beta)$. It is not difficult to show that there is a relational expression
$$ \langle p - h, V_1 \rangle = 0, \qquad \langle p - h, V_2 \rangle = 0, \tag{6} $$
where $h = (f_1(\alpha, \beta), f_2(\alpha, \beta), f_3(\alpha, \beta))$ and the tangent vectors are $V_1 = \left( \frac{\partial f_1(\alpha, \beta)}{\partial u}, \frac{\partial f_2(\alpha, \beta)}{\partial u}, \frac{\partial f_3(\alpha, \beta)}{\partial u} \right) = (f_{1u}(\alpha, \beta), f_{2u}(\alpha, \beta), f_{3u}(\alpha, \beta))$, $V_2 = \left( \frac{\partial f_1(\alpha, \beta)}{\partial v}, \frac{\partial f_2(\alpha, \beta)}{\partial v}, \frac{\partial f_3(\alpha, \beta)}{\partial v} \right) = (f_{1v}(\alpha, \beta), f_{2v}(\alpha, \beta), f_{3v}(\alpha, \beta))$. The relationship can also be expressed in the following way,
$$ \left\langle p - s(\alpha, \beta), \frac{\partial s(\alpha, \beta)}{\partial u} \right\rangle = 0, \qquad \left\langle p - s(\alpha, \beta), \frac{\partial s(\alpha, \beta)}{\partial v} \right\rangle = 0. \tag{7} $$
Because the footpoint q is the intersection of the tangent plane of the parametric surface $s(u, v)$ at $(u, v) = (u_n, v_n)$ and the perpendicular line through the test point p, we first write the equation of the tangent plane of the parametric surface $s(u, v)$ at $(u, v) = (u_n, v_n)$:
$$ \begin{aligned} x_1 &= f_1(u_n, v_n) + f_{1u}(u_n, v_n)\,u + f_{1v}(u_n, v_n)\,v, \\ y_1 &= f_2(u_n, v_n) + f_{2u}(u_n, v_n)\,u + f_{2v}(u_n, v_n)\,v, \\ z_1 &= f_3(u_n, v_n) + f_{3u}(u_n, v_n)\,u + f_{3v}(u_n, v_n)\,v. \end{aligned} \tag{8} $$
At the same time, the vector of the line segment connecting the test point p and a point $(x_1, y_1, z_1)$ on the tangent plane is
$$ (x_2, y_2, z_2) = (p_1 - x_1, p_2 - y_1, p_3 - z_1). \tag{9} $$
Because the vector (9) is orthogonal to each of the tangent vectors $s_u(u_n, v_n) = (f_{1u}(u_n, v_n), f_{2u}(u_n, v_n), f_{3u}(u_n, v_n))$ and $s_v(u_n, v_n) = (f_{1v}(u_n, v_n), f_{2v}(u_n, v_n), f_{3v}(u_n, v_n))$, we naturally obtain a system of equations for the parameters $u, v$,
$$ \begin{aligned} \big( f_{1u} u + f_{1v} v + f_1 - p_1 \big) f_{1u} + \big( f_{2u} u + f_{2v} v + f_2 - p_2 \big) f_{2u} + \big( f_{3u} u + f_{3v} v + f_3 - p_3 \big) f_{3u} &= 0, \\ \big( f_{1u} u + f_{1v} v + f_1 - p_1 \big) f_{1v} + \big( f_{2u} u + f_{2v} v + f_2 - p_2 \big) f_{2v} + \big( f_{3u} u + f_{3v} v + f_3 - p_3 \big) f_{3v} &= 0, \end{aligned} \tag{10} $$
where all $f_i$, $f_{iu}$, $f_{iv}$ are evaluated at $(u_n, v_n)$.
So the corresponding solution of Formula (10) for the parameters $u, v$ is
$$ u_0 = \frac{P_1}{P_0}, \qquad v_0 = \frac{P_2}{P_0}, \tag{11} $$
where
$$ \begin{aligned} P_0 &= \langle s_u, s_u \rangle \langle s_v, s_v \rangle - \langle s_u, s_v \rangle^2, \\ P_1 &= \langle p - s(u_n, v_n), s_u \rangle \langle s_v, s_v \rangle - \langle p - s(u_n, v_n), s_v \rangle \langle s_u, s_v \rangle, \\ P_2 &= \langle p - s(u_n, v_n), s_v \rangle \langle s_u, s_u \rangle - \langle p - s(u_n, v_n), s_u \rangle \langle s_u, s_v \rangle, \end{aligned} $$
with all partial derivatives evaluated at $(u_n, v_n)$; this is the Cramer's-rule solution of the linear system (10), written in compact scalar-product form.
Substituting (11) into (8), and simplifying, we have
$$ \begin{aligned} q_1 &= f_1(u_n, v_n) + f_{1u}(u_n, v_n)\,u_0 + f_{1v}(u_n, v_n)\,v_0, \\ q_2 &= f_2(u_n, v_n) + f_{2u}(u_n, v_n)\,u_0 + f_{2v}(u_n, v_n)\,v_0, \\ q_3 &= f_3(u_n, v_n) + f_{3u}(u_n, v_n)\,u_0 + f_{3v}(u_n, v_n)\,v_0. \end{aligned} \tag{12} $$
So the footpoint is $q = (q_1, q_2, q_3)$ given by Formula (12). Substituting (12) into (5) and simplifying, we obtain the relationship
$$ \Delta u = \frac{\langle s_v, s_v\rangle \langle s_q, s_u\rangle - \langle s_u, s_v\rangle \langle s_q, s_v\rangle}{\langle s_u, s_u\rangle \langle s_v, s_v\rangle - \langle s_u, s_v\rangle^2}, \qquad \Delta v = \frac{\langle s_u, s_u\rangle \langle s_q, s_v\rangle - \langle s_u, s_v\rangle \langle s_q, s_u\rangle}{\langle s_u, s_u\rangle \langle s_v, s_v\rangle - \langle s_u, s_v\rangle^2}, \tag{13} $$
where $\langle s_u, s_u\rangle = \langle s_u(u_n, v_n), s_u(u_n, v_n)\rangle$, $\langle s_u, s_v\rangle = \langle s_u(u_n, v_n), s_v(u_n, v_n)\rangle$, $\langle s_v, s_v\rangle = \langle s_v(u_n, v_n), s_v(u_n, v_n)\rangle$, $\langle s_q, s_u\rangle = \langle q - s(u_n, v_n), s_u(u_n, v_n)\rangle$, $\langle s_q, s_v\rangle = \langle q - s(u_n, v_n), s_v(u_n, v_n)\rangle$.
Using Taylor’s expansion, we obtain
$$ \begin{aligned} f_1(u_n, v_n) &= f_1(\alpha, \beta) + C_{11} e_{1n} + C_{12} e_{2n} + \tfrac{1}{2} C_{13} e_{1n}^2 + C_{14} e_{1n} e_{2n} + \tfrac{1}{2} C_{15} e_{2n}^2 + o(\|e_n\|^3), \\ f_2(u_n, v_n) &= f_2(\alpha, \beta) + C_{21} e_{1n} + C_{22} e_{2n} + \tfrac{1}{2} C_{23} e_{1n}^2 + C_{24} e_{1n} e_{2n} + \tfrac{1}{2} C_{25} e_{2n}^2 + o(\|e_n\|^3), \\ f_3(u_n, v_n) &= f_3(\alpha, \beta) + C_{31} e_{1n} + C_{32} e_{2n} + \tfrac{1}{2} C_{33} e_{1n}^2 + C_{34} e_{1n} e_{2n} + \tfrac{1}{2} C_{35} e_{2n}^2 + o(\|e_n\|^3), \end{aligned} \tag{14} $$
where $e_n = \begin{pmatrix} e_{1n} \\ e_{2n} \end{pmatrix} = \begin{pmatrix} u_n - \alpha \\ v_n - \beta \end{pmatrix}$, $C_{i1} = \frac{\partial f_i(\alpha, \beta)}{\partial u}$, $C_{i2} = \frac{\partial f_i(\alpha, \beta)}{\partial v}$, $C_{i3} = \frac{\partial^2 f_i(\alpha, \beta)}{\partial u^2}$, $C_{i4} = \frac{\partial^2 f_i(\alpha, \beta)}{\partial u \partial v}$, $C_{i5} = \frac{\partial^2 f_i(\alpha, \beta)}{\partial v^2}$.
Thus, we have
$$ \begin{aligned} f_{1u}(u_n, v_n) &= C_{11} + C_{13} e_{1n} + C_{14} e_{2n} + o(\|e_n\|^2), \\ f_{2u}(u_n, v_n) &= C_{21} + C_{23} e_{1n} + C_{24} e_{2n} + o(\|e_n\|^2), \\ f_{3u}(u_n, v_n) &= C_{31} + C_{33} e_{1n} + C_{34} e_{2n} + o(\|e_n\|^2), \end{aligned} \tag{15} $$
and
$$ \begin{aligned} f_{1v}(u_n, v_n) &= C_{12} + C_{14} e_{1n} + C_{15} e_{2n} + o(\|e_n\|^2), \\ f_{2v}(u_n, v_n) &= C_{22} + C_{24} e_{1n} + C_{25} e_{2n} + o(\|e_n\|^2), \\ f_{3v}(u_n, v_n) &= C_{32} + C_{34} e_{1n} + C_{35} e_{2n} + o(\|e_n\|^2). \end{aligned} \tag{16} $$
From (7) and (14), we have
$$ \begin{aligned} (p_1 - f_1(\alpha, \beta)) C_{11} + (p_2 - f_2(\alpha, \beta)) C_{21} + (p_3 - f_3(\alpha, \beta)) C_{31} &= 0, \\ (p_1 - f_1(\alpha, \beta)) C_{12} + (p_2 - f_2(\alpha, \beta)) C_{22} + (p_3 - f_3(\alpha, \beta)) C_{32} &= 0. \end{aligned} \tag{17} $$
By (14)–(16), and using Taylor’s expansion, Formula (13) can be transformed into the following form,
$$ u_{n+1} = u_n + \frac{u_A + u_B e_{1n} + u_C e_{2n} + o(\|e_n\|^2)}{uv_A + uv_B e_{1n} + uv_C e_{2n} + o(\|e_n\|^2)}, \qquad v_{n+1} = v_n - \frac{v_A + v_B e_{1n} + v_C e_{2n} + o(\|e_n\|^2)}{uv_A + uv_B e_{1n} + uv_C e_{2n} + o(\|e_n\|^2)}, \tag{18} $$
where
$u_A$, $u_B$, $u_C$, $v_A$, $v_B$, $v_C$, $uv_A$, $uv_B$, $uv_C$ are polynomials in the Taylor coefficients $C_{ij}$, the values $f_i(\alpha, \beta)$ and the coordinates $p_1, p_2, p_3$ of the test point, obtained by collecting terms of the expansion. Their fully expanded expressions are lengthy and we do not reproduce them here; we only note that the constant term of the denominator is
$$ uv_A = \langle V_1, V_1 \rangle \langle V_2, V_2 \rangle - \langle V_1, V_2 \rangle^2, $$
which is positive because the surface is regular at $(\alpha, \beta)$.
From (17), we obtain $u_A = 0$ and $v_A = 0$. So Formula (18) can be transformed into the following form
$$ u_{n+1} = u_n + \frac{u_B e_{1n} + u_C e_{2n} + o(\|e_n\|^2)}{uv_A + uv_B e_{1n} + uv_C e_{2n} + o(\|e_n\|^2)}, \qquad v_{n+1} = v_n - \frac{v_B e_{1n} + v_C e_{2n} + o(\|e_n\|^2)}{uv_A + uv_B e_{1n} + uv_C e_{2n} + o(\|e_n\|^2)}. \tag{19} $$
Using Taylor’s expansion by symbolic computation software Maple 18, and simplifying, we obtain
$$ e_{1(n+1)} = u_{C1} e_{1n} + u_{C2} e_{2n} + o(\|e_n\|^2), \qquad e_{2(n+1)} = v_{C1} e_{1n} + v_{C2} e_{2n} + o(\|e_n\|^2), \tag{20} $$
where $e_{1(n+1)} = u_{n+1} - \alpha$, $e_{2(n+1)} = v_{n+1} - \beta$, $u_{C1} = \frac{u_B + uv_A}{uv_A}$, $u_{C2} = \frac{u_C}{uv_A}$, $v_{C1} = -\frac{v_B}{uv_A}$, $v_{C2} = \frac{uv_A - v_C}{uv_A}$. Formula (20) can be further simplified into the following form,
$$ e_{n+1} = C_0\, e_n + o(\|e_n\|^2), \tag{21} $$
where $C_0 = \begin{pmatrix} u_{C1} & u_{C2} \\ v_{C1} & v_{C2} \end{pmatrix}$, $e_n = \begin{pmatrix} e_{1n} \\ e_{2n} \end{pmatrix} = \begin{pmatrix} u_n - \alpha \\ v_n - \beta \end{pmatrix}$.
The result of Formula (21) implies that the iterative Formula (5) is first-order convergent. In the following, we continue to illustrate that convergence of the iterative Formula (5) is independent of the initial value (see Figure 2).
Our proof method is analogous to those in references [32,33]. Projecting all points, curves and the surface of Figure 1 onto the $yz$-plane yields Figure 2. When the iterative Formula (5) starts to iterate, according to the graphical demonstration, the corresponding parametric value of the footpoint q is $(u_{n+1}, v_{n+1})$. The midpoint of the points $(u_{n+1}, v_{n+1})$ and $(u_n, v_n)$ is $M = \left( \frac{u_{n+1} + u_n}{2}, \frac{v_{n+1} + v_n}{2} \right)$. From the graphical demonstration, there are two inequality relationships $u_n < \frac{u_{n+1} + u_n}{2} < \alpha$ and $v_n < \frac{v_{n+1} + v_n}{2} < \beta$. These results indicate the two inequality relationships $|e_{1(n+1)}| < |e_{1n}|$ and $|e_{2(n+1)}| < |e_{2n}|$; namely, the iterative error satisfies $\|e_{n+1}\|_2 < \|e_n\|_2$, where $\|e_n\|_2 = \sqrt{(u_n - \alpha)^2 + (v_n - \beta)^2}$. To sum up, we can verify that convergence of the iterative Formula (5) is independent of the initial value. ☐
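The linear convergence asserted by (21) can also be checked empirically. The sketch below runs the iteration on the surface of Example 1 in Section 3; the step function and the closed-form reference parameters (obtained by reducing the orthogonality conditions (7) to u + 2v = 0.3 and tan(u + v) = 2) are our own reconstruction, not code from the paper. The error ratio ‖e_{n+1}‖/‖e_n‖ settles at a constant strictly between 0 and 1, the signature of first-order convergence:

```python
import math

def first_order_step(u, v, p):
    """One First-Order step, specialized to the surface of Example 1,
    s(u, v) = (u + 2v, cos(u + v), sin(u + v)) (a sketch for illustration)."""
    w = u + v
    s = (u + 2*v, math.cos(w), math.sin(w))
    su = (1.0, -math.sin(w), math.cos(w))
    sv = (2.0, -math.sin(w), math.cos(w))
    dot = lambda a, b: sum(x*y for x, y in zip(a, b))
    # Normal vector s_u x s_v, then footpoint q = projection of p onto the tangent plane.
    n = (su[1]*sv[2] - su[2]*sv[1],
         su[2]*sv[0] - su[0]*sv[2],
         su[0]*sv[1] - su[1]*sv[0])
    d = tuple(pi - si for pi, si in zip(p, s))
    lam = dot(d, n) / dot(n, n)
    q = tuple(pi - lam*ni for pi, ni in zip(p, n))
    qs = tuple(qi - si for qi, si in zip(q, s))
    # Formula (5): Cramer's rule for the 2x2 normal equations (4).
    guu, guv, gvv = dot(su, su), dot(su, sv), dot(sv, sv)
    bu, bv = dot(qs, su), dot(qs, sv)
    det = guu*gvv - guv*guv
    return u + (gvv*bu - guv*bv)/det, v + (guu*bv - guv*bu)/det

# Exact projection parameters for p = (0.3, 0.5, 1.0) (our closed-form reduction).
p = (0.3, 0.5, 1.0)
alpha, beta = 2*math.atan(2.0) - 0.3, 0.3 - math.atan(2.0)

u, v = 1.0, 1.0
errs = []
for _ in range(12):
    u, v = first_order_step(u, v, p)
    errs.append(math.hypot(u - alpha, v - beta))
ratios = [errs[i+1]/errs[i] for i in range(4, 10)]
```

The ratios cluster near a fixed constant of roughly 0.12 rather than shrinking toward 0, i.e., the error contracts by an essentially fixed factor per step, consistent with (21).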

3. Numerical Examples

Example 1.
There is a general parametric surface $s(u, v) = (f_1(u, v), f_2(u, v), f_3(u, v)) = (u + 2v, \cos(u + v), \sin(u + v))$, $u, v \in [-2.0, 2.0]$, with a test point $p = (p_1, p_2, p_3) = (0.3, 0.5, 1.0)$. Using the First-Order method, the corresponding orthogonal projection parametric value is $(\alpha, \beta) = (1.9142974355881810, -0.80714871779409050)$; the initial iterative values $(u_0, v_0)$ are $(1, 2)$, $(2, 2)$, $(2, 2)$, $(0, 0)$, $(1, 1)$, and $(2, 1)$, respectively. Each initial iterative value is run 10 times, yielding 10 iteration times in nanoseconds. In Table 1, the mean running time of the first-order iterative method is 134,760, 141,798, 41,033, 140,051, 137,059, and 42,399 nanoseconds for the six initial iterative values, respectively. In the end, the overall average running time in Table 1 is 106,183.33 nanoseconds (≈0.10618 ms), while the overall average running time of Table 1 and Table 2 in [18] is 0.3565 ms. So the First-Order method is faster than the algorithm in [18] (see Figure 3).
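The projection parameters reported in Example 1 can be verified independently of the iteration: writing w = u + v, the orthogonality conditions (7) for this surface reduce to u + 2v = 0.3 and tan(w) = 2 (this reduction is ours, not stated in the paper), which yields a closed form. Note that β is negative:

```python
import math

# Surface s(u, v) = (u + 2v, cos(u+v), sin(u+v)), test point p = (0.3, 0.5, 1.0).
# Conditions (7) reduce to u + 2v = 0.3 and tan(u + v) = 2 (our reduction),
# so with w = u + v the projection parameters have a closed form.
w = math.atan(2.0)
alpha, beta = 2.0*w - 0.3, 0.3 - w   # alpha ~ 1.91430, beta ~ -0.80715

# Check the orthogonality conditions <p - s, s_u> = <p - s, s_v> = 0 at (alpha, beta).
p = (0.3, 0.5, 1.0)
s = (alpha + 2.0*beta, math.cos(w), math.sin(w))
su = (1.0, -math.sin(w), math.cos(w))
sv = (2.0, -math.sin(w), math.cos(w))
dot = lambda a, b: sum(x*y for x, y in zip(a, b))
d = tuple(pi - si for pi, si in zip(p, s))
```

Both residuals vanish to machine precision, confirming the tabulated parameter value $(\alpha, \beta) = (1.9142974355881810, -0.80714871779409050)$.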
Example 2.
There is a quasi-B-spline parametric surface $s(u, v) = (f_1(u, v), f_2(u, v), f_3(u, v)) = (u^6 + u^5 v + v^6 - u^2 v^3 + u v^4 - v^5 + u^4 + u^3 v + v^4 + u^3 - u v^2 + v^3 + u v + v^2 + 1,\; u^6 + 2u^4 v^2 - v^6 + u^5 + u^2 v^3 + v^5 + u^4 - u v^3 + u^3 - 2u v^2 + v^3 + 4u v + 2v^2 + 2u + 2v + 1,\; u^6 + u v^5 + u^4 v + u^2 v^3 + u^3 v + 2u^2 v^2 + v^4 + u^3 + 2u v + u)$, $u, v \in [0.0, 2.0]$, with a test point $p = (p_1, p_2, p_3) = (15.0, 20.0, 25.0)$. Using the First-Order method, the corresponding orthogonal projection parametric value is $(\alpha, \beta) = (1.0199334308624865, 1.3569112459785527)$; the initial iterative values $(u_0, v_0)$ are $(1, 1)$, $(2, 2)$, $(2, 0)$, $(1, 2)$, $(2, 1)$, and $(0, 2.0)$, respectively. Each initial iterative value is run 10 times, yielding 10 iteration times in nanoseconds. In Table 2, the mean running time of the First-Order method is 500,831, 440,815, 480,969, 445,755, 426,737, and 488,092 nanoseconds for the six initial iterative values, respectively. In the end, the overall average running time is 463,866.50 nanoseconds (≈0.46387 ms), while the overall average running time of Example 2 for the algorithm in [18] is 1.705624977 ms under the same initial iteration condition. So the First-Order method is faster than the algorithm in [18] (see Figure 4).
In Table 1 and Table 2, the iteration was terminated once $\sqrt{(u_n - u_{n-1})^2 + (v_n - v_{n-1})^2} < 1\mathrm{E}{-}17$; the approximate zero $(\alpha, \beta)$ found up to the 17th decimal place is displayed. All computations were done under g++ in a Fedora Linux 8 environment on our personal computer with a T2080 1.73 GHz CPU and 2.5 GB of memory. Given the overall average running time of 0.336222 ms in Examples 1 and 2 for the algorithm in [18], the First-Order method is faster than the algorithm in [18]. In [18], the authors point out that their algorithm is faster than that in [8], which means the First-Order method is also faster than the one in [8]. At the same time, the overall average running time of 61.81167 ms over three examples for the algorithm in [26] is obtained, so the First-Order method is faster than the algorithm in [26]. Moreover, in [26], the authors indicate that their algorithm is faster than those in [1,15], which means the First-Order method is also faster than the ones in [1,15]. To sum up, the First-Order method converges faster than the existing methods in [1,8,15,18,26].

4. The Improved Algorithm

4.1. Counterexamples

In Section 2 and Section 3, we showed that convergence of the First-Order method is independent of the initial value, and some numerical examples illustrated that it converges faster than the existing methods. To exhibit some special cases where the First-Order method is not convergent, we construct five counterexamples as follows.
Counterexample 1.
Suppose the parametric surface $s(u, v) = (u, v, 1 + u^2 + v^2)$ with a test point $p = (0, 0, 0)$. It is clear that the unique orthogonal projection point and its parametric value are $(0, 0, 1)$ and $(\alpha, \beta) = (0, 0)$, respectively. For any initial iterative value, the First-Order method does not converge to $(0, 0)$.
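This failure is easy to reproduce numerically. The sketch below specializes the First-Order step to this paraboloid (the helper name and the starting point (0.5, 0.5) are our own choices): instead of approaching (0, 0), the iterates settle into a period-two oscillation around it.

```python
import math

def first_order_step(u, v):
    """One First-Order step for s(u, v) = (u, v, 1 + u^2 + v^2) with
    test point p = (0, 0, 0) (a sketch for illustration)."""
    s = (u, v, 1.0 + u*u + v*v)
    su, sv = (1.0, 0.0, 2*u), (0.0, 1.0, 2*v)
    dot = lambda a, b: sum(x*y for x, y in zip(a, b))
    n = (-2*u, -2*v, 1.0)                    # normal s_u x s_v
    d = (-s[0], -s[1], -s[2])                # p - s with p = (0, 0, 0)
    lam = dot(d, n) / dot(n, n)
    q = (-lam*n[0], -lam*n[1], -lam*n[2])    # footpoint q = p - lam*n
    qs = tuple(qi - si for qi, si in zip(q, s))
    # Formula (5): solve the 2x2 normal equations (4).
    guu, guv, gvv = dot(su, su), dot(su, sv), dot(sv, sv)
    bu, bv = dot(qs, su), dot(qs, sv)
    det = guu*gvv - guv*guv
    return u + (gvv*bu - guv*bv)/det, v + (guu*bv - guv*bu)/det

u, v = 0.5, 0.5
for _ in range(200):
    u, v = first_order_step(u, v)
u2, v2 = first_order_step(u, v)   # one more step: the iterates flip sign
```

Starting from (0.5, 0.5), the iterates end up oscillating between two symmetric parameter values of roughly ±0.2887 instead of tending to (0, 0), which is exactly the non-convergence claimed in Counterexample 1.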
Counterexample 2.
Suppose a parametric surface s(u, v) = (u, v, sin(u + v)), u, v ∈ [0, 2], with a test point p = (3, 4, 5). It can be verified that the unique orthogonal projection point of the test point p is (0.59213983546158970, 1.5921398354615897, 0.81764755470656153), with parametric value (α, β) = (0.59213983546158970, 1.5921398354615897). Whatever initial iterative value is given, the First-Order method fails to converge to (0.59213983546158970, 1.5921398354615897). More generally, for a parametric surface s(u, v) = (u, v, sin(au + bv)), a ≠ 0, b ≠ 0, the First-Order method fails to converge for any test point p and any initial iterative value.
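The footpoint claimed here can be cross-checked without any surface iteration: subtracting the two orthogonality equations for s(u, v) = (u, v, sin(u + v)) and p = (3, 4, 5) gives v − u = 1, and substituting w = u + v leaves a single scalar equation that plain bisection solves. A sketch (the bracket [2, 2.5] is our choice; g is monotone there):

```python
import math

def g(w):
    # residual of the footpoint condition after eliminating v = u + 1 (w = u + v)
    return 3.0 - (w - 1.0)/2.0 + (5.0 - math.sin(w))*math.cos(w)

lo, hi = 2.0, 2.5        # g(2) > 0 > g(2.5)
for _ in range(200):     # bisection to machine precision
    mid = 0.5*(lo + hi)
    if g(lo)*g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
w = 0.5*(lo + hi)
u, v = (w - 1.0)/2.0, (w + 1.0)/2.0
print(u, v)              # ≈ (0.5921398354615897, 1.5921398354615897)
```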
Counterexample 3.
Suppose a parametric surface s(u, v) = (u, v, cos(u + v)), u, v ∈ [0, 2], with a test point p = (4, 5, 6). It can be verified that the unique orthogonal projection point of the test point p is (0.83182106378141485, 1.8318210637814148, 0.88793946367725301), with parametric value (α, β) = (0.83182106378141485, 1.8318210637814148). For any initial iterative value, the First-Order method fails to converge to (0.83182106378141485, 1.8318210637814148). In addition, for a parametric surface s(u, v) = (u, v, cos(au + bv)), a ≠ 0, b ≠ 0, the First-Order method fails to converge for any test point p and any initial iterative value.
Counterexample 4.
Suppose a parametric surface s(u, v) = (u, v, sin(u^2 + v^2)), u, v ∈ [0, 2], with a test point p = (4, 5, 6). It is known that the unique orthogonal projection point of the test point p is (0.86886081685860457, 1.0860760210732557, 0.93459272735858134), with parametric value (α, β) = (0.86886081685860457, 1.0860760210732557). For any initial iterative value, the First-Order method fails to converge to (0.86886081685860457, 1.0860760210732557). Furthermore, for a parametric surface s(u, v) = (u, v, sin(u^{2n} + v^{2n})), n = 1, 2, 3, …, the First-Order method fails to converge for any test point p and any initial iterative value.
Counterexample 5.
Suppose a parametric surface s(u, v) = (u, v, cos(u^2 + v^2)), u, v ∈ [0, 2], with a test point p = (4, 5, 6). It is known that the unique orthogonal projection point of the test point p is (1.0719814278710903, 1.3399767848388629, −0.98067565161631654), with parametric value (α, β) = (1.0719814278710903, 1.3399767848388629). For any initial iterative value, the First-Order method fails to converge to (1.0719814278710903, 1.3399767848388629). Furthermore, for a parametric surface s(u, v) = (u, v, cos(u^{2n} + v^{2n})), n = 1, 2, 3, …, the First-Order method fails to converge for any test point p and any initial iterative value.

4.2. The Improved Algorithm

Since the First-Order method is not convergent in some special cases, we present an improved algorithm which converges for any parametric surface, test point and initial iterative value. For simplicity, we write (u, v)^T as t, namely, t = (u, v)^T, t_n = (u_n, v_n)^T. The classic iterative method for solving a system of nonlinear equations is Newton’s method, whose iterative expression is
t_{n+1} = t_n − [F′(t_n)]^{−1} F(t_n), n ≥ 0,
where the system F ( t ) = 0 is expressed as follows:
F_1(t) = ⟨p − s(u, v), ∂s(u, v)/∂u⟩ = 0, F_2(t) = ⟨p − s(u, v), ∂s(u, v)/∂v⟩ = 0.
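For a surface supplied only as a callable, the two residuals of this system can be assembled with central-difference tangents; a minimal sketch (the step h and the sample paraboloid of Counterexample 1 are our choices):

```python
def residuals(s, p, u, v, h=1e-6):
    """F1 = <p - s, s_u>, F2 = <p - s, s_v> with central-difference tangents."""
    su = [(x - y)/(2*h) for x, y in zip(s(u + h, v), s(u - h, v))]
    sv = [(x - y)/(2*h) for x, y in zip(s(u, v + h), s(u, v - h))]
    d  = [pi - si for pi, si in zip(p, s(u, v))]
    return (sum(x*y for x, y in zip(d, su)),
            sum(x*y for x, y in zip(d, sv)))

# sample: the paraboloid of Counterexample 1, whose footpoint for p = (0, 0, 0)
# lies at (u, v) = (0, 0)
paraboloid = lambda u, v: (u, v, 1.0 + u*u + v*v)
print(residuals(paraboloid, (0.0, 0.0, 0.0), 0.0, 0.0))   # ≈ (0.0, 0.0)
```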
Then, the more specific expression of Newton’s iterative Formula (22) is the following,
t_{n+1} = G(t_n) = t_n − [F′(t_n)]^{−1} F(t_n),
where G(t_n) = (G_1(t_n), G_2(t_n))^T with G_1(t_n) = u_n − [∂F_2(t_n)/∂v · F_1(t_n) − ∂F_1(t_n)/∂v · F_2(t_n)]/F_0, G_2(t_n) = v_n − [∂F_1(t_n)/∂u · F_2(t_n) − ∂F_2(t_n)/∂u · F_1(t_n)]/F_0, and F_0 = ∂F_1(t_n)/∂u · ∂F_2(t_n)/∂v − ∂F_1(t_n)/∂v · ∂F_2(t_n)/∂u. This method is quadratically convergent in a neighborhood of the solution where the Jacobian matrix is non-singular, so its convergence rate exceeds that of the First-Order method. However, its convergence depends on the choice of the initial value: Newton’s second-order iterative method is effective only if its convergence condition is satisfied. To improve the robustness of convergence, building on the First-Order method, we present Algorithm 1 (the hybrid second-order algorithm) for orthogonal projection onto the parametric surface. The hybrid second-order algorithm combines the respective advantages of the two methods: if the iterative parametric value produced by the First-Order geometric iteration satisfies the convergence condition for Newton’s second-order iterative method, we switch to Newton’s method to accelerate convergence; if not, we continue the First-Order geometric iteration until its iterative parametric value satisfies the convergence condition, and then switch as above, after which the algorithm terminates. This procedure guarantees that the whole iterative process converges, that its convergence is independent of the choice of the initial value, and that convergence is accelerated. The hybrid second-order iterative algorithm for computing the minimum distance between a point and a parametric surface can be realized as follows.
Algorithm 1: The hybrid second-order algorithm.
Input: Input the initial iterative parametric value t 0 , parametric surface s ( u , v ) and test point p.
Output: Output the final iterative parametric value.
Step 1.
Input the initial iterative parametric value t 0 .
Step 2.
Using iterative Formula (5), compute the parametric incremental value Δt, and update t_0 to t_0 + Δt, namely, t_0 = t_0 + Δt.
Step 3.
Judge whether the norm of the difference between the former t_0 and the latter t_0 is near 0 (‖Δt‖ < ε). If so, end this algorithm.
Step 4.
Substitute the new t_0 into ∂G_1(t_0)/∂u, ∂G_1(t_0)/∂v, ∂G_2(t_0)/∂u and ∂G_2(t_0)/∂v, respectively, and judge whether
      (|∂G_1(t_0)/∂u| < 1/2 and |∂G_1(t_0)/∂v| < 1/2 and |∂G_2(t_0)/∂u| < 1/2 and |∂G_2(t_0)/∂v| < 1/2).
     If (|∂G_1(t_0)/∂u| < 1/2 and |∂G_1(t_0)/∂v| < 1/2 and |∂G_2(t_0)/∂u| < 1/2 and |∂G_2(t_0)/∂v| < 1/2)
       {
        Using Newton’s second-order iterative Formula (22), compute
         t_0 = t_0 − [F′(t_0)]^{−1} F(t_0) until the norm of the difference between the former t_0 and
        the latter t_0 is near 0 (‖Δt‖ < ε), then end this algorithm.
       }
       Else {
           go to Step 2.
       }
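A compact sketch of Algorithm 1 follows. Since the First-Order update of Formula (5) appears earlier in the paper and is not reproduced here, a damped gradient step on the squared distance stands in for the first-order phase (our substitution); the switch condition tests finite-difference estimates of the four partials of the Newton map G against 1/2:

```python
def hybrid_project(F1, F2, F1u, F1v, F2u, F2v, t0, eps=1e-14, lam=0.05):
    """Sketch of Algorithm 1. A damped gradient step on the squared distance
    (our stand-in for the First-Order update of Formula (5)) runs until all
    four partials of the Newton map G are below 1/2, then Newton takes over."""
    def newton(u, v):
        f1, f2 = F1(u, v), F2(u, v)
        a, b, c, d = F1u(u, v), F1v(u, v), F2u(u, v), F2v(u, v)
        F0 = a*d - b*c
        return u - (d*f1 - b*f2)/F0, v - (a*f2 - c*f1)/F0

    def newton_ok(u, v, h=1e-5):
        # finite-difference estimates of dG1/du, dG1/dv, dG2/du, dG2/dv
        pu, mu = newton(u + h, v), newton(u - h, v)
        pv, mv = newton(u, v + h), newton(u, v - h)
        parts = ((pu[0] - mu[0])/(2*h), (pv[0] - mv[0])/(2*h),
                 (pu[1] - mu[1])/(2*h), (pv[1] - mv[1])/(2*h))
        return all(abs(x) < 0.5 for x in parts)

    u, v = t0
    for _ in range(1000):
        if newton_ok(u, v):
            un, vn = newton(u, v)                          # second-order phase
        else:
            un, vn = u + lam*F1(u, v), v + lam*F2(u, v)    # first-order phase
        if abs(un - u) + abs(vn - v) < eps:                # ||dt|| < eps
            return un, vn
        u, v = un, vn
    return u, v

# demo on the hand-derived system of Counterexample 1 (root at (0, 0))
F1  = lambda u, v: -3*u - 2*u**3 - 2*u*v**2
F2  = lambda u, v: -3*v - 2*u**2*v - 2*v**3
F1u = lambda u, v: -3 - 6*u**2 - 2*v**2
F1v = lambda u, v: -4*u*v
F2u = lambda u, v: -4*u*v
F2v = lambda u, v: -3 - 2*u**2 - 6*v**2
print(hybrid_project(F1, F2, F1u, F1v, F2u, F2v, (1.0, 1.0)))
```

Far from the root the Newton-map partials exceed 1/2, so the first-order fallback runs; once the iterate is close enough, the condition triggers and Newton finishes in a handful of steps.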
Remark 1.
We give a geometric interpretation of Newton’s iterative method with two variables in Figure 5, where the abscissa axis t represents the two-dimensional coordinate (u, v) and the vertical coordinate z represents the function value F_1(t) or F_2(t). The curves F_1(t) and F_2(t) actually denote two surfaces in three-dimensional space, determined by the first and the second formula in Equation (23), respectively. γ is the intersection point of two lines: the first is the intersection line of F_1(t) with the plane t, and the second is the intersection line of F_2(t) with the plane t. In other words, γ is the solution of Equation (23). Through the point t_0, draw a line perpendicular to the plane t; this perpendicular line intersects the surfaces F_1(t) and F_2(t) at the first and the second intersection point, respectively. Through the first and the second intersection point, we construct two tangent planes. These two tangent planes intersect the plane t in two lines, designated the first and the second intersection line, and these two lines intersect at the point t_1. Given t_1, we repeat the procedure above to obtain the new first and second intersection points, the new first and second intersection lines, and finally the new intersection point t_2. Repeat these steps until the iterative value converges to γ = (α, β). Of course, the condition of the fixed point theorem for Newton’s iterative method must be satisfied before Newton’s iterative method can take over.
Remark 2.
For the special cases in Section 4.1 where the First-Order method does not converge, our hybrid second-order algorithm does converge. Moreover, in many tested examples we find that the method converges for any initial iterative value, any test point and any parametric surface.

4.3. Convergence Analysis for the Improved Algorithm

Definition 1.
In Reference [34] (Fixed Point), a function G from D ⊂ R^2 into R^2 has a fixed point at p ∈ D if G(p) = p.
Theorem 2.
In Reference [34] (Fixed Point Theorem), let D = {(x_1, x_2, …, x_n)^T | a_i ≤ x_i ≤ b_i, i = 1, 2, …, n} for some collection of constants a_1, a_2, …, a_n and b_1, b_2, …, b_n. Suppose G is a continuous function from D ⊂ R^n into R^n with the property that G(x) ∈ D whenever x ∈ D. Then, G has a fixed point in D. Moreover, suppose that all the component functions of G have continuous partial derivatives and that a constant 0 < L < 1 exists with |∂g_i(x)/∂x_j| ≤ L/n whenever x ∈ D, for each j = 1, 2, …, n and each component function g_i. Then, the sequence {x_k}_{k=0}^∞, defined by an arbitrarily selected x_0 in D and generated by
x_k = G(x_{k−1}), k ≥ 1,
converges to the unique fixed point p ∈ D and satisfies
‖x_k − p‖_2 ≤ (L^k/(1 − L)) ‖x_1 − x_0‖_2.
In the following, we directly present the fixed point theorem for Newton’s iterative method.
Theorem 3.
Let D = {t = (u, v)^T | a_1 ≤ u ≤ b_1, a_2 ≤ v ≤ b_2} for some collection of constants a_1, a_2 and b_1, b_2. Suppose G (see Equation (24)) is a continuous function from D ⊂ R^2 into R^2 with the property that G(t) ∈ D whenever t ∈ D. Then, G has a fixed point in D. Moreover, suppose that both component functions of G have continuous partial derivatives and that a constant 0 < L < 1 exists with |∂G_1(t)/∂u| < L/2, |∂G_1(t)/∂v| < L/2, |∂G_2(t)/∂u| < L/2 and |∂G_2(t)/∂v| < L/2 whenever t ∈ D. Then, the sequence {t_k}_{k=0}^∞, defined by an arbitrarily selected t_0 in D and generated by
t_k = G(t_{k−1}), k ≥ 1,
converges to the unique fixed point p ∈ D and satisfies
‖t_k − p‖_2 ≤ (L^k/(1 − L)) ‖t_1 − t_0‖_2.
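The bound of Theorem 3 can be observed numerically on a toy contraction of our own choosing, G(u, v) = (cos(v)/4, sin(u)/4), whose partial derivatives are bounded by 1/4 < L/2 for L = 0.6:

```python
import math

# toy contraction (our example, not from the paper); every partial derivative
# of G is at most 1/4, below L/2 for L = 0.6, so Theorem 3's bound applies
G = lambda u, v: (math.cos(v)/4.0, math.sin(u)/4.0)
L = 0.6

# approximate the fixed point by iterating far past convergence
p = (0.0, 0.0)
for _ in range(200):
    p = G(*p)

# verify ||t_k - p|| <= L^k/(1 - L) * ||t_1 - t_0|| for k = 1..14
t0 = (0.0, 0.0)
t1 = G(*t0)
c = math.dist(t1, t0)/(1.0 - L)
t, checks = t1, []
for k in range(1, 15):
    checks.append(math.dist(t, p) <= c*L**k)
    t = G(*t)
print(all(checks))   # True
```

The actual contraction is far faster than L = 0.6, so the theorem's geometric bound is comfortably satisfied at every step.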
Theorem 4.
The convergence order of the hybrid second-order algorithm for orthogonal projection onto the parametric surface is 2. Convergence of the hybrid second-order algorithm is independent of the initial value.
Proof. 
Let γ = (α, β)^T be a root of the system F(t) = 0, t = (u, v)^T, where F(t) = 0 is expressed by Formula (23). Define e_n = t_n − γ and use Taylor’s expansion around γ; we have
F(t_n) = F(γ) + F′(γ)(t_n − γ)/1! + F″(γ)(t_n − γ)^2/2! + F‴(γ)(t_n − γ)^3/3! + ⋯
Since γ is a simple zero of system (23), F(γ) = 0. Then, we have
F(t_n) = F′(γ)[e_n + b_2 e_n^2 + b_3 e_n^3 + O(e_n^4)],
F′(t_n) = F′(γ)[I + 2b_2 e_n + 3b_3 e_n^2 + O(e_n^3)],
where b_k = [F′(γ)]^{−1} F^{(k)}(γ)/k!, k = 2, 3, ….
Using (21) and (29)–(31), we obtain
y_n = t_n − [F′(t_n)]^{−1} F(t_n) = γ + b_2 C_0^2 e_n^2 + O(e_n^3).
This result means that the hybrid algorithm is second-order convergent. According to the procedure of the hybrid second-order algorithm, if the iterative parametric value satisfies the convergence condition for Newton’s second-order iterative method, we switch to Newton’s method; if not, we continue the First-Order geometric iteration until the condition is satisfied, after which the algorithm terminates. When the hybrid second-order iterative algorithm runs the First-Order iteration, independence of the initial value is guaranteed by Theorem 1; when it runs Newton’s second-order iteration, independence of the initial value is guaranteed by Theorem 3. To sum up, convergence of the hybrid second-order iterative algorithm is independent of the initial value throughout the whole algorithm. ☐
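The Newton phase analyzed above can be transcribed directly from the componentwise formulas below Equation (24). On the hand-derived system of Counterexample 1 (our choice of test instance), the iteration collapses to the root (0, 0) within a few steps:

```python
def newton_step(u, v):
    """One step of (24) via the componentwise (Cramer's rule) formulas, for the
    system of Counterexample 1: s = (u, v, 1 + u^2 + v^2), p = (0, 0, 0),
    which gives F1 = -3u - 2u^3 - 2uv^2 and F2 = -3v - 2u^2 v - 2v^3."""
    f1 = -3*u - 2*u**3 - 2*u*v**2
    f2 = -3*v - 2*u**2*v - 2*v**3
    f1u, f1v = -3 - 6*u**2 - 2*v**2, -4*u*v      # Jacobian entries
    f2u, f2v = -4*u*v, -3 - 2*u**2 - 6*v**2
    F0 = f1u*f2v - f1v*f2u                       # determinant
    return u - (f2v*f1 - f1v*f2)/F0, v - (f1u*f2 - f2u*f1)/F0

u, v = 0.5, 0.5
for _ in range(8):
    u, v = newton_step(u, v)
print(u, v)   # both effectively zero: the iteration has reached the root
```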

4.4. Numerical Experiments

Example 3.
We replicate Example 1 using the hybrid second-order algorithm in Table 3 and compare with the results in Table 1. In Table 3, the mean running time of the hybrid second-order algorithm is 303,210.7, 319,045.5, 92,325.2, 315,116.9, 308,383.9, and 95,399.5 nanoseconds for the six initial iterative values, respectively. The overall average running time in Table 3 is 238,913.62 nanoseconds (≈0.2389 ms), while the overall average running time for the algorithm in [18] over Tables 1 and 2 is 0.3565 ms. So our hybrid second-order algorithm is faster than the algorithm in [18]. On the other hand, from Table 3,
d = ‖p − s(α, β)‖ = √((p_1 − f_1(α, β))^2 + (p_2 − f_2(α, β))^2 + (p_3 − f_3(α, β))^2) = √((0.3 − 0.3)^2 + (0.5 − 0.4472135953)^2 + (1.0 − 0.8944271911)^2) = 0.1180339887.
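This value is just the Euclidean norm of the component differences reported above; for instance:

```python
import math

p    = (0.3, 0.5, 1.0)                       # test point of Example 1
s_ab = (0.3, 0.4472135953, 0.8944271911)     # s(alpha, beta) from the text
d = math.dist(p, s_ab)
print(d)   # ≈ 0.1180339887
```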
Example 4.
We replicate Example 2 using the hybrid second-order algorithm in Table 4 and compare with the results in Table 2. In Table 4, the mean running time of the hybrid second-order algorithm is 1,126,871; 991,834.4; 1,082,181; 1,002,949; 960,158.4; and 1,098,208 nanoseconds for the six initial iterative values, respectively. The overall average running time is 1,043,700 nanoseconds (≈1.0437 ms), while the overall average running time of Example 2 for the algorithm in [18] is 1.705624977 ms under the same initial iteration conditions. So the hybrid second-order algorithm is faster than the algorithm proposed in [18]. On the other hand, from Table 4,
d = ‖p − s(α, β)‖ = √((p_1 − f_1(α, β))^2 + (p_2 − f_2(α, β))^2 + (p_3 − f_3(α, β))^2) = √((15.0 − 16.9423279997)^2 + (20.0 − 20.4810001317)^2 + (25.0 − 23.3940081767)^2) = 2.5657764754.
In Table 3 and Table 4, the convergence tolerance, the number of decimal places of the approximate zero and the computation environment are the same as in Table 1 and Table 2, and the unit of time is nanoseconds. The overall average running time of 0.336222 ms in Examples 3 and 4 for the algorithm in [18] implies that our hybrid second-order algorithm is faster than the algorithm in [18]. In [18], the authors point out that their algorithm is faster than the algorithm in [8], so the hybrid second-order algorithm is also faster than the one in [8]. At the same time, the overall average running time of 61.81167 ms over three examples for the algorithm in [26] implies that our hybrid second-order algorithm is faster than the algorithm in [26]. Moreover, in [26], the authors indicate that their algorithm is faster than the ones in [1,15], so our hybrid second-order algorithm is also faster than the ones in [1,15]. To sum up, apart from the First-Order method itself, our hybrid second-order algorithm converges faster than the existing methods in [1,8,15,18,26].
Remark 3.
The hybrid second-order algorithm presented above applies to the single projection point case of orthogonal projection of a test point onto a parametric surface s(u, v). For the situation with multiple orthogonal projection points, the basic idea of our approach is as follows:
(1)
Divide the parametric region [a, b] × [c, d] of the parametric surface s(u, v) into M^2 sub-regions [a_i, a_{i+1}] × [c_j, c_{j+1}], i, j = 0, 1, 2, …, M − 1, where a_0 = a, a_{i+1} − a_i = (b − a)/M, a_M = b, c_0 = c, c_{j+1} − c_j = (d − c)/M, c_M = d.
(2)
Randomly select an initial iterative parametric value in each sub-region.
(3)
For each initial iterative parametric value, run the hybrid second-order iterative algorithm until it converges. Assume the converged iterative parametric values are (α_i, β_j), i, j = 0, 1, 2, …, M − 1, respectively.
(4)
Compute the local minimum distances d_{ij} = ‖p − s(α_i, β_j)‖, i, j = 0, 1, 2, …, M − 1.
(5)
Compute the global minimum distance d = ‖p − s(α, β)‖ = min d_{ij}, i, j = 0, 1, 2, …, M − 1. If we wish to find all solutions, the subdivision in step (1) should be performed with M very large.
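Steps (1)–(5) can be sketched as follows, again using the system of Counterexample 1 (our choice) so that the residuals have closed form and the unique footpoint (0, 0) at distance 1 is known; a pure Newton step stands in for the hybrid iteration (our simplification):

```python
import math, random

# Counterexample 1 as the test surface: s = (u, v, 1 + u^2 + v^2), p = (0, 0, 0);
# residuals and Jacobian are hand-derived
def dist(u, v):
    return math.sqrt(u*u + v*v + (1.0 + u*u + v*v)**2)

def newton_step(u, v):
    f1 = -3*u - 2*u**3 - 2*u*v**2
    f2 = -3*v - 2*u**2*v - 2*v**3
    f1u, f1v = -3 - 6*u**2 - 2*v**2, -4*u*v
    f2u, f2v = -4*u*v, -3 - 2*u**2 - 6*v**2
    F0 = f1u*f2v - f1v*f2u
    return u - (f2v*f1 - f1v*f2)/F0, v - (f1u*f2 - f2u*f1)/F0

def multi_projection(a, b, c, d, M=4, iters=60):
    """Split [a,b] x [c,d] into M^2 sub-regions, iterate from a random start in
    each, and keep the global minimum distance (steps (1)-(5) of Remark 3)."""
    random.seed(0)
    best = None
    for i in range(M):
        for j in range(M):
            u = random.uniform(a + i*(b - a)/M, a + (i + 1)*(b - a)/M)
            v = random.uniform(c + j*(d - c)/M, c + (j + 1)*(d - c)/M)
            for _ in range(iters):
                u, v = newton_step(u, v)
            if best is None or dist(u, v) < best[0]:
                best = (dist(u, v), u, v)
    return best

dmin, ustar, vstar = multi_projection(-1.0, 1.0, -1.0, 1.0)
print(dmin)   # 1.0: the unique footpoint is s(0, 0) = (0, 0, 1)
```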
Remark 4.
In addition to the two test examples, we have also tested many other examples. According to these test results, for different initial iterative values, the hybrid second-order algorithm converges to the corresponding orthogonal projection point. Namely, if the initial iterative value (u_0, v_0) ∈ [a, b] × [c, d] belongs to the parametric region of a parametric surface s(u, v), and the corresponding orthogonal projection parametric value for the test point p = (p_1, p_2, p_3) is (α, β), then the test point p and its orthogonal projection parametric value (α, β) satisfy the two inequality relationships
|⟨p − s(α, β), ∂s(α, β)/∂u⟩| < 1E−16, |⟨p − s(α, β), ∂s(α, β)/∂v⟩| < 1E−16.
These two inequality relationships indicate that Formula (6) or (7) is satisfied, which illustrates that convergence of the hybrid second-order algorithm is independent of the initial value. Furthermore, the hybrid second-order algorithm is robust and efficient, addressing the first two of the ten challenges proposed in [35].

5. Conclusions

This paper investigates the problem of projecting a point onto a parametric surface. To compute the minimum distance between a point and a parametric surface, three well-known first-order algorithms have been proposed in [29,30,31] (hereafter, the First-Order method). In this paper, we prove the method’s first-order convergence and its independence of the initial value, and we give some numerical examples to illustrate its faster convergence compared with the existing methods. For some special cases where the First-Order method does not converge, we combine it with Newton’s second-order iterative method to obtain the hybrid second-order algorithm. Our method essentially exploits hybrid iteration: it achieves second-order convergence, is faster than the existing methods and is independent of the initial value. Some numerical examples confirm these conclusions. An area for future research is to develop a method for computing the minimum distance between a point and a higher-dimensional parametric surface.

Acknowledgments

We take the opportunity to thank the anonymous reviewers for their thoughtful and meaningful comments. This work is supported by the National Natural Science Foundation of China (Grant No. 61263034), the Scientific and Technology Foundation Funded Project of Guizhou Province (Grant Nos. [2014]2092 and [2014]2093), the Feature Key Laboratory for Regular Institutions of Higher Education of Guizhou Province (Grant No. [2016]003), the National Bureau of Statistics Foundation Funded Project (Grant No. 2014LY011), the Key Laboratory of Pattern Recognition and Intelligent System of Construction Project of Guizhou Province (Grant No. [2009]4002), and the Information Processing and Pattern Recognition for Graduate Education Innovation Base of Guizhou Province. Linke Hou is supported by the Shandong Provincial Natural Science Foundation of China (Grant No. ZR2016GM24). Juan Liang is supported by the Scientific and Technology Key Foundation of Taiyuan Institute of Technology (Grant No. 2016LZ02). Qiaoyang Li is supported by the Fund of National Social Science (No. 14XMZ001) and the Fund of the Chinese Ministry of Education (No. 15JZD034).

Author Contributions

The contributions of all of the authors are the same. All of them have worked together to develop the present manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ma, Y.L.; Hewitt, W.T. Point inversion and projection for NURBS curve and surface: Control polygon approach. Comput. Aided Geom. Des. 2003, 20, 79–99. [Google Scholar] [CrossRef]
  2. Yang, H.P.; Wang, W.P.; Sun, J.G. Control point adjustment for B-spline curve approximation. Comput. Aided Des. 2004, 36, 639–652. [Google Scholar] [CrossRef] [Green Version]
  3. Johnson, D.E.; Cohen, E. A Framework for efficient minimum distance computations. In Proceedings of the IEEE International Conference on Robotics & Automation, Leuven, Belgium, 20 May 1998. [Google Scholar]
  4. Piegl, L.; Tiller, W. Parametrization for surface fitting in reverse engineering. Comput. Aided Des. 2001, 33, 593–603. [Google Scholar] [CrossRef]
  5. Pegna, J.; Wolter, F.E. Surface curve design by orthogonal projection of space curves onto free-form surfaces. ASME Trans. J. Mech. Design 1996, 118, 45–52. [Google Scholar] [CrossRef]
  6. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  7. Song, H.-C.; Yong, J.-H.; Yang, Y.-J.; Liu, X.-M. Algorithm for orthogonal projection of parametric curves onto B-spline surfaces. Comput.-Aided Des. 2011, 43, 381–393. [Google Scholar] [CrossRef]
  8. Hu, S.M.; Wallner, J. A second order algorithm for orthogonal projection onto curves and surfaces. Comput. Aided Geom. Des. 2005, 22, 251–260. [Google Scholar] [CrossRef]
  9. Mortenson, M.E. Geometric Modeling; Wiley: New York, NY, USA, 1985. [Google Scholar]
  10. Zhou, J.M.; Sherbrooke, E.C.; Patrikalakis, N. Computation of stationary points of distance functions. Eng. Comput. 1993, 9, 231–246. [Google Scholar] [CrossRef]
  11. Johnson, D.E.; Cohen, E. Distance extrema for spline models using tangent cones. In Proceedings of the 2005 Conference on Graphics Interface, Victoria, BC, Canada, 9–11 May 2005. [Google Scholar]
  12. Limaien, A.; Trochu, F. Geometric algorithms for the intersection of curves and surfaces. Comput. Graph. 1995, 19, 391–403. [Google Scholar] [CrossRef]
  13. Polak, E.; Royset, J.O. Algorithms with adaptive smoothing for finite minimax problems. J. Optim. Theory Appl. 2003, 119, 459–484. [Google Scholar] [CrossRef]
  14. Patrikalakis, N.; Maekawa, T. Shape Interrogation for Computer Aided Design and Manufacturing; Springer: Berlin, Germany, 2001. [Google Scholar]
  15. Selimovic, I. Improved algorithms for the projection of points on NURBS curves and surfaces. Comput. Aided Geom. Des. 2006, 23, 439–445. [Google Scholar] [CrossRef]
  16. Cohen, E.; Lyche, T.; Riesenfeld, R. Discrete B-splines and subdivision techniques in computer-aided geometric design and computer graphics. Comput. Graph. Image Proc. 1980, 14, 87–111. [Google Scholar] [CrossRef]
  17. Piegl, L.; Tiller, W. The NURBS Book; Springer: New York, NY, USA, 1995. [Google Scholar]
  18. Liu, X.-M.; Yang, L.; Yong, J.-H.; Gu, H.-J.; Sun, J.-G. A torus patch approximation approach for point projection on surfaces. Comput. Aided Geom. Des. 2009, 26, 593–598. [Google Scholar] [CrossRef]
  19. Li, X.W.; Xin, Q.; Wu, Z.N.; Zhang, M.S.; Zhang, Q. A geometric strategy for computing intersections of two spatial parametric curves. Vis. Comput. 2013, 29, 1151–1158. [Google Scholar] [CrossRef]
  20. Kim, K.-J. Minimum distance between a canal surface and a simple surface. Comput.-Aided Des. 2003, 35, 871–879. [Google Scholar] [CrossRef]
  21. Li, X.Y.; Jiang, H.; Chen, S.; Wang, X.C. An efficient surface-surface intersection algorithm based on geometry characteristics. Comput. Graph. 2004, 28, 527–537. [Google Scholar] [CrossRef]
  22. Chen, X.-D.; Ma, W.Y.; Xu, G.; Paul, J.-C. Computing the Hausdorff distance between two B-spline curves. Comput.-Aided Des. 2010, 42, 1197–1206. [Google Scholar] [CrossRef]
  23. Chen, X.-D.; Chen, L.Q.; Wang, Y.G.; Xu, G.; Yong, J.-H.; Paul, J.-C. Computing the minimum distance between two Bézier curves. J. Comput. Appl. Math. 2009, 229, 294–301. [Google Scholar] [CrossRef] [Green Version]
  24. Sundar, B.R.; Chunduru, A.; Tiwari, R.; Gupta, A.; Muthuganapathy, R. Footpoint distance as a measure of distance computation between curves and surfaces. Comput. Graph. 2014, 38, 300–309. [Google Scholar] [CrossRef]
  25. Chen, X.-D.; Yong, J.-H.; Wang, G.Z.; Paul, J.-C.; Xu, G. Computing the minimum distance between a point and a NURBS curve. Comput.-Aided Des. 2008, 40, 1051–1054. [Google Scholar] [CrossRef] [Green Version]
  26. Chen, X.-D.; Xu, G.; Yong, J.-H.; Wang, G.Z.; Paul, J.-C. Computing the minimum distance between a point and a clamped B-spline surface. Graph. Model. 2009, 71, 107–112. [Google Scholar] [CrossRef] [Green Version]
  27. Oh, Y.-T.; Kim, Y.-J.; Lee, J.; Kim, Y.-S.; Elber, G. Efficient point-projection to freeform curves and surfaces. Comput. Aided Geom. Des. 2012, 29, 242–254. [Google Scholar] [CrossRef]
  28. Ko, K.; Sakkalis, T. Orthogonal projection of points in CAD/CAM applications: An overview. J. Comput. Des. Eng. 2014, 1, 116–127. [Google Scholar] [CrossRef]
  29. Hartmann, E. On the curvature of curves and surfaces defined by normal forms. Comput. Aided Geom. Des. 1999, 16, 355–376. [Google Scholar] [CrossRef]
  30. Hoschek, J.; Lasser, D. Fundamentals of Computer Aided Geometric Design; A.K. Peters, Ltd.: Natick, MA, USA, 1993. [Google Scholar]
  31. Hu, S.M.; Sun, J.G.; Jin, T.G.; Wang, G.Z. Computing the parameter of points on NURBS curves and surfaces via moving affine frame method. J. Softw. 2000, 11, 49–53. (In Chinese) [Google Scholar]
  32. Melman, A. Geometry and convergence of Euler’s and Halley’s methods. SIAM Rev. 1997, 39, 728–735. [Google Scholar] [CrossRef]
  33. Traub, J.F. A Class of globally convergent iteration functions for the solution of polynomial equations. Math. Comput. 1966, 20, 113–138. [Google Scholar] [CrossRef]
  34. Burden, R.L.; Faires, J.D. Numerical Analysis, 9th ed.; Brooks/Cole Cengage Learning: Boston, MA, USA, 2011. [Google Scholar]
  35. Piegl, L.A. Ten challenges in computer-aided design. Comput.-Aided Des. 2005, 37, 461–470. [Google Scholar] [CrossRef]
Figure 1. Graphic demonstration of the First-Order method.
Figure 2. Graphic demonstration for convergence analysis.
Figure 3. Illustration of Example 1.
Figure 4. Illustration of Example 2.
Figure 5. Graphic demonstration for the hybrid second-order algorithm.
Table 1. Running time (in nanoseconds) for different initial iterative values by the First-Order method.
| (u_0, v_0) | (1,−2) | (−2,2) | (−2,−2) | (0,0) | (1,1) | (−2,−1) |
|---|---|---|---|---|---|---|
| 1 | 142,941 | 132,544 | 39,857 | 135,606 | 133,859 | 40,180 |
| 2 | 132,044 | 155,592 | 38,792 | 133,112 | 160,154 | 45,392 |
| 3 | 133,380 | 132,947 | 38,952 | 181,563 | 132,472 | 46,018 |
| 4 | 132,869 | 140,089 | 43,009 | 132,702 | 133,377 | 40,517 |
| 5 | 134,137 | 134,045 | 40,559 | 134,416 | 125,638 | 39,725 |
| 6 | 136,875 | 140,145 | 43,830 | 146,907 | 132,289 | 41,029 |
| 7 | 133,808 | 133,415 | 39,890 | 132,827 | 133,388 | 40,389 |
| 8 | 132,753 | 148,692 | 41,072 | 133,869 | 135,500 | 44,930 |
| 9 | 132,332 | 153,146 | 38,907 | 137,330 | 141,663 | 41,428 |
| 10 | 136,460 | 147,361 | 45,462 | 132,184 | 142,250 | 44,384 |
| Average time | 134,760 | 141,798 | 41,033 | 140,051 | 137,059 | 42,399 |
Table 2. Running time (in nanoseconds) for different initial iterative values by the First-Order method.
| (u_0, v_0) | (1,1) | (2,2) | (2,0) | (1,2) | (2,1) | (0,2) |
|---|---|---|---|---|---|---|
| 1 | 567,161 | 492,909 | 550,174 | 349,185 | 384,516 | 492,080 |
| 2 | 575,759 | 521,233 | 492,835 | 385,293 | 390,527 | 523,743 |
| 3 | 484,250 | 381,832 | 487,414 | 389,792 | 502,568 | 498,248 |
| 4 | 499,588 | 346,864 | 434,103 | 494,559 | 436,330 | 536,043 |
| 5 | 456,397 | 433,893 | 463,222 | 501,650 | 493,197 | 434,355 |
| 6 | 517,495 | 433,521 | 440,372 | 435,692 | 399,478 | 362,600 |
| 7 | 488,340 | 452,431 | 489,985 | 439,752 | 471,909 | 524,395 |
| 8 | 499,700 | 475,481 | 481,180 | 433,924 | 473,261 | 522,088 |
| 9 | 438,441 | 386,592 | 438,255 | 503,681 | 366,078 | 469,647 |
| 10 | 481,179 | 483,391 | 532,150 | 524,016 | 349,502 | 517,721 |
| Average time | 500,831 | 440,815 | 480,969 | 445,755 | 426,737 | 488,092 |
Table 3. Running time (in nanoseconds) for different initial iterative values by the hybrid second-order algorithm.
| (u_0, v_0) | (1,−2) | (−2,2) | (−2,−2) | (0,0) | (1,1) | (−2,−1) |
|---|---|---|---|---|---|---|
| 1 | 321,617 | 298,224 | 89,679 | 305,115 | 301,183 | 90,406 |
| 2 | 297,101 | 350,084 | 87,282 | 299,503 | 360,347 | 102,133 |
| 3 | 300,105 | 299,131 | 87,643 | 408,517 | 298,064 | 103,542 |
| 4 | 298,957 | 315,201 | 96,771 | 298,580 | 300,100 | 91,164 |
| 5 | 301,810 | 301,603 | 91,259 | 302,437 | 282,687 | 89,383 |
| 6 | 307,970 | 315,327 | 98,618 | 330,541 | 297,651 | 92,317 |
| 7 | 301,068 | 300,185 | 89,754 | 298,861 | 300,125 | 90,876 |
| 8 | 298,695 | 334,557 | 92,413 | 301,206 | 304,875 | 101,094 |
| 9 | 297,748 | 344,580 | 87,542 | 308,994 | 318,743 | 93,215 |
| 10 | 307,036 | 331,563 | 102,291 | 297,415 | 320,064 | 99,865 |
| Average time | 303,210.7 | 319,045.5 | 92,325.2 | 315,116.9 | 308,383.9 | 95,399.5 |
Table 4. Running time (in nanoseconds) for different initial iterative values using the hybrid second-order algorithm.
| (u_0, v_0) | (1,1) | (2,2) | (2,0) | (1,2) | (2,1) | (0,2) |
|---|---|---|---|---|---|---|
| 1 | 1,276,113 | 1,109,047 | 1,237,892 | 785,668 | 865,161 | 1,107,181 |
| 2 | 1,295,460 | 1,172,776 | 1,108,880 | 866,911 | 878,687 | 1,178,422 |
| 3 | 1,089,563 | 859,122 | 1,096,683 | 877,034 | 1,130,779 | 1,121,060 |
| 4 | 1,124,075 | 780,446 | 976,732 | 1,112,759 | 981,744 | 1,206,097 |
| 5 | 1,026,895 | 976,261 | 1,042,251 | 1,128,713 | 1,109,694 | 977,300 |
| 6 | 1,164,364 | 975,423 | 990,837 | 980,309 | 898,827 | 815,852 |
| 7 | 1,098,765 | 1,017,971 | 1,102,468 | 989,442 | 1,061,796 | 1,179,891 |
| 8 | 1,124,327 | 1,069,834 | 1,082,657 | 976,331 | 1,064,839 | 1,174,699 |
| 9 | 986,493 | 869,833 | 986,075 | 1,133,284 | 823,676 | 1,056,707 |
| 10 | 1,082,654 | 1,087,631 | 1,197,338 | 1,179,038 | 786,381 | 1,164,873 |
| Average time | 1,126,871 | 991,834.4 | 1,082,181 | 1,002,949 | 960,158.4 | 1,098,208 |
