Article

Polynomial Regression on Lie Groups and Application to SE(3)

Ecole Nationale de l’Aviation Civile, Université de Toulouse, 7, Avenue Edouard Belin, 31400 Toulouse, France
*
Author to whom correspondence should be addressed.
Entropy 2024, 26(10), 825; https://doi.org/10.3390/e26100825
Submission received: 30 May 2024 / Revised: 14 September 2024 / Accepted: 24 September 2024 / Published: 27 September 2024
(This article belongs to the Special Issue Information Geometry for Data Analysis)

Abstract:
In this paper, we address the problem of estimating the position of a mobile object such as a drone from noisy position measurements using the framework of Lie groups. To model the motion of a rigid body, the relevant Lie group happens to be the Special Euclidean group SE(n), with n = 2 or 3. Our work builds on an existing parametric framework that derives equations for geodesic regression and polynomial regression on Riemannian manifolds. Based on this approach, our goal is to implement the technique in the Lie group SE(3). Given a set of noisy points in SE(3) representing measurements on the trajectory of a mobile object, one wants to find the geodesic that best fits those points in a Riemannian least squares sense. Finally, applications to simulated data are proposed to illustrate this work. The limitations of such a method and future perspectives are discussed.

1. Introduction

Estimating the position of an aircraft in the context of air traffic management (ATM) is necessary in two situations. In the first one, the goal is to present air traffic controllers with a clean image of the aircraft positions in their sectors. In this case, only the past trajectory is known, and one wants to give the best estimate of the current position. This is the radar tracking problem that has been dealt with for decades using Kalman filtering and its extensions [1], which has recently been studied in the framework of Lie groups [2,3]. In [4,5], the symmetries of the state space seen as a manifold are used to improve estimation. In the second situation, a database of full or partial trajectories is available so that the estimate of a position may be computed using past and future measurements. The corresponding noise removal procedure is known as regression and generally relies on a simple, often local model of the trajectory whose parameters are estimated so as to minimize a least squares criterion.
When the data belong to a Euclidean space, statistical regression analyses are usually of two kinds: parametric methods such as linear and polynomial regression, and non-parametric methods such as kernel-based techniques, spline smoothing, and local polynomial regression. As an alternative, due to the functional nature of trajectories mapping time to position, the problem arising from the targeted application may also be described in the general framework of functional data statistics, in which mobile trajectories are functional objects belonging to a given Hilbert space. Usually, in the absence of a notion of probability density in the original infinite-dimensional Hilbert space, the classical approach is to project onto a finite-dimensional subspace. However, this works only if it is possible to find a suitable low-dimensional basis that represents all the possible trajectories well, which is not the case in the context of highly maneuvering aircraft.
Rather than considering trajectories in the state space, we may use the framework of Lie groups by studying the motion of the mobiles in another representation space. Lie groups are continuous groups of transformations such as scalings, rotations, and translations. For poses of a rigid object, the appropriate Lie group is the Special Euclidean group SE(n), which has been used in the last few years in the fields of navigation [6,7,8]; robotics [9,10]; computational anatomy [11,12,13,14]; automation and control theory [15,16,17], where concepts involved in calculus on manifolds, such as covariant derivatives and curvature, are explained; and signal processing [18,19,20], among others.
Recently, non-parametric estimation methods on Lie groups have also been explored, such as [21], in which the authors developed non-parametric kernel-based regression where the response variable is real-valued, and the Lie group-valued predictors are contaminated by measurement errors.
Parametric models have also been studied: the analogies of Riemannian kth-order polynomials with the theory of geodesics in Riemannian manifolds were studied in [22] and further generalized regarding splines on manifolds in [23]. Riemannian polynomials [24] as well as manifold-valued spline schemes based on Bézier curves [25,26] have also been investigated. Our work is guided by the applications and is situated in the parametric framework of the geodesic regression method developed in [27]. This method was extended in [24] to higher-order polynomials on Riemannian manifolds, which include Lie groups with a left or right invariant Riemannian metric. The algorithm for geodesic and polynomial regression is explicitly derived in SO(3). Our goal is to follow the same idea by modeling the motion of rigid bodies and rigid transformations with the Special Euclidean group SE(n), n = 2 or 3. This group, unfortunately, does not admit a bi-invariant metric (a metric may be left- or right-invariant but not both at the same time) and, thus, does not allow for regression consistent with group operations, meaning that the regression computed on a given sample may differ from the regression applied to the same sample translated either on the left or on the right.
In [24], geodesic and polynomial calculations were carried out using the Levi–Civita connection, the main advantage of which is that it is compatible with the metric, and the calculations are indeed performed using that connection only. In the present contribution, instead of introducing Lagrange multipliers, a more straightforward approach is taken, by reasoning in terms of differential forms (minimizing an objective function means finding the points that cancel its first differential). Then, we extend this principle to an arbitrary connection since many applications use connections other than the Levi–Civita connection. For example, Ref. [28] uses Cartan–Schouten connections to express geodesics in Lie groups as one-parameter subgroups. Similarly, α -connections [29] are often used in statistical models on manifolds. The advantage of using any given connection can be seen in two ways: either it simplifies the writing of the problem, or it allows us to express a non-intrinsic quantity, i.e., one whose writing depends on a specific coordinate system (e.g., this is the case for polynomial curves, which are expressed differently depending on the coordinate system chosen).
The paper is organized as follows: In the second section, the elements of differential geometry necessary to lay the theoretical background are discussed. The third section presents the notion of a polynomial function in a Lie group using jet bundles. The fourth section provides a mathematical formulation for the regression problem using differential forms. Finally, several applications to simulated data are proposed. The limitations of the method are discussed, and future perspectives to improve the estimation are proposed.

2. Elements of Differential Geometry

2.1. Manifolds, Lie Groups, and Vector Bundles

We assume the reader is familiar with the definition of a manifold as a submanifold of R n and recall the definition of an abstract manifold here:
Definition 1.
An n-dimensional smooth manifold is a second countable, Hausdorff set M endowed with an equivalence class of n-dimensional atlases of class C^∞ on M. An n-dimensional atlas of class C^∞ on M is a set of pairs {(U_i, φ_i)}_{i∈I} satisfying the following axioms:
  • Each U_i is an open subset of M and M = ∪_{i∈I} U_i;
  • Each φ_i is a bijection from U_i onto an open subset φ_i(U_i) of R^n, and φ_i(U_i ∩ U_j) ⊂ R^n is open for every i and j;
  • For every pair (i, j), the map φ_j ∘ φ_i^{-1} : φ_i(U_i ∩ U_j) → φ_j(U_i ∩ U_j) is a C^∞ diffeomorphism.
A Lie group inherits the algebraic properties of a group and the topological properties of a manifold.
Definition 2.
A Lie group G is a smooth manifold and a group such that the multiplication m : G × G → G, (x, y) ↦ xy and the inversion i : G → G, x ↦ x^{-1} are smooth operations.
We shall use the following notations:
  • L_g : G → G, x ↦ gx for the left translation;
  • R_g : G → G, x ↦ xg for the right translation;
  • e ∈ G for the unit element.
Definition 3.
Let E, M be smooth manifolds. A real vector bundle on M is a triple (E, π, M) with π : E → M a smooth onto mapping such that
(i)
For all p ∈ M, π^{-1}(p) is a real vector space isomorphic to R^k, k a fixed integer;
(ii)
For all p ∈ M, there exists an open set U ⊂ M, p ∈ U, and a diffeomorphism φ_U : U × R^k → π^{-1}(U) such that
  • ∀q ∈ U, ∀v ∈ R^k, π ∘ φ_U(q, v) = q;
  • ∀q ∈ U, the mapping v ∈ R^k ↦ φ_U(q, v) is an isomorphism onto π^{-1}(q).
Remark 1.
Definition 3 can be utilized verbatim to define vector bundles with other fields of scalars, particularly complex or quaternionic vector bundles. However, this additional generality will not be required for the purposes of this paper.
Notation 1.
A bundle (E, π, M) is often denoted by its total space E alone, π being implicit.
Definition 4.
Let U ⊂ M be an open set and (E, π, M) a vector bundle on M. A smooth mapping s : U → E such that ∀p ∈ U, π ∘ s(p) = p is called a local section. When U = M, s is designated as a global section or simply a section.
Proposition 1.
Let U ⊂ M be an open set and (E, π, M) a vector bundle on M. The set of sections on U has the structure of a real vector space and of a C^∞(U, R)-module.
Notation 2.
The C^∞(U, R)-module of local sections on U is designated by Γ(U; E). When U = M, it is simply denoted by Γ(E).
Remark 2.
One can define equivalently a vector bundle as the sheaf generated by the local sections.
Remark 3.
A vector bundle of dimension 1 is called a line bundle. Its sections are the C^∞(M, R)-mappings.
Definition 5.
A triple ( M , E , ) with M a smooth manifold, E a vector bundle on M and ∇ a Koszul connection on E is called a gauge structure.
Proposition 2.
Let E, F be two vector bundles over M. The pointwise operations on sections ⊕, ⊗, hom(·,·) define the respective vector bundles E ⊕ F, E ⊗ F, hom(E, F).
Proof. 
See [30], Theorem 6.2, p. 67.    □
Definition 6.
The hom bundle with F = R is called the dual bundle of E and is denoted by E*.

2.2. Koszul Connections

Definition 7.
Let (E, π, M) be a vector bundle over M. A Koszul connection is an R-linear mapping:
∇ : Γ(E) → Γ(T*M ⊗ E)
such that
∀s ∈ Γ(E), ∀f ∈ C^∞(M, R), ∇(f s) = df ⊗ s + f ∇s
A Koszul connection is often called a covariant derivative.
Remark 4.
Applying Definition 7 to a local frame {e_i, i = 1, …, m} of E yields the so-called Christoffel symbols:
∀k, j ∈ {1, …, m}, ∀i ∈ {1, …, dim(M)}, ∇_{∂_i} e_j = Γ^k_{ij} e_k
with ∂_i, i = 1, …, dim(M), a basis of TM. The Christoffel symbols uniquely characterize the connection ∇.
Remark 5.
There is no reason for a locally constant section to vanish under the action of a Koszul connection.
Definition 8.
Given a local frame of sections {e_i}, the connection forms are the one-forms ω^i_j such that
∇e_j = e_i ⊗ ω^i_j
Proposition 3.
If E, F are two vector bundles and ∇^E, ∇^F are, respectively, connections on E, F, there exist two canonical connections:
∇^{E⊕F} : s_E ⊕ s_F ↦ (∇^E s_E) ⊕ (∇^F s_F),
∇^{E⊗F} : s_E ⊗ s_F ↦ (∇^E s_E) ⊗ s_F + s_E ⊗ (∇^F s_F).
Proposition 4.
Let E be a vector bundle over M and let E* be its dual bundle. If ∇ is a Koszul connection on E, then the dual connection ∇* on E* is defined as follows:
(∇*_X ω)(Y) = X(ω(Y)) − ω(∇_X Y), X ∈ Γ(TM), Y ∈ Γ(E), ω ∈ Γ(E*).
Definition 9
([31]). Let f ∈ C^∞(M, R), that is, f is a section of the trivial bundle M × R. The covariant derivative of f, still denoted by ∇f, is defined as:
∇f = df.
Propositions 3 and 4 and Definition 9 can be combined in order to obtain the covariant derivative of arbitrary tensors.
Definition 10.
Let (E, π_E, M), (F, π_F, N) be two vector bundles. A bundle morphism is a couple of mappings ϕ̃ : E → F, ϕ : M → N such that, for all x ∈ M, the mapping u ∈ π_E^{-1}(x) ↦ ϕ̃(u) ∈ π_F^{-1}(ϕ(x)) is linear, which makes the following square commute:
π_F ∘ ϕ̃ = ϕ ∘ π_E
Definition 11.
Let ϕ : M → N be a smooth mapping and (F, π_F, N) a vector bundle on N. The pullback bundle ϕ*F is the vector bundle on M generated by the local sections of the form x ↦ s(ϕ(x)), where s is a local section of F. Furthermore, there exists a bundle morphism ϕ̃ : ϕ*F → F sending a generating section s ∘ ϕ to s.
Definition 12.
Let M, N be smooth manifolds and ϕ : M → N a smooth mapping. Let E be a vector bundle on N equipped with a Koszul connection ∇. The pullback connection of ∇ by ϕ, designated by ∇^ϕ, is defined on generating sections s ∘ ϕ as follows:
∇^ϕ_X (s ∘ ϕ) = ∇_{Dϕ(X)} s
where X ∈ TM.
Remark 6.
The quantities in Equation (8) depend only on X(p) ∈ T_pM and Dϕ(X(p)) ∈ T_{ϕ(p)}N.

2.3. Covariant Derivatives and Parallel Sections

Definition 13.
Let M be a smooth manifold, E a vector bundle on M, and ∇ a Koszul connection on E. Let γ : ]0, 1[ → M be a smooth mapping, that is, a curve on M. The covariant derivative of s ∈ Γ(E) along γ is the mapping:
t ∈ ]0, 1[ ↦ (∇^γ_1 s ∘ γ)(t)
where 1 is the unit constant vector field on ]0, 1[.
Notation 3.
The covariant derivative of a section s along a curve γ is often denoted as follows:
γ ˙ s
Proposition 5.
With the same writing conventions as in Definition 13, the covariant derivative of a section s = s^i e_i expressed in a local frame (e_1, …, e_n) is given as follows:
ds^i(γ̇(t)) e_i + Γ^k_{ij} s^i γ̇^j(t) e_k
Proof. 
This result is a direct application of Definition 12 with ϕ = γ .    □
Remark 7.
The first term in Equation (10) can be written by a common abuse of notation:
d d t s i ( t ) e i
where s i e i stands for the section in the pullback bundle associated to s .
Definition 14.
Let ( M , E , ) be a gauge structure and let γ : ] 0 , 1 [ M be a smooth curve. A section s Γ ( E ) is said to be ∇-parallel along γ if
γ ˙ s = 0 .
Remark 8.
A parallel section is, in a local frame, the solution of an ordinary differential equation (ODE):
(d/dt) s^k(t) + Γ^k_{ij} s^i(t) γ̇^j(t) = 0, k = 1, …, n
This gives a practical way to determine a parallel section from an initial condition.
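As an illustration of this practical procedure, the ODE of Remark 8 can be integrated numerically. The sketch below (our own example, not the paper's code) transports a tangent vector along a latitude circle of the unit sphere, whose nonzero Christoffel symbols in spherical coordinates (θ, φ) are Γ^θ_{φφ} = −sin θ cos θ and Γ^φ_{θφ} = Γ^φ_{φθ} = cot θ:

```python
import numpy as np

# Parallel transport on S^2, metric g = diag(1, sin(theta)^2), along the
# latitude circle theta = theta0, phi = t.  The ODE of Remark 8 reads
#   ds^k/dt + Gamma^k_{ij} s^i gamma_dot^j = 0,  with gamma_dot = (0, 1).
def transport_along_latitude(theta0, s0, t_end=2 * np.pi, n_steps=2000):
    def rhs(s):
        s_th, s_ph = s
        return np.array([
            np.sin(theta0) * np.cos(theta0) * s_ph,   # -Gamma^theta_{phi phi} s^phi
            -np.cos(theta0) / np.sin(theta0) * s_th,  # -Gamma^phi_{theta phi} s^theta
        ])
    s = np.array(s0, dtype=float)
    h = t_end / n_steps
    for _ in range(n_steps):                          # classical RK4 integrator
        k1 = rhs(s)
        k2 = rhs(s + h / 2 * k1)
        k3 = rhs(s + h / 2 * k2)
        k4 = rhs(s + h * k3)
        s = s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

def sphere_norm(theta0, s):
    # Riemannian norm of a tangent vector at a point with colatitude theta0
    return np.sqrt(s[0] ** 2 + np.sin(theta0) ** 2 * s[1] ** 2)
```

Since the Levi–Civita connection is compatible with the metric, parallel transport preserves the Riemannian norm, which gives a convenient numerical check; after a full loop the vector comes back rotated by the holonomy angle 2π cos θ_0.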
Notation 4.
The parallel transport of a section s from a to b will be denoted Π_a^b s.
Given a gauge structure (M, TM, ∇), x ∈ M and a tangent vector X ∈ T_xM, the next ODE locally defines a smooth curve γ:
γ̇(t) = s(γ(t)), ∇_{γ̇} s = 0, γ(0) = x, s(x) = X
Definition 15.
Let (M, TM, ∇) be a gauge structure. For any x ∈ M, there exists an open neighborhood U of 0 in T_xM such that the curve γ defined by Equation (13) exists up to time 1 for any tangent vector X ∈ U. The mapping
X ↦ exp_x X = γ(1)
where γ is the solution curve to Equation (13), is called the exponential map.
Definition 16.
Let (e_1, …, e_n) be a basis of T_xM such that the exponential map exists for all the e_i, i = 1, …, n. The mapping
(t_1, …, t_n) ∈ [0, 1]^n ↦ exp_x(Σ_{i=1}^n t_i e_i)
defines a local coordinate system in a neighborhood of x called the normal coordinates at x.
Remark 9.
By the very definition of a flow, the following is deduced:
(d/dt) exp_x(tX) = s(γ(t))
with (γ, s) the solution to ODE (13). Taking the second derivative is slightly awkward since the corresponding tangent vector would lie in TTM. However, it makes sense to consider the covariant derivative along γ:
γ ˙ s = 0 .
The normal coordinate system thus has a vanishing second (covariant) derivative. The generalization of this property to higher covariant derivatives will be used later as an extension of the notion of polynomial curves.
Notation 5.
A normal coordinate system at x is often denoted by x^i, i = 1, …, n, and the associated vector fields by ∂_i. A vector field X is said to be a coordinate vector field if it can be written as X = a^i ∂_i with a^i, i = 1, …, n, constant mappings [31].
Proposition 6.
Any two coordinate vector fields X , Y have a vanishing commutator [ X , Y ] = 0 .
Proof. 
Writing X = X^i ∂_i, Y = Y^i ∂_i, and using the fact that X^i, Y^i, i = 1, …, n, are constant mappings, the following is derived:
[X, Y] = (X^j ∂_j Y^i − Y^j ∂_j X^i) ∂_i = 0
   □

2.4. Musical Isomorphisms

In the sequel, we will often have to convert between differential forms and vectors. We define the musical isomorphisms using the metric g.
Definition 17.
Let (M, g) be a Riemannian manifold. Let α ∈ T*M. α^♯ is the only element of TM such that, for all X ∈ TM,
α(X) = g(α^♯, X)
and we note it as α^♯ := ♯(α).
In a similar way, for Y ∈ TM, Y^♭ is the only element of T*M such that, for all X ∈ TM,
Y^♭(X) = g(X, Y)
and we note it as Y^♭ := ♭(Y).
Both isomorphisms can be expressed in local coordinates:
α^♯ = ♯(α) = ♯(α_i e^i) = g^{ij} α_j e_i,  X^♭ = ♭(X) = ♭(X^i e_i) = g_{ij} X^j e^i,
where g_{ij} and g^{ij} are, respectively, the (i, j) coefficient of the metric matrix and the (i, j) coefficient of the inverse of the metric matrix.
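In matrix terms, ♯ and ♭ are simply multiplication by the inverse metric and by the metric, respectively. The following numpy sketch is our own illustration of these coordinate formulas, not part of the original implementation:

```python
import numpy as np

# Musical isomorphisms in local coordinates:
#   sharp raises an index, (alpha^sharp)^i = g^{ij} alpha_j;
#   flat lowers one, (X^flat)_i = g_{ij} X^j.
def sharp(g, alpha):
    # solve g z = alpha instead of forming the inverse metric explicitly
    return np.linalg.solve(g, alpha)

def flat(g, X):
    return g @ X
```

By construction the two maps are mutually inverse, and the defining identity α(X) = g(α^♯, X) holds componentwise for any symmetric positive-definite metric matrix.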

3. Manifold-Valued Polynomial Functions

Here, we provide the basic elements on how a manifold-valued polynomial function P : I ⊂ R → M can be well defined.
As a consequence of the Taylor formula, the notion of a polynomial function of degree k on R^n is defined as a smooth mapping P : R^n → R such that
∂^{|I|}P / (∂x_1^{I_1} ⋯ ∂x_n^{I_n}) = 0
for any multi-index I = (I_1, …, I_n) such that |I| = Σ_{i=1}^n I_i > k.
While it is meaningful to speak about higher derivatives only for vector space-valued maps, things are more complicated for manifold-valued maps. This kind of property may hold locally, that is, for example, if a function has a finite length Taylor expansion in terms of a local coordinate system, but generally, it fails globally, due to the transition functions between local charts. Two possibilities exist to overcome this:
  • Ref. [32] restricts the Hessian to the kernel of df_p so that the second derivative makes sense through the transition maps of the manifold;
  • A possibility to bypass this [33] is to consider directly a k-th order Taylor series through jet bundles: let γ_1, γ_2 be two curves I → M such that γ_1(0) = γ_2(0) = p and (U, ϕ) a local chart such that, for i ∈ {1, …, k},
    (d^i/dt^i)|_{t=0} ϕ ∘ γ_1(t) = (d^i/dt^i)|_{t=0} ϕ ∘ γ_2(t)
    This relation is an equivalence relation whose equivalence classes are called k-jets at p, noted J^k_p. The space of all k-jets of the bundle (E, π, M) is the union of the k-jets at p over all the points of M.
    The set of all k-jets at p can be given a differentiable structure in a “tower-like” bundle endowed with adapted coordinates, in which
    ∀m ∈ {1, …, k}, ∀p ∈ M, π_m : J^m → J^{m−1}, [γ]^m_p ↦ [γ]^{m−1}_p
    are projections from one level of the bundle to the one under it.
J^m →(π_m) J^{m−1} →(π_{m−1}) ⋯ →(π_1) J^0 = E →(π) M
This work uses the second approach, though this will not be explicitly recalled in the sequel.

4. Optimality Conditions

In this section, the first- and second-order local optimality conditions for a real-valued mapping whose domain is a Riemannian manifold are briefly introduced.
Lemma 1.
Let M be a smooth manifold and f : M R a smooth mapping. For p M to be a local minimum of f , it is necessary that p be a critical point of f, which is the case if and only if the differential of f at p vanishes: d f p = 0 .
Proof. 
See [34]    □
In fact, if γ : ] ϵ , + ϵ [ M is a smooth curve on M such that γ ( 0 ) = p , then d f p ( γ ˙ ( 0 ) ) is the first-order variation of f in the direction of the tangent vector γ ˙ ( 0 ) T p M . This approach can be extended by iterating the derivative.
Definition 18.
Let M be a smooth manifold: a retraction is a smooth map
R : TM → M, (x, v) ↦ R_x(v)
such that, for any curve c : ]−ε, ε[ → M, t ↦ R_x(tv), we have c(0) = x and c′(0) = v.
Remark 10.
The Riemannian exponential (x, v) ↦ exp_x(v) and the linear map (x, v) ↦ x + v (in a Euclidean space) are examples of retractions.
From this, a possible algorithm for optimization on manifolds is the Riemannian gradient descent (RGD) explained in Algorithm 1, taken from [34]:
Algorithm 1: Riemannian gradient descent
[Algorithm 1 is given as a figure in the original: starting from x_0, iterate x_{k+1} = R_{x_k}(−α_k grad f(x_k)) until a stopping criterion is met.]
where R is a retraction.
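As a concrete instance (a sketch under our own assumptions, not the paper's implementation), the RGD iteration on the unit sphere with the metric-projection retraction minimizes the Rayleigh quotient f(x) = xᵀAx:

```python
import numpy as np

# Riemannian gradient descent on the unit sphere S^{n-1} for f(x) = x^T A x,
# using the projection retraction R_x(v) = (x + v) / ||x + v||.  The Riemannian
# gradient is the Euclidean gradient projected onto the tangent space at x.
def rgd_sphere(A, x0, step=0.1, iters=500):
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        egrad = 2 * A @ x
        rgrad = egrad - (x @ egrad) * x    # project onto T_x S^{n-1}
        x = x - step * rgrad               # descent step in the tangent space
        x = x / np.linalg.norm(x)          # retraction back to the sphere
    return x
```

The minimum of the Rayleigh quotient over the sphere is the smallest eigenvalue of A, attained at the corresponding eigenvector, which makes the iteration easy to validate on a diagonal matrix.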

5. The Geodesic Regression Problem

5.1. Polynomial Curves

We recall that in a Lie group G with its Lie algebra g, there is a way to easily construct a left-invariant metric from any given inner product on g, which itself can be derived from the Euclidean inner product, since g ≅ R^{dim G}:
⟨X_g, Y_g⟩_g = ⟨T_gL_{g^{-1}} X_g, T_gL_{g^{-1}} Y_g⟩
Please note that any symmetric positive-definite matrix A may be used to produce different left-invariant metrics:
⟨X_g, Y_g⟩_{A,g} = ⟨A T_gL_{g^{-1}} X_g, A T_gL_{g^{-1}} Y_g⟩
Likewise, a right-invariant metric can also be defined. A minimizing geodesic between two points g_0, g_1 with respect to ⟨·,·⟩_g is a curve γ : [0, 1] → G such that γ(0) = g_0, γ(1) = g_1, minimizing the functional
L(γ) = (1/2) ∫_0^1 ⟨γ̇(t), γ̇(t)⟩_{γ(t)} dt
Due to the left invariance of the metric, it is enough to seek a solution to the reduced problem:
L(X) = (1/2) ∫_0^1 ⟨X(t), X(t)⟩_e dt
where X : ]−ε, ε[ → g is a g-valued curve. It is well known that X satisfies the so-called Euler–Arnold equation [35]:
Ẋ(t) = ad*_{X(t)} X(t)
From the solution of Equation (24), the original geodesic is reconstructed with the help of the left translation
γ̇(t) = T_eL_{γ(t)} X(t)
which simplifies in a matrix Lie group as follows:
γ(t) = exp(∫_0^t X(u) du)
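For a velocity X that is constant in t, the integral in the previous formula reduces to γ(t) = exp(tX), a one-parameter subgroup. A small numpy sketch (the particular twist chosen here, and the series-based matrix exponential, are our own illustration):

```python
import numpy as np

def expm(M, terms=40):
    # Taylor-series matrix exponential; accurate for the small matrices used here
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

# A constant twist in se(3): rotation about the z axis plus a translation.
K = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 0.]])          # skew-symmetric rotation generator
V = np.array([1., 0., 0.5])            # translation part
X = np.zeros((4, 4))
X[:3, :3] = K
X[:3, 3] = V                           # element of se(3)

def gamma(t):
    return expm(t * X)                 # one-parameter subgroup gamma(t) = exp(tX)
```

One can check that γ(t) stays in SE(3) (orthogonal rotation block, homogeneous bottom row) and satisfies the group law γ(s)γ(t) = γ(s + t).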
Minimizing geodesics are Riemannian equivalents to straight lines and are thus the natural object to consider for extending the linear regression to a Lie group G. Furthermore, if ∇ is a connection on G, then a minimizing geodesic γ has to satisfy the following:
∇_{γ̇(t)} γ̇(t) = 0
thus, the Riemannian equivalent to a degree k polynomial is a curve γ such that
(∇_{γ̇(t)})^k γ̇(t) = 0
Here again, by the left invariance of the metric, the connection in T g G has to be considered only on the basis of left-invariant vector fields T e L g e 1 , , T e L g e n . The connection restricted to elements of the Lie algebra will be denoted ¯ in the sequel.
Applying the Koszul formula to the basis vectors (e_1, …, e_n) of g, the following is obtained:
Γ^k_{ij} = (1/2)(c_{ijk} + c_{kij} − c_{jki})
where the c_{ijk} = ⟨[e_i, e_j], e_k⟩ are the structure constants and the Γ^k_{ij}, (i, j, k) ∈ {1, …, n}^3, are the Christoffel symbols of the connection.
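This formula is purely algebraic and can be evaluated directly from the structure constants. For so(3) with an orthonormal basis, c_{ijk} is the Levi–Civita symbol ε_{ijk}, and the formula returns Γ^k_{ij} = ε_{ijk}/2, the bi-invariant case (a sketch for illustration, not the paper's code):

```python
import numpy as np

# Christoffel symbols of a left-invariant metric from the structure constants:
#   Gamma^k_{ij} = (c_{ijk} + c_{kij} - c_{jki}) / 2
# in an orthonormal basis of the Lie algebra.
def christoffel(c):
    # c[i, j, k] = <[e_i, e_j], e_k>;  returns Gamma[i, j, k] = Gamma^k_{ij}
    return 0.5 * (c + np.transpose(c, (2, 0, 1)) - np.transpose(c, (1, 2, 0)))

# Structure constants of so(3) in its standard orthonormal basis: eps_{ijk}.
eps = np.zeros((3, 3, 3))
for (i, j, k), sign in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                        (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.items():
    eps[i, j, k] = sign
```

Since ε is invariant under cyclic index permutations, christoffel(eps) equals eps/2; in particular Γ^k_{ij} X^i X^j = 0 for every X, so one-parameter subgroups are geodesics of the bi-invariant metric.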
Using this form, one can compute a degree p polynomial curve X in g by solving an augmented differential equation
v_1(t) = ∇̄_{X(t)} v_0(t), v_2(t) = ∇̄_{X(t)} v_1(t), …, ∇̄_{X(t)} v_{p−1}(t) = 0
with the convention v_0 = X. Please note that the previous system can be written in the following form:
(d/dt)(v_0, …, v_{p−1}) = F(v_0, …, v_{p−1})
where the vector (v_0, …, v_{p−1}) is in R^{pn}, with n the dimension of g as a real vector space.

5.2. Regression in a Lie Group

An extended notion of parametric linear regression on Riemannian manifolds, called intrinsic geodesic regression, has been independently developed in [27,36]. Let ( y 1 , , y N ) be a set of points representing the trajectory of the mobile on a Riemannian manifold ( M , g ) . The idea is to find a geodesic curve γ ( t ) on the manifold that best fits those points in a "linear" manner at known times t 1 , , t N . The estimate is found by minimizing a least squares criterion based on a Riemannian distance between the model γ ( t ) and the data.
More formally, the geodesic regression model is expressed in [27] as follows:
Y = Exp_{γ(t)}(ε) = Exp_{Exp_p(tv)}(ε)
where γ ( t ) = Exp p ( t v ) is the geodesic curve given by the initial conditions γ ( 0 ) = p and γ ˙ ( 0 ) = v , and ε denotes a Gaussian random variable taking values in the tangent space at γ ( t ) .
In [24], this geodesic regression model is generalized to a polynomial one
Y = Exp_{γ(t)}(ε)
where the curve γ(t) is a Riemannian polynomial of order k, as defined by Equation (28). The Riemannian least mean squares estimate is obtained by minimizing the next criterion over the decision vectors (γ(0), γ̇(0), …, γ̇^{(i)}(0), …, γ̇^{(k)}(0)) as follows:
E(γ) = (1/N) Σ_{j=1}^N d(γ(t_j), y_j)^2
where d is a geodesic distance between the model γ ( t j ) and the data y j , j = 1 , , N .
This minimization problem is then performed under the constraints that the model curve γ has, in the geodesic regression problem, a parallel first derivative, namely ∇_{γ̇(t)} γ̇(t) = 0, or, more generally, a k-th order polynomial constraint (∇_{γ̇(t)})^k γ̇(t) = 0, which is uniquely solved by adding the initial conditions γ(0), γ̇(0), …, γ̇^{(i)}(0), …, γ̇^{(k)}(0).
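To make the criterion and constraints concrete, here is a minimal geodesic-regression sketch on the unit sphere, where the exponential map has a closed form; the finite-difference gradient descent is our own simplification, not the adjoint method of the paper:

```python
import numpy as np

def sphere_exp(p, v):
    # closed-form Riemannian exponential on the unit sphere
    n = np.linalg.norm(v)
    if n < 1e-12:
        return p
    return np.cos(n) * p + np.sin(n) * v / n

def sphere_dist(p, q):
    return np.arccos(np.clip(p @ q, -1.0, 1.0))

def criterion(p, v, ts, ys):
    # least squares criterion E = (1/N) sum d(gamma(t_j), y_j)^2
    return np.mean([sphere_dist(sphere_exp(p, t * v), y) ** 2
                    for t, y in zip(ts, ys)])

def fit_geodesic(ts, ys, p0, v0, step=0.1, iters=2000, eps=1e-6):
    x = np.concatenate([p0, v0]).astype(float)
    def E(x):
        p = x[:3] / np.linalg.norm(x[:3])     # keep p on the sphere
        v = x[3:] - (x[3:] @ p) * p           # keep v tangent at p
        return criterion(p, v, ts, ys)
    for _ in range(iters):
        base = E(x)
        g = np.zeros(6)
        for i in range(6):                    # forward finite differences
            d = np.zeros(6); d[i] = eps
            g[i] = (E(x + d) - base) / eps
        x = x - step * g
    p = x[:3] / np.linalg.norm(x[:3])
    return p, x[3:] - (x[3:] @ p) * p
```

On noiseless data generated from a known geodesic, the fitted curve should drive the criterion close to zero, which is the check performed below.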
Using the properties of translation given by Equations (23) and (25) in a Lie group, and denoting by X_1, …, X_i, …, X_k the i-th order velocities of γ in g, we obtain the “forward” polynomial equations
(d/dt) γ(t) = γ(t) X_1(t),
(d/dt) X_i(t) = −∇̄_{X_1} X_i(t) + X_{i+1}(t), i = 1, …, k−1,
(d/dt) X_k(t) = −∇̄_{X_1} X_k(t)
Let us now move to minimizing the criterion given by Equation (32). The next lemma is a well-known result that can be proven using normal coordinates.
Lemma 2.
Let p, q ∈ M such that log_p q = V ∈ T_pM is well defined. The differential at p of the mapping m ↦ d^2(m, q) is the 1-form α_q : X ∈ T_pM ↦ −2g(X, V).
Proposition 7
(Variational formula [37] p. 333, Theorem B.3). Let X be a vector field depending on a parameter λ, with value at point p denoted as X(p; λ). Let Φ_X be its flow, i.e., Φ_X(t, x; λ) is the value of the integral curve of X at time t, with Φ_X(0, x; λ) = x. Then, the derivative ∂_λΦ_X of Φ_X with respect to λ is the solution of the following:
(d/dt) ∂_λΦ_X(t, x; λ) = ∂_x X(Φ_X(t, x; λ); λ) ∂_λΦ_X(t, x; λ) + ∂_λ X(Φ_X(t, x; λ); λ)
with the initial condition as follows:
∂_λΦ_X(0, x; λ) = 0.
Similarly, the derivative with respect to the initial condition is the solution of the following:
(d/dt) ∂_xΦ_X(t, x; λ) = ∂_x X(Φ_X(t, x; λ); λ) ∂_xΦ_X(t, x; λ)
with the initial condition as follows:
∂_xΦ_X(0, x; λ) = Id.
The variational equation can be applied to the geodesic equation ∇_{γ̇} γ̇ = 0 by considering a perturbation of the derivative at 0 in the form λ ↦ γ̇(0) + λv while keeping the starting point p = γ(0) constant. The one-parameter family of curves will be denoted by γ(p, t; λ). Letting J = ∂_λγ, we obtain the following:
(d^2/dt^2) J^i = −(∂_l Γ^i_{jk}) γ̇^j γ̇^k J^l − 2Γ^i_{jk} (d/dt J^j) γ̇^k,
where the 2 factor is a consequence of ∇ being torsionless. A more familiar form can be obtained by expanding J on a frame X_1(t), …, X_n(t) coming from parallel transport of a basis of T_pM. In this case, letting J(t) = J^α(t) X_α(t), Equation (38) becomes
(d^2/dt^2) J + R(J, γ̇) γ̇ = 0,
with R the curvature of ∇.
Remark 11.
Equation (39) is valid for an arbitrary Koszul connection without torsion, even if it is a more common use in the Riemannian setting. A solution vector field J along γ is said to be a Jacobi vector field.
Remark 12.
The result is used locally, within the injectivity radius of the exp map.
Going back to the original problem, which is to find a local minimum of Criterion (32), and assuming that γ is the integral curve of a vector field X with the initial condition γ(0) = p, an application of Lemma 2 and Proposition 7 yields the next expression for the differential at T_pM of E(γ) with respect to the initial conditions:
dE^p : Y ∈ T_pM ↦ −(2/N) Σ_{i=1}^N g(γ(t_i); ∂_p γ(t_i, p)·Y, log_{γ(t_i, p)} y_i),
dE^v : Y ∈ T_pM ↦ −(2/N) Σ_{i=1}^N g(γ(t_i); ∂_v γ(t_i, p)·Y, log_{γ(t_i, p)} y_i), v = γ̇(0).
The Riemannian gradient at T_pM is thus the tangent vector Z_0 ∈ T_pM (resp. Z_1 ∈ T_pM) such that [27]
∀Y ∈ T_pM, g(p; Y, Z_0) = −(2/N) Σ_{i=1}^N g(γ(t_i); ∂_p γ(t_i, p)·Y, log_{γ(t_i, p)} y_i),
g(p; Y, Z_1) = −(2/N) Σ_{i=1}^N g(γ(t_i); ∂_v γ(t_i, p)·Y, log_{γ(t_i, p)} y_i).
Let, for i = 1, …, N, the individual contributions to the gradient be
dE_i^p := g(γ(t_i); ∂_p γ(t_i, p)·Y, log_{γ(t_i, p)} y_i),
dE_i^v := g(γ(t_i); ∂_v γ(t_i, p)·Y, log_{γ(t_i, p)} y_i);
∂_p γ(t_i, p)·Y, ∂_v γ(t_i, p)·Y, and log_{γ(t_i, p)} y_i all belong to T_{γ(t_i)}M. As illustrated in Figure 1 (where Π_{t_k}^0 denotes the parallel transport from T_{γ(t_k)}M to T_{γ(0)}M, and ε_k = log_{γ(t_k, p)} y_k denotes the residue at time t_k), we can transport them back to T_pM, so that there are two vectors (Z_0^i, Z_1^i) such that
dE_i^p = g(γ(t_i); ∂_p γ(t_i, p)·Y, ε_i) = g(p; Y, Z_0^i),
dE_i^v = g(γ(t_i); ∂_v γ(t_i, p)·Y, ε_i) = g(p; Y, Z_1^i).
The vector Z_0 (respectively, Z_1) can be evaluated by summing the individual contributions as follows:
∀Y ∈ T_pM, g(p; Y, Z_0) = −(2/N) Σ_{i=1}^N g(p; Y, Z_0^i) = −(2/N) g(p; Y, Σ_{i=1}^N Z_0^i),
g(p; Y, Z_1) = −(2/N) Σ_{i=1}^N g(p; Y, Z_1^i) = −(2/N) g(p; Y, Σ_{i=1}^N Z_1^i).
Finding each Z_0^i (respectively, Z_1^i) boils down to inverting the matrix ∂_p γ(t_i, p) (respectively, ∂_v γ(t_i, p)) and then transposing it with respect to the metric. However, an easier way to do this is to parallel transport the residue log_{γ(t_i, p)} y_i from T_{γ(t_i)}M back to T_pM.
Following [24], the problem can be tackled with the introduction of the adjoint equations. While the original work was conducted with the Levi–Civita connection, the derivation of the adjoint equations can be performed mutatis mutandis using the dual connection ∇* as follows:
∇*_{γ̇} λ_i(t) = −λ_{i−1}(t),
∇*_{γ̇} λ_0(t) = −Σ_i R*(v_i(t), λ_i(t)) v_1(t),
where the v_i are the vector field solutions of Equation (30), and the λ_i are adjoint vector fields coupled with the v_i.

5.3. Application to SE(3)

Coming back to the original problem, which is to estimate a trajectory (i.e., a time-stamped curve) from raw positioning data belonging to SE(3), we recall here some specific elements about SE(n) and its Lie algebra se(n).
Definition 19.
The set of affine maps f : R^n → R^n such that f(x) = Rx + T, where R is a rotation matrix (R ∈ SO(n)) and T is a vector of R^n, is called the Special Euclidean group SE(n) and is a Lie group.
SE(n) is often called the group of rigid motions, or rigid-body motions. An element A ∈ SE(n) can be represented by an (n+1)-by-(n+1) matrix of the form [[R, T], [0, 1]], where R ∈ SO(n) and T ∈ R^n. It is a group, where the product of two elements of SE(3) is given as follows:
[[R_1, T_1], [0, 1]] × [[R_2, T_2], [0, 1]] = [[R_1 R_2, R_1 T_2 + T_1], [0, 1]]
and the inverse of an element is as follows:
[[R, T], [0, 1]]^{-1} = [[R^T, −R^T T], [0, 1]]
We have dim(SE(n)) = n(n+1)/2; therefore, dim(SE(3)) = 6.
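The product and inverse formulas above can be checked against plain 4×4 matrix arithmetic (an illustrative sketch, not the paper's code):

```python
import numpy as np

# Homogeneous-matrix representation of SE(3).
def se3(R, T):
    g = np.eye(4)
    g[:3, :3] = R
    g[:3, 3] = T
    return g

def se3_inv(g):
    # inverse formula: (R, T)^{-1} = (R^T, -R^T T)
    R, T = g[:3, :3], g[:3, 3]
    return se3(R.T, -R.T @ T)
```

Multiplying an element by its inverse must give the identity, and the translation part of a product must equal R_1 T_2 + T_1.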
Definition 20.
The vector space of real (n + 1)-by-(n + 1) matrices of the form
X = [[K, V], [0, 0]]
where K is a skew-symmetric n-by-n matrix and V is a vector in R^n is the Lie algebra of SE(n), denoted se(n).
Remark 13.
It can be easily shown that the matrix exponential of any matrix defined by (49) belongs to SE(n). It can also be proven that the exp map se(n) → SE(n) is onto but not one-to-one, since the rotation part of the matrix is the same for angles differing by an integer multiple of 2π.
Since dim ( se ( 3 ) ) = dim ( S E ( 3 ) ) = 6 , a basis for se ( 3 ) corresponding to the infinitesimal rotations and translations along all three axes can be written as follows:
E_1 = [[0,0,0,0],[0,0,−1,0],[0,1,0,0],[0,0,0,0]], E_2 = [[0,0,1,0],[0,0,0,0],[−1,0,0,0],[0,0,0,0]], E_3 = [[0,−1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]], E_4 = [[0,0,0,1],[0,0,0,0],[0,0,0,0],[0,0,0,0]], E_5 = [[0,0,0,0],[0,0,0,1],[0,0,0,0],[0,0,0,0]], E_6 = [[0,0,0,0],[0,0,0,0],[0,0,0,1],[0,0,0,0]]
There is a canonical isomorphism between se(3) and R⁶, often denoted ∧ (“hat”), with its inverse denoted ∨ (“vee”):
∧ : R⁶ → se(3), ∨ : se(3) → R⁶
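The hat and vee maps, together with the exponential of Remark 13, can be sketched as follows. The component ordering (rotation first, then translation) is our convention, and the closed-form Rodrigues-type exponential is a standard formula not spelled out in the text:

```python
import numpy as np

def hat(xi):
    """R^6 -> se(3); rotation components first, then translation (our convention)."""
    wx, wy, wz, vx, vy, vz = xi
    return np.array([[0.0, -wz,  wy, vx],
                     [ wz, 0.0, -wx, vy],
                     [-wy,  wx, 0.0, vz],
                     [0.0, 0.0, 0.0, 0.0]])

def vee(X):
    """se(3) -> R^6, the inverse of hat."""
    return np.array([X[2, 1], X[0, 2], X[1, 0], X[0, 3], X[1, 3], X[2, 3]])

def se3_exp(xi):
    """Closed-form exponential map se(3) -> SE(3) (Rodrigues-type formula)."""
    omega, v = xi[:3], xi[3:]
    theta = np.linalg.norm(omega)
    K = hat(np.concatenate([omega, np.zeros(3)]))[:3, :3]  # 3x3 skew block
    if theta < 1e-12:
        R, J = np.eye(3), np.eye(3)
    else:
        a = np.sin(theta) / theta
        b = (1.0 - np.cos(theta)) / theta**2
        c = (theta - np.sin(theta)) / theta**3
        R = np.eye(3) + a * K + b * (K @ K)  # Rodrigues formula for SO(3)
        J = np.eye(3) + b * K + c * (K @ K)  # left Jacobian acting on v
    A = np.eye(4)
    A[:3, :3] = R
    A[:3, 3] = J @ v
    return A

xi = np.array([0.1, -0.2, 0.3, 1.0, 2.0, 3.0])
assert np.allclose(vee(hat(xi)), xi)  # vee inverts hat

# Non-injectivity of exp (Remark 13): a pure rotation by 2*pi maps to the identity.
assert np.allclose(se3_exp(np.array([2 * np.pi, 0, 0, 0, 0, 0])), np.eye(4))
```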
With these elements in mind, we can solve the regression problem. Ref. [24] gave a closed-form expression for geodesics in groups such as SO(3) that admit a bi-invariant metric, but no such closed form can be derived for higher-order polynomials. Unfortunately, SE(3) admits no bi-invariant metric [11], so numerical integration is the only option available, even for order-1 polynomials, i.e., geodesics.

6. Implementation

Algorithms

We now show how the solution is numerically implemented.
The algorithm was tested on several generic trajectories generated with MATLAB, with different noise variances, according to the pseudo-code presented in Algorithm 2. In this algorithm, a maneuver is defined as a change in the motion or orientation of the object, altering its speed and direction with respect to a fixed reference point. A maneuver can be as simple as a turn, a climb, or a descent, or something more complex such as a loop or a helix.
Note that the velocity in se(3) is assumed to be constant during a given maneuver; otherwise, we would have to integrate the velocity vector field over time, as in Equation (26).
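Algorithm 2 is reproduced as an image in the published version; the sketch below illustrates the generation step it relies on — a constant twist integrated by the matrix exponential, with each sample perturbed by Gaussian noise drawn in the Lie algebra. The noise model and function names are our assumptions, not the authors' exact code:

```python
import numpy as np
from scipy.linalg import expm

def hat(xi):
    """R^6 -> se(3), rotation components first (our convention)."""
    wx, wy, wz, vx, vy, vz = xi
    return np.array([[0.0, -wz,  wy, vx],
                     [ wz, 0.0, -wx, vy],
                     [-wy,  wx, 0.0, vz],
                     [0.0, 0.0, 0.0, 0.0]])

def simulate(xi, times, noise_std, rng):
    """Constant twist xi during the maneuver: gamma(t) = expm(t hat(xi)),
    each sample perturbed on the right by the exponential of algebra noise."""
    return [expm(t * hat(xi)) @ expm(hat(rng.normal(0.0, noise_std, size=6)))
            for t in times]

rng = np.random.default_rng(0)
xi = np.array([0.0, 0.0, 0.1, 1.0, 0.0, 0.0])  # gradual turn: yaw rate plus forward speed
noisy = simulate(xi, np.linspace(0.0, 10.0, 25), 1e-2, rng)

# Every sample stays on SE(3): orthonormal rotation block, [0 0 0 1] bottom row.
for P in noisy:
    assert np.allclose(P[:3, :3].T @ P[:3, :3], np.eye(3))
    assert np.allclose(P[3], [0.0, 0.0, 0.0, 1.0])
```

Perturbing in the algebra (rather than adding noise to matrix entries) guarantees the noisy measurements remain valid poses.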
The metric used is the one derived from the usual inner product in matrix spaces:
⟨A, B⟩ = tr(AᵀB)
We used the Levi–Civita connection, which is compatible with this metric, to measure the differences between the actual, noisy, and estimated trajectories. The algorithm may easily be adapted to another connection.
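With the inner product above, a natural way to measure the residual between two poses is the norm (induced by ⟨X, Y⟩ = tr(XᵀY), i.e., the Frobenius norm) of the logarithm of their relative displacement. This is a sketch under that assumption; the exact distance used in the paper's MATLAB code may differ:

```python
import numpy as np
from scipy.linalg import logm

def distance(A, B):
    """Residual between poses A, B in SE(3): Frobenius norm of log(A^{-1} B)."""
    X = logm(np.linalg.inv(A) @ B)
    return np.linalg.norm(np.real(X))  # discard numerical imaginary dust

A = np.eye(4)
B = np.eye(4)
B[:3, 3] = [1.0, 0.0, 0.0]  # pure translation by one unit along x

assert np.isclose(distance(A, A), 0.0)
# The log of a pure translation is the translation itself, so the distance is 1.
assert np.isclose(distance(A, B), 1.0)
```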
The pseudocode in Algorithm 3 describes the procedure for determining the optimal geodesic but can easily be extended to determine an optimal curve of order k.
The following factors should be noted:
  • Initialization of p = γ(0) may be carried out in several ways: the naive initialization p₀ = e may work, but the Fréchet mean [38] is often a better alternative. Another possibility is to use one of the measured points;
  • Equations (33) are solved in Algorithm 3 in the specific case of a geodesic, i.e., k = 1: we may rewrite the forward equations using the Christoffel symbols of the connection and then solve a “large” system of ODEs.
    d/dt γ(t) = γ(t) X₁(t)
    d/dt Xᵢ(t) = −∇̄_{X₁} Xᵢ(t) + X_{i+1}(t),  i = 1, …, k − 1
    d/dt X_k(t) = −∇̄_{X₁} X_k(t)
    which, written with the Christoffel symbols Γᵐ_{lj}, becomes
    γ(t) = exp( ∫₀ᵗ X₁(u) du )
    Ẋᵢᵐ(t) = −Γᵐ_{lj} X₁ˡ Xᵢʲ + Xᵢ₊₁ᵐ
    Ẋ_kᵐ(t) = −Γᵐ_{lj} X₁ˡ X_kʲ
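To illustrate how the forward system can be integrated numerically in the geodesic case, assume a trivialization in which the Christoffel terms vanish, so that X₁ stays constant and only d/dt γ = γ X₁ remains. The explicit Euler scheme below (our illustration, not Algorithm 3 itself) then converges to the matrix exponential:

```python
import numpy as np

def expm_series(X, terms=30):
    """Reference matrix exponential via a truncated power series
    (accurate enough here since ||X|| is small)."""
    S, term = np.eye(X.shape[0]), np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        S = S + term
    return S

def integrate_geodesic(gamma0, X1, t_final, n_steps):
    """Explicit Euler for d/dt gamma = gamma X1 with X1 held constant
    (the k = 1 forward system when the Christoffel terms vanish)."""
    dt = t_final / n_steps
    gamma = gamma0.copy()
    for _ in range(n_steps):
        gamma = gamma + dt * (gamma @ X1)
    return gamma

# A twist with both a rotation part (about z) and a translation part.
X1 = np.array([[0.0, -0.3, 0.0, 1.0],
               [0.3,  0.0, 0.0, 0.0],
               [0.0,  0.0, 0.0, 0.5],
               [0.0,  0.0, 0.0, 0.0]])
approx = integrate_geodesic(np.eye(4), X1, 1.0, 20000)
assert np.allclose(approx, expm_series(X1), atol=1e-3)  # gamma(1) = exp(X1)
```

In practice a higher-order scheme (e.g., Runge–Kutta) or a retraction-based integrator would be preferred, but the first-order scheme suffices to show the structure.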
Algorithm 2: Simulated data generation
Algorithm 3: Determining the optimum geodesic
The first examples are shown in Figure 2 and Figure 3. We can see that the estimate is close to the actual trajectory, even though a slight curving motion is present, because the estimated rotation part of the trajectory in SE(3) is close to zero but not exactly zero. This could be corrected in post-processing by flagging the trajectory.
Figure 4 displays a trajectory that is highly unlikely for a civilian aircraft but plausible for a UAV. This kind of trajectory would be difficult to track with a Kalman filter unless the state variables also lie on a Lie group. Turn maneuvers were also tested, as shown in Figure 5 and Figure 6.
We can measure how well the model fits the data using a coefficient of determination R². Following [27], since the data do not belong to a Euclidean space, we use the Fréchet variance as the denominator, as follows:
Var_F = (1/N) min_{y ∈ M} Σ_j d(y, y_j)²,
leading to
R² = 1 − SSE / (sample Fréchet variance) = 1 − [ Σ_j d(γ(t_j), y_j)² ] / [ min_{y ∈ M} Σ_j d(y, y_j)² ]
Note that since we use simulated data, we can also measure the R 2 coefficient with respect to the “real” data:
R² = 1 − [ Σ_j d(γ(t_j), y_j^real)² ] / [ min_{y ∈ M} Σ_j d(y, y_j^real)² ]
As in the Euclidean case, an R² close to 0 indicates that the estimator does not provide a better estimate than the sample mean (the Fréchet mean for manifold-valued data), whereas an R² equal to 1 indicates a perfect fit between the model and the data.
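The R² computation above can be sketched generically, given any distance function. In the sketch below — our simplification, not the paper's procedure — the Fréchet-variance minimization is restricted to the sample points themselves, a common finite approximation:

```python
import numpy as np

def frechet_variance(points, dist, candidates):
    """Approximate min_y (1/N) sum_j d(y, y_j)^2 over a finite candidate set."""
    return min(np.mean([dist(y, yj) ** 2 for yj in points]) for y in candidates)

def r_squared(fitted, observed, dist):
    """Manifold R^2 = 1 - SSE / (N * sample Frechet variance)."""
    sse = sum(dist(g, y) ** 2 for g, y in zip(fitted, observed))
    n = len(observed)
    return 1.0 - sse / (n * frechet_variance(observed, dist, observed))

# Toy example on the real line, where the distance is just |a - b|.
dist = lambda a, b: abs(a - b)
observed = [0.0, 2.0, 4.0]
assert np.isclose(r_squared(observed, observed, dist), 1.0)       # perfect fit
assert np.isclose(r_squared([2.0, 2.0, 2.0], observed, dist), 0.0)  # no better than the mean
```

On SE(3), `dist` would be replaced by the Riemannian distance used for the residuals, and the candidate set could be enriched beyond the sample to tighten the Fréchet-variance approximation.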
Some of the results obtained from the simulations are provided in Table 1.
We notice that the algorithm performs rather well even with “sharp” maneuvers, although, as expected, the performance degrades when the noise variance increases. The algorithm seems to perform well even with a sample size as small as 10 points. The nature of the maneuver does not seem to affect the performance. The source code and more data sets are available as Supplementary Material.

7. Conclusions and Future Work

In this work, we have formulated polynomial regression on Lie groups as the solution of a k-times-iterated covariant derivative equation, where the iterated connection is not necessarily the Levi–Civita connection. The solution was tested in the specific case of the Special Euclidean group SE(3), which models rigid-body motions.
Several directions may be pursued from here. A theoretical study of the performance of the estimator in terms of convergence could be undertaken. Also, since the connection exponential defines a local coordinate system, we could go further and find coordinate systems that flatten the manifold, thus greatly simplifying the regression problem and making it behave as in the Euclidean case. Further work would consist of characterizing the statistical quantities associated with the estimation process using polynomial regression; determining the limit law to be considered in this case is a problem in itself. Another direction would involve using local G-valued polynomials to track changes in maneuvers and adapt the parameters of the optimum fitting curve.

Supplementary Materials

All MATLAB scripts and some datasets are freely available at https://github.com/jhnaby/LieGroupRegression (accessed on 26 September 2024).

Author Contributions

Conceptualization, J.A. and F.N.; software, J.A. and F.N.; investigation, F.N.; writing—original draft preparation, J.A.; writing—review and editing, J.A. and F.N. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by Ecole Nationale de l’Aviation Civile.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available in GitHub at https://github.com/jhnaby/LieGroupRegression (accessed on 26 September 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Julier, S.J.; Uhlmann, J.K. New extension of the Kalman filter to nonlinear systems. In Proceedings of the Signal Processing, Sensor Fusion, and Target Recognition VI, Orlando, FL, USA, 21–24 April 1997; Kadar, I., Ed.; International Society for Optics and Photonics. SPIE: Bellingham, WA, USA, 1997; Volume 3068, pp. 182–193. [Google Scholar] [CrossRef]
  2. Bourmaud, G.; Mégret, R.; Giremus, A.; Berthoumieu, Y. Discrete Extended Kalman Filter on Lie groups. In Proceedings of the 21st European Signal Processing Conference (EUSIPCO 2013), Marrakech, Morocco, 9–13 September 2013; pp. 1–5. [Google Scholar]
  3. Phogat, K.S.; Chang, D.E. Invariant extended Kalman filter on matrix Lie groups. Automatica 2020, 114, 108812. [Google Scholar] [CrossRef]
  4. Bonnabel, S. Left-invariant Extended Kalman Filter and attitude estimation. In Proceedings of the 2007 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, 12–14 December 2007; pp. 1027–1032. [Google Scholar] [CrossRef]
  5. Bonnabel, S.; Martin, P.; Salaün, E. Invariant Extended Kalman Filter: Theory and application to a velocity-aided attitude estimation problem. In Proceedings of the 48th IEEE Conference on Decision and Control (CDC) Held Jointly with 2009 28th Chinese Control Conference, Shanghai, China, 15–18 December 2009; pp. 1297–1304. [Google Scholar] [CrossRef]
  6. Fang, K.; Cai, T.; Wang, B. The Kinematic Models of the SINS and Its Errors on the SE(3) Group in the Earth-Centered Inertial Coordinate System. Sensors 2024, 24, 3864. [Google Scholar] [CrossRef] [PubMed]
  7. Jeong, D.B.; Lee, B.; Ko, N.Y. Three-Dimensional Dead-Reckoning Based on Lie Theory for Overcoming Approximation Errors. Appl. Sci. 2024, 14, 5343. [Google Scholar] [CrossRef]
  8. Sun, J.; Chen, Y.; Cui, B. An Improved Initial Alignment Method Based on SE2(3)/EKF for SINS/GNSS Integrated Navigation System with Large Misalignment Angles. Sensors 2024, 24, 2945. [Google Scholar] [CrossRef] [PubMed]
  9. Park, F.; Bobrow, J.; Ploen, S. A Lie Group Formulation of Robot Dynamics. Int. J. Robot. Res. 1995, 14, 609–618. [Google Scholar] [CrossRef]
  10. Wang, W.; Peng, X.; Ai, J.; Fu, C.; Li, C.; Zhang, Z. SE(3) Based LTV-MPC Algorithm for Multi-Obstacle Trajectory Tracking of Fully Driven Spacecraft. IEEE Access 2024, 12, 37850–37861. [Google Scholar] [CrossRef]
  11. Miolane, N.; Pennec, X. Computing Bi-Invariant Pseudo-Metrics on Lie Groups for Consistent Statistics. Entropy 2015, 17, 1850–1881. [Google Scholar] [CrossRef]
  12. Boisvert, J.; Pennec, X.; Ayache, N.; Labelle, H.; Cheriet, K. 3D anatomical variability assessment of the scoliotic spine using statistics on Lie groups. In Proceedings of the 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro, Arlington, VA, USA, 6–9 April 2006; pp. 750–753. [Google Scholar] [CrossRef]
  13. Boisvert, J.; Cheriet, F.; Pennec, X.; Labelle, H.; Ayache, N. Geometric Variability of the Scoliotic Spine Using Statistics on Articulated Shape Models. IEEE Trans. Med Imaging 2008, 27, 557–568. [Google Scholar] [CrossRef] [PubMed]
  14. Hanik, M.; Hege, H.C.; von Tycowicz, C. Bi-Invariant Dissimilarity Measures for Sample Distributions in Lie Groups. Siam J. Math. Data Sci. 2022, 4, 1223–1249. [Google Scholar] [CrossRef]
  15. Fiori, S.; Rossi, L.D. Minimal control effort and time Lie-group synchronisation design based on proportional-derivative control. Int. J. Control 2022, 95, 138–150. [Google Scholar] [CrossRef]
  16. Duan, X.; Sun, H.; Zhao, X. A Matrix Information-Geometric Method for Change-Point Detection of Rigid Body Motion. Entropy 2019, 21, 531. [Google Scholar] [CrossRef] [PubMed]
  17. Fiori, S. Manifold Calculus in System Theory and Control—Second Order Structures and Systems. Symmetry 2022, 14, 1144. [Google Scholar] [CrossRef]
  18. Smith, S. Covariance, subspace, and intrinsic Cramer-Rao bounds. IEEE Trans. Signal Process. 2005, 53, 1610–1630. [Google Scholar] [CrossRef]
  19. Labsir, S.; Renaux, A.; Vilà-Valls, J.; Chaumette, É. Cramér-Rao Bound on Lie Groups with Observations on Lie Groups: Application to SE(2). In Proceedings of the ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar] [CrossRef]
  20. Labsir, S.; Giremus, A.; Yver, B.; Benoudiba–Campanini, T. An intrinsic Bayesian bound for estimators on the Lie groups SO(3) and SE(3). Signal Process. 2024, 214, 109232. [Google Scholar] [CrossRef]
  21. Jeon, J.M.; Park, B.U.; Keilegom, I.V. Nonparametric regression on Lie groups with measurement errors. Ann. Stat. 2022, 50, 2973–3008. [Google Scholar] [CrossRef]
  22. Camarinha, M.; Silva Leite, F.; Crouch, P. On the geometry of Riemannian cubic polynomials. Differ. Geom. Its Appl. 2001, 15, 107–135. [Google Scholar] [CrossRef]
  23. Camarinha, M.; Silva Leite, F.; Crouch, P. High-Order Splines on Riemannian Manifolds. Proc. Steklov Inst. Math. 2023, 321, 158–178. [Google Scholar] [CrossRef]
  24. Hinkle, J.; Muralidharan, P.; Fletcher, P.T.; Joshi, S. Polynomial Regression on Riemannian Manifolds. In Proceedings of the Computer Vision—ECCV 2012, Florence, Italy, 7–13 October 2012; Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 1–14. [Google Scholar]
  25. Popiel, T.; Noakes, L. Bézier curves and C2 interpolation in Riemannian manifolds. J. Approx. Theory 2007, 148, 111–127. [Google Scholar] [CrossRef]
  26. Hanik, M.; Hege, H.C.; Hennemuth, A.; von Tycowicz, C. Nonlinear Regression on Manifolds for Shape Analysis using Intrinsic Bézier Splines. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2020, Lima, Peru, 4–8 October 2020; Martel, A.L., Abolmaesumi, P., Stoyanov, D., Mateus, D., Zuluaga, M.A., Zhou, S.K., Racoceanu, D., Joskowicz, L., Eds.; Springer: Cham, Switzerland, 2020; pp. 617–626. [Google Scholar]
  27. Fletcher, P.T. Geodesic Regression on Riemannian Manifolds. In Proceedings of the Third International Workshop on Mathematical Foundations of Computational Anatomy—Geometrical and Statistical Methods for Modelling Biological Shape Variability, Westin Harbour Castle, TO, Canada, 22 September 2011; Pennec, X., Joshi, S., Nielsen, M., Eds.; pp. 75–86. [Google Scholar]
  28. Pennec, X. Bi-invariant Means on Lie Groups with Cartan-Schouten Connections. In Proceedings of the Geometric Science of Information, Paris, France, 28–30 August 2013; Nielsen, F., Barbaresco, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 59–67. [Google Scholar]
  29. Amari, S.I.; Nagaoka, H. Methods of Information Geometry; Translations of Mathematical Monographs 191; American Mathematical Society and Oxford University Press: Oxford, UK, 2000. [Google Scholar]
  30. Husemöller, D. Fibre Bundles; Graduate Texts in Mathematics; Springer: New York, NY, USA, 2013. [Google Scholar]
  31. Willmore, T. Riemannian Geometry; Oxford Science Publications; Clarendon Press: Oxford, UK, 1996. [Google Scholar]
  32. Agrachev, A.A.; Sachkov, Y.L. Control Theory from the Geometric Viewpoint; Encyclopaedia of Mathematical Sciences; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar] [CrossRef]
  33. Saunders, D.J. The Geometry of Jet Bundles; London Mathematical Society Lecture Note Series; Cambridge University Press: Cambridge, UK, 1989. [Google Scholar]
  34. Boumal, N. An Introduction to Optimization on Smooth Manifolds; Cambridge University Press: Cambridge, UK, 2023. [Google Scholar] [CrossRef]
  35. Marsden, J.; Ratiu, T. Introduction to Mechanics and Symmetry: A Basic Exposition of Classical Mechanical Systems; Texts in Applied Mathematics; Springer: New York, NY, USA, 1999. [Google Scholar]
  36. Niethammer, M.; Huang, Y.; Vialard, F.X. Geodesic Regression on Image Time Series. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention: MICCAI—International Conference on Medical Image Computing and Computer-Assisted Intervention, Toronto, ON, Canada, 18–22 September 2011; Volume 14, pp. 655–662. [Google Scholar] [CrossRef]
  37. Duistermaat, J.; Kolk, J. Lie Groups; Universitext; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  38. Fréchet, M. Les éléments aléatoires de nature quelconque dans un espace distancié. Ann. Inst. Henri Poincaré 1948, 10, 215–310. [Google Scholar]
Figure 1. Parallel transport of the residues.
Figure 2. A straight line.
Figure 3. A straight line, with a “large” additive noise.
Figure 4. A helix.
Figure 5. A gradual turn.
Figure 6. A sharp turn.
Table 1. Results after simulations.
Maneuver | Noise Std | Nmeas | R²
Straight line | 10⁻³ | 25 | 1
Straight line | 10⁻² | 25 | 0.99
Straight line | 10⁻¹ | 25 | 0.84
Straight line | 10⁻² | 25 | 0.99
Straight line | 10⁻² | 15 | 0.99
Straight line | 10⁻² | 10 | 0.99
Straight line | 10⁻¹ | 25 | 0.84
Straight line | 10⁻¹ | 15 | 0.82
Straight line | 10⁻¹ | 10 | 0.81
Gradual turn | 10⁻² | 25 | 0.99
Gradual turn | 10⁻² | 15 | 0.99
Gradual turn | 10⁻² | 10 | 0.99
Gradual turn | 10⁻¹ | 25 | 0.79
Gradual turn | 10⁻¹ | 15 | 0.75
Gradual turn | 10⁻¹ | 10 | 0.74
Sharp turn | 10⁻¹ | 25 | 0.92
Sharp turn | 10⁻¹ | 15 | 0.89
Sharp turn | 10⁻¹ | 10 | 0.81
Note: the bold lines are illustrated by images.
