*Article* **Lie Geometric Methods in the Study of Driftless Control Affine Systems with Holonomic Distribution and Economic Applications**

**Liviu Popescu 1,\*, Daniel Militaru 2 and Gabriel Tică 1**


**Abstract:** In the present paper, two optimal control problems are studied using Lie geometric methods and applying the Pontryagin Maximum Principle at the level of a new working space, called Lie algebroid. It is proved that the framework of a Lie algebroid is more suitable than the cotangent bundle in order to find the optimal solutions of some driftless control affine systems with holonomic distributions. Finally, an economic application is given.

**Keywords:** control affine systems; controllability; optimal control; Hamilton–Jacobi–Bellman equations; Lie geometric methods; holonomic distribution; economic applications

#### **1. Introduction**

In recent decades, Lie geometric methods have been applied successfully in different domains of research, such as dynamical systems and optimal control theory. In this paper, Lagrangian systems with external holonomic constraints are studied. These types of nonlinear systems have many applications in different areas of optimal control theory, cybernetics and mathematical economics. In [1], an introduction to optimal control problems in life sciences and economics is presented, while in [2], some applications of control theory to economic growth are given. The book [3] presents a modern and thorough exposition of the fundamental mathematical formalism used in optimal control theory to study continuous-time dynamic economic processes and to interpret dynamic economic behavior. In the book [4], deterministic optimal control systems governed by ordinary differential equations are studied. These models cover the problems of economic growth, exploitation of (non)renewable resources, pollution control, behavior of firms and differential games. The monograph [5] presents some optimal control models with management science applications. The book [6] covers the main results of optimal control theory, in particular necessary and sufficient optimality conditions. In the paper [7], an optimal control problem regarding a production–inventory system with customer impatience is studied, and optimizing a production–inventory system under a cost target is investigated in [8]. In addition, a new approach to maximizing the profit in a stock-dependent demand inventory model is presented in [9]. Moreover, a multitude of papers study the optimization of production and storage costs under various restrictions (see, for instance, [10–15]).

The notion of the Lie algebroid was introduced in differential geometry in the early 1950s, but it can also be found in physics or algebra under a wide variety of names. Using the geometry of Lie algebroids, a generalized theory of Lagrangian mechanics is developed in [16], and the equations of motion are obtained using the Poisson structure on the dual of a Lie algebroid and the Legendre transformation associated with a regular Lagrangian. Later, in [17–19], the same equations of motion were obtained using the symplectic formalism

**Citation:** Popescu, L.; Militaru, D.; Tică, G. Lie Geometric Methods in the Study of Driftless Control Affine Systems with Holonomic Distribution and Economic Applications. *Mathematics* **2022**, *10*, 545. https://doi.org/10.3390/math10040545

Academic Editors: Mihaela Neamţu, Eva Kaslik and Anca Rădulescu

Received: 30 December 2021 Accepted: 7 February 2022 Published: 10 February 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

for Lagrangians and Hamiltonians, similarly to the J. Klein formalism of classical Lagrangian mechanics. The first step in studying mechanical control systems on Lie algebroids seems to have been taken in [20], where the problems of accessibility and controllability are also approached. The Pontryagin Maximum Principle on Lie algebroids is presented in [21] and later extended to almost Lie algebroids in [22]. Some aspects regarding the abnormality problem in control theory on Lie algebroids are presented in [23]. The link between optimal control and connection theory on Lie algebroids can be found in [24–26]. Lie geometric methods in control theory have been applied in many papers. In [27], the connection between Lie theory and control is presented. The book [28] deals with the issue of steering a system from any point of departure to any desired destination, and it also studies optimal control problems and the question of finding the best possible trajectory. In addition, some facts and methods of control theory treated from the geometric point of view are presented in [29]. The geometry of Lie algebroids is used in the study of distributional systems (driftless control affine systems) in the papers [12,14,30–36].

One of the most well-known and useful methods in the geometric approach is the analysis of the solution of the optimal control problem, as given by Pontryagin's Maximum Principle. It is known that a curve *c*(*t*) = (*x*(*t*), *u*(*t*)) is an optimal trajectory if there exists a lifting of *x*(*t*) to the dual space, (*x*(*t*), *p*(*t*)), which satisfies the Hamilton–Jacobi–Bellman equations. On the other hand, finding the optimal solution of the control system remains an extremely difficult problem, for several reasons. Firstly, we need to integrate a Hamiltonian system, which is generally difficult to achieve, depending on the shape of the dynamic equations and the Hamiltonian function. Thus, if the Lagrangian has a complicated expression, we cannot guarantee that the Hamiltonian can be calculated without any dependence on the control. The best situation occurs for systems with quadratic cost, the so-called linear quadratic problems. Secondly, there are some special solutions, the so-called abnormal ones, which should be studied and which do not depend on the shape of the Hamiltonian function. Finally, even if all the solutions are found, the problem of selecting the optimal ones from among them remains extremely difficult. For these reasons, we believe that it is important to find new methods and workspaces that could simplify the study. The optimal trajectories of a driftless control affine system with holonomic distribution can be regarded as geodesics in the geometry of Lie algebroids [37]. In fact, sub-Riemannian problems are distributional systems with quadratic cost and nonholonomic (bracket-generating) distribution; see [38,39]. If the distribution is holonomic, then the system is not controllable, and the distribution determines a foliation with the property that any curve is contained in a single leaf of the foliation and the restriction to each leaf of the foliation is bracket-generating.
In many cases, it is not possible to find the exact solution of the optimal control problem. Thus, using the geometry of the space, we can obtain information about the local or global behavior of the solutions. Moreover, if the geodesic curves belong to a manifold with constant positive curvature, then the geodesics focus, whereas negative curvature spreads the geodesics apart.

In this paper, we solve two optimal control problems and prove that the framework of a Lie algebroid is more suitable than the cotangent space in the study of some driftless control affine systems with holonomic distributions. The paper is organized as follows. In the second section, the known results about Lie geometric methods in optimal control theory for control affine systems are presented, including the controllability issues in the case of holonomic and nonholonomic distributions. In addition, only the necessary notions about Lie algebroids and their prolongation over the vector bundle projection are given, and the geometric viewpoint of optimal control systems on this space is presented. Moreover, the relation between the Hamiltonian H on the dual Lie algebroid *E*<sup>∗</sup> and the Hamiltonian *H* on the cotangent space *T*∗*M* is given. Our strategy is to apply the Pontryagin Maximum Principle at the level of a Lie algebroid built in the case of control affine systems with holonomic distribution. The last two sections contain the novelty of the paper. In the third section, we give an application of a driftless control affine system with

positive homogeneous cost, which is more general than the quadratic cost, and show that the Hamilton–Jacobi–Bellman equations, provided by the Pontryagin Maximum Principle in the cotangent bundle, lead to a very complicated system of differential equations. Moreover, it is very difficult to find the Hamiltonian function without dependence on the control variables. For these reasons, we use a different approach, considering the framework of a Lie algebroid, which simplifies the study. We also prove that the distribution generated by the vector fields is holonomic and that it determines a foliation in three-dimensional space. Using the Lie algebra generated by the vector fields of the distribution, we study the controllability of the system and find the surfaces that generate the foliation. These aspects were not investigated in the authors' previous papers. In the last part of the third section, we find the complete solution of the problem using the framework of Lie algebroids. The fourth section deals with the study of an economic problem of inventory and production, using the mathematical model of optimal control and Lie geometric methods for controllability issues. We prove that the system is not controllable, meaning that we cannot produce arbitrary final quantities of products. The problem has a solution (it is controllable) if and only if there is a certain relationship between the final quantities of products. The mathematical models and final results are completely new and different from previous works. The optimal solution is obtained using the Pontryagin Maximum Principle on a Lie algebroid. This approach simplifies the study and shows the connection between the geometry of Lie algebroids and optimal control for distributional systems.

#### **2. Lie Geometric Methods in Optimal Control**

Let *M* be a smooth *n*-dimensional manifold and consider a continuous-time control system given by differential equations of the following form:

$$\frac{dx^i}{dt} = f^i(x, u),$$

where *x* ∈ *M* is the state of the system and *u* ∈ *U* ⊂ R*<sup>m</sup>* represents the controls. Let *x*0 and *x*1 be two points of *M*. We consider an optimal control problem, which consists of finding the trajectories of the control system that connect *x*0 and *x*1 and minimize the cost functional:

$$\min \int_{0}^{T} L(x(t), u(t))\,dt, \quad x(0) = x_0, \quad x(T) = x_1,$$

where *L* is the *Lagrangian* function (energy, cost, time, distance, etc.). We remark that the time *T* can be fixed or free. Fixing the initial point *x*0 and letting the final point *x*1 vary in some domain, we obtain a family of optimal control problems. Similarly, we can fix *x*1 and let *x*0 vary. Control theory deals with systems whose evolution can be influenced by external agents. It is known that one of the most important methods for studying optimal solutions in control theory is Pontryagin's Maximum Principle. It generates first-order differential equations, which provide only necessary conditions for optimality. For each optimal trajectory *c*(*t*) = (*x*(*t*), *u*(*t*)), it generates a lift to the cotangent space, (*x*(*t*), *p*(*t*)), satisfying the Hamilton–Jacobi–Bellman equations. The Hamiltonian function is given by the relation:

$$H(x, p, u) = \langle p, f(x, u) \rangle - L(x, u), \quad p \in T^*M,$$

and the maximization condition with respect to the control variables *u* is given by:

$$H(x(t), p(t), u(t)) = \max_{v} H(x(t), p(t), v),$$

which leads to $\frac{\partial H}{\partial u} = 0$, and the extreme trajectories satisfy the equations:

$$
\dot{x} = \frac{\partial H}{\partial p}, \quad \dot{p} = -\frac{\partial H}{\partial x}.\tag{1}
$$
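As a quick illustration of Equation (1), the canonical equations can be integrated numerically for a toy problem; the dynamics, cost, and all names below are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy example (our assumption): dynamics x' = u with quadratic cost
# L = u**2/2. Maximizing H = p*u - u**2/2 over u gives u = p, hence
# H(x, p) = p**2/2 and the canonical equations below.
def hamilton(t, z):
    x, p = z
    return [p, 0.0]              # x' = dH/dp = p,  p' = -dH/dx = 0

sol = solve_ivp(hamilton, (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)

# With p(t) = 1 constant, the extremal is the straight line x(t) = t.
print(sol.y[0][-1])              # close to 1.0
```

The momentum equation decouples here only because the toy Hamiltonian does not depend on *x*; in general both equations must be integrated together.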

#### *2.1. Control Affine Systems*

Let us consider a control affine system in the following form [40]:

$$\dot{x} = X_0(x) + \sum_{i=1}^{m} u_i X_i(x),\tag{2}$$

where *x* = (*x*<sup>1</sup>, ..., *x<sup>n</sup>*) are local coordinates on a smooth *n*-dimensional manifold *M*, *u*(*t*) = (*u*1(*t*), ..., *um*(*t*)) ∈ *U* ⊂ R*<sup>m</sup>*, *m* ≤ *n*, are control variables, and *X*0, *X*1, ..., *Xm* are smooth vector fields on *M*. In addition, *X*0 is usually called the drift vector field, describing the dynamics of the system in the absence of controls, and the vector fields *Xi*, *i* = 1, *m*, are called the input vector fields.

**Definition 1.** *A control system is named controllable if for any two points x*<sup>0</sup> *and x*<sup>1</sup> *on M there exists a finite time T and an admissible control u* : [0, *T*] → *U, such that for x satisfying x*(0) = *x*0*, we have x*(*T*) = *x*1*.*

In other words, the control system is controllable if for any two points *x*0, *x*1 there exists a trajectory of the system (2) that connects *x*0 to *x*1. Controllability is the ability to steer a system from a given initial state to any final state, in finite time, using the available controls. The reachable set R(*x*0) of a point *x*0 ∈ *M* consists of the states *x* ∈ *M* that can be reached from *x*0 in positive time, by choosing various controls and switching from one to another from time to time. A system is controllable if R(*x*) = *M*, ∀*x* ∈ *M*. Controllability does not depend on the quality of the trajectory between two states of the system, nor on the amount of effort required to achieve it.

**Definition 2.** *A distribution* Δ *on the smooth manifold M is a map which assigns to each point in M a subspace of the tangent bundle at this point:*

$$x \mapsto \Delta(x) \subset T_x M.$$

The distribution Δ is named locally finitely generated if there is a set of vector fields {*Xi*}*i*=1,*m*, called local generators, which spans Δ, that is Δ(*x*) = *span*{*X*1(*x*), ..., *Xm*(*x*)}. In addition, the distribution Δ has dimension *k* if dim Δ(*x*) = *k*, for all points *x* in *M*. We recall that the Lie bracket of two vector fields is given by the relation:

$$[f, g](x) = \frac{\partial g}{\partial x}(x) f(x) - \frac{\partial f}{\partial x}(x) g(x),$$

(here *∂g*/*∂x* denotes the Jacobian matrix of *g*). A distribution Δ on *M* is called involutive if, for every *x* ∈ *M*:

$$f(x), g(x) \in \Delta(x) \Rightarrow [f, g](x) \in \Delta(x).$$

Moreover, if the involutive distribution is generated by the vector fields {*Xi*}*i*=1,*m*, then we have:

$$\left[X_i, X_j\right](x) = \sum_{k=1}^{m} L_{ij}^{k}(x)\, X_k(x).$$

In other words, every Lie bracket can be expressed as a linear combination of the vector fields from the distribution, and therefore, it already belongs to Δ. A foliation {*Sα*}*α*∈*<sup>A</sup>* of a manifold *M* is a partition *M* = ∪*S<sup>α</sup>* of *M* into disjoint connected (immersed) submanifolds *Sα*, called leaves.
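The coordinate formula for the Lie bracket given above can be checked symbolically. The following sketch uses SymPy with two hypothetical vector fields on R<sup>2</sup>, chosen only for illustration.

```python
import sympy as sp

# Coordinates and two illustrative vector fields on R^2 (our choice):
# f = x2 d/dx1,  g = x1 d/dx2.
x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])
f = sp.Matrix([x2, 0])
g = sp.Matrix([0, x1])

def lie_bracket(f, g, X):
    # [f, g](x) = (dg/dx)(x) f(x) - (df/dx)(x) g(x)
    return g.jacobian(X) * f - f.jacobian(X) * g

bracket = lie_bracket(f, g, X)
print(bracket.T)                 # Matrix([[-x1, x2]])
```

Skew-symmetry, [*g*, *f*] = −[*f*, *g*], follows directly from the formula, since swapping the arguments swaps the two Jacobian terms.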

**Definition 3.** *A distribution* Δ *of constant dimension on M is called integrable (or holonomic) if there exists a foliation* {*Sα*}*α*∈*<sup>A</sup> on M whose tangent space is just* Δ*, i.e., TxS* = Δ(*x*)*, where S is the leaf passing through x.*

**Theorem 1.** *(Frobenius) If* Δ *is a distribution with a constant dimension, then* Δ *is integrable if and only if* Δ *is involutive.*

**Definition 4.** *The distribution* Δ = *span*{*X*1, ..., *Xm*} *of the manifold M is called bracket-generating if the iterated Lie brackets*

$$X_i,\ [X_i, X_j],\ [X_i, [X_j, X_k]],\ \ldots, \quad 1 \le i, j, k \le m,$$

*span the tangent space TM of M at every point.*

Using the Lie brackets of vector fields, we construct the flag of subsheaves:

$$
\Delta \subset \Delta^2 \subset \dots \subset \Delta^r \subset \dots \subset TM,
$$

$$
\Delta^2 = \Delta + [\Delta, \Delta], \quad \dots, \quad \Delta^{r+1} = \Delta^r + [\Delta, \Delta^r],
$$

where

$$
[\Delta, \Delta^r] = span\{ [X, Y] : X \in \Delta, \ Y \in \Delta^r \}.
$$

If there exists an *r* ≥ 2 such that Δ*<sup>r</sup>* = *TM*, we say that Δ is a bracket-generating distribution, and *r* is called the step of the distribution Δ. In this case, the distribution Δ is not integrable and is called nonholonomic. This condition is also known as a *strong Hörmander condition* or a *Lie algebra rank condition*. If *r* = 2, the distribution is called a *strong bracket-generating* distribution. Next, we consider the *driftless control affine system* (*X*0 = 0), or distributional system, in the following form:

$$\dot{x} = \sum_{i=1}^{m} u_i X_i(x). \tag{3}$$

The vector fields *Xi*, *i* = 1, *m*, generate a distribution Δ on the manifold *M* (assumed to be connected), whose rank is assumed to be constant. For two points *x*0 and *x*1 on *M*, we consider an optimal control problem that consists of finding those trajectories of the distributional system which connect *x*0 and *x*1 and minimize the cost:

$$\min_{u(\cdot)} \int_0^T \mathcal{F}(u(t))\,dt,\tag{4}$$

where F is a positive homogeneous function on Δ. We will characterize the controllability using the properties of the vector fields which generate the distribution Δ.

**Theorem 2.** *(Chow–Rashevsky) If the distribution* Δ = *span*{*X*1, ..., *Xm*} *is bracket-generating (nonholonomic), then the driftless control affine system is controllable.*

If Δ is not bracket-generating and is integrable (holonomic), then the system (3) is not controllable, and Δ determines a foliation on *M* with the property that any curve is contained in a single leaf of the foliation, and the restriction of Δ to each leaf of the foliation is bracket-generating. In this paper, we study the case of holonomic distributions.

If we assume that the distribution Δ = *span*{*X*1, *X*2, ..., *Xm*} is holonomic with constant rank, which means that [*Xi*, *Xj*] ∈ Δ for every *i*, *j* = 1, *m*, *i* ≠ *j*, then from the Frobenius theorem it follows that the distribution Δ is integrable; it determines a foliation on *M*, and two points can be joined if and only if they are situated on the same leaf.

Next, we will present some notions about Lie algebroids, which are useful in the study of driftless control affine systems.

#### *2.2. Lie Algebroids*

Let *M* be a real, *C*∞-differentiable, *n*-dimensional manifold and *TxM* its tangent space at *x* ∈ *M*. The tangent bundle of *M* is denoted (*TM*, *πM*, *M*), where *TM* = ∪*TxM* and *π<sup>M</sup>* is the canonical projection map *π<sup>M</sup>* : *TM* → *M* taking a tangent vector *X*(*x*) ∈ *TxM* ⊂ *TM* to the base point *x* ∈ *M*. A vector bundle is a triple (*E*, *π*, *M*) where *E* and *M* are manifolds, called the total space and the base space, respectively, and the map *π* : *E* → *M* is a surjective submersion. Using [41], we have:

**Definition 5.** *A Lie algebroid over a manifold M is given by a triple* (*E*, [·, ·]*E*, *σ*)*, where* (*E*, *π*, *M*) *is a vector bundle of rank m over M, which satisfies the following conditions:*

*a) The C*∞(*M*)*-module of sections* Γ(*E*) *is equipped with a Lie algebra structure* [·, ·]*E;*

*b) σ* : *E* → *TM is a bundle map, called the anchor, which induces a Lie algebra homomorphism from the Lie algebra of sections* (Γ(*E*), [·, ·]*E*) *to the Lie algebra of vector fields* (X (*M*), [·, ·])*, satisfying the Leibniz rule:*

$$[s_1, f s_2]_E = f[s_1, s_2]_E + (\sigma(s_1)f)\,s_2, \quad \forall s_1, s_2 \in \Gamma(E), \ f \in C^\infty(M).$$

In addition, we have the following relations:

1◦ [·, ·]*<sup>E</sup>* is an R-bilinear operation,

2◦ [·, ·]*<sup>E</sup>* is skew-symmetric, i.e., [*s*1,*s*2]*<sup>E</sup>* = −[*s*2,*s*1]*E*, ∀*s*1,*s*<sup>2</sup> ∈ Γ(*E*),

3◦ [·, ·]*<sup>E</sup>* verifies the Jacobi identity:

$$[s_1, [s_2, s_3]_E]_E + [s_2, [s_3, s_1]_E]_E + [s_3, [s_1, s_2]_E]_E = 0,$$

and *σ*, being a Lie algebra homomorphism, satisfies the relation:

$$
\sigma[s_1, s_2]_E = [\sigma(s_1), \sigma(s_2)].
$$

For a function *f* on *M*, the differential *d f*(*x*) ∈ *E*<sup>∗</sup> *<sup>x</sup>* is given by ⟨*d f*(*x*), *a*⟩ = *σ*(*a*)*f* for all *a* ∈ *Ex*. If *ω* ∈ ⋀*<sup>k</sup>*(*E*<sup>∗</sup>), then the *exterior derivative d<sup>E</sup>ω* ∈ ⋀*<sup>k</sup>*<sup>+1</sup>(*E*<sup>∗</sup>) is given by the formula:

$$\begin{aligned} d^{E}\omega(s_1,\ldots,s_{k+1}) &= \sum_{i=1}^{k+1} (-1)^{i+1} \sigma(s_i)\, \omega(s_1,\ldots,\hat{s}_i,\ldots,s_{k+1}) \\ &+ \sum_{1 \le i < j \le k+1} (-1)^{i+j}\, \omega([s_i, s_j]_{E}, s_1,\ldots,\hat{s}_i,\ldots,\hat{s}_j,\ldots, s_{k+1}), \end{aligned}$$

where *si* ∈ Γ(*E*), *i* = 1, *k* + 1, and the hat over an argument means the absence of the argument. In addition, it results that (*dE*)<sup>2</sup> = 0. If we consider the local coordinates (*x<sup>i</sup>* ) on *<sup>U</sup>* <sup>⊂</sup> *<sup>M</sup>* and a local basis {*sα*} of the sections of the bundle *<sup>π</sup>*−1(*U*) <sup>→</sup> *<sup>U</sup>*, then these generate local coordinates (*x<sup>i</sup>* , *yα*) on *E*. The local functions *σ<sup>i</sup> <sup>α</sup>*(*x*), *<sup>L</sup><sup>γ</sup> αβ*(*x*) on *M* are given by the following relations:

$$\sigma(s_\alpha) = \sigma_\alpha^i \frac{\partial}{\partial x^i}, \quad [s_\alpha, s_\beta]_E = L_{\alpha\beta}^{\gamma}\, s_\gamma, \quad i = \overline{1,n}, \quad \alpha, \beta, \gamma = \overline{1,m},$$

and are called the structure functions of Lie algebroids. Some examples of Lie algebroids are as follows:

**Example 1.** *The tangent bundle E* = *TM itself, with the identity mapping as anchor. With respect to the usual coordinates* (*x*, *ẋ*)*, the structure functions are L<sup>i</sup> jk* = 0*, σ<sup>i</sup> <sup>j</sup>* = *δ<sup>i</sup> j ; but if we were to change to another basis of vector fields, the structure functions would become nonzero.*

**Example 2.** *Any integrable subbundle of tangent bundle TM is a Lie algebroid with the inclusion as anchor and the induced Lie bracket.*

#### *2.3. The Prolongation of a Lie Algebroid*

Let us consider *τ* : *E*<sup>∗</sup> → *M* as the dual bundle of *π* : *E* → *M* and (*E*, [·, ·]*E*, *σ*) a Lie algebroid structure over *M*. A Lie algebroid structure over *E*∗ can be constructed by taking the prolongation of (*E*, [·, ·]*E*, *σ*) over *τ* : *E*<sup>∗</sup> → *M* (see [42–44]). This structure is given by:

(i) The associated vector bundle is (T *E*∗, *τ*1, *E*∗), where T *E*<sup>∗</sup> = ∪T*u*∗*E*∗, *u*<sup>∗</sup> ∈ *E*∗,

$$\mathcal{T}_{u^*} E^* = \{ (u_x, v_{u^*}) \in E_x \times T_{u^*} E^* \mid \sigma(u_x) = T_{u^*} \tau(v_{u^*}),\ \tau(u^*) = x \in M \},$$

and the projection *τ*<sup>1</sup> : T *E*<sup>∗</sup> → *E*∗, *τ*1(*ux*, *vu*<sup>∗</sup> ) = *u*∗.

(ii) The Lie algebra structure [·, ·]<sup>T</sup> *<sup>E</sup>*<sup>∗</sup> on Γ(T *E*∗) is defined as follows: if *ρ*1, *ρ*2 ∈ Γ(T *E*∗) are such that *ρi*(*u*∗) = (*Xi*(*τ*(*u*∗)), *Ui*(*u*∗)), where *Xi* ∈ Γ(*E*), *Ui* ∈ *χ*(*E*∗) and *σ*(*Xi*(*τ*(*u*∗))) = *Tu*∗*τ*(*Ui*(*u*∗)), *i* = 1, 2, then

$$[\rho_1, \rho_2]_{\mathcal{T}E^*} (u^*) = ([X_1, X_2]_{E} (\tau(u^*)),\ [U_1, U_2] (u^*)).$$

(iii) The anchor map is the projection *<sup>σ</sup>*<sup>1</sup> : <sup>T</sup> *<sup>E</sup>*<sup>∗</sup> <sup>→</sup> *TE*∗, *<sup>σ</sup>*1(*u*, *<sup>v</sup>*) = *<sup>v</sup>*.

We remark that if T*τ* : T*E*<sup>∗</sup> → *E*, T*τ*(*u*, *v*) = *u*, then (*V*T*E*<sup>∗</sup>, *τ*1|*V*T*E*<sup>∗</sup>, *E*<sup>∗</sup>) with *V*T*E*<sup>∗</sup> = *Ker* T*τ* is a sub-bundle of (T*E*<sup>∗</sup>, *τ*1, *E*<sup>∗</sup>), called the vertical sub-bundle. If (*x<sup>i</sup>*, *μα*) are local coordinates on *E*<sup>∗</sup> at *u*<sup>∗</sup> and {*sα*} is a local basis of sections of *π* : *E* → *M*, then a local basis of Γ(T*E*<sup>∗</sup>) is {X*α*, P*<sup>α</sup>*}, where:

$$\mathcal{X}_\alpha(u^*) = \left( s_\alpha(\tau(u^*)),\ \sigma^i_\alpha \frac{\partial}{\partial x^i}\Big|_{u^*} \right), \quad \mathcal{P}^{\alpha}(u^*) = \left( 0,\ \frac{\partial}{\partial \mu_\alpha}\Big|_{u^*} \right).$$

The structure functions of T *E*<sup>∗</sup> are given by the following formulas:

$$
\sigma^1(\mathcal{X}_\alpha) = \sigma^i_\alpha \frac{\partial}{\partial x^i}, \quad \sigma^1(\mathcal{P}^\alpha) = \frac{\partial}{\partial \mu_\alpha},
$$

$$
[\mathcal{X}_\alpha, \mathcal{X}_\beta]_{\mathcal{T}E^*} = L^\gamma_{\alpha\beta} \mathcal{X}_\gamma, \quad [\mathcal{X}_\alpha, \mathcal{P}^\beta]_{\mathcal{T}E^*} = 0, \quad [\mathcal{P}^\alpha, \mathcal{P}^\beta]_{\mathcal{T}E^*} = 0,
$$

and therefore:

$$d^{E}x^i = \sigma^i_\alpha \mathcal{X}^\alpha, \quad d^{E}\mu_\alpha = \mathcal{P}_\alpha, \quad d^{E}\mathcal{X}^\gamma = -\frac{1}{2}L^\gamma_{\alpha\beta}\, \mathcal{X}^\alpha \wedge \mathcal{X}^\beta, \quad d^{E}\mathcal{P}_\alpha = 0,$$

where {X *<sup>α</sup>*,P*α*} is the dual basis of {X*α*,P*α*}. In local coordinates, the Liouville section is given by:

$$
\theta_E = \mu_\alpha \mathcal{X}^\alpha.
$$

The canonical symplectic section *ω<sup>E</sup>* is defined by:

$$
\omega_E = -d^E \theta_E,
$$

and it results that *ω<sup>E</sup>* is a nondegenerate 2-section with *d<sup>E</sup>ω<sup>E</sup>* = 0. We obtain:

$$
\omega\_E = \mathcal{X}^\kappa \wedge \mathcal{P}\_\kappa + \frac{1}{2} \mu\_\alpha L^\alpha\_{\beta\gamma} \mathcal{X}^\beta \wedge \mathcal{X}^\gamma.
$$

By a control system on the Lie algebroid (*E*, [·, ·]*E*, *σ*) (see [21]) with the control space given by *τ* : *A* → *M*, we understand a section *ρ* of *E* along *τ*. A trajectory of the system *ρ* is an integral curve of the vector field *σ*(*ρ*). Considering the cost function L ∈ *C*∞(*A*), we must minimize the integral of L over the family of those system trajectories which satisfy certain constraints. The Hamiltonian function H ∈ *C*∞(*E*<sup>∗</sup> ×*<sup>M</sup> <sup>A</sup>*) is given by:

$$\mathcal{H}(\mu, u) = \langle \mu, \rho(u) \rangle - \mathcal{L}(u),$$

and the associated Hamiltonian control system *ρ<sup>H</sup>* is given by the symplectic equation on Lie algebroid:

$$i_{\rho_H} \omega_E = d^E \mathcal{H}.$$

In local coordinates, the solution of the previous equation reads as:

$$
\rho\_H = \frac{\partial \mathcal{H}}{\partial \mu\_\alpha} \mathcal{X}\_\alpha - \left( \sigma\_\alpha^i \frac{\partial \mathcal{H}}{\partial x^i} + \mu\_\gamma L\_{\alpha\beta}^\gamma \frac{\partial \mathcal{H}}{\partial \mu\_\beta} \right) \mathcal{P}^\alpha,
$$

on the subset where

$$\frac{\partial \mathcal{H}}{\partial u^{A}} = 0.$$

The critical trajectories are given by [21]:

$$\frac{\partial \mathcal{H}}{\partial u^{A}} = 0, \quad \frac{dx^{i}}{dt} = \sigma_\alpha^{i} \frac{\partial \mathcal{H}}{\partial \mu_\alpha}, \quad \frac{d\mu_\alpha}{dt} = -\sigma_\alpha^{i} \frac{\partial \mathcal{H}}{\partial x^{i}} - \mu_\gamma L_{\alpha\beta}^{\gamma} \frac{\partial \mathcal{H}}{\partial \mu_\beta}.\tag{5}$$

We remark that to any positive homogeneous cost L : *E* → R on the Lie algebroid *E*, a cost *L* on *Imσ* ⊂ *TM* can be associated, defined in the following form:

$$L(v) = \{ \mathcal{L}(u) \mid u \in E_x, \quad \sigma(u) = v \},$$

where *v* ∈ (*Imσ*)*<sup>x</sup>* ⊂ *TxM*, *x* ∈ *M*. From [35], we have:

**Theorem 3.** *The relation between the Hamiltonian function H on the cotangent bundle T*∗*M and the Hamiltonian function* H *on the dual Lie algebroid E*<sup>∗</sup> *has the form:*

$$H(p) = \mathcal{H}(\sigma^*(p)), \quad \mu = \sigma^*(p), \quad p \in T_x^*M, \quad \mu \in E_x^*. \tag{6}$$

**Proof.** From the Fenchel–Legendre dual of Lagrangian *L*, we obtain the Hamiltonian *H* given by:

$$\begin{aligned} H(p) &= \sup_{v} \{ \langle p, v \rangle - L(v) \} = \sup_{v} \{ \langle p, v \rangle - \mathcal{L}(u);\ \sigma(u) = v \} \\ &= \sup_{u} \{ \langle p, \sigma(u) \rangle - \mathcal{L}(u) \} = \sup_{u} \{ \langle \sigma^*(p), u \rangle - \mathcal{L}(u) \} = \mathcal{H}(\sigma^*(p)). \end{aligned}$$

Furthermore, it results in the following:

$$H(p) = \mathcal{H}(\mu), \quad \mu = \sigma^*(p), \quad p \in T_x^*M, \quad \mu \in E_x^*,$$

or in local coordinates:

$$
\mu_{\alpha} = \sigma_{\alpha}^{i}\, p_{i}, \tag{7}
$$

where the Hamiltonian *<sup>H</sup>*(*p*) is degenerate on *Kerσ* <sup>⊂</sup> *<sup>T</sup>*∗*M*.
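As a sanity check of the Fenchel–Legendre computation above, the supremum can be carried out symbolically. The sketch below assumes, as our own simplification, a plain quadratic cost (the case ε = 0 of the Randers-type cost used in the next section), so that the supremum is attained at a stationary point.

```python
import sympy as sp

# Simplified quadratic cost L = (u1**2 + u2**2)/2 (our assumption: the
# epsilon = 0 case), so the supremum in the Legendre transform is attained
# at a stationary point of <mu, u> - L(u).
u1, u2, m1, m2 = sp.symbols('u1 u2 mu1 mu2', real=True)
L = (u1**2 + u2**2) / 2
expr = m1*u1 + m2*u2 - L

# The expression is strictly concave in (u1, u2), so stationarity suffices.
crit = sp.solve([sp.diff(expr, u1), sp.diff(expr, u2)], [u1, u2])
H = sp.expand(expr.subs(crit))
print(H)                             # mu1**2/2 + mu2**2/2
```

The result is the familiar dual quadratic Hamiltonian; for a genuinely positive homogeneous cost the same construction goes through, but the maximizer no longer has this simple closed form.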

#### **3. Application to Optimal Control**

Let us consider the following driftless control affine system (distributional system) with positive homogeneous cost (Randers type metric):

$$\begin{cases}
\dot{x}^1 = u_2 \\
\dot{x}^2 = u_1 + u_2 x^2 \\
\dot{x}^3 = u_1 + u_2 x^3
\end{cases} \tag{8}$$

$$\min_{u(\cdot)} \int_0^T \left(\sqrt{u_1^2 + u_2^2} + \varepsilon u_1\right) dt, \quad 0 \le \varepsilon < 1.$$

We are looking for the optimal trajectories of the system, starting from the initial point (0, 1, 0)*<sup>t</sup>*, which are parameterized by constant speed, that is, $\sqrt{u_1^2 + u_2^2} + \varepsilon u_1 = const$ (minimum time problem), and have free endpoints. The control system can be written in the following form:

$$\dot{x} = u_1 X_1 + u_2 X_2, \quad x = \begin{pmatrix} x^1 \\ x^2 \\ x^3 \end{pmatrix} \in \mathbb{R}^3, \; X_1 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \; X_2 = \begin{pmatrix} 1 \\ x^2 \\ x^3 \end{pmatrix} \tag{9}$$

$$\min\_{u(\cdot)} \int\_0^T \mathcal{F}(u(t))dt,\ \mathcal{F}(u) = \sqrt{u\_1^2 + u\_2^2} + \varepsilon u\_1, \quad 0 \le \varepsilon < 1.$$

The vector fields of the distributional system are given by:

$$X\_1 = \frac{\partial}{\partial \mathbf{x}^2} + \frac{\partial}{\partial \mathbf{x}^3}, \quad X\_2 = \frac{\partial}{\partial \mathbf{x}^1} + \mathbf{x}^2 \frac{\partial}{\partial \mathbf{x}^2} + \mathbf{x}^3 \frac{\partial}{\partial \mathbf{x}^3}.$$

The Lie bracket of the vector fields is:

$$[X\_1, X\_2] = \left[ \frac{\partial}{\partial \mathbf{x}^2} + \frac{\partial}{\partial \mathbf{x}^3}, \frac{\partial}{\partial \mathbf{x}^1} + \mathbf{x}^2 \frac{\partial}{\partial \mathbf{x}^2} + \mathbf{x}^3 \frac{\partial}{\partial \mathbf{x}^3} \right] = X\_1,$$

and it follows that the associated distribution $\Delta = \mathrm{span}\{X\_1, X\_2\}$ is holonomic and has constant rank 2. Moreover, from the system (8), we obtain:

$$
\dot{\mathbf{x}}^2 - \dot{\mathbf{x}}^3 = \dot{\mathbf{x}}^1 (\mathbf{x}^2 - \mathbf{x}^3),
$$

which yields the following by integration:

$$
\ln\left|\mathbf{x}^2 - \mathbf{x}^3\right| = \mathbf{x}^1 + \mathbf{c},
\tag{10}
$$

where *c* is a constant, and it results that Δ determines a foliation on $\mathbb{R}^3$, given by the surfaces (10). Furthermore, the initial point $(0, 1, 0)^t$ lies on the surface given by $\ln(x^2 - x^3) = x^1$, and the optimal trajectories of the system belong to the same surface. In order to solve this optimal control problem, the Pontryagin Maximum Principle in the cotangent bundle can be used. The Lagrangian has the form $\mathcal{L} = \frac{1}{2}\mathcal{F}^2$ (every minimizer parametrized by arclength, or constant speed $\mathcal{F} = 1$, is also a minimizer of the so-called energy cost $\mathcal{L} = \frac{1}{2}\mathcal{F}^2$; see [34] for more details), and we obtain the Hamiltonian as follows:

$$H(u, \mathbf{x}, p) = p\_i \dot{\mathbf{x}}^i - \mathcal{L} = p\_1 u\_2 + p\_2 (u\_1 + u\_2 \mathbf{x}^2) + p\_3 (u\_1 + u\_2 \mathbf{x}^3) - \frac{1}{2} \left(\sqrt{u\_1^2 + u\_2^2} + \varepsilon u\_1\right)^2.$$

The Hamilton–Jacobi–Bellman equations on the cotangent bundle, given by $\frac{\partial H}{\partial u\_i} = 0$, $\frac{dx^i}{dt} = \frac{\partial H}{\partial p\_i}$, $\frac{dp\_i}{dt} = -\frac{\partial H}{\partial x^i}$, lead to the following system:

$$\begin{cases} p\_2 + p\_3 - \left(\sqrt{u\_1^2 + u\_2^2} + \varepsilon u\_1\right) \left(\varepsilon + \frac{u\_1}{\sqrt{u\_1^2 + u\_2^2}}\right) = 0\\ p\_1 + p\_2 \mathbf{x}^2 + p\_3 \mathbf{x}^3 - \left(\sqrt{u\_1^2 + u\_2^2} + \varepsilon u\_1\right) \frac{u\_2}{\sqrt{u\_1^2 + u\_2^2}} = 0 \end{cases} \tag{11}$$

and a very complicated system of implicit differential equations. From (11), it is difficult to find the Hamiltonian *H* without dependence on the control variables. For this reason, a different approach will be used, involving the framework of a Lie algebroid.
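Before passing to the Lie algebroid, the involutivity computation above is easy to confirm by machine. The following Python sketch (using `sympy`; the helper `lie_bracket` is ours, not from the paper) recomputes the bracket of the fields of (9) in coordinates:

```python
import sympy as sp

# Vector fields X1, X2 of the distributional system (9), as component
# vectors with respect to the coordinates (x^1, x^2, x^3).
x1, x2, x3 = sp.symbols('x1 x2 x3')
coords = [x1, x2, x3]
X1 = sp.Matrix([0, 1, 1])
X2 = sp.Matrix([1, x2, x3])

def lie_bracket(X, Y):
    # Coordinate formula [X, Y]^i = X^j dY^i/dx^j - Y^j dX^i/dx^j.
    return sp.Matrix([
        sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
            for j in range(3))
        for i in range(3)
    ])

# [X1, X2] = X1, so the distribution span{X1, X2} is involutive (holonomic).
assert lie_bracket(X1, X2) == X1
```

Since the bracket stays inside the distribution, the Frobenius theorem applies, exactly as stated above.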

In order to use the Pontryagin Maximum Principle in the framework of Lie algebroids, we will consider *E* = Δ (holonomic distribution with constant rank). The anchor $\sigma : E \to TM$ is the inclusion, and $[\cdot, \cdot]\_E$ the induced Lie bracket. In the case of the previous control system (9), the anchor $\sigma$ has the local components given by:

$$
\sigma\_\alpha^i = \begin{pmatrix} 0 & 1 \\ 1 & \mathbf{x}^2 \\ 1 & \mathbf{x}^3 \end{pmatrix},
$$

and we consider the Lagrangian function given by:

$$\mathcal{L} = \frac{1}{2} \left( \sqrt{u\_1^2 + u\_2^2} + \varepsilon u\_1 \right)^2.$$

Using [45], we can find the Hamiltonian on *E*∗ in the following form:

$$\mathcal{H}(\mu) = \frac{1}{2} \left( \sqrt{\frac{(\mu\_1)^2}{(1-\varepsilon^2)^2} + \frac{(\mu\_2)^2}{1-\varepsilon^2}} - \frac{\varepsilon\mu\_1}{1-\varepsilon^2} \right)^2. \tag{12}$$
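As a numerical sanity check of (12) (not part of the original derivation), one can approximate the Legendre transform $\sup\_u\{\langle \mu, u \rangle - \mathcal{L}(u)\}$ by brute force and compare it with the closed form; the values of $\varepsilon$, $\mu\_1$, $\mu\_2$ below are arbitrary test data:

```python
import numpy as np

# Hamiltonian (12), transcribed directly.
def H(m1, m2, e):
    return 0.5 * (np.sqrt(m1**2 / (1 - e**2)**2 + m2**2 / (1 - e**2))
                  - e * m1 / (1 - e**2))**2

# Brute-force Legendre transform sup_u { mu.u - L(u) } of the energy cost
# L(u) = (1/2)(sqrt(u1^2 + u2^2) + eps*u1)^2 on a grid of controls.
eps, mu1, mu2 = 0.3, 0.4, 0.7          # arbitrary test values
u1, u2 = np.meshgrid(np.linspace(-3, 3, 601), np.linspace(-3, 3, 601))
L = 0.5 * (np.sqrt(u1**2 + u2**2) + eps * u1)**2
sup_val = np.max(mu1 * u1 + mu2 * u2 - L)

assert abs(H(mu1, mu2, eps) - sup_val) < 1e-3
```

Since the cost is convex in $u$, the grid maximum approaches the true supremum from below, so a loose tolerance suffices.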

Using the relations (6) and (7), we can calculate the Hamiltonian *H* on $T^\*M$ given by $H(x, p) = \mathcal{H}(\mu)$, $\mu = \sigma^\*(p)$, where:

$$
\begin{pmatrix} \mu\_1 \\ \mu\_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 1 \\ 1 & \mathbf{x}^2 & \mathbf{x}^3 \end{pmatrix} \begin{pmatrix} p\_1 \\ p\_2 \\ p\_3 \end{pmatrix}.
$$

We obtain that $\mu\_1 = p\_2 + p\_3$, $\mu\_2 = p\_1 + p\_2 x^2 + p\_3 x^3$, and it results in the Hamiltonian in the cotangent bundle:

$$H(\mathbf{x}, p) = \frac{1}{2} \left( \sqrt{\frac{(p\_2 + p\_3)^2}{(1 - \varepsilon^2)^2} + \frac{(p\_1 + p\_2 \mathbf{x}^2 + p\_3 \mathbf{x}^3)^2}{1 - \varepsilon^2}} - \frac{\varepsilon (p\_2 + p\_3)}{1 - \varepsilon^2} \right)^2. \tag{13}$$

Unfortunately, the Equations (1) on $T^\*M$ with $H(x, p)$ from (13) lead to a complicated system of implicit differential equations. We will use the framework of a Lie algebroid. From the relation $[X\_\alpha, X\_\beta] = L^\gamma\_{\alpha\beta} X\_\gamma$, we obtain the non-zero components $L^1\_{12} = 1$, $L^1\_{21} = -1$, while from (5), we deduce the following:

$$\begin{array}{rcl} \dot{\mathbf{x}}^1 &=& \frac{\partial \mathcal{H}}{\partial \mu\_2}, \quad \dot{\mathbf{x}}^2 = \frac{\partial \mathcal{H}}{\partial \mu\_1} + \mathbf{x}^2 \frac{\partial \mathcal{H}}{\partial \mu\_2}, \quad \dot{\mathbf{x}}^3 = \frac{\partial \mathcal{H}}{\partial \mu\_1} + \mathbf{x}^3 \frac{\partial \mathcal{H}}{\partial \mu\_2}, \\ \dot{\mu}\_1 &=& -\mu\_1 \frac{\partial \mathcal{H}}{\partial \mu\_2}, \quad \dot{\mu}\_2 = \mu\_1 \frac{\partial \mathcal{H}}{\partial \mu\_1}, \end{array}$$

where

$$\frac{\partial \mathcal{H}}{\partial \mu\_1} = \frac{(1+\varepsilon^2)\mu\_1}{(1-\varepsilon^2)^2} - \frac{\varepsilon \sqrt{\frac{(\mu\_1)^2}{(1-\varepsilon^2)^2} + \frac{(\mu\_2)^2}{1-\varepsilon^2}}}{1-\varepsilon^2} - \frac{\varepsilon \mu\_1^2}{(1-\varepsilon^2)^3 \sqrt{\frac{(\mu\_1)^2}{(1-\varepsilon^2)^2} + \frac{(\mu\_2)^2}{1-\varepsilon^2}}}.$$

$$\frac{\partial \mathcal{H}}{\partial \mu\_2} = \frac{\mu\_2}{1-\varepsilon^2} - \frac{\varepsilon \mu\_1 \mu\_2}{(1-\varepsilon^2)^2 \sqrt{\frac{(\mu\_1)^2}{(1-\varepsilon^2)^2} + \frac{(\mu\_2)^2}{1-\varepsilon^2}}}.$$
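These two partial derivatives can be verified symbolically; a short `sympy` sketch:

```python
import sympy as sp

# Hamiltonian (12) and the two partial derivatives quoted above.
mu1, mu2, e = sp.symbols('mu1 mu2 varepsilon', positive=True)
R = sp.sqrt(mu1**2 / (1 - e**2)**2 + mu2**2 / (1 - e**2))
H = sp.Rational(1, 2) * (R - e * mu1 / (1 - e**2))**2

dH1 = ((1 + e**2) * mu1 / (1 - e**2)**2 - e * R / (1 - e**2)
       - e * mu1**2 / ((1 - e**2)**3 * R))
dH2 = mu2 / (1 - e**2) - e * mu1 * mu2 / ((1 - e**2)**2 * R)

# Both differences simplify to zero, confirming the formulas above.
assert sp.simplify(sp.diff(H, mu1) - dH1) == 0
assert sp.simplify(sp.diff(H, mu2) - dH2) == 0
```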

We consider the following change of variables:

$$
\mu\_1(t) = (1 - \varepsilon^2)r(t)\operatorname{sech}\theta(t), \quad \mu\_2(t) = \sqrt{1 - \varepsilon^2}r(t)\tanh\theta(t). \tag{14}
$$

where

$$\sinh\theta = \frac{e^{\theta} - e^{-\theta}}{2}, \quad \cosh\theta = \frac{e^{\theta} + e^{-\theta}}{2}, \quad \tanh\theta = \frac{\sinh\theta}{\cosh\theta}, \quad \text{sech}\,\theta = \frac{1}{\cosh\theta}.$$

In these conditions, we have:

$$\sqrt{\frac{(\mu\_1)^2}{(1-\varepsilon^2)^2} + \frac{(\mu\_2)^2}{1-\varepsilon^2}} = |r|,$$

and the differential equation:

$$
\dot{\mu}\_1 = -\mu\_1 \frac{\partial \mathcal{H}}{\partial \mu\_2},
$$

with the relation (14) yielding:

$$\sqrt{1-\varepsilon^2}\left(\frac{\dot{r}}{r}-\dot{\theta}\tanh\theta\right)=r(-\tanh\theta+\varepsilon\text{sech}\theta\tanh\theta).\tag{15}$$

In addition, from the equation

$$
\dot{\mu}\_2 = \mu\_1 \frac{\partial \mathcal{H}}{\partial \mu\_1},
$$

and (14), we obtain:

$$r\sqrt{1-\varepsilon^2}\left(\frac{\dot{r}}{r}\tanh\theta+\dot{\theta}\operatorname{sech}^2\theta\right) = r^2\left((1+\varepsilon^2)\operatorname{sech}^2\theta-\varepsilon\text{sech}\theta-\varepsilon\text{sech}^3\theta\right).\tag{16}$$

Now, solving the Equations (15) and (16) for $\dot{r}$ and $\dot{\theta}$, we obtain:

$$
\sqrt{1-\varepsilon^2}\dot{r} = r^2 \varepsilon \text{sech}\theta \tanh\theta (\varepsilon \text{sech}\theta - 1),
$$

and

$$
\sqrt{1 - \varepsilon^2} \dot{\theta} = r(\varepsilon \text{sech}\theta - 1)^2.
$$

The last two equations lead to:

$$\frac{\dot{r}}{\dot{\theta}} = \frac{r \varepsilon \text{sech}\theta \tanh\theta}{\varepsilon \text{sech}\theta - 1},$$

and to:

$$\frac{dr}{r} = \frac{\varepsilon \text{sech}\theta \tanh\theta}{\varepsilon \text{sech}\theta - 1} d\theta,$$

respectively, with the solution given by:

$$
\ln|r| = -\ln(\varepsilon \text{sech}\theta - 1) - \ln c,
$$

which leads to:

$$|r| = \frac{1}{c(\varepsilon \text{sech}\theta - 1)}.$$

Since the optimal trajectories are parameterized by arclength, this corresponds to the $\frac{1}{2}$ level set of the Hamiltonian, and we obtain:

$$\mathcal{H} = \frac{r^2}{2} (1 - \varepsilon\text{sech}\theta)^2 = \frac{1}{2c^2}.$$

Now, *c* = ±1 and

$$r = \pm \frac{1}{\varepsilon \text{sech}\theta - 1}.$$

The equation

$$
\dot{\mu}\_1 = -\mu\_1 \dot{\mathbf{x}}^1,
$$

implies the following:

$$\mathbf{x}^1(\theta) = \ln \frac{c\_1(1 - \varepsilon \text{sech}\theta)}{(1 - \varepsilon^2)\text{sech}\theta}, \quad c\_1 \in \mathbb{R}.$$

Since we are looking for the optimal trajectories starting from the initial point $(0, 1, 0)^t$, we have $x^1(0) = 0$ and:

$$\ln \frac{c\_1}{1+\varepsilon} = 0 \Rightarrow c\_1 = 1 + \varepsilon,$$

which leads to:

$$\mathbf{x}^1(\theta) = \ln \frac{1 - \varepsilon \text{sech}\theta}{(1 - \varepsilon)\text{sech}\theta} = \ln \frac{\cosh \theta - \varepsilon}{1 - \varepsilon}.$$

We also obtain the following:

$$
\dot{\mu}\_2 = \mu\_1 \left( \dot{\mathbf{x}}^2 - \mathbf{x}^2 \frac{\partial \mathcal{H}}{\partial \mu\_2} \right) = \mu\_1 \dot{\mathbf{x}}^2 + \mathbf{x}^2 \dot{\mu}\_1.
$$

and, consequently, $\mu\_2 = \mu\_1 x^2 + c\_2$. Furthermore:

$$\mathbf{x}^2(\theta) = \frac{\sinh \theta}{\sqrt{1 - \varepsilon^2}} \pm \frac{c\_2(1 - \varepsilon \text{sech}\theta)}{(1 - \varepsilon^2)\,\text{sech}\theta}.$$

From $x^2(0) = 1$, we obtain that $c\_2 = 1 + \varepsilon$, and this yields:

$$\mathbf{x}^2(\theta) = \frac{\sinh \theta}{\sqrt{1 - \varepsilon^2}} + \frac{\cosh \theta - \varepsilon}{1 - \varepsilon}.$$

In the same way, we obtain:

$$x^3(\theta) = \frac{\sinh \theta}{\sqrt{1 - \varepsilon^2}} \pm \frac{c\_3(1 - \varepsilon \text{sech}\theta)}{(1 - \varepsilon^2)\,\text{sech}\theta}.$$

From $x^3(0) = 0$, we obtain that $c\_3 = 0$, and it results in the following:

$$x^3(\theta) = \frac{\sinh \theta}{\sqrt{1 - \varepsilon^2}}.$$

The solution is optimal because the Hamiltonian is a convex function. Using (8), we have $u\_2 = \dot{x}^1$, $u\_1 = \dot{x}^3 - u\_2 x^3 = \dot{x}^2 - u\_2 x^2$, and by direct computation, we obtain the control variables:

$$
u\_2(\theta) = \frac{\sinh \theta}{\cosh \theta - \varepsilon}, \quad u\_1(\theta) = \frac{1}{\sqrt{1 - \varepsilon^2}} \frac{1 - \varepsilon \cosh \theta}{\cosh \theta - \varepsilon}.
$$

We have to remark that if $\varepsilon = 0$, then we obtain the case of driftless control affine systems with quadratic cost, with the solution:

$$\mathbf{x}^1(t) = \ln \cosh t, \quad \mathbf{x}^2(t) = \sinh t + \cosh t, \quad \mathbf{x}^3(t) = \sinh t,$$

and control variables

$$
u\_2(t) = \tanh t, \quad u\_1(t) = \operatorname{sech} t.
$$
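As a sketch of a correctness check, the closed-form curves and controls for $\varepsilon = 0$ can be substituted back into the system (8) and the leaf $\ln(x^2 - x^3) = x^1$ of the foliation (10):

```python
import sympy as sp

t = sp.symbols('t')
x1 = sp.log(sp.cosh(t))
x2 = sp.sinh(t) + sp.cosh(t)
x3 = sp.sinh(t)
u1, u2 = sp.sech(t), sp.tanh(t)

def is_zero(expr):
    # Rewrite hyperbolic functions via exponentials before simplifying.
    return sp.simplify(expr.rewrite(sp.exp)) == 0

assert is_zero(sp.diff(x1, t) - u2)                 # xdot^1 = u2
assert is_zero(sp.diff(x2, t) - (u1 + u2 * x2))     # xdot^2 = u1 + u2 x^2
assert is_zero(sp.diff(x3, t) - (u1 + u2 * x3))     # xdot^3 = u1 + u2 x^3
assert is_zero(sp.log(x2 - x3) - x1)                # leaf ln(x^2 - x^3) = x^1
```

The trajectory indeed stays on the leaf through the initial point $(0, 1, 0)^t$, as stated above.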

#### **4. Economic Application**

Let us consider that in a fixed period of time *T*, three types of products, *P*1, *P*2, *P*3, must be manufactured in certain fixed quantities. It is assumed that the production rate of the product *P*<sup>3</sup> depends on production rates of *P*<sup>1</sup> and *P*<sup>2</sup> by a given law. We assume that the unit production costs for the first two products increase linearly with the level of production and the production operations costs for the third product are considered negligible (for example, being obtained by combining the first two products together with other external products). We have storage costs of holding inventory given by constants (*β*1, *β*2, *β*3) for each type of product, and there are no restrictions on production capacity. The goal is to find an optimal production plan such as to ensure the required quantities of each type of product on a fixed date but with minimal costs of production and storage.

We will consider the following notations: *Pi* are the products, *i* = 1, 2, 3; *T* is the fixed period of time to ensure the quantities of products; *x<sup>i</sup>* (*t*) are the accumulated quantities by time *t*; *si* are the final quantities required; *p<sup>i</sup>* (*t*) are the rates of production at time *t*; and *ci* are the unit production costs.

We assume that the initial quantity of products is zero. If, however, there is a certain quantity of products, then it is deducted from the final quantities required, that is, $x^i(0) = 0$ and $x^i(T) = s\_i$. The production costs increase linearly with the production level and are given by $c\_i = \alpha\_i p^i$, $\alpha\_1, \alpha\_2 > 0$, $i = 1, 2$. The production rate for the last product is assumed to be given by the law $\dot{x}^3 = u\_1 x^2 + u\_2 x^1$, where $u\_1, u\_2 \ge 0$ are control variables with $\dot{x}^1 = u\_1$, $\dot{x}^2 = u\_2$. It is known that the inventory level is the accumulated past production $p^i = p^i(t)$. Considering $x^i(0) = 0$, we obtain:

$$x^i(t) = \int\_0^t p^i(s)ds,$$

and the rate of change of the inventory level $\dot{x}^i$ is the production rate, so $\dot{x}^i(t) = p^i(t)$. The unit production costs $c\_i$ increase linearly with the production level, $c\_i = \alpha\_i p^i$, where $\alpha\_1, \alpha\_2 > 0$ are positive constants, and it results that the total cost of production is given by $c\_1 p^1 + c\_2 p^2 = \alpha\_1 (p^1)^2 + \alpha\_2 (p^2)^2 = \alpha\_1 (\dot{x}^1)^2 + \alpha\_2 (\dot{x}^2)^2 = \alpha\_1 u\_1^2 + \alpha\_2 u\_2^2$. It results that the total cost of production and storage is $\alpha\_1 u\_1^2 + \alpha\_2 u\_2^2 + \beta\_1 x^1 + \beta\_2 x^2 + \beta\_3 x^3$. Finally, we obtain the following optimal control problem:

$$\begin{aligned} \dot{\mathbf{x}}^1 &= u\_1 \\ \dot{\mathbf{x}}^2 &= u\_2 \\ \dot{\mathbf{x}}^3 &= u\_1 \mathbf{x}^2 + u\_2 \mathbf{x}^1 \\ \mathbf{x}^i(0) &= 0, \quad \mathbf{x}^i(T) = s\_i \\ u\_1, u\_2 &\ge 0, \quad \alpha\_1, \alpha\_2 > 0, \quad \beta\_1, \beta\_2, \beta\_3 \ge 0. \end{aligned} \tag{17}$$

We are looking for a plan of production with the minimum total cost:

$$\min\_{u} \int\_{0}^{T} \left( \alpha\_{1} (u\_{1}(t))^{2} + \alpha\_{2} (u\_{2}(t))^{2} + \beta\_{1} \mathbf{x}^{1} + \beta\_{2} \mathbf{x}^{2} + \beta\_{3} \mathbf{x}^{3} \right) dt.$$

From the conditions $u\_1, u\_2 \ge 0$, it results that $\dot{x}^1 \ge 0$ and $\dot{x}^2 \ge 0$, so $x^1(t), x^2(t)$ are increasing functions, which together with the initial conditions $x^i(0) = 0$ ensures the economic conditions of positivity $x^i(t) \ge 0$, $i = 1, 2$. In addition, $\dot{x}^3 = u\_1 x^2 + u\_2 x^1 \ge 0$, and using $x^3(0) = 0$, we obtain $x^3(t) \ge 0$. Some different mathematical models can be found in the papers [10,12–15]. It results that the system (17) is a driftless control affine system on $\mathbb{R}^3\_+$, written in the following form:

$$\begin{aligned} \dot{\mathbf{x}} &= u\_1 X\_1 + u\_2 X\_2, \quad \mathbf{x} = (\mathbf{x}^1, \mathbf{x}^2, \mathbf{x}^3)^t \in \mathbb{R}\_+^3, \\ \min\_{u} & \int\_0^T F(u(t), \mathbf{x}(t)) dt, \end{aligned}$$

where we have denoted the vector fields and the total cost:

$$\begin{aligned} X\_1 &= \begin{pmatrix} 1 \\ 0 \\ \mathbf{x}^2 \end{pmatrix}, \quad X\_2 = \begin{pmatrix} 0 \\ 1 \\ \mathbf{x}^1 \end{pmatrix} \\ F(u(t), \mathbf{x}(t)) &= \alpha\_1 (u\_1(t))^2 + \alpha\_2 (u\_2(t))^2 + \beta\_1 \mathbf{x}^1 + \beta\_2 \mathbf{x}^2 + \beta\_3 \mathbf{x}^3. \end{aligned}$$

Next, we are looking for the optimal trajectories of the dynamical system starting from the initial point $(0, 0, 0)$ and the endpoint $(s\_1, s\_2, s\_3)$. In addition, the distribution $\Delta = \mathrm{span}\{X\_1, X\_2\}$ generated by the vector fields $X\_1, X\_2$ has constant dimension, $\dim \Delta(x) = 2$, for all $x \in \mathbb{R}^3$. Moreover, in the natural basis $\frac{\partial}{\partial x^1}, \frac{\partial}{\partial x^2}, \frac{\partial}{\partial x^3}$ of $\mathbb{R}^3$, the vector fields have the following expressions:

$$X\_1 = \frac{\partial}{\partial \mathbf{x}^1} + \mathbf{x}^2 \frac{\partial}{\partial \mathbf{x}^3}, \quad X\_2 = \frac{\partial}{\partial \mathbf{x}^2} + \mathbf{x}^1 \frac{\partial}{\partial \mathbf{x}^3}.$$

Using the Lie bracket formula [ *f X*, *gY*] = *f g*[*X*,*Y*] + *f X*(*g*)*Y* − *gY*(*f*)*X*, it results in the following:

$$[X\_1, X\_2] = \left[\frac{\partial}{\partial x^1} + \mathbf{x}^2 \frac{\partial}{\partial \mathbf{x}^3}, \frac{\partial}{\partial \mathbf{x}^2} + \mathbf{x}^1 \frac{\partial}{\partial \mathbf{x}^3}\right] = \frac{\partial}{\partial \mathbf{x}^3} - \frac{\partial}{\partial \mathbf{x}^3} = 0,$$

and we obtain that the distribution Δ is involutive. From the Frobenius theorem, it results that the distribution is integrable (holonomic), and it determines a foliation on $\mathbb{R}^3\_+$. Two points can be joined by an optimal trajectory if and only if they are situated on the same leaf. In fact, the economic system is not controllable in the sense that we cannot reach any final stock quantity. Indeed, from the system (17), we obtain:

$$\dot{\mathbf{x}}^3 = \dot{\mathbf{x}}^1 \mathbf{x}^2 + \dot{\mathbf{x}}^2 \mathbf{x}^1,$$

and it results, through integration, that *<sup>x</sup>*<sup>3</sup> <sup>=</sup> *<sup>x</sup>*1*x*<sup>2</sup> <sup>+</sup> *<sup>c</sup>*, *<sup>c</sup>* <sup>∈</sup> <sup>R</sup>, which are the surfaces in <sup>R</sup><sup>3</sup> +, which determine a foliation. Moreover, using *x<sup>i</sup>* (0) = 0, we obtain the relation *x*<sup>3</sup> = *x*1*x*2. From *x<sup>i</sup>* (*T*) = *si*, it results that the problem has a solution (the system is controllable) if and only if the final product amounts satisfy the condition *s*<sup>3</sup> = *s*1*s*2. In order to use the Pontryagin Maximum Principle in the framework of Lie algebroids, we will consider *E* = Δ (holonomic distribution with constant rank), where the anchor *σ* : *E* → *TM* is the inclusion and [·, ·]*<sup>E</sup>* is the induced Lie bracket. In the case of the previous control system (17), the anchor *σ* has the local components given by:

$$
\sigma\_\alpha^i = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \mathbf{x}^2 & \mathbf{x}^1 \end{pmatrix},
$$

and we consider the Lagrangian function given by:

$$\mathcal{L} = F(u(t), \mathbf{x}(t)) = \alpha\_1 (u\_1(t))^2 + \alpha\_2 (u\_2(t))^2 + \beta\_1 \mathbf{x}^1 + \beta\_2 \mathbf{x}^2 + \beta\_3 \mathbf{x}^3.$$

Using the Legendre transformation associated to the regular Lagrangian L, we can find the Hamiltonian H on *E*<sup>∗</sup> in the form:

$$\mathcal{H}(\mu) = \frac{(\mu\_1)^2}{4\alpha\_1} + \frac{(\mu\_2)^2}{4\alpha\_2} - \beta\_1 \mathbf{x}^1 - \beta\_2 \mathbf{x}^2 - \beta\_3 \mathbf{x}^3. \tag{18}$$
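For this quadratic Lagrangian, the Legendre transform can also be carried out mechanically; the following `sympy` sketch solves the stationarity conditions $\mu\_\alpha = \partial \mathcal{L}/\partial u\_\alpha$ and recovers (18):

```python
import sympy as sp

u1, u2, mu1, mu2 = sp.symbols('u1 u2 mu1 mu2')
a1, a2, b1, b2, b3, x1, x2, x3 = sp.symbols(
    'alpha1 alpha2 beta1 beta2 beta3 x1 x2 x3', positive=True)

L = a1 * u1**2 + a2 * u2**2 + b1 * x1 + b2 * x2 + b3 * x3

# Stationarity mu_alpha = dL/du_alpha of mu.u - L(u) in the controls.
sol = sp.solve([sp.Eq(mu1, sp.diff(L, u1)), sp.Eq(mu2, sp.diff(L, u2))],
               [u1, u2])
H = sp.expand((mu1 * u1 + mu2 * u2 - L).subs(sol))

expected = mu1**2 / (4 * a1) + mu2**2 / (4 * a2) - b1 * x1 - b2 * x2 - b3 * x3
assert sp.simplify(H - expected) == 0
```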

Using the relations (6) and (7), we can calculate the Hamiltonian *H* on $T^\*M$ given by $H(x, p) = \mathcal{H}(\mu)$, $\mu = \sigma^\*(p)$, where

$$
\begin{pmatrix} \mu\_1 \\ \mu\_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & \mathbf{x}^2 \\ 0 & 1 & \mathbf{x}^1 \end{pmatrix} \begin{pmatrix} p\_1 \\ p\_2 \\ p\_3 \end{pmatrix}.
$$

We find that $\mu\_1 = p\_1 + p\_3 x^2$, $\mu\_2 = p\_2 + p\_3 x^1$, and it results in the Hamiltonian function in the cotangent bundle:

$$H(\mathbf{x}, p) = \frac{(p\_1 + p\_3 \mathbf{x}^2)^2}{4\alpha\_1} + \frac{(p\_2 + p\_3 \mathbf{x}^1)^2}{4\alpha\_2} - \beta\_1 \mathbf{x}^1 - \beta\_2 \mathbf{x}^2 - \beta\_3 \mathbf{x}^3. \tag{19}$$

However, the Equations (1) on *T*∗*M* with *H*(*x*, *p*) from (19) lead to a complicated system of implicit differential equations. We will use the framework of a Lie algebroid. We will consider two cases:

#### *4.1. The Case $\beta\_1 = \beta\_2 = \beta\_3 = 0$*

From an economic point of view, this means that we have no storage costs, for example, the products are delivered immediately after manufacture.

**Theorem 4.** *The optimal solution of the control system (17) in the case of zero storage costs has the following form for* 0 ≤ *t* ≤ *T:*

$$\mathbf{x}^1(t) = \frac{\mathbf{s}\_1 t}{T}, \quad \mathbf{x}^2(t) = \frac{\mathbf{s}\_2 t}{T}, \quad \mathbf{x}^3(t) = \frac{\mathbf{s}\_1 \mathbf{s}\_2 t^2}{T^2}, \tag{20}$$

*where the production rates (control variables) are positive constants:*

$$
u\_1(t) = \frac{s\_1}{T}, \quad u\_2(t) = \frac{s\_2}{T}. \tag{21}
$$

**Proof.** The Hamiltonian function (18) has, in this case, the form $\mathcal{H}(\mu) = \frac{(\mu\_1)^2}{4\alpha\_1} + \frac{(\mu\_2)^2}{4\alpha\_2}$. From the relation $[X\_\alpha, X\_\beta] = 0$ and Equation (5), we deduce the following:

$$\dot{\mathbf{x}}^1 = \frac{\mu\_1}{2\alpha\_1}, \quad \dot{\mathbf{x}}^2 = \frac{\mu\_2}{2\alpha\_2}, \quad \dot{\mathbf{x}}^3 = \frac{\mu\_1 \mathbf{x}^2}{2\alpha\_1} + \frac{\mu\_2 \mathbf{x}^1}{2\alpha\_2}, \quad \dot{\mu}\_1 = 0, \quad \dot{\mu}\_2 = 0,$$

which leads to $\mu\_1 = a\_1$, $\mu\_2 = a\_2$, where $a\_1, a\_2 \in \mathbb{R}$. It results in $\dot{x}^1 = \frac{a\_1}{2\alpha\_1}$, $\dot{x}^2 = \frac{a\_2}{2\alpha\_2}$. Moreover, $x^1 = \frac{a\_1 t}{2\alpha\_1} + b\_1$, $x^2 = \frac{a\_2 t}{2\alpha\_2} + b\_2$. By using $x^i(0) = 0$, we obtain $x^1(t) = \frac{a\_1 t}{2\alpha\_1}$, $x^2(t) = \frac{a\_2 t}{2\alpha\_2}$ and $x^3(t) = \frac{a\_1 a\_2 t^2}{4\alpha\_1\alpha\_2}$. From $x^i(T) = s\_i$, we obtain $s\_1 = \frac{a\_1 T}{2\alpha\_1}$, $s\_2 = \frac{a\_2 T}{2\alpha\_2}$, which leads to $a\_1 = \frac{2\alpha\_1 s\_1}{T}$, $a\_2 = \frac{2\alpha\_2 s\_2}{T}$, and we finally obtain, for $0 \le t \le T$, the results (20) and (21). The solution is optimal because the Hamiltonian is a convex function, which ends the proof.
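A quick numerical sketch (with sample data $s\_1, s\_2, T$ of our choosing) confirms that the plan (20) and (21) meets the endpoint conditions, the controllability condition $s\_3 = s\_1 s\_2$, and the production law $\dot{x}^3 = u\_1 x^2 + u\_2 x^1$:

```python
import numpy as np

s1, s2, T = 3.0, 5.0, 2.0                  # sample data (ours)
t = np.linspace(0.0, T, 2001)
x1 = s1 * t / T
x2 = s2 * t / T
x3 = s1 * s2 * t**2 / T**2
u1, u2 = s1 / T, s2 / T                    # constant production rates (21)

assert abs(x1[-1] - s1) < 1e-12 and abs(x2[-1] - s2) < 1e-12
assert abs(x3[-1] - s1 * s2) < 1e-12       # controllability: s3 = s1*s2
# xdot^3 = u1*x^2 + u2*x^1 (central differences are exact on quadratics).
xdot3 = np.gradient(x3, t)
assert np.max(np.abs((xdot3 - (u1 * x2 + u2 * x1))[1:-1])) < 1e-8
```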

#### *4.2. The Case $\beta\_1, \beta\_2, \beta\_3 > 0$*

From an economic point of view, this means that the products are stored and delivered at the end of the fixed period.

**Theorem 5.** *The optimal solution of the control system (17) is given by:*

$$\mathbf{x}^1(t) = c\_1 e^{at} + c\_2 e^{-at} + c\_3 \cos at + c\_4 \sin at - \frac{\beta\_2}{\beta\_3}, \tag{22}$$

$$\mathbf{x}^2(t) = \sqrt{\frac{\alpha\_1}{\alpha\_2}} \left( c\_1 e^{at} + c\_2 e^{-at} - c\_3 \cos at - c\_4 \sin at \right) - \frac{\beta\_1}{\beta\_3},\tag{23}$$

$$\mathbf{x}^3(t) = \mathbf{x}^1(t) \mathbf{x}^2(t),$$

*with control variables:*

$$u\_1(t) = \frac{\sqrt{\beta\_3}}{\sqrt{2\sqrt{\alpha\_1 \alpha\_2}}} \left(c\_1 e^{at} - c\_2 e^{-at} - c\_3 \sin at + c\_4 \cos at\right),\tag{24}$$

$$u\_2(t) = \sqrt{\frac{\alpha\_1}{\alpha\_2}} \frac{\sqrt{\beta\_3}}{\sqrt{2\sqrt{\alpha\_1\alpha\_2}}} \left( c\_1 e^{at} - c\_2 e^{-at} + c\_3 \sin at - c\_4 \cos at \right), \tag{25}$$

*where* $a = \frac{\sqrt{\beta\_3}}{\sqrt{2\sqrt{\alpha\_1\alpha\_2}}}$, $\beta = aT$, *and the constant coefficients are:*

$$c\_1 = \frac{1}{e^{\beta} - e^{-\beta}} \left( \frac{s\_1}{2} + \frac{s\_2 \sqrt{\alpha\_2}}{2 \sqrt{\alpha\_1}} + \left( \frac{\beta\_2}{2 \beta\_3} + \frac{\beta\_1 \sqrt{\alpha\_2}}{2 \beta\_3 \sqrt{\alpha\_1}} \right) \left( 1 - e^{-\beta} \right) \right), \tag{26}$$

$$c\_2 = \frac{1}{e^{-\beta} - e^{\beta}} \left( \frac{s\_1}{2} + \frac{s\_2 \sqrt{\alpha\_2}}{2 \sqrt{\alpha\_1}} + \left( \frac{\beta\_2}{2 \beta\_3} + \frac{\beta\_1 \sqrt{\alpha\_2}}{2 \beta\_3 \sqrt{\alpha\_1}} \right) \left( 1 - e^{\beta} \right) \right), \tag{27}$$

$$c\_3 = \frac{\beta\_2}{2\beta\_3} - \frac{\beta\_1 \sqrt{\alpha\_2}}{2\beta\_3 \sqrt{\alpha\_1}},\tag{28}$$

$$c\_4 = \frac{1}{2\sin\beta} \left( s\_1 + \frac{\beta\_2}{\beta\_3} - \frac{\sqrt{\alpha\_2}}{\sqrt{\alpha\_1}} \left( s\_2 + \frac{\beta\_1}{\beta\_3} \right) - \left( \frac{\beta\_2}{\beta\_3} - \frac{\beta\_1}{\beta\_3} \frac{\sqrt{\alpha\_2}}{\sqrt{\alpha\_1}} \right) \cos\beta \right). \tag{29}$$

**Proof.** The necessary conditions for optimality (5) and the Hamiltonian function (18) lead to the following differential equations:

$$\dot{\mathbf{x}}^1 = \frac{\mu\_1}{2\alpha\_1}, \; \dot{\mathbf{x}}^2 = \frac{\mu\_2}{2\alpha\_2}, \; \dot{\mathbf{x}}^3 = \mathbf{x}^2 \frac{\mu\_1}{2\alpha\_1} + \mathbf{x}^1 \frac{\mu\_2}{2\alpha\_2}, \tag{30}$$

$$
\dot{\mu}\_1 = \beta\_1 + \mathbf{x}^2 \beta\_3, \; \dot{\mu}\_2 = \beta\_2 + \mathbf{x}^1 \beta\_3. \tag{31}
$$

From (30) and (31), by differentiation, it results in the following:

$$\ddot{\mathbf{x}}^1 = \frac{\dot{\mu}\_1}{2\alpha\_1} = \frac{\beta\_1}{2\alpha\_1} + \mathbf{x}^2 \frac{\beta\_3}{2\alpha\_1}, \quad \ddot{\mathbf{x}}^2 = \frac{\dot{\mu}\_2}{2\alpha\_2} = \frac{\beta\_2}{2\alpha\_2} + \mathbf{x}^1 \frac{\beta\_3}{2\alpha\_2},\tag{32}$$

which, by differentiating twice more, leads to:

$$\ddddot{\mathbf{x}}^1 = \ddot{\mathbf{x}}^2 \frac{\beta\_3}{2\alpha\_1}, \quad \ddddot{\mathbf{x}}^2 = \ddot{\mathbf{x}}^1 \frac{\beta\_3}{2\alpha\_2}, \tag{33}$$

and together with (32) yields the nonhomogeneous differential equations of order 4, given by:

$$\ddddot{\mathbf{x}}^{1} = \mathbf{x}^{1} \frac{\beta\_{3}^{2}}{4\alpha\_{1}\alpha\_{2}} + \frac{\beta\_{2}\beta\_{3}}{4\alpha\_{1}\alpha\_{2}},\tag{34}$$

$$\ddddot{\mathbf{x}}^2 = \mathbf{x}^2 \frac{\beta\_3^2}{4\alpha\_1\alpha\_2} + \frac{\beta\_1\beta\_3}{4\alpha\_1\alpha\_2}.\tag{35}$$

Considering the homogeneous differential equation from (34)

$$\ddddot{\mathbf{x}}^1 = \mathbf{x}^1 \frac{\beta\_3^2}{4\alpha\_1\alpha\_2},\tag{36}$$

with characteristic equation

$$
\lambda^4 - \frac{\beta\_3^2}{4\alpha\_1\alpha\_2} = 0,
$$

we obtain the following solutions:

$$
\lambda\_{1,2} = \pm \frac{\sqrt{\beta\_3}}{\sqrt{2\sqrt{\alpha\_1 \alpha\_2}}}, \quad
\lambda\_{3,4} = \pm \frac{i\sqrt{\beta\_3}}{\sqrt{2\sqrt{\alpha\_1 \alpha\_2}}}.
$$

Then, we obtain the general solution of the homogeneous differential Equation (36) in the following form:

$$\mathbf{x}^1(t) = c\_1 e^{at} + c\_2 e^{-at} + c\_3 \cos at + c\_4 \sin at,$$

and the solution of the nonhomogeneous differential Equation (34) is given by (22).

In the same way, considering Equations (32) and (35), we obtain (23). Using $u\_1 = \dot{x}^1$, $u\_2 = \dot{x}^2$, we obtain (24) and (25). The initial and final conditions $x^i(0) = 0$, $x^i(T) = s\_i$, $i = 1, 2$ lead to the following linear system:

$$\begin{cases} c\_1 + c\_2 + c\_3 = \frac{\beta\_2}{\beta\_3} \\ c\_1 + c\_2 - c\_3 = \frac{\beta\_1 \sqrt{\alpha\_2}}{\beta\_3 \sqrt{\alpha\_1}} \\ c\_1 e^{\beta} + c\_2 e^{-\beta} + c\_3 \cos \beta + c\_4 \sin \beta = s\_1 + \frac{\beta\_2}{\beta\_3} \\ c\_1 e^{\beta} + c\_2 e^{-\beta} - c\_3 \cos \beta - c\_4 \sin \beta = \frac{\sqrt{\alpha\_2}}{\sqrt{\alpha\_1}} \left( s\_2 + \frac{\beta\_1}{\beta\_3} \right) \end{cases}$$

By direct computation, we obtain the solution (26)–(29). The solution is optimal because the Hamiltonian is convex.
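The constants (26)–(29) come from this linear system; as a numerical sketch with sample data of our choosing, one can solve the system directly and confirm the boundary conditions and the closed form (28):

```python
import numpy as np

# Sample data (ours): alpha_i, beta_i, s_i, T.
a1, a2 = 1.0, 2.0
b1, b2, b3 = 0.5, 0.4, 0.3
s1, s2, T = 3.0, 5.0, 2.0
a = np.sqrt(b3) / np.sqrt(2.0 * np.sqrt(a1 * a2))
beta = a * T

# Linear system for (c1, c2, c3, c4) from x^i(0) = 0, x^i(T) = s_i.
A = np.array([
    [1.0, 1.0, 1.0, 0.0],
    [1.0, 1.0, -1.0, 0.0],
    [np.exp(beta), np.exp(-beta), np.cos(beta), np.sin(beta)],
    [np.exp(beta), np.exp(-beta), -np.cos(beta), -np.sin(beta)],
])
rhs = np.array([
    b2 / b3,
    b1 * np.sqrt(a2) / (b3 * np.sqrt(a1)),
    s1 + b2 / b3,
    np.sqrt(a2 / a1) * (s2 + b1 / b3),
])
c1, c2, c3, c4 = np.linalg.solve(A, rhs)

def x1(t):
    return (c1 * np.exp(a * t) + c2 * np.exp(-a * t)
            + c3 * np.cos(a * t) + c4 * np.sin(a * t) - b2 / b3)

def x2(t):
    return (np.sqrt(a1 / a2) * (c1 * np.exp(a * t) + c2 * np.exp(-a * t)
                                - c3 * np.cos(a * t) - c4 * np.sin(a * t))
            - b1 / b3)

assert abs(x1(0.0)) < 1e-10 and abs(x2(0.0)) < 1e-10
assert abs(x1(T) - s1) < 1e-9 and abs(x2(T) - s2) < 1e-9
# c3 agrees with the closed form (28).
assert abs(c3 - (b2 / (2 * b3) - b1 * np.sqrt(a2) / (2 * b3 * np.sqrt(a1)))) < 1e-10
```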

#### **5. Conclusions**

In this paper, some dynamical systems (distributional systems) are studied using Lie geometric methods. In the case of two driftless control affine systems with holonomic distributions, we proved that the framework of Lie algebroids is more suitable than the cotangent bundle in order to apply the Pontryagin Maximum Principle and find the optimal solutions. This approach significantly simplifies the study and shows once again the intrinsic link between geometry and optimal control, in particular between Lie algebroids and distributional systems with holonomic distributions. In addition, an economic application is given. As further developments, we will try to use the framework of Lie algebroids in the case of nonholonomic distributions (in particular, strong bracket-generating distributions) and characterize the abnormal solutions using the geometry of Lie algebroids.

**Author Contributions:** Conceptualization, L.P.; methodology, L.P.; investigation, L.P., D.M., G.T.; writing, L.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work of PhD student Gabriel Tică was supported by the grant POCU380/6/13/123990, co-financed by the European Social Fund within the Sectorial Operational Programme Human Capital 2014–2020.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

