Article

Time-Optimal Motions of a Mechanical System with Viscous Friction

by
Dmitrii Kamzolkin
and
Vladimir Ternovski
*,†
Department of Computational Mathematics and Cybernetics, Shenzhen MSU-BIT University, International University Park Road 1, Shenzhen 518172, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2024, 12(10), 1485; https://doi.org/10.3390/math12101485
Submission received: 17 April 2024 / Revised: 5 May 2024 / Accepted: 9 May 2024 / Published: 10 May 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract:
Optimal control is a critical tool for mechanical robotic systems, facilitating the precise manipulation of dynamic processes. These processes are described through differential equations governed by a control function, addressing a time-optimal problem with bilinear characteristics. Our study utilizes the classical approach complemented by Pontryagin’s Maximum Principle (PMP) to explore this inverse optimal problem. The objective is to develop an exact piecewise control function that effectively manages trajectory control while considering the effects of viscous friction. Our simulations demonstrate that the proposed control law markedly diminishes oscillations induced by boundary conditions. This research not only aims to delineate the reachability set but also strives to determine the minimal time required for the process. The findings include an exact analytical solution for the stated control problem.

1. Introduction

This paper explores the time-optimal control in systems subject to viscous friction, an aspect vital across various domains including robotics and economic systems. It particularly examines the application of Pontryagin’s Maximum Principle (PMP) to provide a fundamental understanding necessary for mastering time-optimal control strategies [1,2,3,4]. The literature cited includes a broad range of sources from theoretical frameworks to practical applications, highlighting both the complexities and strategies for optimizing control processes to enhance time efficiency.
Incorporating the damping term, denoted as μ > 0, significantly increases the complexity of solving the optimal problem and understanding the dynamics of the system. Despite this complexity, including the damping term is vital for developing methods to experimentally determine modal characteristics, such as eigenmodes, eigenfrequencies, and generalized masses. The cited references [5,6,7] specifically address the behavior of the damped system for computational and, more importantly, for experimental analysis purposes. It is well known that transient simulation of systems with friction requires excessive computational power due to the nonlinear constitutive laws and the high stiffnesses involved. In Ref. [8], the authors proposed control laws for friction dampers which maximize energy dissipation in an instantaneous sense by modulating the normal force at the friction interface. Besides optimization of the mechanical design or various types of passive damping treatments, active structural vibration control concepts are efficient means to reduce unwanted vibrations [9]. The conclusion from this broad survey is that the system model and friction model are fundamentally coupled, and they cannot be chosen independently.
Viscous dampers work by converting mechanical energy from motion (kinetic energy) into heat energy through viscous fluids. As a part of the damping process, they oppose the relative motion through fluid resistance, effectively controlling the speed and motion of connected components. Viscous dampers are essential for managing dynamic systems where control of movement and stability is necessary, making them indispensable in many high-stakes environments like automotive engineering and structural design. These dampers are increasingly sophisticated, incorporating technologies like electrorheological and magnetorheological fluids, which allow for variable stiffness and damping properties. This adaptability enhances their ability to mitigate vibrations across various earthquake intensities [10]. By integrating dampers into the structural design using mathematical models, engineers can significantly improve a building’s ability to absorb and dissipate energy during earthquakes. This includes detailed discussions on the calculation of damping coefficients and their impact on the building’s overall dynamic response to seismic events [11]. It is noteworthy that optimizing this type of damper (friction damper) remains a relatively unexplored subject worldwide, which highlights the innovative nature of our paper and serves as the driving motivation for our research.
Within the sphere of optimal control, the time-varying harmonic oscillator garners particular interest for its ability to reach designated energy levels effectively in the form
\[
\ddot{x}(t) + \mu\,\dot{x}(t) + \omega^2(t)\,x(t) = u(t). \tag{1}
\]
Systems that are linear in their state variables, with a bounded control $u(t)$ on the right-hand side of (1), typically lead to a bang–bang control strategy. The oscillations in such a system differ significantly both from the natural oscillations of a system described by an equation with constant coefficients and from the forced oscillations due to an external force that depends only on time. This approach toggles the system's excitation between two extremes at precisely calculated switching instants, which are essential as they mark the moments of control adjustment. These instants are visually represented by a switching curve within the state space, directing the oscillator's management for any given state combination (position $x(t)$ and velocity $\dot{x}(t)$). An extensive examination of time-optimality for both undamped and damped harmonic oscillators, including simulations that illustrate their practicality, is detailed in references [12,13].
A complex nonlinear system under state feedback control with a time delay, corresponding to two coupled nonlinear oscillators with parametric excitation, is investigated by an asymptotic perturbation method based on Fourier expansion and time rescaling [14]. Given that the present investigation focuses on the optimal control of the coefficient $\omega(t)$ with $u(t) = 0$, the problem assumes a bilinear form. In the field of engineering, particularly in nonlinear dynamics, parametric excitation is used to control vibrations in complex mechanical systems. The pendulum with periodically varying length, which is also treated as a simple model of a child's swing, is investigated in [15]. Numerical simulations were performed in [16] on a double obstacle pendulum system to investigate the effects of various parameters, including the positions and quantities of obstacle pins and the initial release angles, on the pendulum's motion. The pendulum with vertically oscillating support and the pendulum with periodically varying length were considered as two forced dissipative pendulum systems, with a view to drawing comparisons between their behavior [17]. The varying-length pendulum is studied to address the damping of its oscillations using the conveniently generated Coriolis force [18]. By applying the homotopy analysis method to the governing equation of the pendulum, a closed-form approximate solution was obtained [19].
The damping results in prolonged oscillations until equilibrium is achieved. Adjusting the control coefficient $\omega(t)$ can expedite the damping process. Time-optimal control problems, known for their inverse characteristics, are prone to instability [20], which challenges traditional analytical approaches and necessitates regularization of solutions. To complement complex analytical solutions, numerical methods are employed, offering a tangible presentation of results. This research unveils an analytical solution for the control function $\omega(t)$ and the optimal duration of the process across a wide range of parameters. It also introduces bang–bang relay-type controls and defines the system's reachability set.
Moreover, the paper underscores the critical role of time-optimal control in contemporary industrial and technological realms, stressing the urgency for durable solutions where time efficiency is pivotal to the sustainability of robot-technical systems [21].
To summarize, we address the time-optimal control problem, which we aim to solve primarily in analytical form. The problem can, of course, be solved numerically, but doing so over a wide range of parameters is difficult. This research asks whether the time-optimal process exhibits periodicity and whether the control function is symmetric over its period; our findings confirm the former and refute the latter. This focus on the control coefficient $\omega(t)$ opens new avenues for inquiry, both confirming and challenging established assumptions.
The rest of this paper is organized as follows. Section 2 contains the formulation of the optimal control problem. Section 3 presents a preliminary study of the controlled system and reveals some of its properties. Section 4 examines the local properties of the problem and applies the maximum principle (PMP) to a single semi-oscillation. Section 5 establishes the global properties of the optimal solution. Section 6 presents the main result of the study, a step-by-step optimization algorithm for solving the problem, together with numerical examples and a discussion of the results obtained. Section 7 concludes the paper.

2. Optimal Control Problem Statement

Let us consider the optimal control problem of a mechanical system:
\[
\begin{aligned}
&\ddot{x}(t) + \mu\,\dot{x}(t) + \omega^2(t)\,x(t) = 0,\\
&x(0) = A,\quad \dot{x}(0) = 0,\quad A > 0,\\
&x(T) = B,\quad \dot{x}(T) = 0,\quad B \neq 0,\\
&\omega_0 \le \omega(t) \le 1,\quad t \in [0,T],\quad 0 < \omega_0 < 1,\\
&T \to \min_{\omega(t)},
\end{aligned} \tag{2}
\]
where $x(t)$ is the coordinate and $\omega(t)$ is the unknown frequency of the external controlling action, which is to be determined. The minimum in the problem is sought in the class of piecewise-continuous functions $\omega(t)$. Here, $\mu$ is the coefficient of viscous friction, with $0 < \mu < 2\omega_0$. If this condition is violated, a similar analysis is possible, but we have not pursued it, as we believe this case is of little technical interest. The case $A < 0$ reduces to the one considered by a change of sign of the variable $x(t)$.
In this setting, the problem is not symmetric with respect to time inversion because of friction.
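As a purely illustrative companion to the statement of problem (2), the system can be integrated numerically for any admissible control. The sketch below uses a classical fixed-step RK4 scheme; the function name `simulate` and the solver settings are our assumptions, not part of the paper.

```python
import math

def simulate(omega_fn, mu, x0, v0, T, n=20000):
    """Integrate x'' + mu*x' + omega(t)^2 * x = 0 by classical RK4.

    omega_fn: a piecewise-continuous control omega(t) (hypothetical
    helper; the paper treats omega as the control to be determined).
    Returns the endpoint (x(T), x'(T)).
    """
    h = T / n

    def f(t, x, v):
        w = omega_fn(t)
        return v, -mu * v - w * w * x

    t, x, v = 0.0, x0, v0
    for _ in range(n):
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + h/2, x + h/2 * k1x, v + h/2 * k1v)
        k3x, k3v = f(t + h/2, x + h/2 * k2x, v + h/2 * k2v)
        k4x, k4v = f(t + h, x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += h / 6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return x, v
```

For the constant control $\omega \equiv 1$, the output can be checked against the explicit damped solution $x(t) = e^{-\mu t/2}\bigl(\cos\beta_1 t + \tfrac{\mu}{2\beta_1}\sin\beta_1 t\bigr)$.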

3. General Properties of the Controlled System (2)

With any permissible control $\omega(t)$, the trajectory $x(t)$ of the controlled system in (2) oscillates around zero with successive intervals of monotonic increase and decrease (Figure 1). The amplitude and duration of each oscillation can vary, depending on the chosen control function $\omega(t)$ (typically discontinuous). Indeed, if the conditions $\dot{x}(t^*) = 0$ and $x(t^*) \neq 0$ are satisfied at some moment $t^* \in [0,T]$, the differential equation of problem (2) gives $\ddot{x}(t) = -\mu\,\dot{x}(t) - \omega^2(t)\,x(t)$. Since the functions $x(t)$ and $\dot{x}(t)$ are continuous, the sign of the second derivative is opposite to the sign of $x(t)$ in a small vicinity of the point $t^*$, except, possibly, at a finite number of discontinuity points of the function $\omega(t)$. This implies that, for $x(t^*) > 0$, the trajectory has a local maximum at $t^*$, and, for $x(t^*) < 0$, a local minimum.
From the boundary conditions, the velocities $\dot{x}(0)$ and $\dot{x}(T)$ at the initial and final moments of time equal zero, a situation that occurs only at the extreme points of the oscillatory process. These moments are denoted $t_i$ (Figure 1), and the time intervals $t \in [t_i, t_{i+1}]$ are referred to as semi-oscillations. It follows that the optimal trajectory comprises a whole number $N$ of semi-oscillations, with $N$ even when $B > 0$ and odd when $B < 0$ (recall that $A > 0$).
To investigate the full optimal control problem, we divide the trajectory into separate semi-oscillations and first solve the problem for one semi-oscillation $t \in [t_i, t_{i+1}]$. We denote $A_i = x(t_i)$ ($A_0 = A$, $A_N = B$). This leads to the following $N$ subproblems for $i = 0, \dots, N-1$:
\[
\begin{aligned}
&\ddot{x}(t) + \mu\,\dot{x}(t) + \omega^2(t)\,x(t) = 0,\quad t \in [t_i, t_{i+1}],\\
&x(t_i) = A_i,\quad \dot{x}(t_i) = 0,\\
&x(t_{i+1}) = A_{i+1},\quad \dot{x}(t_{i+1}) = 0,\\
&t_{i+1} - t_i \to \min_{\omega(t)}.
\end{aligned} \tag{3}
\]
Utilizing the linearity and homogeneity of the differential equation allows for the normalization of the variable x ( t ) by dividing it by its initial value A i . It is also taken into account that the coefficient of friction μ is independent of time t, meaning the initial moment in time can be considered as zero. This approach transforms all subproblems (3) for i = 0 , , N 1 into a unified auxiliary mini-problem of optimal control:
\[
\begin{aligned}
&\ddot{x}_i(t) + \mu\,\dot{x}_i(t) + \omega_i^2(t)\,x_i(t) = 0,\quad t \in [0, T_i],\\
&x_i(0) = 1,\quad \dot{x}_i(0) = 0,\\
&x_i(T_i) = A_{i+1}/A_i,\quad \dot{x}_i(T_i) = 0,\\
&T_i \to \min_{\omega_i(t)}.
\end{aligned} \tag{4}
\]
Given problem (2) and knowing the numbers $t_i$ and $A_i$, the optimal time in problem (4) is exactly $T_i = t_{i+1} - t_i$, and the optimal trajectories and control in the auxiliary problem (4) coincide with the optimal trajectories and control in problem (2) over the interval $[t_i, t_{i+1}]$ [1]. It will be demonstrated below that the optimal process decomposes into individual equal time intervals, calculated by analytical formulas.
Furthermore, for convenience in solving (4), instead of x i and ω i , the notations x and ω will be used.

4. Solution of the Optimal Control Problem for a Single Semi-Oscillation

In the previous section, it was demonstrated how to resolve the initial problem (2) by first solving an auxiliary problem:
\[
\begin{aligned}
&\ddot{x}(t) + \mu\,\dot{x}(t) + \omega^2(t)\,x(t) = 0,\\
&x(0) = 1,\quad \dot{x}(0) = 0,\\
&x(T) = C < 0,\quad \dot{x}(T) = 0,\\
&\dot{x}(t) < 0,\quad t \in (0, T),\\
&T \to \min_{\omega(t)},
\end{aligned} \tag{5}
\]
and find the dependency of the optimal time T on the terminal value C.
Here, the condition x ˙ ( t ) < 0 denotes the monotonicity of the trajectory x ( t ) , which corresponds to one semi-oscillation.
First, the question of controllability will be examined, and the range of values for C for which problem (5) has a solution will be defined.
The following notations will be introduced:
\[
\beta_1 = \sqrt{1 - \frac{\mu^2}{4}}, \qquad \beta_2 = \sqrt{\omega_0^2 - \frac{\mu^2}{4}},
\]
\[
\varphi_1 = \arctan\frac{2\beta_1}{\mu}, \qquad \varphi_2 = \arctan\frac{2\beta_2}{\mu}.
\]
The largest terminal amplitude $x_{\max} = |x(T)|$ is attained with the control
\[
\omega(t) = \begin{cases} 1, & x(t) > 0,\\ \omega_0, & x(t) \le 0, \end{cases} \tag{6}
\]
because, under this control, the restoring force (and hence the gain in speed) is maximal while $x(t) > 0$, and the deceleration is minimal while $x(t) \le 0$.
Similarly, the smallest value $x_{\min} = |x(T)|$ is reached with the control
\[
\omega(t) = \begin{cases} \omega_0, & x(t) > 0,\\ 1, & x(t) \le 0. \end{cases} \tag{7}
\]
Solving the differential equation with the boundary conditions from system (5) and with control (6) or (7), the following is obtained:
\[
x_{\min} \le |x(T)| \le x_{\max}, \tag{8}
\]
where
\[
x_{\min} = \omega_0\, e^{-\frac{\mu}{2}\left(\frac{\pi - \varphi_2}{\beta_2} + \frac{\varphi_1}{\beta_1}\right)}, \qquad
x_{\max} = \frac{1}{\omega_0}\, e^{-\frac{\mu}{2}\left(\frac{\pi - \varphi_1}{\beta_1} + \frac{\varphi_2}{\beta_2}\right)}. \tag{9}
\]
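For concreteness, the bounds (9) can be evaluated numerically. A minimal sketch follows (the function name `reach_bounds` is ours); for the parameters used in Example 1 below ($\mu = 0.1$, $\omega_0 = 0.5$) it reproduces $x_{\min} \approx 0.39$ and $x_{\max} \approx 1.59$.

```python
import math

def reach_bounds(mu, omega0):
    """Extreme amplitude ratios over one semi-oscillation, formula (9).

    Assumes 0 < mu < 2*omega0 and omega0 < 1, as in problem (2).
    """
    b1 = math.sqrt(1 - mu**2 / 4)
    b2 = math.sqrt(omega0**2 - mu**2 / 4)
    p1 = math.atan(2 * b1 / mu)
    p2 = math.atan(2 * b2 / mu)
    x_min = omega0 * math.exp(-mu / 2 * ((math.pi - p2) / b2 + p1 / b1))
    x_max = (1 / omega0) * math.exp(-mu / 2 * ((math.pi - p1) / b1 + p2 / b2))
    return x_min, x_max
```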
To apply PMP [1], we introduce the notation x ˙ ( t ) = v ( t ) and rewrite (5) in the form of a system of first-order differential equations:
\[
\begin{aligned}
&\dot{x}(t) = v(t), \qquad \dot{v}(t) = -\mu\,v(t) - \omega^2(t)\,x(t),\\
&x(0) = 1,\quad v(0) = 0,\quad x(T) = C < 0,\quad v(T) = 0,\\
&v(t) < 0,\quad t \in (0, T), \qquad T \to \min_{\omega(t)}.
\end{aligned} \tag{10}
\]
Now, we let the terminal value C satisfy condition (8), which ensures the controllability of the system.
We write the Pontryagin function:
\[
H(\psi_1, \psi_2, x, v, \omega) = \psi_1\,v - \psi_2\left(\mu\,v + \omega^2 x\right)
\]
and denote its least upper bound:
\[
M(\psi_1, \psi_2, x, v) = \sup_{\omega \in [\omega_0, 1]} H(\psi_1, \psi_2, x, v, \omega).
\]
If x ( t ) , v ( t ) , and ω ( t ) constitute a solution to the optimal control problem (10), then the following three conditions are satisfied:
(I)
There exist continuous functions ψ 1 ( t ) and ψ 2 ( t ) , which never simultaneously become zero and are solutions to the adjoint system:
\[
\dot{\psi}_1(t) = -\frac{\partial H}{\partial x} = \psi_2(t)\,\omega^2(t), \qquad
\dot{\psi}_2(t) = -\frac{\partial H}{\partial v} = -\psi_1(t) + \mu\,\psi_2(t). \tag{11}
\]
(II)
For any t [ 0 , T ] , the maximum condition is satisfied:
\[
H(\psi_1(t), \psi_2(t), x(t), v(t), \omega(t)) = M(\psi_1(t), \psi_2(t), x(t), v(t)). \tag{12}
\]
(III)
For any t [ 0 , T ] , a specific inequality occurs:
\[
M(\psi_1(t), \psi_2(t), x(t), v(t)) \ge 0.
\]
From condition (12) for the maximum of the function H, the optimal control is obtained in the form
\[
\omega(t) = \begin{cases} 1, & \psi_2(t)\,x(t) < 0,\\ \omega_0, & \psi_2(t)\,x(t) > 0,\\ \text{undetermined}, & \psi_2(t)\,x(t) \equiv 0. \end{cases} \tag{13}
\]
Let us show that the singular case in Formula (13), namely $\psi_2(t)\,x(t) \equiv 0$ over an interval of non-zero length, is impossible. Suppose the opposite: there exists a time interval on which $\psi_2(t)\,x(t) \equiv 0$. On such an interval, the value of the optimal control could not be determined from the maximum condition.
Given the continuity of the functions $\psi_2(t)$ and $x(t)$, either $\psi_2(t) \equiv 0$ on some subinterval or $x(t) \equiv 0$ on some subinterval.
If $\psi_2(t) \equiv 0$, then $\dot{\psi}_2(t) \equiv 0$ as well. By the second equation of the adjoint system (11), this implies $\psi_1(t) \equiv 0$, contradicting condition (I) of the maximum principle.
If instead $x(t) \equiv 0$, then $v(t) = \dot{x}(t) \equiv 0$. Such a case is impossible, as the controlled system cannot remain at the zero state under any control value: the term of the system's differential Equation (5) that includes the control would also vanish, while the trajectory starts from $x(0) = 1$.
This reasoning leads to the formulation of a statement:
Lemma 1. 
Optimal control $\omega(t)$ takes only the two values $1$ and $\omega_0$, dictated by the sign of the product $\psi_2(t)\,x(t)$. Disregarding the case where this product equals zero is justified by the fact that the control value at a single point, or at a finite number of points, has no impact on the trajectory of the controlled system.
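In code, the two-valued law of Lemma 1 (Formula (13)) is a one-liner; a sketch (the function name is ours, and the value returned when $\psi_2 x = 0$ is immaterial, since isolated points do not affect the trajectory):

```python
def omega_opt(psi2, x, omega0):
    """Bang-bang law (13): the control depends only on sign(psi2 * x).

    At isolated zeros of psi2*x any admissible value may be returned;
    here omega0 is chosen arbitrarily (see Lemma 1).
    """
    return 1.0 if psi2 * x < 0 else omega0
```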
Now, we consider condition (III). It is of greatest interest at $t = 0$ and $t = T$.
At t = 0 , the condition is expressed as
\[
M(\psi_1(0), \psi_2(0), x(0), v(0)) = H(\psi_1(0), \psi_2(0), x(0), v(0), \omega(0)) = \psi_1(0)\,v(0) - \psi_2(0)\left(\mu\,v(0) + \omega^2(0)\,x(0)\right) \ge 0. \tag{14}
\]
At t = T , the condition becomes
\[
M(\psi_1(T), \psi_2(T), x(T), v(T)) = H(\psi_1(T), \psi_2(T), x(T), v(T), \omega(T)) = \psi_1(T)\,v(T) - \psi_2(T)\left(\mu\,v(T) + \omega^2(T)\,x(T)\right) \ge 0. \tag{15}
\]
Given the boundary conditions that v ( 0 ) = v ( T ) = 0 , and considering that the control value ω ( t ) is always positive, with x ( 0 ) > 0 and x ( T ) < 0 , the following additional conditions are derived from (14) and (15)
\[
\psi_2(0) \le 0, \qquad \psi_2(T) \ge 0. \tag{16}
\]
Now, we explore the potential form of optimal control and the number of switches. It is already known that the value of optimal control is determined by the sign of the product ψ 2 ( t ) x ( t ) .
The trajectory x ( t ) , due to its monotonic nature, crosses zero only once. This moment in time is denoted as τ .
Thus, control ω ( t ) may only change its value at the point τ and at points where the sign of the adjoint variable ψ 2 ( t ) changes. If at point τ , both x ( t ) and ψ 2 ( t ) change their signs simultaneously, then the control value remains unchanged.
Firstly, consider an interval of time where control $\omega(t) \equiv 1$. Then the general solutions $x(t)$ of the differential equation from system (10) and $\psi_2(t)$ of the adjoint system (11) take the form
\[
x(t) = e^{-\frac{\mu}{2}t}\, C_1 \sin(\beta_1 t + C_2), \qquad \psi_2(t) = e^{\frac{\mu}{2}t}\, C_3 \sin(\beta_1 t + C_4), \tag{17}
\]
where constants C 1 , C 2 , C 3 , and C 4 must be determined from the boundary conditions on the interval of constant control. The value of the adjoint variable ψ 1 ( t ) is not of interest, as it does not enter into Formula (13).
Now, we consider an interval of time during which control $\omega(t) \equiv \omega_0$. Similarly, it is obtained that
\[
x(t) = e^{-\frac{\mu}{2}t}\, D_1 \sin(\beta_2 t + D_2), \qquad \psi_2(t) = e^{\frac{\mu}{2}t}\, D_3 \sin(\beta_2 t + D_4), \tag{18}
\]
where constants D 1 , D 2 , D 3 , and D 4 are also to be determined from the boundary conditions.
We now show that the adjoint variable $\psi_2(t)$ vanishes at most once within the interval $[0, \tau]$, and at most once within $[\tau, T]$. Suppose, for instance, that $\psi_2(\xi_1) = \psi_2(\xi_2) = 0$, where $0 \le \xi_1 < \xi_2 \le \tau$. Then, within the interval $[\xi_1, \xi_2]$, the control value does not change, which contradicts Formulas (17) and (18): the distance between zeros of the function $\psi_2(t)$ (for example, $\pi/\beta_1$ for Formula (17)) exceeds the maximum length of an interval on which the function $x(t)$ keeps a constant sign and is monotone (for example, $(\pi - \varphi_1)/\beta_1$ or $\varphi_1/\beta_1$).
Thus, it is proven that
Lemma 2. 
In problem (5), the optimal control has at most one switch in each of the intervals $[0, \tau]$ and $[\tau, T]$.
The function ψ 2 ( t ) has a continuous derivative (as the right-hand side of the second equation of the adjoint system (11) is continuous) and turns to zero no more than twice within the interval [ 0 , T ] . Moreover, these zeros cannot both lie within the same subinterval [ 0 , τ ] or [ τ , T ] . This leads to 10 different cases (Figure 2) of sign changes for the function ψ 2 ( t ) over the interval [ 0 , T ] . Dashed gray lines on the graph indicate scenarios that contradict the PMP, while solid red lines indicate cases with no contradiction with PMP found. A detailed analysis of these cases is provided.
If $\psi_2(\tau) = 0$, then $\psi_2(t) \neq 0$ for $t \neq \tau$, leading to cases (1) and (2). In case (1), the constant control equal to $1$ is maintained throughout the entire time interval. Case (2) is not possible, as $\psi_2(T) < 0$ there, which does not satisfy condition (16).
If $\psi_2(t)$ vanishes twice within the interval $(0, T)$, there exist $\xi_1 \in (0, \tau)$ and $\xi_2 \in (\tau, T)$ such that $\psi_2(\xi_1) = 0$ and $\psi_2(\xi_2) = 0$, leading to cases (3) and (4). These cases contradict condition (16), since $\psi_2(0)$ and $\psi_2(T)$ have the same sign there.
If $\psi_2(t)$ vanishes once at a point $\xi \in (0, \tau)$ and is non-zero within the interval $(\tau, T)$, cases (5) and (6) are obtained. Case (5) is impossible because $\psi_2(0) > 0$ there.
If $\psi_2(t)$ vanishes once at a point $\xi \in (\tau, T)$ and is non-zero within the interval $(0, \tau)$, cases (7) and (8) emerge. Case (8) is not feasible, as $\psi_2(T) < 0$ there.
Finally, if $\psi_2(t)$ does not vanish within the interval $(0, T)$, cases (9) and (10) remain. Case (9) is possible only if $\psi_2(0) = 0$, and case (10) only if $\psi_2(T) = 0$.
After analyzing cases (1–10), it is determined that the following statement holds:
Lemma 3. 
Optimal control (bang–bang) in problem (5) can be one of the five types represented in Figure 3.
It is noted that all types of control satisfying the maximum principle (illustrated in Figure 3) differ in the length of the segment where the control value equals $\omega_0$, and in its placement relative to the point $\tau$.
Introducing the parameter $s = \xi - \tau$, the values of $\tau$ and $T$ can be uniquely determined from the equation and three of the boundary conditions (excluding the condition $x(T) = C$) of problem (5) by substituting the corresponding control. This yields the end time $T(s)$ and the terminal value $C(s) = x(T(s))$ as functions of the unknown parameter $s$.
Control type 3 (illustrated in Figure 3) corresponds to $s = 0$. For control type 1, the control equals $\omega_0$ from the very start, so that, by Formula (18), the zero crossing occurs at $\tau = (\pi - \varphi_2)/\beta_2$, giving the smallest value $s_{\min} = -(\pi - \varphi_2)/\beta_2 < 0$. For control type 5, the largest value $s_{\max} = \varphi_2/\beta_2$ is the longest possible duration of motion under constant control $\omega(t) \equiv \omega_0$ after the zero crossing, that is, $s_{\max} = T - \tau$, with the moments $\tau$ and $T$ derived from Formula (18) and the conditions $x(\tau) = 0$, $\dot{x}(T) = 0$, $x(T) < 0$, $T > \tau$, taking the first such extremum (the smallest $T - \tau$). Controls of types 2 and 4 correspond to intermediate values of $s$ within the intervals $(s_{\min}, 0)$ and $(0, s_{\max})$, respectively.
Knowing the switching moment of control and having an analytical solution (Formulas (17) and (18)), the end time T and the terminal trajectory value x ( T ) can be explicitly calculated as functions of the parameter s.
Let us consider $s \in (0, \varphi_2/\beta_2]$; then, for $t \in [0, \tau)$, $\omega(t) \equiv 1$, and from Formula (17) and the initial conditions $x(0) = 1$, $\dot{x}(0) = 0$ it is found that $C_1 = 1/\beta_1$, $C_2 = \varphi_1$, leading to $x(t) = \frac{1}{\beta_1} e^{-\frac{\mu}{2}t} \sin(\beta_1 t + \varphi_1)$ and $\tau = (\pi - \varphi_1)/\beta_1$.
Subsequently, for $t \in [\tau, \tau + s)$, $\omega(t) \equiv \omega_0$, and from Formula (18) and the continuity of $x(t)$ and $\dot{x}(t)$ at $t = \tau$ one similarly obtains $x(t) = -\frac{1}{\beta_2} e^{-\frac{\mu}{2}t} \sin(\beta_2 (t - \tau))$.
Finally, for $t \in [\tau + s, T]$, $\omega(t) \equiv 1$, and from Formula (17) and the continuity of $x(t)$ and $\dot{x}(t)$ at $t = \tau + s$, it is found that
\[
x(t) = -e^{-\frac{\mu}{2}t}\, \frac{\sin(\beta_2 s)\, \sin\!\big(\beta_1 (t - \tau - s) + \varphi_3\big)}{\beta_2 \sin \varphi_3}, \tag{19}
\]
where $\varphi_3 = \arctan\!\left(\frac{\beta_1}{\beta_2} \tan(\beta_2 s)\right)$.
From Formula (19) and the condition x ˙ ( T ) = 0 , the end moment of time is obtained as follows:
\[
T(s) = \frac{\pi - \varphi_3}{\beta_1} + s. \tag{20}
\]
Simplifying Expressions (19) and (20), for $s \in [0, s_{\max}]$ one ultimately obtains
\[
T(s) = \frac{1}{\beta_1}\left(\pi - \arctan\!\left(\frac{\beta_1}{\beta_2} \tan(\beta_2 s)\right)\right) + s, \qquad
C(s) = x(T(s)) = -\frac{1}{\beta_2}\, e^{-\frac{\mu}{2} T(s)} \sqrt{\beta_2^2 \cos^2(\beta_2 s) + \beta_1^2 \sin^2(\beta_2 s)}. \tag{21}
\]
Conducting analogous calculations for the case $s \in [s_{\min}, 0)$, one obtains
\[
T(s) = \frac{1}{\beta_1}\left(\frac{\pi}{2} - \arctan\!\left(\frac{\beta_2}{\beta_1} \cot(\beta_2 s)\right)\right) - s, \qquad
C(s) = -\frac{\beta_2\, e^{-\frac{\mu}{2} T(s)}}{\sqrt{\beta_2^2 \cos^2(\beta_2 s) + \beta_1^2 \sin^2(\beta_2 s)}}. \tag{22}
\]
We note that Formulas (21) and (22) parametrically define a curve $T(C)$ depicting the dependence of the end time on the terminal value $C$ when using controls that satisfy the maximum principle. The parametric form allows the first two derivatives of $T(C)$ to be calculated as functions of the variable $C$. Thus, the following properties of the function $T(C)$ are established.
Lemma 4
Formulas (21) and (22):
1.
Uniquely determine the function $T(C)$, defined for $C \in [-x_{\max}, -x_{\min}]$.
2.
The function $T(C)$ is continuous for $C \in [-x_{\max}, -x_{\min}]$.
3.
The function $T(C)$ is differentiable for $C \in (-x_{\max}, -x_{\min})$. At the endpoints of the interval, the derivative is infinite, while at the point corresponding to the parameter $s = 0$ the derivative equals zero. We denote $x_* = C(0)$.
4.
The function $T(C)$ decreases on the interval $C \in [-x_{\max}, x_*]$ and increases on the interval $C \in [x_*, -x_{\min}]$.
5.
The second derivative of the function $T(C)$ is positive on the intervals $C \in (-x_{\max}, x_*) \cup (x_*, -x_{\min})$. This condition signifies that the function $T(C)$ is convex (concave up) on each of the two branches.
Remark 1. 
It is important to note that the constancy of the sign of the second derivative was established via symbolic computations in Wolfram Mathematica.
Investigating the properties of the function $T(C)$, it was found that each permissible terminal value $C$ corresponds to a unique control satisfying the PMP. Therefore, the following statement holds.
Lemma 5. 
The function T ( C ) , defined by formulas (21) and (22), determines the optimal time in problem (5).

Example 1

Consider an example with parameters $\mu = 0.1$, $\omega_0 = 0.5$. It is calculated that $x_{\min} \approx 0.39$ and $x_{\max} \approx 1.59$. From (21), it is found that $x_* = C(0) \approx -0.85$. Figure 4 illustrates the graph of the function $T(C)$, demonstrating how the optimal time varies with the terminal value $C$ within the specified range.
It has been demonstrated that each value of s unequivocally corresponds to a specific optimal control and an optimal trajectory, leading to a particular terminal point C ( s ) . Different optimal trajectories, corresponding to various types of controls, are presented in Figure 5. Controls of types 1 and 5 correspond to trajectories reaching the extreme points of the reachability set. Control of type 2 corresponds to the upper branch of the T ( C ) curve (left branch on Figure 4). Control of type 3, which has no switches, corresponds to the trajectory with the minimum possible time. Control of type 4 corresponds to the lower branch of the T ( C ) curve (right branch on Figure 4).
Trajectories are constructed for the same parameter values as in Figure 4, but the general character of the picture does not change for other parameter values.
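The parametric pair (21)–(22) is straightforward to evaluate numerically; the sketch below (our naming) returns $(T(s), C(s))$ and can be used to reproduce the curve of Figure 4 for the parameters of Example 1.

```python
import math

def TC(s, mu, omega0):
    """End time T(s) and terminal value C(s) of one semi-oscillation,
    per formula (21) for s >= 0 and formula (22) for s < 0."""
    b1 = math.sqrt(1 - mu**2 / 4)
    b2 = math.sqrt(omega0**2 - mu**2 / 4)
    r = math.sqrt(b2**2 * math.cos(b2 * s)**2 + b1**2 * math.sin(b2 * s)**2)
    if s >= 0:
        T = (math.pi - math.atan(b1 / b2 * math.tan(b2 * s))) / b1 + s
        C = -math.exp(-mu * T / 2) * r / b2
    else:
        # cot(b2*s) = 1 / tan(b2*s)
        T = (math.pi / 2 - math.atan(b2 / b1 / math.tan(b2 * s))) / b1 - s
        C = -b2 * math.exp(-mu * T / 2) / r
    return T, C
```

Both branches agree at $s = 0$, where $T = \pi/\beta_1$ and $C = -e^{-\mu\pi/(2\beta_1)} \approx -0.85$ for the parameters of Example 1; at $s = s_{\max}$ the extreme value $|C| \approx 1.59 = x_{\max}$ is recovered.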

5. Solution to the General Time-Optimal Problem

We apply the results of the previous section to solve the original problem (2). Let us first explore the question of controllability and determine under what boundary conditions A and B the system is controllable.
Using the estimate (8), we obtain an estimate for x ( T ) depending on the number of semi-oscillations N:
\[
A\, x_{\min}^N \le |x(T)| \le A\, x_{\max}^N. \tag{23}
\]
Thus, the lemma is the following:
Lemma 6. 
The system is controllable if and only if there exists an even natural number N (for B > 0 ) or an odd natural number N (for B < 0 ), such that
\[
\frac{|B|}{A} \in \left[x_{\min}^N,\ x_{\max}^N\right]. \tag{24}
\]
Since $\varphi_2 \in (0, \pi/2)$ and $\omega_0 < 1$, it follows that $x_{\min} < 1$, and $x_{\min}^N \to 0$ as $N \to +\infty$.
Therefore, the system will be controllable for any non-zero values of A and B provided that
\[
x_{\max} > 1. \tag{25}
\]
Utilizing Formula (9), this inequality can be expressed as follows:
\[
\frac{\pi - \arctan\dfrac{\sqrt{4 - \mu^2}}{\mu}}{\dfrac{\sqrt{4 - \mu^2}}{\mu}} + \frac{\arctan\dfrac{\sqrt{4\omega_0^2 - \mu^2}}{\mu}}{\dfrac{\sqrt{4\omega_0^2 - \mu^2}}{\mu}} < -\ln \omega_0.
\]
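This expanded form of condition (25) is easy to check numerically; a minimal sketch (the function name is ours):

```python
import math

def globally_controllable(mu, omega0):
    """Checks x_max > 1 via the expanded form of condition (25):
    the system is controllable for any A > 0, B != 0 if this holds."""
    a = math.sqrt(4 - mu**2)              # equals 2*beta_1
    b = math.sqrt(4 * omega0**2 - mu**2)  # equals 2*beta_2
    lhs = (math.pi - math.atan(a / mu)) / (a / mu) + math.atan(b / mu) / (b / mu)
    return lhs < -math.log(omega0)
```

For $\mu = 0.1$, $\omega_0 = 0.5$ (the parameters of Example 1) the condition holds, consistent with $x_{\max} \approx 1.59 > 1$; for strong friction, e.g. $\mu = 0.9$ with the same $\omega_0$, it fails.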
Having resolved the question of controllability, we now return to the original optimal control problem (2). Let $x(t)$ be the solution of problem (2), and consider two consecutive semi-oscillations $t \in [t_{i-1}, t_{i+1}]$. This segment of the optimal trajectory satisfies the boundary conditions of the original problem and must itself be optimal. Using the results of the previous section and normalizing the variable $x(t)$, the time for this segment can be expressed by the formula
\[
t_{i+1} - t_{i-1} = T\!\left(\frac{x(t_{i+1})}{x(t_i)}\right) + T\!\left(\frac{x(t_i)}{x(t_{i-1})}\right) = T\!\left(\frac{A_{i+1}}{A_i}\right) + T\!\left(\frac{A_i}{A_{i-1}}\right).
\]
We fix $A_{i+1}$ and $A_{i-1}$ (noting that they have the same sign) and minimize the last expression over the variable $A_i$. Denoting $D = A_{i+1}/A_{i-1}$ and introducing the new variable $q = A_i/A_{i-1}$, the time $t_{i+1} - t_{i-1}$ can be expressed by the function
\[
g(q) = t_{i+1} - t_{i-1} = T(q) + T\!\left(\frac{D}{q}\right),
\]
where $T(C)$ is parametrically defined by Formulas (21) and (22); the values $q$ and $D/q$ are assumed to belong to the domain of definition of the function $T(C)$. We find the first derivative of the function $g(q)$:
g ( q ) = T ( q ) T D q D q 2 .
It is easy to see that this derivative vanishes at the point $q_* = \sqrt{D}$. Let us compute the second derivative at the point $q_*$:
g ( q * ) = T ( q * ) + T D q * D 2 q * 4 + T D q * 2 D q * 3 = 2 T ( D ) T D 2 D .
On the branch of the domain under consideration, the function $T(C)$ is monotone and convex; expression (26) is then positive, and the point found is a point of minimum. At the boundary points of the domain of definition, the function $T(C)$ is not differentiable, but in this case there exists a unique control (either (6) or (7)) taking the controlled system to its extreme position. For the remaining values of $D$, the positivity of expression (26) follows from lengthy algebraic manipulations using the parametric representation of the function $T(C)$ given by Formulas (21) and (22).
We have shown that the numbers $A_{i-1}$, $A_i$, and $A_{i+1}$ form a geometric progression. Applying this reasoning along the entire trajectory, we obtain the following statement:
Lemma 7. 
The numbers A i for the optimal process satisfy the condition
$$A_{i}=A\left(\sqrt[N]{\frac{|B|}{A}}\right)^{i},\qquad i=0,\ldots,N,$$
where the number of semi-oscillations is determined as the smallest N satisfying Lemma 6.
Since the ratio $A_{i+1}/A_i$ is constant along the optimal trajectory, the optimal control on each segment $[t_i, t_{i+1}]$ is the same. Hence, if more than one semi-oscillation is required to reach the end point, the optimal control is a periodic function whose period is one semi-oscillation.
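The amplitude sequence of Lemma 7 can be generated directly. The following sketch (with our own function name) builds it from the boundary amplitudes $A$, $|B|$ and the semi-oscillation count $N$:

```python
def amplitudes(A, B, N):
    """Amplitude sequence of Lemma 7: A_i = A * (|B|/A)**(i/N),
    a geometric progression from A_0 = A down to A_N = |B|."""
    ratio = (abs(B) / A) ** (1.0 / N)   # common ratio C* per semi-oscillation
    return [A * ratio**i for i in range(N + 1)]

seq = amplitudes(A=1.0, B=0.25, N=3)    # boundary data of Example 2 below
print([round(a, 3) for a in seq])       # [1.0, 0.63, 0.397, 0.25]
```

Note that the ratio of consecutive terms is constant, which is exactly why the optimal control repeats from one semi-oscillation to the next.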

6. Main Result

Thus, necessary and sufficient conditions have been determined for the optimal control problem to have a solution. The controlled process is oscillatory in nature, and a formula has been obtained for the number of semi-oscillations. It has been proven that the amplitudes of the semi-oscillations form a geometric progression. We now combine all the statements proven above (Lemmas 1–7) into a step-by-step algorithm for solving the original problem (2).
  • Determine whether the problem has a solution and find the number of semi-oscillations $N$ from condition (24). Note that the problem has a solution for any $A > 0$ and $B \neq 0$ if condition (25) is satisfied.
  • Calculate the common ratio of the geometric progression, $C^{*} = A_{i+1}/A_{i}$, using the formula
    $$C^{*}=\sqrt[N]{\frac{|B|}{A}}.$$
    This value determines how much the amplitude changes over one semi-oscillation.
  • Using the parametric definition (21) and (22) of the function $T(C)$ and the value $C^{*}$ found in the previous step, calculate the parameter $s^{*}$ as the solution of the equation $C(s^{*}) = C^{*}$, and the duration of one semi-oscillation $T^{*} = T(C^{*})$.
    The optimal (minimum) time in problem (2) is then
    $$T = N\,T^{*}.$$
  • The value $s^{*} = \xi - \tau$ uniquely determines the type of optimal control for one semi-oscillation (Figure 3) and allows determining the number and positions of the switching points within one semi-oscillation.
    In the case $s^{*} > 0$, we have optimal control of type 4 or 5. Within one semi-oscillation, we first calculate the moment of the first switching, $\tau = \frac{\pi - \varphi_1}{\beta_1}$. Then, if $s^{*} < s_{\max}$, the second switching moment is calculated as $\xi = \tau + s^{*}$.
    In the case $s^{*} = 0$, there is no switching moment (this is optimal control of type 3).
    In the case $s^{*} < 0$, optimal control of type 1 or 2 applies. Here, the second switching moment, $\tau = T^{*} - \frac{\varphi_1}{\beta_1}$, is calculated first, and then the first switching moment, $\xi = \tau + s^{*}$, is found.
    Subsequently, the control values repeat periodically on each semi-oscillation. Thus, we obtain the optimal control and the optimal trajectory over the entire segment $t \in [0, T]$.
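The steps above can be outlined in code. This is only a skeleton: the parametric maps $s \mapsto C(s)$ and $s \mapsto T(s)$ from Formulas (21) and (22), as well as $\varphi_1$, $\beta_1$, and $s_{\max}$, are defined earlier in the paper and are taken here as given callables and constants; the function name and the use of bisection are our own choices:

```python
import math

def plan_optimal_process(A, B, N, C_of_s, T_of_s, phi1, beta1, s_max):
    """Skeleton of the solution algorithm.  The maps s -> C(s) and
    s -> T(s) (Formulas (21) and (22)), together with phi1, beta1 and
    s_max, are assumed given; N is assumed already found from (24)."""
    C_star = (abs(B) / A) ** (1.0 / N)       # common ratio per semi-oscillation

    # Solve C(s*) = C* by bisection; C_of_s is assumed monotone on [-s_max, s_max].
    lo, hi = -s_max, s_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (C_of_s(mid) - C_star) * (C_of_s(lo) - C_star) <= 0.0:
            hi = mid
        else:
            lo = mid
    s_star = 0.5 * (lo + hi)

    T_star = T_of_s(s_star)                  # duration of one semi-oscillation
    T_total = N * T_star                     # total optimal time, T = N * T*

    # Switching moments within one semi-oscillation, by the sign of s*.
    if s_star > 0:                           # control of type 4 or 5
        tau = (math.pi - phi1) / beta1
        xi = tau + s_star if s_star < s_max else None
    elif s_star == 0:                        # type 3: no switching
        tau, xi = None, None
    else:                                    # type 1 or 2
        tau = T_star - phi1 / beta1
        xi = tau + s_star
    return {"C*": C_star, "s*": s_star, "T*": T_star, "T": T_total,
            "tau": tau, "xi": xi}

# Illustrative call with toy monotone maps (NOT the paper's (21) and (22)):
demo = plan_optimal_process(A=1.0, B=0.25, N=3,
                            C_of_s=lambda s: 0.5 + 0.1 * s,
                            T_of_s=lambda s: 3.0 - 0.2 * s,
                            phi1=1.0, beta1=1.0, s_max=2.0)
print(round(demo["C*"], 3), round(demo["s*"], 3))
```

The bisection step stands in for solving $C(s^{*}) = C^{*}$ analytically; with the actual parametric formulas, any scalar root-finder would serve the same purpose.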

6.1. Example 2

Applying the obtained algorithm, we find the optimal control and trajectory for the parameter values $A = 1$, $B = \frac{1}{4}$, $\mu = 0.1$, and $\omega_0 = 0.5$.
From Equation (9), it follows that $x_{\min} \approx 0.39$ and $x_{\max} \approx 1.59$. From (21), $x^{*} = C(0) \approx 0.85$. Note also that, since $x_{\max} > 1$, the problem has a solution for any boundary conditions at these specific values of $\mu$ and $\omega_0$.
From (24), it is determined that the end point is reachable within $N = 3$ semi-oscillations. Further, according to Formula (27), $C^{*} = \sqrt[3]{\frac{1}{4}} \approx 0.63$. This value of $C^{*}$ corresponds, via Formula (22), to optimal control of type 2, from which we find $s^{*} \approx -1.09$, $T^{*} \approx 3.35$, and $T = 3\,T^{*} \approx 10.05$. The second switching moment is $\tau \approx 1.83$ and the first switching moment is $\xi = \tau + s^{*} \approx 0.74$. The optimal trajectory and phase portrait are shown in Figure 6.
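The arithmetic of this example is easy to reproduce; the sketch below uses only the values quoted in the text:

```python
A, B, N = 1.0, 0.25, 3                 # data of Example 2
C_star = (abs(B) / A) ** (1.0 / N)     # cube root of 1/4
print(round(C_star, 2))                # 0.63

T_semi = 3.35                          # T*, one semi-oscillation (from the text)
print(round(N * T_semi, 2))            # 10.05, the total optimal time T

s_star, tau = -1.09, 1.83              # s* and the second switching moment
print(round(tau + s_star, 2))          # 0.74, the first switching moment xi
```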

6.2. Example 3

Using the obtained result on the periodicity of the optimal control, we can construct the reachability set and the optimal trajectories for the case when the endpoint is reachable within no more than three semi-oscillations, for $\mu = 0.2$, $\omega_0 = 0.75$ (Figure 7). Since the optimal control is periodic, the optimal trajectories are first constructed for several selected values of the parameter $s$ on one semi-oscillation using the algorithm given above. Then, the optimal control of one semi-oscillation is repeated on the next two semi-oscillations. Three semi-oscillations are considered in this example, but the construction extends readily to any number of semi-oscillations.
It is important to note the discontinuity of the optimal time curve $T(C)$ in the case of more than one semi-oscillation: a small change in the boundary conditions can lead to a significant change in the optimal time by increasing the number of semi-oscillations along the optimal trajectory. We also note that, for these parameter values, condition (25) is not satisfied, so the optimal control problem is not solvable for arbitrary boundary conditions. Figure 7 shows that all optimal trajectories are damped oscillations and that, although the optimal control is a periodic function, the period differs between trajectories.

7. Conclusions

In conclusion, this study presents an insightful examination of a bilinear optimal control problem, with a particular emphasis on the coefficient modulation. Through rigorous analysis, it has been established that optimal control is non-singular, the optimal process exhibits periodic characteristics with oscillatory behavior, and the amplitudes form a geometric progression. Furthermore, it was determined that while the optimal process itself is indeed periodic, the control function does not retain symmetry within a single period.
The implications of these findings extend to the broader realm of control theory and its applications in engineering and physics, offering a new perspective on the nature of bilinear control systems. The periodicity of the optimal process suggests potential for efficient energy usage and system stabilization in various applications, from mechanical systems to electrical circuits.
However, the lack of symmetry in the control function within the period underscores the complexity of bilinear control systems and indicates that intuition alone may not be sufficient to predict the system behavior. Future research may explore the nuances of this asymmetry and its impact on system performance.
Regarding zero-velocity boundary conditions, our study focuses on mechanisms that perform full oscillations; such boundary conditions can therefore always be used for the fastest damping.
The analytical solution obtained in the paper allows for the precise determination of the switching moments, as well as the amplitudes and the total optimal time of the process. This paper contributes to the ongoing discourse in control theory, providing a foundation for subsequent studies to build upon. The results underscore the necessity for a nuanced approach to control strategy development, especially in systems where time-optimality is a paramount consideration. The methodologies and findings herein have practical implications for designing more efficient and robust control systems in the future.

Author Contributions

Conceptualization, V.T.; investigation, D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data supporting the results of this study are available from the corresponding authors upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Pontryagin, L.S.; Boltyanskii, V.G.; Gamkrelidze, R.V.; Mishchenko, E.F. The Mathematical Theory of Optimal Processes; Interscience Publishers (division of John Wiley and Sons, Inc.): New York, NY, USA, 1962; 360p. [Google Scholar]
  2. Lewis, F.L.; Vrabie, D.; Syrmos, V.L. Optimal Control; John Wiley and Sons, Inc.: Hoboken, NJ, USA, 2012; 540p. [Google Scholar]
  3. Kirk, D.E. Optimal Control Theory: An Introduction; Dover Publications, Inc.: New York, NY, USA, 2004; 464p. [Google Scholar]
  4. Bryson, A.E.; Ho, Y.C. Applied Optimal Control: Optimization, Estimation, and Control; Taylor & Francis Group: New York, NY, USA, 1975; 496p. [Google Scholar]
  5. Geradin, M.; Rixen, D. Mechanical Vibrations: Theory and Application to Structural Dynamics; John Wiley & Sons, Ltd: Chichester, UK, 2015; 598p. [Google Scholar]
  6. Lavrovskii, E.K.; Formal’skii, A.M. Optimal control of the pumping and damping of a swing. J. Appl. Math. Mech. 1993, 57, 311–320. [Google Scholar] [CrossRef]
  7. Golubev, Y.F. Brachistochrone with dry and arbitrary viscous friction. J. Comput. Syst. Sci. Int. 2012, 51, 22–37. [Google Scholar] [CrossRef]
  8. Dupont, P.; Kasturi, P.; Stokes, A. Semi-active control of friction dampers. J. Sound Vib. 1997, 202, 203–218. [Google Scholar] [CrossRef]
  9. Berger, E. Friction modeling for dynamic system simulation. ASME Appl. Mech. Rev. 2002, 55, 535–577. [Google Scholar] [CrossRef]
  10. Zoccolini, L.; Bruschi, E.; Cattaneo, S.; Quaglini, V. Current Trends in Fluid Viscous Dampers with Semi-Active and Adaptive Behavior. Appl. Sci. 2023, 13, 10358. [Google Scholar] [CrossRef]
  11. Cetin, H.; Aydin, E.; Ozturk, B. Optimal Design and Distribution of Viscous Dampers for Shear Building Structures Under Seismic Excitation. Front. Built Environ. 2019, 5, 90. [Google Scholar] [CrossRef]
  12. Scaramozzino, S.; Listmann, K.D.; Gebhardt, J. Time-optimal control of harmonic oscillators at resonance. In Proceedings of the European Control Conference (ECC), Linz, Austria, 15–17 July 2015; pp. 1955–1961. [Google Scholar] [CrossRef]
  13. Hatvani, L. On the parametrically excited pendulum equation with a step function coefficient. Int. J. Non Linear Mech. 2015, 77, 172–182. [Google Scholar] [CrossRef]
  14. Maccari, A. Vibration Control for Parametrically Excited Coupled Nonlinear Oscillators. J. Comput. Nonlinear Dynam. 2008, 3, 031010. [Google Scholar] [CrossRef]
  15. Belyakov, A.O.; Seyranian, A.P.; Luongo, A. Dynamics of the pendulum with periodically varying length. Phys. D Nonlinear Phenom. 2009, 238, 1589–1597. [Google Scholar] [CrossRef]
  16. Yu, Y.; Ma, J.; Shi, X.; Wu, J.; Cai, S.; Li, Z.; Wang, W.; Wei, H.; Wei, R. Study on the variable length simple pendulum oscillation based on the relative mode transfer method. PLoS ONE 2024, 19, e0299399. [Google Scholar] [CrossRef]
  17. Wright, J.A.; Bartuccelli, M.; Gentile, G. Comparisons between the pendulum with varying length and the pendulum with oscillating support. J. Math. Anal. Appl. 2017, 449, 1684–1707. [Google Scholar] [CrossRef]
  18. Anderle, M.; Čelikovský, S.; Vyhlídal, T. Lyapunov Based Adaptive Control for Varying Length Pendulum with Unknown Viscous Friction. In Proceedings of the 2021 23rd International Conference on Process Control (PC), Strbske Pleso, Slovakia, 1–4 June 2021; pp. 1–6. [Google Scholar] [CrossRef]
  19. Yang, T.; Fang, B.; Li, S.; Huang, W. Explicit analytical solution of a pendulum with periodically varying length. Eur. J. Phys. 2010, 31, 1089–1096. [Google Scholar] [CrossRef]
  20. Matveev, A.S. The Instability of Optimal Control Problems to Time Delay. SIAM J. Control Optim. 2005, 43, 1757–1786. [Google Scholar] [CrossRef]
  21. Hao, L.; Pagani, R.; Beschi, M.; Legnani, G. Dynamic and Friction Parameters of an Industrial Robot: Identification, Comparison and Repetitiveness Analysis. Robotics 2021, 10, 49. [Google Scholar] [CrossRef]
Figure 1. An example of the trajectory of the controlled system (2) under the action of bang–bang control ω ( t ) for the case μ = 0.05 , ω 0 = 0.75 .
Figure 2. Cases (1)–(10) of sign changes in the function ψ 2 ( t ) . The dashed gray line represents situations that do not satisfy the PMP. The solid red line represents cases that do not contradict the PMP.
Figure 3. All possible variants of optimal control encountered in problem (10).
Figure 4. Optimal time curve T ( C ) in problem (5) for the case of μ = 0.1 , ω 0 = 0.5 .
Figure 5. The reachability set of optimal trajectories and control switching points for various values of C in the case of a single oscillation. μ = 0.1 , ω 0 = 0.5 .
Figure 6. Optimal trajectory x ( t ) (blue), control ω ( t ) (red), and phase portrait (green) for μ = 0.1 , ω 0 = 0.5 .
Figure 7. The reachability set of optimal trajectories and control switching points for various values of C in the case of no more than 3 semi-oscillations. μ = 0.2 , and ω 0 = 0.75 .

