Article

Efficient Control Discretization Based on Turnpike Theory for Dynamic Optimization

Process Systems Engineering Laboratory, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
* Author to whom correspondence should be addressed.
Current address: Aspen Technology, 20 Crosby Dr, Bedford, MA 01730, USA.
Processes 2017, 5(4), 85; https://doi.org/10.3390/pr5040085
Submission received: 12 November 2017 / Revised: 8 December 2017 / Accepted: 11 December 2017 / Published: 18 December 2017
(This article belongs to the Special Issue Combined Scheduling and Control)

Abstract

Dynamic optimization offers a great potential for maximizing performance of continuous processes from startup to shutdown by obtaining optimal trajectories for the control variables. However, numerical procedures for dynamic optimization can become prohibitively costly upon a sufficiently fine discretization of control trajectories, especially for large-scale dynamic process models. On the other hand, a coarse discretization of control trajectories is often incapable of representing the optimal solution, thereby leading to reduced performance. In this paper, a new control discretization approach for dynamic optimization of continuous processes is proposed. It builds upon turnpike theory in optimal control and exploits the solution structure for constructing the optimal trajectories and adaptively deciding the locations of the control discretization points. As a result, the proposed approach can potentially yield the same, or even improved, optimal solution with a coarser discretization than a conventional uniform discretization approach. It is shown via case studies that using the proposed approach can reduce the cost of dynamic optimization significantly, mainly due to introducing fewer optimization variables and cheaper sensitivity calculations during integration.

1. Introduction

The use of dynamic optimization for optimizing the performance of transient manufacturing processes has drawn much attention in recent years [1]. A typical dynamic optimization problem is to find optimal trajectories for process control variables, e.g., flowrates in a chemical process, so that a particular performance index is maximized subject to some process constraints. Numerical solution of dynamic optimization problems poses a number of difficulties. First, optimization over control trajectories gives rise to an infinite-dimensional optimization problem, which must be transformed to an approximate finite-dimensional one by discretizing control trajectories. In so-called indirect methods [2], control trajectories are discretized automatically along with the state variables as the integration routine solves a two-point boundary value problem. The resolution of the discretization depends on the error-control mechanism of the integrator. A disadvantage of indirect methods is the rather high level of expertise required to formulate the optimality conditions for problems of practical size and complexity. Moreover, it can be very difficult to solve the resulting boundary value problem, especially in the presence of constraints [2,3]. Direct methods, on the other hand, rely on a priori discretization of the control trajectories. Two main classes of direct methods are (i) the simultaneous method, where the dynamic model is also discretized, using e.g., collocation techniques and extra optimization variables [4], to arrive at a fully algebraic model and (ii) the sequential method, where the dynamic model is retained, and instead is resolved using numerical integration [5]. The focus in this work is on the sequential method that has shown capabilities for handling large-scale, stiff problems with high accuracy [6], arguably due to keeping the problem dimension relatively small [7] and the use of adaptive numerical integrators furnished with error control mechanisms.
The quality of the optimal solution is directly related to the quality of the control discretization, with a finer discretization generally offering an improved optimal solution. However, a finer discretization can lead to increased computational cost as a result of increased number of optimization variables, with too fine of a discretization also posing robustness issues [8]. For the sequential method, it also increases the cost of computing the sensitivity information by introducing extra sensitivity equations to the dynamic model. The integration of sensitivity equations can be a computationally dominant task despite the significant progress made so far regarding their efficient calculation (see [9,10,11]). This is especially true when the number of optimization variables is large, and can be a potentially limiting factor in applying dynamic optimization to large-scale processes.
A second difficulty with dynamic optimization arises from potentially severe nonlinearity caused by the embedded dynamic model. The nonlinearity can result in a highly nonconvex problem exhibiting many suboptimal local solutions. It is known that even small-scale dynamic systems can exhibit multiple suboptimal local solutions [12,13,14]. Unfortunately, a derivative-based optimization method can become trapped in any of these, which can lead to a suboptimal operation and loss of profit.
To deal with the issue of control discretization, nonuniform (i.e., not equally spaced) discretization techniques can be applied to reduce the effective number of discretization points needed. A simple way to do this is by considering the location of the discretization points as extra optimization variables. However, the extra optimization variables can cause additional nonconvexity, and thus suboptimal, local solutions, especially when the number of discretization points is fairly large. In the authors’ experience, this strategy often leads to convergence difficulties, thereby defeating the point of using a coarser discretization for faster convergence. More elegant nonuniform discretization techniques have been proposed, where the discretization points over the time horizon are distributed adaptively based on the behavior of the optimal trajectories. As a result, excessive discretization can be avoided while maintaining the quality of the solution. Srinivasan et al. [15] proposed a parsimonious discretization method for optimization of batch processes with control-affine dynamics. The method relied on analysis of the information from successive batch runs for approximating the structure of the control trajectory and improving the solution in the face of fixed parametric uncertainty. Binder et al. [16] presented an adaptive discretization strategy, in which the problem with a coarse discretization of the control trajectory is first solved. Then, using a wavelet analysis of the obtained optimal solution, together with the gradient information, the discretization is refined by eliminating discretization points that are deemed unnecessary and adding new points where necessary. The problem with the refined discretization is then solved to give an improved optimal solution, and the procedure is repeated for further improvements. In a subsequent variant, Schlegel et al. [17] used a pure wavelet analysis in the refinement step in order to make the strategy better suited for problems with path constraints. In addition, Schlegel and Marquardt [18] proposed another adaptive discretization strategy that explicitly incorporates the structure of the optimal control trajectory. Specifically, the solution with a coarse discretization is used to deduce the structure based on the active or inactive status of the bound and path constraints, and the deduced structure is used to refine the discretization. A combination of the two strategies is also possible as presented in [19]. Recently, Liu et al. [20] proposed an adaptive control discretization method, in which the discretization is refined by a particular slope analysis on the approximate optimal control trajectories. The dynamic optimization problem is solved again with the refined discretization, and the procedure is repeated until a stopping criterion based on the relative improvement of the objective function is met. The foregoing strategies can be applied to a wide class of dynamic optimization problems arising from both batch and continuous processes. Nonetheless, a major drawback of them is the need for repeated solution of the dynamic optimization problem in order to arrive at a satisfactory optimal solution. Depending on the case, this may be even more costly overall than a one-time optimization with a sufficiently fine discretization. In addition, the post-processing of results required for each refinement step would take additional time and expertise.
In this paper, a new approach to control discretization is presented. Similar to the above-mentioned strategies, it aims to create an adaptive discretization based on a deduction of the structure of the optimal trajectories. However, a different philosophy is used for this purpose. Specifically, the proposed approach is based on turnpike theory [21,22] in optimal control, which analyzes optimal control problems with respect to the structure of the optimal solution, i.e., optimal trajectories of the control and state variables. Turnpike theory has been used in the context of indirect methods in [23,24,25], where the structure of the optimal trajectories is exploited to approximate the resulting boundary-value problem by two initial-value problems corresponding to the initial and terminal segments of the time horizon. However, the focus in this work is on using turnpike theory for efficient control discretization in direct methods. Of particular interest is the input-state turnpike [26,27] structure, where both control and state optimal trajectories are composed of three phases: a transient phase at the beginning, followed by a non-transient phase that is close to the optimal steady state, followed by another transient phase at the end. This type of turnpike is called steady-state turnpike in this paper. The proposed approach exploits this structure to place the discretization points in an “optimal” way. To do so, an adaptive discretization strategy is built into the dynamic optimization formulation. In this way, the solution structure and locations of the discretization points are adjusted “dynamically” during the optimization iterations so that optimal trajectories with an adapted discretization scheme are obtained at convergence. Therefore, unlike the previous strategies, only one dynamic optimization problem is solved in the proposed approach, and no post-processing of the results is needed. Furthermore, the proposed approach helps deal with the issue of suboptimal solutions by potentially avoiding a number of such solutions whose trajectories do not conform to the turnpike structure. It is noted, however, that this approach is most suitable for optimization of transient continuous processes in which an approximate steady state may occur. It is not meant for optimization of batch or semi-batch processes, in which steady state is not possible.
The remainder of the paper is organized as follows. The problem statement along with some background on turnpike theory is presented in Section 2. The proposed control discretization approach is described in Section 3. Numerical case studies are performed in Section 4, which is followed by some conclusions in Section 5.

2. Problem Statement and Pertinent Background

The dynamic optimization problem under study can take a quite general form. For simplicity, however, a minimal formulation is considered below:
$$\min_{u \in \mathcal{U}} \; \int_0^{t_f} J\big(x(t,u), u(t)\big)\, \mathrm{d}t, \tag{1}$$
$$\text{s.t.} \quad \dot{x}(t,u) = f\big(x(t,u), u(t)\big), \quad t \in (0, t_f], \tag{2}$$
$$g\big(x(t_f,u), u(t_f)\big) \leq 0, \tag{3}$$
$$x(0,u) = x_0, \tag{4}$$
where $u(t) \in \mathbb{R}^{n_u}$ and $x(t) \in \mathbb{R}^{n_x}$ are vectors of control and state variables, respectively, with $x_0 \in \mathbb{R}^{n_x}$ the state initial conditions; $\mathcal{U}$ is the set of admissible controls defined as $\mathcal{U} \triangleq \{u \in (L^1([0,t_f]))^{n_u} : u(t) \in U \ \text{a.e. in } [0,t_f]\}$, with $U \subset \mathbb{R}^{n_u}$ the bounds on $u$; $f$ and $g$ are vector functions of appropriate sizes representing the dynamic model and process constraints, respectively. Note also that equality constraints or path constraints can be included in the formulation with no impact on the validity of the ensuing developments.
The time-dependent control variables $u$ in Problem (1)–(4) give rise to an infinite-dimensional optimization problem. For a numerical solution, the problem dimension must be reduced to a finite one by discretizing $u$ over the time horizon. In this work, the popular piecewise constant discretization approach is used for this purpose. With the time horizon $[0,t_f]$ discretized over $N$ (not necessarily uniform) epochs $[0,t_1), [t_1,t_2), \ldots, [t_{N-1},t_N]$, $u$ is approximated over each epoch by a constant parameter vector as $u(t) = p_k$, $t \in [t_{k-1},t_k)$, with $k = 1,\ldots,N$ and $u(t_f) = p_N$. The discontinuities in $u$ make the dynamic model (2) a continuous-discrete hybrid dynamic system [28]. The discrete behavior potentially occurs at times $t_k$, $k = 1,\ldots,N-1$ (called event times). The dynamic system is said to switch from one mode to another at these times. The hybrid behavior can have implications regarding the differentiability of the dynamic system, and consequently that of the optimization problem, as discussed later.
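To make the parameterization concrete, the following Python sketch (illustrative code, not from the paper; the function and variable names are hypothetical) evaluates a piecewise-constant control for a given epoch grid and parameter matrix:

```python
import numpy as np

def piecewise_constant_control(t, t_grid, p):
    """Evaluate u(t) = p_k on the epoch [t_{k-1}, t_k).

    t_grid : array of epoch end times [t_1, ..., t_N], with t_N = t_f.
    p      : array of shape (N, n_u) stacking the parameter vectors p_1, ..., p_N.
    """
    k = int(np.searchsorted(t_grid, t, side="right"))  # index of the epoch containing t
    k = min(k, len(p) - 1)                             # map t = t_f into the last epoch
    return p[k]

# Example: N = 3 epochs on [0, 6] with a scalar control; t = 2.5 lies in the second epoch.
u = piecewise_constant_control(2.5, t_grid=np.array([2.0, 4.0, 6.0]),
                               p=np.array([[0.0], [1.0], [0.5]]))   # returns [1.0]
```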
The piecewise constant discretization reduces the optimization to one over the finite-dimensional parameter vector $p = (p_1, p_2, \ldots, p_N) \in U^N$. However, the optimal solution of the approximated problem is generally inferior to that of the original one. To improve the solution, a finer discretization may be used by increasing the number of epochs $N$. The effect of increasing $N$, and thus the number of optimization variables, on the computational cost of the solution is twofold: (i) it increases the cost of the parametric sensitivities that are calculated during solution of the dynamic model, as required by a gradient-based optimization solver; and (ii) it can increase the cost of optimization by requiring more iterations to be performed within the larger search space. In some cases, increasing $N$ too much can also lead to robustness issues [8] and failure of the optimization solver.
The above computational concerns become even more important when dealing with large-scale models of real-life manufacturing processes. This motivates efficient control discretization strategies that can maintain the solution quality with a coarser discretization scheme. The new discretization approach presented in this paper relies on turnpike theory, which is reviewed in the following subsection.

Turnpike in Optimal Control

It appears that turnpike theory in optimal control was first discussed in the field of econometrics [22], and later gained attention in other fields including chemical processes [29]. The theory characterizes the structure of the solution of an optimal control problem by describing how the optimal control and state trajectories of a system evolve with time. It was initially investigated for optimal control problems with convex cost functions (in a minimization case). In particular, it was established that, given a time horizon $T := [0,t_f]$, the time the optimal trajectories spend outside an $\epsilon$-neighborhood of the optimal steady state is limited to two intervals $[0,t_1]$ and $[t_f - t_2, t_f]$, where $0 \leq t_1, t_2 \leq \gamma$, with $\gamma > 0$. The system is said to be in a transient phase in these intervals. The interesting point is that $\gamma$ is independent of $t_f$ but only dependent on $\epsilon$ and the initial and final conditions of the system [30,31]. If the time horizon is long enough, i.e., $t_f > t_1 + t_2$, then a turnpike appears between $t_1$ and $t_f - t_2$, and the turnpike trajectories lie in the $\epsilon$-neighborhood of the optimal steady state. See Figure 1 for a visualization of the concept. An appealing implication of turnpike theory is that an increase in $t_f$ will only stretch the duration of the turnpike, and has no effect on the duration and solution of the transient phases [32]. For relatively large $t_f$, the optimal trajectories will traverse close to the optimal steady state for most of the time horizon, and the transient phases will be short in comparison.
The extension of turnpike theory to nonconvex problems has led to generalized definitions of the turnpike, in which it may no longer be a steady state, but a time-dependent trajectory [31,33]. Nonetheless, the interest in the present work is in a steady-state turnpike for generally nonconvex problems. Recently, it was shown in [27,34] that the steady-state turnpike still occurs if the convexity assumption is replaced by a dissipativity assumption. In particular, they present a notion of strict dissipativity, and prove that, if a dynamic system is strictly dissipative with respect to a reachable optimal steady state, then the optimal trajectories will have a turnpike at that steady state. Such a turnpike emerges in practice if the time horizon is sufficiently long. The equivalence of strict dissipativity and steady-state turnpike is a key result in optimal control and has applications in stability analysis of economic model predictive control [35,36].

3. Proposed Adaptive Control Discretization Approach

It is known that the emergence of a steady-state turnpike can be exploited in the numerical solution of optimal control problems, as noted in, e.g., [32,37], especially for problems with sufficiently long time horizons. In particular, the control trajectories need not be discretized over the entire horizon, but only over the intervals before and after the turnpike. If the turnpike interval is considerably long, this results in a coarser discretization and can reduce the computational load, as discussed earlier. However, the difficulty is that the duration of the turnpike and its location in the optimal trajectory are not known a priori. Therefore, it is not possible to adapt the discretization in advance based on the turnpike structure. To resolve this issue, this work considers embedding the tasks of approximating the turnpike structure and adaptive discretization into the dynamic optimization formulation. In this way, no pre- or post-processing for adjusting the discretization is needed. To this end, a first idea involves performing a nonuniform discretization where the durations of the epochs, i.e., $\Delta t_1 := t_1$ and $\Delta t_i := t_i - t_{i-1}$ for $i = 2,\ldots,N$, are themselves optimization variables. To account for the turnpike, the parametrized controls on one of the intermediate epochs, e.g., the middle one, can be set to their optimal steady-state values. The following optimization problem will then result:
$$\min_{\substack{p_k \in U,\ k = 1,\ldots,N,\ k \neq m \\ \Delta t_i \in [0,t_f],\ i = 1,\ldots,N}} \int_0^{t_f} J\big(x(t,p), p\big)\, \mathrm{d}t, \tag{5}$$
$$\text{s.t.} \quad \dot{x}(t,p) = f\big(x(t,p), p\big), \quad t \in (0,t_f], \tag{6}$$
$$g\big(x(t_f,p), p_N\big) \leq 0, \tag{7}$$
$$\sum_{i=1}^{N} \Delta t_i = t_f, \tag{8}$$
$$p_m = u_{ss}^*, \quad m = \lceil N/2 \rceil, \tag{9}$$
$$x(0,p_1) = x_0, \tag{10}$$
where $\lceil a \rceil$ gives the smallest integer not less than $a$. The optimal steady-state control $u_{ss}^*$ is obtained from solving the following static optimization problem:
$$\min_{x_{ss} \in X,\ u_{ss} \in U} J(x_{ss}, u_{ss}), \tag{11}$$
$$\text{s.t.} \quad 0 = f(x_{ss}, u_{ss}), \tag{12}$$
where $X \subset \mathbb{R}^{n_x}$ are the bounds on $x_{ss}$. Problem (5)–(10) will generally prescribe a nonuniform discretization to minimize the objective function. In particular, the optimizer can adjust the $\Delta t_i$ so that only one epoch, here $[t_{m-1}, t_m)$, is enough to represent the steady-state turnpike. The remaining $N-1$ epochs are automatically adjusted to cover the transient phases, where a finer discretization is needed. Problem (5)–(10) leads to a total of $n_p = (N-1) \times n_u + N$ optimization variables. The nonuniform discretization is shown schematically in Figure 2, where the middle epoch $[t_2, t_3)$ is enlarged to cover the steady-state turnpike. In case no turnpike appears in the optimal solution, the $\Delta t_i$ corresponding to the turnpike can simply be pushed to zero by the optimizer. Therefore, the existence of a turnpike is not assumed and need not be verified in advance.
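For small problems, the static Problem (11)–(12) can be sketched as a bound-constrained NLP with an equality constraint on the steady-state residual, as below. Note that the paper solves this problem to global optimality with BARON (see Section 4); the local SciPy solve shown here is only an illustration under that caveat, and all function and variable names are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def solve_steady_state(J, f, x_bounds, u_bounds, x0, u0):
    """Local-NLP sketch of Problem (11)-(12): minimize J(x_ss, u_ss)
    subject to f(x_ss, u_ss) = 0 and simple bounds on x_ss and u_ss.
    The paper uses BARON to certify global optimality; a local solver
    only returns a (possibly suboptimal) stationary point."""
    x0, u0 = np.asarray(x0, float), np.asarray(u0, float)
    n_x = len(x0)

    def objective(z):
        return J(z[:n_x], z[n_x:])

    steady_state = {"type": "eq", "fun": lambda z: np.asarray(f(z[:n_x], z[n_x:]))}
    res = minimize(objective, np.concatenate([x0, u0]),
                   bounds=list(x_bounds) + list(u_bounds),
                   constraints=[steady_state], method="SLSQP")
    return res.x[:n_x], res.x[n_x:]
```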
Note that the nonuniform discretization in Problem (5)–(10) results in a hybrid dynamic system with variable switching times t i . For numerical reasons, a change of variable is usually used to transform Problem (5)–(10) to one with fixed switching times [38,39]. More details about this transformation are deferred to the next subsection.
With the flexibility in the values of $\Delta t_i$, it is expected that a smaller $N$ suffices to achieve the same optimal solution as in a uniform discretization. However, the addition of the $N$ optimization variables $\Delta t_i$ in the nonuniform discretization strategy can adversely affect the overall computational cost and offset the benefits gained by a coarser discretization. It could contribute to more nonconvexity and thus additional suboptimal local solutions. The alternative proposed in this work is a semi-uniform adaptive discretization approach, as described in the sequel.

Semi-Uniform Adaptive Control Discretization

Here, a semi-uniform discretization approach for utilizing turnpike theory is proposed. The idea is to limit the use of nonuniform discretization where possible, while still being able to accommodate the variable durations of the steady-state turnpike and transient phases. To do so, the time horizon is split into the three phases prescribed by turnpike theory: two transient phases and a steady-state turnpike in between (see Figure 1). A uniform discretization is then applied on the transient phases, which connect to each other by one epoch representing the turnpike. The durations of these three phases are included as extra optimization variables. The approach is depicted in the left plot of Figure 3, where $\tau_1$, $\tau_2$, and $\tau_3$ denote the unknown durations of the first transient, turnpike, and second transient phases, respectively. Observe that the durations of the epochs in each particular phase are equal although they are generally not equal from one phase to another, hence a semi-uniform discretization. In addition, note that each of the transient phases can take a different resolution (i.e., number of epochs). This will allow for a more accurate solution of systems where, based on experience or engineering insights, one transient phase is deemed to be considerably longer or more severe than the other, thereby requiring a finer discretization. The semi-uniform discretization approximates $u$ as follows. Suppose the first transient phase (also called startup hereafter) is discretized into $N_s$ uniform epochs. Then, the control vector $u$ over $t \in [0, \tau_1)$ is approximated as $u(t) = p_k$, $t \in [t_{k-1}, t_k)$, with $k = 1,\ldots,N_s$ and $t_{N_s} = \tau_1$. In addition, suppose the second transient (also called shutdown hereafter) is discretized into $N_d$ uniform epochs. Then, the control vector $u$ over $t \in [\tau_1 + \tau_2, t_f]$ is approximated as $u(t) = p_k$, $t \in [t_{k-1}, t_k)$, with $k = N_s+2, \ldots, N_s+1+N_d$ and $u(t_{N_s+1+N_d} = t_f) = p_{N_s+1+N_d}$. Notice that $u(t) = p_{N_s+1}$, $t \in [t_{N_s}, t_{N_s+1})$, represents the one-epoch discretization corresponding to the turnpike. With this discretization scheme, the proposed approach can be formulated through the following dynamic optimization problem:
$$\min_{\substack{p_k \in U,\ k = 1,\ldots,N,\ k \neq N_s+1 \\ \tau_1, \tau_2, \tau_3 \in [0,t_f]}} \int_0^{t_f} J\big(x(t,p), p\big)\, \mathrm{d}t, \tag{13}$$
$$\text{s.t.} \quad \dot{x}(t,p) = f\big(x(t,p), p\big), \quad t \in (0,t_f], \tag{14}$$
$$g\big(x(t_f,p), p_N\big) \leq 0, \tag{15}$$
$$\sum_{i=1}^{3} \tau_i = t_f, \tag{16}$$
$$p_{N_s+1} = u_{ss}^*, \tag{17}$$
$$x(0,p_1) = x_0, \tag{18}$$
where $N = N_s + 1 + N_d$ is the total number of epochs. The extra variables $\tau_i$ allow for the flexibility required to approximate the transient and turnpike phases well. Unlike the nonuniform formulation, however, the proposed formulation accommodates this flexibility by introducing only three extra optimization variables, regardless of the total number of epochs considered (the total number of optimization variables is $n_p = (N-1) \times n_u + 3$ in this formulation). Furthermore, if a steady-state turnpike does not appear for a particular system, the optimizer would push $\tau_2$ to zero. Therefore, the emergence of a turnpike need not be known a priori. However, this approach will best serve its purpose of reducing the computational cost if a turnpike exists and appears in the solution trajectories.
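As a small illustration of the bookkeeping implied by this scheme (not code from the paper; names are illustrative), the epoch end times follow directly from the phase durations and resolutions:

```python
import numpy as np

def epoch_end_times(tau1, tau2, tau3, Ns, Nd):
    """Epoch end times t_1, ..., t_N of the semi-uniform scheme:
    Ns uniform startup epochs on [0, tau1), one turnpike epoch on
    [tau1, tau1 + tau2), and Nd uniform shutdown epochs ending at t_f."""
    startup  = tau1 * np.arange(1, Ns + 1) / Ns                 # t_1, ..., t_{Ns} = tau1
    turnpike = np.array([tau1 + tau2])                          # t_{Ns+1}
    shutdown = tau1 + tau2 + tau3 * np.arange(1, Nd + 1) / Nd   # ..., t_N = t_f
    return np.concatenate([startup, turnpike, shutdown])        # N = Ns + 1 + Nd entries

# Example: (tau1, tau2, tau3) = (2, 6, 2) with Ns = Nd = 2 gives [1, 2, 8, 9, 10]
print(epoch_end_times(2.0, 6.0, 2.0, 2, 2))
```

Combined with the piecewise-constant evaluation sketched in Section 2, these end times fully define $u(t)$ over $[0, t_f]$; since $p_{N_s+1}$ is fixed to $u_{ss}^*$, it does not count toward the $n_p = (N-1) n_u + 3$ optimization variables.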
Similar to Problem (5)–(10), the proposed formulation results in a hybrid dynamic system with variable switching times. This is because the locations of the discretization points depend on the variables $\tau_i$. For example, the switch for $u$ from the first epoch to the second occurs at time $\tau_1/N_s$ according to the following statement:
$$u(t) = \begin{cases} p_1, & \text{if } 0 \leq t < \dfrac{\tau_1}{N_s}, \\[4pt] p_2, & \text{if } \dfrac{\tau_1}{N_s} \leq t < \dfrac{2\tau_1}{N_s}. \end{cases} \tag{19}$$
Similarly, other switching times depend on the $\tau_i$ and are not fixed. The computation of parametric sensitivities in a hybrid system where the switching times are a function of the parameters is involved and requires some assumptions for the system to ensure existence and uniqueness of the sensitivities [28]. Moreover, the parametric sensitivities are generally discontinuous over the switches [38], and additional computations are necessary to transfer the sensitivity values across the switches [28]. To avoid these numerical complications, the time transformation presented in [39,40] is used so that the switching times are fixed in the transformed formulation. In particular, the time horizon $t \in [0, t_f]$ is transformed to a new time horizon $t' \in [0, 3]$ using
$$\frac{\mathrm{d}t}{\mathrm{d}t'} = \tau(t'), \tag{20}$$
in which
$$\tau(t') = \begin{cases} \tau_1, & \text{if } 0 \leq t' < 1, \\ \tau_2, & \text{if } 1 \leq t' < 2, \\ \tau_3, & \text{if } 2 \leq t' \leq 3, \end{cases} \tag{21}$$
is a piecewise constant function of $t'$. With this transformation, Problem (13)–(18) is rewritten as:
$$\min_{\substack{p_k \in U,\ k = 1,\ldots,N,\ k \neq N_s+1 \\ \tau_1, \tau_2, \tau_3 \in [0,t_f]}} \int_0^{3} J\big(y(t',p), p\big)\, \mathrm{d}t', \tag{22}$$
$$\text{s.t.} \quad \dot{y}(t',p) = \tau(t')\, f\big(y(t',p), p\big), \quad t' \in (0,3], \tag{23}$$
$$g\big(y(3,p), p_N\big) \leq 0, \tag{24}$$
$$\sum_{i=1}^{3} \tau_i = t_f, \tag{25}$$
$$p_{N_s+1} = u_{ss}^*, \tag{26}$$
$$y(0,p_1) = y_0, \tag{27}$$
where $y(t') = x(t)$. Interestingly, each of the startup, turnpike, and shutdown phases has a duration of 1 in the $t'$ domain, as shown in the right plot of Figure 3. Moreover, the switch from one phase to another now occurs at the fixed times $t' = 1$ and $t' = 2$. Accordingly, all the control switches are triggered at fixed times in the new time domain. This ensures existence and uniqueness of the parametric sensitivities and their continuity over the switches [28].
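A minimal sketch of the transformed right-hand side in (23) is given below (illustrative code, not the paper's implementation). Because all switches occur at fixed $t'$ values, a standard integrator restarted at the epoch boundaries can be used without tracking parameter-dependent events:

```python
import numpy as np

def transformed_rhs(tp, y, p, taus, f, Ns, Nd):
    """dy/dt' = tau(t') * f(y, u(t')) on the fixed horizon t' in [0, 3].

    taus = (tau1, tau2, tau3); p stacks the N = Ns + 1 + Nd control parameter
    vectors, with p[Ns] fixed to the optimal steady-state control u_ss*.
    """
    if tp < 1.0:                      # startup: Ns uniform epochs mapped onto [0, 1)
        scale, k = taus[0], min(int(tp * Ns), Ns - 1)
    elif tp < 2.0:                    # turnpike: single epoch mapped onto [1, 2)
        scale, k = taus[1], Ns
    else:                             # shutdown: Nd uniform epochs mapped onto [2, 3]
        scale, k = taus[2], Ns + 1 + min(int((tp - 2.0) * Nd), Nd - 1)
    return scale * np.asarray(f(y, p[k]))
```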
Despite the adaptive discretization strategy incorporated in Problem (22)–(27), the quality of the optimal solution can still depend on the resolutions set for the transient phases ($N_s$ and $N_d$). If these resolutions are too coarse, the optimal solution may be compromised because the optimizer may have to adjust the $\tau_i$ away from their true values, e.g., in order to keep the problem feasible. Even with an adequate resolution, it is possible that the $\tau_i$ are under- or over-approximated by the local optimizer, thereby leading to an inferior solution. In some instances, it may be possible to avoid such a suboptimal solution by special modifications to the proposed formulation. In the next subsection, a variant formulation that can avoid a particular suboptimal scenario is presented.

A Variant Formulation

Figure 4 illustrates the suboptimal scenario that is dealt with by this variant formulation. The dashed lines show the optimal control trajectory that would be obtained ideally from no discretization. The left and right plots depict a suboptimal and an optimal solution, respectively, both obtained with the transient resolutions $N_s = N_d = 2$. In the left plot, $\tau_2$ is under-approximated. As a result, part of the actual turnpike is deemed transient by the adaptive discretization scheme. This leads to unnecessarily using three epochs for the turnpike, leaving only one epoch to represent each transient phase. Observe, however, that the suboptimal control trajectory over the turnpike is still the same as the one obtained in the optimal case. This implies that the state trajectories in part of the obtained transient phases are in fact within an $\epsilon$-neighborhood of the optimal steady state (not shown).
The variant formulation avoids the above-mentioned scenario by making it infeasible to the optimization problem. Specifically, a new constraint is added to ensure that the state trajectories in the obtained transient phases are indeed outside an $\epsilon$-neighborhood of the optimal steady state. For each state variable, this implies
$$|x(t) - x_{ss}^*| \geq \epsilon_r |x_{ss}^*| + \epsilon_a, \quad t \notin [\tau_1, \tau_1 + \tau_2], \tag{28}$$
where $\epsilon_r$ and $\epsilon_a$ are relative and absolute tolerances defining the $\epsilon$-neighborhood, respectively. Writing the squared form of Inequality (28) (to avoid potential non-differentiability) for all state variables and adding them up gives
$$\frac{1}{n_x} \sum_{i=1}^{n_x} \left( \frac{x_i(t) - x_{i,ss}^*}{\epsilon_r |x_{i,ss}^*| + \epsilon_a} \right)^2 \geq 1, \quad t \notin [\tau_1, \tau_1 + \tau_2], \tag{29}$$
which, upon applying the time transformation, can be added to Problem (22)–(27) in order to rule out the possibility of the suboptimal scenario given in Figure 4. The following optimization problem will then result:
$$\min_{\substack{p_k \in U,\ k = 1,\ldots,N,\ k \neq N_s+1 \\ \tau_1, \tau_2, \tau_3 \in [0,t_f]}} \int_0^{3} J\big(y(t',p), p\big)\, \mathrm{d}t', \tag{30}$$
$$\text{s.t.} \quad \dot{y}(t',p) = \tau(t')\, f\big(y(t',p), p\big), \quad t' \in (0,3], \tag{31}$$
$$g\big(y(3,p), p_N\big) \leq 0, \tag{32}$$
$$\frac{1}{n_x} \sum_{i=1}^{n_x} \left( \frac{x_i(t) - x_{i,ss}^*}{\epsilon_r |x_{i,ss}^*| + \epsilon_a} \right)^2 \geq 1, \quad t' \notin [1, 2], \tag{33}$$
$$\sum_{i=1}^{3} \tau_i = t_f, \tag{34}$$
$$p_{N_s+1} = u_{ss}^*, \tag{35}$$
$$y(0,p_1) = y_0. \tag{36}$$
Notice that (33) is a path constraint that is enforced on the transient phases only. For numerical implementation, it is converted to an equivalent terminal constraint by introducing an auxiliary state variable. See [2] for details.
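A minimal sketch of that conversion, assuming the time-transformed dynamics of (31) and writing the left-hand side of (33) as a function phi, is given below (illustrative code following the general idea referenced to [2], not the paper's implementation; it reuses the transformed_rhs sketch from the previous subsection):

```python
import numpy as np

def augmented_rhs(tp, y_aug, p, taus, f, phi, Ns, Nd):
    """Dynamics (31) augmented with an auxiliary state s that accumulates the
    squared violation of Constraint (33) on the transient phases t' in [0, 1)
    and (2, 3].  phi(y) is the normalized deviation measure on the left-hand
    side of (33); requiring s(3) <= tol (tol a small tolerance) at the final
    time then enforces the path constraint."""
    y, s = y_aug[:-1], y_aug[-1]
    dy = transformed_rhs(tp, y, p, taus, f, Ns, Nd)   # as sketched above
    in_transient = (tp < 1.0) or (tp > 2.0)
    ds = max(0.0, 1.0 - phi(y)) ** 2 if in_transient else 0.0
    return np.append(dy, ds)
```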

4. Results and Discussion

In this section, three examples are considered to demonstrate the proposed discretization approach, and compare it against conventional uniform discretization and the nonuniform discretization approach, i.e., Problem (5)–(10), which was described earlier in this paper. The local gradient-based solver IPOPT [41] is used to solve the optimization problems. The integration of the hybrid dynamic systems and parametric sensitivity calculations are performed by the software package DAEPACK (RES Group, Needham, MA, USA) [42,43]. Note that DAEPACK is best suited for large-scale, sparse problems, and, due to the overhead associated with sparse linear algebra, may not be an optimal choice for the small-scale problems considered here. In addition, the global solver BARON [44] within the GAMS environment [45] is used to solve the static Problem (11)–(12) to global optimality. The numerical experiments are performed on a 64-bit Ubuntu 14.04 platform with a 3.2 GHz CPU.
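For orientation, the sequential (single-shooting) evaluation underlying these experiments can be sketched as follows. The paper integrates the hybrid system and its parametric sensitivities with DAEPACK and optimizes with IPOPT; the SciPy-based sketch below (hypothetical names; gradients would be obtained by finite differences) only illustrates the structure of one objective evaluation for the transformed Problem (22)–(27) and is adequate for small examples at best:

```python
import numpy as np
from scipy.integrate import solve_ivp

def phase_index(tp, Ns, Nd):
    """Return (phase, epoch index k) for transformed time t' in [0, 3]."""
    if tp < 1.0:
        return 0, min(int(tp * Ns), Ns - 1)
    if tp < 2.0:
        return 1, Ns
    return 2, Ns + 1 + min(int((tp - 2.0) * Nd), Nd - 1)

def shooting_objective(z, f, L, y0, Ns, Nd, n_u):
    """Single-shooting evaluation of the objective in (22): unpack the decision
    vector z = [tau1, tau2, tau3, p_1, ..., p_N] (with p_{Ns+1} fixed beforehand
    to u_ss*), integrate dy/dt' = tau(t') f(y, u) together with the running cost
    L(y, u), and return the accumulated cost at t' = 3."""
    taus, p = z[:3], z[3:].reshape(-1, n_u)

    def rhs(tp, x_aug):
        y = x_aug[:-1]                         # states; last entry is the running cost
        phase, k = phase_index(tp, Ns, Nd)
        dy = taus[phase] * np.asarray(f(y, p[k]))
        dJ = taus[phase] * L(y, p[k])
        return np.append(dy, dJ)

    sol = solve_ivp(rhs, (0.0, 3.0), np.append(y0, 0.0),
                    method="LSODA", rtol=1e-8, atol=1e-10)
    return sol.y[-1, -1]                       # accumulated objective at t' = 3

# The NLP layer then minimizes shooting_objective over z subject to
# tau1 + tau2 + tau3 = t_f, bounds on p_k and tau_i, and the terminal
# constraints, e.g., with a gradient-based solver (IPOPT in the paper).
```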

4.1. Example 1

Consider the following dynamic optimization problem:
$$\begin{aligned}
\min_{u}\ & J(u) := \int_0^{t_f} \big(x_1(t,u)^2 + x_2(t,u)^2\big)\, \mathrm{d}t \\
\text{s.t.}\ & \dot{x}_1(t,u) = x_2(t,u) - x_1(t,u), \\
& \dot{x}_2(t,u) = x_2(t,u) - u(t), \\
& 5\, x_1(t_f,u) - x_2(t_f,u)^2 = 9, \\
& \big(x_1(0,u),\, x_2(0,u)\big) = (0, 1), \\
& u(t) \in [-3, 3], \quad \text{a.e. in } [0,t_f],
\end{aligned} \tag{37}$$
where $t_f = 20$. The optimal steady-state values required for the nonuniform discretization and proposed approaches are easily obtained by inspection as $(x_1, x_2, u) = (0, 0, 0)$. In addition, the tolerance values used in the variant formulation are set to $\epsilon_a = 10^{-4}$ and $\epsilon_r = 10^{-3}$. The optimal objective values and solver statistics for the uniform discretization with different numbers of epochs, the nonuniform discretization, and the proposed approach are provided in Table 1. With the same number of epochs $N = 5$, the optimal solution from the proposed approach is significantly better than the one obtained from the uniform discretization. The uniform discretization was able to reach the same optimal solution only when $N = 60$ epochs were applied. In addition, for the same optimal solution $J^* = 2.45$, the proposed approach converges remarkably faster, i.e., about 10 times, than the uniform discretization. This speed-up is due to the coarser discretization, which results in both fewer iterations and a lower cost per iteration. The latter is because, with far fewer optimization variables, the proposed approach has a much smaller parametric sensitivity system to solve during integration. Additionally, fewer epochs mean fewer restarts of the integration at the beginning of each epoch. This can further speed up the integration. Within the proposed approach, it is seen that both the main and variant formulations yield the same solution. The variant formulation requires slightly fewer iterations to converge, which could be due to the smaller search space resulting from Constraint (33). Nevertheless, its convergence time is slightly higher; this can be attributed to the higher cost per iteration resulting from the auxiliary differential equation representing Constraint (33) and the corresponding sensitivities. Finally, notice that the nonuniform discretization strategy did not converge after 1000 iterations.
The optimal trajectories obtained from the uniform discretization strategy are given in Figure 5. It is seen that the steady-state turnpike is realized for both the $N = 5$ and $N = 60$ cases. However, with the coarser discretization, the turnpike is realized only partially due to the limited number of control moves available and the inability to adjust their switching times optimally. Specifically, the control $u$ must depart from its optimal steady state as early as $t = 12$ in order to use the remaining two epochs to satisfy the terminal equality constraint. On the other hand, with $N = 60$ epochs, the turnpike is realized to (possibly) its full extent due to the much higher degree of freedom in the control moves. This is to be compared with the optimal trajectories in the right plot of Figure 6, where the proposed approach is shown to be able to yield essentially the same trajectories with only five control moves. The durations of the transient and turnpike phases in this case are obtained as $(\tau_1^*, \tau_2^*, \tau_3^*) = (0.6, 17.3, 2.1)$. The left plot in Figure 6 shows the optimal trajectories in the $t'$ domain, in which the optimization problem is solved. Observe that the duration of each phase is 1 regardless of the $\tau_i$ values.

4.2. Example 2

This example considers a non-isothermal Van de Vusse reactor adapted from [46], in which the following reactions occur:
$$\mathrm{A} \xrightarrow{k_1} \mathrm{B} \xrightarrow{k_1} \mathrm{C}, \qquad 2\,\mathrm{A} \xrightarrow{k_2} \mathrm{D},$$
with B the desired product. The dynamic model of the reactor is given as:
$$\begin{aligned}
\dot{C}_A(t) &= -k_1(T(t))\, C_A(t) - k_2(T(t))\, C_A(t)^2 + \frac{C_A^{in} - C_A(t)}{V(t)}\, F^{in}(t), \\
\dot{C}_B(t) &= k_1(T(t))\, \big(C_A(t) - C_B(t)\big) - C_B(t)\, \frac{F^{in}(t)}{V(t)}, \\
\dot{C}_C(t) &= k_1(T(t))\, C_B(t) - C_C(t)\, \frac{F^{in}(t)}{V(t)}, \\
\dot{C}_D(t) &= \tfrac{1}{2}\, k_2(T(t))\, C_A(t)^2 - C_D(t)\, \frac{F^{in}(t)}{V(t)}, \\
\dot{T}(t) &= h(t) + \alpha\, \big(T_c(t) - T(t)\big) + F^{in}(t)\, \frac{T^{in} - T(t)}{V(t)}, \\
\dot{T}_c(t) &= \beta\, \big(T(t) - T_c(t)\big) - \gamma\, P_c(t), \\
\dot{V}(t) &= F^{in}(t) - F^{out}(t), \\
F^{out}(t) &:= \eta\, V(t), \\
h(t) &:= -\delta \Big( k_1(T(t))\, \big(C_A(t)\, \Delta H_{AB} + C_B(t)\, \Delta H_{BC}\big) + k_2(T(t))\, C_A(t)^2\, \Delta H_{AD} \Big), \\
k_1(T(t)) &:= k_{0,1} \exp\!\left( \frac{-E_1}{T(t) + 273.15} \right), \qquad
k_2(T(t)) := k_{0,2} \exp\!\left( \frac{-E_2}{T(t) + 273.15} \right),
\end{aligned} \tag{38}$$
where $C_i$ denotes the concentration of species $i$; $T$ and $T_c$ are the temperatures of the reactor and cooling jacket, respectively; $V$ is the reaction volume; and $F^{out}$ is the outlet flowrate. Here, the original model in [46] has been modified to allow a varying volume. It is worth noting that, for the original model, conditions ensuring strict dissipativity, and thus the potential emergence of a turnpike, have been verified in [27]. Nonetheless, such a result is not required in this work since it makes no assumption on the presence of a turnpike. The initial conditions are given as:
$$\big(C_A(0), C_B(0), C_C(0), C_D(0)\big) = (0, 0, 0, 0)\ \mathrm{mol\, m^{-3}}, \quad \big(T(0), T_c(0)\big) = (108, 107.7)\ {}^\circ\mathrm{C}, \quad V(0) = 10^{-3}\ \mathrm{m^3}. \tag{39}$$
In addition, the model constants are provided in Table 2. For simplicity, the normalized heat transfer coefficients $\alpha$ and $\beta$ are assumed to be independent of the volume $V$ [47]. The operation takes $t_f = 10$ h. As a safety precaution, the reactor temperature $T$ must not exceed 110 °C at any time during the operation. Similarly, the concentration of D must not exceed 500 mol m$^{-3}$. At the final time, the reactor volume $V$ must be 0.01 m$^3$ or less. The optimization variables are the inlet flowrate $F^{in}$ and the cooling power $P_c$, which are allowed to vary within $[0, 40]$ m$^3$ h$^{-1}$ and $[0, 4000]$ kJ h$^{-1}$, respectively. With the goal of maximizing the production of B, the dynamic optimization problem can be formulated as:
$$\begin{aligned}
\min_{F^{in},\, P_c}\ & J(F^{in}, P_c) := -\int_0^{t_f} F^{out}(t, F^{in})\, C_B(t, [F^{in}, P_c])\, \mathrm{d}t, \\
\text{s.t.}\ & \text{Dynamic model (38)}, \quad t \in (0, t_f], \\
& V(t_f, F^{in}) - 10^{-2} \leq 0, \\
& T(t, [F^{in}, P_c]) \leq 110, \quad t \in [0, t_f], \\
& C_D(t, [F^{in}, P_c]) \leq 500, \quad t \in [0, t_f], \\
& \text{Initial conditions (39)}, \\
& F^{in}(t) \in [0, 40], \quad P_c(t) \in [0, 4000], \quad \text{a.e. in } [0, t_f],
\end{aligned} \tag{40}$$
The optimal steady-state values required for the nonuniform discretization and proposed approaches are obtained from
$$\begin{aligned}
\min_{F^{in}_{ss},\, P_{c,ss},\, \mathbf{C}_{ss},\, V_{ss},\, T_{ss},\, T_{c,ss}}\ & J := -F^{out}_{ss}\, C_{B,ss}, \\
\text{s.t.}\ & \text{steady-state model}, \\
& F^{in}_{ss} \in [0, 40], \quad P_{c,ss} \in [0, 4000], \\
& C_{A,ss} \in [0, 5000], \quad C_{B,ss} \in [0, 2000], \quad C_{C,ss} \in [0, 2000], \quad C_{D,ss} \in [30, 500], \\
& V_{ss} \in [0.01, 5], \quad T_{ss} \in [0, 110], \quad T_{c,ss} \in [0, 150],
\end{aligned} \tag{41}$$
where $\mathbf{C}$ is the vector containing all the concentrations, and the steady-state model refers to the dynamic model (38) with the time derivatives set to zero. Notice that the path constraints on $C_D$ and $T$ are included in Problem (41) as upper bounds for the corresponding variables. Except for these and the lower bound of zero on the concentrations, the other bounds on the state variables have no process implications and are placed so that the global solver can proceed. Once a global optimum is found, the solution must be checked to make sure these arbitrary bounds are not active.
The global solution of Problem (41) is obtained in less than $10^{-3}$ s, with the optimum point $F^{in,*}_{ss} = 40$ m$^3$ h$^{-1}$, $P_{c,ss}^* = 3040.6$ kJ h$^{-1}$, $(C_{A,ss}^*, C_{B,ss}^*, C_{C,ss}^*, C_{D,ss}^*) = (2944.7, 977.9, 486.2, 345.6)$ mol m$^{-3}$, $(T_{ss}^*, T_{c,ss}^*) = (110, 106.5)$ °C, and $V_{ss}^* = 1.8$ m$^3$.
The optimal solution and solver statistics for the different discretization strategies are given in Table 3. For the variant formulation of the proposed approach, the settings $\epsilon_a = 10^{-4}$ and $\epsilon_r = 10^{-1}$ are used. Similar to the previous example, the proposed approach yields a better optimal solution than the uniform discretization with the same number of epochs. The optimal solution of the latter improves upon increasing the number of epochs to $N = 60$. Nonetheless, it is still inferior to the one obtained from the proposed approach with $N = 7$. Increasing $N$ to 70 did not lead to an improved result, as the solver failed after 207 iterations. The same problem occurred with $N = 80$. This shows that a finer discretization is not always beneficial in practice, as it can lead to numerical issues. Similarly, the nonuniform discretization with $N = 7$ terminated with a failure message indicating local infeasibility. However, this problem is known to be feasible because a more constrained version of it, i.e., the proposed semi-uniform discretization approach with $N = 7$, is feasible (see Table 3). Therefore, the reported local infeasibility is only due to numerical issues that apparently arise from including the durations of the epochs as extra optimization variables. In terms of solution speed, the CPU times show that the proposed approach (main formulation) converges to a better solution about 10 times faster than the uniform discretization with $N = 60$, despite the significantly larger number of iterations taken by the proposed approach. The speed-up is even more remarkable for the variant formulation, at about 37-fold.
The optimal trajectories for the control variables and a selection of the state variables in the case of the uniform discretization with $N = 7, 60$ and the proposed approach (both variants) are plotted in Figure 7 and Figure 8, respectively. The units for the quantities are consistent with those reported in Table 2. Similar observations as in the previous example hold here, and are omitted for brevity. The only notable additional point is that, here, the optimal trajectories from the main and the variant formulations are not exactly the same, although the difference can hardly be noticed. Moreover, the main and the variant formulations converge to slightly different values for the triplet $(\tau_1^*, \tau_2^*, \tau_3^*)$, i.e., $(5.4, 93.2, 1.4)$ and $(2.9, 95.7, 1.4)$ h for the main and variant formulations, respectively. The optimal objective value from both formulations, however, is almost the same despite the slight difference in the computed optima.

Shorter Time Horizon

Here, the Van de Vusse reactor is revisited with a time horizon that is too short for the turnpike to appear. The purpose of this case study is to show that the proposed discretization approach still works under such conditions, although it may not be computationally more efficient than conventional discretization. To this end, the final time is reduced to $t_f = 0.2$ h, and the problem is solved using the uniform discretization with $N = 5, 7$ and the proposed approach with $N = 5$ ($N_s = 2$, $N_d = 2$). The computational results are given in Table 4, and the optimal trajectories for the case of $N = 5$ are plotted in Figure 9 and Figure 10. It is seen that the optimal trajectories do not reach a steady-state turnpike. Despite this, the proposed approach is still able to solve the problem, and even converges to a slightly better solution than the uniform discretization with $N = 5$. The durations of the transient and turnpike phases are obtained as $(\tau_1^*, \tau_2^*, \tau_3^*) = (0.11, 0, 0.09)$ h. Interestingly enough, the duration of the turnpike is pushed to zero by the optimizer, reducing the effective number of epochs to four. The convergence to a better solution despite the absence of a turnpike and the lower number of epochs can be explained by the inherent flexibility of the proposed approach in adjusting the location and duration of the epochs. Note, however, that the uniform discretization with $N = 7$ is able to yield a further improved solution. In both cases, the solution with the uniform discretization converges somewhat faster than the proposed approach. This suggests that, for very short time horizons, the extra computational overhead introduced by the proposed approach may not be offset by the reduction in the number of epochs.

4.3. Example 3

Finally, dynamic optimization of a continuous stirred-tank reactor adapted from [47] is presented. The following reactions take place in the reactor:
$$\mathrm{A} + \mathrm{B} \xrightarrow{k_1} \mathrm{P}, \qquad 2\,\mathrm{B} \xrightarrow{k_2} \mathrm{I},$$
where P and I are the desired and undesired products, respectively; $k_1$ and $k_2$ are the kinetic constants. Two pure streams at the flowrates $F_A^{in}$ and $F_B^{in}$ and concentrations $C_A^{in}$ and $C_B^{in}$, respectively, enter the reactor. The reactor is modeled as:
$$\begin{aligned}
\dot{C}_A(t) &= -k_1\, C_A(t)\, C_B(t) + \frac{F_A^{in}(t)}{V(t)}\, \big(C_A^{in} - C_A(t)\big) - \frac{F_B^{in}(t)}{V(t)}\, C_A(t), \\
\dot{C}_B(t) &= -k_1\, C_A(t)\, C_B(t) - 2\, k_2\, C_B^2(t) + \frac{F_B^{in}(t)}{V(t)}\, \big(C_B^{in} - C_B(t)\big) - \frac{F_A^{in}(t)}{V(t)}\, C_B(t), \\
\dot{C}_P(t) &= k_1\, C_A(t)\, C_B(t) - \frac{F_A^{in}(t) + F_B^{in}(t)}{V(t)}\, C_P(t), \\
\dot{C}_I(t) &= k_2\, C_B^2(t) - \frac{F_A^{in}(t) + F_B^{in}(t)}{V(t)}\, C_I(t), \\
\dot{V}(t) &= F_A^{in}(t) + F_B^{in}(t) - F^{out}(t),
\end{aligned} \tag{42}$$
with $F^{out}(t) := \alpha V(t)$ being the outlet flow rate, where $\alpha$ is a positive constant, and the initial conditions are given by
$$C_A(0) = C_{A0}, \quad C_B(0) = C_{B0}, \quad C_P(0) = C_{P0}, \quad C_I(0) = C_{I0}, \quad V(0) = V_0. \tag{43}$$
The model parameters and the initial conditions are found in Table 5. The operation takes t f = 50 min, and the objective is to maximize production of P during this period. The inlet flow rates are taken as the optimization variables. The following dynamic optimization problem is formulated:
$$\begin{aligned}
\min_{F_A^{in},\, F_B^{in}}\ & J(F_A^{in}, F_B^{in}) := -\int_0^{t_f} F^{out}(t, [F_A^{in}, F_B^{in}])\, C_P(t, [F_A^{in}, F_B^{in}])\, \mathrm{d}t, \\
\text{s.t.}\ & \text{Dynamic model (42)}, \quad t \in (0, t_f], \\
& V(t_f, [F_A^{in}, F_B^{in}]) - 10^{-3} \leq 0, \\
& C_I(t, [F_A^{in}, F_B^{in}]) \leq 0.14, \quad t \in [0, t_f], \\
& \text{Initial conditions (43)}, \\
& F_A^{in} \in [0, 0.01], \quad F_B^{in} \in [0.002, 0.01], \quad \text{a.e. in } [0, t_f],
\end{aligned} \tag{44}$$
The optimal steady-state values required for the nonuniform discretization and proposed approaches are obtained from
$$\begin{aligned}
\min_{F_{A,ss}^{in},\, F_{B,ss}^{in},\, \mathbf{C}_{ss},\, V_{ss}}\ & J := -F^{out}_{ss}\, C_{P,ss}, \\
\text{s.t.}\ & \text{steady-state model}, \\
& F_{A,ss}^{in} \in [0, 0.01], \quad F_{B,ss}^{in} \in [0.002, 0.01], \\
& C_{A,ss} \in [0, 2], \quad C_{B,ss} \in [0, 2], \quad C_{P,ss} \in [0, 2], \quad C_{I,ss} \in [0, 0.14], \\
& V_{ss} \in [0, 1],
\end{aligned} \tag{45}$$
where $\mathbf{C}$ is the vector containing all the concentrations, and the steady-state model refers to the dynamic model (42) with the time derivatives set to zero. Similar to the previous example, the bounds on the volume and on the concentrations of A, B, and P have no process implications and are placed so that the global solver can proceed. The obtained global solution is accepted if these arbitrary bounds are not active.
The global solution of Problem (45) is obtained in 0.016 s, with the optimum point $F_{A,ss}^{in,*} = F_{B,ss}^{in,*} = 0.01$ L min$^{-1}$, $(C_{A,ss}^*, C_{B,ss}^*, C_{P,ss}^*, C_{I,ss}^*) = (1.69, 0.43, 0.82, 0.13)$ mol L$^{-1}$, and $V_{ss}^* = 0.03$ L.
Table 6 shows the optimal solution and solver statistics for the uniform, nonuniform, and proposed discretization strategies. The tolerances $\epsilon_a = 0$ and $\epsilon_r = 10^{-2}$ are used for the variant formulation of the proposed approach. With $N = 5$ epochs, the proposed strategy and its variant yield a considerably improved solution compared to the uniform discretization. The variant formulation outperforms the main one in both number of iterations and CPU time. It also slightly outperforms the uniform discretization with $N = 5$ in the same respects. The durations of the transient and turnpike phases for the proposed approach (both formulations) are obtained as $(\tau_1^*, \tau_2^*, \tau_3^*) = (0, 45.3, 4.7)$ min.
The uniform strategy converges to the same solution with a much higher number of epochs ($N = 21$) and a markedly higher computational time (93% and 200% slower compared to the main and variant formulations, respectively). Here, the nonuniform strategy converges quite closely to the optimal solution of $J^* = -0.741$ with only two epochs. With only one more epoch, the nonuniform strategy is able to reach this solution (and slightly improve upon it). Nonetheless, that solution takes a much higher CPU time than the one with the proposed strategy, i.e., 1.8 and 2.8 times higher than the main and variant formulations, respectively. The durations of the three epochs used by the nonuniform discretization are obtained as $(45.5, 0.1, 4.4)$ min.
The optimal trajectories are shown in Figure 11 and Figure 12. The trajectories for the main and variant formulations of the proposed approach are identical.

5. Conclusions

A new adaptive control discretization approach for efficient dynamic optimization is proposed. The approach is based on turnpike theory in optimal control and is most suitable for continuous systems with sufficiently long time horizons, during which a steady state is likely to emerge. However, it can also be applied to other systems with a steady-state solution, whether or not the steady state actually appears in the solution trajectories. The special semi-uniform discretization enables approximating the turnpike structure with a minimal number of epochs, while avoiding the robustness issues that can be encountered with a fully nonuniform discretization strategy. Unlike some other adaptive discretization techniques, the proposed adaptive discretization is built directly into the problem formulation. Thus, one needs to solve only one optimization problem instead of a series of successively refined problems. Another advantage of the proposed approach is the use of globally optimal steady-state values in the formulation, which helps the optimizer avoid suboptimal solutions in case a steady-state turnpike emerges. It is shown that the proposed approach, especially the variant formulation, can significantly reduce the computational cost of dynamic optimization for systems of interest. However, a downside of the variant formulation is that the tolerance values can affect the performance of the numerical solution, and finding appropriate values for them may not be trivial. In this case, one may choose to use the main formulation instead, as it is still markedly superior to a uniform discretization and requires no tuning parameters. Future work may include applying the proposed approach to large-scale processes and optimal campaign continuous manufacturing problems, as described in [47].

Acknowledgments

Financial support from the Novartis-MIT Center for Continuous Manufacturing (Cambridge, MA, USA) is gratefully acknowledged.

Author Contributions

This work was done under the supervision of Paul I. Barton. Both authors contributed to the development of the proposed approach. Ali M. Sahlodin also performed the case studies and prepared the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cervantes, A.; Biegler, L. Optimization Strategies for Dynamic Systems. In Encyclopedia of Optimization; Floudas, C.A., Pardalos, P.M., Eds.; Springer: Boston, MA, USA, 2009; pp. 2847–2858. [Google Scholar]
  2. Chachuat, B. Nonlinear and Dynamic Optimization: From Theory to Practice-IC-32: Spring Term 2009; Polycopiés de l’EPFL, EPFL: Lausanne, Switzerland, 2009. [Google Scholar]
  3. Von Stryk, O.; Bulirsch, R. Direct and indirect methods for trajectory optimization. Ann. Oper. Res. 1992, 37, 357–373. [Google Scholar] [CrossRef]
  4. Biegler, L. An overview of simultaneous strategies for dynamic optimization. Chem. Eng. Process. 2007, 46, 1043–1053. [Google Scholar] [CrossRef]
  5. Kraft, D. On Converting Optimal Control Problems into Nonlinear Programming Problems. In Computational Mathematical Programming; Schittkowski, K., Ed.; Springer: Berlin/Heidelberg, Germany, 1985; Volume 15, pp. 261–280. [Google Scholar]
  6. Binder, T.; Blank, L.; Bock, H.; Bulirsch, R.; Dahmen, W.; Diehl, M.; Kronseder, T.; Marquardt, W.; Schlöder, J.; von Stryk, O. Introduction to Model Based Optimization of Chemical Processes on Moving Horizons. In Online Optimization of Large Scale Systems; Grötschel, M., Krumke, S., Rambau, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2001; pp. 295–339. [Google Scholar]
  7. Hartwich, A.; Marquardt, W. Dynamic optimization of the load change of a large-scale chemical plant by adaptive single shooting. Comput. Chem. Eng. 2010, 34, 1873–1889. [Google Scholar] [CrossRef]
  8. Binder, T.; Blank, L.; Dahmen, W.; Marquardt, W. Grid refinement in multiscale dynamic optimization. In European Symposium on Computer Aided Process Engineering-10; Pierucci, S., Ed.; Elsevier: Amsterdam, The Netherlands, 2000; Volume 8, pp. 31–36. [Google Scholar]
  9. Feehery, W.F.; Tolsma, J.E.; Barton, P.I. Efficient sensitivity analysis of large-scale differential-algebraic systems. Appl. Numer. Math. 1997, 25, 41–54. [Google Scholar] [CrossRef]
  10. Özyurt, D.B.; Barton, P.I. Cheap Second Order Directional Derivatives of Stiff ODE Embedded Functionals. SIAM J. Sci. Comput. 2005, 26, 1725–1743. [Google Scholar] [CrossRef]
  11. Özyurt, D.B.; Barton, P.I. Large-Scale Dynamic Optimization Using the Directional Second-Order Adjoint Method. Ind. Eng. Chem. Res. 2005, 44, 1804–1811. [Google Scholar] [CrossRef]
  12. Luus, R.; Cormack, D.E. Multiplicity of solutions resulting from the use of variational methods in optimal control problems. Can. J. Chem. Eng. 1972, 50, 309–311. [Google Scholar] [CrossRef]
  13. Luus, R.; Dittrich, J.; Keil, F.J. Multiplicity of solutions in the optimization of a bifunctional catalyst blend in a tubular reactor. Can. J. Chem. Eng. 1992, 70, 780–785. [Google Scholar] [CrossRef]
  14. Banga, J.R.; Seider, W.D. Global optimization of chemical processes using stochastic algorithms. In State of the Art in Global Optimization: Computational Methods and Applications; Floudas, C., Pardalos, P., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996; pp. 563–583. [Google Scholar]
15. Srinivasan, B.; Primus, C.; Bonvin, D.; Ricker, N. Run-to-run optimization via control of generalized constraints. Control Eng. Pract. 2001, 9, 911–919.
16. Binder, T.; Cruse, A.; Villar, C.C.; Marquardt, W. Dynamic optimization using a wavelet based adaptive control vector parameterization strategy. Comput. Chem. Eng. 2000, 24, 1201–1207.
17. Schlegel, M.; Stockmann, K.; Binder, T.; Marquardt, W. Dynamic optimization using adaptive control vector parameterization. Comput. Chem. Eng. 2005, 29, 1731–1751.
18. Schlegel, M.; Marquardt, W. Detection and exploitation of the control switching structure in the solution of dynamic optimization problems. J. Process Control 2006, 16, 275–290.
19. Schlegel, M. Adaptive Discretization Methods for the Efficient Solution of Dynamic Optimization Problems; VDI-Verlag: Dusseldorf, Germany, 2005.
20. Liu, P.; Li, G.; Liu, X.; Zhang, Z. Novel non-uniform adaptive grid refinement control parameterization approach for biochemical processes optimization. Biochem. Eng. J. 2016, 111, 63–74.
21. Dorfman, R.; Samuelson, P.A.; Solow, R.M. Linear Programming and Economic Analysis; McGraw Hill: New York, NY, USA, 1958.
22. McKenzie, L. Turnpike theory. Econometrica 1976, 44, 841–865.
23. Wilde, R.; Kokotovic, P. A dichotomy in linear control theory. IEEE Trans. Autom. Control 1972, 17, 382–383.
24. Anderson, B.D.; Kokotovic, P.V. Optimal control problems over large time intervals. Automatica 1987, 23, 355–363.
25. Rao, A.; Mease, K. Dichotomic basis approach to solving hyper-sensitive optimal control problems. Automatica 1999, 35, 633–642.
26. Grüne, L.; Müller, M.A. On the relation between strict dissipativity and turnpike properties. Syst. Control Lett. 2016, 90, 45–53.
27. Faulwasser, T.; Korda, M.; Jones, C.N.; Bonvin, D. On turnpike and dissipativity properties of continuous-time optimal control problems. Automatica 2017, 81, 297–304.
28. Galán, S.; Feehery, W.F.; Barton, P.I. Parametric sensitivity functions for hybrid discrete/continuous systems. Appl. Numer. Math. 1999, 31, 17–47.
29. Rawlings, J.; Amrit, R. Optimizing Process Economic Performance Using Model Predictive Control. In Nonlinear Model Predictive Control; Magni, L., Raimondo, D., Allgöwer, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 384, pp. 119–138.
30. Carlson, D.; Haurie, A.; Leizarowitz, A. Infinite Horizon Optimal Control: Deterministic and Stochastic Systems; Springer: Berlin/Heidelberg, Germany, 1991.
31. Zaslavski, A.J. Turnpike Properties in the Calculus of Variations and Optimal Control; Springer: New York, NY, USA, 2006.
32. Zaslavski, A.J. Turnpike Properties of Optimal Control Systems. Aenorm 2012, 20, 36–40.
33. Zaslavski, A.J. Turnpike Phenomenon and Infinite Horizon Optimal Control; Springer: Cham, Switzerland, 2014.
34. Faulwasser, T.; Korda, M.; Jones, C.N.; Bonvin, D. Turnpike and dissipativity properties in dynamic real-time optimization and economic MPC. In Proceedings of the 2014 IEEE 53rd Annual Conference on Decision and Control (CDC), Los Angeles, CA, USA, 15–17 December 2014; pp. 2734–2739.
35. Grüne, L. Economic receding horizon control without terminal constraints. Automatica 2013, 49, 725–734.
36. Faulwasser, T.; Bonvin, D. On the design of economic NMPC based on approximate turnpike properties. In Proceedings of the 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, Japan, 15–18 December 2015; pp. 4964–4970.
37. Trélat, E.; Zuazua, E. The turnpike property in finite-dimensional nonlinear optimal control. J. Differ. Equ. 2015, 258, 81–114.
38. Teo, K.L.; Jennings, L.S.; Lee, H.W.J.; Rehbock, V. The control parameterization enhancing transform for constrained optimal control problems. ANZIAM J. 1999, 40, 314–335.
39. Lee, H.; Teo, K.; Rehbock, V.; Jennings, L. Control parametrization enhancing technique for optimal discrete-valued control problems. Automatica 1999, 35, 1401–1407.
40. Li, R.; Teo, K.; Wong, K.; Duan, G. Control parameterization enhancing transform for optimal control of switched systems. Math. Comput. Model. 2006, 43, 1393–1403.
41. Wächter, A.; Biegler, L.T. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 2006, 106, 25–57.
42. Tolsma, J.; Barton, P.I. DAEPACK: An Open Modeling Environment for Legacy Models. Ind. Eng. Chem. Res. 2000, 39, 1826–1839.
43. Tolsma, J.E.; Barton, P.I. Hidden Discontinuities and Parametric Sensitivity Calculations. SIAM J. Sci. Comput. 2002, 23, 1861–1874.
44. Tawarmalani, M.; Sahinidis, N.V. A polyhedral branch-and-cut approach to global optimization. Math. Program. 2005, 103, 225–249.
45. GAMS. GAMS–A User’s Guide. Available online: https://www.gams.com/24.8/docs/userguides/GAMSUsersGuide.pdf (accessed on 14 December 2017).
46. Rothfuss, R.; Rudolph, J.; Zeitz, M. Flatness based control of a nonlinear chemical reactor model. Automatica 1996, 32, 1433–1439.
47. Sahlodin, A.M.; Barton, P.I. Optimal Campaign Continuous Manufacturing. Ind. Eng. Chem. Res. 2015, 54, 11344–11359.
Figure 1. An optimal control or state trajectory with two transient phases and a turnpike in between. The green solid line represents the (globally) optimal steady state u_ss* or x_ss*, and the red dashed lines denote its ε-neighborhood.
Figure 2. Schematic of an optimal control trajectory resulting from a nonuniform discretization. The dashed line illustrates the ideal optimal trajectory that might be obtained without discretization.
Figure 3. Schematic of the semi-uniform control discretization in the real time domain t (left) and the transformed time domain (right). The three phases (two transients and the turnpike in between) are delineated by the blue dashed lines.
Figure 4. Adaptive semi-uniform discretization with a suboptimal (left) and the optimal (right) solution for the τ_i values. The dashed trajectory shows the ideal optimal control as a reference.
Figure 5. Optimal trajectories for Problem (37) in the case of a uniform discretization with N = 5 (left) and N = 60 (right) epochs.
Figure 6. Optimal trajectories for Problem (37) in the case of the proposed semi-uniform discretization approach with N = 5 epochs (both the main and variant formulations), shown in the real (left) and transformed (right) time domains.
Figure 7. Optimal trajectories for Problem (40) in the case of a uniform discretization with N = 7 (left) and N = 60 (right) epochs.
Figure 8. Optimal trajectories for Problem (40) in the case of the main (left) and the variant (right) formulations of the proposed approach with N = 7 epochs.
Figure 9. Optimal trajectories for Problem (40) with a short time horizon solved using a uniform discretization and N = 5 epochs.
Figure 10. Optimal trajectories for Problem (40) with a short time horizon solved using the proposed approach and N = 5 epochs.
Figure 11. Optimal trajectories for Problem (44) in the case of the uniform (left) and proposed (right) discretization methods, both with N = 5 epochs.
Figure 12. Optimal trajectories for Problem (44) in the case of the uniform discretization with N = 21 (left) and the nonuniform discretization with N = 3 (right) epochs.
Table 1. Optimal objective values, number of iterations, and CPU times for different discretization strategies in Problem (37). n_p denotes the total number of optimization variables for each strategy.

Discretization | N | n_p | J* | Iter | CPU (s)
Uniform | 5 | 5 | 9.32 | 13 | 0.13
Uniform | 60 | 60 | 2.45 | 103 | 8.02
Nonuniform | 5 | 9 | not converged | >1000 | >11
Proposed | 5 (N_s = 2, N_d = 2) | 7 | 2.45 | 38 | 0.79
Proposed (variant) | 5 (N_s = 2, N_d = 2) | 7 | 2.45 | 33 | 0.89
Table 2. Constants for the Van de Vusse reactor model (38).

Parameter | Value | Parameter | Value
k_{0,1} | 1.29 × 10^12 h^-1 | α | 30.828 h^-1
k_{0,2} | 9.04 × 10^6 m^3 (mol h)^-1 | β | 86.688 h^-1
E_1 | 9758.3 K | γ | 0.1 K kJ^-1
E_2 | 8560 K | δ | 3.52 × 10^-4 m^3 K kJ^-1
ΔH_AB | 4.2 kJ mol^-1 | η | 30 m^(3/2) h^-1
ΔH_BC | -11 kJ mol^-1 | T_in | 104.9 °C
ΔH_AD | -41.85 kJ mol^-1 | C_A,in | 5.10 × 10^3 mol m^-3
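For readers reproducing this case study, the constants in Table 2 can be collected into a single parameter structure before implementing model (38). The following is a minimal sketch in Python; the dictionary name and key spellings are illustrative choices, and only the numerical values and units are taken from Table 2.

```python
# Constants for the Van de Vusse reactor model (38), transcribed from Table 2.
# Names and structure are illustrative; only the values come from the table.
van_de_vusse_constants = {
    "k0_1": 1.29e12,    # h^-1
    "k0_2": 9.04e6,     # m^3 (mol h)^-1
    "E1": 9758.3,       # K
    "E2": 8560.0,       # K
    "dH_AB": 4.2,       # kJ mol^-1
    "dH_BC": -11.0,     # kJ mol^-1
    "dH_AD": -41.85,    # kJ mol^-1
    "alpha": 30.828,    # h^-1
    "beta": 86.688,     # h^-1
    "gamma": 0.1,       # K kJ^-1
    "delta": 3.52e-4,   # m^3 K kJ^-1
    "eta": 30.0,        # m^(3/2) h^-1
    "T_in": 104.9,      # degrees C
    "C_A_in": 5.10e3,   # mol m^-3
}
```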
Table 3. Optimal objective values, number of iterations, and CPU times for different discretization strategies in Problem (40). n_p denotes the total number of optimization variables for each strategy.

Discretization | N | n_p | J* | Iter | CPU (s)
Uniform | 7 | 14 | -3.34 × 10^5 | 251 | 14.7
Uniform | 60 | 120 | -3.84 × 10^5 | 227 | 798
Uniform | 70 | 140 | failed | 207 | 645
Uniform | 80 | 160 | failed | 295 | 1160
Nonuniform | 7 | 19 | failed | 1212 | 134
Proposed | 7 (N_s = 3, N_d = 3) | 15 | -3.87 × 10^5 | 800 | 75.2
Proposed (variant) | 7 (N_s = 3, N_d = 3) | 15 | -3.87 × 10^5 | 210 | 21.4
Table 4. Optimal objective values, number of iterations, and CPU times for different discretization strategies in Problem (40) for the case of a short time horizon.

Discretization | N | n_p | J* | Iter | CPU (s)
Uniform | 5 | 10 | 3987 | 44 | 1.65
Uniform | 7 | 14 | 4058 | 53 | 2.29
Proposed | 5 (N_s = 2, N_d = 2) | 11 | 4009 | 69 | 2.77
Proposed (variant) | 5 (N_s = 2, N_d = 2) | 11 | 4009 | 53 | 2.33
Table 5. Constants and initial conditions for the model (42).

Parameter | Value | Initial Condition | Value
k_1 | 0.8 L (mol min)^-1 | C_A0 | 0 mol L^-1
k_2 | 0.5 L (mol min)^-1 | C_B0 | 0 mol L^-1
C_A,in | 5 mol L^-1 | C_P0 | 0 mol L^-1
C_B,in | 3 mol L^-1 | C_I0 | 0 mol L^-1
α | 0.119 L^(1/2) min^-1 | V_0 | 0.001 L
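As with Table 2, the data in Table 5 can be transcribed directly into code when implementing model (42). The sketch below is a minimal Python transcription under the assumption that a simple dictionary layout suffices; the names are illustrative, and only the numerical values and units come from the table.

```python
# Constants and initial conditions for model (42), transcribed from Table 5.
# Variable names are illustrative; the model equations are given in the article.
model_42_constants = {
    "k1": 0.8,        # L (mol min)^-1
    "k2": 0.5,        # L (mol min)^-1
    "C_A_in": 5.0,    # mol L^-1
    "C_B_in": 3.0,    # mol L^-1
    "alpha": 0.119,   # L^(1/2) min^-1
}
model_42_initial_conditions = {
    "C_A0": 0.0,      # mol L^-1
    "C_B0": 0.0,      # mol L^-1
    "C_P0": 0.0,      # mol L^-1
    "C_I0": 0.0,      # mol L^-1
    "V0": 0.001,      # L
}
```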
Table 6. Optimal objective values, number of iterations, and CPU times for different discretization strategies in Problem (44). n_p denotes the total number of optimization variables for each strategy.

Discretization | N | n_p | J* | Iter | CPU (s)
Uniform | 5 | 10 | 0.66 | 31 | 1.3
Uniform | 14 | 28 | 0.734 | 23 | 1.26
Uniform | 20 | 40 | 0.739 | 27 | 2.38
Uniform | 21 | 42 | 0.741 | 34 | 3.53
Nonuniform | 2 | 4 | 0.731 | 35 | 0.59
Nonuniform | 3 | 7 | 0.742 | 145 | 3.28
Proposed | 5 (N_s = 2, N_d = 2) | 11 | 0.741 | 29 | 1.83
Proposed (variant) | 5 (N_s = 2, N_d = 2) | 11 | 0.741 | 24 | 1.18
