Article

Predictions Based on Evolutionary Algorithms Using Predefined Control Profiles

1 Control and Electrical Engineering Department, “Dunarea de Jos” University, 800008 Galati, Romania
2 Faculty of Science and Environment, “Dunarea de Jos” University, 800008 Galati, Romania
3 Mechanical Engineering Department, “Dunarea de Jos” University, 800008 Galati, Romania
* Author to whom correspondence should be addressed.
Electronics 2022, 11(11), 1682; https://doi.org/10.3390/electronics11111682
Submission received: 27 April 2022 / Revised: 18 May 2022 / Accepted: 23 May 2022 / Published: 25 May 2022
(This article belongs to the Collection Predictive and Learning Control in Engineering Applications)

Abstract

The general motivation of our work is to meet the main time constraint when implementing a control loop: the Controller's execution time must be less than the sampling period. This paper proposes a practical method to reduce the computational complexity of controllers whose predictions are based on an Evolutionary Algorithm (EA), as in Model Predictive Control or, more generally, Receding Horizon Control structures. The main drawback of metaheuristic algorithms (including EAs) working in control structures is their high computational cost. Usually, the control variables take values between minimum and maximum technological limits. This work's main idea is to restrict the control variables' domain to a neighbourhood of a predefined control profile. The Controller thus searches a smaller domain of the control variables without tracking the predefined control profile or a reference trajectory. The convergence of the EA under consideration is not affected; hence, the same best predictions are found. The predefined control profile is either already known or can be determined by solving the optimal control problem offline, in open loop and without time constraints. This work also presents a simulation study applying the proposed technique to two benchmark control problems. The results prove that the computational complexity decreases significantly.

1. Introduction

Process engineering often involves controlling a dynamic system subject to a performance index. If the controlled process has certain mathematical properties, theoretical control laws can be applied successfully. This approach produces controllers with moderate computational complexity, which, in addition, can be estimated. A realistic alternative must be found when the process has profound nonlinearities or its model is incomplete, imprecise or uncertain. A possible solution is to implement adequate control structures using metaheuristic algorithms, such as EAs.
Metaheuristic algorithms [1,2,3] are widely recommended in control engineering [4,5,6,7,8] because they are robust and can deal with complex problems. However, there is a price to pay: the Controller's computational effort can be very large and entail a lengthy execution time, which has to be less than the sampling period. For the Controller's implementation, fulfilling this time constraint is a sine qua non. Therefore, reducing the computational complexity is a crucial subject for control applications.
In previous papers, the authors have used the Receding Horizon Control (RHC) [9,10,11,12] to solve Optimal Control Problems (OCPs). RHC is an appropriate control structure due to several properties:
  • RHC is a closed-loop control structure (Appendix A) using a Process Model (PM).
  • The Controller makes predictions concerning the optimal control sequence over the prediction horizon using the PM [12,13,14].
  • The Controller can integrate Metaheuristics, such as the EA, which can be used to compute optimal predictions [12,15,16].
The Controller calls the EA in real time at each sampling period and yields an optimal prediction. The EA's population comprises individuals (chromosomes) that are candidate predictions, namely, sequences of control variables' values. Hence, the EA searches for the optimal prediction inside the control variables' space. The computational complexity is related to how extended this search space is: roughly speaking, the larger the search space, the bigger the computational complexity.
When EAs are used as metaheuristics, the predicted sequences can be encoded using a technique that decreases the computational complexity [17]. In paper [18], the authors proposed a method to shrink the control variables’ space using an additional state estimator and a so-called “quality criterion”. The latter is specific to each Process and is expressed through a mathematical constraint characterizing the desired evolution. That is why this method is not always applicable.
In this work, which deals with the same topic as paper [18], we propose a method to shrink the search space that can be applied to any Process considered in OCPs. Obviously, before implementing the closed-loop control structure, the control engineer will have verified that a certain version of the EA is appropriate for solving the OCP at hand. Hence, an “optimal” or quasi-optimal solution is available, which gives the best-known evolution of the process, optimizing the performance index [19]. The control inputs evolve inside the technological limits given in the OCP's statement as bound constraints. A “reference” for the future closed-loop control structure can be the control profile (CP) that generated the open-loop solution. This CP is the sequence of the best control values and is referred to as the predefined CP.
The predefined CP is used in a specific way. The Controller's objective is not to mechanically reproduce this CP, hoping that the optimal state trajectory will be reproduced; if only because of perturbations, this task would not be feasible. The Process and PM always behave differently, even if the model is very accurate. That is why closed-loop control is mandatory: at each sampling moment, the Controller obtains the actual Process state, which may be affected by perturbations. The predefined CP is used only initially, to compute the control ranges, and no longer matters in the control phase; the Controller merely adapts the current control range at each sampling period.
Because the main objective is to solve OCPs, Section 2 briefly recalls the typical elements of an OCP, allowing the paper to be self-contained and notations to be introduced.
Section 3 describes how the predefined CP can be defined and used by the proposed Controller, such that the control variables’ ranges would be adapted at each sampling period. The structure and implementation of the proposed Controller are also presented. The Controller calls the Predictor, which adapts the control ranges before calling the EA. Each control variable will be constrained to belong to a narrower interval. The control ranges have a crucial influence on the initial population generated by the EA and even the search space. The Predictor’s algorithm is given in Section 3.3.
Our work is validated through a simulation study, including two case studies devoted to two benchmark OCPs. Section 4 sets the simulation objectives and describes the general simulation algorithm for a control action during the control horizon.
The first case study—presented in Section 5—deals with a monovariable process, and its objective function is only a final cost. The second one—illustrated in Section 6—considers a multivariable Process with two control inputs, and its objective function is an integral term added to a final cost.
Both OCPs have been analyzed and implemented in previous works [15,16,18] within the same context: the RHC and similar EAs integrated into the controllers. The present work, however, pursues another objective: to propose a new Controller using a predefined CP to adapt the control ranges. After implementing the new Controller, we conducted simulation series with the old and new Controllers and made a comparative analysis. The conclusion is, undoubtedly, that the proposed Controller decreases the computational complexity significantly.
Our paper is not devoted to controlling a specific Process; the presentation emphasizes that the purpose of the proposed method is to reduce the computational complexity. This method can be a solution for controlling many kinds of processes and follows the general motivation of our work, namely time-constraint fulfilment: the Controller's run time must be less than the sampling period.
To help the reader fully understand and reuse the algorithms presented in this paper, we attach all the programs as Supplementary Materials and give all the necessary details in Appendix A, Appendix B, Appendix C and Appendix D.

2. Optimal Control Problems—Defining Elements

Because the structure of OCPs is well known, we give only a few details, mainly to introduce notations rather than to develop the subject. We shall also avoid some mathematical details to simplify the presentation.

2.1. Process Model

As in many works, the Process is modelled by ordinary differential and algebraic equations:

$\dot{X}(t) = [\, f_1(X,U,W) \;\cdots\; f_n(X,U,W) \,]^T$ (ordinary differential equations)
$g_i(X,U,W) = 0,\; i = 1,\dots,p$ (algebraic equations); $W$: process parameters set. (1)

where the variables have the usual meaning:

$X(t) = [\, x_1(t) \;\cdots\; x_n(t) \,]^T$, a vector with $n$ state variables;
$U(t) = [\, u_1(t) \;\cdots\; u_m(t) \,]^T$, a vector with $m$ control variables.
Examples of Process Models are Equation (20) from Section 5.1 and Equation (26) from Section 6.1.

2.2. Constraints

Constraints that guide the process evolution accompany the ordinary differential equations. There are many constraint types in general systems theory, but we mention only those used in the two case studies presented in the sequel.
There is a minimum set of constraints that have to be met:
Control horizon: $t \in [t_0, t_{final}];\; t_0 = 0$, (2)

where $t_{final}$ denotes the final moment of the control horizon.

Initial state: $X(0) = X_0 \in \mathbb{R}^n$. (3)

Bound constraints: $u_j(t) \in \Omega_j \equiv [u_{\min}^j,\, u_{\max}^j];\; j = 1, \dots, m;\; 0 < t < t_{final}$. (4)
We consider that the values $u_{\min}^j,\, u_{\max}^j$ are the technological limits of the control variable $u_j(t)$, given in the OCP's statement.
If the control system's sampling period is $T$, then the control horizon contains $H$ sampling periods, and it holds:

$t_{final} = H \times T$.
Solving OCPs using EAs involves discretizing the control horizon and control variables’ values. Therefore, we consider a supplementary constraint: the control variables’ values are constant during each sampling period. So, each control variable is a step function.
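The step-function (zero-order-hold) convention above can be made concrete with a small sketch; the function name and the array layout are illustrative, not taken from the paper's programs:

```python
import numpy as np

def zoh_control(cp, T, t):
    """Evaluate a step-function control profile at continuous time t.

    cp : array of shape (H, m), one control vector per sampling period
    T  : sampling period
    t  : continuous time, 0 <= t < H*T
    The value is held constant over [kT, (k+1)T).
    """
    k = min(int(t // T), len(cp) - 1)   # index of the current sampling period
    return cp[k]

# A 4-period profile for a single control variable (m = 1):
cp = np.array([[0.5], [1.2], [0.8], [0.0]])
zoh_control(cp, T=1.0, t=1.7)   # -> array([1.2])
```

This is exactly the discretization assumed throughout the paper: inside a sampling period, the control vector does not change.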

2.3. Performance Index

An OCP consists in determining the control profile $U(\cdot)$ that optimizes (maximizes or minimizes) a specific objective function (or cost function) $J$ having the general form:

$J(U(\cdot), X_0) = \int_{0}^{t_{final}} L(X(\tau), U(\tau))\, d\tau + J_{final}$. (5)
The specific function $L$ characterizes the integral component of the objective function $J$. When the Controller makes predictions, the presence of a final cost in the objective function's expression [17] imposes a particular choice of the prediction horizon.
Remark 1.
When the objective function has a final cost (Mayer term), whether or not there is an integral term (Lagrange term), the prediction horizon must include the control horizon's final time, which is the most difficult situation in terms of computational complexity.
That is why, for the moment, we shall consider only the final cost:
$J(U(\cdot), X_0) \equiv J_{final} = J(X(t_{final}))$.
The OCP's solution related to an initial state is the function $U(\cdot)$ that optimizes the cost function. The optimal value $J^0$ is called the performance index. For maximization, this value is given hereafter:

$J^0 = \max_{U(\cdot)} J(X(t_{final})) \equiv \max_{U(\cdot)} J(U(\cdot), X_0)$.
We are interested in generating and recording the best CP, which is the one that engenders the open-loop performance index, as we shall see in our approach.
As stated before, this problem has no connection with how its solution is implemented in a closed loop. Therefore, we shall refer to it as the open-loop OCP, its solution being an open-loop solution, that is, just a CP calculated offline. In fact, an open-loop solution alone is useless for control: it is a mathematical solution “on paper”, because the PM always has a different dynamic than the real Process, however small the difference may be.
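For a final-cost-only objective, evaluating a candidate CP reduces to integrating the PM over the control horizon and reading the final cost. A minimal Python sketch; the forward-Euler integrator and all names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mayer_cost(cp, x0, f, T, final_cost, substeps=10):
    """Evaluate J = J_final(X(t_final)) for a candidate control profile.

    cp         : (H, m) array, one constant control vector per sampling period
    x0         : initial state X_0
    f          : process-model right-hand side, f(x, u) -> dx/dt
    T          : sampling period
    final_cost : function of the final state, J_final(X(t_final))
    """
    x = np.asarray(x0, dtype=float)
    h = T / substeps                  # Euler step inside one sampling period
    for u in cp:                      # the control is constant over each period
        for _ in range(substeps):
            x = x + h * f(x, u)       # forward-Euler integration of the PM
    return final_cost(x)
```

In practice a higher-order integrator would be used; the point is only that each cost evaluation entails a full numerical integration, which is why the number of objective-function calls is a natural complexity measure later in the paper.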

3. Controller with Predictions Based on EAs and Control Profiles

3.1. Main Idea of the Proposed Controller

As we have already mentioned, the objective of the proposed Controller is to decrease the computational complexity when the predictions are computed using EAs. In other words, the optimal problem solved at each sampling period is to determine the optimal prediction, which has exponential complexity. A stochastic algorithm, the EA, will determine a quasi-optimal prediction with moderate computational complexity. Moreover, this computational complexity can be decreased by adapting the control ranges as in the sequel.
The technological limits cover the control variables' domain for all sampling periods but are generally too large for a specific sampling period. Therefore, these limits can be adjusted for each sampling period; this paper proposes to define the control ranges around the values belonging to the predefined CP.
The individuals of the EA populations are predictions, that is, sequences of control variables placed inside the technological limits (Equation (4)). The main idea is to adapt the intervals $\Omega_i \equiv [u_{\min}^i,\, u_{\max}^i],\; i = 1, \dots, m$, for each sampling period such that a large enough neighbourhood of a good CP becomes the new control variables' domain.
Figure 1 illustrates this approach for the control variable $u_i(t),\; 1 \le i \le m,\; t \in [0, t_{final}]$. The blue curve corresponds to the optimal (or quasi-optimal) solution, which is the open-loop solution of the OCP at hand. The open-loop OCP can be solved offline using an appropriate tool, including an EA. What we obtain can constitute a good reference for the Process evolution, namely the reference solution, state trajectory and performance index. All these data can be recorded into a file and used in our approach. The blue curve corresponds to a good solution, not necessarily the optimum one. This optimal or quasi-optimal solution can be considered a reference solution for our proposed Controller.
The red curve corresponds to a real solution achieved by the closed-loop control structure. Because our desideratum is to generate quasi-optimal behaviour of the Process, it is supposed that the red solution will be placed in a neighbourhood of the blue curve and will generate a close performance index. The dashed patterns indicate the maximum and minimum control values and delimit this neighbourhood, which is, in fact, a “tube” around the blue curve (reference solution). These patterns replicate the blue curve translated by $\pm\Delta_i$, a parameter defining the new control variable's range, for example:

$\Delta_i = \alpha\,(u_{\max}^i - u_{\min}^i);\quad \alpha = 20\%$.
In this way, each new range has a smaller extent than the technological range, which stretches between the technological limits; this holds for all the control variables.
The EA’s implementation led us to consider the following discretization constraint:
$U(t) = U(kT)$ for $kT \le t < (k+1)T;\; k = 0, \dots, H-1$. (7)
To simplify the notations and equations’ form and because the context does not allow any mathematical confusion, we adopt the following convention in the sequel: the time moment k T will be simply denoted by k . For example, it holds:
$U(kT) \equiv U(k) \equiv [u_1(k), \dots, u_m(k)]^T$. (8)
The discretization constraint means the control vector $U(k)$ has constant elements and acts inside the sampling period $[kT, (k+1)T)$.
Remark 2.
A control profile (CP) is a sequence of $H$ control vectors $U(0), U(1), \dots, U(H-1)$ having constant values, as in Equation (7).
In the sequel, the reference solution will be a step function called CP (see Remark 2). Therefore, all the curves from Figure 1 will, in fact, be step functions, as in the example presented in Figure 7. This fact does not change the effect of the range adaptation.
The control ranges $R_k,\; k = 0, \dots, H-1$, can be determined before the control loop starts working.
Let us consider the reference CP as below:
$U^r(k) = [\, u_1^r(k) \;\; u_2^r(k) \;\cdots\; u_m^r(k) \,]^T;\quad k = 0, \dots, H-1$. (9)
In the context presented before, the control range for the sampling period k is the following set:
$R_k = I_1(k) \times I_2(k) \times \cdots \times I_m(k)$. (10)
The shrunken intervals $I_i(k)$ are defined below:

$I_i(k) \equiv [u_{\min}^i(k),\, u_{\max}^i(k)];\quad 1 \le i \le m;$
$u_{\min}^i(k) \equiv u_i^r(k) - \Delta_i;\quad u_{\max}^i(k) \equiv u_i^r(k) + \Delta_i$. (11)
A control range $R_k$ limits the search domain for the control vector $U(k)$, as illustrated in Figure 2 (similar to Figure 3 of paper [18]).
In this way, the bound constraint Equation (4) is replaced with the relation Equation (12) that adapts the control range to each sampling period.
$U(k) \in R_k;\quad k = 0, \dots, H-1$. (12)
The EA’s convergence toward the optimal prediction can be impeded when some ranges are too narrow, and there is insufficient search space for the control values. In other words, the optimal prediction must be entirely included in the blue zone from Figure 1.
Nota Bene:
The smaller the value $\Delta_i$, the smaller the extent of the new control variables' range. This implication seems favourable for our objective and encourages us to choose a small value $\Delta_i$. However, a value $\Delta_i$ that is too small will make the EA lose convergence, that is, yield a value of $J$ worse than the reference value. That is why a practical approach is to choose $\Delta_i$ after a few simulations.
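The offline range construction of Equations (9)–(11) can be sketched as follows. Clipping the shrunken intervals to the technological limits is our assumption, consistent with the later observation that the control ranges cannot leave those limits; the function name and array layout are illustrative:

```python
import numpy as np

def build_control_ranges(cp_ref, u_min, u_max, alpha=0.2):
    """Shrunken per-period control ranges around a predefined CP.

    cp_ref : (H, m) reference CP, one row U^r(k) per sampling period
    u_min, u_max : (m,) technological limits of each control variable
    alpha  : relative half-width; alpha = 0.2 gives Delta_i = 20% of the range

    Returns (lo, hi), both (H, m): the adapted bounds u_min^i(k), u_max^i(k),
    clipped so they never leave the technological limits.
    """
    delta = alpha * (np.asarray(u_max) - np.asarray(u_min))   # Delta_i
    lo = np.clip(cp_ref - delta, u_min, u_max)                # u_min^i(k)
    hi = np.clip(cp_ref + delta, u_min, u_max)                # u_max^i(k)
    return lo, hi

# Single control variable, technological range [0, 2], hence Delta_1 = 0.4:
cp_ref = np.array([[0.1], [1.0], [1.9]])
lo, hi = build_control_ranges(cp_ref, u_min=[0.0], u_max=[2.0])
# lo = [[0.0], [0.6], [1.5]],  hi = [[0.5], [1.4], [2.0]]
```

Note how the first and last periods illustrate the clipping: the tube around the reference CP is truncated where it would cross a technological limit.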

3.2. Controller Structure with the Adaptation of Control Variables’ Range

The proposed controller is a specific version of the Receding Horizon Controller (Appendix A). To illustrate the specificity of our work, Figure 3 presents details of the proposed Controller. Let us notice the following aspects of this structure.
  • The prediction and optimization module, called Predictor in the sequel, is implemented using an EA, not necessarily a particular version of the EA. The sole requirement is good efficiency in solving the optimization problem.
  • The prediction’s length depends on the difference between the current moment ($k,\; 0 \le k < H$) and the final one ($H$). So, a prediction has $H-k$ control values for the remaining sampling periods.
  • The objective function computation module uses the PM and an integration procedure. The Predictor sends a candidate prediction $\bar{U}(k)$ and the current state vector $X(k)$ to this module, both needed by the numerical integration.
  • The Controller adapts the control variables’ range $R_k$ online to the sampling period’s rank ($U(k) \in R_k,\; k = 0, \dots, H-1$). These ranges are determined offline using the reference CP, that is, the CP yielding the optimal or quasi-optimal process evolution from an initial state vector.
  • The online adaptation of the control ranges decreases the computational complexity while keeping the EA’s convergence.
  • At moment $k$, the Controller sends the first element $U(k)$ of the predicted sequence to the Process.
  • The grey arrows correspond to data obtained offline and available to the Controller, while the red arrows indicate the data exchanged between the Process and the Controller.
The EA is expected to find the optimal prediction for the current step k , using the control variables range R k , more efficiently than when the technological limits are adopted.
Remark 3.
A good tuning of the Controller’s parameters will generate a suitable Process evolution. The closed-loop state trajectory will be in the neighbourhood of the reference state trajectory owing to two factors:
-
The closed-loop CP, which is effectively achieved by the control structure, will be in the neighbourhood of the reference CP;
-
The PM accuracy is good enough to involve the state predictions resembling the process state trajectories.
A simulation study presented in the next sections will prove that the desideratum stated by Remark 3 is reachable, and the Controller will work such that the Process will evolve quasi-optimally.

3.3. Implementation of the Controller

To describe clearly the meaning of predicted sequences, we recall their structure hereafter. In the sampling period $[k, k+1)$, a candidate prediction has the following structure:

$\bar{U}(k) = \langle U(k|k),\; U(k+1|k),\; \dots,\; U(H-1|k) \rangle$. (13)
The value $U(k+i|k)$ is the prediction of $U(k+i)$ using the knowledge available up to moment $k$. Using Equations (1) and (13), the Controller also predicts the state sequence:
$\bar{X}(k) = \langle X(k|k),\; X(k+1|k),\; \dots,\; X(H|k) \rangle$. (14)
The value $X(k+i|k)$ is the prediction for the state $X(k+i)$.
The EA generates candidate predictions covering the prediction horizon and calculates the cost function’s value for each one. At convergence, the optimal prediction is returned to the Controller.
$\bar{U}(k) = \langle U(k|k),\; \dots,\; U(H-1|k) \rangle$. (15)
The controller’s output is the predicted sequence’s first element:
$U(k) = U(k|k)$.
This value is sent toward the control input of the Process.
Figure 4 presents the general activities of the Controller, including the call to the function implementing the Predictor.
Variable k stores the current sampling period's rank relative to the beginning of the control action (when k = 0). The function “Predictor_EA” returns the predicted sequence $\bar{U}(k)$, whose first element gives the current control value $U(k)$.
Algorithm 1 presents the pseudocode of the function “Predictor_EA”. Due to its general structure, this algorithm is suitable for any EA implementation. Appendix B details the special EA version employed in this paper. Emphasis is placed only on the parameters' initialization and the employment of the control ranges (lines #5–#9).
Algorithm 1 Pseudocode of the function “Predictor_EA”
Predicted_sequence ← Predictor_EA(k, X0)
 /* This function determines $\bar{U}(k)$. */
1  # Global parameters’ initialization.
2  N ← N0;          /* N0: number of individuals */
3  NGen ← NGen0;    /* NGen0: number of generations */
4  ngene ← H − k;   /* ngene: number of genes */
5  for i = 1…N
6    for j = 1…ngene
7      pop(i, j) ← random(Umin(k + j − 1), Umax(k + j − 1));
       /* The control variables’ ranges are R_k, R_{k+1}, …, R_{H−1} */
8    end
9  end
10 # Objective function computation for the initial population.
11 g ← 1;
12 Ncalls ← 0;
13 F ← 0;
14 while (g ≤ NGen) and (F = 0)
15   # Selection of chromosomes;
16   # Crossover;
17   # Mutation;
18   # Replacement;
19   # Update F.
20   g ← g + 1;
21 end
22 Predicted_sequence ← pop(1)
The columns of the $m \times H$ matrices $U_{\min}$ and $U_{\max}$ are constructed offline using Equations (11) and (16):
$U_{\min}(k) = [\, u_{\min}^1(k) \;\cdots\; u_{\min}^m(k) \,]^T;\quad U_{\max}(k) = [\, u_{\max}^1(k) \;\cdots\; u_{\max}^m(k) \,]^T;\quad 0 \le k \le H-1$. (16)
The matrices $U_{\min}$ and $U_{\max}$ that join these columns (Equation (17)) can be made accessible to this function as global data and used for initial population generation:
$U_{\min} = [\, U_{\min}(0) \;\cdots\; U_{\min}(H-1) \,];\quad U_{\max} = [\, U_{\max}(0) \;\cdots\; U_{\max}(H-1) \,]$. (17)
The columns $U_{\min}(k)$ and $U_{\max}(k)$ indicate the limits of the range $R_k$ for each control variable. On the other hand, the individuals of the initial population generated by the EA are control variable sequences included in $R_k, R_{k+1}, \dots, R_{H-1}$. Therefore, line #7 randomly generates elements inside these sets for each individual of the population.
Line #12 initializes to 0 the variable “Ncalls”, which counts the number of objective function calls. This variable is updated inside the procedure implementing the objective function and can be considered a measure of computational complexity during the control action.
In our implementation, the code of the EA is included in the “Predictor_EA” program. The reader can see Appendix B, which gives the most important details concerning the EA version employed in this work.
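The range-constrained initialization of Algorithm 1 (lines #5–#9) might look as follows in NumPy; the vectorized representation, function name and arguments are our assumptions, not the paper's code:

```python
import numpy as np

def init_population(k, H, Umin, Umax, n_ind, rng=None):
    """Random initial population for the Predictor's EA at step k.

    Each individual is a candidate prediction of length H - k (one gene per
    remaining sampling period), drawn uniformly inside the adapted ranges
    R_k, ..., R_{H-1} instead of the full technological limits.

    Umin, Umax : (H, m) adapted bounds built offline from the reference CP.
    Returns an array of shape (n_ind, H - k, m).
    """
    rng = rng or np.random.default_rng()
    lo = Umin[k:H]                     # bounds for the remaining periods
    hi = Umax[k:H]
    return rng.uniform(lo, hi, size=(n_ind, H - k, lo.shape[1]))
```

Because every gene is sampled inside its own shrunken interval, the search space the EA explores is narrower from the very first generation, which is precisely where the complexity reduction comes from.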

4. Simulation Study

4.1. Study’s Objectives and Preliminaries

In the previous sections, we have proposed the algorithms “Controller_EA” and “Predictor_EA” that can be used in real-time for control structure implementation. Of course, this can be conducted in conjunction with a real process.
The sequel presents a simulation study using these two algorithms devoted to predictions based on predefined control profiles. The study’s main objective is to ascertain that this technique decreases the computational complexity compared to EAs using technological bounds.
More specifically, the simulation study has the following objectives:
  • To implement and simulate the RHC and an EA that makes optimal predictions. The authors have already achieved this objective in previous works, but there is a new context related to predefined control profiles requiring the adjustment of old implementations.
  • To integrate reference control profiles and adaptation of control variables’ ranges within predictions based on EA. The new ranges are involved in the initial population generation and mutation operator.
  • To validate the hypothesis that the new technique can reduce the computational complexity of the Controller through the results’ analysis. The computational complexity is the major impediment to the Controller’s feasibility when using metaheuristics.
The study will be carried out using a simulation program that emulates the closed-loop operation. The program's modules have a realistic implementation and can be used in a real-time closed loop. Only a few adjustments were made for simulation purposes.
The Controller integrates the PM and sends the control value to the real Process. The simulation model of the Process will be constructed in two ways:
  • The simulation model of the Process and PM are identical. This situation is not trivial because there will be differences between the reference CP and the CP achieved by the closed loop. These differences are due to the closing of the control loop. The reference CP is an open-loop solution.
  • The simulation model of the Process is considered equivalent to the PM to which a uniformly distributed perturbation is added. If the vectors $X'_k$ and $X_k$ are the state vectors of the Process and PM, respectively, it holds:

$X'_k = X_k + \text{perturbation};\quad 0 \le k \le H-1$. (18)
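The second construction of the Process's simulation model, a PM step plus a uniform perturbation, can be sketched as below; the perturbation amplitude `level` is an assumed parameter (the paper does not fix it at this point), and all names are illustrative:

```python
import numpy as np

def perturbed_process_step(pm_step, x, u, level, rng=None):
    """Simulated 'real' Process: the PM transition plus a uniform perturbation.

    pm_step(x, u) -> next PM state; level scales the perturbation amplitude
    (level = 0 recovers the first construction, Process identical to the PM).
    """
    rng = rng or np.random.default_rng()
    x_next = np.asarray(pm_step(x, u), dtype=float)
    # uniformly distributed perturbation on each state component
    return x_next + rng.uniform(-level, level, size=x_next.shape)
```

With `level = 0` the two simulation constructions coincide, which makes it easy to run both simulation series with the same program.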
A realistic measure of the Controller’s computational complexity is the number of objective function calls during the control horizon. This number is relevant, especially when the objective function’s evaluation entails numerical integrations, which are complex and time-consuming. This measure is adequate when comparing versions of the same application, especially in a statistical context. The simulation program will totalize the number of calls for all sampling periods.
We are not interested in controlling a specific Process; our presentation emphasizes the method, not the Process. The proposed combination of the RHC with an EA can be the solution to control many kinds of processes. Of course, time constraints could reduce this combination’s applicability to processes with relatively large time constants.
We shall consider two case studies covering both monovariable and multivariable Processes, with Mayer- and Bolza-type (Mayer plus Lagrange terms) performance indexes. Both deal with the most difficult case concerning the predictions (Remark 1).

4.2. Algorithm of the Closed-Loop Simulation

The simulation series are carried out by a general program that emulates the closed-loop running. The control horizon is implemented through a sequence of sampling periods k = 0 , , H 1 . For the specific moment k, the main action is calling the Controller. The algorithm of this simulation program is presented in Figure 5.
In the sequel, we add some comments to this flowchart to help understand the simulation algorithm.
  • “state” is a vector with (H + 1) components that will store the Process’ trajectory.
  • At step k , the procedure “Controller_EA” is called to calculate the optimal control value U ( k ) considering the current state s t a t e ( k ) . This control value is memorized as a vector “uRHC” element.
  • The procedure “RealProcessStep” uses the Process’s simulation model and returns the next state of the Process.
  • This state is memorized and considered a new current state for the next sampling period.
  • The variable “Ncalls_C” cumulates the numbers Ncalls of all steps and, in this way, evaluates the computational complexity on the entire control horizon.
Let us remark that the Controller prediction $\bar{X}(0)$, made at moment $k = 0$, and the sequence of real control values “uRHC” are different things. The Controller prediction is computed using the PM, while each control value $u_{RHC}(k)$ ensures the transition between two real Process states:

$X_0 = X(k) \;\xrightarrow{\,u_{RHC}(k)\,}\; X_{next} = X(k+1)$. (19)
The function “RealProcessStep” integrates the Process’s simulation model to determine the transition Equation (19).
Because the prediction is based on an EA, the simulation program was conducted 40 times to allow statistical analysis. Appendix C gives details concerning our simulation program.
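The flowchart of Figure 5 amounts to a plain loop over the sampling periods; the sketch below omits the 40-run statistical wrapper, and `controller` and `process_step` are illustrative stand-ins for “Controller_EA” and “RealProcessStep”:

```python
import numpy as np

def simulate_closed_loop(x0, H, controller, process_step):
    """Emulate the closed-loop run over the control horizon (a sketch).

    controller(k, x)   -> (u, ncalls): first element of the optimal prediction
                          and the number of objective-function calls it used
    process_step(x, u) -> next state of the (simulated) real Process

    Returns the state trajectory, the applied controls uRHC and the total
    number of objective-function calls Ncalls_C (the complexity measure).
    """
    state = [np.asarray(x0, dtype=float)]
    uRHC, ncalls_total = [], 0
    for k in range(H):
        u, ncalls = controller(k, state[-1])   # optimal prediction's 1st element
        x_next = process_step(state[-1], u)    # real-Process transition (19)
        state.append(x_next)                   # new current state for step k+1
        uRHC.append(u)
        ncalls_total += ncalls                 # accumulate Ncalls_C
    return np.array(state), np.array(uRHC), ncalls_total
```

Accumulating the per-step call counts reproduces the variable “Ncalls_C”, so two Controller versions can be compared on the same horizon with the same measure.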

5. Case Study #1—Process with a Single Control Variable

5.1. Optimal Control Problem # 1

The Park–Ramirez problem (PRP) is a benchmark problem [15,20,21] describing an OCP that has been addressed in many papers to study various approaches, from integration methods to metaheuristic employment. The nonlinear model describes a fed-batch reactor producing secreted protein. Besides the many works that have reported quasi-optimal evolutions, the authors have also presented solutions of the PRP in different research contexts. In paper [15], the authors proposed an RHC solution integrating an EA; the main objective was to evaluate how much the optimal evolution degrades when the PM and Process differ. The present work continues the topic of a previous paper [18], which decreased the Controller's computational complexity by means of a short-time Estimator. That module defines shrunken intervals for the control variables, making the search process more efficient; however, the Estimator can be used only when a Process-specific “quality criterion” exists. This work instead defines shrunken intervals starting from a control profile, which always exists. Because the closed-loop solution of the PRP is already constructed and a version of the EA is tested, we shall now study the possibility of decreasing the computational complexity.
We present hereafter the three parts defining the PRP as an OCP.
Process Model
$\dot{x}_1 = g_1 (x_2 - x_1) - \dfrac{u}{x_5}\, x_1$
$\dot{x}_2 = g_2\, x_3 - \dfrac{u}{x_5}\, x_2$
$\dot{x}_3 = g_3\, x_3 - \dfrac{u}{x_5}\, x_3$
$\dot{x}_4 = -7.3\, g_3\, x_3 + \dfrac{u}{x_5}\,(20 - x_4)$
$\dot{x}_5 = u$
$g_1 = \dfrac{4.75\, g_3}{0.12 + g_3};\quad g_2 = \dfrac{x_4}{0.1 + x_4}\, e^{-5.0\, x_4};\quad g_3 = \dfrac{21.87\, x_4}{(x_4 + 0.4)(x_4 + 62.5)}$ (20)
The state vector regroups the following physical parameters:
  • $x_1(t)$: concentration of secreted protein.
  • $x_2(t)$: concentration of total protein.
  • $x_3(t)$: density of the culture cells.
  • $x_4(t)$: concentration of substrate.
  • $x_5(t)$: holdup volume.
Constraints
The control horizon, initial state and bound constraints are given below.
$t \in [t_0, t_{final}]$; $t_0 = 0$, $t_{final} = 15\ \mathrm{h}$
$X(0) = X_0 = [0,\ 0,\ 1,\ 5,\ 1]^T \in \mathbb{R}^5$
$u(t) \in \Omega \equiv [0,\ 2]$; $0 < t < t_{final}$.
Performance Index
$$J_0 = \max_{u(t)} J\big(x(t_{final})\big) = \max_{u(t)} x_1(t_{final}) \cdot x_5(t_{final})$$
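For readers who want to experiment with this benchmark, the state equations above can be simulated with a simple integrator. The sketch below is ours, not the authors' MATLAB implementation (file "Eval_PR_step.m" in the Supplementary Materials); the explicit-Euler scheme and the step size are assumptions made for illustration.

```python
import math

def prp_rhs(x, u):
    """Right-hand side of the Park-Ramirez state equations for state x and control u."""
    x1, x2, x3, x4, x5 = x
    g3 = 21.87 * x4 / ((x4 + 0.4) * (x4 + 62.5))
    g1 = 4.75 * g3 / (0.12 + g3)
    g2 = (x4 / (0.1 + x4)) * math.exp(-5.0 * x4)
    return [g1 * (x2 - x1) - (u / x5) * x1,
            g2 * x3 - (u / x5) * x2,
            g3 * x3 - (u / x5) * x3,
            -7.3 * g3 * x3 + (u / x5) * (20.0 - x4),
            u]

def euler_step(x, u, dt=0.01):
    """One explicit-Euler integration step (scheme and dt are our assumptions)."""
    return [xi + dt * fi for xi, fi in zip(x, prp_rhs(x, u))]
```

Starting from $X_0 = [0, 0, 1, 5, 1]^T$ and applying a control value, repeated calls to `euler_step` trace an approximate state trajectory over the 15 h horizon.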

5.2. The Reference Control Profile

The EA presented in Appendix B, with a few modifications (detailed there), has also been used to solve the PRP in open loop. The resulting open-loop solution is considered the optimal, or at least quasi-optimal, solution of the PRP. The control profile, the best trajectory and the performance index have been saved in a data file (the workspace file "WSBestEvolution.mat").
Figure 6 shows the best CP and the best trajectory; these data will help us reduce the closed-loop solution's computational complexity. For the Controller, the open-loop CP will be the reference.
We remark that the performance index of this "best" solution is Jopt = 31.930, although we know from previous works that better solutions exist, with values of J greater than 32. This does not matter here, because any sufficiently good open-loop solution can serve as the reference.
Equations (7)–(11) define the control ranges for the Controller. For this case, it holds:
$m = 1$; $u_{\max} = 2$; $u_{\min} = 0$; $\Delta_1 = 0.4$
In Figure 7, the values $u^1_{\min}(k)$ and $u^1_{\max}(k)$ are depicted for all values of $k$ as step functions. For a specific $k$, the control range stretches between the two values and is drawn as a blue rectangle. According to the value of $k$, the unique shrunken interval ranges from the green line to the black line:
$$I_1(k) = \left[u^1_{\min}(k),\ u^1_{\max}(k)\right]$$
The blue and red step functions are placed between these bounds along the control horizon. The dashed lines correspond to the technological limits. Let us note that the control ranges cannot cross the technological limits.
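A minimal sketch of the range-adaptation rule of Equations (7)–(11), under our reading that the shrunken interval is the reference profile value plus/minus $\Delta_1$, clipped to the technological limits (the function name and defaults are ours, with the PRP values $u_{\min}=0$, $u_{\max}=2$, $\Delta_1=0.4$):

```python
def shrunken_interval(u_ref_k, u_min=0.0, u_max=2.0, delta=0.4):
    """Shrunken control range I1(k) = [u_min1(k), u_max1(k)] built around the
    predefined control profile value at step k, clipped to [u_min, u_max]."""
    return max(u_min, u_ref_k - delta), min(u_max, u_ref_k + delta)
```

For example, a reference value of 0.1 yields the interval (0.0, 0.5): the lower bound is clipped by the technological limit, exactly as in Figure 7.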

5.3. Simulation Results

5.3.1. Simulation without Range Adaptation

The first simulation series concerns the implementation of the closed loop using the Controller_EA that does not adapt the control range and, consequently, considers only the technological limits. The data in Table 1 are generated following the procedure indicated in Appendix C.1.
Each line displays the performance index (J0) and the cumulated number of objective function calls (denoted here as Ncalls). The latter counts all the calls during the control horizon.
The simulation producing the value closest to the average performance index is considered the typical execution. Here, the typical run is the 33rd, with Jtypical = 31.9082.
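The selection rule just described, picking the run whose J is closest to the series average, can be sketched as follows (a small helper of ours, not part of the authors' scripts):

```python
def typical_run(j_values):
    """Return the 1-based run index and J of the run closest to the series average."""
    avg = sum(j_values) / len(j_values)
    i = min(range(len(j_values)), key=lambda k: abs(j_values[k] - avg))
    return i + 1, j_values[i]
```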
Table 2 presents a statistic for the performance index (J0), including the minimum, maximum and average values and the standard deviation (Sdev).
The typical evolution of the closed loop is described in Figure 8. The controller with predictions based on the EA generates the control values in Figure 8a. The Process’s state vector evolves as in Figure 8b.

5.3.2. Simulation with Range Adaptation (Predefined Control Profile)

The second simulation series concerns the implementation of the closed loop using the Controller_EA with control range adaptation. In addition, the Process’s simulation model is also the PM (the model inside the Controller). The data in Table 3 are generated following the procedure indicated in Appendix C.2.
This time, the typical run is the 24th and has Jtypical = 31.9354.
Table 4 presents a statistic of this simulation series.
The typical evolution of the closed loop is described in Figure 9. With predictions based on the EA and a predefined CP, the controller generates the control values in Figure 9a. The control profile stays inside the blue zones representing the control ranges. The Process’s state vector evolves as in Figure 9b.

5.3.3. Simulation with Range Adaptation and Perturbated Process

To prove that the closed loop is working well in a real context, we have to simulate the situation when the Process differs from the PM. We carried out a few closed-loop simulations using the control range adaptation and perturbated Process.
We shall apply Equation (18) from Section 4.1: the Process state is affected by an additive perturbation. Practically, a perturbation $\Delta P_i$, equivalent to all external influences, is added to the PM's state vector, as below:
$$x_i(k) \leftarrow x_i(k) + \Delta P_i, \quad i = 1, \ldots, 5$$
We have considered the perturbation $\Delta P_i$ a random variable with uniform distribution in the interval $[-L, L]$, where
$$L = p\,|x_i(k)|, \quad 0 < p < 1$$
Hence, $\Delta P_i$ depends on the absolute value of the considered state variable, so that its influence is never annulled. We used a constant value p = 5% in our simulations, which places the perturbations in a 10%-wide interval.
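The perturbation mechanism can be sketched as below; the function name is ours:

```python
import random

def perturb_state(x, p=0.05, rng=random):
    """Add a perturbation, uniform in [-p*|x_i|, p*|x_i|], to every state variable,
    so the perturbation magnitude scales with the state it affects."""
    return [xi + rng.uniform(-p * abs(xi), p * abs(xi)) for xi in x]
```

With p = 0.05, each state variable is displaced by at most 5% of its absolute value, i.e., inside a 10%-wide interval.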
The data in Table 5 are generated following the procedure indicated in Appendix C.3 and refer to a single simulation.
The evolution of the closed loop is described in Figure 10. With predictions based on the EA and the predefined CP, the controller generates the control values from Figure 10a. The Process’s state vector evolves as in Figure 10b, where we remark small discontinuities of the trajectories. These small jumps are caused by the perturbations applied to the state variables at the moment k, as in Equation (25).
We denote by A(X0, r), r = 1, …, H, the accessibility set in r steps: the set of all state vectors that can be reached from the initial state X0 in r steps while respecting the PM's dynamics and the discretization constraint. For example, the maximum value of J(X) over X ∈ A(X0, H) is the optimal performance index, which is around 32.
Due to the perturbation (Equation (25)), the simulation does not respect the PM's dynamics, which was actually our desideratum (Process ≠ PM). The simulated Process now has stochastic behaviour and discontinuous trajectories. Consequently, the final state may not belong to A(X0, H). The trajectory can lead to new final states with possibly greater values (for example, J0 = 34.69). The EA now has to search for optimal predictions in a new state subspace, which may be a harder problem involving a very large value of Ncalls. In any case, this is a new task, different from the previous ones, which makes a comparison irrelevant. That is why we did not conduct a third simulation series meant to be compared with the previous two. Among the simulations with perturbations that we carried out, there are also runs in which all states belong to the accessibility sets, and the performance index has values accordingly.

5.4. Discussion of Case #1

Figure 8, Figure 9 and Figure 10 prove the quasi-optimal behaviour of the Process in the three cases. A statistic of the simulation series (without and with range adaptation) is presented in Table 6. First of all, let us remark that the range adaptation does not worsen the closed-loop behaviour:
  • The aspects of process evolution are very alike.
  • The Controller shows convergences in all simulations.
  • The optimal behaviour is not degraded. The J0 and standard-deviation values are very similar because the shrunken ranges do not impede the convergence process. Practically, the solutions converge toward the same optimal region of the search space.
On the other hand, the Controller with range adaptation greatly reduces the computational complexity. The columns of Table 6 referring to Ncalls show this improvement.
The decreasing ratio of the average number of calls is calculated below.
$$\text{decreasing ratio} = \frac{103{,}782 - 37{,}850}{103{,}782} = 0.636$$
Due to the control range adaptation, the computational complexity decreased by 63.6%, which is a spectacular reduction.
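The reduction figure is simply the relative decrease of the average Ncalls; a one-line sanity check (the function name is ours, the averages are those of Table 6):

```python
def decreasing_ratio(avg_calls_without, avg_calls_with):
    """Relative reduction of the average number of objective-function calls."""
    return (avg_calls_without - avg_calls_with) / avg_calls_without

ratio = decreasing_ratio(103_782, 37_850)  # case #1 averages
```

Running this reproduces, up to rounding of the averages, the 63.6% figure above.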
The simulations with control range adaptation and a perturbed Process are not comparable to those of the previous series, because the searched state space can differ: the predefined control profile is no longer appropriate for the new accessibility set. This situation stems from how the Process was constructed, using the PM and a uniformly distributed perturbation. Nevertheless, the Controller works well even in this situation, and the EA converges to a very good performance index, although it uses states that are not accessible from X0.

6. Case Study #2: Multivariable Process and Complete Objective Function

6.1. OCP’s Statement

In this section, we address another case study that will complete the first one, owing to the following aspects:
  • The Process is multivariable; it has two control variables.
  • Besides the final cost, the objective function also has an integral component (Remark 2).
We consider an OCP presented in many papers [6,14], which the authors have already solved using the EA and RHC in a previous paper [14]. In the present work, we studied through simulations only whether a predefined control profile could decrease the computational complexity, and to what extent.
The OCP treated in the sequel will be referred to as Protein Production Problem (PPP). The following state equations describe its nonlinear PM:
$$
\begin{aligned}
\dot{x}_1 &= u_1(t) + u_2(t)\\
\dot{x}_2 &= g_1 x_2 - \big(u_1(t) + u_2(t)\big)\frac{x_2}{x_1}\\
\dot{x}_3 &= \frac{100\, u_1(t)}{x_1} - \big(u_1(t) + u_2(t)\big)\frac{x_3}{x_1} - \frac{g_1 x_2}{0.51}\\
\dot{x}_4 &= g_2 x_2 - \big(u_1(t) + u_2(t)\big)\frac{x_4}{x_1}\\
\dot{x}_5 &= \frac{4\, u_2(t)}{x_1} - \big(u_1(t) + u_2(t)\big)\frac{x_5}{x_1}\\
\dot{x}_6 &= -\left(\frac{0.09\, x_5}{0.034 + x_5}\right) x_6\\
\dot{x}_7 &= \left(\frac{0.09\, x_5}{0.034 + x_5}\right)(1 - x_7)\\
g_1 &= \frac{x_3}{14.35 + x_3\,(1 + x_3/111.5)} \left(x_6 + \frac{0.22\, x_7}{0.22 + x_5}\right)\\
g_2 &= \left(\frac{0.233\, x_3}{14.35 + x_3\,(1 + x_3/111.5)}\right) \left(\frac{0.005 + x_5}{0.022 + x_5}\right)
\end{aligned}
$$
We have a system with n = 7 state variables and m = 2 control variables. All the constraints presented in Section 2.2, including the discretization constraint, apply to this OCP.
$t \in [t_0, t_f]$; $t_0 = 0$; $t_f = H = 10$ hours
$X(0) = X_0 = [1,\ 0.1,\ 40,\ 0,\ 0,\ 1,\ 0]^T$
$0 \le u_1(t),\ u_2(t) \le 1$, i.e., $u_j(t) \in \Omega_j \equiv [0, 1]$, $j = 1, 2$.
The objective function and the performance index are given in Equation (27):
$$J\big(u_1(t), u_2(t), X_0\big) = -\,Q \int_{t_0}^{t_f} u_2(t)\,dt + x_1(t_f)\, x_4(t_f), \quad Q = 1.5; \qquad J_0 = \max_{u_1(t),\, u_2(t)} J
$$
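A sketch of how this performance index combines the two components, under our reading that the integral term penalises the inducer feed u2 (consistent with the reported runs, where Jopt is smaller than the final cost x1(H)·x4(H) alone). The rectangle-rule quadrature over the sampling grid and the function name are our assumptions:

```python
def ppp_objective(u2_profile, x1_tf, x4_tf, dt=1.0, q=1.5):
    """Final cost x1(tf)*x4(tf) penalised by the weighted integral of the
    inducer feed u2 over the control horizon (Eq. (27))."""
    integral_u2 = sum(u2_profile) * dt  # rectangle-rule approximation
    return x1_tf * x4_tf - q * integral_u2
```

With no inducer feed, the objective reduces to the final cost alone.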
As we mentioned before, we have solved and obtained near-optimal solutions for the PPP in the previous work. One of these solutions has been considered the best Process evolution relating to the initial state, described by three components: the predefined control profile, trajectory and performance index. These components were stored in the file “WSBestEvolution” using the procedure given in Appendix D.
Two simulation series, each with 40 runs, were used to compare the Controllers without and with control range adaptation.

6.2. Results and Discussion of Case #2

The first simulation series used the Controller constructed in our previous work, which the reader can understand and use following the details given in Appendix D.1. When the EA initializes its individuals, the two control variables take values inside the domains determined by the technological limits:
$\Omega_1 = [0,\ 1]$; $\Omega_2 = [0,\ 1]$
The performance index J0 and the number Ncalls are given in Table 7 for all the executions.
Execution #23 is the typical one; its evolution is depicted in Figure 11. The Controller calls the objective function 14,040 times during the control horizon and guides the Process toward its final state $x_1(H) = 2.359$, $x_4(H) = 2.484$. The value Jopt = 5.685 is calculated taking the integral component into account as well.
A statistical analysis of J0 and Ncalls is presented in Table 8, where the columns’ names have the usual meaning. “Sdev” denotes the standard deviation of J0.
Let us note that the performance index recorded inside the file giving the best evolution (see Appendix D) is Jopt = 5.752. At the same time, the maximum value achieved by the control loop without range adaptation is J0 = 5.781. This fact confirms the conclusion drawn in previous works that the RHC does not degrade the loop behaviour significantly.
The second simulation series uses a new version of the Controller, whose EA is modified to adapt the control variables' ranges (as in Algorithm 1). Whereas Figure 7 illustrated the single-interval ranges for the PRP, Table 9 presents the two-interval ranges for the PPP.
Details about how this simulation series is carried out are in Appendix D.2. Table 10 gives the values J0 and Ncalls for all the runnings.
Execution #9 is the typical one; its evolution is depicted in Figure 12. The Controller calls the objective function 7560 times during the control horizon and guides the Process toward its final state $x_1(H) = 1.922$, $x_4(H) = 3.131$. The value Jopt = 5.761 is calculated by adding the integral component to the final cost.
A statistical analysis of J0 and Ncalls is presented in Table 11. Compared with Table 8, the average value of J0 in Table 11 equals the reference performance index (Jopt = 5.752), and many individual runs achieve even better results.
Figure 11 and Figure 12 show that the two Controllers cause the two closed loops to behave similarly. As in case #1, the J0 and standard-deviation values are very similar because the shrunken ranges are wide enough to ensure convergence. Because the value of Ncalls decreases globally, its standard deviation decreases as well. The range adaptation has a strong beneficial influence on the computational complexity; the columns referring to Ncalls in Table 12 prove this improvement.
The decreasing ratio of the average number of calls is calculated below.
$$\text{decreasing ratio} = \frac{15{,}645 - 9013}{15{,}645} = 0.43$$
Due to the control range adaptation, the computational complexity decreased by 43%, which is a substantial reduction.

7. Conclusions

In this paper, we presented the continuation of our previous work concerning the implementation of OCP solutions using predictions based on EAs. The emphasis was placed on how to decrease the computational complexity of prediction generation. We proposed the so-called predefined control profile, which enables control range adaptation along the control horizon. The following conclusions can be drawn considering the objectives stated in Section 4.1.
  • In the new context related to predefined control profiles, we reconsidered the implementation and simulation of closed loops based on RHC and EAs to make optimal predictions. The necessary adjustments have been made.
  • The control profiles were determined and integrated into the EA, and the control range adaptation was involved in the initial population generation at each sampling period.
  • Computational complexity is a barrier to using metaheuristics for optimization inside control structures. The simulation analysis of the two case studies proved that the proposed technique is effective and that the computational complexity decrease is large (63% and 43%, respectively).
The Controller converged in all simulations and gave good closed-loop results for the performance index and, especially, for the number of calls. The decrease in the latter is reflected directly in the reduction of the Controller's execution time at each sampling period.
In the first case study, we also simulated a Process different from the PM, using a perturbated PM. Although the simulations carried out with this technique cannot be compared with the preceding ones, they proved that the Controller is robust and can control such a Process.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/electronics11111682/s1, The file “ARTICLE_ELECTRONICS.zip” regroups the files mentioned in Appendix B, Appendix C and Appendix D.

Author Contributions

Conceptualization, V.M.; methodology, L.G. and V.M.; software, V.M.; validation, V.M. and E.R.; formal analysis, V.M.; investigation, L.G.; resources, L.G.; data curation, L.G.; writing—original draft preparation, V.M.; writing—review and editing, V.M. and E.R.; visualization, L.G.; supervision, E.R.; project administration, E.R. and L.G.; funding acquisition, L.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This work was supported by the project “Excellence and involvement in intelligent development, research and innovation at “Dunarea de Jos” University of Galati, “DINAMIC”, financed by the Romanian Ministry of Research and Innovation in the framework of Programme 1—Development of the national research and development system, Sub-programme 1.2—Institutional Performance—Projects for financing excellence in Research, Development and Innovation, Contract no. 12PFE/2021.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Receding Horizon Control

Although the RHC structure is well known, its general form is given in Figure A1 to preserve the self-contained character of this paper. This structure can optimally treat the final cost using predictions, which can be implemented through metaheuristics such as EAs.
We recall hereafter the elements defining the RHC strategy.
  • The Controller makes optimal predictions that optimize the cost criterion by looking ahead for several sampling periods, but the Controller’s output is only implemented for the current step.
  • Predictions consider the number of steps that define the current prediction horizon and use a PM; the latter has to reproduce the system’s dynamic as accurately as possible.
  • An optimal prediction is a sequence of control values covering the current prediction horizon $[k, H]$ ($0 \le k \le H-1$) that optimizes the objective function. The first element of this optimal sequence, denoted by $U(k)$, will be the control output value at moment $k$.
  • The prediction horizon’s left extremity “recedes” at the next step but keeps the final time. The Controller takes updated information concerning the process state X ( k ) and looks ahead for the new prediction horizon.
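The four points above amount to the following loop. This is a generic sketch of ours: `predictor` stands for the EA-based Predictor and `process_step` for the real Process returning the measured next state.

```python
def receding_horizon_control(x0, H, predictor, process_step):
    """Generic RHC loop: at every step k the predictor optimises a control
    sequence over [k, H], but only its first element is applied to the Process."""
    x, applied = x0, []
    for k in range(H):
        u_seq = predictor(x, k, H)      # optimal prediction over [k, H]
        applied.append(u_seq[0])        # implement only the current step
        x = process_step(x, u_seq[0])   # measure the updated state X(k+1)
    return applied, x
```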
Figure A1. Closed loop using the RHC.

Appendix B

Description of the Evolutionary Algorithm

The version of the EA [3,15,16] employed in this work uses direct encoding. According to the sampling period's rank, a chromosome has a variable length and encodes the predicted control sequence over the whole horizon [k, H] using H − k genes.
The main characteristics of the implemented EA are listed below:
- The population has μ chromosomes (individuals);
- The offspring: λ chromosomes (λ < μ);
- The selection uses universal sampling with selection pressure (s) [3];
- The mutation operator: variable step size with global variance adaptation, using the "1/5 success rule" [1];
- The crossover operator: single point;
- The replacement strategy: the offspring replace the λ worst individuals of the current generation;
- The first stop criterion: the population's evolution stops after NGen generations;
- The second stop criterion: the best chromosome's fitness value exceeds a preestablished value J0.
Other constants used by the EA:
  • Final time: Nt = t.
  • The minimum value of the performance index: J0 = 31.8.
  • Individuals' number: N0 = 35.
  • Children's number: λ = 30.
  • Maximum generations' number: NGen = 70.
  • Selection pressure: s = 1.8.
  • Standard deviation's factor: amic = 0.85.
In our implementation, the code of the EA is included in the “Predictor_EA” program, which is the script “RHC_Predictor_EA2.m”. The general structure of this EA can be read in Section 3.3, where the pseudocode is presented in Algorithm 1. The initial population, which uses the control variables’ ranges, is generated in lines 7–9. The objective function is implemented within the file “Eval_PR_step.m”. Both files are inside the folder “ART_ElectMTLB”.
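Under the characteristics listed above, the EA skeleton looks roughly as follows. This is our simplified sketch, not the authors' "RHC_Predictor_EA2.m": universal sampling, the 1/5 success rule and the stop-on-J0 criterion are omitted, and the operator details are assumptions.

```python
import random

def ea_predict(ranges, fitness, mu=35, lam=30, n_gen=70, rng=random):
    """Simplified EA: chromosomes are control sequences initialised inside the
    per-step ranges; each generation, lam offspring created by one-point
    crossover plus bounded Gaussian mutation replace the lam worst individuals."""
    n = len(ranges)  # H - k genes for the current prediction horizon
    pop = [[rng.uniform(lo, hi) for lo, hi in ranges] for _ in range(mu)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)          # maximisation problem
        offspring = []
        for _ in range(lam):
            p1, p2 = rng.sample(pop[: mu // 2], 2)   # mate among the better half
            cut = rng.randrange(1, n) if n > 1 else 0
            child = p1[:cut] + p2[cut:]              # one-point crossover
            j = rng.randrange(n)                     # mutate one gene, clipped to its range
            lo, hi = ranges[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0.0, 0.1 * (hi - lo))))
            offspring.append(child)
        pop[-lam:] = offspring                       # replace the lam worst
    return max(pop, key=fitness)
```

When the shrunken intervals of Section 5.2 are passed as `ranges`, the initial population is generated directly inside the predefined profile's neighbourhood, which is where the complexity gain comes from.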
The function “RHC_Predictor_EA2.m” has been transformed into the script “OfflineSolution.m” through a few initializations to give and save an offline PRP’s solution.

Appendix C

The simulation program is the script “RHC_EA_new2.m” in our implementation. The function “RealProcessStep” is implemented within the file “RHC_RealProcessStep.m”. Both files are inside the folder “ART_ElectMTLB/Case#1_PRP”.

Appendix C.1. Simulation without Range Adaptation

The algorithm of the closed-loop simulation without range adaptation is implemented by the script “RHC_EA_new2_0.m”. The control variable has only technological limits. The script “RHC_EA_new2_0.m” was carried out 40 times using the script “ciclu40_RHC_EA_new2_0.m”. The results are stored in the file “WSrange_all_0.mat”. The statistics in Table 1 and Table 2 and Figure 8a,b are yielded by the script “STATIC_DRAW40_0.m”.

Appendix C.2. Simulation with Range Adaptation

The closed-loop simulation with range adaptation is achieved using the script “RHC_EA_new2_1.m”. The script “RHC_EA_new2_1.m” was carried out 40 times using the script “loop40_RHC_EA_new2_1.m”. The results are stored in the file “WSrange_all_1.mat”. The statistics in Table 3 and Table 4 and Figure 9a,b are yielded by the script “STATIC_DRAW40_1.m”.

Appendix C.3. Simulation with Range Adaptation and Process Different from PM

The closed-loop simulation with range adaptation and a Process different from the PM is achieved using the script “RHC_EA_perturbation_1.m”. This program can be executed alone and generates the results corresponding to a single run. It calls “RHC_Predictor_EA23.m”, a slightly modified version of the Predictor. The data of Table 5 and Figure 10a,b are generated.

Appendix D

The script “Construct_BestEvolution.m” contains the reference CP, trajectory and performance index and creates the workspace “WSBestEvolution.mat”. The files concerning case study #2 are placed inside the “ART_ElectMTLB/Case#2_PPP” folder.

Appendix D.1. Simulation without Range Adaptation

The algorithm of the closed-loop simulation without a predefined CP is implemented by the script “RHC_EA_PPP_without_range.m”. The control variables have only technological limits. First, this program was executed on its own. After that, it was executed 40 times using the script “loop40_RHC_EA_PPP_without.m” and created the workspace file “WSP7without.mat”. The displayed data are provided in the “RESULTS_LOOP40_without.txt” file. Based on the workspace file, the script “Analyse_40_WithoutRange.m” has generated the data in Table 7 and Table 8 and Figure 11a,b.

Appendix D.2. Simulation with Range Adaptation

The closed-loop simulation with a predefined CP is achieved using the script “RHC_EA_PPP_ControlRange.m”. The control ranges are adapted according to the predefined CP. First, this program was executed on its own. After that, it was carried out 40 times using the script “loop40_RHC_EA_PPP_range.m” and yielded the workspace file “WSP7range.mat”. The displayed data are provided in the “RESULTS_LOOP40_range.txt” file. Based on the workspace file, the script “Analyse_40_RangeAdaptation.m” has generated the data in Table 10 and Table 11 and Figure 12a,b.

References

  1. Kruse, R.; Borgelt, C.; Braune, C.; Mostaghim, S.; Steinbrecher, M. Computational Intelligence—A Methodological Introduction, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2016.
  2. Siarry, P. Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2016; ISBN 978-3-319-45403-0.
  3. Talbi, E.G. Metaheuristics—From Design to Implementation; Wiley: Hoboken, NJ, USA, 2009; ISBN 978-0-470-27858-1.
  4. Faber, R.; Jockenhövel, T.; Tsatsaronis, G. Dynamic optimization with simulated annealing. Comput. Chem. Eng. 2005, 29, 273–290.
  5. Onwubolu, G.; Babu, B.V. New Optimization Techniques in Engineering; Springer: Berlin/Heidelberg, Germany, 2004.
  6. Valadi, J.; Siarry, P. Applications of Metaheuristics in Process Engineering; Springer International Publishing: Berlin/Heidelberg, Germany, 2014; pp. 1–39.
  7. Minzu, V.; Riahi, S.; Rusu, E. Optimal control of an ultraviolet water disinfection system. Appl. Sci. 2021, 11, 2638.
  8. Minzu, V.; Ifrim, G.; Arama, I. Control of Microalgae Growth in Artificially Lighted Photobioreactors Using Metaheuristic-Based Predictions. Sensors 2021, 21, 8065.
  9. Hu, X.B.; Chen, W.H. Genetic algorithm based on receding horizon control for arrival sequencing and scheduling. Eng. Appl. Artif. Intell. 2005, 18, 633–642.
  10. Hu, X.B.; Chen, W.H. Genetic algorithm based on receding horizon control for real-time implementations in dynamic environments. In Proceedings of the 16th Triennial World Congress, Prague, Czech Republic, 4–8 July 2005; Elsevier IFAC Publications: Amsterdam, The Netherlands, 2005.
  11. Mayne, D.Q.; Michalska, H. Receding Horizon Control of Nonlinear Systems. IEEE Trans. Autom. Control 1990, 35, 814–824.
  12. Minzu, V.; Serbencu, A. Systematic procedure for optimal controller implementation using metaheuristic algorithms. Intell. Autom. Soft Comput. 2020, 26, 663–677.
  13. Goggos, V.; King, R. Evolutionary predictive control. Comput. Chem. Eng. 1996, 20 (Suppl. 2), S817–S822.
  14. Chiang, P.-K.; Willems, P. Combine Evolutionary Optimization with Model Predictive Control in Real-time Flood Control of a River System. Water Resour. Manag. 2015, 29, 2527–2542.
  15. Minzu, V. Quasi-Optimal Character of Metaheuristic-Based Algorithms Used in Closed-Loop—Evaluation through Simulation Series. In Proceedings of the ISEEE, Galati, Romania, 18–20 October 2019.
  16. Minzu, V. Optimal Control Implementation with Terminal Penalty Using Metaheuristic Algorithms. Automation 2020, 1, 48–65.
  17. Minzu, V.; Riahi, S.; Rusu, E. Implementation aspects regarding closed-loop control systems using evolutionary algorithms. Inventions 2021, 6, 53.
  18. Minzu, V.; Arama, I. Optimal Control Systems Using Evolutionary Algorithm-Control Input Range Estimation. Automation 2022, 3, 95–115.
  19. Abraham, A.; Jain, L.; Goldberg, R. Evolutionary Multiobjective Optimization—Theoretical Advances and Applications; Springer: Berlin/Heidelberg, Germany, 2005; ISBN 1-85233-787-7.
  20. Banga, J.R.; Balsa-Canto, E.; Moles, C.G.; Alonso, A. Dynamic optimization of bioprocesses: Efficient and robust numerical strategies. J. Biotechnol. 2005, 117, 407–419.
  21. Balsa-Canto, E.; Banga, J.R.; Alonso, A.V.; Vassiliadis, V.S. Dynamic optimization of chemical and biochemical processes using restricted second-order information. Comput. Chem. Eng. 2001, 25, 539–546.
Figure 1. Adaptation of a control variable’s range.
Figure 2. Control variables’ ranges for the prediction horizon [k, H].
Figure 3. Structure of the proposed Controller.
Figure 4. Algorithm 1—“Controller_EA” using predictions based on an EA.
Figure 5. The algorithm of the closed-loop simulation using Controller_EA.
Figure 6. Best open-loop evolution of the process. (a) Best evolution of the control variable could be a reference for the closed-loop control. (b) The state trajectory of the best evolution in open loop.
Figure 7. Positions of the predefined control profile and control values inside the control variable’s ranges.
Figure 8. Typical closed-loop evolution without range adaptation. (a) Controller_EA’s control profile without range adaptation. (b) The state trajectory of the typical evolution in a closed loop.
Figure 9. Typical closed-loop evolution with range adaptation. (a) Controller_EA’s control profile with range adaptation. (b) The state trajectory of the typical evolution in a closed loop.
Figure 10. Typical closed-loop evolution with range adaptation and Process different from the PM. (a) Control profile with range adaptation and perturbated Process. (b) The state trajectory in a closed loop with range adaptation and perturbated Process state.
Figure 11. Case #2—PPP: typical closed-loop evolution without predefined control profile. (a) The evolution of the two control variables. (b) The state trajectory without range adaptation.
Figure 12. Case #2—PPP: typical closed-loop evolution using a predefined control profile. (a) The evolution of the two control variables. (b) The state trajectory with range adaptation.
Table 1. Simulation series for control loop without range adaptation.
Run # | J0 | Ncalls | Run # | J0 | Ncalls
1 | 31.867 | 103,893 | 21 | 31.869 | 115,447
2 | 31.960 | 101,867 | 22 | 31.940 | 106,851
3 | 31.939 | 97,526 | 23 | 31.904 | 114,410
4 | 31.962 | 105,024 | 24 | 31.887 | 91,767
5 | 31.915 | 78,322 | 25 | 31.994 | 107,582
6 | 31.894 | 94,649 | 26 | 31.804 | 109,752
7 | 31.873 | 104,159 | 27 | 31.813 | 97,892
8 | 31.961 | 87,972 | 28 | 31.971 | 95,134
9 | 32.054 | 99,993 | 29 | 31.852 | 93,878
10 | 31.885 | 121,889 | 30 | 31.863 | 92,734
11 | 31.868 | 102,303 | 31 | 31.971 | 99,226
12 | 31.863 | 112,949 | 32 | 31.823 | 110,145
13 | 31.866 | 97,014 | 33 | 31.908 | 121,515
14 | 32.115 | 112,166 | 34 | 31.844 | 111,978
15 | 31.845 | 115,368 | 35 | 31.964 | 90,920
16 | 31.889 | 116,501 | 36 | 31.912 | 106,542
17 | 31.896 | 102,256 | 37 | 31.913 | 101,207
18 | 31.872 | 109,789 | 38 | 31.986 | 89,939
19 | 31.838 | 102,723 | 39 | 31.945 | 97,510
20 | 31.884 | 118,861 | 40 | 31.927 | 111,641
Table 2. Statistics on the performance index.
Jmin | Javg | Jmax | Sdev | Jtypical
31.04 | 31.908 | 32.114 | 0.064 | 31.9082
Table 3. Simulation of the control loop with range adaptation.
Run # | J | Ncalls | Run # | J | Ncalls
1 | 31.836 | 43,972 | 21 | 31.937 | 42,637
2 | 31.837 | 30,398 | 22 | 32.030 | 31,717
3 | 32.043 | 29,163 | 23 | 32.045 | 35,026
4 | 31.877 | 29,030 | 24 | 31.935 | 36,893
5 | 31.916 | 49,406 | 25 | 31.870 | 49,992
6 | 31.958 | 28,613 | 26 | 31.862 | 35,178
7 | 31.989 | 37,383 | 27 | 31.978 | 39,042
8 | 31.819 | 41,684 | 28 | 31.910 | 39,695
9 | 31.955 | 40,298 | 29 | 31.802 | 43,910
10 | 32.128 | 30,774 | 30 | 32.030 | 33,208
11 | 31.920 | 36,427 | 31 | 31.879 | 43,079
12 | 31.863 | 31,645 | 32 | 31.899 | 36,875
13 | 32.197 | 28,777 | 33 | 31.867 | 27,142
14 | 31.880 | 41,636 | 34 | 31.972 | 34,325
15 | 31.873 | 35,715 | 35 | 31.842 | 31,758
16 | 31.914 | 35,094 | 36 | 31.841 | 47,849
17 | 31.878 | 42,261 | 37 | 32.071 | 30,180
18 | 31.853 | 42,556 | 38 | 31.962 | 31,795
19 | 31.878 | 46,671 | 39 | 31.923 | 53,447
20 | 32.058 | 31,562 | 40 | 31.909 | 57,182
Table 4. Statistics on the performance index.

| Jmin | Javg | Jmax | Sdev | Jtypical |
|---|---|---|---|---|
| 31.802 | 31.930 | 32.196 | 0.088 | 31.935 |
Table 5. Results of the simulation with control range adaptation and perturbed state vector.

| k | uRHC(k) | J0 | Ncalls |
|---|---|---|---|
| 0 | 0.396 | 31.948 | 460 |
| 1 | 0.135 | 32.299 | 798 |
| 2 | 0.116 | 30.820 | 51,740 |
| 3 | 0.452 | 28.906 | 47,940 |
| 4 | 0.318 | 29.635 | 43,747 |
| 5 | 0.913 | 30.379 | 39,620 |
| 6 | 1.057 | 30.197 | 35,991 |
| 7 | 1.133 | 30.578 | 31,816 |
| 8 | 1.738 | 31.900 | 1610 |
| 9 | 1.972 | 32.147 | 354 |
| 10 | 0.107 | 32.578 | 354 |
| 11 | 0.870 | 32.252 | 232 |
| 12 | 0.933 | 33.062 | 171 |
| 13 | 0.914 | 34.687 | 110 |
| 14 | 1.313 | 34.698 | 59 |
Table 6. Comparison among simulation series.

| Series | J0 Typical Value | J0 Sdev | Ncalls Min | Ncalls Avg | Ncalls Max | Ncalls Sdev |
|---|---|---|---|---|---|---|
| without control range | 31.908 | 0.064 | 78,322 | 103,782 | 121,889 | 9912 |
| with control range | 31.935 | 0.088 | 27,142 | 37,850 | 57,182 | 7401 |
| with control range and perturbation | 34.69 | a single simulation | | | | |
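The Ncalls columns of Table 6 quantify the complexity reduction claimed in the Abstract: the average number of objective-function calls drops from 103,782 without range adaptation to 37,850 with it. A quick sanity check of the reduction ratio (Python, using the two averages reported in Table 6):

```python
# Average objective-function calls taken from Table 6
avg_without_profile = 103_782   # control loop without range adaptation
avg_with_profile = 37_850       # control loop with range adaptation

reduction = 1 - avg_with_profile / avg_without_profile
print(f"Average Ncalls reduced by {reduction:.1%}")  # ~63.5%
```

In other words, range adaptation removes roughly two-thirds of the objective-function evaluations for this benchmark.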
Table 7. Results of the Controller without predefined CP.

| Run # | J0 | Ncalls | Run # | J0 | Ncalls |
|---|---|---|---|---|---|
| 1 | 5.624 | 15,420 | 21 | 5.648 | 14,100 |
| 2 | 5.628 | 14,460 | 22 | 5.772 | 14,940 |
| 3 | 5.761 | 16,020 | 23 | 5.684 | 14,040 |
| 4 | 5.640 | 36,000 | 24 | 5.708 | 16,200 |
| 5 | 5.637 | 17,580 | 25 | 5.638 | 23,580 |
| 6 | 5.720 | 15,540 | 26 | 5.670 | 14,700 |
| 7 | 5.760 | 14,040 | 27 | 5.697 | 17,340 |
| 8 | 5.719 | 12,480 | 28 | 5.684 | 16,260 |
| 9 | 5.707 | 14,280 | 29 | 5.712 | 12,780 |
| 10 | 5.672 | 14,100 | 30 | 5.459 | 36,600 |
| 11 | 5.707 | 11,880 | 31 | 5.675 | 12,540 |
| 12 | 5.721 | 13,800 | 32 | 5.696 | 14,580 |
| 13 | 5.643 | 14,520 | 33 | 5.730 | 13,380 |
| 14 | 5.622 | 13,380 | 34 | 5.589 | 12,840 |
| 15 | 5.618 | 17,580 | 35 | 5.758 | 13,680 |
| 16 | 5.641 | 17,880 | 36 | 5.677 | 18,960 |
| 17 | 5.781 | 9120 | 37 | 5.732 | 13,800 |
| 18 | 5.689 | 13,020 | 38 | 5.670 | 10,200 |
| 19 | 5.705 | 12,240 | 39 | 5.6845 | 16,260 |
| 20 | 5.719 | 11,640 | 40 | 5.6590 | 14,040 |
Table 8. Statistics regarding the performance index and number of calls.

| | Min | Avg | Max | Sdev | Typical |
|---|---|---|---|---|---|
| J0 | 5.459 | 5.682 | 5.781 | 0.06 | 5.685 |
| Ncalls | 9120 | 15,645 | 36,600 | 5409.3 | — |
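The statistics in Table 8 can be recomputed directly from the 40 runs listed in Table 7. A minimal sketch in Python (the interpretation of "Typical" as the run value closest to the average is an assumption, not a definition given here):

```python
import statistics

# J0 values of the 40 independent runs reported in Table 7
j0 = [5.624, 5.628, 5.761, 5.640, 5.637, 5.720, 5.760, 5.719, 5.707, 5.672,
      5.707, 5.721, 5.643, 5.622, 5.618, 5.641, 5.781, 5.689, 5.705, 5.719,
      5.648, 5.772, 5.684, 5.708, 5.638, 5.670, 5.697, 5.684, 5.712, 5.459,
      5.675, 5.696, 5.730, 5.589, 5.758, 5.677, 5.732, 5.670, 5.6845, 5.6590]

j_min = min(j0)
j_avg = statistics.mean(j0)
j_max = max(j0)
j_sdev = statistics.stdev(j0)  # sample standard deviation
# "Typical" is taken here as the run value closest to the average
j_typ = min(j0, key=lambda v: abs(v - j_avg))

print(f"min={j_min:.3f} avg={j_avg:.3f} max={j_max:.3f} "
      f"sdev={j_sdev:.2f} typical={j_typ:.3f}")
```

The same recipe applies to the Ncalls column and to the statistics of Tables 2, 4, and 11.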
Table 9. Range adaptation of the control variables.

| k | Rk = I1(k) × I2(k) |
|---|---|
| 0 | [0, 0.2] × [0, 0.2] |
| 1 | [0, 0.2] × [0, 0.2] |
| 2 | [0, 0.292] × [0, 0.2] |
| 3 | [0, 0.246] × [0, 0.2] |
| 4 | [0, 0.346] × [0, 0.2] |
| 5 | [0, 0.246] × [0, 0.3] |
| 6 | [0, 0.211] × [0, 0.2] |
| 7 | [0, 0.3] × [0, 0.2] |
| 8 | [0, 0.2] × [0, 0.2] |
| 9 | [0, 0.261] × [0, 0.2] |
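The intervals in Table 9 illustrate the paper's central idea: at each step k, the admissible range of each control variable is restricted to a neighbourhood of the predefined control profile, clipped to the technological limits. A minimal sketch of such a range adaptation (Python; the profile values, neighbourhood radius `delta`, and limits below are illustrative assumptions, not values taken from the paper):

```python
def adapted_range(u_profile, delta, u_min=0.0, u_max=2.0):
    """Clip the neighbourhood [u* - delta, u* + delta] of the predefined
    control profile value u* to the technological limits [u_min, u_max]."""
    return max(u_min, u_profile - delta), min(u_max, u_profile + delta)

# Illustrative predefined profile for one control variable over a few steps
profile = [0.05, 0.10, 0.15]
ranges = [adapted_range(u, delta=0.15) for u in profile]
# e.g. a profile value of 0.05 with delta = 0.15 yields an interval
# clipped at the lower technological limit, [0, 0.2]
```

The EA then searches only inside these adapted intervals, which shrinks the search space without tracking the profile itself.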
Table 10. Simulations of the Controller with predefined CP.

| Run # | J0 | Ncalls | Run # | J0 | Ncalls |
|---|---|---|---|---|---|
| 1 | 5.773 | 8640 | 21 | 5.630 | 18,900 |
| 2 | 5.793 | 8100 | 22 | 5.752 | 10,440 |
| 3 | 5.786 | 8820 | 23 | 5.783 | 6720 |
| 4 | 5.791 | 10,560 | 24 | 5.754 | 9720 |
| 5 | 5.818 | 9120 | 25 | 5.746 | 8580 |
| 6 | 5.694 | 10,080 | 26 | 5.659 | 6000 |
| 7 | 5.735 | 8520 | 27 | 5.706 | 10,140 |
| 8 | 5.611 | 9660 | 28 | 5.735 | 7980 |
| 9 | 5.761 | 7560 | 29 | 5.708 | 9660 |
| 10 | 5.709 | 8820 | 30 | 5.684 | 12,240 |
| 11 | 5.862 | 9660 | 31 | 5.812 | 6960 |
| 12 | 5.716 | 10,020 | 32 | 5.743 | 12,300 |
| 13 | 5.670 | 10,440 | 33 | 5.751 | 5460 |
| 14 | 5.736 | 8040 | 34 | 5.785 | 7500 |
| 15 | 5.816 | 6180 | 35 | 5.926 | 7380 |
| 16 | 5.824 | 6000 | 36 | 5.826 | 7800 |
| 17 | 5.728 | 10,200 | 37 | 5.691 | 10,620 |
| 18 | 5.637 | 9540 | 38 | 5.818 | 6000 |
| 19 | 5.905 | 7860 | 39 | 5.917 | 9360 |
| 20 | 5.810 | 9840 | 40 | 5.692 | 9120 |
Table 11. Statistics regarding the performance index and the number of objective-function calls.

| | Min | Avg | Max | Sdev | Typical |
|---|---|---|---|---|---|
| J0 | 5.611 | 5.758 | 5.927 | 0.07 | 5.761 |
| Ncalls | 5460 | 9013 | 18,900 | 2300.7 | — |
Table 12. Comparison between simulation series' results.

| Series | J0 Typical Value | J0 Sdev | Ncalls Min | Ncalls Avg | Ncalls Max | Ncalls Sdev |
|---|---|---|---|---|---|---|
| without control range | 5.685 | 0.06 | 9120 | 15,645 | 36,600 | 5409.3 |
| with control range | 5.761 | 0.07 | 5460 | 9013 | 18,900 | 2300.7 |
Mînzu, V.; Georgescu, L.; Rusu, E. Predictions Based on Evolutionary Algorithms Using Predefined Control Profiles. Electronics 2022, 11, 1682. https://doi.org/10.3390/electronics11111682