**4. Evolutionary Algorithm**

The boundary-value problem may have a nonconvex and multimodal objective functional (15) on the parameter space **q**; therefore, it is advisable to solve this problem with an evolutionary algorithm.

Evolutionary algorithms differ in how they modify candidate solutions. The first evolutionary algorithms appeared at the end of the twentieth century, and new ones continue to appear; currently, hundreds of evolutionary algorithms are known. Most of them are named after animals, although the connection between animal behavior and the computational algorithm is nowhere strictly proven and rests only on the assertion of the algorithm's author. The common steps of evolutionary algorithms are: generation of a set of possible solutions; assessment of the solutions by the objective function to find one or more best solutions; and modification of each solution by evolutionary operators, in accordance with the value of its objective function and with information about the objective-function values of the other solutions.
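The three common steps above can be sketched as a generic loop. The quadratic objective and the "pull towards the best plus Gaussian noise" operator below are illustrative choices of ours, not SOMA or any algorithm from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generic evolutionary-algorithm skeleton: generate, assess, modify.
def evolve(objective, dim, pop_size=20, generations=60, sigma=0.05):
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))        # generation
    for _ in range(generations):
        fitness = np.array([objective(p) for p in pop])       # assessment
        best = pop[np.argmin(fitness)].copy()
        # modification: move every solution towards the current best,
        # then perturb it with Gaussian noise
        pop = pop + 0.5 * (best - pop) + sigma * rng.normal(size=pop.shape)
        pop[0] = best                                         # elitism
    fitness = np.array([objective(p) for p in pop])
    return pop[np.argmin(fitness)]
```

Any concrete evolutionary algorithm, including SOMA below, differs from this skeleton mainly in the modification operator.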

In this work, we investigate the application of the Pontryagin maximum principle to solve the optimal control problem for a group of robots with phase constraints, and do not compare evolutionary algorithms. We applied one of the effective evolutionary algorithms, the self-organizing migrating algorithm (SOMA) [5,6], with the modification [7], to find the parameters, i.e., the initial conditions of the conjugate variables and the additional parameter *q*<sub>*n*+1</sub> for the terminal time. The modified SOMA includes the following steps.

Generate a population of *H* possible solutions according to

$$\bar{q}\_{i}^{j} = \xi(q\_{i}^{+} - q\_{i}^{-}) + q\_{i}^{-}, \; i = 1, \dots, n+1, \; j = 1, \dots, H,\tag{18}$$

where *q*<sub>*i*</sub><sup>+</sup> = 1, *q*<sub>*i*</sub><sup>−</sup> = −1, *i* = 1, ... , *n*, *q*<sub>*n*+1</sub><sup>+</sup> = 0.2, *q*<sub>*n*+1</sub><sup>−</sup> = −2.5, *H* is the cardinality of the population set, and *ξ* is a random number from 0 to 1. Normalize the first *n* components of each possible solution according to (16)

$$q\_i^j = \frac{\bar{q}\_i^j}{\sqrt{\sum\_{k=1}^n (\bar{q}\_k^j)^2}}, \; i = 1, \dots, n, \; j = 1, \dots, H. \tag{19}$$
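The initialization (18), (19) can be sketched as follows; the sizes *n* and *H* here are illustrative values of ours, and the bounds are the ones stated after (18):

```python
import numpy as np

rng = np.random.default_rng(0)

n, H = 4, 20                            # illustrative sizes, not from the paper
q_lo = np.array([-1.0] * n + [-2.5])    # q_i^- for i <= n, then q_{n+1}^-
q_hi = np.array([1.0] * n + [0.2])      # q_i^+ for i <= n, then q_{n+1}^+

# Eq. (18): sample every solution uniformly inside the box [q^-, q^+]
Q = rng.random((H, n + 1)) * (q_hi - q_lo) + q_lo

# Eq. (19): project the first n components of each solution onto the unit sphere
Q[:, :n] /= np.linalg.norm(Q[:, :n], axis=1, keepdims=True)
```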

In the optimization problem, we have to find the vector of optimal parameters **q** = [*q*<sub>1</sub> . . . *q*<sub>*n*+1</sub>]<sup>*T*</sup> that delivers the minimal value of the functional

$$J\_1(\mathbf{q}) = \sum\_{j=1}^{M} ||\mathbf{x}(t^+ + q\_{n+1}) - \mathbf{x}^{f,j}|| \to \text{min.} \tag{20}$$
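Evaluating (20) requires simulating the whole system up to the terminal time *t*<sup>+</sup> + *q*<sub>*n*+1</sub>. A minimal sketch of the functional, where `simulate` is a hypothetical placeholder of ours for integrating the robot-group dynamics (the paper does not define such a routine):

```python
import numpy as np

# Hedged sketch of Eq. (20): sum of distances between the simulated
# terminal state and the M target states x^{f,j}.
def J1(q, simulate, targets, t_plus):
    x_T = simulate(q, t_plus + q[-1])   # terminal state at time t^+ + q_{n+1}
    return sum(float(np.linalg.norm(x_T - xf)) for xf in targets)
```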

For each vector of parameters, we set a historical vector. Initially, the historical vectors contain zero elements

$$\tilde{q}\_{i}^{j} = 0, \; i = 1, \ldots, n+1, \; j = 1, \ldots, H. \tag{21}$$

Calculate the values of the functional for each possible solution

$$f\_j = J\_1(\mathbf{q}^j), \ j = 1, \ldots, H. \tag{22}$$

Find the best possible solution **q**<sup>*j*<sub>0</sub></sup> at the current stage of evolution

$$J\_1(\mathbf{q}^{j\_0}) = \min\{f\_1, \dots, f\_H\}. \tag{23}$$

For each historical vector, find the best vector **q**<sup>*j**</sup> among *K* randomly selected solutions of the current population

$$J\_1(\mathbf{q}^{j^\*}) = \min \{ f\_{j\_1}, \dots, f\_{j\_K} \}, \tag{24}$$

where *j<sub>i</sub>* ∈ {1, . . . , *H*}, *i* = 1, . . . , *K*. Transform each historical vector

$$\tilde{q}\_i^j \leftarrow \alpha \tilde{q}\_i^j + \beta (q\_i^{j^\*} - q\_i^j), \tag{25}$$

where *i* = 1, ... , *n* + 1, *j* = 1, ... , *H*, and *α* and *β* are parameters of the algorithm, positive numbers less than one. Set a step *t* = *δ*. Calculate new candidate values for each possible solution

$$\hat{q}\_i^j(t) = \begin{cases} q\_i^j + \tilde{q}\_i^j + t(q\_i^{j\_0} - q\_i^j), & \text{if } \xi < P\_{rt} \\ q\_i^j + \tilde{q}\_i^j, & \text{otherwise} \end{cases}\tag{26}$$

where *i* = 1, ... , *n* + 1, *ξ* is a random number from 0 to 1 drawn for each component, and *P<sub>rt</sub>* is a parameter of the algorithm. Check each component of the new vector against the restrictions

$$q\_i^j(t) = \frac{\hat{q}\_i^j(t)}{\sqrt{\sum\_{k=1}^n (\hat{q}\_k^j(t))^2}}, \ i = 1, \dots, n,\tag{27}$$

$$q\_{n+1}^{j}(t) = \begin{cases} q\_{n+1}^{+}, & \text{if } \hat{q}\_{n+1}^{j}(t) > q\_{n+1}^{+} \\ q\_{n+1}^{-}, & \text{if } \hat{q}\_{n+1}^{j}(t) < q\_{n+1}^{-} \\ \hat{q}\_{n+1}^{j}(t), & \text{otherwise} \end{cases} \tag{28}$$
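One migration move, covering the history update (25), the perturbed step towards the best solution (26), the normalization (27), and the clipping (28), can be sketched as follows. The variable names and parameter values are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 4                              # illustrative dimension
alpha, beta, P_rt = 0.9, 0.3, 0.3  # algorithm parameters (illustrative values)
q_plus, q_minus = 0.2, -2.5        # bounds on q_{n+1} from Eq. (18)

def migrate(q, h, q_star, q_best, t):
    """q: current solution (length n+1), h: its historical vector,
    q_star: locally selected leader (Eq. (24)), q_best: population best."""
    h = alpha * h + beta * (q_star - q)                      # Eq. (25)
    mask = rng.random(n + 1) < P_rt                          # per-component test
    q_new = q + h + np.where(mask, t * (q_best - q), 0.0)    # Eq. (26)
    q_new[:n] /= np.linalg.norm(q_new[:n])                   # Eq. (27)
    q_new[n] = np.clip(q_new[n], q_minus, q_plus)            # Eq. (28)
    return q_new, h
```

In the pseudocode of Algorithm 1 the history update is additionally scaled by a random factor; the sketch follows the plain form of (25).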

Calculate the functional for a new vector

$$f\_j(t) = J\_1(\mathbf{q}^{j}(t)). \tag{29}$$

If *f<sub>j</sub>*(*t*) < *f<sub>j</sub>*, then we replace the possible solution **q**<sup>*j*</sup> with the new vector

$$\mathbf{q}^{j} \leftarrow \mathbf{q}^{j}(t),\tag{30}$$

$$f\_j \leftarrow f\_j(t). \tag{31}$$

Increase *t*

$$t = t + \delta.\tag{32}$$

If *t* < *P<sub>length</sub>*, repeat calculations (26)–(32); *P<sub>length</sub>* is a parameter of the algorithm. Repeat calculations (22)–(32) for all possible solutions in the population. Then find the best solution (23) again and update the historical vectors (25). Repeat all stages *R* times. The final best vector is the solution of the optimization problem.
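The whole loop described above can be sketched end to end. The true *J*<sub>1</sub> requires simulating the robot group; a toy quadratic stands in here, and all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def J1(q):
    # toy stand-in for the functional (20)
    return float(np.sum((q - 0.1) ** 2))

n, H, K, R = 3, 12, 3, 30
alpha, beta, P_rt, delta, P_length = 0.9, 0.3, 0.3, 0.11, 3.0
q_lo = np.array([-1.0] * n + [-2.5])
q_hi = np.array([1.0] * n + [0.2])

def normalize(q):
    q = q.copy()
    q[:n] /= np.linalg.norm(q[:n])
    return q

Q = rng.random((H, n + 1)) * (q_hi - q_lo) + q_lo      # Eq. (18)
Q = np.array([normalize(q) for q in Q])                # Eq. (19)
hist = np.zeros_like(Q)                                # Eq. (21)
f = np.array([J1(q) for q in Q])                       # Eq. (22)
f_start = f.min()

for r in range(R):
    j0 = int(np.argmin(f))                             # Eq. (23)
    for j in range(H):
        cand = rng.integers(0, H, size=K)
        j_star = int(cand[np.argmin(f[cand])])         # Eq. (24)
        hist[j] = alpha * hist[j] + beta * (Q[j_star] - Q[j])  # Eq. (25)
        t = delta
        while t < P_length:
            mask = rng.random(n + 1) < P_rt
            q_new = Q[j] + hist[j] + np.where(mask, t * (Q[j0] - Q[j]), 0.0)  # Eq. (26)
            q_new = normalize(q_new)                   # Eq. (27)
            q_new[n] = np.clip(q_new[n], q_lo[n], q_hi[n])     # Eq. (28)
            f_new = J1(q_new)                          # Eq. (29)
            if f_new < f[j]:                           # Eqs. (30), (31)
                Q[j], f[j] = q_new, f_new
            t += delta                                 # Eq. (32)

best = Q[int(np.argmin(f))]
```

Because a solution is replaced only when the functional improves, the best value in the population never increases over the generations.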

The applied algorithm with the historical vector is called the modified SOMA. Setting the parameter *β* = 0 turns the modified SOMA back into the classical SOMA. Pseudocode of the modified SOMA has the following form, see Algorithm 1.

**Algorithm 1:** Modified SOMA for Optimal Control Problem.


**Algorithm 1:** *Cont.*

```
      if f_{j_l} < f_{j*} then
        j* = j_l                                        //*
      end if                                            //*
    end for                                             //*
    for (i = 1, ..., n + 1)  //* transformation of historical vectors
      q̃_i^j = α·q̃_i^j + Random·β·(q_i^{j*} − q_i^j)     //*
    end for                                             //*
    t = δ
    while t < P_length do  // termination condition
      s = 0
      for (i = 1, ..., n)  // calculation of new values of parameters
        if Random < P_rt then
          q̂_i^j(t) = q_i^j + q̃_i^j + t·(q_i^{j_0} − q_i^j)
        else
          q̂_i^j(t) = q_i^j + q̃_i^j
        end if
        s = s + (q̂_i^j(t))²
      end for
      for (i = 1, ..., n)  // normalization
        q_i^j(t) = q̂_i^j(t) / √s
      end for
      if Random < P_rt then
        q̂_{n+1}^j(t) = q_{n+1}^j + q̃_{n+1}^j + t·(q_{n+1}^{j_0} − q_{n+1}^j)
      else
        q̂_{n+1}^j(t) = q_{n+1}^j + q̃_{n+1}^j
      end if
      q_{n+1}^j(t) = q̂_{n+1}^j(t)
      if q̂_{n+1}^j(t) > q_{n+1}^+ then
        q_{n+1}^j(t) = q_{n+1}^+
      end if
      if q̂_{n+1}^j(t) < q_{n+1}^− then
        q_{n+1}^j(t) = q_{n+1}^−
      end if
      f_j(t) = J_1(q^j(t))  // estimation of new vector of parameters
      if f_j(t) < f_j then
        f_j = f_j(t)
        for (i = 1, ..., n + 1)
          q_i^j = q_i^j(t)  // transformation of vector of parameters
        end for
      end if
      t = t + δ
    end while
  end for (j = H)
end for (r = R)
```
In the pseudocode, the subroutine *Random* generates a random real number from 0 to 1, and the subroutine *Random(A)* generates a random integer from 0 to *A* − 1. We use \* in the comments to highlight the modifications of the modified SOMA in comparison with the original SOMA.

The effectiveness of the modified SOMA, as with all evolutionary algorithms, depends on the parameters that determine the number of computational operations, i.e., the number of elements in the initial population (*H*), the number of generations (*R*), and the number of evolutions (*P*). To evaluate a single solution, we need to simulate the whole system; thus, for the problem, we have to compute the functional *H* + *nRP* times, where *n* here denotes the number of migration steps per solution and depends on the algorithm parameters *P<sub>length</sub>* and *δ*.
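A quick estimate of this evaluation budget, with illustrative parameter values of ours:

```python
# Rough evaluation budget of the modified SOMA.
H, R, P = 30, 50, 30              # population, generations, evolutions (illustrative)
P_length, delta = 3.0, 0.11       # migration-loop parameters (illustrative)

steps = int(P_length / delta)     # migration steps per solution ("n" in the text)
evaluations = H + steps * R * P   # H initial evaluations + one per migration step
```

Since every evaluation simulates the whole robot group, this count, not the evolutionary bookkeeping, dominates the running time.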

As for all evolutionary algorithms, the convergence of the modified SOMA is probabilistic: the more solutions are examined, the higher the probability of finding the optimal one. In evolutionary algorithms, the value of the goal function typically decreases with the number of generations like a decaying exponential. If the solution is not improved over some number of generations, the search is stopped, and the best current solution is taken as the solution to the problem. The optimal control problem with phase constraints is not unimodal, and the search algorithm is not deterministic; thus, to find the solution, the algorithm was run multiple times.
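The stagnation-based stopping rule mentioned above can be sketched as follows; the `patience` and `tol` values are illustrative, since the paper does not fix them:

```python
def should_stop(best_values, patience=10, tol=1e-8):
    """Stop when the best functional value has not improved by more than
    tol over the last `patience` generations."""
    if len(best_values) <= patience:
        return False
    # compare the value `patience` generations ago with the best since then
    return best_values[-patience - 1] - min(best_values[-patience:]) <= tol
```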
