2.3.3. Hybrid WOAPSO Algorithm

In this section, the principle of the proposed hybrid WOAPSO algorithm is briefly described. In general, the performance of any optimization technique on an NLP problem is affected by premature convergence and a slow convergence rate. Some algorithms explore the search space well but converge slowly, while others explore it less diversely and fail to find the optimal solution. Maintaining the balance between exploration and exploitation is therefore a critical issue in any optimization algorithm. WOA has good exploration capability, but its exploitation depends on the distance between a whale and the best position of the prey; if this distance is large, convergence takes more time. PSO, on the other hand, has a fast convergence rate but is prone to premature convergence because of its weak global search capability: if the global best solution becomes trapped in a local optimum, the remaining particles stop exploring the search space, follow the global best, and become trapped as well. It can therefore be concluded that WOA explores the search space well but suffers from a slow convergence rate, whereas PSO does not explore the search space well but has good local search capability. The aim of the proposed hybrid algorithm is to enhance the exploitation capability of WOA by embedding the PSO algorithm to find an optimal solution around the region explored by WOA. The proposed approach is a mixed, co-evolutionary one in which PSO is used as a component of WOA, so the hybrid utilizes the strengths of both algorithms to avoid premature convergence and local optima. Figure 2 depicts the process flow chart of the proposed algorithm. The mathematical model of the proposed algorithm is given in the following steps:

**Step 1**: Initialize a random population of search agents with positions and velocities defined as:

$$X\_i = \left(x\_i^1, \dots, x\_i^d, \dots, x\_i^n\right), \text{ for } i = 1, 2, \dots, N \tag{15}$$

$$V\_i = \left(v\_i^1, \dots, v\_i^d, \dots, v\_i^n\right), \text{ for } i = 1, 2, \dots, N \tag{16}$$
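As a sketch, the initialization of Equations (15) and (16) can be written as follows; the population size, dimension, and search-space bounds used here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 30             # number of search agents (assumed)
n = 10             # problem dimension (assumed)
lb, ub = -5.0, 5.0 # assumed search-space bounds

# Equations (15)-(16): each row X[i] = (x_i^1, ..., x_i^n) is one agent's
# position; velocities start at zero before the first PSO-style update.
X = rng.uniform(lb, ub, size=(N, n))
V = np.zeros((N, n))
```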

**Step 2**: Calculate the fitness of each search agent. For a minimization problem, *X\** is the position corresponding to the minimum fitness; for a maximization problem, *X\** is the position corresponding to the maximum fitness. *X\** denotes the best search agent.
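Step 2 amounts to evaluating the objective on every agent and keeping the best one. The sphere function below is only an assumed example objective for illustration.

```python
import numpy as np

def sphere(x):
    """Assumed example objective (minimization): f(x) = sum(x^2)."""
    return float(np.sum(x**2))

rng = np.random.default_rng(1)
X = rng.uniform(-5.0, 5.0, size=(30, 10))

# Fitness of every agent; for minimization, X* is the argmin row.
fitness = np.array([sphere(x) for x in X])
X_star = X[np.argmin(fitness)].copy()  # best search agent X*
```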

**Step 3**: Update the coefficients *A* and *C* using Equations (10) and (11), where *l* is a random number in [–1, 1] and *p* is a random probability in [0, 1].

**Step 4**: If *p* < 0.5 and |*A*| ≥ 1, then select a random search agent (*X_rand*) in the search space and update the position of the search agent using Equations (9) and (13).

Else if *p* < 0.5 and |*A*| < 1, then update the position of the search agent using Equations (13) and (14).

Else (*p* ≥ 0.5), update the position of the search agent using Equation (14).
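The three branches of Step 4 can be sketched as one WOA position update. The operators below follow the standard WOA formulation (linearly decreasing *a*, encircling, and logarithmic-spiral moves), which Equations (9)–(14) are assumed to match; the paper's exact equations are not reproduced here.

```python
import numpy as np

def woa_update(X, X_star, t, T, rng):
    """One WOA position update over all agents (Step 4)."""
    N, n = X.shape
    a = 2.0 * (1.0 - t / T)                 # a decreases linearly from 2 to 0
    X_new = np.empty_like(X)
    for i in range(N):
        A = 2.0 * a * rng.random() - a      # coefficient A, Eq. (10)
        C = 2.0 * rng.random()              # coefficient C, Eq. (11)
        p = rng.random()                    # branch probability in [0, 1]
        l = rng.uniform(-1.0, 1.0)          # spiral parameter in [-1, 1]
        if p < 0.5 and abs(A) >= 1:
            # Exploration: move relative to a randomly chosen agent.
            X_rand = X[rng.integers(N)]
            X_new[i] = X_rand - A * np.abs(C * X_rand - X[i])
        elif p < 0.5:
            # Exploitation: encircle the best agent X*.
            X_new[i] = X_star - A * np.abs(C * X_star - X[i])
        else:
            # Spiral (bubble-net) move around X*.
            D = np.abs(X_star - X[i])
            X_new[i] = D * np.exp(l) * np.cos(2.0 * np.pi * l) + X_star
    return X_new

rng = np.random.default_rng(2)
X0 = rng.uniform(-5.0, 5.0, size=(20, 6))
X1 = woa_update(X0, X0[0], t=10, T=100, rng=rng)
```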

**Step 5**: Update the velocity of each search agent based on the best position of the search agents (*X\**) in the search space using the following equation:

$$v\_i^d(t+1) = w \times v\_i^d(t) + c\_1 \times r\_1 \times \left(X^\* - x\_i^d(t)\right) \tag{17}$$

**Step 6**: Update the position of the particles using the velocity obtained from Equation (17), i.e., $x\_i^d(t+1) = x\_i^d(t) + v\_i^d(t+1)$.
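Steps 5 and 6 can be sketched together. Note that Equation (17) is a simplified PSO velocity update with only one attraction term toward the global best *X\** (no personal-best term); the inertia weight and acceleration coefficient values below are assumptions, not values stated in the paper.

```python
import numpy as np

def pso_update(X, V, X_star, w=0.7, c1=1.5, rng=None):
    """Steps 5-6: velocity update of Eq. (17), then the position update."""
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(X.shape)
    # Eq. (17): inertia term plus a single pull toward the best agent X*.
    V_new = w * V + c1 * r1 * (X_star - X)
    # Step 6: move each particle by its new velocity.
    X_new = X + V_new
    return X_new, V_new

# Usage: if every agent already sits at X* with zero velocity, nothing moves.
X0 = np.ones((5, 3))
Xn, Vn = pso_update(X0, np.zeros((5, 3)), X0[0], rng=np.random.default_rng(3))
```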

**Step 7**: Go to Step 3 until the termination criterion is met. The algorithm terminates when either the maximum number of iterations is reached or the minimum error criterion is attained.

**Step 8**: After the last iteration, the fitness value of the returned *X\** represents the global minimum (for a minimization problem), and the corresponding position represents the solution of the problem.
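Putting Steps 1–8 together, a minimal self-contained sketch of the hybrid loop follows. It assumes the standard WOA operators for Equations (9)–(14) and illustrative parameter values (*w*, *c1*, population size, iteration count); the bound clipping and the greedy update of *X\** after every move are implementation choices, not details specified by the paper.

```python
import numpy as np

def woapso(f, lb, ub, n, N=30, T=200, w=0.7, c1=1.5, seed=0):
    """Sketch of the hybrid WOAPSO loop for a minimization problem."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(N, n))          # Step 1: positions
    V = np.zeros((N, n))                          # Step 1: velocities
    fit = np.array([f(x) for x in X])
    X_star = X[np.argmin(fit)].copy()             # Step 2: best agent X*
    for t in range(T):                            # Step 7: main loop
        a = 2.0 * (1.0 - t / T)                   # Step 3: coefficients
        for i in range(N):
            A = 2.0 * a * rng.random() - a
            C = 2.0 * rng.random()
            p, l = rng.random(), rng.uniform(-1.0, 1.0)
            if p < 0.5 and abs(A) >= 1:           # Step 4: exploration
                X_rand = X[rng.integers(N)]
                X[i] = X_rand - A * np.abs(C * X_rand - X[i])
            elif p < 0.5:                         # Step 4: encircling
                X[i] = X_star - A * np.abs(C * X_star - X[i])
            else:                                 # Step 4: spiral move
                D = np.abs(X_star - X[i])
                X[i] = D * np.exp(l) * np.cos(2.0 * np.pi * l) + X_star
            # Step 5, Eq. (17): pull the agent toward X*.
            V[i] = w * V[i] + c1 * rng.random(n) * (X_star - X[i])
            # Step 6: position update, clipped to the assumed bounds.
            X[i] = np.clip(X[i] + V[i], lb, ub)
            if f(X[i]) < f(X_star):               # keep the best-so-far
                X_star = X[i].copy()
    return X_star, f(X_star)                      # Step 8

best, val = woapso(lambda x: float(np.sum(x**2)), -5.0, 5.0, n=5, N=20, T=100)
```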
