#### 4.2.2. Simulated Annealing

Simulated annealing (SA) is an iterative procedure for solving combinatorial optimization problems. SA mimics the crystallization (annealing) process of a physical system over a discrete search space [57]. The SA algorithm is governed by its cooling criterion and uses three variables: the initial temperature (*T*), the final temperature (*Tmin*) and the cooling rate (*β*). SA algorithms have been extensively proposed in the literature for allocating DG units at low computational cost, and they perform effectively in solving reliability-criteria-based optimization problems [2,57]. The advantages of SA algorithms are robustness, simplicity of implementation, and the capability to provide feasible solutions to combinatorial problems. Nevertheless, SA algorithms can incur long computation times without upper bounds, may terminate at local minima, and give no indication of the gap between a local minimum and the global minimum [56,85]. In Koziel et al. [86], the authors presented a feasibility-preserving SA algorithm for DN reconfiguration with the objectives of minimising power loss and improving the voltage profile. The study concluded that the proposed algorithm was more efficient than some published population-based intelligence search methods with respect to computational cost and solution repeatability. However, the optimality of the solution was not reported, and the harmonic contents and dynamic stability of the networks were not evaluated in the proposed work.
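The cooling mechanism described above (initial temperature *T*, final temperature *Tmin* and cooling rate *β*) can be illustrated with a minimal, generic SA sketch. This is not the algorithm of [86]; the geometric cooling schedule, the parameter values and the toy one-dimensional objective are all illustrative assumptions.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, T=1000.0, T_min=1e-3, beta=0.95):
    """Minimise `cost` by simulated annealing (illustrative sketch).

    T      -- initial temperature
    T_min  -- final temperature (termination criterion)
    beta   -- cooling rate, 0 < beta < 1 (geometric schedule, an assumption)
    """
    x, fx = x0, cost(x0)
    best, f_best = x, fx
    while T > T_min:
        x_new = neighbour(x)
        f_new = cost(x_new)
        delta = f_new - fx
        # Always accept improving moves; accept worse moves with
        # probability exp(-delta / T), which shrinks as T cools.
        if delta < 0 or random.random() < math.exp(-delta / T):
            x, fx = x_new, f_new
            if fx < f_best:
                best, f_best = x, fx
        T *= beta  # cool the system
    return best, f_best

# Toy example: minimise f(x) = x^2 over the integers via +/-1 moves.
random.seed(0)
sol, val = simulated_annealing(
    cost=lambda x: x * x,
    neighbour=lambda x: x + random.choice([-1, 1]),
    x0=40,
)
```

The run length is bounded only by the schedule (here roughly log(*Tmin*/*T*)/log *β* iterations), which reflects the computation-time caveat noted above: a slower cooling rate improves solution quality but lengthens the search.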

#### 4.2.3. Particle Swarm Optimization

Particle swarm optimization (PSO) methods are inspired by the social behaviour of flocking birds and schooling fish. In PSO, each particle represents a candidate solution, a single point across all dimensions of a complex search space, and the particles move stochastically through that space. The system is initialised with a number of randomly generated solutions. During each iteration, the particles assess their positions using a fitness function; each particle then updates its previous "best" position, together with the swarm's best, to refine the final solution [2,57,87]. The advantages of PSO are robustness, simple implementation and the ability to run simultaneous computations in a short time. PSO algorithms require only a few parameters to tune and converge quickly, and PSO can also be used effectively to solve DG allocation problems with inaccurate mathematical models. However, the initial design parameters are difficult to define in PSO, and in complex DG allocation problems PSO may converge prematurely and terminate at a local minimum [6,56]. In Prabpal et al. [88], the PSO technique was applied to obtain optimal sizes and locations of multiple BESS and PVDG units with the objectives of minimising total cost, minimising the impact of large-scale penetration of BESS, improving the voltage profile and increasing the stability of the power system. The results showed that the PSO and GA methods performed comparably well in terms of the number of iterations and the quality of solutions. Shahzad et al. [23], Jamian et al. [89], Rathore et al. [90] and Zeinalzadeh et al. [91] proposed multi-objective PSO methods for determining optimal locations and sizes of BESSs/PVDGs to minimise power losses and improve voltage profiles. However, the uncertainties of the intermittent DGs and BESSs were not modelled, and the impact of their variable output power on the dynamic stabilities and harmonic contents of the distribution networks was not considered. Only the uncertainties related to BESS/PVDG market scenarios were evaluated in Rathore et al. [90].
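The particle update described above, in which each particle is attracted towards its own best position and the swarm's best, can be sketched with a basic global-best PSO. This is a generic illustration, not the multi-objective formulations of [23,88-91]; the inertia weight, acceleration coefficients, bounds and the sphere test function are all assumed for the example.

```python
import random

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-10.0, hi=10.0):
    """Minimise `cost` with a basic global-best PSO (illustrative sketch).

    w       -- inertia weight (assumed value)
    c1, c2  -- cognitive and social acceleration coefficients (assumed)
    """
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    pbest_f = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # swarm's best position

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull towards personal best
                # + pull towards the swarm's global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = cost(pos[i])
            if f < pbest_f[i]:                # update personal best
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:               # update global best
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy example: minimise the sphere function sum(x_d^2).
random.seed(1)
best, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
```

Only a handful of parameters (w, c1, c2, swarm size) need tuning, which is the simplicity advantage cited above; conversely, poor choices of these initial design parameters are what drive the premature convergence noted in [6,56].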
