### 5.3. Instance Test of the FFSP–PB Global Optimization Algorithm

#### 5.3.1. Parameter Settings of the Optimization Algorithm

ICA, CGA, HNN, and SAA–HNN were adopted as the global optimization algorithms. Each was combined with the reentrant rules of the electric flat carriage and the workpiece transfer rules designed for the public buffer to solve the FFSP–PB, so that the optimization performance of the four algorithms could be compared and the efficiency of the local scheduling rules verified. The parameter settings of the four global optimization algorithms are shown in Table 4.


**Table 4.** Swarm evolutionary algorithm parameters.

#### 5.3.2. Simulation Results and Analysis

(1) Evaluation Index of Scheduling Results

Each of the four algorithms was tested on small-scale, medium-scale, and large-scale data. The total number *n* of processed workpieces was set to 12 for the small-scale data, 40 for the medium-scale data, and 80 for the large-scale data.
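As context for the main evaluation index, the makespan *Cmax* of a permutation flow shop sequence can be computed with the classic completion-time recurrence C[j][k] = max(C[j−1][k], C[j][k−1]) + p[j][k]. The sketch below is illustrative only; the job data are hypothetical and not the paper's instances:

```python
def makespan(proc_times):
    """Cmax of a permutation flow shop: proc_times[j][k] is the
    processing time of job j (in sequence order) on machine k."""
    n, m = len(proc_times), len(proc_times[0])
    C = [[0.0] * m for _ in range(n)]
    for j in range(n):
        for k in range(m):
            prev_job = C[j - 1][k] if j > 0 else 0.0   # machine k frees up
            prev_mach = C[j][k - 1] if k > 0 else 0.0  # job j leaves machine k-1
            C[j][k] = max(prev_job, prev_mach) + proc_times[j][k]
    return C[-1][-1]

# Hypothetical 3-job, 2-machine instance (not from the paper's data):
print(makespan([[3, 2], [1, 4], [2, 2]]))  # prints 11.0
```

Blocking constraints such as those of the FFSP–PB would tighten this recurrence further; the plain form above only shows how *Cmax* is read off the last job on the last machine.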

i. Small-scale data

The four algorithms were run 30 times for small-scale data, and the average values of the evaluation indexes obtained from 30 simulations are summarized in Table 5.


**Table 5.** Comparison of the evaluation indexes of four algorithms' scheduling results (small-scale data).

From the small-scale simulation results in Table 5, when solving the FFSP–PB, the main evaluation index makespan *Cmax* of the SAA–HNN algorithm decreases by 46.62, 27.97, and 59.63 compared with the ICA, CGA, and HNN algorithms, respectively, improvements of 26.42%, 17.71%, and 31.46%. Moreover, the total workpiece blockage time *TWBT* of the SAA–HNN algorithm decreases by 17.53, 12.28, and 23.41 compared with the other three algorithms, improvements of 63.93%, 55.39%, and 70.30%, respectively. The total plant factor *TPF* and the total workstation idle time also improve, indicating that using the SAA–HNN algorithm as the global optimization algorithm for the FFSP–PB improves every evaluation index and reduces production blockage on small-scale data.
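The improvement percentages quoted above are relative reductions of each index with respect to the baseline algorithm. A minimal helper makes the computation explicit; the numeric example uses hypothetical values, not the paper's measurements:

```python
def improvement_pct(baseline, improved):
    """Relative reduction of an evaluation index (e.g. Cmax) versus a baseline."""
    return (baseline - improved) / baseline * 100.0

# Hypothetical example: a baseline Cmax of 180.0 reduced to 140.0
print(round(improvement_pct(180.0, 140.0), 2))  # prints 22.22
```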

ii. Medium-scale and large-scale data

The four algorithms were run 30 times for medium-scale and large-scale data, respectively, and the average values of the evaluation indexes obtained from the 30 simulations under the two data scales are listed in Tables 6 and 7.

**Table 6.** Comparison of the evaluation indexes of four algorithms' scheduling results (medium-scale data).



**Table 7.** Comparison of the evaluation indexes of four algorithms' scheduling results (large-scale data).

From the simulation results in Tables 6 and 7, under the medium-scale data the main evaluation index makespan *Cmax* of the SAA–HNN algorithm decreases by 129.30, 96.52, and 188.47 compared with the ICA, CGA, and HNN algorithms, respectively, improvements of 15.50%, 12.04%, and 21.10%. Under the large-scale data, the *Cmax* of the SAA–HNN algorithm decreases by 415.23, 271.44, and 564.19 compared with the ICA, CGA, and HNN algorithms, respectively, improvements of 23.14%, 16.44%, and 29.03%. The other evaluation indexes of the SAA–HNN algorithm also improve under both data scales. The SAA–HNN algorithm therefore performs best and adapts well when solving the FFSP–PB on medium-scale and large-scale data.

A comprehensive analysis of the simulation results for the FFSP–PB at different data scales shows that the SAA–HNN algorithm, combined with the reentrant rules of the electric flat carriage and the workpiece transfer rules designed for the public buffer, effectively solves the FFSP–PB and reduces production blockage.

(2) Scheduling Evolutionary Process Analysis

The correlation between the makespan *Cmax* and the number of iterations of the four algorithms under actual production data is shown in Figure 5.

**Figure 5.** Correlation between makespan *Cmax* and the iterations of four algorithms.

Figure 5 shows that the HNN algorithm converges rapidly in the initial stage of evolution but falls into a local extremum prematurely because its energy function decreases monotonically; its evolution stagnates at the 31st generation, and its *Cmax* finally converges to 252. The ICA and CGA algorithms also optimize and converge rapidly in the initial stage; however, after evolving for a certain number of generations the populations converge as a whole, making them prone to falling into local extrema. Their evolution stagnates at the 221st and 185th generations, and their *Cmax* finally converge to 246 and 245, respectively. The SAA–HNN algorithm, which retains the fast optimization of the HNN algorithm, likewise converges rapidly in the initial stage of evolution and falls into a local extremum at the 127th generation. However, by introducing the idea of the simulated annealing algorithm, the SAA–HNN algorithm gains the ability to jump out of local extrema: it reactivates the evolutionary process at the 278th generation, and its *Cmax* finally converges to 233.
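The escape mechanism described above rests on the simulated-annealing acceptance rule: a worse neighbor is accepted with probability exp(−ΔE/T), which lets the search leave a local extremum that would trap a plain monotone descent such as HNN's. The sketch below is a generic Metropolis criterion, not the paper's exact SAA–HNN update:

```python
import math
import random

def accept(delta_e, temperature, rng=random.random):
    """Metropolis criterion: always accept improving moves (delta_e <= 0);
    accept a worsening move with probability exp(-delta_e / T)."""
    if delta_e <= 0:
        return True
    return rng() < math.exp(-delta_e / temperature)

# At high temperature worsening moves pass often, so the search can climb
# out of a local extremum; as T is cooled they are almost always rejected,
# and the search settles into the best basin found.
random.seed(0)
print(sum(accept(1.0, 5.0) for _ in range(1000)))   # many acceptances at T = 5
print(sum(accept(1.0, 0.05) for _ in range(1000)))  # almost none at T = 0.05
```

Cooling the temperature over the generations reproduces the behavior seen in Figure 5: early exploration, a pause at a local extremum, then a late escape to a better *Cmax*.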

The above analysis of the scheduling evolutionary process shows that, when solving the FFSP–PB with the improved local scheduling rules, the SAA–HNN algorithm not only retains the fast optimization characteristics of the HNN algorithm but also jumps out of local extrema and continues evolving. It therefore achieves a better optimization effect than the ICA and CGA algorithms.
