**4. Local Scheduling Rules for Multi-Queue Limited Buffers**

To reduce setup times and lessen their impact on the scheduling process, this paper develops a set of local scheduling rules to guide the assignment of jobs entering and leaving the multi-queue limited buffers. When a job enters a multi-queue limited buffer, the local scheduling rules employ the remaining capacity of max queue buffer (RCMQB) rule. When a job leaves a multi-queue limited buffer, the rules use the shortest setup time (SST) rule, the first available machine (FAM) rule, and the first-come first-served (FCFS) rule, with the SST rule taking priority [17].

#### *4.1. Rules for Jobs Entering Multi-Queue Limited Buffers*

• Remaining capacity of max queue buffer (RCMQB) rule. When ∃ $C_{i,j-1}$ and $t = C_{i,j-1}$,

$$\mathrm{Mca}_{i,j}(t) = \left\{ Bs_{j,a} \,\middle|\, \max\left\{ K_{j,a} - \operatorname{card}\left( WA_{j,a}(t) \right) \,\middle|\, K_{j,a} - \operatorname{card}\left( WA_{j,a}(t) \right) > 0 \right\} \right\}, \quad j \in \{2, \ldots, m\} \tag{30}$$

where $\mathrm{Mca}_{i,j}(t)$ is the set of lanes $Bs_{j,a}$ that a job finishing the previous stage $Oper_{j-1}$ can enter. The difference between the maximum number of spaces $K_{j,a}$ of lane $Bs_{j,a}$ and the number of jobs in its waiting queue $WA_{j,a}$ is the remaining capacity (available space) of lane $Bs_{j,a}$ at the current stage. When a job finishes the previous stage, namely when $t = C_{i,j-1}$, it enters the lane with the largest remaining capacity. When $\operatorname{card} \mathrm{Mca}_{j}(t) = 0$, no lane at the current stage has remaining space; when $\operatorname{card} \mathrm{Mca}_{j}(t) > 1$, multiple lanes can be entered.
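The RCMQB rule can be sketched as follows. The dictionary-based buffer representation and the names `pick_lane`, `lanes` are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the RCMQB (remaining capacity of max queue buffer) rule.
# Assumption: each lane Bs_{j,a} is represented by its capacity K_{j,a}
# and its current waiting queue WA_{j,a}(t) (a list of job ids).

def pick_lane(lanes):
    """lanes: dict lane_id -> (capacity K, waiting_queue WA).
    Returns the lane id with the largest remaining capacity
    K - card(WA), or None when no lane has free space."""
    remaining = {a: K - len(WA) for a, (K, WA) in lanes.items()}
    # Keep only lanes with free space: K - card(WA) > 0
    free = {a: r for a, r in remaining.items() if r > 0}
    if not free:          # card(Mca_j(t)) = 0: the buffer stage is full
        return None
    # card(Mca_j(t)) > 1: several lanes qualify; take the largest remainder
    return max(free, key=free.get)

lanes = {1: (4, ["J3", "J7"]), 2: (4, ["J1"]), 3: (3, ["J2", "J5", "J8"])}
print(pick_lane(lanes))  # lane 2: remaining capacities are 2, 3, 0
```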

#### *4.2. Rules for Jobs Leaving Multi-Queue Limited Buffers*

When a job exits the buffer and is assigned to a machine, two cases arise. If a machine $WS_{j,l}$ is available and several jobs are waiting in the buffer, that is, the number of selectable jobs to be processed in the multi-queue limited buffers is greater than 1 ($\sum_{a=1}^{A_j} QBC_{j,a}(t) > 1$), the job with the minimum setup time ($J_i \mid \min Ts_{i,j,l}$) is processed first according to the SST rule. If two or more jobs share the minimum setup time, the job with the longest waiting time in the buffer ($J_i \mid \max (t - Te_{i,j}),\ OA_{i,j,a}(t) = 1$) is selected in accordance with the first-in first-out (FIFO) rule. If machines are available and only one job is waiting to be processed, that is, the number of selectable jobs equals 1 ($\sum_{a=1}^{A_j} QBC_{j,a}(t) = 1$), the SST rule selects the machine with the minimum setup time. If two or more machines share the minimum setup time, the job is assigned according to the FAM rule. The multi-machine-and-multi-job case is a combination of the aforementioned conditions.
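The SST rule with its FIFO tie-break can be sketched as below; the tuple layout and the names `select_job`, `waiting` are illustrative assumptions rather than the paper's data structures:

```python
# Sketch of the job-dispatch rule when machine WS_{j,l} becomes free.
# Assumption: each waiting job carries its setup time Ts_{i,j,l} on the
# free machine and its buffer-entry time Te_{i,j}.

def select_job(waiting, t):
    """waiting: list of (job_id, setup_time Ts, entry_time Te).
    SST rule: choose the minimum setup time; break ties FIFO,
    i.e. take the job with the longest waiting time t - Te."""
    min_ts = min(ts for _, ts, _ in waiting)
    tied = [job for job in waiting if job[1] == min_ts]
    if len(tied) > 1:  # several jobs share the minimum setup time
        return max(tied, key=lambda job: t - job[2])[0]
    return tied[0][0]

waiting = [("J4", 3, 10), ("J9", 3, 6), ("J2", 5, 2)]
print(select_job(waiting, t=20))  # J4 and J9 tie on Ts; J9 has waited longer
```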

#### **5. Simulation Experiment**

The ICGA was implemented using MATLAB 2016a simulation software, running on a PC with the Windows 10 operating system, a 2.30 GHz Intel Core i5 CPU, and 6 GB of memory. The multi-queue limited buffers scheduling problem in a flexible flow shop with setup times originates from the production practice of bus manufacturers and is a complex scheduling problem. As standard examples are not available at present, the standard data of a flexible flow shop scheduling problem (FFSP) were employed to discuss and analyze the parameters of the improved compact genetic algorithm, so as to determine the optimal parameters [18]. In addition, multiple groups of large-scale and small-scale data were used to test the effect of the improved method on the optimization ability of the standard CGA. Furthermore, instance data of multi-queue limited buffers scheduling in a flexible flow shop with setup times were used to verify the performance of the ICGA in solving such problems.

#### *5.1. Analysis of Algorithm Parameters*

The parameter values of the algorithm have a significant effect on its optimization performance. The ICGA has three key parameters: the threshold value σ*T* that triggers the mapping operation, the adjustment amplitude of the learning rate β, and the number of new individuals generated in each generation *NP*. The d-class FFSP standard examples with five stages and 15 jobs, from the 98 standard examples proposed by Carlier and Neron, were applied to test each parameter. The example j15c5d3 was used for the orthogonal experiment. Three parameters, each with four levels (see Table 1), were examined in an orthogonal experiment of scale L16(4^3) [19]. The algorithm ran 20 times in each group of experiments, and the average makespan (*Cmax*) was used as the evaluation index.
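An L16(4^3) design covers three four-level factors in 16 runs instead of the 4^3 = 64 runs of a full factorial. A minimal sketch of constructing such an orthogonal array via GF(4) addition (bitwise XOR on two-bit level codes) is shown below; the level indices 0–3 are placeholders, since the actual parameter levels are those of Table 1:

```python
from itertools import combinations

# Sketch: build an OA(16, 3, 4, 2) orthogonal array, the design behind
# an L16(4^3) experiment. Columns are a, b, and a XOR b (GF(4) addition
# on 2-bit level codes); in any two columns, every ordered pair of
# levels appears exactly once across the 16 rows.

def l16_4_3():
    return [(a, b, a ^ b) for a in range(4) for b in range(4)]

rows = l16_4_3()
assert len(rows) == 16
for c1, c2 in combinations(range(3), 2):
    # Strength-2 orthogonality: 16 distinct (level, level) pairs per column pair
    assert len({(r[c1], r[c2]) for r in rows}) == 16
```

Each row then maps to one (σ*T*, β, *NP*) combination to be run 20 times.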


**Table 1.** Level of each parameter.

The experiments show how changes in each parameter influence the performance of the algorithm, as plotted in Figure 3. The figure shows that β and *NP* had a greater impact on the algorithm's performance, while σ*T* had the least. The best combination of parameters was σ*T* = 10, β = 1.5, *NP* = 4.

**Figure 3.** Trend chart of the algorithm's performance impacted by each parameter.

#### *5.2. Optimization Performance Testing on the ICGA*

In order to study the impact of the improved method (based on the probability density function of the Gaussian distribution mapping) on the optimization performance of the CGA, the ICGA was compared with the CGA, the bat algorithm (BA) [20], and the whale optimization algorithm (WOA) [21]. These are currently emerging intelligent optimization algorithms that are widely used in the fields of optimization and scheduling [22–24]. In the BA, the population size *NP* = 30, the pulse rate γ = 0.9, and the search pulse frequency range [*Fmin*, *Fmax*] = [0, 2]. In the WOA, the population size *NP* = 30. Each algorithm ran 30 times on each group of data, with the maximum number of generations set to 500 for all four algorithms. The average makespan *Cmax* and the average running time *Tcpu* were used as the evaluation metrics.
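For context, the standard CGA used as the baseline maintains a probability vector instead of a full population, which is why its running time is low in the comparisons below. The following is a minimal binary-encoded sketch with an illustrative one-max objective; it does not reproduce the paper's scheduling decoding, Gaussian-mapping improvement, or tuned parameters:

```python
import random

# Minimal standard compact GA (cGA): sample two individuals from a
# probability vector, hold a tournament, and shift the vector toward
# the winner by 1/pop_size. The one-max fitness is a placeholder.

def compact_ga(n_bits=20, pop_size=30, generations=500, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n_bits                       # probability vector
    sample = lambda: [1 if rng.random() < pi else 0 for pi in p]
    fitness = lambda x: sum(x)               # placeholder objective
    best = sample()
    for _ in range(generations):
        a, b = sample(), sample()
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n_bits):              # update only differing bits
            if winner[i] != loser[i]:
                p[i] += (1 / pop_size) if winner[i] else -(1 / pop_size)
                p[i] = min(1.0, max(0.0, p[i]))
        if fitness(winner) > fitness(best):
            best = winner
    return best

print(sum(compact_ga()))  # should approach n_bits = 20
```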

#### 5.2.1. Small-Scale Data Testing

The test data are from the 98 standard FFSP examples proposed by Carlier and Neron, which are divided into five classes. According to their difficulty, Neron et al. [25] further divided the examples into two categories: easy to solve and difficult to solve. Four groups of easy examples (j15c5a1, j15c5a2, j15c5b1, and j15c5b2) and four groups of difficult examples (j15c10c3, j15c10c4, j15c5d4, and j15c5d5) were chosen to evaluate the optimization performance of the ICGA on small-scale data. The test results are shown in Table 2.


**Table 2.** Small-scale data test results.

In the table, *LB* represents the lower bound of the makespan for each example, whose optimal value was given by Santos and Neron [25,26]. Table 2 shows that, even for small data sizes, the average running times of the CGA and ICGA on every group of data were significantly shorter than those of the BA and WOA, indicating that the CGA and ICGA converge faster. In terms of optimization performance on small-scale data, the four algorithms did not differ significantly, but overall the ICGA remained the best of the four. The ICGA achieved better solutions than the other three algorithms on all four groups of difficult examples (all four algorithms reached the lower bound of the makespan on the easy examples). In particular, compared with the CGA, the average relative error obtained by the ICGA when solving the two groups of j15c5d examples was reduced by 5.64% and 4.37%, respectively. This shows that the improved method substantially enhances the CGA's optimization performance on small-scale data.
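The relative-error figures quoted above can be read against the usual definition relative to the lower bound *LB*; a one-line sketch with illustrative numbers (not taken from Table 2) is:

```python
def relative_error(c_max, lb):
    """Percent relative error of a makespan against the lower bound LB:
    (Cmax - LB) / LB * 100. One common definition; the paper's exact
    formula is not restated in this section."""
    return (c_max - lb) / lb * 100.0

# Illustrative values only:
print(round(relative_error(c_max=212, lb=200), 2))  # 6.0
```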
