#### 5.2.2. Large-Scale Data Testing

Six groups of data (two groups each with 80 jobs and four stages, 80 jobs and eight stages, and 120 jobs and four stages) were used to test the optimization performance of the ICGA in solving large-scale complex problems. The test results are shown in Table 3.


**Table 3.** Large-scale data test results.

As can be seen from the table, when solving large-scale problems, the speed advantage of the CGA and the ICGA became more pronounced. In particular, for the two groups of j80c8d examples, the average running times of the CGA and the ICGA were significantly smaller than those of the BA and the WOA. The results of the first four groups of examples show that when the number of stages increased from four to eight, the average running times of the BA and the WOA grew sharply, whereas those of the CGA and the ICGA grew only slightly. This indicates that the CGA and the ICGA are well suited to large-scale scheduling optimization problems with many stages. In general, when solving large-scale problems, the optimization performance of the BA and the ICGA was better than that of the WOA and the CGA, while in terms of optimization speed, the average running time of the BA was much larger than that of the other three algorithms for every group of data.

It can also be seen from the table that although the CGA had the fastest optimization speed for each group of large-scale data, its optimization performance was the worst among the four algorithms, and the gap between the CGA and the other three algorithms widened as the data size increased. This is mainly because the CGA uses roulette-wheel selection over the probabilistic model to pick gene fragments when generating new individuals. In the initial stage of the probabilistic model, each gene fragment is selected with probability 1/*n*. When the data size is large (*n* = 80 and *n* = 120), the initial probability of each gene fragment in the probability matrix is correspondingly small (1/80 and 1/120). Moreover, during evolution, the probabilities of unselected gene fragments are repeatedly reduced while those of selected fragments are increased, so that after several generations the probabilities at certain positions become much larger than those at the other positions. Although this speeds up the evolution of the algorithm, it also reduces the diversity of the new individuals generated by the probabilistic model, which means premature convergence. Compared with the CGA, the ICGA achieved a greatly improved optimization effect on each group of large-scale data, indicating that the ICGA, to some extent, overcame the CGA's tendency to converge prematurely.
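The diversity loss described above can be sketched in a few lines of Python. This is only an illustration of the mechanism, not the paper's exact model: the reinforcement constant (`reward`) and the single-selection-per-generation update are assumptions.

```python
import random

def roulette_select(probs):
    """Pick an index with probability proportional to probs[i] (roulette wheel)."""
    r = random.random() * sum(probs)
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if acc >= r:
            return i
    return len(probs) - 1

def evolve(n=80, generations=50, reward=0.05):
    probs = [1.0 / n] * n               # initial probability 1/n per gene fragment
    for _ in range(generations):
        chosen = roulette_select(probs)
        probs[chosen] += reward         # the selected fragment is reinforced ...
        total = sum(probs)
        probs = [p / total for p in probs]  # ... so unselected fragments decay
    return probs

random.seed(1)
final = evolve()
# After a few dozen generations a handful of fragments hold most of the
# probability mass: the diversity loss (premature convergence) described above.
print(max(final), min(final))
```

With *n* = 80 the initial probability is only 1/80 per fragment, so even a small reinforcement quickly makes a few positions dominate the roulette wheel.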

By testing the four algorithms on both large-scale and small-scale data, we can conclude that, compared with the CGA, the ICGA has a stronger capability to keep evolving and to jump out of local extrema while retaining the CGA's characteristic of fast convergence; that is, the ICGA, to some extent, overcomes the CGA's tendency to converge prematurely. At the same time, compared with the BA and the WOA, the ICGA has an obvious speed advantage on both small-scale and large-scale data. The ICGA is therefore suitable for solving complicated scheduling optimization problems with many stages.

#### *5.3. Instance Test on Multi-Queue Limited Buffers in Flexible Flow Shops with Setup Times*

#### 5.3.1. Establishing Simulation Data

The simulation data of the production operations in the body shop and paint shop of the bus manufacturer were established as follows.

#### 1. Parameters in the shop model

The body shop of the bus manufacturer is a rigid flow shop with multiple production lines, which can be simplified into one production stage. The production of the paint shop is simplified to three stages [27]. The simulation data for scheduling thus include four stages, namely *Oper*1, *Oper*2, *Oper*3, *Oper*4, whose numbers of parallel machines *Mj* are {3, 2, 3, 2}. The buffer between the body shop and the paint shop is a multi-queue limited buffer, so the buffer of stage *Oper*2 in the scheduling simulation data is set as a multi-queue limited buffer. The number of lanes in buffer *Bu*2 of stage *Oper*2 is 2, namely *A*2 = 2. The number of spaces in lane *Bs*2,1 is 2, namely *K*2,1 = 2, and that of lane *Bs*2,2 is 2, namely *K*2,2 = 2. That is to say, the multi-queue limited buffer has two lanes and each lane has two spaces.
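The buffer structure described above can be sketched as a minimal Python class (the class and method names are illustrative, not from the paper's implementation):

```python
# Multi-queue limited buffer: Bu2 with A2 = 2 lanes, each lane K = 2 spaces.
class MultiQueueBuffer:
    def __init__(self, lane_capacities):
        # one FIFO list per lane; lane j may hold at most lane_capacities[j] jobs
        self.capacities = lane_capacities
        self.lanes = [[] for _ in lane_capacities]

    def try_enter(self, lane, job):
        """Admit a job into a lane only if the lane still has a free space."""
        if len(self.lanes[lane]) < self.capacities[lane]:
            self.lanes[lane].append(job)
            return True
        return False                      # lane full: the job is blocked upstream

    def leave(self, lane):
        """Jobs leave a lane in FIFO order."""
        return self.lanes[lane].pop(0)

bu2 = MultiQueueBuffer([2, 2])            # A2 = 2 lanes, K2,1 = K2,2 = 2
bu2.try_enter(0, "J8"); bu2.try_enter(0, "J4")   # lane Bs2,1 holds two jobs
bu2.try_enter(1, "J9")                           # lane Bs2,2 holds one job
print(bu2.try_enter(0, "J3"))  # -> False, lane Bs2,1 is full
print(bu2.try_enter(1, "J3"))  # -> True, J3 enters lane Bs2,2
```

The capacity check in `try_enter` plays the role of the limited-buffer constraint on *card*(*WA*) discussed in Section 5.3.3.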

In the production of the paint shop, the machine must be cleaned and the production equipment adjusted if the model and color of the buses successively processed on it differ. Therefore, the simulation uses changes in model and color as the basis for calculating setup times. Table 4 shows the setup-time parameters applied when the model and color of the buses successively processed on a machine change. When a bus is assigned from the buffer to a machine of the next stage, the setup time of the machine is calculated using Equation (16).


**Table 4.** Model parameters.

#### 2. Parameters of processing the object

The information of the bus model and color properties is shown in Table 5. The number of bus properties is 2, namely *X* = 2. *Prop*1 represents the model property of the bus, while *Prop*2 denotes the color property. The values of the model property (*PropValue*1) are {*BusType*1, *BusType*2, *BusType*3}, and the values of the color property (*PropValue*2) are {*BusColor*1, *BusColor*2, *BusColor*3}. Suppose the two successively processed buses on machine *WS*2,1 of stage *Oper*2 are buses *J*1 and *J*5. If the model properties are *prop*1,1 = *BusType*1 and *prop*1,5 = *BusType*1, and the color properties are *prop*2,1 = *BusColor*1 and *prop*2,5 = *BusColor*3, then *prop*1,1 = *prop*1,5 and *prop*2,1 ≠ *prop*2,5. Hence, *Nsrv*2,1,5 = 1. Using Equation (16), the setup time is obtained as *Ts*5,2,2 = *Tsp*2,2 = 4. Table 6 shows the standard processing time for bus production.
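The property comparison above can be sketched as follows. This is a hedged illustration of the setup-time logic, not the paper's Equation (16): the assumption here is that a single setup-time parameter applies whenever at least one property changes, which matches the *J*1 → *J*5 example.

```python
# Sketch: a setup is incurred on a machine when the model or color of
# consecutively processed buses differs. `tsp` stands in for the stage's
# setup-time parameter (Tsp2,2 = 4 in the example from the text).
def setup_time(prev_props, next_props, tsp):
    """Setup time when job `next` follows job `prev` on the same machine.

    prev_props / next_props: dicts of property name -> value.
    """
    changed = any(prev_props[k] != next_props[k] for k in prev_props)
    return tsp if changed else 0

# The J1 -> J5 example: same model, different color, so a setup of 4 is needed.
j1 = {"model": "BusType1", "color": "BusColor1"}
j5 = {"model": "BusType1", "color": "BusColor3"}
print(setup_time(j1, j5, tsp=4))  # -> 4
```

If both properties matched, the function would return 0, mirroring the *Ts*3,2,2 = 0 case analyzed later in Section 5.3.3.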

**Table 5.** Information of bus model and color properties.



**Table 6.** Standard processing time for bus production.

#### 5.3.2. Simulation Scheme

The scheduling problem of the bus manufacturer was investigated by using the ICGA, BA, WOA, and standard CGA as the global optimization algorithm, combined with local dispatching rules in the multi-queue limited buffers. This study further analyzed the optimization performance of the ICGA combined with local dispatching rules in solving the multi-queue limited buffer scheduling problems in a flexible flow shop with setup times. A total of eight groups of simulation schemes were designed: schemes 1–4 employed the FIFO rule as their local dispatching rule, while schemes 5–8 adopted the RCMQB rule when entering the buffer and used the SST, FAM, and FCFS rules as local dispatching rules when leaving the buffer, with the SST rule taking priority. The information of the eight simulation schemes is shown in Table 7.


**Table 7.** The eight groups of simulation schemes.

#### 5.3.3. Simulation Results and Analysis

#### 1. Evaluation index of scheduling results

In the optimization process, the makespan *Cmax* was used as the fitness function value of the global optimization algorithm. Meanwhile, a number of evaluation indexes related to the actual production line were established, including *Tcpu*, *TWIP*, *TWT*, *FUR*, *TS* and *TPB*. For all indexes except *FUR*, smaller values are better.

The eight sets of simulation schemes were each run 30 times, and the averages of the 30 simulation results are presented in Table 8. As shown in the table, under the same global optimization algorithm, every metric of schemes 5–8 improved to some extent relative to schemes 1–4, except *Tcpu*. Among the metrics, the improvements in *TWIP*, *TS* and *TPB* were the most obvious. This is mainly because schemes 5–8 adopted the RCMQB rule when entering the multi-queue limited buffers, which allocates buffer resources more reasonably and reduces the occurrence of blocking, while the SST, FAM and FCFS rules adopted when leaving the buffer more effectively steer jobs toward the machine requiring the smallest change of properties, which helps reduce setup times. Further, since *TWIP* is the sum of the time a job stays in the buffer, the blocking time of the job, and the setup times, these three evaluation indexes improved significantly. The reduction in blocking time and setup times in turn reduces the makespan of the whole process and the idle time of the machines; therefore, *Cmax*, *TWT* and *FUR* of schemes 5–8 were also optimized to some extent. The table also shows that, compared with schemes 1–4, although *Tcpu* increased for all of schemes 5–8, the rate of increase was small. This indicates that the more complicated local rules adopted in schemes 5–8 made full use of the capacity of the multi-queue limited buffers and effectively reduced the blocking time and setup times, while their running-time cost was small and did not have much impact on the speed of the algorithm.


**Table 8.** Evaluation index comparison of scheduling results of eight schemes.

The comparison among schemes 1–4 and among schemes 5–8 shows that the running speed of the CGA and the ICGA was significantly faster than that of the BA and the WOA. The average running time of the BA reached 555.56 s, which is obviously unsuitable for solving practical problems. In terms of optimization performance, under the two different sets of local dispatching rules, the optimization effect of the ICGA on *Cmax* was the best among the four algorithms. In particular, compared with the CGA, the optimization effect of the ICGA was significantly improved. These results show that the ICGA, to some extent, overcame the CGA's tendency to converge prematurely while maintaining the CGA's fast running speed. This is mainly because, in the early stage of evolution, there is no large probability value in the probabilistic model, so the ICGA still evolves following the procedure of the CGA and thus retains the CGA's fast early convergence. In the later stage of evolution, when the algorithm falls into a local extremum, the ICGA maps the original probabilistic model to a new probabilistic model through the probability density function of the Gaussian distribution. In this way, while the distribution of probability values of the original probabilistic model is kept unchanged, the ICGA's search range over feasible solutions is expanded and the diversity of population individuals is improved, so the algorithm can jump out of the local extremum and continue to evolve.
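As a rough illustration of the remapping idea (the paper defines the exact Gaussian mapping; the form below, a Gaussian pdf centred on the largest probability, is only an assumption chosen because it preserves the ordering of probability values while shrinking the gaps between them):

```python
import math

def gaussian_remap(probs, sigma=0.5):
    """Remap a concentrated probability vector through a Gaussian pdf.

    For p <= p_max, exp(-(p - p_max)^2 / (2*sigma^2)) is increasing in p,
    so the ranking of fragments is preserved while large and small
    probabilities are pulled closer together, restoring diversity.
    """
    p_max = max(probs)
    remapped = [math.exp(-((p - p_max) ** 2) / (2 * sigma ** 2)) for p in probs]
    total = sum(remapped)
    return [v / total for v in remapped]

concentrated = [0.85, 0.05, 0.05, 0.05]   # a prematurely converged model
flattened = gaussian_remap(concentrated)
# Ordering is preserved, but the distribution is far flatter.
print(flattened)
```

The flatter vector gives previously starved gene fragments a realistic chance of being selected again, which is the "jump out of the local extremum" behaviour described above.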

From the above analysis, it can be concluded that, compared with the other seven schemes, the combination of the ICGA and the local dispatching rules adopted in scheme 8 was the best: it reduced the setup times of the machines, effectively decreased the blocking effect of the limited buffers, arranged the jobs into and out of the multi-queue limited buffers more reasonably, and assigned the processing tasks in an orderly way. All evaluation metrics improved, and the multi-queue limited buffer scheduling problem in a flexible flow shop with setup times was solved more effectively.

#### 2. Gantt chart analysis of the scheduling result

Figure 4 is the Gantt chart of the scheduling result of scheme 8, with the time axis as the abscissa and the machines of each stage as the ordinate. The green part indicates the residence time of a bus in the buffer, the red part indicates the setup time when successively processed buses have different properties, and the blue part denotes the blocking time of a bus on a machine after the completion of its process. The processing route of *J*3 was *WS*1,1, *b*2,2,1, *b*2,2,2, *WS*2,2, *b*3,1,1, *WS*3,1, *b*4,1,1, *WS*4,1. In Figure 4, we can see that at time *t* = 44, *J*3 completed processing on machine *WS*1,1 of stage *Oper*1 (the completion time of *J*3 was 44, namely *C*3,1 = 44). At this time, the first lane *Bs*2,1 of buffer *Bu*2 had two jobs waiting to be processed (*J*8 and *J*4), that is, *WA*2,1(*t*) = {*J*8, *J*4} and *card*(*WA*2,1(*t*)) = 2, while the second lane *Bs*2,2 had one job waiting to be processed (*J*9), that is, *WA*2,2(*t*) = {*J*9} and *card*(*WA*2,2(*t*)) = 1. The capacity of lane *Bs*2,1 was 2 (*K*2,1 = 2), so *card*(*WA*2,1(*t*)) ≤ *K*2,1, and the capacity of lane *Bs*2,2 was 2 (*K*2,2 = 2), so *card*(*WA*2,2(*t*)) ≤ *K*2,2; both satisfy the limited-buffer constraint of Equation (10). At this time, based on the RCMQB rule, which regulates a job's access to the buffer, we obtain *Mca*3,2(*t*) = *Bs*2,2 from Equation (26). *J*3 was therefore assigned to space *b*2,2,1 in lane *Bs*2,2 of stage *Oper*2, waiting to be processed at stage *Oper*2. The entry time of the bus into the buffer (*Te*3,2) was equal to *C*3,1, which satisfied the limited-buffer constraint of Equation (8).
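A lane-choice step consistent with the behaviour described above can be sketched as follows. The paper defines the RCMQB rule precisely in Equation (26); the rule used here (among lanes with free capacity, pick the one with the fewest waiting jobs) is an assumption that reproduces the *t* = 44 decision.

```python
def choose_lane(lanes, capacities):
    """Pick a lane for an arriving job, or None if every lane is full.

    lanes: list of job lists (the waiting sets WA); capacities: list of K values.
    Only lanes satisfying card(WA) < K are candidates; among them the lane
    with the fewest waiting jobs is chosen.
    """
    candidates = [j for j, lane in enumerate(lanes) if len(lane) < capacities[j]]
    if not candidates:
        return None                       # all lanes full: the job is blocked
    return min(candidates, key=lambda j: len(lanes[j]))

# The t = 44 situation from the text: Bs2,1 holds {J8, J4} (full),
# Bs2,2 holds {J9}, so J3 is directed to the second lane (index 1).
print(choose_lane([["J8", "J4"], ["J9"]], [2, 2]))  # -> 1
```

When both lanes are full the function returns `None`, corresponding to the upstream blocking captured by the blue segments in Figure 4.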

**Figure 4.** Gantt chart of the scheduling result of the ICGA.

At time *t* = 68, *J*9 left space *b*2,2,2 in lane *Bs*2,2, and its departure time from the buffer (*Tl*9,2) was 68. At the same time, *J*3 was assigned to space *b*2,2,2, waiting to be processed on a machine of stage *Oper*2. At time *t* = 98, *Tl*3,2 = 98 and *J*3 also departed from the buffer; *Tl*9,2 < *Tl*3,2, satisfying the constraint of Equation (13). From the Gantt chart drawn from the scheduling results, it can be seen that the constraint of Equation (12) was always met during the use of the multi-queue limited buffers.

Further analysis of the process by which *J*3 left the buffer is as follows. When *t* = 98, machine *WS*2,2 completed the processing of *J*9, so machine *WS*2,2 became available. At this time, *J*12 on space *b*2,1,2 in lane *Bs*2,1 was waiting for processing, and *J*3 on space *b*2,2,2 was also waiting for processing. According to the bus property information in Table 5, for machine *WS*2,2, choosing to process *J*12 would incur a setup time of *Ts*12,2,2 = *Tsp*2,2 = 4, while choosing to process *J*3 would incur a setup time of *Ts*3,2,2 = 0. In accordance with the SST rule that controls jobs leaving the buffer, *J*3 was chosen for processing.
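The SST choice above reduces to picking, among the jobs waiting at the heads of the buffer lanes, the one with the smallest setup time on the freed machine. A minimal sketch (function and variable names are illustrative):

```python
def sst_pick(waiting_jobs, setup_times):
    """SST rule: pick the waiting job with the shortest setup time.

    waiting_jobs: job ids at the heads of the buffer lanes.
    setup_times: job id -> setup time that job would incur on the free machine.
    """
    return min(waiting_jobs, key=lambda job: setup_times[job])

# The t = 98 example from the text: J12 would need a setup of Tsp2,2 = 4
# on WS2,2 while J3 needs none, so J3 is processed first.
print(sst_pick(["J12", "J3"], {"J12": 4, "J3": 0}))  # -> J3
```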
