3.1.2. Decoding

Each chromosome encodes the storage locations of the storage and retrieval tasks and the operation sequence in the AS/RS. The chromosome must be decoded to obtain the operation sequence of the cranes and the production stages of the machines. In multi-crane operation, the crane number is determined by the location number: each crane serves two rows of racks, so the crane number *S* of location *j* is:

$$S = \left\lfloor\frac{j}{2Y \cdot Z}\right\rfloor + 1\tag{15}$$
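Equation (15) maps a location number to the crane that serves it. A minimal sketch, assuming *Y* columns and *Z* layers per row (the instance in Section 4.1 has *Y* = 60, *Z* = 15, so each crane covers 2 · 60 · 15 = 1800 locations):

```python
import math

def crane_number(j, Y, Z):
    """Crane serving location j per Eq. (15): each crane covers two rows,
    i.e. 2*Y*Z locations, where Y is the number of columns and Z the
    number of layers in one row."""
    return math.floor(j / (2 * Y * Z)) + 1
```

With the Section 4.1 dimensions, locations 1–1799 fall to crane 1, locations from 1801 onward to crane 2, and so on.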

The decoding process is:

Step 1: Determine the crane number from the location number in the AS/RS and the task sequence from the task number to obtain the operation sequence of each crane.

Step 2: The total operation time of each crane is given by the end operation time of its last task.

Step 3: Based on the end times of the crane retrieval tasks from Step 1, calculate the travel times from shelves in different rows to the production equipment, and obtain the arrival time of each retrieval task at the production stage.

Step 4: According to the arrival times of the retrieval tasks at the production stage, determine the operation sequence of tasks using the "First-Come-First-Served" rule for tasks and the "First Idle" and "Capacity Priority" (workpiece processing time) rules for machines.

Step 5: The makespan is the latest end operation time among all tasks at the final production stage.
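The five steps above can be sketched in code. The following Python sketch uses illustrative data structures that are assumptions, not the paper's implementation: tasks are `(task_id, location, crane_operation_time)` tuples with 0-based locations, `travel[row]` gives the shelf-to-production travel time, and each stage has one uniform processing time and a machine count. The paper's model is richer (per-task processing times, the "Capacity Priority" rule); this only illustrates the flow of Steps 1–5 with first-come-first-served dispatch onto the first idle machine.

```python
import heapq

def decode(tasks, travel, proc_times, machines, Y, Z):
    # Step 1: assign each task to a crane from its location (cf. Eq. 15,
    # here with 0-based location indices).
    by_crane = {}
    for tid, loc, op_time in tasks:
        crane = loc // (2 * Y * Z) + 1
        by_crane.setdefault(crane, []).append((tid, loc, op_time))
    # Step 2: cranes work sequentially; a task ends when the crane finishes it.
    arrivals = []
    for seq in by_crane.values():
        t = 0.0
        for tid, loc, op_time in seq:
            t += op_time                 # crane end time for this task
            row = loc // (Y * Z)         # Step 3: row determines travel time
            arrivals.append((t + travel[row], tid))
    # Step 4: first-come-first-served dispatch onto the first idle machine.
    arrivals.sort()
    for p, m in zip(proc_times, machines):
        free = [0.0] * m                 # machine free times (min-heap)
        heapq.heapify(free)
        done = []
        for arr, tid in arrivals:
            start = max(arr, heapq.heappop(free))
            end = start + p
            heapq.heappush(free, end)
            done.append((end, tid))
        done.sort()
        arrivals = done                  # stage k output feeds stage k + 1
    # Step 5: makespan = latest end time at the final stage.
    return max(end for end, _ in arrivals)
```

For example, two tasks retrieved sequentially by one crane (5 time units each), a travel time of 2, and a single one-machine stage with processing time 3 give a makespan of 15.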

#### *3.2. GA Rules*

The GA includes population initialization, selection, crossover, and mutation. The initial population is generated randomly according to the encoding method. Selection combines elite reservation with binary tournament selection. Because the encoding is two-layer, dedicated crossover and mutation operators are needed to allocate the operation tasks: retrieval tasks are constrained to the locations where the corresponding materials are stored in the AS/RS, and storage tasks are constrained to the free locations in the AS/RS.

#### 3.2.1. Crossover

The main purpose of the GA phase is to obtain better solutions by exploiting the global optimization ability of GA while avoiding the generation of useless solutions. The crossover is designed as follows. For retrieval tasks, an improved crossover extracts the corresponding genes from the sequence coding according to the material type. For storage tasks, a uniform crossover extracts partial location genes, which accounts for the same retrieval location appearing in different solutions. The extracted genes are then placed back into the corresponding extraction positions of the original solution; the specific process is shown in Figure 6.


**Figure 6.** Schematic diagram of the intersection of storage and retrieval tasks.

These operations avoid useless solutions under the task-location constraints, and applying different crossover operations to storage and retrieval tasks fully adjusts the old solutions, which benefits the global search ability of the algorithm.
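As a concrete illustration of the two-part crossover, the following Python sketch swaps the genes of one randomly chosen material type between parents in the retrieval sequence and applies a uniform crossover to the storage-location genes. The chromosome layout (a `(sequence, locations)` pair and a `material_of` mapping) is an assumption for illustration, and the sketch does not enforce the AS/RS location-feasibility constraints described in the text.

```python
import random

def crossover(parent1, parent2, material_of, rng=random):
    """Two-part crossover sketch: material-type gene swap on the task
    sequence, uniform crossover on the storage-location genes."""
    seq1, loc1 = parent1   # (task sequence, storage-location assignment)
    seq2, loc2 = parent2
    child_seq = list(seq1)
    # Retrieval part: reorder one material's tasks as in parent2,
    # keeping them at parent1's extraction positions.
    mat = rng.choice(sorted(set(material_of.values())))
    pos = [i for i, t in enumerate(seq1) if material_of[t] == mat]
    genes = [t for t in seq2 if material_of[t] == mat]
    for i, g in zip(pos, genes):
        child_seq[i] = g
    # Storage part: uniform crossover picks each location gene
    # from either parent with equal probability.
    child_loc = [a if rng.random() < 0.5 else b for a, b in zip(loc1, loc2)]
    return child_seq, child_loc
```

Because the swapped genes belong to the same material in both parents, the child sequence remains a permutation of the original tasks.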

#### 3.2.2. Mutation

The sequence of storage and retrieval tasks is mutated by four operations: random exchange, pre-insertion, post-insertion, and sequential pair exchange. The allocation of storage and retrieval tasks is mutated by single-point mutation and exchange mutation. Single-point mutation replaces a task's location with a randomly chosen location from its feasible set that does not yet appear in the solution. Exchange mutation swaps the locations of two tasks within the set of retrieval tasks, or within the set of storage tasks of the same material, in the solution.
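The four sequence mutations can be sketched as follows; the operator names come from the text, while the exact index conventions are illustrative assumptions:

```python
import random

def mutate_sequence(seq, rng=random):
    """Apply one of the four sequence mutations: random exchange,
    pre-insertion, post-insertion, or sequential pair exchange."""
    s = list(seq)
    op = rng.choice(["exchange", "pre_insert", "post_insert", "pair_exchange"])
    i, j = sorted(rng.sample(range(len(s)), 2))
    if op == "exchange":            # swap two randomly chosen genes
        s[i], s[j] = s[j], s[i]
    elif op == "pre_insert":        # move gene j to just before position i
        s.insert(i, s.pop(j))
    elif op == "post_insert":       # move gene i to just after position j
        s.insert(j, s.pop(i))
    else:                           # swap the adjacent pair starting at i
        if i + 1 < len(s):
            s[i], s[i + 1] = s[i + 1], s[i]
    return s
```

All four operators only rearrange genes, so the mutated sequence is always a permutation of the original, which keeps the task set intact.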

#### *3.3. MBO Rules*

The MBO is a neighborhood-search-based algorithm that performs a thorough local search in the neighborhood of each solution, which compensates for the GA's weak local search ability. In addition, the MBO algorithm can deeply exploit the region around the best solution at the late, convergent stage of the algorithm.

#### 3.3.1. Neighborhood Structure Design

The neighborhood structure directly affects the MBO solution quality and the convergence speed of the algorithm. An efficient neighborhood structure needs to be identified. For the sequence of tasks, the formation of the sequence neighborhood structure considers random exchange, pre-insertion, post-insertion, sequence pair exchange, optimal insertion, and optimal exchange.

For the product allocation, the scale of the feasible storage location is larger than the scale of the feasible retrieval locations. Therefore, different operations are designed to construct different neighborhood structures for the storage and retrieval tasks.


#### 3.3.2. Adaptive Adjustment of Neighborhood Structure

The eight neighborhood structures above show different search effects at different stages of the algorithm. At the early stage, the eight structures yield similarly good improvements, but random exchange, pre-insertion, post-insertion, and sequential pair exchange search more efficiently, so their application frequencies should be increased. At the late stage, the structures based on optimal insertion, optimal exchange, and storage assignment outperform the others, so their application frequencies should be increased instead. Therefore, an adaptive adjustment strategy is introduced to control the usage frequency of each structure during the search and improve the algorithm's efficiency.

An initial weight *ω*0 is assigned to each neighborhood structure. At each iteration, the roulette method randomly selects a neighborhood structure according to its weight *ωi* to generate the neighborhood solution, and the weights are updated after the iteration. The weight update is:

$$
\omega_{i,\,\text{seg}+1} = (1 - \eta_i) \cdot \omega_{i,\,\text{seg}} + \frac{\eta_i \cdot \beta_{i,\,\text{seg}}}{\alpha_{i,\,\text{seg}}} \tag{16}
$$

where *αi* is the number of times structure *i* has been applied; *βi* is the cumulative score of structure *i* (if a solution generated by structure *i* is better than the original solution, then *βi* = *βi* + 1); and *ηi* ∈ [0, 1] controls how quickly the weight *ωi* responds to the performance of structure *i*.
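The roulette selection over the weights *ωi* and the update of Eq. (16) can be sketched as follows (segment indices dropped for brevity; the function names are illustrative):

```python
import random

def select_structure(weights, rng=random):
    """Roulette-wheel selection: pick a neighborhood structure index
    with probability proportional to its weight."""
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def update_weight(w, eta, beta, alpha):
    """Eq. (16): blend the old weight with the structure's average
    score beta/alpha, at response speed eta in [0, 1]."""
    return (1 - eta) * w + eta * beta / alpha
```

For instance, a structure with weight 1.0, response speed *η* = 0.5, score *β* = 2 over *α* = 4 applications is updated to 0.5 · 1.0 + 0.5 · (2/4) = 0.75.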

#### **4. Simulation Experiments**

#### *4.1. Test Examples*

The algorithms for the integrated scheduling problem are coded in MATLAB 2016a and run on an Intel i7-7700K CPU with 16 GB memory. The operation process of an AS/RS in a manufacturing enterprise is used as an example and modified to obtain the experimental data. The production process consists of an AS/RS and a hybrid flowshop. The AS/RS has 6 rows, 60 columns, and 15 layers, for 5400 racks in total. The coordinates of the input point and the output point are (0, 0, 1) and (7, 61, 1), respectively. The parameters of each index are shown in Table 2. There are 30 kinds of materials in the system; the quantity of each material is *N* ∈ U(30, 50); the types of storage and retrieval materials are *P* ∈ {5, 10, 15, 20}; the number of operations per material is *OP* ∈ U(1, 5); the number of scheduling stages is *K* ∈ {3, 5, 7}; the number of machines per stage is *Ek* ∈ U(2, 6); the material processing time is *T* ∈ U(10, 70); the material weight is *M* ∈ U(10, 30); and the frequency of storage and retrieval is *f* ∈ U(1, 10). Twelve groups of experiments are set by combining the types of storage and retrieval materials with the number of scheduling stages. The notation U(*x*, *y*) represents a discrete uniform distribution between *x* and *y*.

**Table 2.** Each index parameter of the AS/RS.


#### *4.2. Parameter Settings*

The relevant parameters of the GA-MBO are *NG*, *Mgen*, *Pc*, *Pm*, *NM*, *a*, *b*, and *G*. Following the literature, some parameters were set as *Pc* = 0.8, *Pm* = 0.05, *a* = 3, and *b* = 1 [25,26]; experiments confirmed that this setting works well for the problem in this paper. *NG*, *Mgen*, *NM*, and *G* are related to the problem scale, so a Taguchi experiment was designed for their factor levels [27]. The parameter level table is shown in Table 3. Experiments are carried out with an example of *p* = 10, *K* = 3, and the algorithm is run 10 times independently under each combination of parameters. The maximum running time is $10(K + 1) \cdot \sum_{p=1}^{P} (O_p + I_p) \cdot s$ [28]. The average values of the 10 experimental results are taken as the response values, as shown in Table 4.




**Table 4.** Orthogonal matrix and response value.

The results under the parameter combinations are analyzed with Minitab 17 in Figure 7, the parameter trend diagram, and Table 5, the average response values. It can be seen that the algorithm performs best when *NG* = 250, *Mgen* = 250, *NM* = 51, and *G* = 10; this parameter scheme is adopted in the subsequent experiments.

**Figure 7.** The graph of parameter change trend.



#### *4.3. Algorithms Comparison*

For the integrated scheduling optimization problem of the AS/RS and hybrid flowshop, the performance of GA-MBO is verified by comparison with an improved GA (IGA), an improved particle swarm optimization (IPSO) algorithm, and a hybrid of GA and PSO (GA-PSO). Given the NP-hard nature of the problem, each of the four algorithms is run 10 times, and the evaluation indexes are the average (Avg.) and standard deviation (Std.) of the results: the Avg. measures the efficiency and the Std. measures the robustness of an algorithm. The 12 groups for the comparative analysis, set by the types of storage and retrieval materials *P* ∈ {5, 10, 15, 20} and the number of scheduling stages *K* ∈ {3, 5, 7}, are shown in Table 6. From Table 6, it is evident that GA-MBO obtains the best Avg. and Std. among the four algorithms.



To display the improvement of GA-MBO more clearly, the optimization results relative to the other algorithms are shown in Table 7. The results show that: (1) the maximum efficiency improvements of GA-MBO are 13.88% over IGA in Group 6, 23.98% over IPSO in Group 5, and 8.83% over GA-PSO in Group 2, with average improvements of 9.48%, 19.53%, and 5.12%, respectively; (2) in Group 12, GA-MBO is not as stable as IGA and IPSO; (3) the maximum robustness improvements of GA-MBO are 59.45% over IGA in Group 2, 79.36% over IPSO in Group 7, and 60.12% over GA-PSO in Group 11, with average improvements of 35.16%, 54.42%, and 39.38%, respectively. Although the robustness of GA-MBO is poor in Group 12, it is much better in the other groups and its average is much better, so GA-MBO is still considered the most robust.

The t-test uses t-distribution theory to infer whether the difference between two means is statistically significant. The data in the test examples are assumed to be normally distributed, and each group is run 10 times. A t-test at the 0.95 confidence level is applied to compare GA-MBO with each of the other three algorithms; the results are shown in Table 8. In Table 8, the upper and lower confidence bounds of the difference between GA-MBO and each other algorithm are negative, which verifies the effectiveness of GA-MBO and further indicates that the Avg. of GA-MBO's solutions is consistently better than that of the other algorithms.
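The paper does not spell out the exact t-test variant; as an illustrative sketch, Welch's two-sample t statistic for two sets of runs can be computed directly from the sample means and variances:

```python
import math

def two_sample_t(x, y):
    """Welch's two-sample t statistic: difference of sample means
    divided by the combined standard error (unequal variances)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variance of x
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)  # sample variance of y
    return (mx - my) / math.sqrt(vx / nx + vy / ny)
```

A negative statistic (and a confidence interval entirely below zero) indicates that the first sample's mean, here GA-MBO's makespans, is smaller than the second's.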


**Table 7.** The optimization efficiency and robustness of algorithms.

<sup>1</sup> OE\_1 is the optimization efficiency promotion of the GA-MBO relative to IGA, calculated as OE\_1 = (IGA Avg. − GA-MBO Avg.)/IGA Avg. <sup>2</sup> OE\_2 is the optimization efficiency promotion of the GA-MBO relative to IPSO. <sup>3</sup> OE\_3 is the optimization efficiency promotion of the GA-MBO relative to GA-PSO. <sup>4</sup> OR\_1 is the optimization robustness promotion of the GA-MBO relative to IGA, calculated as OR\_1 = (IGA Std. − GA-MBO Std.)/IGA Std. <sup>5</sup> OR\_2 is the optimization robustness promotion of the GA-MBO relative to IPSO. <sup>6</sup> OR\_3 is the optimization robustness promotion of the GA-MBO relative to GA-PSO.
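The footnote formulas all reduce to the same relative-improvement ratio; a one-line helper makes the definition concrete (the function name is illustrative):

```python
def promotion(baseline, gambo):
    """Relative improvement of GA-MBO over a baseline value: applied to
    the Avg. it gives OE, applied to the Std. it gives OR."""
    return (baseline - gambo) / baseline
```

For example, a baseline Avg. of 100 against a GA-MBO Avg. of 90 gives a 10% efficiency promotion.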



To examine the convergence behavior of these algorithms, the convergence of GA-MBO is compared with the other three algorithms on the example with *p* = 10 and *K* = 3, as shown in Figure 8. In Figure 8, the abscissa is the number of iterations and the ordinate is the objective value computed by each algorithm. The solution of GA-MBO becomes the best of the four algorithms after iteration 276, and its curve flattens after iteration 350, which means GA-MBO finds the best objective value. Based on the above analysis, GA-MBO is superior to IGA, IPSO, and GA-PSO in both the efficiency and the robustness of the solution.

**Figure 8.** Iteration diagrams of four algorithms.

#### *4.4. Bi-Objective Comparison*

To verify the superiority of the integrated scheduling optimization of the AS/RS and hybrid flowshop, three experiments with *p* = 10 and *K* = 3 are tested and compared, whose objectives are the operation time in the AS/RS, the makespan in the hybrid flowshop, and the bi-objective, respectively. The results in Table 9 show that while *f*2 changes by only 4.43%, *f*1 improves by 26.34%. This outcome is better suited to actual production, which is concerned with total profit, and reflects the superiority of the integrated scheduling optimization.

**Table 9.** Comparison of results between bi-objective optimization and single objective optimization.

