1. Introduction
As the economy develops, consumers increasingly demand diverse and personalized products, so the traditional make-to-stock production model fails to meet the needs of today's consumers and forces companies to switch to a make-to-order (MTO) customer customization model [1,2]. MTO orders are characterized by multiple varieties and small batches, and this form of manufacturing usually uses multi-equipment work centers. Production scheduling for multi-equipment work centers can be based on the hybrid flow-shop scheduling problem (HFSP). The HFSP, also known as the flexible flow-shop scheduling problem [3], is a typical NP-hard problem that relaxes the predetermined processing-equipment limits of traditional flow-shop scheduling.
The HFSP is a widely studied class of scheduling problems, arising in the metallurgical, chemical, mechanical manufacturing, steel, food processing, and pharmaceutical industries. In the HFSP, parallel machines exist in some or all stages, which combines the characteristics of flow shops and parallel machines and makes the problem more challenging to solve. Early research on the HFSP optimized only the single objective of the makespan. As research has deepened and practical concerns such as energy consumption and emissions have emerged, multi-objective optimization has become the main focus of HFSP research.
In the study of multi-objective optimization for the HFSP, Schulz et al. [4] proposed a new multiphase iterated local search algorithm with the makespan, total energy costs, and peak load as the optimization objectives. Mousavi et al. [5] proposed a heuristic algorithm based on simulated annealing with the makespan and total tardiness as the optimization objectives. Wang et al. [6] designed a new local search operator, with the makespan and the mean continuous running time as the optimization objectives, to improve the local search capability of the genetic algorithm. S.P. et al. [7] proposed a traveling salesman algorithm and a genetic algorithm with the optimization objectives of the makespan, mean flow time, and machine idle time. M.K. et al. [8] proposed an improved particle swarm optimization algorithm with the optimization objectives of the makespan and total flow time. Cui et al. [9] used an improved genetic algorithm to solve the no-wait HFSP with the objective of minimizing the maximum completion time and the earliest start time of each piece of equipment.
In the HFSP, there are three modes of job organization: the sequential movement mode, the parallel movement mode, and the parallel sequential movement mode. In the sequential movement mode, all workpieces of the current process must be machined before the next process can begin. In the parallel movement mode, a workpiece is ready for the next process as soon as it has been machined. The parallel sequential movement mode requires each machine to process continuously, with no idle time allowed during processing. This reduces the machine failure rate caused by frequent switching on and off and lowers energy consumption. The HFSP based on the parallel sequential movement mode can therefore significantly improve the utilization of processing equipment. Moreover, in certain scenarios, production can only be performed in the parallel sequential movement mode. For example, in the production of glass fibers, integrated circuits, and ceramic frits, the processing equipment cannot be idle once started until all workpieces are processed, due to limitations in production processes, technical specifications, or manufacturing costs.
The idea of the parallel sequential movement mode is similar to that of the no-idle flow-shop scheduling problem (NFSP), in that the goal is to ensure that no processing equipment is idle. Kong et al. [10] studied the multi-objective flow-shop batch-scheduling problem with separable processing and adjustment times in the parallel sequential movement pattern, constructing a multi-objective model and proposing a separable tabu search algorithm to solve it. Zhou et al. [11] constructed a mathematical model of flow-shop scheduling in the parallel sequential movement mode and designed a hybrid genetic algorithm to solve the model.
In the study of the NFSP, Deng et al. [12] proposed a hybrid discrete differential evolution algorithm for solving the no-idle permutation flow-shop scheduling problem. Zhou et al. [13] proposed an invasive weed optimization scheduling algorithm for the NFSP and demonstrated its effectiveness in solving the no-idle flow-shop scheduling problem by comparing 12 cases of different sizes against other algorithms. Liu et al. [14] proposed a discrete firefly optimization algorithm for the no-idle permutation flow-shop scheduling problem, which uses an individual encoding based on workpiece sequences and a position update approach, and embeds swap, insertion, and reverse-order operation strategies to enhance the local search performance of the algorithm. Shen et al. [15] proposed a two-population estimation of distribution algorithm for solving the no-idle permutation flow-shop scheduling problem with a total tardiness criterion, and demonstrated its effectiveness through numerical experiments and algorithm comparisons. Shao et al. [16] proposed a hybrid algorithm based on edge histograms, consisting of a population-based global search and local optimization of individuals, and demonstrated its effectiveness through numerical experiments. Liu et al. [17] proposed a discrete fireworks algorithm with local search for the no-idle permutation flow-shop scheduling problem, encoding workpiece sequences, redefining the explosion and mutation operators with operations such as flip and swap, and incorporating an insertion-neighborhood-based local search strategy to enhance the local search capability of the fireworks algorithm. Chen et al. [18] studied the distributed no-idle permutation flow-shop scheduling problem and proposed a collaborative optimization algorithm that ensures the diversity and quality of the initial population through two heuristic synergies and improves the search operators with an adaptive reinforcement strategy, demonstrating its effectiveness through numerical experiments of different scales.
Algorithms are the core of HFSP research. Because of the NP-hard nature of the problem, heuristic algorithms such as genetic, simulated annealing, particle swarm, and ant colony algorithms are often chosen to solve the HFSP. Among them, genetic algorithms are the most widely used; for example, several studies in the literature [19,20,21,22,23] are based on genetic algorithms. For multi-objective problems, the NSGA-II, based on a genetic algorithm and the concept of Pareto optimality, is often used; it has the advantages of fast operation and good convergence of the solution set. Zhang et al. [24] considered workpiece transportation and machine tool changes and developed an HFSP model with energy consumption and the makespan as the optimization objectives, which was solved using the NSGA-II. Sun et al. [25] designed an NSGA-II based on two-stage chromosome coding to solve a multi-objective HFSP minimizing the total delay time of the workpieces and the standard deviation of worker job assignment. Huang et al. [26] improved the NSGA-II to solve a multi-objective HFSP with the makespan and load balancing as the optimization objectives. Song et al. [27] proposed an improved NSGA-II for solving the HFSP with the optimization objectives of minimizing energy consumption and the makespan. Chen et al. [28] proposed an improved NSGA-II for the HFSP with the makespan and energy consumption as the optimization objectives.
In addition, most studies on the HFSP consider only the processing time and neglect auxiliary times. In real production, there is preparation time before processing begins, and handling time when a workpiece is transported from the current process to the next. According to statistics, in real factories, auxiliary times such as adjustment time and handling time may account for up to 90% of the whole manufacturing process [29], so it is important to consider the adjustment time and handling time in the study of the HFSP. Among studies considering setup time or handling time, Yuan et al. [30] investigated the hybrid flow-shop group scheduling problem with setup time and handling time and proposed a co-evolutionary cultural genetic algorithm with the makespan as the optimization objective. Han et al. [31] proposed an improved genetic algorithm for the multi-sequence finite-buffer scheduling problem in a hybrid flow shop with adjustment time. Lo et al. [32] used genetic algorithms to obtain near-optimal solutions for large-scale problems with the setup time and makespan as the optimization objectives. Naderi et al. [33,34] proposed an improved simulated annealing algorithm considering the setup time, with the makespan and total delay time as the optimization objectives. Xuan et al. [35] considered the handling time and release time and designed a hybrid genetic algorithm with the makespan as the optimization objective.
In summary, few studies have combined handling time and setup time when considering auxiliary times in the HFSP, and most of them address single-objective optimization. In addition, research on scheduling with the parallel sequential movement mode has so far been limited to flow-shop scheduling, whose traditional setting of only one processing machine per process is rather restrictive and does not match actual production. Therefore, the main goals of this paper were the following: (1) Study the HFSP based on the parallel sequential movement mode, and develop a mathematical model of multi-equipment processing that considers adjustment time and handling time. (2) Consider the actual load capacity of the handling equipment, the number of times the handling equipment is started, and the amount of handling equipment when workpieces move between processes; these factors directly affect the scheduling plan as well as the production cost. By using the number of handling events and the makespan as the optimization targets, we provide targeted handling scheduling solutions for companies with different handling capacities. (3) Design an improved NSGA-II to solve the problem studied in this paper. To address the population's tendency to fall into local optima during iteration, two crossover operators were combined and a new mutation operator was designed. To improve the overall quality of the Pareto solution set, a neighborhood search was performed on the Pareto solution set using a variable neighborhood search algorithm.
2. Problem Description and Model Construction
2.1. Description of the Hybrid Flow-Shop Scheduling Problem in Parallel Sequential Mode
The background of this paper is multi-equipment work center production scheduling in an MTO production mode, which can directly draw on the HFSP. Therefore, this paper investigates an HFSP that considers auxiliary time (setup time and handling time) and is based on parallel sequential movement.
The HFSP based on the parallel sequential movement mode is described as follows.
n workpieces are machined in the same order in i work centers. Each work center h has mh machining machines (mh ≥ 1, h = 1, …, i). r is the number of workpiece types, and na is the number of workpieces of type a, with n1 + n2 + … + nr = n. Workpieces are transported to different work centers with corresponding handling times, and the equipment has setup times for processing different kinds of workpieces. The equipment must run continuously during machining (i.e., the next workpiece should be machined immediately after the current workpiece is finished), and no idle time is allowed. One or more performance indicators are optimized by sequencing the workpieces and assigning equipment to them. The Gantt chart is shown in Figure 1. This is an instance with n = 6 and i = 3, where workpieces of type 1 are numbered 1 and 2, and workpieces of type 2 are numbered 3, 4, 5, and 6. The S notation in the figure indicates the setup time. It can be seen that the workpieces are processed continuously on each machine. The end machining time of workpiece 3 at work center 1 is t1, and its machining start time at work center 2 is t2; the difference is due to the handling time of workpiece 3 from work center 1 to work center 2. Note: no machine requires an adjustment time when machining a workpiece for the first time.
This paper is based on the following assumptions to help understand the issue:
- (1)
The processing of a workpiece may not be halted once it has begun on the equipment, nor may other workpieces be inserted before it finishes.
- (2)
Each machine is in good working order, and there is no downtime.
- (3)
The handling time includes the time spent handling the workpiece from the processing equipment to the handling equipment and from the handling equipment to the processing equipment.
- (4)
The handling equipment has an unlimited carrying capacity.
- (5)
For each workpiece on different pieces of equipment in multiple work centers, the machining time, setup time, and handling time between work centers are known.
- (6)
All workpieces are processed on different equipment, and no nonconforming products are produced.
Figure 2 shows the layout of a multi-device work center, where each work center is equivalent to a process (stage). It can be seen that there are multiple pieces of machining equipment in each work center, and this machining equipment can be completely related (i.e., the workpiece is processed at the same time by all equipment in the current work center) or unrelated (i.e., the workpiece is not necessarily processed at the same time by all equipment in the current work center). There is a certain distance between the different work centers, and the workpiece needs to be moved with the help of handling equipment. The handling equipment can be vehicles, handling robots, etc., and the number of pieces of handling equipment is greater than or equal to 1. The production side should reasonably arrange the production and processing scheduling plan according to the customer’s order delivery time and the efficiency of the enterprise in order to reduce the manufacturing cycle, improve the utilization of the processing equipment, and deliver on time.
2.2. Model Establishment for Multi-Objective Optimization
Table 1 and Table 2 show the parameters and variables defined in this paper, with superscript T for handling, superscript S for setup, and superscript a for workpiece kind.
The decision variables in this paper are shown below.
Xh,j,k: If workpiece j is machined on machine k at work center h, Xh,j,k = 1, otherwise Xh,j,k = 0.
Ah,j,k: If workpiece j is of the same type as the workpiece being processed by machine k at work center h, Ah,j,k = 0, otherwise Ah,j,k = 1.
Yh,j: If the workpieces in work center h with a completion time less than TEh,j belong to a set, Yh,j = 1, otherwise Yh,j = 0.
The objective function and restrictions are as follows, with the makespan and the number of handling events as the optimization objectives.
- (1)
Makespan.
The most crucial metric in the MTO production process is still the lead time, which can be directly calculated from the makespan. The objective function is shown in Equation (1).
- (2)
Minimize the number of handling events.
The number of handling events in actual production indirectly impacts the amount of handling equipment, the total handling time, the handling cost, and the number of handling workpieces. In addition, the number of handling events also has a close relationship with energy consumption; less handling also means less energy consumption of the handling equipment. The objective function is shown in Equation (2).
The constraint condition equations are shown in (3)–(18).
Equation (3) ensures that each workpiece is processed on only one piece of equipment in each work center. Equation (4) represents the number of handling events from work center h to work center (h + 1). Equation (5) represents the total handling time. Equation (6) represents the relationship between the start time of handling and the completion time of the workpiece: the qth handling start time equals the maximum TEj,h among the workpieces in the handling set. Equation (7) represents the end time of handling: in the same work center, the end time of the (q + 1)th handling equals the start time of the qth handling plus the handling time. Equation (8) represents the ordering of handling start times: the start time of the (q + 1)th handling from work center h to (h + 1) is greater than or equal to the start time of the qth handling. Equation (9) represents the relationship between the start and end times of processing, ensuring that a workpiece has been processed before it is handled. Equation (10) represents the setup time of the workpiece in the work center. Equation (11) is the total setup time. Equation (12) states that the start time of the first use of equipment in work center 1 equals the setup time of that equipment. Equation (13) states that the start time of the first use of equipment in work center h (h > 1) equals the end time of the first handling from work center (h − 1) to h. Equation (14) is the constraint for the parallel sequential mode: the start time of the (v + 1)th workpiece on a piece of equipment equals the start time of the vth workpiece plus the setup time and the processing time. Equation (15) represents the relationship between the start and end times of processing on the equipment. Equation (16) represents the end time of the workpiece in the work center.
Equation (17) formulates a constraint on the adjacent work centers, ensuring that the workpiece can only be processed in one work center at the same time. Equation (18) indicates that the end time of the workpiece is greater than 0.
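The two objectives described above can be sketched in code. The following is a minimal illustration, assuming a `completion` map of each workpiece's final completion time and per-stage lists of handling batches; the names are illustrative, not the paper's notation:

```python
# Hedged sketch of the two objectives: makespan and number of handling events.
# `completion` maps each workpiece to its completion time at the last work
# center; `handling_batches[h]` is the list of workpiece batches moved
# together from work center h to h+1 (one handling event per batch).

def makespan(completion):
    """Objective (1): the maximum completion time over all workpieces."""
    return max(completion.values())

def handling_events(handling_batches):
    """Objective (2): total number of handling events across all stages."""
    return sum(len(batches) for batches in handling_batches)
```

In this sketch a "batch" is any group of workpieces moved by one start of the handling equipment, so grouping more workpieces per move directly reduces the second objective.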
3. Improved NSGA-II
In multi-objective optimization, the objectives conflict with each other and cannot all be optimized at the same time. Therefore, the task of multi-objective optimization is not to find a single optimal solution, but to obtain a set of Pareto solutions. The NSGA-II was proposed by Deb et al. [36] on the basis of the NSGA; its core involves non-dominated sorting and introduces the concept of crowding. With the advantages of fast operation, good robustness, and good convergence, it has become an important basis for handling multi-objective optimization problems. However, there are several problems in applying the NSGA-II to solve the no-idle HFSP (NHFSP) studied here. (1) The selection process essentially chooses solutions with a non-dominance rank of 1 and a crowding of 0, which accelerates the convergence of the Pareto solution set to a local optimum. (2) Crossover and mutation are important steps for generating new individuals, but the traditional NSGA-II crossover and mutation operators have poor local and global search capabilities.
Therefore, in this paper, we improved the NSGA-II, named NSGA-II-V, to ensure its optimization capability on the multi-objective NHFSP. First, the initial population was generated with "random" and "kind-consecutive" encodings. Second, a crossover operator combining POX and LOX with a new mutation operator was designed. Then, a combination of elite selection and tournament selection strategies was used to determine the parent population. Finally, to ensure that the Pareto solution set was optimal in its neighborhood, a variable neighborhood search algorithm was used to perform a local search on the solution set.
Figure 3 shows the flow of the algorithm.
3.1. Encoding and Decoding
In the NHFSP, the commonly used encodings are real-valued encoding, based on workpiece sorting, and matrix encoding, based on workpiece sorting with machine selection. Since matrix coding does not incorporate the characteristics of assembly-line production, has a high computational complexity, and its method of specifying processing equipment for each workpiece is not applicable to the no-idle problem studied in this paper, real-valued coding was adopted. There are two types of codes: the first is random coding, where the n workpieces are arranged in a completely random order; the second is kind-adjacent coding, where the workpieces are divided into r sub-codes according to their kinds, the basic idea being to reduce the adjustment time of the equipment.
Decoding combines the first-come, first-processed rule and the processing-equipment idle priority rule. Under the first-come, first-processed rule, the workpieces in the first process are scheduled exactly in coded order, while in the other processes they are scheduled in their order of arrival. Under the processing-equipment idle priority rule, in processes with multiple pieces of processing equipment, the equipment with the shortest processing time in the current process is selected according to the coded order or the sequential arrival of the workpieces, and when all processing equipment is running, the next workpiece is assigned to the processing equipment that becomes idle first.
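As a rough sketch, the idle-priority rule for a single stage could be implemented as follows, assuming jobs arrive in `order` and ignoring setup times, handling times, and the no-idle constraint for brevity; all names are assumptions, not the paper's implementation:

```python
# Illustrative sketch of "processing-equipment idle priority" decoding for
# one stage: each workpiece, taken in coded/arrival order, is assigned to
# the machine that becomes idle first.

def decode_stage(order, proc_time, n_machines):
    """Assign workpieces (in `order`) to the earliest-idle machine.
    Returns per-workpiece completion times and the machine assignment."""
    idle_at = [0.0] * n_machines          # time at which each machine is free
    completion, assign = {}, {}
    for j in order:
        k = min(range(n_machines), key=lambda m: idle_at[m])  # earliest idle
        start = idle_at[k]
        idle_at[k] = start + proc_time[j]
        completion[j] = idle_at[k]
        assign[j] = k
    return completion, assign
```

In the full decoder, later stages would use each workpiece's arrival time (completion at the previous stage plus handling time) instead of the coded order.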
3.2. Population Initialization
Population initialization is a key problem in the NSGA-II, and the quality of the initial solutions has a great impact on the speed and quality of the NSGA-II solution. In this paper, three initialization rules were proposed to ensure the quality and diversity of the initial solutions; the initial solutions generated by each rule together form the complete initial population. The first rule is completely random: individuals are randomly selected to enter the initial population. The second is makespan minimization: the makespan, one of the optimization objectives, is used as the screening index, and individuals with smaller values enter the initial population. The third is handling minimization: the other optimization objective, the number of handling events, is used as the screening index, and individuals with fewer handling events enter the initial population.
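The three rules can be sketched as below, under the assumption that an `evaluate` function returns the pair (makespan, number of handling events) for a permutation; the function and parameter names are illustrative:

```python
import random

# Sketch of the three initialization rules: a random third, a third screened
# by small makespan, and a third screened by few handling events.

def init_population(n_jobs, size, evaluate, seed=0):
    """Build an initial population of `size` permutations of 0..n_jobs-1."""
    rng = random.Random(seed)
    pool = [rng.sample(range(n_jobs), n_jobs) for _ in range(3 * size)]
    third = size // 3
    pop = pool[:third]                                  # rule 1: random
    by_makespan = sorted(pool, key=lambda p: evaluate(p)[0])
    pop += by_makespan[:third]                          # rule 2: small makespan
    by_moves = sorted(pool, key=lambda p: evaluate(p)[1])
    pop += by_moves[:size - len(pop)]                   # rule 3: few handling events
    return pop
```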
3.3. Crossover Operator
In order to prevent individuals from falling into a local optimum prematurely, crossover operations need to be applied to the population. The idea of the crossover operator is to achieve population evolution by crossing two original individuals, which may produce superior chromosomes. The NSGA-II-V designed in this paper uses two crossover operators, LOX and POX.
- (1)
Linear order crossover (LOX)
The idea of the LOX crossover is to generate children by swapping some genes of the parent individuals, as shown in Figure 4. First, two crossover points are generated in the interval (1, n). Second, the segments between the crossover points of the two parents are swapped, and the genes contained in the swapped-in segment are removed from the rest of the original parent. Finally, the remaining genes are filled in sequentially outside the crossover fragment.
- (2)
Precedence Operation Crossover (POX)
The idea of the POX crossover is to exchange genes at multiple different points of the two parents, as shown in Figure 5. First, a workpiece set of length n containing 1~n is generated in random order, and this set is divided into two subsets, J1 and J2, at position a (a is randomly generated in the range of 1~20). Second, the genes identical to J1 in parent1 are copied to the same positions in child1, and the genes identical to J1 in parent2 are copied to the same positions in child2. Finally, the elements in parent2 that are identical to J2 are copied, in order, to the remaining positions of child1, and the elements in parent1 that are identical to J2 are copied to the remaining positions of child2.
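Under these rules, one child can be produced as sketched below (the other child is obtained by swapping the parents); passing the subset `j1` explicitly, rather than generating it randomly, is an assumption made for clarity:

```python
# Sketch of the POX crossover: genes in subset `j1` keep their positions
# from the first parent; the remaining positions are filled with the other
# parent's J2 genes in their original order.

def pox(parent1, parent2, j1):
    j1 = set(j1)
    fill = iter(g for g in parent2 if g not in j1)   # parent2's J2 genes, in order
    return [g if g in j1 else next(fill) for g in parent1]
```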
3.4. Mutation Operators
Three improved mutation operators are used in this paper: (1) Single-point transformation. A position p is randomly selected in the encoding, and the segments on either side of p are denoted fragment1-1 and fragment1-2, respectively. Each segment can change in one of two ways, maintained or random; after the change, the two segments swap their order in the original positions. (2) Fragment exchange. Two segments of the encoding, fragment2-1 and fragment2-2, are randomly selected; each segment has one maintained and two random ways of changing. After the change, fragment2-1 and fragment2-2 switch order, and the rest of the encoding remains unchanged. (3) Fragment inheritance. Part of a fragment of an optimal encoding is extracted and used as a reference to generate several different new encodings.
The three mutation methods are triggered with equal probability. After X1 mutates into X2, the two are compared; only when X2 completely dominates X1 is the mutation ended and X1 replaced by X2. Otherwise, the mutation is performed again until the generated X2 completely dominates X1 or the loop ends; if the loop ends without such an X2, X1 is not replaced.
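This acceptance rule can be sketched as follows, with a simple swap mutation standing in for the three operators above and an `evaluate` function returning (makespan, number of handling events); both are illustrative assumptions:

```python
import random

# Sketch of the dominance-based mutation acceptance: a mutant replaces the
# original only if it dominates it on both objectives (minimization);
# otherwise mutation is retried up to `max_tries` times.

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    return all(x <= y for x, y in zip(fa, fb)) and any(x < y for x, y in zip(fa, fb))

def mutate_accept(x, evaluate, max_tries=10, rng=random):
    fx = evaluate(x)
    for _ in range(max_tries):
        y = x[:]
        i, j = rng.sample(range(len(y)), 2)
        y[i], y[j] = y[j], y[i]            # swap stands in for the three operators
        if dominates(evaluate(y), fx):
            return y                        # mutant dominates: accept and stop
    return x                                # no dominating mutant: keep x
```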
3.5. Selection
In this paper, a combination of the elite retention strategy and tournament selection was used to select the parent population. The specific operation is as follows. First, a de-duplication operation removes chromosomes with similar coding structures. Second, the elite retention strategy retains the o optimal individuals. Third, tournament selection is performed: two different individuals (excluding the retained elites) are randomly selected from the population; if their Pareto ranks are the same, the individual with the larger crowding distance is selected, and if their ranks differ, the individual with the better rank is selected. Finally, the selected individuals are merged to produce the new parent population.
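The combined selection can be sketched as below, assuming each individual already has a Pareto rank (lower is better) and a crowding value (higher is better); the de-duplication step is omitted and all names are illustrative:

```python
import random

# Sketch of elite retention plus binary tournament selection on
# (rank, crowding distance).

def select_parents(pop, rank, crowd, n_elite, n_parents, rng=random):
    order = sorted(range(len(pop)), key=lambda i: (rank[i], -crowd[i]))
    chosen = order[:n_elite]                    # elite retention
    candidates = order[n_elite:]
    while len(chosen) < n_parents:
        a, b = rng.sample(candidates, 2)        # binary tournament
        # better rank wins; on equal rank, larger crowding wins
        if (rank[a], -crowd[a]) < (rank[b], -crowd[b]):
            chosen.append(a)
        else:
            chosen.append(b)
    return [pop[i] for i in chosen]
```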
3.6. Neighborhood Structure
The variable neighborhood search algorithm is an improved local search algorithm. Its basic idea is that the local optimal solution of one neighborhood structure is not necessarily the local optimal solution of another. The current optimal solution is perturbed with neighborhood structures composed of different actions, which in turn finds other optimal solutions in the neighborhood. In this paper, two types of neighborhood structures were designed: a large-scale neighborhood search and a small-scale neighborhood search.
- (1)
The idea of the large-scale neighborhood search is to iterate over all positions of the current optimal solution, which proceeds as follows.
Step 1: Let k = 1 and t = 2, and let Xbest = X.
Step 2: Take the first t genes of X, denoted X1~t, and the remaining genes, denoted Xr; insert X1~t into the best position in Xr (i.e., the position yielding the new chromosome with the largest fitness) to obtain the new chromosome Xf.
Step 3: Compare the fitness values of Xf and X. If the fitness of Xf is larger, let Xbest = Xf and go to Step 1. If the fitness of X is larger, let t = t + 1 and repeat Step 2; stop when t = n − 1 and output Xbest.
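The steps above can be sketched as follows, with a single scalar `fitness` (higher is better) standing in for the multi-objective comparison; this simplification and all names are assumptions:

```python
# Sketch of the large-scale neighborhood search: the leading block of t
# genes is reinserted at its best position; on improvement the search
# restarts with t = 2, otherwise the block grows.

def large_scale_search(x, fitness):
    best, t = x[:], 2
    while t <= len(x) - 1:
        head, rest = best[:t], best[t:]
        # try every insertion position of the leading block into the rest
        candidates = [rest[:i] + head + rest[i:] for i in range(len(rest) + 1)]
        xf = max(candidates, key=fitness)
        if fitness(xf) > fitness(best):
            best, t = xf, 2          # improvement found: restart with t = 2
        else:
            t += 1                   # no improvement: enlarge the block
    return best
```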
- (2)
The small-scale neighborhood search includes: (1) Translation. The individual code is shifted as a whole by u units to the left or right, with u ranging from 1 to 5. (2) Fragment inversion. A segment of any length from 2 to 7 in the individual code is taken and arranged in reverse order.
Figure 6 shows the flow of the small-scale neighborhood structure. The current neighborhood structure is used to generate a new solution X, whose fitness is compared with that of the original optimal solution Xbest; if the fitness value of X is larger, then Xbest = X, and if the fitness value of Xbest is larger, the cycle continues. gen_vns is the number of iterations for the local search with small-scale neighborhood structures.
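The two small-scale moves can be sketched as below; interpreting the translation as a cyclic shift (so that the result remains a valid permutation) is an assumption:

```python
import random

# Sketch of the two small-scale neighborhood moves: a cyclic shift of the
# whole code by u positions, and reversal of a random segment of length 2..7.

def translate(x, u):
    """Cyclically shift the code right by u positions."""
    u %= len(x)
    return x[-u:] + x[:-u]

def invert_fragment(x, rng=random):
    """Reverse a randomly placed segment of length 2..7."""
    length = rng.randint(2, min(7, len(x)))
    i = rng.randrange(len(x) - length + 1)
    return x[:i] + x[i:i + length][::-1] + x[i + length:]
```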
3.7. Fast Non-Dominated Sorting and Crowdedness Calculation
- (1)
Fast non-dominated sorting
The initial population is subjected to non-dominated sorting. The Pareto dominance relation is as follows: there are two objective components, f1(x) and f2(x), and given two arbitrary individuals Xa and Xb, Xa dominates Xb if the following two conditions hold: (1) for ∀m ∈ {1, 2}, fm(Xa) ≤ fm(Xb); (2) ∃m ∈ {1, 2} such that fm(Xa) < fm(Xb). A solution that no individual dominates is called a non-dominated solution. The Pareto rank of a non-dominated solution in a collection of solutions is 1. After removing the rank-1 solutions from the collection, the non-dominated solutions among the remainder are assigned rank 2, and so on, until all Pareto ranks are obtained.
For example, after non-dominated sorting, a population of 25 individuals (1~25) might be separated into multiple Pareto ranks, with rank 1 = (1, 3, 5), rank 2 = (2, 12, 13, 14), rank 3 = (4, 6, 8, 10, 19), rank 4 = (7, 11, 16, 18, 22, 25), and rank 5 = (9, 15, 17, 20, 21, 23, 24).
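A minimal sketch of this peeling procedure for two minimized objectives follows; the O(n²) pass per rank is chosen for clarity rather than the faster bookkeeping of the original NSGA-II:

```python
# Sketch of non-dominated sorting: individuals not dominated by anyone in
# the remaining set get the current rank, are removed, and the process repeats.

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def non_dominated_sort(objs):
    remaining, rank, ranks = set(range(len(objs))), 1, {}
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks
```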
- (2)
Crowding degree
The crowding distance is introduced to ensure that the set of Pareto solutions is constrained to a dispersed and uniform Pareto front, making the obtained solutions more homogeneous in the objective space. The crowding distance of individual z is
d(z) = Σ_{m=1}^{o} (fm(z + 1) − fm(z − 1)) / (fm_max − fm_min),
where fm(z + 1) and fm(z − 1) are the values of the (z + 1)th and (z − 1)th individuals in the mth objective component, fm_max and fm_min are the maximum and minimum values of the current objective component, and o is the total number of objective components.
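The formula above can be sketched directly; assigning infinite distance to the two boundary individuals of each objective follows the standard NSGA-II convention and is an assumption here:

```python
# Sketch of the crowding-distance computation: for each objective the front
# is sorted, boundary individuals get infinite distance, and interior ones
# accumulate (f_m(z+1) - f_m(z-1)) / (f_m_max - f_m_min).

def crowding_distance(objs):
    n, n_obj = len(objs), len(objs[0])
    dist = [0.0] * n
    for m in range(n_obj):
        order = sorted(range(n), key=lambda i: objs[i][m])
        fmin, fmax = objs[order[0]][m], objs[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float('inf')   # boundary solutions
        if fmax == fmin:
            continue
        for p in range(1, n - 1):
            z = order[p]
            dist[z] += (objs[order[p + 1]][m] - objs[order[p - 1]][m]) / (fmax - fmin)
    return dist
```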
4. Simulation Test
The algorithm was implemented in MATLAB R2019a, and the program was run on a Huawei MateBook 14 PC (Guangzhou, China) with an AMD Ryzen 5 4600H processor with Radeon Graphics (3.00 GHz) and 8 GB of RAM.
4.1. Parameter Settings
Because the algorithm’s performance is heavily dependent on parameter settings, the parameters suitable for the algorithm proposed in this paper were determined experimentally. The benchmark example proposed by Car [
37] was modified for the multi-variety, small-batch arithmetic example with a number of workpieces of 10, 20, and 50. The parallel sequential mode was used as the workpiece movement mode, and the optimization objectives were the maximum completion time and the number of handling times. The setup time was within 1~10, and the handling time was within 5~30. The joint crossover probabilities for genetic algorithms were 0.65, 0.7, 0.75, and 0.8, and the standard variation probabilities were 0.05, 0.1, 0.15, and 0.2. Therefore, the simulation tests were performed for all combinations of the above probabilities for 16 groups.
Exc and Soj were used to measure the algorithm’s performance under different parameter values. Exc is the number of non-dominated solutions contributed by the current parameter setting, i.e., how many of its Pareto solutions remain non-dominated among the pooled Pareto solutions of all parameter settings. Soj is the number of endpoint optima of the current setting, i.e., how many single-objective optimal values its simulation results contain. The test results for the three instances under the various parameter values are displayed in
Table 3.
The averages of Exc and Soj were computed over ten simulations of each parameter setting. According to the results, most parameter settings obtain a satisfactory Pareto solution set when the workpiece scale is small; however, as the scale rises, different settings show clear advantages and disadvantages. Combining the Exc and Soj values in
Table 3, the crossover and mutation probabilities of the proposed algorithm were set to (0.8, 0.15).
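The 16-combination parameter sweep and the Exc measure can be illustrated as follows. The grid values are taken from the text; `exc` is an illustrative helper name, and the simulation runs themselves are omitted:

```python
from itertools import product

# Parameter grid from the text: 4 crossover x 4 mutation probabilities
# give the 16 tested combinations.
CROSSOVER = [0.65, 0.7, 0.75, 0.8]
MUTATION = [0.05, 0.1, 0.15, 0.2]
GRID = list(product(CROSSOVER, MUTATION))

def exc(candidate, union):
    """Exc: how many solutions of one parameter setting remain
    non-dominated when pooled with the solutions of all settings
    (minimization objectives assumed)."""
    def dominated(a, pool):
        # a is dominated if some b in the pool is no worse in every
        # objective and strictly better in at least one.
        return any(all(x <= y for x, y in zip(b, a)) and
                   any(x < y for x, y in zip(b, a)) for b in pool)
    return sum(1 for s in candidate if not dominated(s, union))
```

For example, pooling a setting's front `[(1, 5), (4, 4)]` with the union `[(1, 5), (2, 4), (4, 4)]` yields Exc = 1, because (4, 4) is dominated by (2, 4).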
4.2. Comparison with Other Algorithms
To evaluate the performance of the NSGA-II-V solver designed in this paper for the HFSP based on the parallel sequential movement mode, the benchmark proposed by Taillard [
38] was used for testing. Other current algorithms with a better performance were chosen as comparison algorithms, including DSOA [
39], DFWA-LS [
13], and IWO [
17]. However, these studies addressed only flow-shop scheduling based on the parallel sequential movement mode, so the NSGA-II-V was likewise applied to that problem for comparison. The experimental results are shown in
Table 4. To facilitate the comparison of the NSGA-II-V with other algorithms, the average relative percentage deviation (ARPD) and standard deviation (SD) were used to measure the algorithm performance.
$\mathrm{ARPD} = \frac{1}{N}\sum_{i=1}^{N}\frac{C_i - C^*}{C^*} \times 100, \qquad \mathrm{SD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(C_i - \bar{C}\right)^2}$

Note: N is the number of simulation tests, $C_i$ is the solution obtained by the algorithm in the ith simulation, $C^*$ is the best-known solution from the literature, and $\bar{C}$ is the average value obtained from the N simulations.
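The two metrics can be sketched in Python as follows, assuming the conventional definitions of ARPD and SD over N run results (the function names are illustrative):

```python
from statistics import mean, pstdev

def arpd(solutions, best_known):
    """Average relative percentage deviation of N run results from
    the best-known solution C*: mean of 100 * (C_i - C*) / C*."""
    return mean(100.0 * (c - best_known) / best_known for c in solutions)

def sd(solutions):
    """Population standard deviation of the N run results around
    their own mean, measuring the stability of the search."""
    return pstdev(solutions)
```

For instance, two runs returning 110 and 105 against a best-known value of 100 give an ARPD of 7.5.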
Table 4 shows the performance of the four algorithms in solving Taillard instances of different sizes, with the best-known solution C* from the literature used as the basis of comparison. As shown in
Table 4, the average ARPD value of the NSGA-II-V was 0.158, lower than the 0.362 of the DFWA-LS and the 1.774 of the IWO, and only slightly larger than the 0.152 of the DSOA. The average SD value of the NSGA-II-V was 0.028, lower than the 0.029, 0.051, and 0.185 of the DSOA, DFWA-LS, and IWO, respectively, indicating that the NSGA-II-V had the best stability of the optimization search across problems of different scales. In summary, the comparison with the three algorithms demonstrates the effectiveness and applicability of the NSGA-II-V in solving parallel sequential movement, no-idle scheduling problems.
Table 4.
Comparison of the results of the algorithms.
Example | NSGA-II-V ARPD | NSGA-II-V SD | DSOA ARPD | DSOA SD | DFWA-LS ARPD | DFWA-LS SD | IWO ARPD | IWO SD
---|---|---|---|---|---|---|---|---
T01 | 0.05 | 0 | 0.05 | 0.008 | 0.15 | 0.013 | 0.92 | 0.083
T11 | 0.21 | 0 | 0.21 | 0.034 | 0.62 | 0.06 | 2.37 | 0.156
T21 | 0.22 | 0.065 | 0.21 | 0.057 | 0.51 | 0.093 | 2.33 | 0.272
T31 | 0 | 0 | 0 | 0 | 0.01 | 0.004 | 0.49 | 0.103
T41 | 0.3 | 0.073 | 0.29 | 0.048 | 0.52 | 0.085 | 2.76 | 0.311
Average | 0.158 | 0.028 | 0.152 | 0.029 | 0.362 | 0.051 | 1.774 | 0.185
4.3. Algorithm Performance Testing
This paper investigated a multi-objective, no-idle HFSP considering setup time and handling time, with multi-variety, small-lot MTO orders. Since no benchmark exists for this exact problem, the benchmark proposed by Carlier was adapted: setup times and handling times were added in the ranges of 1~10 and 10~30, the workpieces were classified by type, and the original single-equipment stages were changed to multiple-equipment stages. The adapted instances are j10k3i3, j20k4i4, j30k5i6, j40k6i5, and j50k7i5, where j is the total number of workpieces, k is the number of workpiece types, and i is the number of work centers.
To further evaluate each strategy proposed in this paper, the NSGA-II, NSGA-II-V, and NSGA-II-1 were compared. The NSGA-II-V is the algorithm designed in this paper; the NSGA-II-1 uses the same genetic operators as the traditional NSGA-II, with the remaining components identical to the NSGA-II-V. Each instance was simulated 10 times, and the best results were compared. Since there is no reference algorithm for the problem studied here, performance metrics such as the generational distance, inverse generational distance, or spacing metric could not be applied; therefore, this section compares the Pareto fronts directly, as shown in
Figure 7.
Because j10k3i3 contains only a small number of workpieces, the NSGA-II-V obtained the same Pareto front as the NSGA-II-1, while the NSGA-II found only two non-dominated solutions; therefore, no graphical analysis was performed for j10k3i3. In
Figure 7, the dots represent the non-dominated solutions, and the curves connect the dots for easy viewing. The Pareto front formed by the non-dominated solutions found by the conventional NSGA-II was inferior to those of the NSGA-II-V and NSGA-II-1, which shows that the crossover, mutation, selection, and variable neighborhood search strategies proposed in this paper effectively improve the optimization accuracy of the NSGA-II. The comparison between the NSGA-II-V and the NSGA-II-1 showed that the overall Pareto front of the NSGA-II-1 was inferior to that of the NSGA-II-V. This is because the neighborhood of a given non-dominated solution may contain solutions better than the current one, and the variable neighborhood search operation can find them. This also demonstrates that the variable neighborhood search strategy designed in this paper is effective.
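As an illustration of the general idea behind a variable neighborhood search, the following Python skeleton cycles through a list of move operators and restarts at the first neighborhood after each improvement. The `swap_move` and `insert_move` operators are generic assumptions for illustration, not the authors' neighborhood structures:

```python
import random

def variable_neighborhood_search(seq, cost, neighborhoods, max_iter=100, seed=0):
    """Generic VNS skeleton over a job permutation.

    seq: initial job order; cost: callable scoring a sequence
    (lower is better); neighborhoods: list of move functions
    (seq, rng) -> new seq.
    """
    rng = random.Random(seed)
    best, best_cost = list(seq), cost(seq)
    for _ in range(max_iter):
        k = 0
        while k < len(neighborhoods):
            cand = neighborhoods[k](best, rng)
            cand_cost = cost(cand)
            if cand_cost < best_cost:
                best, best_cost = cand, cand_cost
                k = 0                      # improvement: restart at first neighborhood
            else:
                k += 1                     # no gain: try the next neighborhood
    return best, best_cost

def swap_move(seq, rng):
    # Exchange two randomly chosen positions.
    s = list(seq)
    i, j = rng.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insert_move(seq, rng):
    # Remove one job and reinsert it at another position.
    s = list(seq)
    i, j = rng.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s
```

In the paper's setting, `cost` would evaluate a scheduling objective such as the makespan; here any callable over a permutation works.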
Figure 8 is a Gantt chart of one of the non-dominated solutions of j40k6i5, from which it can be seen that the numbers of devices in the five work centers were three, two, four, five, and two, respectively, and the devices were unrelated. The block numbers represent the workpiece numbers, where workpieces 1–8 were type 1, 9–13 were type 2, 14–20 were type 3, 21–29 were type 4, 30–34 were type 5, and 35–40 were type 6. A setup time was incurred between workpieces of different types on the same equipment (e.g., workpiece 8 and workpiece 32 on M1−3), and the workpieces were transferred between the different work centers.
4.4. Simulation Example Analysis
Through the simulation analysis of this paper, the following information was obtained: the best scheduling scheme, the start/end processing time of each workpiece in different work centers, the handling time of each workpiece in different work centers, the total number of handling events, the setup times of the processing equipment, etc. Processing and manufacturing companies can combine the above information with their actual situation to arrange a reasonable production plan. The real-time status of processing equipment, vehicles, and workpieces can be monitored throughout the process, thus helping intelligent manufacturing. An example analysis is as follows.
Enterprise A receives a multi-variety, small-batch order 1 from a customer, with n = 18 and r = 3 (five, three, and ten workpieces of the three types), and the order is limited to 10 h. After receiving order 1, enterprise A automatically performs background calculations and obtains the following: order 1 must pass through three work centers; the numbers of currently available pieces of equipment in the work centers are three, two, and two; and the handling time is 10.5. The equipment processing times and setup times are given in matrices J(1) and J(2), where the columns correspond to the processing equipment and the rows to the workpiece types.
In addition, the equipment processing cost is $72/hour, the handling cost is $30/move, and 10 pieces of handling equipment are currently available. The factory must choose a reasonable scheduling plan according to the order delivery time and its own benefit. Using the model and calculation method of this paper, the case was simulated and analyzed.
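The cost comparison among candidate schedules can be sketched as below, using the stated rates of $72/hour for processing and $30 per handling move. Treating the makespan as minutes is an assumption of this sketch, and the function name is illustrative:

```python
def production_cost(makespan_min, handlings,
                    machine_rate=72.0, handling_rate=30.0):
    """Total cost of a candidate (makespan, handlings) schedule.

    Assumptions: makespan_min is in minutes, machine_rate is in
    $/hour, and handling_rate is in $/move (rates from the example).
    """
    return makespan_min / 60.0 * machine_rate + handlings * handling_rate
```

For example, a 60-minute schedule with 2 handling moves costs 72 + 60 = $132 under these assumptions; the cheapest feasible candidate on the Pareto front is then chosen.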
Figure 9 shows the Pareto frontier for order 1.
From the Pareto front obtained in Figure 9, the currently optimal scheduling scheme was selected. First, considering the constraints of the delivery time and the number of handling equipment pieces, five options remained: (409, 9), (435, 8), (466, 7), (483, 6), and (552, 5). Next, the optimal production solution was selected. The five solutions cost $812.4, $759.6, $769.2, $762, and $760.8, respectively, so (435, 8) was chosen as the optimal production solution for order 1. Finally, the processing Gantt chart of the optimal production solution was generated, as shown in
Figure 10.
The number on each rectangular block is the workpiece number. Equipment incurred the corresponding setup time when switching between different workpieces, so workpiece 4 on M1−1 was not processed from moment 0. Workpieces of different types processed on the same equipment required the corresponding setup time, such as workpiece 1 and workpiece 11 on M1−1; workpieces of the same type processed on the same equipment required no setup time, such as workpiece 11 and workpiece 12 on M2−1. As can be seen from
Figure 10, all workpieces were processed continuously on the equipment, and the equipment had no idle time during the processing cycle. The optimal scheduling processing order in
Figure 10 was: 16-9-1-11-2-12-17-14-6-13-10-4-5-15-8-7-3-18. Its specific production operation plan, comprising the start/end processing-time, start/end setup-time, handling-count, and start/end handling-time matrices, was calculated as shown in
Figure 11.
Note: the start/end machining-time matrices record, in entry (h, j), the start/end time of machining the jth workpiece at work center h. The start/end setup-time matrices record, in entry (h, j), the start/end setup time of the jth workpiece processed by work center h, where “-” indicates that no setup time is required for the current workpiece. The handling-count matrix records, in entry (h, c), the number of workpieces moved in the cth handling event of work center h; by default, the last work center requires no handling. The start/end handling-time matrices record, in entry (h, c), the start/end time of the cth handling event of work center h.