Article

A Novel Hybrid Whale Optimization Algorithm for Flexible Job-Shop Scheduling Problem

1
School of Mechanical and Electrical Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China
2
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
*
Author to whom correspondence should be addressed.
Machines 2022, 10(8), 618; https://doi.org/10.3390/machines10080618
Submission received: 21 June 2022 / Revised: 19 July 2022 / Accepted: 25 July 2022 / Published: 27 July 2022
(This article belongs to the Section Advanced Manufacturing)

Abstract:
The flexible job shop scheduling problem (FJSP) is an extension of the classical job shop scheduling problem and one of the best-known NP-hard problems. To obtain better global optima for the FJSP, a novel hybrid whale optimization algorithm (HWOA) is proposed, in which minimizing the makespan is taken as the objective. Firstly, the uniformity and extensiveness of the initial population distribution are increased with a good point set (GPS). Secondly, a new nonlinear convergence factor (NCF) is proposed to coordinate the weights of global and local search. Then, a new multi-neighborhood structure (MNS) is proposed, within which three new neighborhoods search for the optimal solution from different directions. Finally, a population diversity reception mechanism (DRM) is presented, which ensures to some extent that population diversity is preserved across iterations. Seven international benchmark functions are used to test the performance of HWOA, and the results show that HWOA is more efficient. HWOA is then applied to 73 international FJSP instances of different scales and flexibility and four randomly generated instances, and the results further verify its effectiveness and superiority.

1. Introduction

The scheduling problem is the rational allocation of limited resources under certain constraints, with the aim of achieving one or more target values to the satisfaction of the operator. Job-shop scheduling is one of the most important issues in the planning and operation of modern production and manufacturing systems; it is the key to the operation of a manufacturing system and the heart of production planning and control [1]. With the development of economic globalization, manufacturing enterprises are bound to face increasingly severe competition. Therefore, a reasonable and effective production scheduling solution is indispensable for raising enterprise competitiveness. Currently, many companies still rely on experienced workers for production scheduling planning. Although experience-based scheduling solves the resource allocation problem in the production process to a certain extent, it relies too heavily on personal experience and can only be used for simple, small-scale production scheduling problems. Furthermore, globally optimal scheduling plans cannot be guaranteed. Therefore, in large-scale production tasks, such experience-dependent manual scheduling is no longer effective. Meanwhile, traditional industrial production is moving in the direction of flexibility, digitalization, and intelligence. It is therefore essential to study the flexible job shop scheduling problem.
The job-shop scheduling problem (JSP) is a typical NP-hard problem and is very important in production scheduling management systems [2]. In 1959, Bowman et al. first gave the definition of job shop scheduling [3]. In this problem, there are m machines and n jobs to be machined, where each job has one or more machining operations and is machined in a fixed operation sequence. Each operation requires a designated machine, and each machine is always available and can perform only one operation at a time without interruption. The scheduling decision is to determine the job sequence on the machines, subject to machine availability, so as to minimize the makespan.
The flexible job-shop scheduling problem (FJSP) is an extension of the classic JSP [4]. In 1990, Brucker et al. first considered machine operation flexibility and proposed the concept of flexible job shop scheduling [5]. In addition to operation sequencing, machine assignment must also be considered: each operation of a job can be performed on multiple machines, and the operation time is not necessarily the same on each machine. Actual production can thus be adapted flexibly according to the resource load. However, the increase in flexibility makes the FJSP more complex [6]; the FJSP is therefore more difficult than the traditional JSP and better suited to the actual needs of modern production [7]. There are two general approaches to solving the FJSP. Firstly, exact algorithms can obtain an effective optimal solution for small-scale problems, but for large-scale scheduling problems it is difficult for them to obtain the optimal solution in an acceptable period of time [8]. Secondly, intelligent algorithms with good search ability are usually used to deal with complex FJSPs and can obtain a near-optimal solution in a reasonable time. Such algorithms include genetic algorithms (GA), simulated annealing (SA), ant colony optimization (ACO), particle swarm optimization (PSO), grey wolf optimization (GWO), etc.
Driss et al. proposed a new genetic algorithm that adopted a new chromosome representation and different crossover and mutation strategies to solve the FJSP [9]. Jiang et al. proposed a discrete grey wolf algorithm to solve the FJSP, in which a search operator based on the crossover operation was designed so that the algorithm could work directly in the discrete domain, and an adaptive mutation method and variable neighborhood search were utilized to enhance the search performance [10]. Xu et al. proposed a parallel adaptive hybrid immune algorithm (HIA) to solve the FJSP, using a hybrid coding method, an adaptive crossover operator, a mutation operator based on the coding antibody method, and affinity calculation based on group matching; furthermore, a hybrid strategy based on simulated annealing was added to avoid trapping in local optima [11]. Caldeira et al. proposed an improved Jaya algorithm to solve the FJSP, in which an efficient initialization mechanism, local search technology, and acceptance criteria were introduced to improve the quality of the solution; tests on 203 benchmark instances proved that it was effective in solving the FJSP [12]. Wu et al. proposed a non-dominated sorting genetic algorithm for solving the FJSP with energy consumption as the goal [13]. Zhang et al. designed a machine selection method including a global selection strategy, a local selection strategy, and a random selection strategy [14]. Kacem et al. proposed a local search algorithm that comprehensively considered the machine load and operation load to solve the machine allocation problem and then used a genetic algorithm to solve the operation sequencing problem [15]. Li et al. proposed a new hybrid algorithm to solve the FJSP by combining the powerful global search ability of GA with the efficient local search ability of TS [16]. Zhang et al.
proposed a quantum genetic algorithm and applied it to the dual resource-constrained FJSP [17]. Singh et al. introduced mutation factors to speed up convergence based on the particle swarm optimization algorithm and used a chaotic mapping approach to generate initial values [18]. Wang et al. used an improved ant colony algorithm to optimize the FJSP, in which the initialization mechanism, pheromone guidance mechanism, and node selection method were improved [19]. Cruz-Chávez et al. proposed a simulated annealing algorithm incorporating a new cooling mechanism to accelerate convergence [20]. Gao et al. proposed an improved simulated annealing algorithm to solve the FJSP, combining SA with PSO to improve the simulated annealing operation and using the better particles to guide the search [21].
The WOA was proposed by Mirjalili et al. and is widely used in many fields, such as facial recognition [22], power optimization [23], aerospace [24], shop scheduling [25], path planning [26], photovoltaic cells [27], and fault diagnosis [28]. In the field of shop scheduling, Abdel et al. studied the permutation flow shop scheduling problem using WOA [29]. Liu et al. optimized the JSP using WOA and applied the Lévy flight strategy and a DE strategy to improve its performance [30]. Luan et al. proposed a discrete WOA to solve the low-carbon FJSP, using a new coding mechanism to handle the allocation of machines and operations and a hybrid variable neighborhood search to generate high-quality populations [31]. As the discussion above shows, the WOA has received growing attention from scholars because of its simple principle, few parameters to be adjusted, and easy implementation. Furthermore, many scholars have shown that its convergence speed and accuracy are higher than those of some classical swarm intelligence optimization algorithms. WOA has therefore been widely used to solve continuous function optimization problems, but seldom discrete combinatorial optimization problems, especially the FJSP. Thus, in this paper, a new hybrid whale optimization algorithm (HWOA) is studied and adopted to solve the flexible job shop scheduling problem, which is important for expanding its application. Specifically: firstly, according to the discrete combinatorial nature of the FJSP, the continuous whale positions are mapped to discrete values; secondly, the good point set strategy is used to generate a uniformly distributed initial population; thirdly, a multi-neighborhood structure is added to explore the solution space in depth; finally, a population diversity reception mechanism is used to maintain population diversity across iterations.
The rest of the paper is organized as follows. Section 2 describes the definition of the problem. Section 3 introduces the original whale optimization algorithm. Section 4 presents the improved HWOA. Section 5 analyzes the results of HWOA on various international benchmark instances of the FJSP. Section 6 summarizes the conclusions of this paper and presents directions for future work.

2. Problem Description

Flexible job shop scheduling can be described as follows: multiple jobs need to be processed on multiple machines, and each job has multiple operations. Compared to the classic JSP, the set of machines that can perform each operation of a job is no longer fixed; an operation can be performed by one or more machines [32]. Therefore, the FJSP involves two assignment tasks: machine selection and operation sequencing. Notations and abbreviations used in the article are described in Table 1.
FJSP can be divided into two main categories:
(1)
In the Total Flexible Job Shop Scheduling Problem (T-FJSP), any operation in any job can choose any machine from all machines for operations, as shown in Table 2;
(2)
In the Partial Flexible Job Shop Scheduling Problem (P-FJSP), each operation of each job can be performed by only a subset of all the machines, as shown in Table 3.
In this study, the optimization objective is to minimize the makespan $C_{max}$, which is the minimum time required for all jobs to be machined. The mathematical model can be described as follows:

$\min C_{max} = \min\,\max(C_1, C_2, \ldots, C_I)$  (1)

$S_{i,j} + x_{i,j,k} \cdot P_{i,j,k} \le S_{i,j+1}$  (2)

$\sum_{k=1}^{m} x_{i,j,k} = 1$  (3)

$C_i \le C_{max}$  (4)

$C_{i,j} \le S_{i,j+1}$  (5)

$S_{i,j} \ge 0,\; C_{i,j} \ge 0$  (6)

$E_k \ge 1,\; M_{i,j} \ge 1$  (7)
where Equation (1) is the optimization objective function. Equation (2) indicates that the preceding operation of the current operation of the same job has been completed. Equation (3) means that an operation can be performed by only one machine. Equation (4) ensures that the completion time of any job cannot exceed the total completion time. Equation (5) requires that the completion time of the current operation cannot be greater than the starting time of the next operation of the job. Equation (6) guarantees that all parameter variables are non-negative. Equation (7) shows that each machine can perform at least one operation, and at least one machine is available for each operation of a job.

3. Whale Optimization Algorithm

Mirjalili et al., Australian scholars, proposed the WOA in 2016 [24]. It has been widely used in many fields because of its simple principle, easy implementation, few parameters, high convergence accuracy, and fast convergence. WOA solves optimization problems by simulating the group hunting behaviors of humpback whales in nature, such as searching for, encircling, and chasing prey. During the search, each whale chooses between two behaviors, encircling and bubble-net attacking. During the process of encircling the prey, a whale chooses to swim either towards the best whale or towards one selected randomly.

3.1. Encircling the Prey

In the WOA, the location of the target prey is unknown beforehand, so the current optimal whale individual is assumed to be the location of the target prey, and the other whale individuals approach the optimal individual to form an encirclement. The mathematical description of this behavior is described as:
$X(t+1) = X^{*}(t) - A \cdot D$  (8)

$D = |C \cdot X^{*}(t) - X(t)|$  (9)

$A = 2a \cdot r - a$  (10)

$C = 2 \cdot r$  (11)

$a = 2 - 2t/t_{max}$  (12)
where $t$ represents the current iteration, $A$ and $C$ are coefficient vectors, $X^{*}$ is the position vector of the best individual obtained so far, $X$ is the position vector of the current individual, $|\cdot|$ denotes the absolute value, $r$ is a random vector within [0, 1], $a$ decreases linearly from 2 to 0 as the number of iterations increases, and $t_{max}$ indicates the maximum number of iterations.

3.2. Search for the Prey (Exploration Phase)

In the operation of encircling the prey, whales also have a certain probability to approach other whales, and the mathematical formula for this behavior is as follows:
$X(t+1) = X_{rand} - A \cdot D$  (13)

$D = |C \cdot X_{rand} - X(t)|$  (14)
where $X_{rand}$ is the position vector of a randomly selected individual in the whale population.
Whether an individual whale moves closer to the optimal individual position is determined by the value of the control factor $A$. The whale moves towards a randomly selected individual when $|A| \ge 1$, as shown in Figure 1; otherwise, the whale moves closer to the optimal individual. The mathematical description is defined as:
$X(t+1) = \begin{cases} X^{*}(t) - A \cdot D, & |A| < 1 \\ X_{rand} - A \cdot D, & |A| \ge 1 \end{cases}$  (15)

$D = \begin{cases} |C \cdot X^{*}(t) - X(t)|, & |A| < 1 \\ |C \cdot X_{rand} - X(t)|, & |A| \ge 1 \end{cases}$  (16)

3.3. Bubble-Net Attacking (Exploitation Phase)

When hunting, whales swim in a circle and spew out bubbles to form a bubble net to hunt prey, as in Figure 2. When forming a bubble net attack, the whale constantly updates its position in a spiral motion. The mathematical model is described as:
$X(t+1) = D' \cdot e^{bl} \cdot \cos(2\pi l) + X^{*}(t)$  (17)

$D' = |X^{*}(t) - X(t)|$  (18)
where $D'$ is the distance between the current whale and the best individual, $b$ is a constant that defines the shape of the logarithmic spiral, usually set to 1, and $l$ is a random number in $[-1, 1]$.
The probability that the whale swims in a spiral to continuously tighten the encirclement is assumed to be $P$; the probability of the shrinking encircling behavior is then $1-P$, and $P$ is usually set to 0.5. The mathematical model is as follows:
$X(t+1) = \begin{cases} X^{*}(t) - A \cdot D, & P < 0.5 \\ D' \cdot e^{bl} \cdot \cos(2\pi l) + X^{*}(t), & P \ge 0.5 \end{cases}$  (19)
The detailed principle of WOA is shown in Figure 3.
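To make the update rules above concrete, the basic WOA can be sketched in a short program (Python rather than the paper's MATLAB; the bound-clamping step and the default population size and iteration count are assumptions, not values from the paper):

```python
import math
import random

def woa_minimize(f, dim, bounds, pop=30, t_max=200):
    """Minimal sketch of the basic WOA (Equations (8)-(19)).
    f: objective to minimize; bounds = (low, high) applied per dimension."""
    low, high = bounds
    X = [[random.uniform(low, high) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)
    for t in range(t_max):
        a = 2 - 2 * t / t_max                          # Equation (12), linear decay
        for i in range(pop):
            A = 2 * a * random.random() - a            # Equation (10)
            C = 2 * random.random()                    # Equation (11)
            p, l = random.random(), random.uniform(-1, 1)
            if p < 0.5:
                # shrinking encircling: follow the best whale when |A| < 1,
                # otherwise a randomly chosen whale (Equations (15)-(16))
                ref = best if abs(A) < 1 else random.choice(X)
                new = [ref[d] - A * abs(C * ref[d] - X[i][d]) for d in range(dim)]
            else:
                # spiral update around the best whale, b = 1 (Equations (17)-(18))
                new = [abs(best[d] - X[i][d]) * math.exp(l) * math.cos(2 * math.pi * l)
                       + best[d] for d in range(dim)]
            X[i] = [min(max(v, low), high) for v in new]
        best = min(X + [best], key=f)                  # keep the best-so-far
    return best

# usage: minimize the 5-dimensional sphere function
sol = woa_minimize(lambda x: sum(v * v for v in x), dim=5, bounds=(-10, 10))
```

The elitist `min(X + [best], key=f)` step makes the best-so-far monotonically improve, mirroring how the algorithm always encircles the current best whale.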

4. Proposed HWOA

4.1. Encoding and Decoding

The FJSP is a discrete combinatorial optimization problem, while WOA is designed for continuous optimization problems. Therefore, to solve the FJSP, WOA must be adapted through a conversion mechanism. Since both the operation sequencing problem and the machine assignment problem need to be encoded, a two-layer coding approach comprising an operation sequence $OS$ and a machine sequence $MS$ is used. The principle can be described as follows: firstly, generate as many random numbers as there are operations $O$; secondly, map the generated random numbers according to their ascending order, for instance, $rand = (0.62, 0.31, 0.37, 0.92, 0.27, 0.15)$ yields the mapping sequence $rand' = (6, 5, 2, 3, 1, 4)$, i.e., the indices of the random numbers sorted from smallest to largest; the elements of the mapped $rand'$ are then used as position indices of the job numbers. Taking the $3 \times 4$ partial flexible job shop scheduling instance described in Table 2, with 3 jobs, each job having 2 operations and each operation having multiple available machines, the encoding and decoding are depicted in Figure 4 and Figure 5.
As shown in Figure 4 and Figure 5, six random numbers are generated, and the $OS$ and $MS$ are determined based on the mapping sequence of the generated random numbers. The $OS$ is ordered as follows: the first operation of job 3, the second operation of job 3, the first operation of job 1, the first operation of job 2, the second operation of job 1, and the second operation of job 2. According to the operation order, the table of machines available for the current operation of each job is queried in turn, and the machines selected for the current operation of each job form the $MS$.
Furthermore, decoding can be described as follows. Step 1: determine the operation time required for each operation based on the operation sequencing table and the machine selection table. Step 2: determine the end time of every operation, which is determined by the idle time and the processing time of the corresponding machine. Step 3: compare the completion times of all the jobs; the largest of them is the makespan, and the schedule with the smallest makespan is chosen as the optimum.
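The random-key mapping of Figures 4 and 5 can be illustrated in a few lines (a Python sketch; the fixed assignment of chromosome positions to jobs, two slots per job, is an assumption consistent with the 3-job, 2-operation example in the text):

```python
# the six random keys from the example in the text
rand = [0.62, 0.31, 0.37, 0.92, 0.27, 0.15]

# mapping sequence rand': indices of the random keys in ascending order
mapping = sorted(range(1, len(rand) + 1), key=lambda i: rand[i - 1])
# mapping == [6, 5, 2, 3, 1, 4], as in the text

# assumed slot-to-job assignment: positions 1-2 belong to job 1,
# positions 3-4 to job 2, positions 5-6 to job 3
jobs = [1, 1, 2, 2, 3, 3]
os_sequence = [jobs[m - 1] for m in mapping]
# os_sequence == [3, 3, 1, 2, 1, 2]: job 3, job 3, job 1, job 2, job 1, job 2,
# i.e. the OS described for Figures 4 and 5
```

Each occurrence of a job number in `os_sequence` stands for that job's next unscheduled operation, which is what makes every permutation of the keys decode to a feasible operation order.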

4.2. Population Initialization

The quality of the initial population has a great impact on the performance of the algorithm. The initial population of WOA is generally generated randomly, which results in a low-quality initial population. Therefore, the theory of the good point set is introduced to generate a uniformly distributed initial population, which effectively improves the diversity of the population and, to a certain extent, avoids premature convergence. It is defined mathematically as follows [33]:
Let $G$ be the unit cube in $m$-dimensional Euclidean space and $x = (x_1, x_2, \ldots, x_m) \in G$; the point set is expressed as:
$P_n(k) = \left\{\left(\{r_1 \cdot k\}, \{r_2 \cdot k\}, \ldots, \{r_m \cdot k\}\right),\; k = 1, 2, \ldots, n\right\}$  (20)
If the point set deviation satisfies:
$\varphi(n) = C(r, \varepsilon)\, n^{-1+\varepsilon}$  (21)
Then P n k is said to be a good point-set, r is a good point:
$r_k = \left\{2\cos(2\pi k/p)\right\},\; 1 \le k \le m$  (22)
In Equation (21), $\varepsilon$ is an arbitrary positive number and $C(r, \varepsilon)$ is a constant depending only on $r$ and $\varepsilon$. In Equation (22), $\{\cdot\}$ denotes the fractional part, and $p$ is the smallest prime number satisfying $p \ge 2m + 3$.
For the initial selection of the machine, a hybrid search strategy is used, of which 40% is generated with reference to the greedy algorithm and 60% is generated randomly to avoid falling into a local optimum during the iterations. Herein, taking the two-dimensional scatter plots in Figure 6 as an instance, it can be seen that the initial positions of the whale individuals initialized with the good point set are more uniformly distributed than those with random distribution, which can effectively ensure the diversity of the initial population solutions. The pseudo-code of the good point set initialization is described in Algorithm 1.
Algorithm 1. The population is initialized by the good point set.
1. Let the population size be G.
2.  for n = 1:G do
3.    for k = 1:m do
4.      p takes the smallest prime number that satisfies p ≥ 2m + 3
5.      Construct the point matrix with n rows and m columns
6.      Calculate the r_k value by Equation (22)
7.      The initial population location P is calculated by Equation (20)
8.    end for
9.  end for
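Algorithm 1 can be sketched as follows (a Python sketch; scaling the unit-cube points to the actual search range is left to the caller, and taking the fractional part of the possibly negative values $2\cos(2\pi k/p)$ is an assumption about the intended reading of Equation (22)):

```python
import math

def good_point_set(n, m):
    """Sketch of Algorithm 1: n points in the unit cube [0, 1)^m built from
    the good point set of Equations (20)-(22)."""
    p = 2 * m + 3                                   # smallest prime with p >= 2m + 3
    while any(p % d == 0 for d in range(2, math.isqrt(p) + 1)):
        p += 1
    # Equation (22): r_k = {2 cos(2*pi*k/p)}; '% 1' takes the fractional part
    r = [(2 * math.cos(2 * math.pi * k / p)) % 1 for k in range(1, m + 1)]
    # Equation (20): the k-th point is the fractional part of k * r
    return [[(r[j] * k) % 1 for j in range(m)] for k in range(1, n + 1)]

pts = good_point_set(n=50, m=2)   # 50 whale positions in 2 dimensions,
# scaled afterwards as, e.g., low + (high - low) * pts[i][j]
```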

4.3. Nonlinear Convergence Factor

For swarm intelligence algorithms, how the global search and the local search are coordinated has a significant impact on performance. The global search mechanism ensures that the algorithm can explore more of the solution space during iteration, thus avoiding local optima, while the local search mechanism ensures that the algorithm can exploit promising solutions and thus speeds up convergence. In the basic WOA, the global and local searches are mainly coordinated by the parameter $A$, and it can be seen from Equation (10) that the value of $A$ is linearly related to the convergence factor $a$, so the value of $a$ affects both the global and the local search. In the basic WOA, $a$ decreases linearly from 2 to 0 as the number of iterations increases; this linear schedule leads to slow convergence and makes it easy to fall into a local optimum. In fact, the algorithm should have a strong global search capability in the early stage to maintain a fast convergence rate, while in the later stage local search should dominate to enhance the accuracy of the solution. Based on the above analysis, a nonlinear convergence factor is introduced [34], whose mathematical expression is:
$a = a_m \left(1 + \cos\dfrac{\pi (t-1)}{t_{max}-1}\right)$  (23)
where $a_m$ is the initial value parameter of the convergence factor $a$, $t_{max}$ is the maximum number of iterations, and $t$ is the current iteration. A comparison of the convergence factor $a$ before and after the improvement is depicted in Figure 7.
From Figure 7, it can be seen that the improved nonlinear convergence factor a converges more slowly in the early and late stages of the algorithm and more quickly in the middle stage. Such a convergence approach can well coordinate the global search performance of the algorithm in the early stage with the local search performance in the latter stage and ensure the overall convergence speed of the algorithm.
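Equation (23) can be checked numerically against the linear schedule of Equation (12) (a Python sketch with $a_m = 1$, so the factor runs from 2 down to 0):

```python
import math

def ncf(t, t_max, a_m=1.0):
    """Nonlinear convergence factor of Equation (23): with a_m = 1 it falls
    from 2 to 0, slowly at both ends and quickly in the middle."""
    return a_m * (1 + math.cos(math.pi * (t - 1) / (t_max - 1)))

def linear_a(t, t_max):
    """Linear convergence factor of the basic WOA, Equation (12)."""
    return 2 - 2 * t / t_max

# the nonlinear factor stays high longer early on (stronger global search)
# and stays low longer late (stronger local search) than the linear schedule
values = [ncf(t, 500) for t in (1, 125, 250, 375, 500)]
```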

4.4. Multi-Neighborhood Structure

A new multi-neighborhood structure for FJSP is proposed in which there are three new neighborhoods, N1, N2, and N3, and the optimization of FJSP is carried out from different directions.
Neighborhood structure N1: the scheduling solution is optimized in terms of different selection pairings of machines, and the machine selection is updated by a gene mutation mechanism.
Neighborhood structure N2: The scheduling solution is optimized in terms of the operation update mechanism, and a perturbation mechanism is proposed to disturb, in part, the position of whale individuals and avoid falling into a local optimum.
Neighborhood structure N3: the local search capability is enhanced, making it easier for HWOA to break through the local optimum limit by locking onto the core operations for updating as the iteration proceeds.

4.4.1. Neighborhood Structure N1

During each iteration, the excellent machine-set gene sequences in the parent are retained, and the machine gene mutation mechanism is executed to obtain an alternative offspring. Let the set of parent machines be $M_i = (M_1, M_2, \ldots, M_n)$. Selection operation: $b$ mutation points are selected from $M_i$ in each iteration, where the value of $b$ decreases with the number of iterations. Machine gene mutation operation: for each of the selected $b$ mutation points, the set $M_{i,j}$ of machines available for the current operation is obtained, and a new machine is drawn from $M_{i,j}$. The new machine set is $M_i' = (x_{i1}, x_{i2}, \ldots, x_{in})$. The machine mutation operation is shown in Figure 8 and Figure 9, and the Gantt chart is shown in Figure 10. The available machines and processing times for each operation of each job are shown in Table 2. The meanings of the symbols $O_{i,j}$ and $M_{i,j}$ in the figures are described in Table 3.
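A minimal sketch of the N1 machine gene mutation (Python; the paper only states that $b$ decreases with the iterations, so the linear shrinking schedule and the cap `b_max` below are assumptions):

```python
import random

def n1_machine_mutation(ms, machine_options, t, t_max, b_max=3):
    """Sketch of neighborhood N1: re-pick the machine at b mutation points of
    the machine sequence MS.
    ms: chosen machine index per operation;
    machine_options: machines available for each operation."""
    b = max(1, round(b_max * (1 - t / t_max)))   # fewer mutation points later on
    new_ms = list(ms)
    for pos in random.sample(range(len(ms)), b):
        options = machine_options[pos]
        if len(options) > 1:                     # only flexible operations can mutate
            new_ms[pos] = random.choice([m for m in options if m != ms[pos]])
    return new_ms

# usage: six operations with their available machine sets (machine indices)
opts = [[1, 2], [2, 3], [1, 3], [1, 2, 3], [2], [1, 3]]
child = n1_machine_mutation([1, 2, 3, 1, 2, 1], opts, t=10, t_max=500)
```

Because the mutated machine is always drawn from the operation's own availability set, every offspring remains a feasible machine assignment.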

4.4.2. Neighborhood Structure N2

The initial population generated using the good point set fully ensures population diversity in the early stage. However, in the late stage of the algorithm, population diversity is reduced as the individuals converge. Thus, the neighborhood structure N2 is designed as a perturbation to increase population diversity. Let the population be $Popsize = (X_1, X_2, \ldots, X_N)$, a whale individual be $X_i = (x_{i1}, x_{i2}, \ldots, x_{in})$, and the adaptive interference probability be $p$. A random number $r$ is generated during each iteration. If $r < p$, a disturbance operation is performed in this iteration: a random perturbation factor $\beta$ in the interval $(0, 10)$ is generated to disturb the selected elements and produce a new individual $X_i' = (x_{i1}', x_{i2}', \ldots, x_{in}')$, as shown in Figure 11. This is mathematically described as follows:
$x_{ik}' = \begin{cases} \beta \cdot x_{ik}, & r < p \\ x_{ik}, & r \ge p \end{cases}$  (24)

$p = t / (2 t_{max})$  (25)
As shown in Figure 11, the perturbation operation occurs when $r < p$, with the number of perturbations $z = 1$, the perturbation position $k = 4$, and the random perturbation factor $\beta = 2$. The number and positions of the perturbations are determined randomly.
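The perturbation of Equations (24) and (25) can be sketched as follows (Python; perturbing a single randomly chosen element corresponds to the $z = 1$ case illustrated in Figure 11, while the paper also allows several positions):

```python
import random

def n2_perturb(x, t, t_max):
    """Sketch of neighborhood N2, Equations (24)-(25): with adaptive
    probability p = t / (2 * t_max), scale one randomly chosen element by a
    random factor beta in (0, 10)."""
    p = t / (2 * t_max)                  # Equation (25): grows from ~0 to 0.5
    if random.random() >= p:
        return list(x)                   # no disturbance in this iteration
    new_x = list(x)
    k = random.randrange(len(x))         # perturbation position, chosen randomly
    beta = random.uniform(0, 10)         # random perturbation factor
    new_x[k] = beta * new_x[k]           # Equation (24)
    return new_x

original = [1.0, 2.0, 3.0, 4.0]
perturbed = n2_perturb(original, t=400, t_max=500)
```

Since $p$ grows with $t$, disturbances become more frequent exactly when the converging population needs extra diversity.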

4.4.3. Neighborhood Structure N3

To enhance the local search capability of the algorithm, N3 uses a core operation across-machine gene reinforcement mechanism. Firstly, the sequence that consumes the most time from the first operation to the last operation is determined, giving the start and end of the core operations. Secondly, between the starting operation and the ending operation, the search moves backwards from the ending operation to find the operations that are closely connected to their predecessors. Finally, none of the operations found in this sequence can be advanced or postponed, all the way back to the starting operation, so a core operation path is found. This means that any change to the sequence of core operations would also affect the makespan. To explain the neighborhood structure N3 in detail, the FJSP in Table 4 is taken as an instance, and the results are shown in Figure 12 and Figure 13.
As shown in Figure 12, the gray parts are the core operations, running from the last gray block back to the first. It can be seen that any change in the arrangement of the core operations would directly affect the makespan. Using the core operation across-machine gene reinforcement mechanism, the core operations are reinforced with machine genes one by one without prolonging the makespan. During the execution of machine gene mutation operations on the core operations, only mutations that produce a positive improvement are retained, so the set of machine genes on the core path is continuously enhanced. Sequences whose makespan is reduced by machine-set gene changes are preserved into the next generation. As shown in Figure 12 and Figure 13, the set of core operations before updating is $\{O_{11}, O_{31}, O_{32}\}$, with a makespan of 12, and the set of core operations after updating is $\{O_{11}, O_{12}, O_{22}\}$, with a makespan of 9. The pseudo-code of the core operation across-machine gene reinforcement mechanism is described in Algorithm 2.
Algorithm 2. Core operation across-machine gene reinforcement mechanism.
1. Let the current core operation arrangement code be O S i , j , the machine arrangement code be M S i , j , k , the length of the core path set composed of core operations be l e n , and the fitness function be F
2.  for  i = 1 : l e n
3.     Select the i-th operation on the core path; the set composed of its operation set O S i , j and machine set M S i , j , k is denoted as S i .
4.      The across-machine gene reinforcement operation is performed on M S i , j , k , and the newly generated machine set is noted as m a c h i n e i , j , k , and the set of newly generated machine set and operation set is noted as U p s i
5.      if F( U p s i ) < F( S i )
6.         M S i , j , k = m a c h i n e i , j , k
7.      end if
8.  end for

4.5. Diversity Reception Mechanism

By using the diversity reception mechanism, the algorithm can accept inferior individuals to improve population diversity, which effectively avoids falling into local optima. Its mathematical description is as follows:
$P = \begin{cases} 1, & f_n \le f_o \\ 5(f_n - f_o)/f_o, & f_n > f_o \end{cases}$  (26)
where $f_n$ is the fitness value of the newly generated individual in the current generation and $f_o$ is the fitness value of the individual in the previous generation. When $P = 1$, the newly generated superior whale individual is accepted unconditionally. Otherwise, a random number $q \in (0, 0.5)$ is generated, and if $P < q$ the newly generated inferior whale individual is accepted. The pseudo-code of HWOA is described in Algorithm 3.
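The acceptance rule above can be sketched as (a Python illustration of the mechanism, not the authors' code):

```python
import random

def drm_accept(f_new, f_old):
    """Sketch of the diversity reception mechanism, Equation (26): improving
    individuals are always accepted; a worse individual is accepted with a
    chance that shrinks as the fitness gap grows."""
    if f_new <= f_old:
        return True                      # P = 1: superior individual
    P = 5 * (f_new - f_old) / f_old      # grows with the relative fitness gap
    q = random.uniform(0, 0.5)
    return P < q                         # occasionally keep an inferior whale
```

A slightly worse individual (small $P$) can pass the $q$ threshold, while one whose relative gap makes $P \ge 0.5$ never can; the mechanism thus trades a little solution quality for diversity.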
Algorithm 3. HWOA.
1. Initialize the parameters and population
2. Calculate the fitness value and save the optimal individual position
3. while t < t m a x do
4.  for i = 1 : N do
5.    Calculate the value of A according to Equation (10), the value of C according to Equation (11) and the value of the nonlinear convergence factor a according to Equation (23)
6.     Generate a random number P within 0 , 1
7.       if P < 0.5 do
8.         if |A| < 1 do
9.           Update the individual whale position according to Equation (8)
10.        else
11.          Update the individual whale position according to Equation (13)
12.        end if
13.      else
14.        Update the individual whale position according to Equation (17)
15.      end if
16.  end for
17. Perform multi-neighborhood structure updates and retain the resulting improved solutions
18. Carry out the diversity reception mechanism
19. Update the optimal individual position
20. t = t + 1
21. end while
22. end

5. Experimental Analysis

5.1. Benchmark Functions Test

In order to verify the performance of HWOA, we chose 7 classic benchmark functions for testing [35]; the functions are listed in Table 5. The seven benchmark functions cover four characteristics: unimodal, multimodal, separable, and non-separable, tagged as US, UN, MS, and MN in Table 5.
The computer used in this study runs the Windows 10 operating system with an Intel Pentium Gold G5420 CPU at 3.80 GHz and 8 GB of RAM; the MATLAB programming language was used to implement the following tests.
In order to verify the superiority of HWOA, we compared it with the following seven algorithms: WOA, MFO, GWO, SCA, MWO, DA, and SSA. To ensure the fairness of the experiments, we used the same experimental parameters and tested in three dimensions: D = 30, 50, and 100. To reduce the influence of randomness, each algorithm was run 30 times independently.

5.2. Performance Evaluation Compared with Other Algorithms

The results obtained by HWOA and the other seven algorithms are listed in Table 6, Table 7 and Table 8. Table 6 shows the results at the maximum iteration $t_{max} = 500$ obtained by HWOA and the other seven algorithms in 30 dimensions. Table 7 shows the results at $t_{max} = 2000$ in 50 dimensions. Table 8 shows the results at $t_{max} = 8000$ in 100 dimensions. $Min$ is the minimum value obtained among the eight algorithms, $Mean$ is the mean value over 30 independent runs of each algorithm, $Std$ is the standard deviation over the 30 independent runs, and $Sig$ is the Wilcoxon signed-rank test, conducted at a significance level of 0.05. The results are marked as “+/=/−”, corresponding to HWOA being superior, equal, and inferior to the comparison algorithm, respectively. The convergence curves of the algorithms in the three dimensions are shown in Figure 14.
As can be seen in Table 6, Table 7 and Table 8, HWOA outperforms the other seven algorithms in both mean results and standard deviation at 30 and 50 dimensions. In the Wilcoxon signed-rank test, HWOA and WOA achieve the same performance on benchmark function f6, while on the other six benchmark functions HWOA outperforms all of the compared algorithms. At 50 dimensions, HWOA finds the optimal values of benchmark functions f1, f2, and f5; at 100 dimensions, it finds the optimal values of f1, f2, f3, and f5. Although WOA also finds the optimal values of f1, f2, f3, and f5, the data in Table 6, Table 7 and Table 8 and Figure 14 show that the proposed HWOA outperforms the other seven algorithms in terms of convergence speed across the different dimensions. These results indicate that HWOA is robust across problems of different dimensionality and confirm the superiority of the proposed algorithm.
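The Sig column described above can be reproduced with SciPy's implementation of the Wilcoxon signed-rank test (a sketch under the stated 0.05 significance level; the helper name and the per-run fitness arrays are ours, not from the paper):

```python
import numpy as np
from scipy.stats import wilcoxon

def sig_mark(hwoa_runs, rival_runs, alpha=0.05):
    """Return '+', '=', or '-' for one benchmark function: HWOA superior to,
    statistically indistinguishable from, or worse than the rival algorithm."""
    hwoa = np.asarray(hwoa_runs, dtype=float)
    rival = np.asarray(rival_runs, dtype=float)
    if np.array_equal(hwoa, rival):
        return "="  # identical samples: wilcoxon() rejects all-zero differences
    _, p = wilcoxon(hwoa, rival)
    if p >= alpha:
        return "="  # no significant difference at level alpha
    # significant difference: decide direction by mean best fitness (minimization)
    return "+" if hwoa.mean() < rival.mean() else "-"
```

Applied to the 30 best fitness values per algorithm and function, this yields the “+/=/−” marks of Tables 6–8.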

5.3. Application to Flexible Job Shop Scheduling

5.3.1. Experimental Settings

The parameters of HWOA are set as follows: population size 100, vector dimension n, spiral coefficient b = 1, maximum number of iterations tmax = 500, and am = 1.
In order to verify the feasibility and effectiveness of the HWOA, four sets of internationally known instances and one set of randomly generated instances are selected. The first set of FJSP instances, proposed by Brandimarte [36], includes 10 representative FJSP problems named Mk01–Mk10 and is one of the standard instance sets currently used for studying the FJSP. The second set, proposed by Kacem [15], contains five FJSP instances of different sizes. The third set is the Fattahi instances, which consist of a total of 18 small and medium-sized FJSPs. The fourth set is the highly flexible Hurink benchmark instances (vdata), which contain 40 FJSPs of different sizes. The fifth group consists of four randomly generated large-scale FJSP instances. The values of the instance data obey a uniform distribution within a certain range, the number of operations is the same for each job, the machines available for each operation are randomly drawn from the total set of machines, and the corresponding processing times are generated within [1, 100]. In order to eliminate randomness in the experimental process, multiple tests are conducted on each instance.
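The randomly generated group can be sketched as follows, under the stated assumptions (uniformly distributed data, the same number of operations per job, eligible machines drawn at random, processing times in [1, 100]); the generator name and data layout are ours, not the paper's:

```python
import random

def random_fjsp(n_jobs, ops_per_job, n_machines, seed=0):
    """Generate a P-FJSP-style instance: each job is a list of operations,
    and each operation maps its eligible machines to a processing time
    drawn uniformly from [1, 100]."""
    rng = random.Random(seed)
    jobs = []
    for _ in range(n_jobs):
        ops = []
        for _ in range(ops_per_job):
            k = rng.randint(1, n_machines)  # number of eligible machines
            eligible = rng.sample(range(1, n_machines + 1), k)
            ops.append({m: rng.randint(1, 100) for m in eligible})
        jobs.append(ops)
    return jobs
```

A fixed seed makes a generated instance reproducible across repeated tests.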

5.3.2. Strategy Validity Verification

Four strategies are used to improve the efficiency of the HWOA algorithm for the FJSP: good point set initialization (GPS), the nonlinear convergence factor (NCF), the multi-neighborhood structure (MNS), and the diversity reception mechanism (DRM). The variant that adds GPS and NCF to WOA is named HWOA-1, the variant that adds the N1 and N2 neighborhoods to HWOA-1 is named HWOA-2, and the variant that adds the N3 neighborhood to HWOA-2 is the full HWOA. HWOA-1, HWOA-2, and HWOA all contain the DRM. For a fair comparison, each algorithm is given 10 independent runs for each trial. LB and UB are the lower and upper bounds of the test case, respectively. Cmin is the minimum value of each algorithm over the 10 runs, and Cmean is the mean value. Omin is the minimum value over the 10 runs of all compared algorithms, MRPD is the relative deviation percentage from Omin, which is calculated as in Equation (27), and the relative improvement percentage Pro is calculated by Equation (28).
MRPD = (Cmin − Omin) / Omin × 100    (27)
Pro = (Oar − Crr) / Oar × 100%    (28)
where Crr and Oar are the best values obtained over 10 runs by HWOA and the original WOA, respectively.
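Equations (27) and (28) translate directly into code (a minimal sketch; the function names and sample makespan values are ours):

```python
def mrpd(c_min, o_min):
    # Equation (27): percentage deviation of one algorithm's best makespan
    # C_min from the best makespan O_min found by any compared algorithm
    return (c_min - o_min) / o_min * 100.0

def pro(o_ar, c_rr):
    # Equation (28): relative improvement of HWOA's best value C_rr
    # over the original WOA's best value O_ar
    return (o_ar - c_rr) / o_ar * 100.0
```

An algorithm that matches the overall best on an instance has MRPD = 0, and Pro > 0 means HWOA improves on WOA.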
For the data in Table 9, the first column gives the name of the instance, the second the size of the instance, the third the internationally published lower bound for the instance, and the fourth the internationally published upper bound for its solution.
From Table 9, it can be seen that the HWOA algorithm shows superior capability on the BRdata standard instances, and all of its results are better than those of WOA, HWOA-1, and HWOA-2. Although some of the results do not reach the lower bounds of the instances, they are within reasonable limits. HWOA-1 and WOA obtain the same minimum value Cmin on MK02, MK04, MK06, MK09, and MK10, but the overall results of HWOA-1 are better than those of WOA under the synergistic effect of the GPS and NCF strategies.
From Figure 15, it can be concluded that HWOA achieves a better solution on the MK01 instance than WOA and HWOA-1, and converges to a superior solution earlier than HWOA-2. Figure 16 shows that the mean values obtained by HWOA-1, with its GPS and NCF strategies, in solving the BRdata standard instances are better than those obtained by WOA. HWOA-2, with the addition of the N1 and N2 neighborhood structures, produces significantly better results than WOA and HWOA-1. Furthermore, the HWOA algorithm, with the addition of the N3 neighborhood structure, achieves the best results relative to the other three algorithms.
It can be seen in Table 10 that the HWOA algorithm achieves the best results on 15 international benchmark instances of different sizes. Over the five Kacem and 10 BRdata international benchmark instances, the percentage improvement of the results obtained by HWOA reaches 16.19% relative to WOA, 14.81% relative to HWOA-2, and 2.95% relative to HWOA-1, showing an improvement in every case.
Figure 17 and Figure 18 show two Gantt charts of HWOA solutions, for a Kacem instance and a BRdata instance, respectively: the Kacem03 instance (10 machines, seven jobs) and the MK10 instance (15 machines, 20 jobs).

5.3.3. Performance Verification

In the following tests, the HWOA is compared with several algorithms from the literature on different international benchmark instances. In addition to comparing the best solutions obtained by each algorithm, RPD tests are also performed on the results of the different algorithms for all instances; the RPD is calculated by Equation (29), where LB is the lower bound of the test instance and Cmin is the minimum makespan obtained for the instance over multiple runs of the current algorithm.
RPD = (Cmin − LB) / LB × 100    (29)
In order to minimize the influence of the randomness of the algorithms, each of the following comparison experiments was repeated over 20 runs.
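Equation (29) can likewise be sketched in code, together with the mean RPD reported for each instance set (the function names and sample values are ours):

```python
def rpd(c_min, lb):
    # Equation (29): percentage deviation of the best makespan C_min
    # found over multiple runs from the instance's lower bound LB
    return (c_min - lb) / lb * 100.0

def mean_rpd(results):
    # Average RPD over a list of (C_min, LB) pairs, one pair per instance
    return sum(rpd(c, lb) for c, lb in results) / len(results)
```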

Comparison Experiment 1

To further validate the performance of the HWOA, the set of FJSP instances proposed by Brandimarte, comprising 10 medium-sized FJSPs, is used in Experiment 1. Several algorithms from the literature are introduced for comparison: OPSO [6], PPSO [37], KBACO [38], Heuristic [39], IWOA [40], and NIMASS [41]. The results are presented in Table 11 and Table 12.
As can be seen from Table 11, compared with the other six algorithms, HWOA achieves optimal solutions for five of the 10 instances and suboptimal solutions for the remaining five, which is excellent overall. As for the RPD in Table 12, the HWOA proposed in this paper has the smallest overall mean RPD among the seven algorithms. This is also reflected in the fact that the internationally known lower-bound solutions are found for MK03 and MK08, and the deviations of the best solutions found by HWOA from the internationally known lower bounds of the four instances MK02, MK05, MK07, and MK09 are also small, at between 2% and 9%. A box plot of HWOA and the other six algorithms on the BRdata instances is shown in Figure 19.

Comparison Experiment 2

The Kacem instances are introduced to reflect the effectiveness of the HWOA algorithm, which is compared with four algorithms: KBACO [38], HGWO [42], IWOA [40], and EQEA [4]. The experimental results are listed in Table 13 and Table 14. Among them, HWOA, KBACO, and EQEA all perform well in both Cmin and RPD.

Comparison Experiment 3

The Fattahi instances, which contain a total of 18 FJSPs, are selected to verify the performance of the HWOA; the results of comparing HWOA with the following six algorithms are presented in Table 15 and Table 16: FPA [43], M2 [44], HA [16], GOA [45], AIA [46], and EPSO [47]. As can be seen, HWOA achieves the known lower-bound solutions on 10 instances and achieves the well-known solutions, together with HA and EPSO, on eight instances. Overall, HWOA outperforms FPA, M2, GOA, and AIA on the Fattahi instance test.

Comparison Experiment 4

Forty instances of the Hurink benchmark with high flexibility are selected. Since no lower bounds have been published for these instances, the well-known solutions obtained by other scholars are used as the lower bounds for comparison. This experiment uses the results reported for these instances by the HA [16] published by Li and Gao in 2016 as the effective lower-bound benchmark. Table 17 includes the results of the comparison with the following three algorithms: N1-1000 [48], N2-1000 [48], and IJA [12].
As can be seen from the results in Table 17, HWOA achieves good results on the highly flexible Hurink benchmark instances. The results obtained on most instances are consistent with the lower-bound values, and the percentage deviation of the HWOA solutions with respect to the well-known solutions is smaller than that of the other three algorithms. Based on these comparison results, it can be concluded that HWOA also performs well in solving highly flexible problems.

Comparison Experiment 5

Four randomly generated large-scale FJSP instances are solved by HWOA, which is compared with WOA [24], MWOA [49], and IWOA [40]. The results of the comparative tests on these instances are presented in Table 18.
The convergence curves of the four algorithms WOA, MWOA, IWOA, and HWOA on the Ra instances are depicted in Figure 20. As can be seen, HWOA achieves the best results among the four algorithms. The reasons are as follows: first, the good point set population initialization mechanism helps HWOA find a good solution space at the beginning of the iteration, resulting in rapid convergence and reduced running time; second, the multi-neighborhood structure in the iterative process ensures that HWOA keeps breaking out of difficult regions and avoids falling into local optima; and third, the diversity reception mechanism preserves the population diversity of HWOA, which improves computational accuracy. The scheduling Gantt charts for the Ra instances are shown in Figure 21.

6. Conclusions

In this study, a novel hybrid whale optimization algorithm (HWOA) is proposed to solve the flexible job shop scheduling problem with the objective of minimizing the makespan. The whale optimization algorithm (WOA) is a recent intelligent algorithm commonly used to solve optimization problems, and its effectiveness on continuous problems has been validated by many scholars. In this study, WOA is adapted through a suitable transformation to solve the discrete FJSP, and a variety of effective strategies are added to it, yielding HWOA, to make it better suited to the FJSP. Firstly, the initial population is generated by introducing good point set (GPS) theory, which makes the distribution of the initial population more uniform and gives it more comprehensive coverage in the initial stage. Secondly, in order for the convergence factor a in WOA to play a better coordinating role, a nonlinear convergence factor (NCF) is proposed to balance global search and local search. Thirdly, a new multi-neighborhood structure (MNS) containing three new neighborhoods is proposed: the N1 neighborhood optimizes the scheduling solution through different selections of machine pairings, the N2 neighborhood optimizes it through the operation update mechanism, and the N3 neighborhood enhances the local search capability of HWOA. Finally, a population diversity reception mechanism (DRM) is proposed that ensures, to some extent, that the population does not lose diversity over the iterations. Seven international standard functions, tested in 30, 50, and 100 dimensions, are used to evaluate HWOA, and the results show that HWOA has a strong search ability on almost all of the tested functions.
Seventy-three international benchmark FJSP instances of different sizes and flexibility, together with four randomly generated large-scale FJSP instances, are used to test the performance of HWOA and verify its effectiveness, further demonstrating that HWOA is strongly competitive. In particular, HWOA improves on WOA by 36.73% on the BRdata instances and by 36.36% on the Kacem instances, with an average improvement rate of 16.19% across both sets. With reference to the experimental results, the maximum Pro of HWOA relative to OPSO, PPSO, KBACO, Heuristic, IWOA, and NIMASS is 44.44%, 24.19%, 11.54%, 20.97%, 12.04%, and 8.06%, respectively. In summary, HWOA has strong robustness. In future research, multiple objectives of the FJSP will be considered to better adapt to the complicated scenarios of enterprise production.

Author Contributions

Writing—review & editing, Z.Y.; Methodology, W.Y.; Writing—original draft, J.S.; Validation, Y.Y. (Yunhang Yao); Investigation, Y.Y. (Ying Yuan). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Project of China (2018YFB1700500), the Scientific and Technological Project of Henan Province (222102110095), and the Higher Learning Key Development Project of Henan Province (22A120007).

Data Availability Statement

Reasonable requests to access the datasets should be directed to [email protected].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, H.; Ihlow, J.; Lehmann, C. A genetic algorithm for flexible job-shop scheduling. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C), Detroit, MI, USA, 10–15 May 1999; pp. 1120–1125.
  2. Du, X.; Li, Z.; Xiong, W. Flexible Job Shop scheduling problem solving based on genetic algorithm with model constraints. In Proceedings of the 2008 IEEE International Conference on Industrial Engineering and Engineering Management, Singapore, 8–11 December 2008; pp. 1239–1243.
  3. Bowman, E.H. The schedule-sequencing problem. Oper. Res. 1959, 7, 621–624.
  4. Wu, X.; Wu, S. An elitist quantum-inspired evolutionary algorithm for the flexible job-shop scheduling problem. J. Intell. Manuf. 2017, 28, 1441–1457.
  5. Brucker, P.; Schlie, R. Job-shop scheduling with multi-purpose machines. Computing 1990, 45, 369–375.
  6. Nouiri, M.; Bekrar, A.; Jemai, A.; Niar, S.; Ammari, A.C. An effective and distributed particle swarm optimization algorithm for flexible job-shop scheduling problem. J. Intell. Manuf. 2015, 29, 603–615.
  7. Huang, X.; Chen, S.; Zhou, T.; Sun, Y. A new neighborhood structure for solving the flexible job-shop scheduling problem. Syst. Eng.-Theory Pract. 2021, 41, 2367–2378.
  8. Xue, L. Block structure neighborhood search genetic algorithm for job-shop scheduling. Comput. Integr. Manuf. Syst. 2021, 27, 2848–2857.
  9. Driss, I.; Mouss, K.N.; Laggoun, A. A new genetic algorithm for flexible job-shop scheduling problems. J. Mech. Sci. Technol. 2015, 29, 1273–1281.
  10. Jiang, T.; Zhang, C. Application of Grey Wolf Optimization for Solving Combinatorial Problems: Job Shop and Flexible Job Shop Scheduling Cases. IEEE Access 2018, 6, 26231–26240.
  11. Liang, X.; Huang, M.; Ning, T. Flexible job shop scheduling based on improved hybrid immune algorithm. J. Ambient Intell. Humaniz. Comput. 2016, 9, 165–171.
  12. Caldeira, R.H.; Gnanavelbabu, A. Solving the flexible job shop scheduling problem using an improved Jaya algorithm. Comput. Ind. Eng. 2019, 137, 106064.
  13. Wu, M.; Yang, D.; Zhou, B.; Yang, Z.; Liu, T.; Li, L.; Wang, Z.; Hu, K. Adaptive population nsga-iii with dual control strategy for flexible job shop scheduling problem with the consideration of energy consumption and weight. Machines 2021, 9, 344.
  14. Zhang, G.; Gao, L.; Shi, Y. An effective genetic algorithm for the flexible job-shop scheduling problem. Expert Syst. Appl. 2011, 38, 3563–3573.
  15. Kacem, I.; Hammadi, S.; Borne, P. Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems. IEEE Trans. Syst. Man Cybern. Part C 2002, 32, 1–13.
  16. Li, X.; Gao, L. An effective hybrid genetic algorithm and tabu search for flexible job shop scheduling problem. Int. J. Prod. Econ. 2016, 174, 93–110.
  17. Zhang, S.; Du, H.; Borucki, S.; Jin, S.; Hou, T.; Li, Z. Dual resource constrained flexible job shop scheduling based on improved quantum genetic algorithm. Machines 2021, 9, 108.
  18. Singh, M.R.; Mahapatra, S.S. A quantum behaved particle swarm optimization for flexible job shop scheduling. Comput. Ind. Eng. 2016, 93, 36–44.
  19. Wang, L.; Cai, J.; Li, M.; Liu, Z. Flexible Job Shop Scheduling Problem Using an Improved Ant Colony Optimization. Sci. Program. 2017, 2017, 9016303.
  20. Cruz-Chávez, M.A.; Martínez-Rangel, M.G.; Cruz-Rosales, M.H. Accelerated simulated annealing algorithm applied to the flexible job shop scheduling problem. Int. Trans. Oper. Res. 2017, 24, 1119–1137.
  21. Chenyang, G.; Yuelin, G.; Shanshan, L. Improved simulated annealing algorithm for flexible job shop scheduling problems. In Proceedings of the 2016 Chinese Control and Decision Conference (CCDC), Yinchuan, China, 28–30 May 2016; pp. 2191–2196.
  22. Vijaya Lakshmi, A.; Mohanaiah, P. WOA-TLBO: Whale optimization algorithm with Teaching-learning-based optimization for global optimization and facial emotion recognition. Appl. Soft Comput. 2021, 110, 107623.
  23. Medani, K.b.O.; Sayah, S.; Bekrar, A. Whale optimization algorithm based optimal reactive power dispatch: A case study of the Algerian power system. Electr. Power Syst. Res. 2018, 163, 696–705.
  24. Huang, X.; Wang, R.; Zhao, X.; Hu, K. Aero-engine performance optimization based on whale optimization algorithm. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017; pp. 11437–11441.
  25. Yan, Q.; Wu, W.; Wang, H. Deep Reinforcement Learning for Distributed Flow Shop Scheduling with Flexible Maintenance. Machines 2022, 10, 210.
  26. Dao, T.-K.; Pan, T.-S.; Pan, J.-S. A multi-objective optimal mobile robot path planning based on whale optimization algorithm. In Proceedings of the 2016 IEEE 13th International Conference on Signal Processing (ICSP), Chengdu, China, 6–10 November 2016; pp. 337–342.
  27. Oliva, D.; Abd El Aziz, M.; Hassanien, A.E. Parameter estimation of photovoltaic cells using an improved chaotic whale optimization algorithm. Appl. Energy 2017, 200, 141–154.
  28. Liang, R.; Chen, Y.; Zhu, R. A novel fault diagnosis method based on the KELM optimized by whale optimization algorithm. Machines 2022, 10, 93.
  29. Abdel-Basset, M.; Manogaran, G.; El-Shahat, D.; Mirjalili, S. A hybrid whale optimization algorithm based on local search strategy for the permutation flow shop scheduling problem. Future Gener. Comput. Syst. 2018, 85, 129–145.
  30. Liu, M.; Yao, X.; Li, Y. Hybrid whale optimization algorithm enhanced with Lévy flight and differential evolution for job shop scheduling problems. Appl. Soft Comput. 2020, 87, 105954.
  31. Luan, F.; Cai, Z.; Wu, S.; Liu, S.Q.; He, Y. Optimizing the Low-Carbon Flexible Job Shop Scheduling Problem with Discrete Whale Optimization Algorithm. Mathematics 2019, 7, 688.
  32. Yazdani, M.; Amiri, M.; Zandieh, M. Flexible job-shop scheduling with parallel variable neighborhood search algorithm. Expert Syst. Appl. 2010, 37, 678–687.
  33. Hua, L.; Wang, Y. Application of Number Theory in Approximate Analysis; Science Press: Beijing, China, 1978; p. 248.
  34. Lv, H.; Feng, Z.; Wang, X.; Zhou, W.; Chen, B. Structural damage identification based on hybrid whale annealing algorithm and sparse regularization. J. Vib. Shock. 2021, 40, 85–91.
  35. Yang, W.; Yang, Z.; Chen, Y.; Peng, Z. Modified Whale Optimization Algorithm for Multi-Type Combine Harvesters Scheduling. Machines 2022, 10, 64.
  36. Brandimarte, P. Routing and scheduling in a flexible job shop by tabu search. Ann. Oper. Res. 1993, 41, 157–183.
  37. Ding, H.; Gu, X. Improved particle swarm optimization algorithm based novel encoding and decoding schemes for flexible job shop scheduling problem. Comput. Oper. Res. 2020, 121, 104951.
  38. Xing, L.-N.; Chen, Y.-W.; Wang, P.; Zhao, Q.-S.; Xiong, J. A Knowledge-Based Ant Colony Optimization for Flexible Job Shop Scheduling Problems. Appl. Soft Comput. 2010, 10, 888–896.
  39. Ziaee, M. A heuristic algorithm for solving flexible job shop scheduling problem. Int. J. Adv. Manuf. Technol. 2013, 71, 519–528.
  40. Luan, F.; Cai, Z.; Wu, S.; Jiang, T.; Li, F.; Yang, J. Improved Whale Algorithm for Solving the Flexible Job Shop Scheduling Problem. Mathematics 2019, 7, 384.
  41. Xiong, W.; Fu, D. A new immune multi-agent system for the flexible job shop scheduling problem. J. Intell. Manuf. 2015, 29, 857–873.
  42. Jiang, T. Flexible job shop scheduling problem with hybrid grey wolf optimization algorithm. Control Decis. 2018, 33, 503–508.
  43. Phuang, A. The flower pollination algorithm with disparity count process for scheduling problem. In Proceedings of the 2017 9th International Conference on Information Technology and Electrical Engineering (ICITEE), Phuket, Thailand, 12–13 October 2017; pp. 1–5.
  44. Demir, Y.; İşleyen, S.K. Evaluation of mathematical models for flexible job-shop scheduling problems. Appl. Math. Model. 2013, 37, 977–988.
  45. Feng, Y.; Liu, M.; Yang, Z.; Feng, W.; Yang, D. A Grasshopper Optimization Algorithm for the Flexible Job Shop Scheduling Problem. In Proceedings of the 2020 35th Youth Academic Annual Conference of Chinese Association of Automation (YAC), Zhanjiang, China, 16–18 October 2020; pp. 873–877.
  46. Bagheri, A.; Zandieh, M.; Mahdavi, I.; Yazdani, M. An artificial immune algorithm for the flexible job-shop scheduling problem. Future Gener. Comput. Syst. 2010, 26, 533–541.
  47. Teekeng, W.; Thammano, A.; Unkaw, P.; Kiatwuthiamorn, J. A new algorithm for flexible job-shop scheduling problem based on particle swarm optimization. Artif. Life Robot. 2016, 21, 18–23.
  48. Hurink, J.; Jurisch, B.; Thole, M. Tabu search for the job-shop scheduling problem with multi-purpose machines. Oper.-Res.-Spektrum 1994, 15, 205–215.
  49. Wei, F.; Cao, C.; Zhang, H. An Improved Genetic Algorithm for Resource-Constrained Flexible Job-Shop Scheduling. Int. J. Simul. Model. 2021, 20, 201–211.
Figure 1. Effect on location selection by A value.
Figure 2. Spiral updating position.
Figure 3. The Flowchart of Basic WOA.
Figure 4. Number of operations for each job.
Figure 5. Schematic of encoding.
Figure 6. (a) Scatter plot of random distribution; (b) Scatter plot of good point set.
Figure 7. Comparison of before and after improvement of convergence factor a .
Figure 8. Parental machine genes.
Figure 9. Machine gene after mutation.
Figure 10. (a) Primitive Gantt chart; (b) Variant Gantt chart.
Figure 11. (a) Primitive Operation; (b) Disturbance Operation.
Figure 12. Original core operation Gantt chart.
Figure 13. Gantt chart after across-machine gene reinforcement of core operation.
Figure 14. (a) Sphere 30−dimension Convergence Curve; (b) Ackley 30−dimension Convergence Curve; (c) Sumsquare 50−dimension Convergence Curve; (d) Schwefel2.22 50−dimension Convergence Curve. (e) Rosenbrock 100−dimension Convergence Curve. (f) Levy 100−dimension Convergence Curve.
Figure 15. MK01 strategy comparison Convergence Curve.
Figure 16. Comparison of mean values of four algorithms for BRdata instance.
Figure 17. Kacem03 Gantt chart.
Figure 18. MK10 Gantt chart.
Figure 19. BRdata instances of seven algorithms box plot.
Figure 20. (a) Ra01 Convergence Curve; (b) Ra02 Convergence Curve; (c) Ra03 Convergence Curve; (d) Ra04 Convergence Curve.
Figure 21. (a) Ra01 Gantt chart; (b) Ra02 Gantt chart; (c) Ra03 Gantt chart; (d) Ra04 Gantt chart.
Table 1. Notations and abbreviations used in the article.
Symbol | Description
n | Total number of jobs
m | Total number of machines
O | Total number of operations
i | Job serial number, i ∈ {1, 2, …, n}
li | Number of operations for each job
j | Operation serial number, j ∈ {1, 2, …, O}
k | Machine serial number, k ∈ {1, 2, …, m}
Ek | Number of operations assigned to machine k
Oi,j | The jth operation of job i
Mi,j | The set of machines available for the jth operation of job i
Pi,j,k | The processing time required for the jth operation of job i on machine k
Si,j | The start time of the jth operation of job i
Ci,j | The end time of the jth operation of job i
Ci | Completion time of job i
Cmax | Makespan
xi,j,k | Binary decision variable that specifies whether the jth operation of job i is processed on machine k
OS | Operation sequence
MS | Machine sequence
len | The length of the core path set composed of core operations
GPS | Good point set initialization
NCF | Nonlinear convergence factor
MNS | Multi-neighborhood structure
DRM | Diversity reception mechanism
MRPD | Self-deviation percentage
Pro | Relative lift percentage
LB | Lower bound of test instances
UB | Upper bound of test instances
RPD | Deviation percentage
Table 2. 3 × 4 Instance of T-FJSP.
Job | Operation | M1 | M2 | M3 | M4
i1 | O11 | 6 | 3 | 7 | 3
i1 | O12 | 3 | 2 | 6 | 1
i2 | O21 | 5 | 1 | 6 | 4
i2 | O22 | 3 | 6 | 3 | 7
i3 | O31 | 2 | 7 | 1 | 2
i3 | O32 | 7 | 3 | 4 | 4
Table 3. 3 × 4 Instance of P-FJSP.
Job | Operation | M1 | M2 | M3 | M4
i1 | O11 | - | 3 | - | 3
i1 | O12 | 3 | 2 | - | 1
i2 | O21 | - | 1 | - | 4
i2 | O22 | 3 | - | 3 | -
i3 | O31 | 2 | 5 | 1 | -
i3 | O32 | - | 3 | 4 | -
Table 4. 3 × 3 Instance of P-FJSP.
Job | Operation | M1 | M2 | M3
i1 | O11 | - | 3 | 3
i1 | O12 | 3 | 2 | 1
i2 | O21 | 1 | 4
i2 | O22 | 3 | - | 3
i3 | O31 | 2 | 5 | 1
i3 | O32 | - | 3 | 4
Table 5. Details of benchmark functions.
Name | Function | C | Search Range | Min
Sphere | f1(x) = Σ_{i=1}^{D} x_i^2 | US | [−100, 100] | 0
Sumsquare | f2(x) = Σ_{i=1}^{D} i·x_i^2 | US | [−10, 10] | 0
Schwefel2.22 | f3(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i| | UN | [−10, 10] | 0
Rosenbrock | f4(x) = Σ_{i=1}^{D−1} [100(x_{i+1} − x_i^2)^2 + (1 − x_i)^2] | UN | [−5, 10] | 0
Rastrigin | f5(x) = Σ_{i=1}^{D} [x_i^2 − 10cos(2πx_i)] + 10D | MS | [−5.12, 5.12] | 0
Ackley | f6(x) = −20exp(−0.2√((1/D)Σ_{i=1}^{D} x_i^2)) − exp((1/D)Σ_{i=1}^{D} cos(2πx_i)) + 20 + e | MN | [−32, 32] | 0
Levy | f7(x) = sin^2(3πx_1) + Σ_{i=1}^{D−1} (x_i − 1)^2[1 + sin^2(3πx_{i+1})] + (x_D − 1)[1 + sin^2(3πx_D)] | MN | [−10, 10] | 0
Table 6. Comparison of results for 30-dimension benchmark functions.
Algorithm | Stat | f1 | f2 | f3 | f4 | f5 | f6 | f7
HWOA | Min | 3.42 × 10^−132 | 6.04 × 10^−132 | 2.61 × 10^−78 | 2.62 × 10^1 | 0 | 8.88 × 10^−16 | 2.68 × 10^−2
HWOA | Mean | 2.22 × 10^−120 | 4.45 × 10^−120 | 2.29 × 10^−71 | 2.68 × 10^1 | 0 | 3.14 × 10^−15 | 1.40 × 10^−1
HWOA | Std | 1.13 × 10^−119 | 2.41 × 10^−119 | 7.84 × 10^−71 | 2.36 × 10^−1 | 0 | 1.98 × 10^−15 | 1.08 × 10^−1
WOA | Mean | 1.44 × 10^−72 | 4.41 × 10^−78 | 1.17 × 10^−50 | 2.79 × 10^1 | 1.89 × 10^−15 | 3.85 × 10^−15 | 5.19 × 10^−1
WOA | Std | 7.22 × 10^−72 | 1.58 × 10^−77 | 2.77 × 10^−50 | 4.79 × 10^−1 | 1.04 × 10^−14 | 2.48 × 10^−15 | 2.08 × 10^−1
WOA | Sig | + | + | + | + | + | = | +
MFO | Mean | 2.34 × 10^3 | 6.04 × 10^2 | 3.64 × 10^1 | 5.35 × 10^6 | 1.59 × 10^2 | 1.46 × 10^1 | 2.24 × 10^2
MFO | Std | 5.04 × 10^3 | 8.47 × 10^2 | 1.99 × 10^1 | 2.03 × 10^7 | 4.07 × 10^1 | 7.66 | 1.09 × 10^3
MFO | Sig | + | + | + | + | + | + | +
GWO | Mean | 1.65 × 10^−27 | 2.44 × 10^−28 | 1.08 × 10^−16 | 2.70 × 10^1 | 2.88 | 1.05 × 10^−13 | 6.53 × 10^−1
GWO | Std | 3.22 × 10^−27 | 3.80 × 10^−28 | 6.52 × 10^−17 | 7.85 × 10^−1 | 4.25 | 1.97 × 10^−14 | 1.98 × 10^−1
GWO | Sig | + | + | + | + | + | + | +
SCA | Mean | 1.11 × 10^1 | 2.12 | 2.23 × 10^−2 | 6.33 × 10^4 | 2.64 × 10^1 | 1.45 × 10^1 | 1.13 × 10^5
SCA | Std | 3.32 × 10^1 | 4.69 | 3.06 × 10^−2 | 1.34 × 10^5 | 3.38 × 10^1 | 8.95 | 2.79 × 10^5
SCA | Sig | + | + | + | + | + | + | +
MVO | Mean | 1.31 × 10 | 1.40 | 1.25 × 10^1 | 4.76 × 10^2 | 1.27 × 10^2 | 1.54 | 2.19 × 10^−1
MVO | Std | 3.71 × 10^−1 | 1.55 | 3.48 × 10^1 | 7.71 × 10^2 | 3.37 × 10^1 | 4.48 × 10^−1 | 1.57 × 10^−1
MVO | Sig | + | + | + | + | + | + | +
DA | Mean | 1.69 × 10^3 | 2.39 × 10^2 | 1.60 × 10^1 | 3.10 × 10^5 | 1.67 × 10^2 | 1.09 × 10^1 | 3.23 × 10^5
DA | Std | 6.59 × 10^2 | 1.95 × 10^2 | 5.52 | 3.06 × 10^5 | 3.77 × 10^1 | 1.90 | 3.98 × 10^5
DA | Sig | + | + | + | + | + | + | +
SSA | Mean | 4.79 × 10^−7 | 1.83 | 1.46 | 1.69 × 10^2 | 5.04 × 10^1 | 2.75 | 1.37 × 10^1
SSA | Std | 1.14 × 10^−6 | 1.54 | 1.02 | 2.63 × 10^2 | 1.38 × 10^1 | 6.48 × 10^−1 | 1.54 × 10^1
SSA | Sig | + | + | + | + | + | + | +
The best results are highlighted in boldface.
Table 7. Comparison of results for 50-dimension benchmark functions.
Algorithm | Metric | f1 | f2 | f3 | f4 | f5 | f6 | f7
HWOA | Min | 0 | 0 | 2.17 × 10^−310 | 2.34 × 10^−2 | 0 | 8.88 × 10^−16 | 9.69 × 10^−4
HWOA | Mean | 0 | 0 | 4.92 × 10^−295 | 4.76 × 10^−1 | 0 | 3.02 × 10^−15 | 7.16 × 10^−2
HWOA | Std | 0 | 0 | 0 | 4.90 × 10^−1 | 0 | 2.57 × 10^−15 | 1.01 × 10^−1
WOA | Mean | 9.86 × 10^−305 | 2.47 × 10^−300 | 2.26 × 10^−209 | 4.67 × 10^1 | 1.89 × 10^−15 | 4.32 × 10^−15 | 2.00 × 10^−1
WOA | Std | 0 | 0 | 0 | 6.10 × 10^−1 | 1.04 × 10^−14 | 2.72 × 10^−15 | 1.33 × 10^−1
WOA | sig | + | + | + | + | + | = | +
MFO | Mean | 8.00 × 10^3 | 2.19 × 10^3 | 6.97 × 10^1 | 1.07 × 10^7 | 2.96 × 10^2 | 1.89 × 10^1 | 2.73 × 10^7
MFO | Std | 8.47 × 10^3 | 1.93 × 10^3 | 3.59 × 10^1 | 2.77 × 10^7 | 4.65 × 10^1 | 2.85 × 10^0 | 1.04 × 10^8
MFO | sig | + | + | + | + | + | + | +
GWO | Mean | 1.70 × 10^−91 | 7.22 × 10^−92 | 1.28 × 10^−53 | 4.68 × 10^1 | 1.89 × 10^−15 | 1.58 × 10^−14 | 1.69 × 10^0
GWO | Std | 4.06 × 10^−91 | 2.82 × 10^−91 | 1.73 × 10^−53 | 7.93 × 10^−1 | 1.04 × 10^−14 | 2.54 × 10^−15 | 2.88 × 10^−1
GWO | sig | + | + | + | + | + | + | +
SCA | Mean | 1.58 × 10^−1 | 9.18 × 10^−2 | 1.45 × 10^−5 | 2.81 × 10^5 | 4.58 × 10^1 | 1.73 × 10^1 | 3.67 × 10^5
SCA | Std | 3.33 × 10^−1 | 2.64 × 10^−1 | 4.26 × 10^−5 | 1.19 × 10^6 | 3.91 × 10^1 | 7.02 × 10^0 | 1.29 × 10^6
SCA | sig | + | + | + | + | + | + | +
MVO | Mean | 6.06 × 10^−1 | 3.00 × 10^0 | 1.47 × 10^1 | 4.51 × 10^2 | 2.25 × 10^2 | 1.61 × 10^0 | 1.98 × 10^−1
MVO | Std | 1.54 × 10^−1 | 2.44 × 10^0 | 4.47 × 10^1 | 6.51 × 10^2 | 4.70 × 10^1 | 5.53 × 10^−1 | 1.38 × 10^−1
MVO | sig | + | + | + | + | + | + | +
DA | Mean | 2.78 × 10^3 | 5.71 × 10^2 | 2.64 × 10^1 | 4.02 × 10^5 | 2.98 × 10^2 | 9.83 × 10^0 | 1.61 × 10^5
DA | Std | 1.17 × 10^3 | 2.44 × 10^2 | 8.71 × 10^0 | 2.53 × 10^5 | 5.00 × 10^1 | 1.16 × 10^0 | 1.65 × 10^5
DA | sig | + | + | + | + | + | + | +
SSA | Mean | 3.08 × 10^−8 | 7.72 × 10^−1 | 2.34 × 10^0 | 1.04 × 10^2 | 1.01 × 10^2 | 2.75 × 10^0 | 3.99 × 10^1
SSA | Std | 5.12 × 10^−9 | 1.01 × 10^0 | 1.40 × 10^0 | 7.38 × 10^1 | 2.44 × 10^1 | 6.67 × 10^−1 | 2.93 × 10^1
SSA | sig | + | + | + | + | + | + | +
The best results are highlighted in boldface.
Table 8. Comparison of results for 100-dimension benchmark functions.
Algorithm | Metric | f1 | f2 | f3 | f4 | f5 | f6 | f7
HWOA | Min | 0 | 0 | 0 | 2.22 × 10^−2 | 0 | 8.88 × 10^−16 | 4.47 × 10^−4
HWOA | Mean | 0 | 0 | 0 | 3.10 × 10^−1 | 0 | 2.31 × 10^−15 | 1.96 × 10^−2
HWOA | Std | 0 | 0 | 0 | 2.40 × 10^−1 | 0 | 2.40 × 10^−15 | 3.48 × 10^−2
WOA | Mean | 0 | 0 | 0 | 9.50 × 10^1 | 0 | 3.73 × 10^−15 | 2.76 × 10^−2
WOA | Std | 0 | 0 | 0 | 3.54 × 10^−1 | 0 | 2.54 × 10^−15 | 3.94 × 10^−2
WOA | sig | = | = | = | + | = | + | +
MFO | Mean | 1.95 × 10^4 | 1.47 × 10^4 | 1.25 × 10^2 | 4.58 × 10^7 | 6.40 × 10^2 | 1.99 × 10^1 | 1.39 × 10^8
MFO | Std | 1.61 × 10^4 | 7.70 × 10^3 | 5.56 × 10^1 | 6.95 × 10^7 | 7.41 × 10^1 | 1.39 × 10^−1 | 2.45 × 10^8
MFO | sig | + | + | + | + | + | + | +
GWO | Mean | 1.98 × 10^−265 | 8.93 × 10^−266 | 3.26 × 10^−156 | 9.72 × 10^1 | 0 | 1.51 × 10^−14 | 5.71
GWO | Std | 0 | 0 | 0 | 7.60 × 10^−1 | 0 | 1.87 × 10^−15 | 3.36 × 10^−1
GWO | sig | + | + | + | + | = | + | +
SCA | Mean | 1.84 × 10^2 | 5.11 × 10^1 | 4.13 × 10^−7 | 4.21 × 10^6 | 1.03 × 10^2 | 1.94 × 10^1 | 2.10 × 10^7
SCA | Std | 2.70 × 10^2 | 1.19 × 10^2 | 1.82 × 10^−6 | 5.18 × 10^6 | 6.26 × 10^1 | 4.38 | 2.58 × 10^7
SCA | sig | + | + | + | + | + | + | +
MVO | Mean | 6.05 × 10^−1 | 1.04 × 10^1 | 7.14 × 10^4 | 4.12 × 10^2 | 5.81 × 10^2 | 4.07 | 1.13
MVO | Std | 1.10 × 10^−1 | 5.81 | 3.90 × 10^5 | 6.34 × 10^2 | 8.02 × 10^1 | 6.19 | 2.73
MVO | sig | + | + | + | + | + | + | +
DA | Mean | 3.08 × 10^3 | 1.68 × 10^3 | 4.46 × 10^1 | 4.45 × 10^5 | 6.17 × 10^2 | 8.11 | 1.17 × 10^5
DA | Std | 1.36 × 10^3 | 7.42 × 10^2 | 1.66 × 10^1 | 3.44 × 10^5 | 1.49 × 10^2 | 1.93 | 2.29 × 10^5
DA | sig | + | + | + | + | + | + | +
SSA | Mean | 8.03 × 10^−8 | 1.49 | 6.24 | 1.81 × 10^2 | 2.06 × 10^2 | 3.78 | 1.27 × 10^2
SSA | Std | 8.91 × 10^−9 | 1.79 | 3.27 | 1.61 × 10^2 | 4.48 × 10^1 | 7.02 × 10^−1 | 2.63 × 10^1
SSA | sig | + | + | + | + | + | + | +
The best results are highlighted in boldface.
Table 9. BRdata Instance Verification.
BRdata | n × m | LB | UB | WOA Cmin | WOA Cmean | HWOA-1 Cmin | HWOA-1 Cmean | HWOA-2 Cmin | HWOA-2 Cmean | HWOA Cmin | HWOA Cmean
MK01 | 10 × 6 | 36 | 42 | 42 | 45.7 | 42 | 44.3 | 40 | 41.8 | 40 | 41.3
MK02 | 10 × 6 | 24 | 32 | 33 | 39.9 | 33 | 36.5 | 27 | 28.4 | 26 | 27.5
MK03 | 15 × 8 | 204 | 211 | 231 | 235.1 | 223 | 228.3 | 204 | 213.1 | 204 | 207.4
MK04 | 15 × 8 | 48 | 81 | 73 | 76.3 | 73 | 74.7 | 64 | 67.3 | 62 | 63.8
MK05 | 15 × 4 | 168 | 186 | 175 | 181.5 | 175 | 181.9 | 173 | 177.3 | 173 | 174.5
MK06 | 10 × 15 | 33 | 86 | 98 | 101.7 | 98 | 100.7 | 65 | 69.5 | 62 | 65.4
MK07 | 20 × 5 | 133 | 157 | 154 | 158.7 | 155 | 160.7 | 145 | 147.6 | 143 | 145.2
MK08 | 20 × 10 | 523 | 523 | 531 | 541.8 | 531 | 539.3 | 523 | 526.1 | 523 | 523
MK09 | 20 × 10 | 299 | 369 | 379 | 392.5 | 383 | 391.5 | 312 | 327.9 | 310 | 319.9
MK10 | 20 × 15 | 165 | 296 | 271 | 302.4 | 265 | 296.5 | 224 | 233.3 | 216 | 220.3
The best results are highlighted in boldface.
Table 10. Comparison of four algorithms.
Instance (Kacem, BRdata) | n × m | LB | WOA MPRD | WOA Pro% | HWOA-1 MPRD | HWOA-1 Pro% | HWOA-2 MPRD | HWOA-2 Pro% | HWOA MPRD | HWOA Pro%
Kac01 | 4 × 5 | 11 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Kac02 | 8 × 8 | 14 | 57.14 | 0.00 | 14.28 | 27.27 | 0.00 | 36.36 | 0.00 | 36.36
Kac03 | 10 × 7 | 11 | 27.27 | 0.00 | 27.27 | 0.00 | 0.00 | 21.42 | 0.00 | 21.42
Kac04 | 10 × 10 | 7 | 14.28 | 0.00 | 0.00 | 12.50 | 0.00 | 12.50 | 0.00 | 12.50
Kac05 | 15 × 10 | 11 | 27.27 | 0.00 | 27.27 | 0.00 | 9.09 | 14.28 | 0.00 | 21.42
MK01 | 10 × 6 | 36 | 5.00 | 0.00 | 5.00 | 0.00 | 0.00 | 4.76 | 0.00 | 4.76
MK02 | 10 × 6 | 24 | 26.92 | 0.00 | 26.92 | 0.00 | 3.84 | 18.18 | 0.00 | 21.21
MK03 | 15 × 8 | 204 | 13.23 | 0.00 | 9.31 | 3.46 | 0.00 | 11.68 | 0.00 | 11.68
MK04 | 15 × 8 | 48 | 17.74 | 0.00 | 17.74 | 0.00 | 3.22 | 12.32 | 0.00 | 15.06
MK05 | 15 × 4 | 168 | 1.15 | 0.00 | 1.15 | 0.00 | 0.00 | 1.14 | 0.00 | 1.14
MK06 | 10 × 15 | 33 | 58.06 | 0.00 | 58.06 | 0.00 | 4.83 | 33.67 | 0.00 | 36.73
MK07 | 20 × 5 | 133 | 7.69 | 0.00 | 8.39 | −0.64 | 1.39 | 5.84 | 0.00 | 7.14
MK08 | 20 × 10 | 523 | 1.53 | 0.00 | 1.53 | 0.00 | 0.00 | 15.06 | 0.00 | 15.06
MK09 | 20 × 10 | 299 | 22.25 | 0.00 | 19.06 | −1.05 | 0.64 | 17.67 | 0.00 | 18.20
MK10 | 20 × 15 | 165 | 25.46 | 0.00 | 22.68 | 2.21 | 3.70 | 17.34 | 0.00 | 20.29
Mean | - | - | 20.33 | 0.00 | 15.91 | 2.95 | 1.78 | 14.81 | 0.00 | 16.19
Table 11. Comparison of different algorithms in BRdata.
BRdata | n × m | OPSO Cmin | PPSO Cmin | KBACO Cmin | Heuristic Cmin | IWOA Cmin | NIMASS Cmin | HWOA Cmin
MK01 | 10 × 6 | 41 | 40 | 39 | 42 | 40 | 40 | 40
MK02 | 10 × 6 | 26 | 29 | 29 | 28 | 26 | 28 | 26
MK03 | 15 × 8 | 207 | 204 | 204 | 204 | 204 | 204 | 204
MK04 | 15 × 8 | 65 | 66 | 65 | 75 | 60 | 65 | 62
MK05 | 15 × 4 | 171 | 175 | 173 | 179 | 175 | 177 | 173
MK06 | 10 × 15 | 61 | 77 | 67 | 69 | 63 | 67 | 62
MK07 | 20 × 5 | 173 | 145 | 144 | 149 | 144 | 144 | 143
MK08 | 20 × 10 | 523 | 523 | 523 | 555 | 523 | 523 | 523
MK09 | 20 × 10 | 307 | 320 | 311 | 342 | 339 | 312 | 310
MK10 | 20 × 15 | 312 | 239 | 229 | 242 | 242 | 229 | 216
Table 12. RPD value validation for comparison algorithm in BRdata.
BRdata | n × m | OPSO RPD | PPSO RPD | KBACO RPD | Heuristic RPD | IWOA RPD | NIMASS RPD | HWOA RPD
MK01 | 10 × 6 | 11.11 | 11.11 | 8.33 | 16.67 | 11.11 | 11.11 | 11.11
MK02 | 10 × 6 | 12.5 | 20.83 | 20.86 | 16.67 | 8.33 | 20.86 | 8.33
MK03 | 15 × 8 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
MK04 | 15 × 8 | 29.16 | 37.50 | 35.41 | 56.25 | 25.00 | 35.41 | 29.16
MK05 | 15 × 4 | 5.95 | 4.16 | 2.97 | 6.54 | 4.16 | 5.35 | 2.97
MK06 | 10 × 15 | 136.36 | 133.33 | 103.03 | 109.09 | 90.90 | 103.03 | 87.87
MK07 | 20 × 5 | 10.52 | 9.02 | 8.27 | 12.03 | 8.27 | 8.27 | 7.51
MK08 | 20 × 10 | 0.00 | 0.00 | 0.00 | 6.12 | 0.00 | 0.00 | 0.00
MK09 | 20 × 10 | 14.04 | 7.02 | 4.01 | 14.38 | 13.37 | 4.34 | 3.67
MK10 | 20 × 15 | 50.91 | 44.84 | 38.78 | 46.66 | 46.66 | 38.78 | 30.90
Mean | - | 27.05 | 26.78 | 22.16 | 28.44 | 20.78 | 22.71 | 18.15
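The RPD figures in Tables 12, 14 and 16 are consistent with the standard relative percentage deviation of a best makespan from the instance lower bound, RPD = 100 · (Cmin − LB)/LB, truncated to two decimals (for instance, HWOA's 3.67 on MK09 follows from Cmin = 310 and LB = 299). A minimal sketch under that inferred definition:

```python
def rpd(c_min, lb):
    # Relative percentage deviation of a makespan from the instance lower bound,
    # truncated (not rounded) to two decimals, as the tables appear to do
    return int(100 * (c_min - lb) / lb * 100) / 100
```

As a check against Table 12: rpd(62, 33) gives 87.87 (HWOA on MK06) and rpd(216, 165) gives 30.90 (HWOA on MK10).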
Table 13. Comparison of different algorithms in Kacem.
Kacem | n × m | KBACO Cmin | HGWO Cmin | IWOA Cmin | EQEA Cmin | HWOA Cmin
Kac01 | 4 × 5 | 11 | 11 | 11 | 11 | 11
Kac02 | 8 × 8 | 14 | 14 | 14 | 14 | 14
Kac03 | 10 × 7 | 11 | 11 | 13 | 11 | 11
Kac04 | 10 × 10 | 7 | 7 | 7 | 7 | 7
Kac05 | 15 × 10 | 11 | 13 | 14 | 11 | 11
Table 14. RPD value validation for comparison algorithm in Kacem.
Kacem | n × m | KBACO RPD | HGWO RPD | IWOA RPD | EQEA RPD | HWOA RPD
Kac01 | 4 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Kac02 | 8 × 8 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Kac03 | 10 × 7 | 0.00 | 0.00 | 18.18 | 0.00 | 0.00
Kac04 | 10 × 10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Kac05 | 15 × 10 | 0.00 | 18.18 | 27.27 | 0.00 | 0.00
Table 15. Comparison of different algorithms in Fattahin.
Fattahin | n × m | LB | FPA Cmin | M2 Cmin | HA Cmin | GOA Cmin | AIA Cmin | EPSO Cmin | HWOA Cmin
SFJS01 | 2 × 2 | 66 | 66 | 66 | 66 | 66 | 66 | 66 | 66
SFJS02 | 2 × 2 | 107 | 107 | 107 | 107 | 107 | 107 | 107 | 107
SFJS03 | 3 × 2 | 221 | 221 | 221 | 221 | 221 | 221 | 221 | 221
SFJS04 | 3 × 2 | 355 | 355 | 355 | 355 | 355 | 355 | 355 | 355
SFJS05 | 3 × 2 | 119 | 119 | 119 | 119 | 119 | 119 | 119 | 119
SFJS06 | 3 × 3 | 320 | 320 | 320 | 320 | 320 | 320 | 320 | 320
SFJS07 | 3 × 5 | 397 | 397 | 397 | 397 | 397 | 397 | 397 | 397
SFJS08 | 3 × 4 | 253 | 253 | 253 | 253 | 253 | 253 | 253 | 253
SFJS09 | 3 × 3 | 210 | 210 | 210 | 210 | 210 | 210 | 210 | 210
SFJS10 | 4 × 5 | 516 | 516 | 516 | 516 | 533 | 516 | 516 | 516
MFJS01 | 5 × 6 | 396 | 469 | 468 | 468 | 469 | 468 | 468 | 468
MFJS02 | 5 × 7 | 396 | 446 | 446 | 446 | 457 | 448 | 446 | 446
MFJS03 | 6 × 7 | 396 | 470 | 466 | 466 | 538 | 468 | 466 | 466
MFJS04 | 7 × 7 | 496 | 554 | 564 | 554 | 610 | 554 | 554 | 554
MFJS05 | 7 × 7 | 414 | 516 | 514 | 514 | 613 | 527 | 514 | 514
MFJS06 | 8 × 7 | 469 | 636 | 634 | 634 | 721 | 635 | 634 | 634
MFJS07 | 8 × 7 | 619 | 879 | 928 | 879 | 1092 | 879 | 879 | 879
MFJS08 | 9 × 8 | 619 | 884 | - | 884 | 1095 | 884 | 884 | 884
- Means the result was not given by the author.
Table 16. RPD value validation for comparison algorithm in Fattahin.
Fattahin | n × m | LB | FPA RPD | M2 RPD | HA RPD | GOA RPD | AIA RPD | EPSO RPD | HWOA RPD
SFJS01 | 2 × 2 | 66 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
SFJS02 | 2 × 2 | 107 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
SFJS03 | 3 × 2 | 221 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
SFJS04 | 3 × 2 | 355 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
SFJS05 | 3 × 2 | 119 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
SFJS06 | 3 × 3 | 320 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
SFJS07 | 3 × 5 | 397 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
SFJS08 | 3 × 4 | 253 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
SFJS09 | 3 × 3 | 210 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
SFJS10 | 4 × 5 | 516 | 0.00 | 0.00 | 0.00 | 3.29 | 0.00 | 0.00 | 0.00
MFJS01 | 5 × 6 | 396 | 18.43 | 18.18 | 18.18 | 18.43 | 18.18 | 18.18 | 18.18
MFJS02 | 5 × 7 | 396 | 12.62 | 12.62 | 12.62 | 15.40 | 13.13 | 12.62 | 12.62
MFJS03 | 6 × 7 | 396 | 18.68 | 17.67 | 17.67 | 35.85 | 18.18 | 17.67 | 17.67
MFJS04 | 7 × 7 | 496 | 11.69 | 13.71 | 11.69 | 22.98 | 11.69 | 11.69 | 11.69
MFJS05 | 7 × 7 | 414 | 24.63 | 24.15 | 24.15 | 48.06 | 27.29 | 24.15 | 24.15
MFJS06 | 8 × 7 | 469 | 35.60 | 35.18 | 35.18 | 53.73 | 35.39 | 35.18 | 35.18
MFJS07 | 8 × 7 | 619 | 42.00 | 49.92 | 42.00 | 76.41 | 42.00 | 42.00 | 42.00
MFJS08 | 9 × 8 | 619 | 42.81 | - | 42.81 | 76.89 | 42.81 | 42.81 | 42.81
- Means the result was not given by the author.
Table 17. RPD value validation for comparison algorithm in Hurink (Vdata).
Vdata | N1-1000 RPD | N2-1000 RPD | IJA RPD | HWOA RPD
La01–La05 | 0.78 | 0.59 | 0.00 | 0.00
La06–La10 | 0.20 | 0.15 | 0.00 | 0.00
La11–La15 | 0.20 | 0.40 | 0.00 | 0.00
La16–La20 | 0.00 | 0.00 | 0.00 | 0.00
La21–La25 | 6.90 | 1.97 | 0.77 | 0.64
La26–La30 | 0.26 | 0.25 | 0.13 | 0.09
La31–La35 | 0.06 | 0.01 | 0.01 | 0.01
La36–La40 | 0.00 | 0.00 | 0.00 | 0.00
Mean | 1.050 | 0.421 | 0.114 | 0.093
Table 18. Comparison of different algorithms in Ra.
Ra | n × m | WOA Cmin | WOA Cmean | WOA CPU | MWOA Cmin | MWOA Cmean | MWOA CPU | IWOA Cmin | IWOA Cmean | IWOA CPU | HWOA Cmin | HWOA Cmean | HWOA CPU
Ra01 | 30 × 15 | 2524 | 2664.6 | 152.77 | 2344 | 2419.2 | 106.13 | 2273 | 2397.2 | 169.36 | 1751 | 1857.7 | 96.81
Ra02 | 45 × 15 | 3522 | 3691.2 | 203.04 | 3336 | 3417.5 | 114.91 | 3190 | 3354.1 | 223.03 | 2715 | 3939.2 | 116.45
Ra03 | 50 × 10 | 3269 | 3382.4 | 152.77 | 3090 | 3168.2 | 95.82 | 3007 | 3193.4 | 169.36 | 2734 | 2878.4 | 98.84
Ra04 | 60 × 15 | 4454 | 4660.6 | 283.36 | 4261 | 4318.2 | 153.72 | 3947 | 4231.9 | 310.01 | 3669 | 3836.3 | 137.17
The best results are highlighted in boldface.
Yang, W.; Su, J.; Yao, Y.; Yang, Z.; Yuan, Y. A Novel Hybrid Whale Optimization Algorithm for Flexible Job-Shop Scheduling Problem. Machines 2022, 10, 618. https://doi.org/10.3390/machines10080618