Article

Effective Heuristic Algorithms Solving the Jobshop Scheduling Problem with Release Dates

1 College of Software, Northeastern University, Shenyang 110819, China
2 Department of Business Administration, Cheng Shiu University, Kaohsiung 83347, Taiwan
3 Department of Statistics, Feng Chia University, Taichung 40724, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(8), 1221; https://doi.org/10.3390/math8081221
Submission received: 8 June 2020 / Revised: 22 July 2020 / Accepted: 23 July 2020 / Published: 25 July 2020
(This article belongs to the Section Mathematics and Computer Science)

Abstract:
The manufacturing industry reflects a country’s productivity level and occupies an important share in the national economies of developed countries. The jobshop scheduling (JSS) model originates from modern manufacturing, in which a number of tasks are executed individually on a series of processors following their preset processing routes. This study addresses a JSS problem with the criterion of minimizing total quadratic completion time (TQCT), where each task is available at its own release date. Constructive heuristic and meta-heuristic algorithms are introduced to handle instances of different scales, as the problem is NP-hard. Given that the shortest-processing-time (SPT)-based heuristic and the dense scheduling rule are effective for the TQCT criterion and the JSS problem, respectively, an innovative heuristic combining SPT and the dense scheduling rule is put forward to provide feasible solutions for large-scale instances. A preemptive single-machine-based lower bound is designed to estimate the optimal schedule and reveal the performance of the heuristic. The differential evolution algorithm is a population-based global search algorithm with the advantages of a simple structure, strong robustness, fast convergence, and easy implementation. Therefore, a hybrid discrete differential evolution (HDDE) algorithm is presented to obtain near-optimal solutions for medium-scale instances, where multi-point insertion and a local search scheme enhance the quality of final solutions. The superiority of the HDDE algorithm is highlighted by contrast experiments with population-based meta-heuristics, i.e., ant colony optimization (ACO), particle swarm optimization (PSO), and the genetic algorithm (GA). Numerical results on benchmark data show average gaps of 45.62, 63.38, and 188.46 between HDDE and ACO, PSO, and GA, respectively, which reveals the dominance of the proposed HDDE algorithm.

1. Introduction

Smart manufacturing has become a new competitive advantage in many countries. Therefore, governments have issued smart-manufacturing-related strategies, such as ‘Made in China 2025’ in China and ‘Industry 4.0’ in Germany. One goal of smart manufacturing is to improve customer satisfaction by meeting the requirement of customization, which is a common customer demand pattern in the smart era. A jobshop is a manufacturing system that makes products with bespoke productive processes. For example, a gear is a key part in automatic equipment such as transmissions. Precision gears are manufactured by computerized numerical control (CNC) machine tools. In accordance with product shape, CNC machine tools handle gear blanks with preset processing routes over time. The manufacturing process of gears can be abstracted as a jobshop scheduling (JSS) model where each job has its own release date. Akers and Friedman [1] were the first to formulate a four-machine JSS problem. Since then, considerable attention has been focused on this research area. A comprehensive survey of JSS problems can be found in Jain and Meeran [2] and Zhang et al. [3].
Generally, the optimal objectives of the JSS model are makespan, which minimizes maximum machine loads, or total completion time (TCT), which minimizes work-in-process inventory. Academically, these criteria are linearized with designated weight factors as a substitute for bi-objective optimization. However, errors in the weight factors result in a deviation between the linearization and the dual objectives. Cheng and Liu [4] indicated the advantage of the total k-power completion time (TKCT) criterion, which balances makespan and TCT. On the one hand, TKCT approaches makespan as k→∞ because only the maximum completion time dominates the objective value in this situation. On the other hand, TKCT reduces to TCT when k = 1. Therefore, the total quadratic completion time (TQCT) criterion is introduced as a trade-off between machine loads and work-in-process inventory.
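The limiting behaviour of TKCT can be checked numerically. The following sketch uses arbitrary completion times (not data from this paper) and shows the share of the maximum completion time in the TKCT value approaching 1 as k grows:

```python
# Share of the maximum completion time in the TKCT value sum(C_j^k):
# as k grows, the largest C_j dominates, so TKCT behaves like makespan^k.
C = [4, 7, 10]  # arbitrary completion times, for illustration only
for k in (1, 2, 5, 20):
    tkct = sum(c ** k for c in C)
    print(k, round(max(C) ** k / tkct, 4))  # share of the max term grows with k
```

At k = 1 the criterion is plain TCT; by k = 20 the maximum term already accounts for more than 99% of the sum.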
Garey et al. [5] reported the strong NP-hardness of the JSS TCT problem even for the two-machine case, indicating that the JSS TQCT problem with release dates is at least as difficult. Therefore, no polynomial-time algorithm can solve the studied problem unless P = NP. Constructive heuristic and meta-heuristic algorithms are designed to handle instances of different scales. The shortest-processing-time dense-schedule (SPT-DS) heuristic is proposed to achieve feasible solutions for large-scale instances. A preemptive single-machine-based lower bound is designed to estimate the optimal solution and reveal the performance of the heuristic. A hybrid discrete differential evolution (HDDE) algorithm is provided to achieve near-optimal solutions for medium-scale instances, where multi-point insertion and a local search scheme enhance the quality of final solutions. The superiority of the presented algorithms is confirmed through extensive contrast experiments.
Compared with the known literature, the contributions of this study mainly include the following four aspects:
  • A JSS model with the TQCT criterion is established, where each job is available at its own release date. This scheduling model simulates the production environment in which jobs arrive to the system over time.
  • An HDDE algorithm is presented to achieve high-quality schedules within a given time, where tri-point insertion in the crossover operator and a local search scheme with an exchange neighbourhood enhance the quality of final solutions.
  • An innovative heuristic algorithm combining SPT and dense scheduling rule is proposed, which provides dominant feasible solutions on large-scale instances.
  • A preemptive single-machine-based lower bound is proposed, which serves as an estimate of the optimal solution to evaluate the performance of approximate algorithms.
The remainder of the paper is arranged as follows. A literature survey is presented in Section 2. Section 3 establishes a mixed integer programming (MIP) model for the JSS problem. Section 4 proposes the SPT-DS heuristic algorithm and a new lower bound. Section 5 provides the HDDE algorithm to deal with medium-scale instances. Section 6 executes numerical experiments and reports the analysis results. Section 7 concludes the paper and indicates future research directions.

2. Literature Review

The JSS model is one of the most complex problems in combinatorial optimization [6]. Except for several polynomially solvable cases, most JSS problems are NP-complete [7]. Therefore, related research on JSS problems mainly focuses on the application of meta-heuristic algorithms to achieve near-optimal solutions.
The criterion for most single-objective optimizations is to minimize makespan. Khadwilard et al. [8] adopted the firefly algorithm and used a one-third fractional factorial experimental design to set parameters. Gao et al. [9] presented a hybrid particle-swarm tabu search (TS) algorithm, where particle swarm optimization (PSO) with a balanced strategy provides diverse and elite initial solutions for the TS algorithm. Qiu and Lau [10] proposed a novel hybrid algorithm integrating clonal selection, immune network, and PSO concepts. Wang and Duan [11] designed a hybrid biogeography-based optimization algorithm that introduced chaos theory and a strategy to explore and improve population diversity. Keesari and Rao [12] applied teaching-learning-based optimization (TLBO), in which the learner phase of the original TLBO is modified. Asadzadeh [13] addressed an agent-based local search genetic algorithm (GA). Peng et al. [14] presented a hybrid TS/path relinking algorithm with features including a path construction procedure based on distances between solutions and a special mechanism to select the reference solution. Kurdi [15] proposed a hybrid island model GA with a naturally inspired self-adaptation phase strategy, which struck a balance between diversification and intensification during search. Cheng et al. [16] presented the hybrid evolutionary algorithm (HEA), in which a TS procedure is incorporated into the framework of an evolutionary algorithm. HEA embraces several distinguishing features, such as a longest-common-sequence-based recombination operator and a similarity-and-quality-based replacement criterion for population updating. Dao et al. [17] applied the concept of parallel processing to the bat algorithm.
Some extended studies on JSS problems have been conducted to minimize makespan. Saidi-Mehrabad et al. [18] considered a specific production system constituted by a warehouse, a guide-path network, several machines, and the transportation between machines. They built an integrated mathematical model composed of the JSS and conflict-free routing problems for automated guided vehicles conveying jobs and applied a two-stage ant colony algorithm. Sundar et al. [19] discussed the JSS problem with the no-wait constraint and presented a hybrid artificial bee colony (HABC) algorithm, where the artificial bee colony algorithm coupled with a local search effectively coordinates the various components of HABC. Kuhpfahl and Bierwirth [20] considered the JSS problem with the total weighted tardiness objective and developed an approach based on disjunctive graphs to capture the neighborhood structure. They derived a structural classification of neighbourhood operators and some new analytical results. Kundakcı and Kulak [21] introduced efficient hybrid GA methodologies for the dynamic JSS problem. Ku and Beck [22] evaluated four MIP formulations for the classical JSS problem using three modern optimization software packages.
Some researchers have conducted studies on bi-objective optimization. Phanden et al. [23] applied a simulation-based GA algorithm with restart scheme to handle three special cases to minimize mean tardiness and makespan. Nguyen et al. [24] developed four multi-objective genetic programming-based hyper-heuristic methods for the automatic design of scheduling policies and simultaneous handling of multiple scheduling decisions, owing to the complexity of each scheduling decision as well as the interactions among different decisions. They proposed a diversified multi-objective co-operative coevolution method by which different multiple scheduling decisions can evolve in sub-populations. May et al. [25] merged and upgraded NSGA-II and SPEA-II to build a green GA to optimize productivity and environmental objectives. Salido et al. [26] focused on the JSS problem with machine speed scaling where variable-speed machines have different energy efficiencies to minimize makespan and energy use. They also designed a GA to model and solve the JSS problem. On the basis of the machine speed scaling framework, Zhang and Chiong [27] minimized the total weighted tardiness and the total energy consumption and combined a multi-objective GA incorporated with two problem-specific local improvement strategies.
The known studies on JSS problems largely concentrate on the makespan criterion for research convenience. However, in an industrial environment, the makespan criterion is too simple to offer insight into complex scheduling events. Therefore, this article focuses on the TQCT criterion, which can effectively balance the energy consumption of machines and the cost of work-in-process inventory.

3. MIP Model

In a JSS system, a series of m machines handles a number of n jobs following their own preset processing routes. Oi,j denotes the i-th operation of job j, and Qi,j denotes the processing of job j on machine i, i = 1, 2, …, m, j = 1, 2, …, n. The processing time of job j on machine i is denoted as pi,j, where pi,j = 0 indicates that job j does not visit machine i. Release date rj is the earliest time at which job j is available. Each machine processes a job at most once, and no re-circulation occurs in the system. The intermediate buffer between any adjacent machines has unlimited storage capacity. At any moment, each machine can process at most one job, and each job can access at most one machine. Preemption is prohibited, i.e., a started operation must continue until it is completed. The objective is to achieve a feasible schedule minimizing the total quadratic completion time.
The definitions of the symbols used in the MIP model are shown in Table 1.
Therefore, the following MIP model is established.
Minimize ∑_{j=1}^{n} C_j²
Subject to:
C_{i,j} + M(1 − y_{i,j}) ≥ r_j + p_{i,j},  i = 1, …, m, j = 1, …, n  (1)
∑_{j=0}^{n} z_{i,j,j} = 0,  i = 1, …, m  (2)
∑_{k=0}^{n} z_{i,j,k} = 1,  j = 0, …, n, i = 1, …, m  (3)
∑_{k=0}^{n} z_{i,k,j} = 1,  j = 0, …, n, i = 1, …, m  (4)
z_{i,k,j} + z_{i,j,k} ≤ 1,  i = 1, …, m, k = 0, …, n, j = 0, …, n  (5)
C_{l,j} − p_{l,j} + M(1 − a_{j,i,l}) ≥ C_{i,j},  i, l = 1, …, m, j = 1, …, n  (6)
C_{i,j} − C_{i,k} + M(1 − z_{i,k,j}) ≥ p_{i,j},  i = 1, …, m, k, j = 1, …, n  (7)
C_j ≥ C_{i,j},  i = 1, …, m, j = 1, …, n  (8)
a_{j,i,l} ∈ {0, 1},  z_{i,j,k} ∈ {0, 1},  C_{i,j} ≥ 0,  C_j ≥ 0,  i, l = 1, …, m, j, k = 1, …, n  (9)
Constraint (1) states that job j must be processed after its release date rj. Constraints (2) to (5) ensure the uniqueness of decision variable zi,j,k. Constraint (6) illustrates the relationship of completion time between two adjacent operations of the same job. Constraint (7) defines the relationship between the completion times of two adjacent jobs on the same machine. Constraint (8) explains the meaning of decision variable Cj, which is the maximum completion time of each job. Constraint (9) specifies the 0–1 variables and value ranges for the inputs.

4. SPT-DS Heuristic and Lower Bound

Given that the JSS problem is NP-complete, a heuristic algorithm can effectively achieve feasible solutions in a very short time for massive production settings, in which continuous production is more important than an optimal solution. Townsend [28] reported the optimality of the shortest processing time (SPT) rule for the single-machine TQCT problem. An SPT-based heuristic is proposed to handle the JSS TQCT problem with release dates. A preemptive single-machine lower bound, based on the optimality of the shortest remaining processing time (SRPT) rule for the preemptive single-machine TQCT problem, is presented to evaluate the performance of the heuristic [29].

4.1. SPT-DS Heuristic

Matrix T stores the processing route of each job. According to the precedence of operations in matrix T, all current operations are stored in set A, where a current operation is the immediate successor of the operation that has just been processed for a given job. Matrices S and C store the start and completion times of all operations, respectively, where si,j denotes the start time of job j on machine i, i.e., of Qi,j, and ci,j denotes the completion time of job j on machine i, ∀j ≤ n, ∀i ≤ m, si,j ∈ S, ci,j ∈ C. Each job j has a non-negative release date rj, and processing time pi,j denotes the execution duration of Qi,j.
Step 1: Set the start time si,j equal to release date rj and let ci,j = 0, ∀j ≤ n, ∀i ≤ m. Go to Step 2.
Step 2: Assign operation Qi,j with the earliest start time si,j in set A. If there is more than one job with the earliest start time, then schedule the job with the shortest processing time. Go to Step 3.
Step 3: Update the completion time ci,j = si,j + pi,j. Delete operation Qi,j in set A. Check the elements in matrix T to determine the machine on which the next execution of job j is processed. If machine k is the target, then add operation Qk,j in set A. Go to Step 4.
Step 4: Update the start times of the remaining operations of job j, sk,j = max(sk,j, ci,j), 1 ≤ k ≤ m, and the start times of the other jobs on machine i, si,h = max(si,h, ci,j), 1 ≤ h ≤ n. Go to Step 5.
Step 5: If set A ≠ Ø, turn to Step 2. Otherwise, stop the algorithm and calculate the objective value.
A numerical example is provided below to better explain the SPT-DS heuristic.
Example 1.
A JSS problem with three jobs {J1, J2, J3} and three machines {M1, M2, M3} is involved. The input data including processing times, release dates and processing routes are presented in Table 2.
The processing times and processing routes of the three jobs in Table 2 are stored in matrices P and T. Matrices S and C record the start and completion times of each operation Qi,j, 1 ≤ i ≤ 3, 1 ≤ j ≤ 3. In the initial status, the completion times of all operations are set to zero, and the start times of the jobs on all machines equal their release dates. The first row in matrix T is [2, 3, 1], which means the first operations of jobs J1, J2, and J3 are processed on machines M2, M3, and M1, respectively. Therefore, the data stored in set A are [Q2,1, Q3,2, Q1,3]. The elements in each matrix are shown as follows.
T_{3×3} = [2 3 1; 3 2 3; 1 1 2],  P_{3×3} = [2 2 2; 4 2 1; 3 6 5],  C_{3×3} = [0 0 0; 0 0 0; 0 0 0],  S_{3×3} = [1 0 2; 1 0 2; 1 0 2].
Schedule the operation in set A with the earliest start time. For example, s2,1 = 1, s3,2 = 0, s1,3 = 2. Therefore, the first operation of job J2 is scheduled on machine M3, i.e., Q3,2. The completion time of Q3,2 is c3,2 = s3,2 + p3,2 = 0 + 6 = 6. Then update set A and check the elements in matrix T. As the second execution of job J2 is actually processed on machine M2, the data stored in set A is updated to [Q2,1, Q2,2, Q1,3]. Then, update matrix S by recalculating the start time of job J2 and machine M3. For example, s2,2 = max (s2,2, c3,2) = max (0, 6) = 6, and s3,1 = max (s3,1, c3,2) = max (1, 6) = 6. The elements in each matrix are shown as follows.
C_{3×3} = [0 0 0; 0 0 0; 0 6 0],  S_{3×3} = [1 6 2; 1 6 2; 6 6 6].
Next, repeat the above steps to select an operation from set A to schedule until set A is empty. The final operation sequences on machines M1, M2, and M3 are {J3, J2, J1}, {J1, J2, J3}, and {J2, J1, J3}, respectively. The Gantt chart of the SPT-DS schedule is provided in Figure 1. The objective value of the schedule is 12² + 10² + 15² = 469.
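Steps 1–5 can also be sketched in code. The following is a minimal Python sketch (not the authors' implementation, which was written in C++), assuming the Example 1 data recovered from the matrices above: per-operation routes and processing times, and release dates r1 = 1, r2 = 0, r3 = 2.

```python
# SPT-DS sketch: repeatedly schedule the current operation with the earliest
# feasible start time, breaking ties by shortest processing time (SPT).
# route[j][i] = machine of the (i+1)-th operation of job j; ptime is aligned.
route = {1: [2, 3, 1], 2: [3, 2, 1], 3: [1, 3, 2]}   # assumed Example 1 data
ptime = {1: [4, 3, 2], 2: [6, 2, 2], 3: [2, 5, 1]}
release = {1: 1, 2: 0, 3: 2}

def spt_ds(route, ptime, release):
    nxt = {j: 0 for j in route}              # index of each job's current operation
    ready = dict(release)                    # earliest start of each job's next operation
    free = {m: 0 for ms in route.values() for m in ms}   # machine availability
    total = 0
    while any(nxt[j] < len(route[j]) for j in route):
        # Step 2: among current operations, earliest start first, SPT tie-break
        start, p, j = min((max(ready[j], free[route[j][nxt[j]]]), ptime[j][nxt[j]], j)
                          for j in route if nxt[j] < len(route[j]))
        finish = start + p                   # Step 3: complete the operation
        free[route[j][nxt[j]]] = finish      # Step 4: update machine and job availability
        ready[j] = finish
        nxt[j] += 1
        if nxt[j] == len(route[j]):          # job finished: accumulate C_j^2
            total += finish ** 2
    return total

print(spt_ds(route, ptime, release))  # 12^2 + 10^2 + 15^2 = 469
```

The printed value matches the objective of the Gantt chart in Figure 1.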

4.2. Well-Designed Lower Bound

An effective lower bound is proposed for the JSS TQCT problem to estimate the optimal value of large-scale instances. The basic idea of the lower bound is to relax the constraints on the relationship between machines and on non-preemption for each job. Letting p_{i,k}^{SRPT} denote the processing time of operation O_{i,k} when operations are scheduled according to the SRPT rule, the lower bound Z_LB is given as follows.
Z_LB = max_{1 ≤ i ≤ m} { ∑_{j=1}^{n} (C_{i,j}^{LB})² }
where
C_{i,j}^{LB} = max_{1 ≤ x ≤ j} { r_x + ∑_{k=x}^{j} p_{i,k}^{SRPT} }
Given the NP-hardness of the JSS TQCT problem, i.e., it cannot be solved in polynomial time for large-scale instances, an effective lower bound usually serves as a substitute for the optimal schedule when evaluating approximation algorithms. Generally, the lower bound is the optimal solution of a relaxed version of the original problem. An m-machine JSS TQCT problem is reduced to m preemptive versions of the single-machine scheduling (SMS) problem minimizing the TQCT criterion with release dates. Bai [29] proved that the SRPT rule is optimal for the preemptive SMS TQCT problem. Therefore, each of these SRPT schedules yields a lower bound for the JSS TQCT problem. As a larger lower bound provides better performance for minimization criteria, the dominant one among the m lower bounds is selected as the final lower bound for the JSS TQCT problem.
To better explain the lower bound, a numerical example is provided as follows.
Example 2.
The input data are similar to those in Example 1. The lower bound schedule with the SRPT rule on the three machines is shown in Figure 2. The scheduling on machine M2 is explained in detail for better understanding. At time t = 0, only job J2 is available. At time t = 1, job J1 is released, and the remaining time of job J2 is 5 > p2,1 = 4. On the basis of the SRPT rule, job J2 is preempted by job J1. At time t = 2, similarly, job J1 is preempted by job J3. Given that no extra job is released after the completion of job J3, the remaining parts of jobs J1 and J2 are scheduled with the SRPT rule. The full schedule for the lower bound is presented in Figure 2. The associated lower bound value is
Z_LB = max{2² + 4² + 6², 11² + 3² + 6², 9² + 4² + 14²} = max{56, 166, 293} = 293.
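The lower bound computation can be sketched likewise. A minimal Python version, assuming the machine-wise processing times of matrix P and the release dates of Example 1, reproduces the final value Z_LB = 293:

```python
def srpt_sum_sq(p, r):
    """Sum of squared completion times of the preemptive SRPT schedule on a
    single machine; p and r map each job to its processing time / release date."""
    rem, t, total = dict(p), 0, 0
    while rem:
        avail = [j for j in rem if r[j] <= t]
        if not avail:
            t = min(r[j] for j in rem)           # idle until the next release
            continue
        j = min(avail, key=lambda x: rem[x])     # shortest remaining processing time
        nxt = min((r[k] for k in rem if r[k] > t), default=float("inf"))
        run = min(rem[j], nxt - t)               # run j until done or next release
        t += run
        rem[j] -= run
        if rem[j] == 0:
            del rem[j]
            total += t * t                       # job j completes at time t
    return total

p_on = {1: {1: 2, 2: 2, 3: 2},   # p_{1,j}: processing times on machine M1 (assumed)
        2: {1: 4, 2: 2, 3: 1},   # p_{2,j}: machine M2
        3: {1: 3, 2: 6, 3: 5}}   # p_{3,j}: machine M3
r = {1: 1, 2: 0, 3: 2}
print(max(srpt_sum_sq(p_on[i], r) for i in p_on))  # 293
```

The maximum over the three single-machine relaxations is the final lower bound, in agreement with the value computed above.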

5. Effective HDDE Algorithm

The SPT-DS heuristic can quickly output a feasible solution for large-scale instances. However, the quality of the SPT-DS schedule weakens for medium-scale instances; for example, the mean gap is approximately 185% over 20 random instances with the combination m × n = 3 × 30. A meta-heuristic, by contrast, is a higher-level heuristic that controls the entire search process, so near-optimal solutions can be pursued systematically and efficiently. The differential evolution (DE) algorithm is a population-based evolutionary algorithm for global optimization in a continuous search space. The DE algorithm improves candidates by executing mutation and crossover operations and renews the population through greedy one-to-one selection. Compared with classical evolutionary meta-heuristics, the advantages of the DE algorithm include easy implementation, a simple structure, strong robustness, and fast convergence. The traditional DE algorithm was originally used to solve global optimization problems in a continuous search space, in which individuals are encoded by floating-point numbers that are invalid for discrete variables. This section transforms individuals into operation-based sequences and proposes the HDDE algorithm to solve the JSS problem.

5.1. Encoding and Decoding

Given the diversity of processing routes for each job in a JSS model, a job-number-based (JNB) representation is introduced to encode and decode between individuals and feasible schedules in the HDDE algorithm. The encoding scheme linearizes the operations in a feasible schedule according to their starting times and represents an individual by the corresponding job numbers. The decoding scheme restores a JNB individual to a feasible schedule, where each operation is assigned in turn to the available machine following the given machine sequence. Matrices T_{m×n} (operation sequences) and P̂_{m×n} (processing times of operations) are presented as follows.
T_{m×n} = [t_{11} t_{12} … t_{1n}; t_{21} t_{22} … t_{2n}; …; t_{m1} t_{m2} … t_{mn}]  and  P̂_{m×n} = [p̂_{11} p̂_{12} … p̂_{1n}; p̂_{21} p̂_{22} … p̂_{2n}; …; p̂_{m1} p̂_{m2} … p̂_{mn}]
A numerical example is provided below to further describe the encoding and decoding processes.
Example 3.
A JSS model with three jobs {J1, J2, J3} and three machines {M1, M2, M3} is provided to explain the procedure in detail. The input data are similar to those in Example 1. The data in Matrices T 3 × 3 and P ^ 3 × 3 are presented as follows.
T_{3×3} = [2 3 1; 3 2 3; 1 1 2]  and  P̂_{3×3} = [4 6 2; 3 2 5; 2 2 1]
where t23 = 3 indicates that machine M3 processes operation O2,3 and p ^ 23 = 5 is its associated processing time. The SPT-DS schedule in Figure 1 can be linearized as π = {O1,2, O1,1, O1,3, O2,2, O2,1, O3,2, O2,3, O3,1, O3,3}. In encoding, the individual of schedule π is denoted as JS = {2, 1, 3, 2, 1, 2, 3, 1, 3}, and the corresponding machine sequence is MS = {3, 2, 1, 2, 3, 1, 3, 1, 2}. In decoding, the operations of individual JS are assigned to permutation MS and scheduled with the dense scheduling rule, which generates a feasible schedule.
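The JS-to-MS mapping above follows from counting each job's occurrences: the k-th occurrence of job j in JS corresponds to the k-th operation of job j, whose machine is t_{k,j}. A short Python sketch (the helper function is a hypothetical name, not from the paper) reproduces MS of Example 3:

```python
# Map a job-number-based (JNB) individual to its machine sequence using
# matrix T, with T[i][j-1] = machine of the (i+1)-th operation of job j.
T = [[2, 3, 1],
     [3, 2, 3],
     [1, 1, 2]]   # assumed Example 3 data

def machine_sequence(js, T):
    seen = {}
    ms = []
    for j in js:
        k = seen.get(j, 0)        # how many operations of job j came before
        ms.append(T[k][j - 1])
        seen[j] = k + 1
    return ms

js = [2, 1, 3, 2, 1, 2, 3, 1, 3]
print(machine_sequence(js, T))  # [3, 2, 1, 2, 3, 1, 3, 1, 2], i.e., MS of Example 3
```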

5.2. Initialization

The main task of initialization is to generate the initial population and set the parameters. The initial population is randomly generated by swapping adjacent genes in a chromosome. Orthogonal tests are designed to determine four parameters, including population size Λ, the maximum number of iterations τmax, mutant factor Z, and crossover factor Y, thereby enhancing the performance of the algorithm. The pseudo-code for initialization is provided in Procedure 1.
Procedure 1: Initialization
Input: n, Λ, m // n is the number of jobs, m is the number of machines, Λ is the population size
Output: pop[Λ][m × n] // array storing the initial population
begin
  j ← 1;
  for (i from 0 to m × n − 1) do
    for (x from i to i + m − 1) do
      pop[0][x] ← j;
    end for
    i ← i + m; j ← j + 1;
  end for
  for (i from 1 to Λ − 1) do
    for (j from 0 to m × n − 1) do
      P[j] ← pop[i − 1][j];
    end for
    t ← rand(1, m × n);
    f ← rand(1, m × n);
    if (t ≠ f)
      temp ← P[t − 1];
      P[t − 1] ← P[f − 1];
      P[f − 1] ← temp;
    else
      i ← i − 1;
      continue;
    end if
    for (j from 0 to m × n − 1) do
      pop[i][j] ← P[j];
    end for
  end for
  return pop[Λ][m × n];
end
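Procedure 1 condenses into a short Python sketch (function name and seed are illustrative assumptions): the first individual lists each job number m times, and each later individual swaps two distinct genes of its predecessor.

```python
import random

def init_population(n, m, size, seed=0):
    """Initial population per Procedure 1 (sketch): first individual is
    [1]*m + [2]*m + ... + [n]*m; each successor swaps two random genes."""
    rng = random.Random(seed)
    first = [j for j in range(1, n + 1) for _ in range(m)]  # 1..n, each m times
    pop = [first]
    while len(pop) < size:
        child = pop[-1][:]
        t, f = rng.sample(range(m * n), 2)   # two distinct swap positions
        child[t], child[f] = child[f], child[t]
        pop.append(child)
    return pop

pop = init_population(n=3, m=3, size=5)
print(pop[0])  # [1, 1, 1, 2, 2, 2, 3, 3, 3]
```

Because individuals are only ever permuted, every member of the population keeps each job number exactly m times, which the decoding of Section 5.1 requires.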

5.3. Mutation and Crossover

The mutation operator is executed on all individuals in the current population to generate the mutant individual V_h^k = [v_{h,1}^k, v_{h,2}^k, …, v_{h,m×n}^k] at iteration k, where h ∈ [1, Λ]. Two pairs of target individuals {X_{α1}^{k−1}, X_{β1}^{k−1}} and {X_{α2}^{k−1}, X_{β2}^{k−1}} are randomly selected from the (k−1)-th population to implement the mutation operation with the current optimal individual X_best^{k−1} (the individual with the optimal objective value in the (k−1)-th population). The mutation operator is expressed as follows.
V_h^k = X_best^{k−1} ⊕ Z ⊗ (X_{α1}^{k−1} − X_{β1}^{k−1}) ⊕ Z ⊗ (X_{α2}^{k−1} − X_{β2}^{k−1})  (11)
where Z ∈ [0, 1] is the mutant factor. In Equality (11), operator ⊗ is executed as follows.
G_{h,x}^k = Z ⊗ (X_{αx}^{k−1} − X_{βx}^{k−1}) = { X_{αx,w}^{k−1} − X_{βx,w}^{k−1}, if rand(·) < Z; 0, otherwise },  w ≤ m × n  (12)
where rand(·) is a random number assigned to each element of the difference individual and x = 1, 2. Operator ⊕ in Equality (11) is executed as follows.
V_{h,w}^k = mod(X_{best,w}^{k−1} + G_{h1,w}^k + G_{h2,w}^k + n − 1, n) + 1,  w ≤ m × n  (13)
where mod(·) is the modular operator, which guarantees that each element of a mutant individual represents a valid job number. The pseudo-code for the mutation operator is provided in Procedure 2.
Procedure 2: Mutation operator
Input: pop[Λ][m × n] // current population
Output: V_h^k
begin
  do
  {
    αx, βx ← random(1, Λ); // x = 1, 2
  } while (αx, βx (x = 1, 2) are not pairwise different);
  /* Operation of operator ⊗ */
  X_{αx}^{k−1} ← an individual randomly selected from the population;
  X_{βx}^{k−1} ← another non-repeating individual randomly selected from the population;
  for (j from 1 to m × n) do // n is the number of jobs and m is the number of machines
    γ ← random(0, 1);
    if (γ < Z)
      G_{h,j}^k ← X_{αx,j}^{k−1} − X_{βx,j}^{k−1};
    else
      G_{h,j}^k ← 0;
    end if
  end for
  /* Operation of operator ⊕ */
  for (j from 1 to m × n) do
    V_{h,j}^k ← mod(X_{best,j}^{k−1} + G_{h1,j}^k + G_{h2,j}^k + n − 1, n) + 1;
  end for
  return V_h^k;
end
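Equations (11)–(13) together with Procedure 2 amount to the following sketch (the function and its sample arguments are illustrative assumptions, not the paper's implementation):

```python
import random

def mutate(best, a1, b1, a2, b2, Z, n, seed=None):
    """V = best (+) Z(x)(a1 - b1) (+) Z(x)(a2 - b2); the mod(.) wrap of
    Equation (13) keeps every gene a valid job number in 1..n."""
    rng = random.Random(seed)
    def diff(a, b):                            # operator (x), Equation (12)
        return [x - y if rng.random() < Z else 0 for x, y in zip(a, b)]
    g1, g2 = diff(a1, b1), diff(a2, b2)
    return [(x + d1 + d2 + n - 1) % n + 1      # operator (+), Equation (13)
            for x, d1, d2 in zip(best, g1, g2)]

best = [2, 1, 3, 2, 1, 2, 3, 1, 3]
v = mutate(best, [1, 2, 3] * 3, [3, 2, 1] * 3, [2, 3, 1] * 3, [1, 1, 2] * 3,
           Z=0.3, n=3, seed=7)
print(all(1 <= g <= 3 for g in v))  # True: every gene is a valid job number
```

Note that with Z = 0 both difference individuals vanish and the mutant reduces to the current best individual, which follows directly from Equation (13).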
The crossover operator selectively inserts a mutant individual into a target individual X_h^{k−1} to generate a trial individual U_h^k. A series of random numbers is assigned to the elements of a mutant individual V_h^k. For each element of the mutant individual, if the random number rand(·) < Y, then the element is retained; otherwise, it is removed, where Y ∈ (0, 1) is the crossover factor. Tri-point insertion is then executed to obtain a temporary individual: the remaining elements of the mutant individual are divided into three parts and inserted at three randomly selected positions of target individual X_h^{k−1}. The trial individual U_h^k is achieved by deleting the extra elements (genes repeated more than m times) of the temporary individual from left to right. The pseudo-code for the crossover operator is presented in Procedure 3.
Procedure 3: Crossover operator
Input: V_h^k
Output: U_h^k
begin
  for (j from 0 to m × n − 1) do // n is the number of jobs and m is the number of machines
    r ← random(0, 1);
    if (r ≥ Y)
      remove v_{h,j}^k from V_h^k;
    end if
  end for
  U_h^k ← X_h^{k−1};
  randomly split V_h^k into three parts;
  generate three random insertion points in U_h^k;
  insert the three parts of V_h^k at the three insertion points in sequence;
  remove duplicate operations from U_h^k;
  return U_h^k;
end
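The tri-point insertion and the left-to-right repair can be sketched as follows (function name and sample data are illustrative assumptions). The repair step guarantees that the trial individual is again a valid JNB individual with each job appearing exactly m times:

```python
import random

def crossover(target, mutant, Y, m, seed=None):
    """Tri-point crossover sketch: keep each mutant gene with probability Y,
    split the kept genes into three parts, insert them at random points of
    the target, then drop genes beyond m copies, scanning left to right."""
    rng = random.Random(seed)
    kept = [v for v in mutant if rng.random() < Y]        # retained genes
    c1, c2 = sorted(rng.choices(range(len(kept) + 1), k=2))
    parts = [kept[:c1], kept[c1:c2], kept[c2:]]           # tri-point split
    temp = target[:]
    for part in parts:                                    # random insertion points
        pos = rng.randrange(len(temp) + 1)
        temp[pos:pos] = part
    trial, count = [], {}
    for g in temp:                                        # delete surplus genes
        if count.get(g, 0) < m:
            trial.append(g)
            count[g] = count.get(g, 0) + 1
    return trial

target = [1, 1, 1, 2, 2, 2, 3, 3, 3]
mutant = [2, 1, 3, 2, 1, 2, 3, 1, 3]
print(sorted(crossover(target, mutant, Y=0.5, m=3, seed=4)) == sorted(target))  # True
```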
A numerical example is provided below to further describe the mutation and crossover operations.
Example 4.
A JSS model with three jobs {J1, J2, J3} and three machines {M1, M2, M3} is provided to explain the procedure in detail. Let the mutation and crossover factors be Z = 0.3 and Y = 0.5, respectively. The procedure of the mutation operation is illustrated in Table 3 and Table 4, while that of the crossover operation is illustrated in Table 5 and Figure 3.

5.4. Hill-Climbing-Based Improvement Strategy

A hill-climbing-based improvement strategy is introduced to enhance the quality of the final solution, aiming to balance diversification and intensification. Hill-climbing is a local search algorithm, which starts with an arbitrary solution of a given problem, then attempts to seek a better solution by making an incremental change to the solution. The change procedure executes continuously until no further improvements can be found.
However, hill-climbing consumes considerable running time to obtain a local optimal solution in practical settings. To save computing resources and improve algorithm efficiency, the improvement strategy is designed as follows. If a local optimal solution is found within the specified iterations, that solution is recorded; otherwise, the current best solution is recorded. The dominative parent individual is denoted as U_h^k. Ten iterations are conducted for the improvement strategy. Each iteration generates 20 neighborhood solutions by exchanging two elements of the parent individual. The best of the 20 neighborhood solutions is compared with the current optimal individual (COI). If it dominates the COI, the COI is updated and the next iteration is executed; otherwise, the original COI is retained until the maximum number of iterations is reached. The improvement strategy is performed with probability θ to save computing time. The pseudo-code for the improvement strategy is shown in Procedure 4.
Procedure 4: Hill-climbing-based improvement strategy
1  Input: θ, U_h^k
2  Output: U_h^k
3  Begin
4    q ← random(0, 1);
5    if (q < θ)
6      for (i from 0 to m × n − 1)
7        better[i] ← U_{h,i}^k; // store the parent individual
8      end for
9      for (z from 1 to 10)
10       init ← the objective value corresponding to better;
11       for (y from 1 to 20)
12         Neighbor[y] ← randomly exchange two elements of better to produce a neighborhood individual;
13         obj[y] ← the objective value of Neighbor[y];
14       end for
15       best_nei ← the best one among the 20 neighborhood individuals;
16       best_obj ← the objective value of best_nei;
17       if (best_obj < init)
18         for (i from 0 to m × n − 1)
19           better[i] ← best_nei[i];
20         end for
21       end if
22     end for
23     for (i from 0 to m × n − 1)
24       U_{h,i}^k ← better[i];
25     end for
26   end if
27   return U_h^k;
28 End
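For illustration, the improvement strategy of Procedure 4 can be sketched in Python as follows. This is a minimal sketch, not the paper's C++ implementation; the objective function and the permutation encoding are placeholders supplied by the caller.

```python
import random

def hill_climb(individual, objective, iterations=10, neighbors=20,
               theta=0.8, rng=random):
    """Hill-climbing improvement: executed with probability theta, then
    `iterations` rounds, each sampling `neighbors` swap-neighbors and
    accepting only a strict improvement."""
    if rng.random() >= theta:          # strategy is applied with probability theta
        return individual
    better = list(individual)          # store the parent individual
    best_obj = objective(better)
    for _ in range(iterations):
        # generate swap neighbors of the current best individual
        candidates = []
        for _ in range(neighbors):
            nb = list(better)
            i, j = rng.sample(range(len(nb)), 2)
            nb[i], nb[j] = nb[j], nb[i]
            candidates.append(nb)
        best_nei = min(candidates, key=objective)
        if objective(best_nei) < best_obj:   # keep only strict improvements
            better, best_obj = best_nei, objective(best_nei)
    return better
```

With θ = 0.8, roughly 80% of trial individuals undergo the ten-round, 20-neighbor search, which mirrors the parameter setting chosen in Appendix A.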

5.5. Selection

Population updating simulates the law of natural selection: survival of the fittest. Trial individual U_h^k is decoded as a feasible schedule. If the objective value of U_h^k dominates that of the target individual X_h^{k−1}, the individual in the current population is updated; otherwise, the target individual is retained. Executing the mutation, crossover, and selection operations for all target individuals generates a new population. The iteration procedure is repeated until the termination condition is satisfied.

5.6. Framework of the HDDE Algorithm

With the previous procedures combined, the entire framework of the HDDE algorithm is provided in Algorithm 1.
Algorithm 1: The HDDE algorithm
1  Input: parameters τmax, Z, Y, Λ, θ
2  Output: X_best^k
3  Begin
4    /* Initialization phase */
5    pop[Λ][m × n] ← randomly generate the initial population;
6    evaluate the initial population;
7    k ← 1;
8    X_best^k ← the individual with the optimal objective value in the current population;
9    while (k ≤ τmax) do
10     for (h from 1 to Λ) do
11       /* Mutation phase */
12       V_h^k ← the mutant individual after the mutation operation;
13       /* Crossover phase */
14       U_h^k ← the trial individual after the crossover operation;
15       /* Improvement strategy phase */
16       update U_h^k with the improvement strategy operator (Procedure 4);
17       /* Selection phase */
18       if (F(U_h^k) < F(X_h^{k−1})) // F(X) is the objective value of individual X
19         X_h^k = U_h^k;
20       else
21         X_h^k = X_h^{k−1};
22       end if
23     end for
24     update X_best^k;
25     k = k + 1;
26   end while
27   return X_best^{τmax};
28 End
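The overall loop of Algorithm 1 can be summarized by the following Python sketch. The mutation and crossover operators are collapsed into a single random swap of the current best individual purely for illustration; the paper's two-difference-vector mutation and tri-insertion crossover are not reproduced here.

```python
import random

def hdde(objective, dim, pop_size=20, max_iter=50, theta=0.8, rng=None):
    """Skeleton of the HDDE loop: mutation/crossover (simplified placeholder),
    probabilistic local improvement, then one-to-one greedy selection."""
    rng = rng or random.Random()
    # permutation encoding of length dim
    pop = [rng.sample(range(dim), dim) for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(max_iter):
        for h in range(pop_size):
            # placeholder for mutation + crossover: one swap of the best individual
            trial = list(best)
            i, j = rng.sample(range(dim), 2)
            trial[i], trial[j] = trial[j], trial[i]
            # improvement strategy applied with probability theta
            if rng.random() < theta:
                k, l = rng.sample(range(dim), 2)
                cand = list(trial)
                cand[k], cand[l] = cand[l], cand[k]
                if objective(cand) < objective(trial):
                    trial = cand
            # selection: trial replaces the target only if it dominates it
            if objective(trial) < objective(pop[h]):
                pop[h] = trial
        best = min(pop + [best], key=objective)
    return best
```

The one-to-one selection rule (trial vs. target) is exactly the mechanism of Section 5.5; everything upstream of it is a hedged simplification.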

6. Numerical Simulation Experiment

Numerous simulation experiments were executed to evaluate the effectiveness of the HDDE algorithm and the SPT-DS heuristic for medium- and large-scale instances, respectively. The proposed algorithms were implemented in C++, and the simulations were run on a Dell computer with an Intel Core i5-8400 (2.8 GHz × 6) CPU and 8.00 GB RAM. The input data were generated randomly because no standard benchmark is available for the JSS problem with release dates. Release date rj was drawn from a uniform distribution U[0, 3n]; without loss of generality, at least one job was released at time zero. Processing time pi,j was drawn from a uniform distribution U[1, 10]. For operation Oi,j, processing time pi,j = 0 indicates that job j does not pass through machine i. The processing route of each job was generated randomly, and a job may not pass through all machines. For each machine-job combination, 10 random trials were conducted, and the average values are reported in the following tables.
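A generator following these rules might look as follows (a sketch; the function and variable names are illustrative, not taken from the paper's implementation):

```python
import random

def generate_instance(m, n, rng=None):
    """Random JSS instance as described in the text: r_j ~ U[0, 3n] with at
    least one job released at time zero, p_{i,j} ~ U[1, 10], and a random
    processing route per job that may skip some machines."""
    rng = rng or random.Random()
    release = [rng.randint(0, 3 * n) for _ in range(n)]
    release[rng.randrange(n)] = 0          # force at least one release at time zero
    routes, ptimes = [], []
    for _ in range(n):
        k = rng.randint(1, m)              # this job visits k of the m machines
        route = rng.sample(range(m), k)    # random machine order (no repeats)
        routes.append(route)
        p = [0] * m                        # p[i] = 0 -> job skips machine i
        for i in route:
            p[i] = rng.randint(1, 10)
        ptimes.append(p)
    return release, routes, ptimes
```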

6.1. Performance of SPT-DS Heuristic

This section designs numerical experiments to verify the performance of the SPT-DS heuristic for large-scale instances. The machine-job combinations m = {3, 5, 8} with n = {100, 300, 500} were tested with the previous input settings. The measure GAP = (ZH − ZLB)/ZLB was used to evaluate the error of the algorithm, where ZH and ZLB are the objective values of the SPT-DS heuristic and the lower bound, respectively. Mean GAP values are presented in Table 6.
Table 6 shows that the GAPs are stable for a fixed number of machines. For the five-machine example, the GAP values fluctuate between 1.0644 and 1.2143 as the problem scale increases from 100 to 500 jobs, a fluctuation range of about 7.5%. Furthermore, the growth trend of the GAP values indicates that the SPT-DS heuristic deteriorates as m increases for a fixed problem scale. For the 300-job example, the GAP value increases from 0.7349 to 1.4221 as the number of machines increases from 3 to 8. This phenomenon might be attributed to a larger number of machines increasing the idle time between adjacent operations in a feasible schedule, thereby weakening the performance of the heuristic.

6.2. Improvement of the HDDE Algorithm

To highlight the improvement schemes of the tri-insertion and local search, comparative experiments were executed between the standard DDE (SDDE) and the HDDE algorithms. The machine-job combinations m = {3, 5, 8} with n = {50, 100, 150} were tested under the input settings presented at the beginning of this section. The performance of a meta-heuristic mainly depends on the parameter settings. Thus, a series of orthogonal experiments was conducted to determine the parameters of the algorithm (Appendix A), such as mutation factor Z, crossover factor Y, population size Λ, and local search probability θ (Table A2). The parameters are set as Z = 0.2, Y = 0.1, Λ = 200, τmax = 300, and θ = 0.8.
The relative difference percentage RDP = (ZH − Z*)/Z* × 100% was used to evaluate the effectiveness of the HDDE algorithm, where ZH is the final objective value obtained by a meta-heuristic in each test, and Z* is the minimum among all ZH values.
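Both evaluation measures (GAP from Section 6.1 and RDP above) are straightforward to compute:

```python
def gap(z_h, z_lb):
    """GAP = (Z_H - Z_LB) / Z_LB: relative error of a heuristic value
    against a lower bound (Section 6.1)."""
    return (z_h - z_lb) / z_lb

def rdp(z_h, z_star):
    """RDP = (Z_H - Z*) / Z* * 100: relative difference percentage of a
    meta-heuristic value against the best value found (Section 6.2)."""
    return (z_h - z_star) / z_star * 100.0
```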
Table 7 shows the average RDP (ARDP), minimum RDP (MinRDP), maximum RDP (MaxRDP), and standard deviation (SD) for SDDE and HDDE. ARDP is a performance measure that evaluates an algorithm numerically. The ARDPs of HDDE evidently dominate those of SDDE. For example, the mean values of ARDPs obtained by HDDE and SDDE are 0.6011 and 10.1811, respectively. SD aims to reveal the stability performance of an algorithm. The SD values of HDDE and SDDE are stable in [0, 1.21] and [2.91, 4.34], respectively, except for the maximum values, indicating the robustness of the former relative to the latter. In the minimum objective values of the 90 numerical tests, about 94% (85 out of 90) of the data were obtained by HDDE and about 6% of the data (5 out of 90) were obtained by SDDE. These final experimental outcomes show that the improvement schemes considerably enhance the performance of the HDDE algorithm.

6.3. Comparison between HDDE and Other Optimization Algorithms

6.3.1. Comparison between HDDE and ACO

The ant colony optimization (ACO) algorithm is selected as a baseline to demonstrate the superiority of the HDDE algorithm. The machine-job combinations and input parameters are identical to those in Section 6.2, and the RDP measure is used again. The parameters of the ACO algorithm are presented as follows. P_ij^k is the probability that ant k selects operation j at position i (i.e., as the next operation of its partial sequence).
P_ij^k = ([τ_ij]^α [η_ij]^β) / Σ_{s ∉ Ω_k} ([τ_is]^α [η_is]^β), if j ∉ Ω_k;  P_ij^k = 0, otherwise.  (14)
where Ω_k is the tabu list of ant k, which stores the operations that have already been scheduled; τ_ij is the pheromone concentration of operation j, and η_ij is the current heuristic factor of operation j.
η_ij = 1/d_ij  (15)
where dij is the processing time of operation j. Parameters α and β indicate the relative importance of pheromones and heuristics.
Δτ_ij^k = Q/C_k, if operation j is scheduled in position i by ant k;  Δτ_ij^k = 0, otherwise.  (16)
where Q is a constant, and C_k is the objective value of the schedule constructed by ant k.
Δτ_ij = Σ_{k=1}^{Λ} Δτ_ij^k  (17)
where Δτ_ij is the total pheromone increment of operation j.
τ_ij = ρτ_ij + Δτ_ij  (18)
where τ_ij on the left-hand side is the updated pheromone concentration of operation j, τ_ij on the right-hand side is its concentration from the previous iteration, Δτ_ij is the total pheromone increment, and ρ is the evaporation coefficient.
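The selection rule and pheromone update can be sketched as follows (a dictionary-based illustration only; alpha, beta, rho, and Q follow the notation above, and the caller passes the set of operations not yet in the tabu list):

```python
import random

def select_operation(candidates, tau, eta, alpha=1.0, beta=0.5, rng=random):
    """Roulette-wheel selection of the next operation: the probability of
    operation j is proportional to tau[j]**alpha * eta[j]**beta over the
    operations not yet scheduled (candidates)."""
    weights = [tau[j] ** alpha * eta[j] ** beta for j in candidates]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return j
    return candidates[-1]          # guard against floating-point rounding

def update_pheromone(tau, delta, rho=0.9):
    """Pheromone update tau <- rho * tau + delta, where delta accumulates
    Q / C_k over all ants that scheduled the operation."""
    return {j: rho * tau[j] + delta.get(j, 0.0) for j in tau}
```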
After previous preparation, a basic framework of the ACO algorithm is presented as follows.
Step 1. Set parameters for the algorithm, such as the ant colony size Λ, the maximum number of iterations tmax, constant Q, evaporation coefficient ρ, and parameters α and β. The initial pheromone concentration of each operation is set to a certain constant. Set the current number of iterations t = 0.
Step 2. Generate the initial colony. Using Equation (14), each ant applies the roulette-wheel method to successively select the operations to be processed. When all operations are included in each ant's sequence, the initial colony is formed. Use Equations (16)–(18) to update the pheromone of each operation.
Step 3. Set h = 1, and start iteration.
Step 4. The current ant uses the roulette method to select an operation with Equation (14). Once an operation is selected, the pheromone increment of the operation is obtained by Equation (16).
Step 5. If h < Λ, set h = h + 1, and then go to Step 4; otherwise go to Step 6.
Step 6. If t < tmax, set t = t + 1. The pheromone increment of each operation is obtained by Equation (16). Update the pheromone of each operation in the current iteration with Equations (17) and (18), and then go to Step 3; otherwise go to Step 7.
Step 7. Terminate procedure if the termination condition is satisfied. The optimal individual of the current ant colony is output as the final solution.
Similarly, the parameters of the ACO algorithm were set by orthogonal experiments, as shown in Appendix B: α = 1.0, β = 0.5, ρ = 0.9, Q = 0.5, Λ = 200, and tmax = 300. Table 8 shows the average RDP (ARDP), minimum RDP (MinRDP), maximum RDP (MaxRDP), and standard deviation (SD) for ACO and HDDE. The ARDPs of HDDE evidently dominate those of ACO; for example, the mean ARDPs obtained by HDDE and ACO are 0.3789 and 22.4422, respectively. The SD values of HDDE and ACO are stable in [0, 1.78] and [3.12, 9.69], respectively, except for the maximum values, indicating the robustness of the former relative to the latter. Among the minimum objective values of the 90 numerical tests, about 92% (83 out of 90) were obtained by HDDE and about 8% (7 out of 90) by ACO. These numerical results reveal that the HDDE algorithm completely dominates the ACO algorithm.

6.3.2. Comparison between HDDE and PSO

To highlight the dominance of the HDDE algorithm, a series of comparison experiments was conducted between the HDDE and PSO algorithms. The concrete procedure of the PSO algorithm follows [30,31]. The parameters of the HDDE algorithm are identical to those in Section 6.3.1. The parameters of the PSO algorithm were set by orthogonal experiments (shown in Appendix C): population size Λ = 200, maximum number of iterations τmax = 300, inertia weight w = 0.9, maximum speed Vmax = 4, maximum position Smax = 4, cognitive coefficient c1 = 3, and social coefficient c2 = 3.
The data in Table 9 are the values of ARDP, MinRDP, MaxRDP, and SD for PSO and HDDE. The RDP results of HDDE are all 0 for the 90 trials, indicating that the HDDE algorithm completely dominates the PSO algorithm.

6.3.3. Comparison between HDDE and GA

This section presents comparison experiments between the HDDE and GA algorithms. The experimental parameters of the GA are set as follows: mutation probability Z1 = 0.5, crossover probability Y1 = 0.5, population size Λ = 200, and maximum number of iterations τmax = 300. The detailed parameter setting process is presented in Appendix D.
The values of ARDP, MinRDP, MaxRDP, and SD for GA and HDDE are shown in Table 10. The ARDPs of HDDE are clearly superior to those of GA; for example, the mean ARDPs obtained by HDDE and GA are 0.158 and 23.769, respectively. The SD values of GA are stable in [5.70, 12.91], except for the maximum values, whereas the SD of HDDE is 3.04 at only one scale (3 × 150) and zero at all other scales, indicating the robustness of the latter relative to the former. Among the minimum objective values of the 90 numerical tests, about 97.8% (88 out of 90) were obtained by HDDE and about 2.2% (2 out of 90) by GA. These numerical results reveal that the HDDE algorithm completely dominates the GA algorithm.

6.4. Comparison under JSS Problem Benchmarks

Currently, no standard benchmark is available for the JSS problem with release dates. Therefore, the basic benchmarks proposed by Taillard [32] are combined with release dates generated from a uniform distribution U[0, 3n] to verify the performance of the proposed algorithm.
To keep the meta-heuristics running within an appropriate time, the processing-time data with machine-job combinations m × n = 15 × 50, 20 × 50, and 20 × 100 were selected from the benchmarks. The data in Table 11 are the final objective values of the HDDE, ACO, PSO, and GA algorithms, where gap1 = (ZACO − ZHDDE)/ZHDDE × 100%, gap2 = (ZPSO − ZHDDE)/ZHDDE × 100%, gap3 = (ZGA − ZHDDE)/ZHDDE × 100%, and ZHDDE, ZACO, ZPSO, and ZGA are the final objective values obtained by HDDE, ACO, PSO, and GA, respectively. It is obvious that the HDDE algorithm is superior to the other population-based meta-heuristics.
However, the tested meta-heuristics consumed different CPU times. For an m × n = 20 × 100 instance, the CPU times expended by the HDDE, ACO, PSO, and GA algorithms were 17,890, 22,055, 837.853, and 238.448 s, respectively. The results reveal that better solution quality generally costs more CPU time. As a trade-off between solution quality and running time, the HDDE algorithm dominates the other meta-heuristics.

7. Conclusions

This study investigates the JSS problem of minimizing the total quadratic completion time, where each job becomes available at its release date. No polynomial algorithm can solve the problem because of its NP-hardness. For large-scale instances, the SPT-DS heuristic is proposed to achieve approximate solutions in a short computing time. For medium-scale instances, the HDDE algorithm is provided to achieve high-quality solutions, where the well-designed tri-insertion crossover and local search schemes significantly enhance the performance of the meta-heuristic. A series of random experiments demonstrates the effectiveness of the proposed algorithms.
Future research will mainly focus on two aspects. On the one hand, an acceleration scheme will be proposed to save running time for the HDDE algorithm. On the other hand, the model will be generalized to a multi-objective version, which is more common in production scheduling environments. A non-dominated sorting-based meta-heuristic will be presented to obtain Pareto-optimal solutions, and a high-efficiency improvement scheme, such as variable neighborhood search, will be combined with the meta-heuristic for JSS problems.

Author Contributions

Supervision, T.R.; methodology—designing algorithms, Y.Z. and T.R.; formal analysis, B.-y.C., X.-y.W. and P.Z.; conceptualization—problem modeling, S.-R.C.; writing—original draft preparation, M.Z.; writing—review and editing, C.-C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by Fundamental Research Funds for the Central Universities (N2017009, N2017008, N181706001, N182608003, N161702001, N2018008, N181703005), National Natural Science Foundation of China (61902057), the Doctoral Start-up Funds of Liaoning Province (2019-BS-084).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Orthogonal experiments are designed to determine appropriate parameters of the DDE algorithm, including population size Λ, mutation factor Z, crossover factor Y, maximum iteration τmax, and local search probability θ, because the performance of the DDE algorithm mainly depends on the parameter settings. The parameter levels of the HDDE algorithm are shown in Table A1.
To save computing time, the test scale is fixed at m = 3 and n = 60. The mean improvement percentage MIP = (ZINI − ZFIN)/ZFIN × 100% is used to measure the effect of the parameters, where ZINI is the minimum objective value in the initial population, and ZFIN is the final objective value of the output solution at the termination of the algorithm. The orthogonal experiment results for the parameters of the HDDE algorithm are shown in Table A2, and the main effect graph of the mean values is shown in Figure A1. Referring to the peak point of each parameter (2-1-1-5-2) in Figure A1, the best parameter combination is Z = 0.2, Y = 0.1, Λ = 200, τmax = 300, and θ = 0.8.
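The main-effect means behind a graph such as Figure A1 are obtained by averaging MIP over all runs that share a factor level. A sketch (generic, not tied to the paper's exact data):

```python
def main_effects(levels, responses, n_levels=5):
    """Mean response per factor level, as plotted in a main-effect graph:
    for each factor, average the response over all runs at each level.
    `levels` is a list of tuples (one level index per factor, 1-based)."""
    n_factors = len(levels[0])
    effects = []
    for f in range(n_factors):
        means = []
        for lv in range(1, n_levels + 1):
            vals = [resp for row, resp in zip(levels, responses) if row[f] == lv]
            means.append(sum(vals) / len(vals))
        effects.append(means)
    return effects
```

The "peak point" cited in the appendices is then simply, for each factor, the level index at which this mean is largest.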
Figure A1. The main effect graph of HDDE.
Table A1. HDDE algorithm parameter levels.
Level | Z | Y | Λ | τmax | θ
1 | 0.1 | 0.1 | 200 | 100 | 0.75
2 | 0.2 | 0.2 | 300 | 150 | 0.8
3 | 0.3 | 0.3 | 400 | 200 | 0.85
4 | 0.4 | 0.4 | 500 | 250 | 0.9
5 | 0.5 | 0.5 | 600 | 300 | 0.95
Table A2. HDDE algorithm parameters: orthogonal experiment results.
No. | Z | Y | Λ | τmax | θ | MIP
1 | 1 | 1 | 1 | 1 | 1 | 63.047
2 | 1 | 2 | 2 | 2 | 2 | 56.857
3 | 1 | 3 | 3 | 3 | 3 | 62.44
4 | 1 | 4 | 4 | 4 | 4 | 61.173
5 | 1 | 5 | 5 | 5 | 5 | 56.198
6 | 2 | 1 | 2 | 3 | 4 | 68.973
7 | 2 | 2 | 3 | 4 | 5 | 63.566
8 | 2 | 3 | 4 | 5 | 1 | 62.135
9 | 2 | 4 | 5 | 1 | 2 | 51.484
10 | 2 | 5 | 1 | 2 | 3 | 63.784
11 | 3 | 1 | 3 | 5 | 2 | 76.275
12 | 3 | 2 | 4 | 1 | 3 | 48.861
13 | 3 | 3 | 5 | 2 | 4 | 47.98
14 | 3 | 4 | 1 | 3 | 5 | 57.46
15 | 3 | 5 | 2 | 4 | 1 | 56.328
16 | 4 | 1 | 4 | 2 | 5 | 65.379
17 | 4 | 2 | 5 | 3 | 1 | 60.104
18 | 4 | 3 | 1 | 4 | 2 | 57.64
19 | 4 | 4 | 2 | 5 | 3 | 55.235
20 | 4 | 5 | 3 | 1 | 4 | 44.366
21 | 5 | 1 | 5 | 4 | 3 | 58.799
22 | 5 | 2 | 1 | 5 | 4 | 64.755
23 | 5 | 3 | 2 | 1 | 5 | 40.386
24 | 5 | 4 | 3 | 2 | 1 | 42.218
25 | 5 | 5 | 4 | 3 | 2 | 48.76

Appendix B

Similarly, orthogonal experiments are designed to determine appropriate parameters of the ACO algorithm, including ant colony size Λ, pheromone quantity Q, evaporation coefficient ρ, maximum iteration tmax, pheromone importance factor α, and heuristic importance factor β. The parameter levels of the ACO algorithm are shown in Table A3.
To save computing time, the test scale is fixed at m = 3 and n = 60, and MIP is used to measure the effect of the parameters. The orthogonal experiment results for the parameters of the ACO algorithm are shown in Table A4, and the main effect graph of the mean values is shown in Figure A2. Referring to the peak point of each parameter (5-4-1-5-1-5) in Figure A2, the best parameter combination is α = 1.0, β = 0.5, ρ = 0.9, Q = 0.5, Λ = 200, and tmax = 300.
Table A3. ACO algorithm parameter levels.
Level | α | β | ρ | Q | Λ | tmax
1 | 0.6 | 0.2 | 0.9 | 0.9 | 200 | 100
2 | 0.7 | 0.3 | 1 | 0.8 | 300 | 150
3 | 0.8 | 0.4 | 0.8 | 0.7 | 400 | 200
4 | 0.9 | 0.5 | 0.6 | 0.6 | 500 | 250
5 | 1 | 0.6 | 0.7 | 0.5 | 600 | 300
Table A4. ACO algorithm parameters: orthogonal experiment results.
No. | α | β | ρ | Q | Λ | tmax | MIP
1 | 1 | 1 | 1 | 1 | 1 | 1 | 19.133
2 | 1 | 2 | 2 | 2 | 2 | 2 | 17.13
3 | 1 | 3 | 3 | 3 | 3 | 3 | 21.664
4 | 1 | 4 | 4 | 4 | 4 | 4 | 25.079
5 | 1 | 5 | 5 | 5 | 5 | 5 | 24.833
6 | 2 | 1 | 2 | 3 | 4 | 5 | 12.356
7 | 2 | 2 | 3 | 4 | 5 | 1 | 17.048
8 | 2 | 3 | 4 | 5 | 1 | 2 | 22.346
9 | 2 | 4 | 5 | 1 | 2 | 3 | 22.964
10 | 2 | 5 | 1 | 2 | 3 | 4 | 19.987
11 | 3 | 1 | 3 | 5 | 2 | 4 | 15.748
12 | 3 | 2 | 4 | 1 | 3 | 5 | 17.635
13 | 3 | 3 | 5 | 2 | 4 | 1 | 21.003
14 | 3 | 4 | 1 | 3 | 5 | 2 | 20.2
15 | 3 | 5 | 2 | 4 | 1 | 3 | 24.685
16 | 4 | 1 | 4 | 2 | 5 | 3 | 14.161
17 | 4 | 2 | 5 | 3 | 1 | 4 | 22.272
18 | 4 | 3 | 1 | 4 | 2 | 5 | 29.234
19 | 4 | 4 | 2 | 5 | 3 | 1 | 23.501
20 | 4 | 5 | 3 | 1 | 4 | 2 | 22.543
21 | 5 | 1 | 5 | 4 | 3 | 2 | 11.02
22 | 5 | 2 | 1 | 5 | 4 | 3 | 23.61
23 | 5 | 3 | 2 | 1 | 5 | 4 | 26.278
24 | 5 | 4 | 3 | 2 | 1 | 5 | 30.529
25 | 5 | 5 | 4 | 3 | 2 | 1 | 25.445
Figure A2. The main effect graph of ACO.

Appendix C

The parameters of the PSO algorithm mainly include population size Λ, maximum iteration τmax, inertia weight w, maximum speed Vmax, maximum position Smax, cognitive coefficient c1, and social coefficient c2. To unify the comparison criterion, the population size and maximum iteration of PSO are set to the same values as those of HDDE; the remaining parameters are set by orthogonal experiments. The parameter levels of the PSO algorithm are shown in Table A5.
The test scale is m × n = 3 × 60, and the test procedure is similar to that in Appendix A. The results for the parameters of the PSO algorithm are shown in Table A6, and the main effect graph of the mean values is shown in Figure A3. Referring to the peak point of each parameter (1-2-3-1-3) in Figure A3, the best parameter combination is w = 0.9, Vmax = 4, Smax = 4, c1 = 3, and c2 = 3, together with Λ = 200 and τmax = 300.
Table A5. PSO algorithm parameter levels.
Level | ω | Vmax | Smax | c1 | c2
1 | 0.9 | 5 | 6 | 3 | 1
2 | 1.0 | 4 | 5 | 2.5 | 2
3 | 1.1 | 3 | 4 | 2 | 3
4 | 1.2 | 2 | 3 | 1.5 | 2.5
5 | 1.3 | 1 | 2 | 1 | 1.5
Table A6. PSO algorithm parameters: orthogonal experiment results.
No. | ω | Vmax | Smax | c1 | c2 | MIP
1 | 1 | 1 | 1 | 1 | 1 | 75.467
2 | 1 | 2 | 2 | 2 | 2 | 80.659
3 | 1 | 3 | 3 | 3 | 3 | 87.806
4 | 1 | 4 | 4 | 4 | 4 | 44.144
5 | 1 | 5 | 5 | 5 | 5 | 43.162
6 | 2 | 1 | 2 | 3 | 4 | 68.973
7 | 2 | 2 | 3 | 4 | 5 | 64.049
8 | 2 | 3 | 4 | 5 | 1 | 46.286
9 | 2 | 4 | 5 | 1 | 2 | 68.278
10 | 2 | 5 | 1 | 2 | 3 | 74.803
11 | 3 | 1 | 3 | 5 | 2 | 60.749
12 | 3 | 2 | 4 | 1 | 3 | 88.783
13 | 3 | 3 | 5 | 2 | 4 | 58.687
14 | 3 | 4 | 1 | 3 | 5 | 44.56
15 | 3 | 5 | 2 | 4 | 1 | 65.843
16 | 4 | 1 | 4 | 2 | 5 | 40.564
17 | 4 | 2 | 5 | 3 | 1 | 60.727
18 | 4 | 3 | 1 | 4 | 2 | 58.335
19 | 4 | 4 | 2 | 5 | 3 | 48.848
20 | 4 | 5 | 3 | 1 | 4 | 48.898
21 | 5 | 1 | 5 | 4 | 3 | 58.814
22 | 5 | 2 | 1 | 5 | 4 | 44.572
23 | 5 | 3 | 2 | 1 | 5 | 36.048
24 | 5 | 4 | 3 | 2 | 1 | 57.198
25 | 5 | 5 | 4 | 3 | 2 | 49.239
Figure A3. The main effect graph of PSO.

Appendix D

The parameters of GA mainly include mutation probability Z1, crossover probability Y1, population size Λ, and maximum iteration τmax. Similarly, an orthogonal experiment with four factors and five levels was performed to determine the parameters. The parameter levels of GA are shown in Table A7, and the results are shown in Table A8. The main effect graph of the mean values for the orthogonal experiment results is shown in Figure A4. Referring to the peak point of each parameter (5-5-1-5) in Figure A4, the best parameter combination is Z1 = 0.5, Y1 = 0.5, Λ = 200, and τmax = 300.
Figure A4. The main effect graph of GA.
Table A7. GA parameter levels.
Level | Z1 | Y1 | Λ | τmax
1 | 0.1 | 0.1 | 200 | 100
2 | 0.2 | 0.2 | 300 | 150
3 | 0.3 | 0.3 | 400 | 200
4 | 0.4 | 0.4 | 500 | 250
5 | 0.5 | 0.5 | 600 | 300
Table A8. GA parameters: orthogonal experiment results.
No. | Z1 | Y1 | Λ | τmax | MIP
1 | 1 | 1 | 1 | 1 | 15.438
2 | 1 | 2 | 2 | 2 | 19.518
3 | 1 | 3 | 3 | 3 | 28.87
4 | 1 | 4 | 4 | 4 | 24.818
5 | 1 | 5 | 5 | 5 | 29.768
6 | 2 | 1 | 2 | 3 | 22.523
7 | 2 | 2 | 3 | 4 | 28.46
8 | 2 | 3 | 4 | 5 | 28.97
9 | 2 | 4 | 5 | 1 | 20.75
10 | 2 | 5 | 1 | 2 | 26.568
11 | 3 | 1 | 3 | 5 | 32.46
12 | 3 | 2 | 4 | 1 | 24.465
13 | 3 | 3 | 5 | 2 | 22.413
14 | 3 | 4 | 1 | 3 | 29.891
15 | 3 | 5 | 2 | 4 | 35.587
16 | 4 | 1 | 4 | 2 | 22.361
17 | 4 | 2 | 5 | 3 | 27.872
18 | 4 | 3 | 1 | 4 | 36.13
19 | 4 | 4 | 2 | 5 | 30.854
20 | 4 | 5 | 3 | 1 | 21.569
21 | 5 | 1 | 5 | 4 | 24.833
22 | 5 | 2 | 1 | 5 | 38.551
23 | 5 | 3 | 2 | 1 | 27.793
24 | 5 | 4 | 3 | 2 | 33.15
25 | 5 | 5 | 4 | 3 | 31.272

References

  1. Akers, S.B.; Friedman, J. A non-numerical approach to production scheduling problems. J. Oper. Res. Soc. Am. 1955, 3, 429–442. [Google Scholar] [CrossRef]
  2. Jain, A.; Meeran, S. Deterministic job-shop scheduling: Past, present and future. Eur. J. Oper. Res. 1999, 113, 390–434. [Google Scholar] [CrossRef]
  3. Zhang, J.; Ding, G.; Zou, Y.; Qin, S.; Fu, J. Review of job shop scheduling research and its new perspectives under Industry 4.0. J. Intell. Manuf. 2019, 30, 1809–1830. [Google Scholar] [CrossRef]
  4. Cheng, T.C.E.; Liu, Z. Parallel machine scheduling to minimize the sum of quadratic completion times. IIE Trans. 2004, 36, 11–17. [Google Scholar] [CrossRef] [Green Version]
  5. Garey, M.R.; Johnson, D.S.; Sethi, R. The Complexity of Flowshop and Jobshop Scheduling. Math. Oper. Res. 1976, 1, 117–129. [Google Scholar] [CrossRef]
  6. Chen, B.; Potts, C.N.; Woeginger, G.J. A Review of Machine Scheduling: Complexity, Algorithms and Approximability. In Handbook of Combinatorial Optimization; Du, D.-Z., Pardalos, P., Eds.; Kluwer Academic Publishers: London, UK, 1998; pp. 21–169. [Google Scholar]
  7. Brucker, P. Scheduling Algorithms, 5th ed.; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  8. Khadwilard, A.; Chansombat, S.; Thepphakorn, T.; Thapatsuwan, P.; Chainate, W.; Pongcharoen, P. Application of firefly algorithm and its parameter setting for job shop scheduling. J. Ind. Technol. 2012, 8, 49–58. [Google Scholar]
  9. Gao, H.; Kwong, S.; Fan, B.; Wang, R. A hybrid particle-swarm tabu search algorithm for solving job shop scheduling problems. IEEE Trans. Ind. Inform. 2014, 10, 2044–2054. [Google Scholar] [CrossRef]
  10. Qiu, X.; Lau, H. An AIS-based hybrid algorithm for static job shop scheduling problem. J. Intell. Manuf. 2014, 25, 489–503. [Google Scholar] [CrossRef] [Green Version]
  11. Wang, X.; Duan, H. A hybrid biogeography-based optimization algorithm for job shop scheduling problem. Comput. Ind. Eng. 2014, 73, 96–114. [Google Scholar] [CrossRef]
  12. Keesari, H.S.; Rao, R.V. Optimization of job shop scheduling problems using teaching-learning-based optimization algorithm. OPSEARCH 2014, 51, 545–561. [Google Scholar] [CrossRef]
  13. Asadzadeh, L. A local search genetic algorithm for the job shop scheduling problem with intelligent agents. Comput. Ind. Eng. 2015, 85, 376–383. [Google Scholar] [CrossRef]
  14. Peng, B.; Lü, Z.; Cheng, T.C.E. A tabu search/path relinking algorithm to solve the job shop scheduling problem. Comput. Oper. Res. 2015, 53, 154–164. [Google Scholar] [CrossRef] [Green Version]
  15. Kurdi, M. A new hybrid island model genetic algorithm for job shop scheduling problem. Comput. Ind. Eng. 2015, 88, 273–283. [Google Scholar] [CrossRef]
  16. Cheng, T.C.E.; Peng, B.; Lü, Z. A hybrid evolutionary algorithm to solve the job shop scheduling problem. Ann. Oper. Res. 2016, 242, 223–237. [Google Scholar] [CrossRef]
  17. Dao, T.K.; Pan, T.S.; Nguyen, T.T.; Pan, J.S. Parallel bat algorithm for optimizing makespan in job shop scheduling problems. J. Intell. Manuf. 2018, 29, 451–462. [Google Scholar] [CrossRef]
  18. Saidi-Mehrabad, M.; Dehnavi-Arani, S.; Evazabadian, F.; Mahmoodian, V. An Ant Colony Algorithm (ACA) for solving the new integrated model of job shop scheduling and conflict-free routing of AGVs. Comput. Ind. Eng. 2015, 86, 2–13. [Google Scholar] [CrossRef]
  19. Sundar, S.; Suganthan, P.N.; Jin, C.T.; Xiang, C.T.; Soon, C.C. A hybrid artificial bee colony algorithm for the job-shop scheduling problem with no-wait constraint. Soft Comput. 2017, 5, 1193–1202. [Google Scholar] [CrossRef]
  20. Kuhpfahl, J.; Bierwirth, C. A study on local search neighborhoods for the job shop scheduling problem with total weighted tardiness objective. Comput. Oper. Res. 2016, 66, 44–57. [Google Scholar] [CrossRef]
  21. Kundakcı, N.; Kulak, O. Hybrid genetic algorithms for minimizing makespan in dynamic job shop scheduling problem. Comput. Ind. Eng. 2016, 96, 31–51. [Google Scholar] [CrossRef]
  22. Ku, W.Y.; Beck, J.C. Mixed integer programming models for job shop scheduling: A computational analysis. Comput. Oper. Res. 2016, 73, 165–173. [Google Scholar] [CrossRef] [Green Version]
  23. Phanden, R.K.; Jain, A.; Verma, R. A genetic algorithm-based approach for job shop scheduling. J. Manuf. Technol. Manag. 2012, 23, 937–946. [Google Scholar] [CrossRef]
  24. Nguyen, S.; Zhang, M.; Johnston, M.; Tan, K.C. Automatic design of scheduling policies for dynamic multi-objective job shop scheduling via cooperative coevolution genetic programming. IEEE Trans. Evol. Comput. 2013, 18, 193–208. [Google Scholar] [CrossRef]
  25. May, G.; Stahl, B.; Taisch, M.; Prabhu, V. Multi-objective genetic algorithm for energy-efficient job shop scheduling. Int. J. Prod. Res. 2015, 53, 7071–7089. [Google Scholar] [CrossRef]
  26. Salido, M.A.; Escamilla, J.; Giret, A.; Barber, F. A genetic algorithm for energy-efficiency in job-shop scheduling. Int. J. Adv. Manuf. Technol. 2016, 85, 1303–1314. [Google Scholar] [CrossRef]
  27. Zhang, R.; Chiong, R. Solving the energy-efficient job shop scheduling problem: A multi-objective genetic algorithm with enhanced local search for minimizing the total weighted tardiness and total energy consumption. J. Clean. Prod. 2016, 112, 3361–3375. [Google Scholar] [CrossRef]
  28. Townsend, W. The single machine problem with quadratic penalty function of completion times: A branch-and bound solution. Manag. Sci. 1978, 24, 530–534. [Google Scholar] [CrossRef]
  29. Bai, D. Asymptotic analysis of online algorithms and improved scheme for the flow shop scheduling problem with release dates. Int. J. Syst. Sci. 2015, 46, 1994–2005. [Google Scholar] [CrossRef]
  30. Ehsan, N.; Mahdi, P.K.; Matti, L. Transmission expansion planning integrated with wind farms: A review, comparative study, and a novel profound search approach. Int. J. Electr. Power Energy Syst. 2020, 115, 105460. [Google Scholar]
  31. Ehsan, N.; Mahdi, P.K.; Hamdi, A. An efficient particle swarm optimization algorithm to solve optimal power flow problem integrated with FACTS devices. Appl. Soft Comput. J. 2019, 80, 243–262. [Google Scholar]
  32. Taillard, E. Benchmarks for basic scheduling problems. Eur. J. Oper. Res. 1993, 64, 278–285. [Google Scholar] [CrossRef]
Figure 1. Gantt chart of SPT-DS heuristic.
Figure 2. SRPT schedule of the lower bound.
Figure 3. Diagram of crossover operation.
Table 1. Model variables.
Variable | Meaning
n | total number of jobs
m | total number of machines
rj | release date of job j
pi,j | processing time of job j on machine i
Ci,j | completion time of job j on machine i
Cj | maximum completion time of job j
aj,i,l | = 1 if job j is processed on machine i before machine l; 0 otherwise
zi,j,k | = 1 if job k immediately precedes job j on machine i; 0 otherwise
yi,j | = 1 if the first operation of job j is processed on machine i; 0 otherwise
M | a large positive number
Table 2. Input data of Example 1.
Jobs | Processing Routes | Processing Times | Release Dates
J1 | M2, M3, M1 | 4, 3, 2 | 1
J2 | M3, M2, M1 | 6, 2, 2 | 0
J3 | M1, M3, M2 | 2, 5, 1 | 2
Table 3. Generation of temporary vector G_h1^k (Z = 0.3).
Individual | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9
Xα1 | 2 | 1 | 3 | 1 | 2 | 1 | 3 | 2 | 3
Xβ1 | 2 | 3 | 1 | 3 | 2 | 3 | 1 | 1 | 2
Xα1 − Xβ1 | 0 | −2 | 2 | −2 | 0 | −2 | 2 | 1 | 1
rand | 0.20 | 0.37 | 0.06 | 0.18 | 0.26 | 0.68 | 0.86 | 0.50 | 0.15
G_h1^k | 0 | 0 | 2 | −2 | 0 | 0 | 0 | 0 | 1
Table 4. Generation of temporary vector G_h2^k (Z = 0.3).
Individual | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9
Xα2 | 3 | 2 | 3 | 1 | 2 | 1 | 3 | 1 | 2
Xβ2 | 2 | 1 | 1 | 3 | 2 | 3 | 1 | 3 | 2
Xα2 − Xβ2 | 1 | 1 | 2 | −2 | 0 | −2 | 2 | −2 | 0
rand | 0.56 | 0.02 | 0.38 | 0.26 | 0.66 | 0.18 | 0.59 | 0.89 | 0.15
G_h2^k | 0 | 1 | 0 | −2 | 0 | −2 | 0 | 0 | 0
Table 5. Generation of the trial individual (Y = 0.5).
Individual | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9
X_best^{k−1} | 2 | 3 | 1 | 2 | 1 | 3 | 3 | 1 | 2
G_h1^k | 0 | 0 | 2 | −2 | 0 | 0 | 0 | 0 | 1
G_h2^k | 0 | 1 | 0 | −2 | 0 | −2 | 0 | 0 | 0
X_best^{k−1} + G_h1^k + G_h2^k | 2 | 4 | 3 | −2 | 1 | 1 | 3 | 1 | 2
V_h^k | 2 | 1 | 3 | 1 | 1 | 1 | 3 | 1 | 2
rand | 0.56 | 0.28 | 0.32 | 0.48 | 0.66 | 0.18 | 0.59 | 0.89 | 0.15
U_h^k (elements inherited from V_h^k) | – | 1 | 3 | 1 | – | 1 | – | – | 2
Table 6. GAPs of the SPT-DS heuristic.
n/m | m = 3 | m = 5 | m = 8
n = 100 | 0.7808 | 1.0644 | 1.4948
n = 300 | 0.7349 | 1.2143 | 1.4221
n = 500 | 0.7217 | 1.1150 | 1.4003
Table 7. Improvement effect of the HDDE algorithm.
m × n | SDDE: ARDP | MaxRDP | MinRDP | SD | HDDE: ARDP | MaxRDP | MinRDP | SD
3 × 50 | 26.11 | 55.36 | 16.42 | 14.98 | 4.81 | 48.13 | 0 | 14.43
3 × 100 | 14.64 | 20.39 | 9.35 | 3.91 | 0 | 0 | 0 | 0
3 × 150 | 7.38 | 11.75 | 0 | 3.73 | 0.40 | 4.04 | 0 | 1.21
5 × 50 | 13.82 | 23.55 | 6.59 | 4.34 | 0 | 0 | 0 | 0
5 × 100 | 7.81 | 15.51 | 2.36 | 4.03 | 0 | 0 | 0 | 0
5 × 150 | 5.82 | 12.29 | 0 | 3.44 | 0.06 | 0.55 | 0 | 0.17
8 × 50 | 4.78 | 10.8 | 0 | 3.66 | 0.05 | 0.48 | 0 | 0.14
8 × 100 | 7.32 | 13.5 | 3.53 | 2.91 | 0 | 0 | 0 | 0
8 × 150 | 3.95 | 11.38 | 0 | 3.34 | 0.09 | 0.91 | 0 | 0.27
Table 8. Comparison between HDDE and ACO.
m × n | ACO: ARDP | MaxRDP | MinRDP | SD | HDDE: ARDP | MaxRDP | MinRDP | SD
3 × 50 | 27.54 | 50.31 | 14.36 | 12.88 | 0 | 0 | 0 | 0
3 × 100 | 9.07 | 22.34 | 0 | 7.48 | 0.81 | 5.70 | 0 | 1.78
3 × 150 | 2.67 | 10.76 | 0 | 3.75 | 1.89 | 12.43 | 0 | 3.72
5 × 50 | 13.62 | 26.73 | 0 | 9.69 | 0.43 | 4.32 | 0 | 1.29
5 × 100 | 41.12 | 55.66 | 33.50 | 6.00 | 0 | 0 | 0 | 0
5 × 150 | 37.16 | 43.71 | 32.24 | 3.35 | 0 | 0 | 0 | 0
8 × 50 | 5.76 | 16.99 | 0 | 6.53 | 0.28 | 2.84 | 0 | 0.85
8 × 100 | 34.16 | 39.27 | 28.12 | 3.51 | 0 | 0 | 0 | 0
8 × 150 | 30.88 | 35.04 | 25.83 | 3.12 | 0 | 0 | 0 | 0
Table 9. Comparison between HDDE and PSO.
m × n | PSO: ARDP | MaxRDP | MinRDP | SD | HDDE: ARDP | MaxRDP | MinRDP | SD
3 × 50 | 45.96 | 79.70 | 19.87 | 19.89 | 0 | 0 | 0 | 0
3 × 100 | 44.23 | 65.99 | 23.06 | 14.33 | 0 | 0 | 0 | 0
3 × 150 | 30.60 | 48.25 | 13.48 | 9.878 | 0 | 0 | 0 | 0
5 × 50 | 62.99 | 83.20 | 44.53 | 12.414 | 0 | 0 | 0 | 0
5 × 100 | 48.66 | 55.63 | 40.24 | 5.29 | 0 | 0 | 0 | 0
5 × 150 | 42.89 | 48.38 | 37.98 | 3.353 | 0 | 0 | 0 | 0
8 × 50 | 51.02 | 63.66 | 41.41 | 7.13 | 0 | 0 | 0 | 0
8 × 100 | 41.63 | 53.69 | 33.94 | 5.52 | 0 | 0 | 0 | 0
8 × 150 | 37.70 | 43.38 | 32.99 | 3.34 | 0 | 0 | 0 | 0
Table 10. Comparison between HDDE and GA.
m × n | GA: ARDP | MaxRDP | MinRDP | SD | HDDE: ARDP | MaxRDP | MinRDP | SD
3 × 50 | 29.09 | 62.36 | 9.19 | 16.67 | 0 | 0 | 0 | 0
3 × 100 | 20.07 | 46.35 | 0.58 | 12.69 | 0 | 0 | 0 | 0
3 × 150 | 10.34 | 30.00 | 0 | 9.04 | 1.42 | 8.18 | 0 | 3.04
5 × 50 | 36.05 | 59.27 | 18.34 | 12.91 | 0 | 0 | 0 | 0
5 × 100 | 18.02 | 33.99 | 0.99 | 10.70 | 0 | 0 | 0 | 0
5 × 150 | 14.92 | 25.37 | 4.26 | 6.73 | 0 | 0 | 0 | 0
8 × 50 | 23.23 | 32.33 | 16.81 | 5.70 | 0 | 0 | 0 | 0
8 × 100 | 30.18 | 51.12 | 13.14 | 11.54 | 0 | 0 | 0 | 0
8 × 150 | 32.02 | 46 | 16.94 | 9.24 | 0 | 0 | 0 | 0
Table 11. Comparison under JSS problem benchmarks.

| m × n | HDDE | ACO | PSO | GA | gap1 | gap2 | gap3 |
|---|---|---|---|---|---|---|---|
| 15 × 50 | 5.34 × 10^8 | 7.88 × 10^8 | 9.18 × 10^8 | 1.28 × 10^9 | 47.68 | 71.92 | 139.70 |
| | 5.44 × 10^8 | 7.87 × 10^8 | 9.35 × 10^8 | 1.27 × 10^9 | 44.52 | 71.84 | 132.34 |
| | 5.12 × 10^8 | 7.54 × 10^8 | 8.34 × 10^8 | 1.45 × 10^9 | 47.17 | 62.95 | 182.17 |
| | 5.15 × 10^8 | 7.99 × 10^8 | 8.90 × 10^8 | 1.15 × 10^9 | 55.23 | 73.03 | 123.65 |
| | 5.15 × 10^8 | 7.92 × 10^8 | 8.84 × 10^8 | 1.39 × 10^9 | 53.94 | 71.84 | 170.79 |
| | 5.54 × 10^8 | 8.36 × 10^8 | 9.35 × 10^8 | 1.17 × 10^9 | 50.98 | 68.90 | 111.56 |
| | 5.32 × 10^8 | 8.22 × 10^8 | 8.91 × 10^8 | 1.51 × 10^9 | 54.50 | 67.38 | 183.35 |
| | 5.71 × 10^8 | 8.40 × 10^8 | 9.94 × 10^8 | 1.38 × 10^9 | 47.06 | 74.12 | 142.28 |
| | 5.16 × 10^8 | 7.43 × 10^8 | 8.86 × 10^8 | 1.28 × 10^9 | 44.02 | 71.71 | 147.24 |
| | 5.50 × 10^8 | 8.13 × 10^8 | 8.67 × 10^8 | 1.48 × 10^9 | 47.87 | 57.99 | 168.68 |
| 20 × 50 | 7.56 × 10^8 | 1.05 × 10^9 | 1.18 × 10^9 | 2.01 × 10^9 | 39.26 | 55.67 | 166.45 |
| | 7.81 × 10^8 | 1.16 × 10^9 | 1.24 × 10^9 | 2.31 × 10^9 | 47.97 | 59.22 | 195.26 |
| | 7.08 × 10^8 | 1.02 × 10^9 | 1.13 × 10^9 | 1.79 × 10^9 | 43.55 | 59.82 | 152.24 |
| | 6.49 × 10^8 | 9.59 × 10^8 | 1.10 × 10^9 | 1.77 × 10^9 | 47.66 | 69.11 | 172.34 |
| | 7.01 × 10^8 | 1.03 × 10^9 | 1.25 × 10^9 | 2.04 × 10^9 | 46.63 | 78.74 | 190.40 |
| | 7.16 × 10^8 | 1.08 × 10^9 | 1.23 × 10^9 | 2.07 × 10^9 | 50.28 | 71.38 | 189.73 |
| | 6.92 × 10^8 | 1.04 × 10^9 | 1.20 × 10^9 | 2.20 × 10^9 | 50.31 | 72.82 | 217.56 |
| | 6.91 × 10^8 | 9.99 × 10^8 | 1.17 × 10^9 | 1.82 × 10^9 | 44.70 | 69.10 | 162.85 |
| | 7.21 × 10^8 | 1.11 × 10^9 | 1.19 × 10^9 | 2.29 × 10^9 | 53.57 | 65.19 | 218.07 |
| | 7.57 × 10^8 | 1.08 × 10^9 | 1.20 × 10^9 | 2.08 × 10^9 | 42.33 | 58.00 | 174.95 |
| 20 × 100 | 4.68 × 10^9 | 6.86 × 10^9 | 7.54 × 10^9 | 1.20 × 10^10 | 46.60 | 61.20 | 156.75 |
| | 4.31 × 10^9 | 6.14 × 10^9 | 6.89 × 10^9 | 1.55 × 10^10 | 42.60 | 59.89 | 260.32 |
| | 4.67 × 10^9 | 6.62 × 10^9 | 7.35 × 10^9 | 1.32 × 10^10 | 41.79 | 57.41 | 182.91 |
| | 4.49 × 10^9 | 6.44 × 10^9 | 6.99 × 10^9 | 1.57 × 10^10 | 43.37 | 55.80 | 250.04 |
| | 4.58 × 10^9 | 6.56 × 10^9 | 7.19 × 10^9 | 1.53 × 10^10 | 43.01 | 56.74 | 233.26 |
| | 4.51 × 10^9 | 6.41 × 10^9 | 6.82 × 10^9 | 1.60 × 10^10 | 42.13 | 51.11 | 254.59 |
| | 4.71 × 10^9 | 6.20 × 10^9 | 6.94 × 10^9 | 1.66 × 10^10 | 31.57 | 47.45 | 252.63 |
| | 4.44 × 10^9 | 6.40 × 10^9 | 6.86 × 10^9 | 1.64 × 10^10 | 43.91 | 54.35 | 268.87 |
| | 4.62 × 10^9 | 6.16 × 10^9 | 6.88 × 10^9 | 1.55 × 10^10 | 33.14 | 48.78 | 235.07 |
| | 4.42 × 10^9 | 6.23 × 10^9 | 6.97 × 10^9 | 1.40 × 10^10 | 41.19 | 57.91 | 217.75 |
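The gap columns in Table 11 are consistent with the relative percentage gap of each competitor's total quadratic completion time over HDDE's, i.e., gap1 = 100 × (ACO − HDDE)/HDDE, and likewise gap2 for PSO and gap3 for GA: averaging the thirty reported gap1 values reproduces the 45.62 quoted in the abstract. A minimal sketch under that assumed definition (the function name is illustrative):

```python
def gap(competitor: float, hdde: float) -> float:
    # Relative percentage gap of a competitor's objective over HDDE's
    # (assumed definition, consistent with Table 11's columns).
    return 100.0 * (competitor - hdde) / hdde


# First 15 x 50 instance of Table 11: the displayed objectives give ~47.57,
# close to the reported 47.68 (the printed objectives are rounded to 3 digits).
g1 = gap(7.88e8, 5.34e8)

# Averaging the thirty reported gap1 values recovers the abstract's 45.62:
gap1_column = [
    47.68, 44.52, 47.17, 55.23, 53.94, 50.98, 54.50, 47.06, 44.02, 47.87,
    39.26, 47.97, 43.55, 47.66, 46.63, 50.28, 50.31, 44.70, 53.57, 42.33,
    46.60, 42.60, 41.79, 43.37, 43.01, 42.13, 31.57, 43.91, 33.14, 41.19,
]
avg_gap1 = sum(gap1_column) / len(gap1_column)  # ~45.62
```

The same averaging over gap2 and gap3 yields the 63.38 and 188.46 reported for PSO and GA, respectively.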

Ren, T.; Zhang, Y.; Cheng, S.-R.; Wu, C.-C.; Zhang, M.; Chang, B.-y.; Wang, X.-y.; Zhao, P. Effective Heuristic Algorithms Solving the Jobshop Scheduling Problem with Release Dates. Mathematics 2020, 8, 1221. https://doi.org/10.3390/math8081221