Article

A Hybrid Large Neighborhood Search Method for Minimizing Makespan on Unrelated Parallel Batch Processing Machines with Incompatible Job Families

1 School of Traffic & Transportation Engineering, Central South University, Changsha 410075, China
2 School of Engineering, Deakin University, Geelong, VIC 3216, Australia
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(5), 3934; https://doi.org/10.3390/su15053934
Submission received: 30 January 2023 / Revised: 17 February 2023 / Accepted: 19 February 2023 / Published: 21 February 2023

Abstract
This paper studies a scheduling problem on unrelated parallel batch processing machines with non-identical job sizes, arbitrary job ready times, and incompatible family constraints, where each batch is limited to jobs from the same family. The scheduling objective is to minimize the maximum completion time (makespan). The problem is important and has wide applications in the semiconductor manufacturing industry. This study proposes a mixed integer programming (MIP) model, which commercial solvers can solve optimally and efficiently for small-scale instances. Since the problem is known to be NP-hard, a hybrid large neighborhood search (HLNS) combining a tabu strategy and local search is proposed for large-scale problems, and a lower bound is derived to evaluate the effectiveness of the proposed algorithm. The proposed algorithm is evaluated on numerous compatible benchmark instances and newly generated incompatible instances. The computational results indicate that, on incompatible problems, the HLNS outperforms the commercial solver and compares favorably against the lower bound, while on compatible problems, the HLNS outperforms the existing algorithms. Meanwhile, the comparison results confirm the effectiveness of the tabu and local search strategies.

1. Introduction

Batch processing machines (BPMs) have applications across many industries, such as semiconductors, garments, food processing, and transportation. As manufacturing industries keep growing, scheduling comes under increasing stress, and immature scheduling methods have become a processing bottleneck. Enhancing the efficiency of batch processing operations is therefore urgent for manufacturers seeking to maintain or raise their market share, giving rise to batch processing machine scheduling problems [1]. The batch processing scheduling problem can be addressed by solving two subproblems: grouping the jobs into batches and scheduling the batches on the machines. The batch formation subproblem for jobs from multiple job families often occurs in the diffusion operation [2]. Jobs can be grouped into a batch as long as their total size does not exceed the machine capacity. When a batch is formed, the batch ready time is the largest ready time of all jobs in the batch, and the batch processing time is the longest processing time of all jobs in the batch. A job family is a subset of jobs that need the same recipe and can be processed together as a batch. In this paper, we address a scheduling problem for unrelated parallel BPMs with unequal capacities, where jobs have different processing times on different machines. A batch is composed of a set of jobs from the same family such that the total size of the jobs does not exceed the machine capacity. The objective is to minimize the makespan over all machines.
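The batching rules above (batch ready time = latest job ready time, batch processing time = longest job processing time, subject to capacity) can be made concrete with a short sketch. The helper name and data layout below are our own illustration, not taken from the paper:

```python
# Illustrative helper for the batching rules described above: a batch's ready
# time is the latest ready time of its jobs, its processing time is the longest
# processing time of its jobs, and its total size must fit the machine capacity.

def batch_stats(jobs, machine, capacity):
    """jobs: list of dicts with 'size', 'ready', and per-machine 'proc' times."""
    total_size = sum(j["size"] for j in jobs)
    if total_size > capacity:
        raise ValueError("batch exceeds machine capacity")
    ready = max(j["ready"] for j in jobs)
    proc = max(j["proc"][machine] for j in jobs)
    return ready, proc

jobs = [
    {"size": 3, "ready": 2, "proc": {0: 5, 1: 7}},
    {"size": 4, "ready": 6, "proc": {0: 9, 1: 4}},
]
print(batch_stats(jobs, machine=0, capacity=10))  # (6, 9)
```

On machine 0 this batch becomes ready at time 6 (the later job) and occupies the machine for 9 time units (the longer job), which is exactly the coupling that makes batch formation and batch scheduling interdependent.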
The research on batch processing machines with incompatible job families can be roughly divided into single batch processing machine scheduling problems and parallel batch processing machine scheduling problems. Scheduling on a single batch processing machine consists of two subproblems: grouping the jobs into batches and scheduling the batches on the machine. Among heuristic methods, Uzsoy [2] introduced the single batch processing problem with incompatible job families and proposed efficient algorithms to minimize makespan, maximum lateness, and total weighted completion time, investigating both static and dynamic job arrival scenarios. Koh et al. [3] designed a heuristic algorithm named Biggest Weight First (BWF) and a hybrid genetic algorithm (GA) to minimize makespan, total completion time, and total weighted completion time, with comparisons made against a lower bound. Jolai [4] aimed to minimize the number of tardy jobs on a single batch processing machine, showed that the problem was NP-hard, and presented a dynamic programming algorithm with polynomial time complexity when the number of job families and the batch machine capacity are fixed. Kashan and Karimi [5] investigated the scheduling problem with arbitrary job sizes and weights and presented an ant colony optimization (ACO) algorithm. The ACO was compared with the GA and BWF of [3], and the results showed ACO's superior solving ability in most cases. Dauzère-Pérès and Mönch [6] minimized the weighted and unweighted numbers of tardy jobs and proposed two different mixed integer linear programming (MILP) formulations and a random key genetic algorithm (RKGA) to solve the problem. Cheng et al. [7] introduced a batch processing problem with arbitrary job sizes and showed that minimizing makespan and minimizing total completion time are both NP-hard.
A MILP model and two polynomial time heuristics based on the longest processing time (LPT) rule and the first fit (FF) rule were presented to handle the problem. Li et al. [8] studied unequal job sizes with the objective of minimizing the maximum lateness, presenting a mathematical model, a lower bound, and several heuristics. All the studies mentioned above solve the batch processing problem by splitting it into sub-problems, such as the batch formation subproblem and the batch scheduling subproblem [2,3,4,5,6,7,8]. These sub-problems are solved by independently designed heuristics, and the overall problem is solved through the continuous, iterative interaction of these heuristics.
Some studies use exact methods to solve single batch processing machine scheduling problems. Azizoglu and Webster [9] considered arbitrary job weights and sizes and proposed a branch and bound method; experiments showed that the algorithm yielded optimal solutions for problem instances with up to 25 jobs. Tangudu and Kurz [10] considered the scenario of dynamic job arrivals for minimizing the total weighted tardiness and developed a branch and bound algorithm capable of solving problems with up to 32 jobs. Yao et al. [11] considered arbitrary ready times and proposed a branch and bound method; computational results suggested that it could solve test problems with up to 60 jobs and six families.
A growing number of studies have paid attention to scheduling methods for identical parallel batch processing machines. Reflecting the requirements of real-world manufacturing, the single-objective problems mainly optimize the total weighted tardiness, total flow time, total weighted completion time, and makespan. To minimize total weighted tardiness, Balasubramanian et al. [12] proposed two versions of genetic algorithms (GAs): the first GA was applied to assign batches to machines, while the second handled job assignment. Mönch et al. [13] considered dynamic job arrivals and used different heuristics to optimize batch formation, assignment, and sequencing separately. Chiang et al. [14] solved the same problem as [13] and proposed a memetic algorithm with a new genome encoding scheme to search for optimal or near-optimal batch formation and batch sequences simultaneously; machine assignment was resolved in the proposed decoding scheme. Through extensive experiments, the algorithm demonstrated its advantages over the approach of [13] regarding solution quality and computational burden. Almeder and Mönch [15] devised an ACO and a variable neighborhood search (VNS) approach, and the computational results showed that the VNS outperformed the ACO and the GA [12] with respect to computational costs and solution quality. Venkataramana and Srinivas Raghavan [16] divided the problem into batch formation and batch scheduling and proposed three ACOs; experimental results showed that the ACO-based algorithms performed better than the GA [12]. Lausch and Mönch [17] designed a GA approach, an ACO approach, and a large neighborhood search (LNS) approach for the same problem as [15]. The LNS outperformed the GA and the ACO and was comparable with the VNS proposed in [15]. To minimize the total flow time, Cakici et al.
[18] added arbitrary ready times and unequal job sizes to the problem and presented a mathematical model and several VNS variants. Ham et al. [19] considered only non-identical job sizes and formulated a MIP model and a constraint programming (CP) model. The models' performances were compared with those of the VNS heuristic [18], and the computational results demonstrated that the CP model outperformed the VNS regarding solution quality and computational time. Jiang et al. [20] studied the scheduling of jobs with non-identical sizes and ready times, proposing a MILP and a CP model for small-scale instances, and a batch-based local search and an iterated greedy (IG) algorithm for large-scale instances. To minimize total weighted completion time, Huang et al. [21] proposed a MILP model and an approximation algorithm; numerical results demonstrated that the proposed algorithm can obtain high-quality solutions. As for minimizing the makespan, Koh et al. [22] added non-identical job sizes to the problem and proposed several heuristics and a GA. Jia et al. [23] studied the problem of scheduling a set of jobs with non-identical sizes and proposed a Max-Min Ant System (MMAS) algorithm; computational results demonstrated the superiority of the proposed MMAS over the GA [22]. Sun et al. [24] considered deteriorating jobs and module changes simultaneously and proposed a hybrid algorithm combining self-adaptive differential evolution (SADE) and an artificial fish swarm algorithm (AFSA); computational experiments were conducted to evaluate its performance. Similarly, most of the studies above solve the problem through heuristic interaction between its parts (such as batch formation and batch scheduling), iteratively searching for better solutions.
All of the studies above focus on scheduling on identical machines, where jobs have the same processing speeds on all machines. However, new and old machines usually co-exist in practical manufacturing, resulting in different processing speeds for jobs, which leads to unrelated parallel batch processing machines. Meanwhile, most of the research mentioned above assumes all jobs are the same size or ignores that jobs become available at different times. Motivated by these practical considerations, we address the problem of scheduling on unrelated parallel batch processing machines with non-identical job sizes and dynamic job arrivals. The performance metric is the makespan, which reflects the utilization of limited resources and is the most commonly used metric in the literature [25]. To the best of our knowledge, research on minimizing the makespan of parallel batch processing machine scheduling problems with incompatible job families is still very limited.
A conventional way to tackle batch processing scheduling problems is to split them into two subproblems and solve them through heuristic interaction, which complicates the solving mechanism. In order to avoid decomposing the batch processing scheduling problem and improve the computational efficiency, we propose a hybrid large neighborhood search (HLNS) from the perspective of discrete combinatorial optimization and optimize batch formation and batch scheduling simultaneously.
The HLNS is based on the large neighborhood search framework [26], which searches for better solutions iteratively through a destroy operator and a repair operator designed according to the characteristics of the target problem. Based on the tardiness of jobs, we design a tardiness-related removal operator. Meanwhile, to avoid unwanted repetitive exploration, a tabu strategy is integrated into the removal operator, yielding a tabu-based tardiness-related removal operator. Furthermore, to strengthen local search ability, a local search algorithm is applied to the current solution if no better solution is achieved after a fixed number of iterations. The proposed HLNS combines the advantages of LNS, the tabu strategy, and the local search strategy, giving it strong exploration ability and guiding the search towards better regions. To demonstrate the effectiveness of the algorithm, a MIP model is presented that can solve small-scale instances optimally, and a lower bound is proposed to evaluate large-scale instances. Furthermore, experiments were conducted on 180 benchmark instances, and the results are compared with the heuristic methods reported in [27,28,29,30].
The main contributions of the work can be summarized as follows:
(i).
For the first time, we present a MIP model for the unrelated parallel batch processing machines problem with incompatible job families (UPBPMIJF), which can solve small-scale instances optimally;
(ii).
A hybrid large neighborhood search (HLNS) approach combined with a tabu and local search strategy is proposed to solve the scheduling problem efficiently;
(iii).
To verify the effectiveness of the proposed algorithm, two sets of instances, covering incompatible and compatible job families, are used in the experiments, and the results obtained by the HLNS are compared with those of a MIP model, a lower bound, and four previously published heuristic methods [27,28,29,30]. Numerous computational experiments demonstrate the advantages of the HLNS algorithm.
The rest of the paper is organized as follows: Section 2 introduces UPBPMIJF and presents a MIP model and a lower bound. The framework of the proposed HLNS for UPBPMIJF and its implementation procedure is described in Section 3. Section 4 provides numerical results, comparisons, and analyses. Finally, concluding remarks are made in Section 5.

2. Mathematical Model and Lower Bound

The scheduling problem of interest can be formally defined as follows. Given are a set of n jobs, J = {1, …, n}, to be processed on m unrelated parallel batch processing machines (UPBPMs), M = {1, …, m}, where each machine m has a capacity C_m. The jobs are divided into f families, and the set of families is F = {1, 2, …, f}. Only jobs from the same family can be grouped into a batch. Furthermore, each job j is associated with a size s_j, a ready time r_j, and a processing time p_jm on machine m. A batch processing machine can process a group of jobs simultaneously as long as the capacity C_m of the machine is not exceeded. We assume that the size of each job does not exceed the largest machine capacity C̄ = max_{m∈M} C_m (i.e., s_j ≤ C̄ for all j ∈ J). The ready time of a batch is the time when all jobs in the batch are available to be processed, and the processing time of a batch is determined by the longest processing time of the jobs in the batch.

2.1. Mathematical Model

The following symbols will be used throughout the rest of this paper:
Sets
J — set of jobs, indexed by j.
M — set of machines, indexed by m.
B — set of batches, indexed by b.
F — set of job families, indexed by f.
Parameters
s_j — size of job j, j ∈ J.
r_j — ready time of job j, j ∈ J.
p_jm — processing time of job j on machine m, j ∈ J, m ∈ M.
C_m — capacity of machine m, m ∈ M.
z_jf — binary parameter, z_jf = 1 if job j belongs to family f, and 0 otherwise.
Variables
S_bm — starting time of batch b on machine m.
P_bm — processing time of batch b on machine m.
x_jb — binary variable, x_jb = 1 if job j is assigned to batch b, and 0 otherwise.
y_bm — binary variable, y_bm = 1 if batch b is processed on machine m, and 0 otherwise.
δ_bf — binary variable, δ_bf = 1 if batch b belongs to family f, and 0 otherwise.
C_max — makespan of the schedule.
The mathematical model of the problem can be formulated as follows:
Minimize C_max (1)
subject to
∑_{b∈B} x_jb = 1,  ∀ j ∈ J (2)
∑_{m∈M} y_bm = 1,  ∀ b ∈ B (3)
∑_{j∈J} s_j x_jb y_bm ≤ C_m,  ∀ b ∈ B, m ∈ M (4)
S_bm ≥ r_j x_jb y_bm,  ∀ j ∈ J, b ∈ B, m ∈ M (5)
S_bm ≥ S_(b−1)m + P_(b−1)m,  ∀ b ∈ B, b > 1, m ∈ M (6)
P_bm ≥ p_jm x_jb y_bm,  ∀ j ∈ J, b ∈ B, m ∈ M (7)
C_max ≥ S_bm + P_bm,  ∀ b ∈ B, m ∈ M (8)
x_jb z_jf ≤ δ_bf,  ∀ j ∈ J, b ∈ B, f ∈ F (9)
δ_bf ≤ ∑_{j∈J} x_jb z_jf,  ∀ b ∈ B, f ∈ F (10)
δ_bf ∈ {0, 1},  ∀ b ∈ B, f ∈ F (11)
x_jb ∈ {0, 1},  ∀ j ∈ J, b ∈ B (12)
y_bm ∈ {0, 1},  ∀ b ∈ B, m ∈ M (13)
Objective (1) is minimizing the makespan. Constraint sets (2) and (3) are assignment constraints, which determine jobs assigned to batches and, subsequently, batches assigned on machines. Constraint set (4) ensures that the size of a batch processed on the machine does not exceed its capacity. Constraints (5) and (6) define the starting time of a batch, which is the maximum value between the latest arrival job and the completion time of the previous batch. Constraint (7) defines the processing time of a batch. Constraint (8) defines the makespan as the completion time of the last batch. Constraints (9) and (10) ensure that each batch only belongs to one family. The types and bounds of variables are defined by constraints (11) to (13).

2.2. Lower Bound

The work in [27] presented an algorithm to compute a lower bound (LB) for scheduling unrelated BPMs with unequal job sizes, arbitrary job ready times, and compatible job families so as to minimize makespan. In this paper, we adapt this lower bound to the UPBPMIJF. For ease of explanation, assume there are only two different machine capacities, denoted C_1 and C_2 (with C_1 ≤ C_2). The number of machines with capacity C_i is denoted m_i (i = 1, 2), and the machines can be grouped into two subsets, M_1 and M_2. Similarly, the jobs can be divided into two subsets, J_1 and J_2, where the jobs in J_1 can be scheduled on any machine in M_1 ∪ M_2, while the jobs in J_2 can only be scheduled on machines in M_2. One lower bound for the problem is max_{j∈J} {r_j + min_{m∈M} p_jm}. Other lower bounds can be obtained by cutting each job j into p_j × s_j unit jobs. For example, a lower bound can be obtained by considering the unitized jobs of J_2 processed on the machines in M_2. Since the jobs come from different families, let J_2^f denote the jobs in J_2 of family f; a lower bound on the number of batches of the family is ⌈∑_{j∈J_2^f} s_j / (m_2 C_2)⌉. Therefore, a lower bound on the total processing time of all the jobs in family f is ⌈∑_{j∈J_2^f} s_j / (m_2 C_2)⌉ × min_{m∈M_2} p_jm. The earliest time that these batches can start is min_{j∈J_2} r_j. Thus, a possible lower bound is ∑_{f∈F} ⌈∑_{j∈J_2^f} s_j / (m_2 C_2)⌉ × min_{m∈M_2} p_jm + min_{j∈J_2} r_j. The lower bound is given formally as follows.
The LB is obtained by computing q + 1 lower bounds, where q is the number of distinct machine capacities (q = 2 in the example above):
LB = max{LB_0, LB_1, …, LB_q} (14)
where
LB_0 = max_{j∈J} { r_j + min_{m∈M} p_jm } (15)
LB_1 = ∑_{f∈F} ⌈ ∑_{j∈J_q^f} s_j · min_{m∈M_q} p_jm / (m_q C_q) ⌉ + min_{j∈J_q} r_j (16)
LB_2 = ∑_{f∈F} ⌈ ∑_{j∈J_{q−1}^f ∪ J_q^f} s_j · min_{m∈M_{q−1}∪M_q} p_jm / (m_{q−1} C_{q−1} + m_q C_q) ⌉ + min_{j∈J_{q−1}∪J_q} r_j (17)
…
LB_q = ∑_{f∈F} ⌈ ∑_{j∈J^f} s_j · min_{m∈M} p_jm / (m_1 C_1 + m_2 C_2 + … + m_q C_q) ⌉ + min_{j∈J} r_j (18)
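The two-capacity bounds can be sketched in a few lines. This is our own illustrative reading of the formulas (data layout and helper names are assumptions, not the authors' code): LB_0 combines ready times with the fastest machine, and the unitized bound counts, per family, how many full batches the jobs of J_2 need on the machines of M_2.

```python
import math

# Illustrative computation of the lower bounds described above for the
# two-capacity case: LB_0, and the unitized-job bound restricted to the
# jobs J_2 and machines M_2. Data layout and names are ours.

def lb0(jobs, machines):
    # LB_0 = max_j { r_j + min_m p_jm }
    return max(j["ready"] + min(j["proc"][m] for m in machines) for j in jobs)

def lb_unitized(jobs2, machines2, m2, C2, families):
    # per family: ceil(total size / (m2 * C2)) batches, each taking at least
    # the family's shortest processing time; plus the earliest ready time in J_2
    total = 0
    for f in families:
        fam = [j for j in jobs2 if j["family"] == f]
        if not fam:
            continue
        n_batches = math.ceil(sum(j["size"] for j in fam) / (m2 * C2))
        p_min = min(j["proc"][m] for j in fam for m in machines2)
        total += n_batches * p_min
    return total + min(j["ready"] for j in jobs2)

jobs2 = [
    {"family": "a", "size": 8, "ready": 1, "proc": {0: 3, 1: 5}},
    {"family": "a", "size": 7, "ready": 2, "proc": {0: 6, 1: 4}},
    {"family": "b", "size": 9, "ready": 3, "proc": {0: 7, 1: 4}},
    {"family": "b", "size": 6, "ready": 1, "proc": {0: 5, 1: 8}},
]
print(lb0(jobs2, [0, 1]))                                      # 7
print(lb_unitized(jobs2, [0, 1], m2=2, C2=10, families="ab"))  # 8
```

For this toy instance the final LB would be max(7, 8) = 8; taking the maximum over several such relaxations is what makes the combined bound tighter than any single one.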

3. Hybrid Large Neighborhood Search Algorithm

MIP methods generally solve only small-scale instances in reasonable time, and heuristic methods are usually used to solve large-scale problems more efficiently. As such, this study presents a hybrid large neighborhood search (HLNS) heuristic to solve the UPBPMIJF. The LNS framework has shown excellent results on discrete/combinatorial optimization problems [31,32], exhibiting powerful global search ability thanks to its large-neighborhood reconstruction mechanism. This study adopts the LNS framework and improves its performance by incorporating tabu and local search strategies tailored to the characteristics of the UPBPMIJF.
To explain the search process of the HLNS intuitively, a small-scale instance with eight jobs and two machines is presented in Figure 1. Figure 1a shows the family f of each job j. After obtaining the initial solution, the destroy operator removes some jobs from it; in the example, j_1 and j_8 are removed, as shown in Figure 1b. Then, the removed jobs are reinserted into existing batches or new batches to obtain a new solution: here, j_1 is inserted as a new batch and j_8 is inserted into an existing batch, as shown in Figure 1c. If no better solution is obtained after a pre-defined number of iterations, a local search algorithm is applied to the solution to swap the positions of two jobs. As shown in Figure 1d, j_1 and j_2 swap positions in an attempt to obtain a better solution.
This section presents the proposed hybrid large neighborhood search (HLNS) method for solving the UPBPMIJF. In particular, a dynamic solution structure for the UPBPMIJF is proposed, which contains the job-to-machine assignment information of each parallel batch. By exploiting this solution structure, problem decomposition can be avoided; therefore, the proposed HLNS can optimize batch formation and batch scheduling simultaneously. We define a solution structure S as follows:
S = [ J_1(1), J_1(2), …, J_1(|B_1|);
      J_2(1), J_2(2), …, J_2(|B_2|);
      ⋮
      J_|M|(1), J_|M|(2), …, J_|M|(|B_|M||) ] (19)
We use M to represent the set of machines and |M| the number of machines. B_m denotes the set of batches assigned to machine m, and |B_m| the number of such batches. The term J_m(b) (m ∈ M, b ∈ B_m) represents the subset of jobs assigned to batch b on machine m, i.e., J_m(b) = { j | j ∈ J, x_jb = 1, y_bm = 1 }. Once the solution structure in (19) is determined, each batch's starting time and processing time can be easily determined, and thus the objective value of the solution can be obtained accordingly. Based on this, we can destroy and repair the solution structure and judge the quality of a scheduling scheme by evaluating the solution structure.
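The claim that the objective follows directly from the structure can be verified with a short sketch. The nested-list encoding below (one ordered batch list per machine) is our own rendering of the structure, with the constraint logic noted inline:

```python
# A minimal Python rendering of the solution structure S (our encoding):
# sol[m] is the ordered list of batches on machine m, each batch a list of
# job ids. Start times, processing times, and the makespan follow directly.

def makespan(sol, ready, proc, size, cap):
    cmax = 0
    for m, batches in enumerate(sol):
        t = 0  # completion time of the previous batch on machine m
        for batch in batches:
            assert sum(size[j] for j in batch) <= cap[m], "capacity violated"
            start = max(t, max(ready[j] for j in batch))   # batch ready time
            t = start + max(proc[j][m] for j in batch)     # batch processing time
        cmax = max(cmax, t)
    return cmax

ready = [0, 2, 1]; size = [2, 3, 2]; cap = [5, 5]
proc = [[4, 6], [3, 5], [7, 2]]   # proc[j][m]
sol = [[[0, 1]], [[2]]]           # jobs 0,1 batched on machine 0; job 2 on machine 1
print(makespan(sol, ready, proc, size, cap))  # 6
```

Machine 0 starts its batch at time 2 (job 1's ready time) and finishes at 6; machine 1 finishes at 3; the makespan is 6. Destroy and repair operators only have to manipulate these nested lists.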
The proposed HLNS seeks the optimal solution by manipulating the solution structure. At each iteration, d jobs are removed by a tabu-based removal operator and then reinserted into the solution by a repair operator. Every it iterations, the quality of the solution is checked; if the best solution has not improved during the past it iterations, a local search algorithm is applied to the current solution. The framework of the HLNS is depicted in Algorithm 1. The algorithm starts from an initial solution and searches for better solutions through destroy-and-repair operations iteratively. In each iteration, the reconstructed solution is examined together with the original solution by an acceptance criterion borrowed from simulated annealing (T is the temperature, α the cooling rate), and the accepted solution is preserved for the next iteration. The process stops when a predefined termination condition is met.
Algorithm 1: Hybrid large neighborhood search for the UPBPMIJF
Require: A feasible initial solution x 0
1: Set x ¯ = x 0 ,   x * = x 0 ,   T = T 0 . Initialize the tabu list
2: while stopping criteria not met do
3:     x ^ = Tabu-based Tardiness-related removal ( x ¯ )
4:     x = Greedy insertion ( x ^ )
5:        if local search criteria met then
6:                    x = Local search ( x )
7:                    x = x
8:        end if
9:        if  O b ( x ) < O b ( x ¯ ) then // O b ( x ) denotes the objective value of solution x
10:                    x ¯ = x ; //update the current solution
11:                    if  O b ( x ) < O b (   x * )   then
12:                                 x * = x ; //update the best solution
13:                   end if
14:    else
15:                    Add the removed jobs to the tabu list with a certain probability p t
16:                    if  random(0, 1) < e^{(Ob(x̄) − Ob(x))/T}  then
17:                                 x ¯ = x ; //a worse solution is accepted
18:                    end if
19:    end if
20:        T = αT
21: end while
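The control flow of Algorithm 1 can be sketched as a generic loop with pluggable operators. This is a schematic re-implementation under our own simplifications (the tabu list has no aging or acceptance probability p_t, and the operator names are placeholders for the components of Sections 3.1 to 3.3), not the authors' code:

```python
import math
import random

# Schematic sketch of Algorithm 1's control flow with stub operators.

def hlns(x0, objective, destroy, repair, local_search,
         T0=100.0, alpha=0.95, it_limit=20, max_iter=200, seed=0):
    rng = random.Random(seed)
    x_cur = x_best = x0
    tabu = set()          # simplified tabu list: no aging, no probability p_t
    T = T0
    stall = 0             # iterations since the last improvement
    for _ in range(max_iter):
        partial, removed = destroy(x_cur, tabu)
        x_new = repair(partial, removed)
        if stall >= it_limit:              # local search criterion (line 5)
            x_new = local_search(x_new)
            stall = 0
        if objective(x_new) < objective(x_cur):
            x_cur = x_new
            stall = 0
            if objective(x_new) < objective(x_best):
                x_best = x_new             # update the best solution
        else:
            tabu.update(removed)
            # simulated-annealing acceptance of a worse solution
            if rng.random() < math.exp((objective(x_cur) - objective(x_new)) / T):
                x_cur = x_new
            stall += 1
        T *= alpha                         # geometric cooling, T = alpha * T
    return x_best

# Toy check: "solutions" are tuples of numbers, the objective is their sum,
# destroy pops the first element and repair reinserts it decreased by one.
destroy = lambda x, tabu: (x[1:], (x[0],))
repair = lambda partial, removed: partial + tuple(max(v - 1, 0) for v in removed)
best = hlns((3, 2), sum, destroy, repair, lambda x: x)
print(best, sum(best))  # (0, 0) 0
```

The toy operators drive the objective monotonically to zero; in the real algorithm, destroy/repair act on the batch structure of Section 3.2 and can also worsen a solution, which is why the annealing acceptance matters.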

3.1. Initialization

In this work, the initial solution is constructed by a greedy algorithm (called H3_FF) proposed in [27] with some adjustments. The procedure for obtaining an initial solution structure is listed as follows:
Step 1: Sort the jobs in ascending order of their ready times to get list L ;
Step 2: Assign the first job of list L to the best insertion position with the smallest increment of objective value. Then remove the job from list L ;
Step 3: Repeat Step 2 until list L is empty; an initial solution is then obtained.
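Steps 1 to 3 can be sketched as follows. The batch encoding (machine, family, member list), the evaluation routine, and the toy data are our own simplifications adapted from the H3_FF idea rather than a copy of the algorithm in [27]:

```python
# Simplified sketch of the construction heuristic (Steps 1-3): jobs sorted by
# ready time are placed at the feasible position with the smallest makespan
# increase, either joining a same-family batch or opening a new batch.

def evaluate(batches, jobs, nmach):
    done = [0] * nmach                       # completion time per machine
    for m, _fam, members in batches:
        start = max(done[m], max(jobs[j]["ready"] for j in members))
        done[m] = start + max(jobs[j]["proc"][m] for j in members)
    return max(done)

def greedy_initial(jobs, nmach, cap):
    order = sorted(range(len(jobs)), key=lambda j: jobs[j]["ready"])  # Step 1
    batches = []
    for j in order:                                                   # Steps 2-3
        best_obj, best_trial = float("inf"), None
        # option 1: join an existing batch of the same family, capacity permitting
        for i, (m, fam, members) in enumerate(batches):
            if fam == jobs[j]["family"] and \
               sum(jobs[k]["size"] for k in members) + jobs[j]["size"] <= cap[m]:
                trial = batches[:i] + [(m, fam, members + [j])] + batches[i + 1:]
                obj = evaluate(trial, jobs, nmach)
                if obj < best_obj:
                    best_obj, best_trial = obj, trial
        # option 2: open a new batch on any machine that can hold the job
        for m in range(nmach):
            if jobs[j]["size"] <= cap[m]:
                trial = batches + [(m, jobs[j]["family"], [j])]
                obj = evaluate(trial, jobs, nmach)
                if obj < best_obj:
                    best_obj, best_trial = obj, trial
        batches = best_trial
    return batches

jobs = [
    {"family": "a", "size": 5, "ready": 0, "proc": [4, 6]},
    {"family": "a", "size": 5, "ready": 0, "proc": [4, 6]},
    {"family": "b", "size": 5, "ready": 1, "proc": [9, 2]},
]
sol = greedy_initial(jobs, nmach=2, cap=[10, 10])
print(sol, evaluate(sol, jobs, 2))  # makespan 4
```

The two same-family jobs are batched together on machine 0 and the family-b job goes alone to machine 1, where it is fast, for a makespan of 4; batching or a new batch is chosen purely by the smallest objective increment, as in Step 2.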

3.2. Destroy and Repair Operators

The core of the HLNS is the destroy and repair operations, which directly determine the search efficiency. At each iteration, d jobs are removed from the current solution S by a specially designed tabu-based tardiness-related removal operator, yielding a partial solution S_p; these d jobs are then added back to S_p according to a certain rule by a greedy insertion operator. The key mechanisms are detailed in the following.

3.2.1. Tabu-Based Tardiness-Related Removal Operator

The relatedness removal operator was first proposed by Shaw [26] for vehicle routing problems; it removes similar customer orders according to an evaluation criterion. In this paper, we take the tardiness of jobs as the metric of relatedness. The tardiness of job i is defined as T_i = |c_i − r_i|, where c_i represents the completion time of job i and r_i its ready time. The relatedness metric R(i, j) is then computed by Formula (20); a smaller value of R(i, j) indicates a higher similarity between the two jobs.
R(i, j) = |T_i − T_j| (20)
The first job i to be removed is chosen at random, and the other jobs are selected according to their similarity to this job. To enhance efficiency and avoid short-term repetition in the search, a tabu strategy is incorporated into the tardiness-related removal operator. Specifically, if a worse solution is produced, the removed jobs are added to a tabu list with a predefined probability p_t and prohibited from being removed for θ iterations. The procedure of the tabu-based tardiness-related removal is described in Algorithm 2.
Algorithm 2: Tabu-based Tardiness-related removal
Require: Current solution S , destruction level d and tabu list
1: Remove a random job i that is not in the tabu list from S , and set R_j = {i} // R_j represents the set of removed jobs
2: while  |R_j| < d  do
3:          Remove the job j = argmin_{j ∈ J\R_j, j ∉ tabu list} R(i, j) from S
4:           R j = R j { j }
5: end while
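Algorithm 2 can be sketched compactly. The tardiness values T are assumed precomputed from the current solution (T_i = |c_i − r_i|); the mapping layout and function name below are our own illustration:

```python
import random

# Sketch of Algorithm 2: draw a seed job at random from the non-tabu jobs,
# then repeatedly remove the job most similar to the seed by R(i, j) = |T_i - T_j|.

def tardiness_removal(T, d, tabu, rng):
    candidates = [j for j in T if j not in tabu]
    seed = rng.choice(candidates)
    removed = {seed}
    while len(removed) < d:
        j = min((k for k in candidates if k not in removed),
                key=lambda k: abs(T[seed] - T[k]))
        removed.add(j)
    return removed

T = {0: 5, 1: 6, 2: 20, 3: 21, 4: 7}           # job id -> tardiness
removed = tardiness_removal(T, d=3, tabu={2}, rng=random.Random(1))
print(removed)
```

Because the seed is random, different calls remove different clusters of similar-tardiness jobs (here either the {5, 6, 7} cluster or jobs near tardiness 21), which is what diversifies the destroyed neighborhoods across iterations.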

3.2.2. Greedy Insertion

The insertion operator makes two decisions when reinserting the removed jobs: (1) the insertion position of each removed job; (2) the insertion order of the removed jobs. Each job j ∈ R_j can be inserted into an existing batch or inserted into S_p as a newly formed batch. The implementation of greedy insertion for the UPBPMIJF is described in the following steps:
Step 1: For the first job in list R j and the corresponding alternative insertion position, examine whether the job can be inserted into the position without violating the capacity. The job will be inserted into the best position with minimal makespan increment. Then the job is removed from list R j ;
Step 2: Repeat Step 1 until the list R j is empty.
In each iteration, the removed job is tentatively inserted into every feasible position (i.e., the existing batches and newly formed batches), and the objective increment is calculated whenever the batch size does not exceed the machine capacity. It is worth mentioning that a newly formed batch is inserted at the appropriate position on the machine so that the batches remain in ascending order of their ready times, which reduces repeated computation. Each job has multiple feasible positions, and among these, the best insertion position (with minimal makespan increment) is selected.
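Keeping each machine's batches sorted by ascending batch ready time means a newly formed batch can be placed by binary search. A toy sketch, with batches represented as (ready_time, batch_id) pairs (our encoding, not the paper's):

```python
import bisect

# The machine's batch list stays sorted by batch ready time, so inserting a
# newly formed batch is a sorted insertion rather than a full re-sort.

machine_batches = [(0, "b1"), (4, "b2"), (9, "b3")]
bisect.insort(machine_batches, (6, "b4"))   # new batch with ready time 6
print(machine_batches)  # [(0, 'b1'), (4, 'b2'), (6, 'b4'), (9, 'b3')]
```

Only the batches after the insertion point need their start times recomputed, which is the repeated computation the ordering avoids.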

3.3. Local Search Procedure

Local-search-based algorithms have been successfully applied to various combinatorial optimization problems [33]. In particular, when no better solution is obtained after it iterations, local search (LS) is applied to the newly generated solution after the destroy-and-repair phases to avoid being trapped in local optima. The LS swaps two randomly selected jobs in the current solution as long as the two jobs meet the exchange requirements (i.e., the machine capacities cannot be violated, and the two jobs have to be in the same family). If a better solution is obtained after the local search, the newly generated solution is preserved as the current solution and the iteration counter is reset to 0; otherwise, the iteration counter is incremented by 1. The local search does not evaluate the entire exchange neighborhood; it stops once the iteration counter reaches n (the number of jobs). The LS procedure is presented in Algorithm 3.
Algorithm 3: Local search
Require: Current solution x
1: Set iteration counter = 0;
2: While iteration counter < n do
3:          Select a random job family f
4:          Select two random jobs j 1 and j 2 from different batches of family f
5:          if exchange condition met then // the corresponding machines' capacities cannot be violated
6:                    x = exchange jobs j 1 and j 2 in solution x ;
7:                   if  O b ( x ) < O b ( x ) then
8:                          x = x ;
9:                            iteration counter = 0;
10:                   else
11:                            iteration counter = iteration counter + 1;
12:                   end if
13:          end if
14: end while
15: Return x ;
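Algorithm 3 can be sketched on the same illustrative batch encoding used above (machine, family, member list); the evaluation routine, names, and toy data are our own assumptions, not the authors' code:

```python
import random

# Sketch of Algorithm 3: swap two random same-family jobs from different
# batches; keep the swap only if it is feasible and improves the objective.

def local_search(batches, jobs, cap, objective, rng, n):
    counter = 0
    while counter < n:
        fam = rng.choice(sorted({f for _, f, _ in batches}))
        fam_idx = [i for i, (_, f, _) in enumerate(batches) if f == fam]
        if len(fam_idx) < 2:
            counter += 1
            continue
        b1, b2 = rng.sample(fam_idx, 2)
        j1, j2 = rng.choice(batches[b1][2]), rng.choice(batches[b2][2])
        new = [(m, f, list(mem)) for m, f, mem in batches]   # deep-ish copy
        new[b1][2][new[b1][2].index(j1)] = j2
        new[b2][2][new[b2][2].index(j2)] = j1
        feasible = all(sum(jobs[j]["size"] for j in mem) <= cap[m]
                       for m, _, mem in new)
        if feasible and objective(new) < objective(batches):
            batches, counter = new, 0      # improvement: keep, reset counter
        else:
            counter += 1
    return batches

def makespan(batches, jobs, nmach=2):
    done = [0] * nmach
    for m, _f, mem in batches:
        start = max(done[m], max(jobs[j]["ready"] for j in mem))
        done[m] = start + max(jobs[j]["proc"][m] for j in mem)
    return max(done)

# j0 is fast on machine 0 and j1 on machine 1; the start solution mismatches them
jobs = [{"family": "a", "size": 5, "ready": 0, "proc": [2, 9]},
        {"family": "a", "size": 5, "ready": 0, "proc": [9, 2]}]
start = [(0, "a", [1]), (1, "a", [0])]
obj = lambda bs: makespan(bs, jobs)
better = local_search(start, jobs, [10, 10], obj, random.Random(0), n=5)
print(obj(better))  # 2 after swapping j0 and j1
```

The single available swap repairs the job-to-machine mismatch and drops the makespan from 9 to 2; once no improving swap is found for n consecutive tries, the procedure returns, matching the early-stop rule in the text.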

3.4. Acceptance Criterion

After searching for a better solution, an acceptance criterion is applied to determine whether the new solution S_new is accepted as the current solution S for the next iteration. This algorithm uses a simulated annealing acceptance criterion, which accepts a worse solution with probability exp((Ob(S) − Ob(S_new))/T). The temperature parameter T influences the acceptance probability; it starts at T_0 and decreases every iteration at a cooling rate α.
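The acceptance rule reduces to a few lines; the helper name below is ours:

```python
import math
import random

# Simulated-annealing acceptance: improving solutions are always accepted;
# a worse solution is accepted with probability exp(-(increase) / T).

def accept(obj_cur, obj_new, T, rng):
    if obj_new < obj_cur:
        return True
    return rng.random() < math.exp((obj_cur - obj_new) / T)

rng = random.Random(0)
print(accept(10.0, 9.0, T=5.0, rng=rng))    # True: improvement is always kept
print(accept(10.0, 12.0, T=5.0, rng=rng))   # True or False, prob. exp(-0.4)
```

At high T almost any worsening move is accepted (the exponent is near 0), while as T cools toward 0 the rule degenerates to pure hill-climbing, which is what lets the search escape local optima early and intensify later.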

3.5. Stopping Conditions

In this algorithm, two stopping conditions are used to force the termination of the proposed HLNS: (i) When the makespan value of the best solution achieved by HLNS equals the lower bound or the optimal makespan value obtained by a commercial optimization solver; (ii) when the number of iterations reaches a pre-specified number. If either of these two stopping conditions is met, the HLNS algorithm stops.

4. Numerical Experiments

In this section, we start by describing the design of the experiments. Then we explain how we determine the parameters of the proposed HLNS. The computational results are recorded and analyzed in Section 4.3 and Section 4.4, and the advantages of the proposed tabu and local search strategies are discussed at the end of this section.

4.1. Design of Experiments

Two sets of instances are used to validate the efficacy and efficiency of the proposed HLNS heuristic. The jobs in Set 1 are compatible, while those in Set 2 are incompatible. The characteristics of the two datasets are described as follows.
Set 1. In [27], the authors provided benchmark instances of the scheduling problem on unrelated parallel BPMs with unequal job sizes, dynamic job arrivals, and compatible job families, available at http://www.dpi.ufv.br/projetos/scheduling/upbpm.htm (accessed on 1 April 2022). The benchmark provides 10 sub-sets of instances, and the instances in each sub-set obey the same random distribution. This paper selects only the instances from the fifth sub-set to verify the effectiveness of the proposed algorithm, resulting in 72 small-scale instances and 108 large-scale instances. We briefly introduce the generation rules of the instances. Each instance of Set 1 has six characteristics, namely the number of jobs (n), the number of machines (m), job sizes (s_j), job ready times (r_j), machine capacities (C_m), and processing times (p_jm). The processing times p_jm are uniformly distributed over k disjoint but interrelated intervals I_i = [a_i, b_i], where a_1 = 10; a_i = b_{i−1} + 10, i ∈ {2, …, k}; b_i = a_i + 20, i ∈ {1, …, k}. Job ready times r_j are generated based on the processing times and are uniformly distributed in the range U[1, μP], where P = Σ_{j∈J} Σ_{m∈M} p_jm / |M| and μ ∈ {0.05, 0.1, 0.3}. Job sizes s_j are generated according to a uniform distribution, and the machine capacities C_m are randomly generated based on the number of machines m. The factors and characteristics of the instances provided in the literature are shown in Table 1.
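The interval construction and ready-time rule above can be sketched as follows. This is an illustration of the stated rules, not the benchmark generator itself; in particular, how an interval is selected for each p_jm is an assumption here.

```python
import random

def generate_instance(n, m, k, mu, seed=0):
    """Sketch of Set 1's generation rules: each p_jm is drawn uniformly
    from one of k intervals I_i = [a_i, b_i] with a_1 = 10,
    a_i = b_{i-1} + 10, b_i = a_i + 20; ready times r_j ~ U[1, mu * P]
    with P = (sum over jobs and machines of p_jm) / |M|."""
    rng = random.Random(seed)
    intervals, a = [], 10
    for _ in range(k):                      # build I_1, ..., I_k
        b = a + 20
        intervals.append((a, b))
        a = b + 10                          # a_i = b_{i-1} + 10
    p = [[rng.randint(*rng.choice(intervals)) for _ in range(m)]
         for _ in range(n)]
    P = sum(sum(row) for row in p) / m      # average total workload per machine
    r = [rng.uniform(1, mu * P) for _ in range(n)]
    return intervals, p, r
```

For k = 3, the intervals are [10, 30], [40, 60], and [70, 90], so consecutive intervals are separated by gaps of width 10.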
Set 2. The instances in Set 2 are adapted from Set 1, with the attributes of job families generated randomly. The jobs of small-scale instances are divided into three families, whereas the jobs of large-scale instances are divided into four families. Set 2 also contains 72 small-scale instances and 108 large-scale instances. Notably, the instances in Set 2 (with incompatible job families) are generalizations of the instances in Set 1 (with compatible job families): the jobs in Set 2 are divided into multiple job families according to the instance scale, while the remaining parameters (i.e., s_j, r_j, C_m, p_jm) are identical to those in Set 1.

4.2. Parameter Settings

All experiments are coded in Python 3.8 and run on a PC with a 3.0 GHz processor and 8 GB of memory, running 64-bit Windows 10 and a 64-bit version of Gurobi 9.1.2 under an academic license. The MIP model was implemented in Gurobi with a time limit of 1 h (3600 s).
For the HLNS, we test different parameter settings on eight randomly chosen instances in Set 2, one for each number of jobs, with each parameter varied over a certain range. Through these experiments, we found that the performance of the proposed algorithm is not sensitive to some parameters; their values are recorded in Table 2, where Ob(S_ini) represents the objective value of the initial solution. Three parameters contribute more to the performance, namely the number of iterations Π_i, the number of jobs to be removed d, and the length of the tabu list l_t, and these three parameters need to be adjusted according to the scale of the instances. We therefore conduct a factorial experiment with Π_i, d, and l_t, testing each parameter over a certain range and comparing the solution quality under the different settings. We find that drawing d uniformly from U[5, 10], setting l_t to 0.5 × n, and setting Π_i to 10,000 for small-scale instances, and drawing d uniformly from U[10, 20], setting l_t to 0.7 × n, and setting Π_i to 20,000 for large-scale instances, provides a good trade-off between solution quality and computational time.
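The scale-dependent settings above can be expressed as a small configuration helper. The threshold separating small- and large-scale instances (here 100 jobs) is our assumption, inferred from the instance sizes used (20–50 jobs vs. 100–250 jobs).

```python
import random

def hlns_params(n, rng=random, large_from=100):
    """Scale-dependent HLNS parameters: number of removed jobs d,
    tabu-list length l_t, and iteration budget Pi_i.
    large_from is an assumed small/large cutoff, not from the paper."""
    if n < large_from:                 # small-scale instances (20-50 jobs)
        return {"d": rng.randint(5, 10), "l_t": int(0.5 * n), "Pi_i": 10_000}
    return {"d": rng.randint(10, 20), "l_t": int(0.7 * n), "Pi_i": 20_000}
```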

4.3. Comparison between Gurobi, Lower Bound, and HLNS Methods

In the first experiment, the proposed HLNS is compared with the Gurobi solver and the lower bound on the instances in Set 2 (with incompatible job families). The experiments with the MIP model (see Section 3.1) were performed with the commercial solver Gurobi, and the computational time for each instance was limited to 3600 s. However, large-scale instances can seldom be solved to a feasible solution within 3600 s, so for large-scale instances the results of the HLNS are compared with the lower bound. In particular, the HLNS method was executed 10 times for each instance to overcome the stochastic differences in the results.
In the comparative analysis, we record the best and the average solution among 10 trials for each instance. The results are grouped by the number of machines (m), the number of jobs (n), and the combination of job sizes (S_i) and job ready times (R_i), i = 1, 2, 3. S_1, S_2, and S_3 represent instances with small, large, and mixed job sizes, while R_1, R_2, and R_3 denote problems with short, medium, and long ready times. Table 3 and Table 4 present the computational results for instances with 20–30 jobs and 40–50 jobs, respectively.
For the Gurobi solver, the objective value of the solution obtained within 3600 s and the Gap (%) are recorded. The Gap returned by Gurobi is computed as Gap (%) = |O_lb − Ob(S)| / Ob(S) × 100, where O_lb is the relaxed lower bound of the MIP model and Ob(S) is the objective value of the obtained solution. Among the 72 small-scale instances, 27 are solved optimally within 3600 s by Gurobi. However, as the scale (the number of jobs) of the instances rises, the performance of Gurobi declines significantly with respect to solution quality and computational time; for some instances, the Gaps obtained by Gurobi are 100%. For all the instances that Gurobi solves optimally, the HLNS also yields optimal solutions, and for those instances that Gurobi cannot solve optimally, the HLNS finds better solutions with a lower computational burden. In addition, the standard deviation (SD) of the makespan values obtained by the HLNS over the 10 trials is calculated for each instance: over 70% of the SDs (51 out of 72) are 0, and the average SD is less than 0.60.
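For reference, the Gap formula used here is simply:

```python
def gurobi_gap(ob_lb, ob_s):
    """Gap (%) = |O_lb - Ob(S)| / Ob(S) * 100, where ob_lb is the relaxed
    lower bound and ob_s the objective of the incumbent solution."""
    return abs(ob_lb - ob_s) / ob_s * 100.0
```

A Gap of 100% corresponds to the degenerate case where the relaxed bound carries essentially no information relative to the incumbent.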
In the experiments, large-scale instances can hardly be solved by Gurobi to a feasible solution; therefore, we only list the results of the lower bound and the HLNS for large-scale instances. Table 5 and Table 6 show the experimental results for 100–150 jobs and 200–250 jobs, respectively. From the presented results, 18 instances reach the lower bound; for the remaining instances, the results obtained by the HLNS are far from the lower bound. It can be concluded that the lower bound is not very tight for this problem, and it is mainly used as a termination condition (see Section 3.5). Additionally, we calculate the SDs of the 108 large-scale instances, among which 57 instances have SDs of 0. The average SD is 3.44, which is quite small compared to the corresponding objective values; even the largest SD (20.28) is only 1.23% of the corresponding average makespan.
Figure 2a shows the average computational time of the HLNS for each small-scale group (with 20, 30, 40, and 50 jobs), which indicates that the computational time of the HLNS is considerably shorter than that of the Gurobi solver. Figure 2b shows the average computational time for each large-scale group (with 100, 150, 200, and 250 jobs). The average computational time of the HLNS does not exceed 10 s for small-scale instances and is shorter than 260 s for large-scale instances. To sum up, the HLNS shows its advantage in stably finding optimal or near-optimal solutions within a short computing time for the instances of Set 2 (with incompatible job families).

4.4. Comparison among CPLEX, 4 Meta-Heuristics, and HLNS

In this subsection, we present the experimental results for the instances in Set 1 (compatible job families), where jobs from different families can be processed together. We compare the results of CPLEX, IG [27], SA [28], GA [29], ACO [30], and the HLNS; the results of CPLEX and the four existing meta-heuristics are available at http://www.dpi.ufv.br/projetos/scheduling/upbpm.htm (accessed on 1 April 2022).
First, we list the numerical results for small-scale instances. Table 7 and Table 8 record the best solutions in 10 trials of each algorithm for instances with 20–30 jobs and 40–50 jobs, respectively. Therein, 29 out of 72 instances are solved optimally by CPLEX, of which 29 (100%), 29 (100%), 26 (89.66%), 27 (93.10%), and 21 (72.41%) optimal solutions are matched by HLNS, IG, SA, GA, and ACO, respectively. For the 72 small-scale instances, HLNS, IG, SA, GA, and ACO obtain 72, 70, 57, 47, and 39 best-known solutions, respectively. The proposed HLNS finds better solutions than IG for two instances; the average improvement is 0.69%, and the maximum improvement is 0.74%.
The average results of the five meta-heuristics (the proposed HLNS and the other four existing meta-heuristics) over 10 trials are depicted in Figure 3 (the specific results of average solutions in 10 trials will be given in Appendix A). Note that the effect of the number of machines ( m ) is not considered when presenting the average results. For each category in Figure 3 ( S i R i , i = 1 , 2 , 3 ), the recorded value represents the average value of the categories of different machines (the result is the average result of the same category S i R i among m = 2 and m = 3 for small-scale instances). Figure 3 illustrates distinctly that HLNS achieves better average results over 10 trials than the other four meta-heuristics for almost all instances. In particular, HLNS, IG, SA, GA, and ACO reach 68, 57, 39, 41, and 32 instances with the best average solutions for a total of 72 small-scale instances. Among all instances, the HLNS outperforms IG for 15 instances with better average results, with an average improvement of 1.00%, while IG performs better than the HLNS for only 4 instances, with an average improvement of 0.52%.
For large-scale instances, we compare the results generated by the five algorithms HLNS, IG, SA, GA, and ACO. Table 9 and Table 10 present the best solutions in 10 trials for instances with 100–150 jobs and 200–250 jobs, respectively. For the 108 large-scale instances, HLNS, IG, SA, GA, and ACO reach 100, 80, 58, 53, and 43 best-known solutions, respectively. In particular, the HLNS outperforms IG, SA, GA, and ACO on 28, 49, 55, and 65 instances, respectively, while IG outperforms the HLNS on only eight instances. On average, the improvement of the HLNS over IG is 0.28%, and the improvement becomes more significant as the scale of the problem increases.
As for the average results over 10 trials for large-scale instances (see Figure 4), HLNS, IG, SA, GA, and ACO yield 96, 69, 51, 40, and 38 instances with the best average solutions, respectively. Similarly, the influence of the number of machines (m) is not reflected in Figure 4; each recorded result is the average over the same category S_iR_i for m = 3, m = 4, and m = 5. The specific results are given in Appendix A. In particular, the proposed HLNS outperforms IG on 39 instances, while IG outperforms the HLNS on 12 instances, and the improvement of the HLNS over IG is 0.89% on average. In conclusion, the HLNS outperforms the other four algorithms in solving unrelated parallel BPMs with compatible job families and has a higher probability of obtaining better solutions.
In addition, we have calculated the computational time and SD of HLNS for each instance and present the average value of each scale, as shown in Figure 5. This demonstrates that the average computational time for small-scale instances is less than 10 s, and the average computational time for large-scale instances is less than 270 s. The SD is quite small compared to the value of the makespan. In particular, in 59 out of 72 small-scale instances and 63 out of 108 large-scale instances, SD is equal to 0. Since the specific makespan value obtained by four other meta-heuristics is not given in the literature, the SD cannot be obtained for comparison.

4.5. Discussion of the HLNS

As mentioned earlier, the instances in Set 1 (compatible job families) are special cases of the instances in Set 2 (incompatible job families), where all jobs belong to the same family and can be processed together. Comparing the results of the two subsections above, there is no significant difference between the average computational times for each scale of the two sets of instances. However, the SDs of the incompatible instances (Set 2) are generally larger than those of the compatible instances (Set 1): 54 out of the 180 instances in Set 2 have larger SDs than their compatible counterparts, while only 36 out of the 180 instances in Set 1 have larger SDs than Set 2. The reason for this phenomenon is that the scheduling problem with incompatible job families tends to be more difficult to solve, so the HLNS is less stable on incompatible instances than on compatible ones.
In addition, in order to analyze the effectiveness of the proposed tabu-removal operator and local search mechanism, we compare the performances of pure LNS, tabu-based LNS (TLNS), and HLNS under the same parameter settings and stopping conditions. Table 11 shows the best and average results, average times, and SDs in 10 trials of instances with the smallest scale (n = 20) and instances with the largest scale (n = 250). All results are averaged over different numbers of machines. From the table, HLNS and TLNS always achieve better results than LNS, which shows that the tabu strategy can effectively guide the search toward better regions. Meanwhile, although TLNS outperforms HLNS on the best results for a few instances, HLNS is generally superior in finding high-quality solutions, which indicates that incorporating a local search strategy helps the HLNS avoid falling into local optima. Furthermore, the computational time of HLNS is longer than that of TLNS and LNS, and LNS costs less time than TLNS for most categories of instances, because the tabu-removal and local search procedures require extra time.
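To make the LNS/TLNS/HLNS comparison concrete, the following is a toy version of the overall flow (tabu-restricted destroy, greedy repair, simulated-annealing acceptance) on a plain parallel-machine makespan problem. It deliberately omits batching, job sizes, families, and the local search step, and is purely an illustration of the loop structure, not the paper's algorithm.

```python
import math
import random

def hlns_toy(proc, machines=2, d=2, lt=4, iters=2000, T0=10.0, alpha=0.99, seed=1):
    """Toy destroy/repair loop: remove d non-tabu jobs, reinsert each on the
    least-loaded machine, and accept via the SA criterion of Section 3.4."""
    rng = random.Random(seed)
    jobs = list(range(len(proc)))
    cur = {j: j % machines for j in jobs}            # round-robin initial solution

    def makespan(a):
        loads = [0] * machines
        for j, mch in a.items():
            loads[mch] += proc[j]
        return max(loads)

    best = dict(cur)
    tabu = []                                        # recently removed jobs
    T = T0
    for _ in range(iters):
        free = [j for j in jobs if j not in tabu]    # tabu-removal operator
        removed = rng.sample(free, min(d, len(free)))
        tabu = (tabu + removed)[-lt:]                # fixed-length tabu list
        cand = dict(cur)
        for j in removed:                            # destroy
            del cand[j]
        for j in removed:                            # greedy repair
            loads = [0] * machines
            for jj, mch in cand.items():
                loads[mch] += proc[jj]
            cand[j] = loads.index(min(loads))
        # SA acceptance: take no-worse moves, worse ones with decaying probability
        if makespan(cand) <= makespan(cur) or \
           rng.random() < math.exp((makespan(cur) - makespan(cand)) / T):
            cur = cand
        if makespan(cur) < makespan(best):
            best = dict(cur)
        T *= alpha
    return makespan(best)
```

Dropping the tabu list (lt = 0) recovers plain LNS, which is the sense in which the tabu strategy steers consecutive destroy steps away from the jobs just reinserted.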

5. Conclusions

This paper studied the unrelated parallel batch processing machine scheduling problem with non-identical job sizes, dynamic job arrivals, and incompatible job families. A mixed integer programming (MIP) model is proposed to optimally solve small-scale problems, and a hybrid large neighborhood search (HLNS) algorithm is proposed to solve large-scale problems. The proposed hybrid heuristic follows the basic structure of a large neighborhood search and incorporates tabu and local search strategies to guide the search toward better regions and enhance the local search ability. Two sets of instances with up to 250 jobs and five machines, including 180 benchmark compatible job family instances and 180 newly generated incompatible job family instances, were used in the experiments to verify the effectiveness of the proposed model and heuristic, with results compared against existing methods.
For the 180 instances with incompatible job families, the proposed MIP model can optimally solve small-scale instances quickly, while it can hardly yield a satisfactory result within a reasonable time for large-scale instances. In contrast, the proposed HLNS obtains high-quality solutions efficiently for both small- and large-scale instances, and it performs stably over 10 trials, indicating that the HLNS is efficient and stable in solving problems with incompatible job families. For the 180 instances with compatible job families, the HLNS updated the best-known solutions for two small-scale and 27 large-scale instances, respectively. The comparison results show that the proposed HLNS outperforms the four existing algorithms (IG, GA, SA, ACO), especially for large-scale instances; meanwhile, over 65% of the instances had SD values of 0, which shows the stability of the proposed HLNS. The numerical results demonstrate that the proposed HLNS outperforms the commercial optimization solver and the other meta-heuristics, indicating that it can efficiently and stably solve both compatible and incompatible job family problems. Furthermore, the comparison of LNS, TLNS, and HLNS shows that the HLNS always finds better-quality solutions and performs more stably than LNS and TLNS, which indicates the effectiveness of the tabu and local search strategies.
There are several possible pathways to our future research. One is to develop more efficient exact methods for the medium- and large-scale instances of scheduling problems on unrelated parallel batch processing machines with various considerations. Another direction is to consider job-shop and flow-shop environments with batch processing machines. Furthermore, it would also be interesting to tackle the batch processing scheduling problem in stochastic environments.

Author Contributions

Conceptualization, B.J. and X.X.; Methodology, B.J. and X.X.; Software, X.X.; Validation, B.J. and X.X.; Formal analysis, B.J. and X.X.; Investigation, B.J. and X.X.; Resources, B.J.; Data curation, X.X.; Writing—original draft, X.X.; Writing—review & editing, B.J., S.S.Y. and G.W.; Visualization, X.X.; Supervision, B.J. and S.S.Y.; Project administration, B.J.; Funding acquisition, B.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Natural Science Foundation of China under Grant No. 72001216, Natural Science Foundation of Hunan Province, China under Grant No. 2020JJ5780, Central South University under Grant No. 202045007.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all individual participants included in the study.

Data Availability Statement

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Appendix A

Here, we provide the average results over 10 trials for the instances in Set 1. Table A1 and Table A2 record the average solutions of each algorithm for instances with 20–30 jobs and 40–50 jobs, respectively. Table A3 and Table A4 record the average solutions for instances with 100–150 jobs and 200–250 jobs, respectively. Table A5 presents the SD values of the makespan in 10 trials of the HLNS for the instances in Set 1.
Table A1. Average results for instances with n = 20 and n = 30 .
m | Category | n = 20 (HLNS, IG, SA, GA, ACO) | n = 30 (HLNS, IG, SA, GA, ACO)
2 | S1R1 | 90, 93.3, 95.3, 97, 92.2 | 110.8, 111.1, 123.8, 128.5, 118.3
2 | S1R2 | 98, 100.2, 113.2, 105.3, 98 | 134.8, 135, 146.3, 146.7, 140.7
2 | S1R3 | 209, 209, 209, 209, 209 | 339, 339, 339, 339, 339
2 | S2R1 | 475, 475, 475, 475, 475 | 608, 608, 608, 608, 608
2 | S2R2 | 418, 418, 418, 418, 418 | 575, 575, 575, 575, 575
2 | S2R3 | 386, 386, 386, 386, 397 | 588, 588, 588, 588, 588
2 | S3R1 | 245, 245, 245, 249, 245 | 523, 523, 523, 523, 523
2 | S3R2 | 278, 278, 278, 278, 278 | 434, 434, 434, 434, 434
2 | S3R3 | 300, 303.7, 306.8, 310.7, 320 | 384, 385.5, 388.2, 389, 391
3 | S1R1 | 80, 80, 85, 80, 94 | 99.5, 98, 103.5, 109, 98
3 | S1R2 | 120, 120, 120, 120, 120 | 170, 170, 170, 170, 170
3 | S1R3 | 319, 319, 319, 319, 319 | 431, 431, 431, 431, 431
3 | S2R1 | 259, 259, 259, 259, 260.5 | 477, 477, 477, 477, 499
3 | S2R2 | 308, 308, 315.5, 334.7, 330.3 | 378, 378, 394.3, 406, 401.7
3 | S2R3 | 291, 291, 291, 291, 306 | 624, 624, 629.3, 624, 634.5
3 | S3R1 | 524, 524, 524, 524, 524 | 244.5, 244, 257.3, 260, 258.5
3 | S3R2 | 272, 272, 272, 272, 342 | 358.4, 360.3, 372.7, 387, 378.8
3 | S3R3 | 322, 322, 322, 322, 335 | 533, 533, 533, 533, 542
Note: The bold numbers denote that the problem is solved optimally or reaches the best-known solution by the corresponding algorithm.
Table A2. Average results for instances with n = 40 and n = 50 .
m | Category | n = 40 (HLNS, IG, SA, GA, ACO) | n = 50 (HLNS, IG, SA, GA, ACO)
2 | S1R1 | 131.3, 132.1, 143, 149.7, 134.8 | 154.1, 155.2, 175.5, 189.3, 180.8
2 | S1R2 | 161, 162.8, 162.8, 164.3, 164 | 215.4, 218.1, 220.7, 223.2, 227.7
2 | S1R3 | 454, 454, 454, 454, 454 | 541, 541, 541, 541, 541
2 | S2R1 | 623, 623, 625.2, 626.7, 638 | 1097, 1097, 1097, 1097, 1097
2 | S2R2 | 933, 933, 933, 933, 953 | 1017, 1017, 1017, 1017, 1017
2 | S2R3 | 737, 737, 737, 737, 738.2 | 1030, 1030, 1030, 1030, 1030
2 | S3R1 | 778, 778, 778, 778, 778 | 590, 590, 597, 610.3, 590
2 | S3R2 | 514, 514, 519, 524, 516.2 | 827, 827, 827, 827, 827
2 | S3R3 | 581, 581, 583.5, 584.2, 583 | 728, 728, 731.2, 736.5, 728
3 | S1R1 | 125, 127, 135.5, 136.2, 134 | 146, 146, 146, 146, 148
3 | S1R2 | 223, 223, 244.5, 248, 234.7 | 259, 259, 261, 263, 263
3 | S1R3 | 604, 604, 604, 604, 604 | 734, 734, 734, 734, 734
3 | S2R1 | 457.5, 457.3, 469.2, 480.8, 477.2 | 1038, 1038, 1038, 1038, 1038
3 | S2R2 | 522.8, 521.2, 538.8, 533.5, 600.8 | 701, 701, 715.7, 718.3, 769
3 | S2R3 | 702.7, 707.7, 723.7, 713.3, 736.3 | 841, 841, 854.8, 841, 841
3 | S3R1 | 397, 397, 410, 420.3, 421 | 586, 586, 586, 586, 615.5
3 | S3R2 | 550, 550, 550, 550, 559.2 | 630.8, 632.8, 644, 644, 674.7
3 | S3R3 | 612, 612, 612, 612, 612 | 812.6, 816.2, 818, 818, 829
Note: The bold numbers denote that the problem is solved optimally or reaches the best-known solution by the corresponding algorithm.
Table A3. Average results for instances with n = 100 and n = 150 .
m | Category | n = 100 (HLNS, IG, SA, GA, ACO) | n = 150 (HLNS, IG, SA, GA, ACO)
3 | S1R1 | 291.6, 309.1, 328, 327.8, 316.8 | 401.1, 421.2, 428.6, 435.4, 425.4
3 | S1R2 | 537, 537, 537, 537, 537 | 772, 772, 782, 772, 772
3 | S1R3 | 1503, 1503, 1503, 1503, 1503 | 2289, 2289, 2289, 2289, 2289
3 | S2R1 | 1628, 1628, 1639.2, 1651.2, 1653 | 2022.3, 2023.3, 2043.8, 2061, 2083.6
3 | S2R2 | 1723, 1723, 1739, 1723, 1785.4 | 2410, 2410, 2420, 2422.8, 2442.6
3 | S2R3 | 1918, 1918, 1918, 1918, 1932 | 2512, 2512, 2512, 2552.8, 2562
3 | S3R1 | 994.4, 995, 1081.4, 1058.4, 1072 | 1426.1, 1436.9, 1517, 1508.4, 1601
3 | S3R2 | 1566, 1566, 1566, 1566, 1584.4 | 1474.4, 1490, 1512.4, 1566, 1567.8
3 | S3R3 | 1581, 1581, 1581, 1581, 1581 | 2347.3, 2344.4, 2364.2, 2381, 2381
4 | S1R1 | 346, 348.4, 353, 353.4, 352.8 | 506, 506, 511, 510, 511
4 | S1R2 | 662, 662, 662, 662, 662 | 994, 994, 994, 994, 994
4 | S1R3 | 1935, 1935, 1935, 1935, 1935 | 2930, 2930, 2930, 2930, 2930
4 | S2R1 | 1921, 1922, 1941.8, 1927, 1937 | 3225, 3225, 3225, 3225, 3348
4 | S2R2 | 1553, 1553.2, 1594.8, 1599, 1674.4 | 3173, 3173, 3173, 3173, 3213
4 | S2R3 | 2180, 2180, 2180, 2187.2, 2207 | 3094.2, 3094, 3094, 3127.4, 3102
4 | S3R1 | 1028.6, 1030, 1056.8, 1078.4, 1101.2 | 2335, 2336, 2365.6, 2391.8, 2397.6
4 | S3R2 | 1297, 1297, 1318.4, 1358.8, 1468.4 | 2318, 2318, 2318, 2338.2, 2411
4 | S3R3 | 2026, 2026, 2026, 2048.2, 2026 | 3088, 3088, 3088, 3102.8, 3171
5 | S1R1 | 415, 415, 415.4, 416.2, 415.8 | 626, 626, 626, 626, 626
5 | S1R2 | 800, 800, 800, 800, 800 | 1221, 1221, 1221, 1221, 1221
5 | S1R3 | 2389, 2389, 2389, 2389, 2389 | 3610, 3610, 3610, 3610, 3610
5 | S2R1 | 1065.6, 1058.3, 1138.4, 1151, 1203.4 | 1421.8, 1420.8, 1481.6, 1526, 1560.2
5 | S2R2 | 971.6, 953.6, 1010.4, 1045.8, 1031.2 | 1355.1, 1363, 1466.2, 1517.6, 1517
5 | S2R3 | 2535, 2535, 2535, 2541.4, 2543 | 3643, 3643, 3643, 3643, 3643
5 | S3R1 | 708.3, 714.8, 785, 786.4, 747 | 1205.2, 1228.3, 1385.4, 1394, 1406.2
5 | S3R2 | 978.8, 980.5, 1039.2, 1120.6, 1078.2 | 1282.5, 1256.9, 1278.4, 1297.4, 1299
5 | S3R3 | 2430, 2430, 2430, 2430, 2430 | 3606, 3606, 3606, 3606, 3606
Note: The bold numbers denote that the problem is solved optimally or reaches the best-known solution by the corresponding algorithm.
Table A4. Average results for instances with n = 200 and n = 250 .
m | Category | n = 200 (HLNS, IG, SA, GA, ACO) | n = 250 (HLNS, IG, SA, GA, ACO)
3 | S1R1 | 549, 574.6, 598.4, 582, 599.8 | 663.7, 692.2, 696.8, 703, 689.2
3 | S1R2 | 1024, 1024, 1024, 1025.6, 1024 | 1265, 1265.1, 1265.8, 1265, 1265
3 | S1R3 | 2998, 2998, 2998, 2998, 2998 | 3779, 3779, 3779, 3779, 3779
3 | S2R1 | 2724.9, 2732.7, 2777, 2794.6, 2822 | 4000.3, 4006.7, 4025.6, 4026.8, 4085.2
3 | S2R2 | 2416.4, 2418.9, 2490, 2543.6, 2591.4 | 3571.4, 3581.5, 3621.8, 3672.2, 3696.6
3 | S2R3 | 3251.3, 3230.1, 3253.8, 3291.8, 3334.2 | 4096.7, 4074.1, 4089.2, 4100.2, 4113
3 | S3R1 | 2128.1, 2151.1, 2233.6, 2247.6, 2259.8 | 2629.7, 2660.2, 2716.4, 2727.2, 2768.4
3 | S3R2 | 2339, 2349.5, 2368, 2416.8, 2439 | 2774.5, 2797.7, 2834.6, 2880.2, 2953.6
3 | S3R3 | 3114, 3108.1, 3116, 3131.6, 3142 | 3781, 3781, 3781, 3781, 3781
4 | S1R1 | 699.9, 702.7, 702.8, 714, 712 | 849.1, 855.3, 853.4, 861.6, 852.8
4 | S1R2 | 1303, 1303, 1303, 1303, 1303 | 1647, 1647, 1647, 1647, 1647
4 | S1R3 | 3871, 3871, 3871, 3871, 3871 | 4880, 4880, 4880, 4880, 4880
4 | S2R1 | 3616, 3616, 3616, 3616, 3684 | 3889.5, 3896.6, 3977.2, 3959.8, 4005.8
4 | S2R2 | 4507, 4507, 4507, 4507, 4517.2 | 4176.5, 4175.2, 4215.2, 4244, 4381.2
4 | S2R3 | 3935, 3935, 3935, 3935, 3935 | 4812, 4812, 4812, 4828.8, 4874
4 | S3R1 | 3184, 3184, 3186.2, 3206.4, 3305 | 4172, 4172, 4172, 4177.6, 4172
4 | S3R2 | 2956.5, 2960.8, 2971.6, 3019, 3082.8 | 3730, 3731.2, 3764.6, 3798.4, 3852
4 | S3R3 | 3970, 3970, 3970, 3992.8, 3975.8 | 5031, 5031, 5031, 5043.4, 5031
5 | S1R1 | 810, 810.1, 810, 811, 810.2 | 1008, 1008, 1008, 1008, 1008
5 | S1R2 | 1615, 1615, 1615, 1615, 1615 | 2022, 2022, 2022, 2022, 2022
5 | S1R3 | 4804, 4804, 4804, 4804, 4804 | 5981, 5981, 5981, 5981, 5981
5 | S2R1 | 2001.2, 2016.8, 2108.2, 2222.8, 2260.2 | 2639.2, 2671.9, 2720.4, 3025, 3094.2
5 | S2R2 | 1989, 1986.7, 2062.4, 2227.8, 2261 | 2244, 2269.5, 2448, 2589.8, 2512.2
5 | S2R3 | 4775.8, 4778.4, 4780.2, 4819.8, 4806.4 | 6114, 6114, 6114, 6141, 6114
5 | S3R1 | 1442.7, 1480.9, 1664.6, 1789.2, 1718 | 1755.1, 1788.7, 2036.2, 2094.4, 2016.8
5 | S3R2 | 1636.1, 1635.4, 1663, 1699.2, 1711.2 | 2152.1, 2186.8, 2283.2, 2380, 2343
5 | S3R3 | 4854, 4854, 4854, 4854, 4854 | 6001, 6001, 6001, 6001, 6001
Note: The bold numbers denote that the problem is solved optimally or reaches the best-known solution by the corresponding algorithm.
Table A5. SD values of makespan obtained by HLNS in 10 trials for instances in Set 1.
Category | n = 20 | n = 30 | n = 40 | n = 50 | n = 100 | n = 150 | n = 200 | n = 250
S1R1 | 0.00 | 1.54 | 1.52 | 0.61 | 2.17 | 3.70 | 4.49 | 6.00
S1R2 | 0.00 | 0.20 | 0.00 | 1.38 | 5.66 | 6.31 | 6.65 | 7.42
S1R3 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 4.34 | 1.19
S2R1 | 0.00 | 0.00 | 0.40 | 0.00 | 0.31 | 2.10 | 1.23 | 1.55
S2R2 | 0.00 | 0.00 | 0.30 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
S2R3 | 0.00 | 0.00 | 1.30 | 0.00 | 1.40 | 0.00 | 0.00 | 0.00
S3R1 | 0.00 | 0.75 | 0.00 | 0.00 | 4.71 | 5.19 | 7.52 | 5.81
S3R2 | 0.00 | 0.24 | 0.00 | 0.92 | 2.44 | 4.78 | 3.79 | 8.83
S3R3 | 0.00 | 0.00 | 0.00 | 0.90 | 1.73 | 3.67 | 4.72 | 3.65

References

  1. Lee, C.Y.; Uzsoy, R.; Martin-Vega, L.A. Efficient algorithms for scheduling semiconductor burn-in operations. Oper. Res. 1992, 40, 764–775.
  2. Uzsoy, R. Scheduling batch processing machines with incompatible job families. Int. J. Prod. Res. 1995, 33, 2685–2708.
  3. Koh, S.G.; Koo, P.H.; Kim, D.C.; Hur, W.S. Scheduling a single batch processing machine with arbitrary job sizes and incompatible job families. Int. J. Prod. Econ. 2005, 98, 81–96.
  4. Jolai, F. Minimizing number of tardy jobs on a batch processing machine with incompatible job families. Eur. J. Oper. Res. 2005, 162, 184–190.
  5. Kashan, A.H.; Karimi, B. Scheduling a single batch-processing machine with arbitrary job sizes and incompatible job families: An ant colony framework. J. Oper. Res. Soc. 2008, 59, 1269–1280.
  6. Dauzère-Pérès, S.; Mönch, L. Scheduling jobs on a single batch processing machine with incompatible job families and weighted number of tardy jobs objective. Comput. Oper. Res. 2013, 40, 1224–1233.
  7. Cheng, B.; Cai, J.; Yang, S.; Hu, X. Algorithms for scheduling incompatible job families on single batching machine with limited capacity. Comput. Ind. Eng. 2014, 75, 116–120.
  8. Li, X.; Li, Y.; Huang, Y. Heuristics and lower bound for minimizing maximum lateness on a batch processing machine with incompatible job families. Comput. Oper. Res. 2019, 106, 91–101.
  9. Azizoglu, M.; Webster, S. Scheduling a batch processing machine with incompatible job families. Comput. Ind. Eng. 2001, 39, 325–335.
  10. Tangudu, S.; Kurz, M.E. A branch and bound algorithm to minimise total weighted tardiness on a single batch processing machine with ready times and incompatible job families. Prod. Plan. Control 2006, 17, 728–741.
  11. Yao, S.; Jiang, Z.; Li, N. A branch and bound algorithm for minimizing total completion time on a single batch machine with incompatible job families and dynamic arrivals. Comput. Oper. Res. 2012, 39, 939–951.
  12. Balasubramanian, H.; Mönch, L.; Fowler, J.; Pfund, M. Genetic algorithm based scheduling of parallel batch machines with incompatible job families to minimize total weighted tardiness. Int. J. Prod. Res. 2004, 42, 1621–1638.
  13. Mönch, L.; Balasubramanian, H.; Fowler, J.W.; Pfund, M.E. Heuristic scheduling of jobs on parallel batch machines with incompatible job families and unequal ready times. Comput. Oper. Res. 2005, 32, 2731–2750.
  14. Chiang, T.C.; Cheng, H.C.; Fu, L.C. A memetic algorithm for minimizing total weighted tardiness on parallel batch machines with incompatible job families and dynamic job arrival. Comput. Oper. Res. 2010, 37, 2257–2269.
  15. Almeder, C.; Mönch, L. Metaheuristics for scheduling jobs with incompatible families on parallel batching machines. J. Oper. Res. Soc. 2011, 62, 2083–2096.
  16. Venkataramana, M.; Srinivasa Raghavan, N. Ant colony-based algorithms for scheduling parallel batch processors with incompatible job families. Int. J. Math. Oper. Res. 2010, 2, 73–98.
  17. Lausch, S.; Mönch, L. Metaheuristic approaches for scheduling jobs on parallel batch processing machines. In Heuristics, Metaheuristics and Approximate Methods in Planning and Scheduling; Springer: Berlin/Heidelberg, Germany, 2016; pp. 187–207.
  18. Cakici, E.; Mason, S.J.; Fowler, J.W.; Geismar, H.N. Batch scheduling on parallel machines with dynamic job arrivals and incompatible job families. Int. J. Prod. Res. 2013, 51, 2462–2477.
  19. Ham, A.; Fowler, J.W.; Cakici, E. Constraint programming approach for scheduling jobs with release times, non-identical sizes, and incompatible families on parallel batching machines. IEEE Trans. Semicond. Manuf. 2017, 30, 500–507.
  20. Jiang, W.; Shen, Y.; Liu, L.; Zhao, X.; Shi, L. A new method for a class of parallel batch machine scheduling problem. Flex. Serv. Manuf. J. 2021, 34, 518–550.
  21. Huang, Z.; Shi, Z.; Shi, L. Minimising total weighted completion time on batch and unary machines with incompatible job families. Int. J. Prod. Res. 2019, 57, 567–581.
  22. Koh, S.G.; Koo, P.H.; Ha, J.W.; Lee, W.S. Scheduling parallel batch processing machines with arbitrary job sizes and incompatible job families. Int. J. Prod. Res. 2004, 42, 4091–4107.
  23. Jia, Z.H.; Wang, C.; Leung, J.Y.T. An ACO algorithm for makespan minimization in parallel batch machines with non-identical job sizes and incompatible job families. Appl. Soft Comput. 2016, 38, 395–404.
  24. Sun, Y.; Qian, X.; Liu, S. Scheduling Deteriorating Jobs and Module Changes with Incompatible Job Families on Parallel Machines Using a Hybrid SADE-AFSA Algorithm. In International Conference on Learning and Intelligent Optimization; Springer: Berlin/Heidelberg, Germany, 2018.
  25. Pfund, M.; Fowler, J.W.; Gupta, J.N. A survey of algorithms for single and multi-objective unrelated parallel-machine deterministic scheduling problems. J. Chin. Inst. Ind. Eng. 2004, 21, 230–241.
  26. Shaw, P. Using constraint programming and local search methods to solve vehicle routing problems. In International Conference on Principles and Practice of Constraint Programming; Springer: Berlin/Heidelberg, Germany, 1998.
  27. Arroyo, J.E.C.; Leung, J.Y.T. An effective iterated greedy algorithm for scheduling unrelated parallel batch machines with non-identical capacities and unequal ready times. Comput. Ind. Eng. 2017, 105, 84–100.
  28. Damodaran, P.; Vélez-Gallego, M.C. A simulated annealing algorithm to minimize makespan of parallel batch processing machines with unequal job ready times. Expert Syst. Appl. 2012, 39, 1451–1458.
  29. Wang, H.M.; Chou, F.D. Solving the parallel batch-processing machines with different release times, job sizes, and capacity limits by metaheuristics. Expert Syst. Appl. 2010, 37, 1510–1521.
  30. Jia, Z.H.; Leung, J.Y.T. A meta-heuristic to minimize makespan for parallel batch machines with arbitrary job sizes. Eur. J. Oper. Res. 2015, 240, 649–665.
  31. Tang, M.; Ji, B.; Fang, X.; Yu, S.S. Discretization-Strategy-Based Solution for Berth Allocation and Quay Crane Assignment Problem. J. Mar. Sci. Eng. 2022, 10, 495.
  32. Ji, B.; Zhang, D.; Zhang, Z.; Samson, S.Y.; Van Woensel, T. The generalized serial-lock scheduling problem on inland waterway: A novel decomposition-based solution framework and efficient heuristic approach. Transp. Res. Part E Logist. Transp. Rev. 2022, 168, 102935.
  33. Ikram, R.M.A.; Dai, H.-L.; Ewees, A.A.; Shiri, J.; Kisi, O.; Zounemat-Kermani, M. Application of improved version of multi verse optimizer algorithm for modeling solar radiation. Energy Rep. 2022, 8, 12063–12080.
Figure 1. A small-scale example to describe the solution approach. (a) shows the family of jobs. In (b), j1 and j8 are removed. In (c), j1 and j8 are reinserted into the solution. In (d), j1 and j2 exchange positions.
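The remove-and-reinsert moves illustrated in Figure 1 can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation; the data structures, names (`greedy_reinsert`), and the capacity value are all hypothetical. The key constraint it demonstrates is that a removed job may only be reinserted into a batch of its own (compatible) family with enough residual capacity.

```python
def greedy_reinsert(batches, removed, capacity):
    """Reinsert removed jobs (job id -> (family, size)) into the first
    compatible batch with enough residual capacity; open a new batch if
    none fits. Jobs of incompatible families never share a batch."""
    for job, (family, size) in removed.items():
        for b in batches:
            if b["family"] == family and b["used"] + size <= capacity:
                b["jobs"].append(job)
                b["used"] += size
                break
        else:  # no compatible batch had room: open a new one
            batches.append({"family": family, "jobs": [job], "used": size})
    return batches

# Illustrative data: one existing batch of family "A", two removed jobs.
batches = [{"family": "A", "jobs": ["j2"], "used": 4}]
removed = {"j1": ("A", 5), "j8": ("B", 3)}
result = greedy_reinsert(batches, removed, capacity=10)
```

Here j1 (family A, size 5) fits into the existing family-A batch, while j8 (family B) forces a new batch, mirroring the reinsertion step of panel (c).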
Figure 2. The average computational time of each scale.
Figure 3. Average solutions for small-scale instances.
Figure 4. Average solutions for large-scale instances.
Figure 5. The average computational time and SD of each scale.
Table 1. Experimental design used to generate the random instances.

| Factor | Levels |
|---|---|
| n | 20, 30, 40, 50 (small-scale instances); 100, 150, 200, 250 (large-scale instances) |
| m(n) | 2, 3 (n ∈ {20, 30, 40, 50}); 3, 4, 5 (n ∈ {100, 150, 200, 250}) |
| Cm(m) | Cm ∈ {30, 50} (m = 2); Cm ∈ {30, 40, 50} (m = 3); Cm ∈ {20, 30, 40, 50} (m = 4); Cm ∈ {20, 30, 40, 50, 60} (m = 5) |
| s_j | U[1, 15], U[15, 50], U[1, 50] (represented by S1, S2, S3, respectively) |
| r_j | U[1, 0.05P], U[1, 0.1P], U[1, 0.3P] (represented by R1, R2, R3, respectively) |
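The factor design in Table 1 can be turned into a small instance generator. The sketch below is a hedged reconstruction: the paper's exact definition of the horizon P (used to scale ready times) is not repeated in the table, so the total job size is used as a stand-in, and `n_families` is an assumed parameter; the function name and return structure are illustrative only.

```python
import random

def generate_instance(n, m, cap_levels, size_low, size_high, ready_frac,
                      n_families=3, seed=0):
    """Sample one random instance following the factor design of Table 1.
    (size_low, size_high) is (1, 15), (15, 50), or (1, 50) for levels
    S1-S3; ready_frac is 0.05, 0.1, or 0.3 for levels R1-R3."""
    rng = random.Random(seed)
    sizes = [rng.randint(size_low, size_high) for _ in range(n)]
    # Stand-in for P: the paper scales ready times by a horizon P whose
    # exact definition is not reproduced in this table.
    P = sum(sizes)
    ready = [rng.randint(1, max(1, int(ready_frac * P))) for _ in range(n)]
    families = [rng.randrange(n_families) for _ in range(n)]
    caps = [rng.choice(cap_levels) for _ in range(m)]
    return {"sizes": sizes, "ready": ready, "families": families, "caps": caps}

# One small-scale configuration: n = 20 jobs, m = 2 machines, level S1R1.
inst = generate_instance(n=20, m=2, cap_levels=[30, 50],
                         size_low=1, size_high=15, ready_frac=0.05)
```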
Table 2. List of parameters.

| Parameter | Meaning | Value |
|---|---|---|
| θ | Number of iterations during which a removal stays forbidden | 20 |
| p_t | Probability that removals resulting in worse solutions are added to the tabu list | 0.5 |
| T0 | Initial temperature | 0.1 · Ob(S_init) |
| α | Cooling rate | 0.995 |
| it | Number of iterations after which a local search is considered | 100 |
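The roles of θ, T0, and α in Table 2 can be sketched as follows, assuming a standard tabu tenure and a geometric simulated-annealing cooling scheme; the function names and the bookkeeping shown are illustrative, not the paper's code, and p_t and it (which govern tabu insertion and local-search frequency) are not modeled here.

```python
import math
import random

THETA = 20      # tabu tenure: a removal stays forbidden for 20 iterations
ALPHA = 0.995   # geometric cooling rate

def initial_temperature(ob_init):
    """T0 = 0.1 * Ob(S_init), per Table 2."""
    return 0.1 * ob_init

def accept(delta, temperature, rng):
    """Metropolis-style acceptance: always take improvements; accept a
    worsening move (delta > 0) with probability exp(-delta / T)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

def is_tabu(tabu_until, move, iteration):
    """A move is forbidden while its tabu tenure has not expired."""
    return tabu_until.get(move, -1) > iteration

# Example bookkeeping: forbid re-removing job 7, recorded at iteration 100.
tabu_until = {7: 100 + THETA}

# Cooling: the temperature decays geometrically each iteration.
T = initial_temperature(1000.0)
for _ in range(50):
    T *= ALPHA
```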
Table 3. Results for instances with n = 20 and n = 30. For each n, Value and Gap (%) are reported by Gurobi; Best, Avg, and SD are reported by the HLNS.

| m | Instance | Value (n = 20) | Gap (%) | Best | Avg | SD | Value (n = 30) | Gap (%) | Best | Avg | SD |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | S1R1 | 113 | 39.23 | 113 | 113 | 0.00 | 133 | 54.14 | 133 | 133.2 | 0.60 |
| 2 | S1R2 | 132 | 3.79 | 132 | 132.1 | 0.30 | 166 | 38.55 | 157 | 158.4 | 2.06 |
| 2 | S1R3 | 226 | 0.00 | 226 | 226 | 0.00 | 354 | 0.00 | 354 | 354 | 0.00 |
| 2 | S2R1 | 475 | 0.00 | 475 | 475 | 0.00 | 608 | 0.82 | 608 | 608 | 0.00 |
| 2 | S2R2 | 418 | 0.00 | 418 | 418 | 0.00 | 575 | 0.00 | 575 | 575 | 0.00 |
| 2 | S2R3 | 386 | 0.00 | 386 | 386 | 0.00 | 588 | 0.00 | 588 | 588 | 0.00 |
| 2 | S3R1 | 253 | 3.95 | 253 | 253 | 0.00 | 523 | 0.00 | 523 | 523 | 0.00 |
| 2 | S3R2 | 278 | 0.00 | 278 | 278 | 0.00 | 434 | 6.01 | 434 | 434 | 0.00 |
| 2 | S3R3 | 307 | 23.61 | 307 | 307 | 0.00 | 387 | 15.76 | 387 | 387 | 0.00 |
| 3 | S1R1 | 101 | 7.00 | 101 | 101 | 0.00 | 114 | 0.00 | 114 | 114 | 0.00 |
| 3 | S1R2 | 135 | 0.00 | 135 | 135 | 0.00 | 193 | 0.00 | 193 | 193.1 | 0.30 |
| 3 | S1R3 | 319 | 0.00 | 319 | 319 | 0.00 | 431 | 0.00 | 431 | 431 | 0.00 |
| 3 | S2R1 | 267 | 41.57 | 267 | 268.1 | 2.47 | 477 | 0.00 | 477 | 477 | 0.00 |
| 3 | S2R2 | 308 | 5.84 | 308 | 308 | 0.00 | 384 | 7.16 | 384 | 384 | 0.00 |
| 3 | S2R3 | 291 | 0.00 | 291 | 291 | 0.00 | 624 | 0.80 | 624 | 624 | 0.00 |
| 3 | S3R1 | 524 | 0.00 | 524 | 524 | 0.00 | 252 | 37.70 | 252 | 253.1 | 3.30 |
| 3 | S3R2 | 272 | 0.00 | 272 | 272 | 0.00 | 366 | 18.03 | 365 | 366 | 0.89 |
| 3 | S3R3 | 322 | 0.00 | 322 | 322 | 0.00 | 533 | 0.00 | 533 | 533 | 0.00 |

Note: The bold numbers denote that the problem is solved optimally or reaches the best-known solution by the corresponding algorithm.
Table 4. Results for instances with n = 40 and n = 50. For each n, Value and Gap (%) are reported by Gurobi; Best, Avg, and SD are reported by the HLNS.

| m | Instance | Value (n = 40) | Gap (%) | Best | Avg | SD | Value (n = 50) | Gap (%) | Best | Avg | SD |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | S1R1 | 151 | 94.70 | 151 | 153.9 | 4.44 | 198 | 100.00 | 190 | 196.7 | 3.03 |
| 2 | S1R2 | 192 | 66.67 | 192 | 196 | 3.90 | 240 | 100.00 | 240 | 241.5 | 0.67 |
| 2 | S1R3 | 454 | 0.00 | 454 | 454 | 0.00 | 543 | 0.00 | 543 | 543 | 0.00 |
| 2 | S2R1 | 624 | 56.55 | 624 | 624 | 0.00 | 1097 | 87.97 | 1097 | 1097 | 0.00 |
| 2 | S2R2 | 933 | 0.00 | 933 | 933 | 0.00 | 1017 | 77.19 | 1017 | 1017 | 0.00 |
| 2 | S2R3 | 737 | 0.00 | 737 | 737 | 0.00 | 1030 | 44.47 | 1030 | 1030 | 0.00 |
| 2 | S3R1 | 778 | 2.60 | 778 | 778 | 0.00 | 590 | 28.51 | 590 | 590 | 0.00 |
| 2 | S3R2 | 514 | 0.59 | 514 | 514 | 0.00 | 827 | 13.02 | 827 | 827 | 0.00 |
| 2 | S3R3 | 581 | 10.60 | 581 | 581 | 0.00 | 728 | 23.49 | 728 | 728 | 0.00 |
| 3 | S1R1 | 148 | 80.27 | 148 | 148.5 | 1.50 | 167 | 17.96 | 167 | 168.2 | 2.40 |
| 3 | S1R2 | 250 | 14.80 | 250 | 250.3 | 0.90 | 273 | 6.59 | 273 | 274.2 | 0.98 |
| 3 | S1R3 | 607 | 0.00 | 607 | 607 | 0.00 | 734 | 0.00 | 734 | 734 | 0.00 |
| 3 | S2R1 | 466 | 16.34 | 464 | 465.7 | 2.19 | 1038 | 0.00 | 1038 | 1038 | 0.00 |
| 3 | S2R2 | 521 | 10.75 | 521 | 522.4 | 0.92 | 701 | 5.99 | 701 | 701 | 0.00 |
| 3 | S2R3 | 712 | 8.85 | 706 | 707.5 | 1.69 | 841 | 9.04 | 841 | 841 | 0.00 |
| 3 | S3R1 | 397 | 33.75 | 397 | 397.3 | 0.90 | 586 | 4.44 | 586 | 586 | 0.00 |
| 3 | S3R2 | 550 | 5.64 | 550 | 550 | 0.00 | 628 | 2.87 | 628 | 638.7 | 4.88 |
| 3 | S3R3 | 612 | 0.00 | 612 | 612 | 0.00 | 822 | 3.89 | 812 | 813.6 | 2.50 |

Note: The bold numbers denote that the problem is solved optimally or reaches the best-known solution by the corresponding algorithm.
Table 5. Results for instances with n = 100 and n = 150. For each n, LB is the lower bound; Best, Avg, and SD are reported by the HLNS.

| m | Instance | LB (n = 100) | Best | Avg | SD | LB (n = 150) | Best | Avg | SD |
|---|---|---|---|---|---|---|---|---|---|
| 3 | S1R1 | 272 | 332 | 337.1 | 3.21 | 399 | 440 | 443.6 | 3.95 |
| 3 | S1R2 | 526 | 564 | 564 | 0.00 | 770 | 799 | 799.4 | 1.20 |
| 3 | S1R3 | 1503 | 1503 | 1503 | 0.00 | 2289 | 2289 | 2289 | 0.00 |
| 3 | S2R1 | 1480 | 1628 | 1628 | 0.00 | 1780 | 2016 | 2025 | 3.63 |
| 3 | S2R2 | 1547 | 1723 | 1723 | 0.00 | 2163 | 2410 | 2410 | 0.00 |
| 3 | S2R3 | 1532 | 1918 | 1918 | 0.00 | 2269 | 2512 | 2512 | 0.00 |
| 3 | S3R1 | 808 | 990 | 997.6 | 4.41 | 1151 | 1429 | 1442.2 | 8.13 |
| 3 | S3R2 | 1403 | 1566 | 1566 | 0.00 | 1207 | 1475 | 1487.6 | 5.18 |
| 3 | S3R3 | 1528 | 1581 | 1581 | 0.00 | 2289 | 2352 | 2353.3 | 1.95 |
| 4 | S1R1 | 346 | 361 | 363 | 4.10 | 506 | 513 | 515.3 | 6.90 |
| 4 | S1R2 | 662 | 662 | 662 | 0.00 | 994 | 996 | 996.6 | 1.80 |
| 4 | S1R3 | 1935 | 1940 | 1940 | 0.00 | 2930 | 2930 | 2930 | 0.00 |
| 4 | S2R1 | 1737 | 1921 | 1921 | 0.00 | 2853 | 3225 | 3225 | 0.00 |
| 4 | S2R2 | 1444 | 1545 | 1551.6 | 3.07 | 2836 | 3173 | 3173 | 0.00 |
| 4 | S2R3 | 1946 | 2180 | 2180 | 0.00 | 2937 | 3094 | 3094 | 0.00 |
| 4 | S3R1 | 891 | 1025 | 1032.4 | 6.23 | 2117 | 2335 | 2335 | 0.00 |
| 4 | S3R2 | 1212 | 1297 | 1297 | 0.00 | 2074 | 2318 | 2318 | 0.00 |
| 4 | S3R3 | 1953 | 2026 | 2026 | 0.00 | 2928 | 3088 | 3088 | 0.00 |
| 5 | S1R1 | 415 | 442 | 442 | 0.00 | 623 | 643 | 644.3 | 0.78 |
| 5 | S1R2 | 797 | 800 | 800 | 0.00 | 1221 | 1221 | 1221 | 0.00 |
| 5 | S1R3 | 2389 | 2389 | 2389 | 0.00 | 3610 | 3610 | 3610 | 0.00 |
| 5 | S2R1 | 673 | 1052 | 1064 | 6.05 | 896 | 1408 | 1424.7 | 9.45 |
| 5 | S2R2 | 823 | 963 | 968.7 | 6.00 | 1220 | 1348 | 1371.2 | 17.77 |
| 5 | S2R3 | 2422 | 2535 | 2535 | 0.00 | 3628 | 3643 | 3643 | 0.00 |
| 5 | S3R1 | 418 | 707 | 724.9 | 11.53 | 650 | 1197 | 1217 | 8.89 |
| 5 | S3R2 | 805 | 973 | 991 | 13.48 | 1233 | 1278 | 1291 | 8.22 |
| 5 | S3R3 | 2430 | 2430 | 2430 | 0.00 | 3566 | 3606 | 3606 | 0.00 |
Table 6. Results for instances with n = 200 and n = 250. For each n, LB is the lower bound; Best, Avg, and SD are reported by the HLNS.

| m | Instance | LB (n = 200) | Best | Avg | SD | LB (n = 250) | Best | Avg | SD |
|---|---|---|---|---|---|---|---|---|---|
| 3 | S1R1 | 522 | 591 | 595.3 | 2.28 | 655 | 688 | 702.6 | 12.40 |
| 3 | S1R2 | 1024 | 1053 | 1053 | 0.00 | 1256 | 1270 | 1270 | 0.00 |
| 3 | S1R3 | 2998 | 2998 | 2998 | 0.00 | 3779 | 3779 | 3779 | 0.00 |
| 3 | S2R1 | 2392 | 2734 | 2737.8 | 2.27 | 3587 | 3999 | 4001.1 | 1.64 |
| 3 | S2R2 | 1972 | 2410 | 2425.1 | 9.13 | 3133 | 3564 | 3575.5 | 6.14 |
| 3 | S2R3 | 3022 | 3236 | 3251.3 | 12.17 | 3791 | 4085 | 4104.5 | 8.49 |
| 3 | S3R1 | 1825 | 2130 | 2147.3 | 10.54 | 2347 | 2633 | 2641.1 | 5.15 |
| 3 | S3R2 | 2019 | 2340 | 2346.2 | 4.14 | 2422 | 2774 | 2784.3 | 3.77 |
| 3 | S3R3 | 3025 | 3114 | 3115.4 | 0.92 | 3768 | 3781 | 3781 | 0.00 |
| 4 | S1R1 | 680 | 723 | 733 | 4.31 | 838 | 873 | 879.5 | 4.98 |
| 4 | S1R2 | 1303 | 1303 | 1303 | 0.00 | 1647 | 1647 | 1647 | 0.00 |
| 4 | S1R3 | 3871 | 3871 | 3871 | 0.00 | 4880 | 4880 | 4880 | 0.00 |
| 4 | S2R1 | 3303 | 3616 | 3616 | 0.00 | 3378 | 3885 | 3891.6 | 7.27 |
| 4 | S2R2 | 4105 | 4507 | 4507 | 0.00 | 3578 | 4165 | 4175.9 | 5.75 |
| 4 | S2R3 | 3909 | 3935 | 3935 | 0.00 | 4812 | 4812 | 4812 | 0.00 |
| 4 | S3R1 | 2889 | 3184 | 3184 | 0.00 | 3856 | 4172 | 4172 | 0.00 |
| 4 | S3R2 | 2695 | 2955 | 2958.4 | 4.22 | 3403 | 3730 | 3730.2 | 0.60 |
| 4 | S3R3 | 3875 | 3970 | 3972.9 | 3.08 | 4866 | 5031 | 5031 | 0.00 |
| 5 | S1R1 | 807 | 819 | 819 | 0.00 | 999 | 1017 | 1017 | 0.00 |
| 5 | S1R2 | 1615 | 1619 | 1619 | 0.00 | 2022 | 2022 | 2022 | 0.00 |
| 5 | S1R3 | 4804 | 4816 | 4816 | 0.00 | 5981 | 5981 | 5981 | 0.00 |
| 5 | S2R1 | 1264 | 2009 | 2026.3 | 13.98 | 1554 | 2643 | 2657.1 | 9.84 |
| 5 | S2R2 | 1612 | 1978 | 1994.2 | 10.72 | 2011 | 2216 | 2246.3 | 19.24 |
| 5 | S2R3 | 4769 | 4777 | 4780.1 | 4.87 | 6009 | 6114 | 6114 | 0.00 |
| 5 | S3R1 | 892 | 1437 | 1466.4 | 14.45 | 1013 | 1753 | 1777.5 | 16.04 |
| 5 | S3R2 | 1612 | 1633 | 1648.3 | 20.28 | 2013 | 2141 | 2176.6 | 19.14 |
| 5 | S3R3 | 4822 | 4854 | 4854 | 0.00 | 6001 | 6001 | 6001 | 0.00 |
Table 7. Best results for instances with n = 20 and n = 30.

| m | Instance | CPLEX (n = 20) | HLNS | IG | SA | GA | ACO | CPLEX (n = 30) | HLNS | IG | SA | GA | ACO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | S1R1 | 90 | 90 | 90 | 92 | 97 | 90 | 111 | 110 | 110 | 115 | 119 | 113 |
| 2 | S1R2 | 98 | 98 | 98 | 110 | 98 | 98 | 134 | 134 | 135 | 139 | 139 | 138 |
| 2 | S1R3 | 209 | 209 | 209 | 209 | 209 | 209 | 339 | 339 | 339 | 339 | 339 | 339 |
| 2 | S2R1 | 475 | 475 | 475 | 475 | 475 | 475 | 608 | 608 | 608 | 608 | 608 | 608 |
| 2 | S2R2 | 418 | 418 | 418 | 418 | 418 | 418 | 575 | 575 | 575 | 575 | 575 | 575 |
| 2 | S2R3 | 386 | 386 | 386 | 386 | 386 | 397 | 588 | 588 | 588 | 588 | 588 | 588 |
| 2 | S3R1 | 245 | 245 | 245 | 245 | 245 | 245 | 523 | 523 | 523 | 523 | 523 | 523 |
| 2 | S3R2 | 278 | 278 | 278 | 278 | 278 | 278 | 434 | 434 | 434 | 434 | 434 | 434 |
| 2 | S3R3 | 300 | 300 | 300 | 300 | 307 | 320 | 384 | 384 | 384 | 384 | 389 | 391 |
| 3 | S1R1 | 80 | 80 | 80 | 85 | 80 | 94 | 98 | 98 | 98 | 98 | 109 | 98 |
| 3 | S1R2 | 120 | 120 | 120 | 120 | 120 | 120 | 170 | 170 | 170 | 170 | 170 | 170 |
| 3 | S1R3 | 319 | 319 | 319 | 319 | 319 | 319 | 431 | 431 | 431 | 431 | 431 | 431 |
| 3 | S2R1 | 259 | 259 | 259 | 259 | 259 | 259 | 477 | 477 | 477 | 477 | 477 | 499 |
| 3 | S2R2 | 308 | 308 | 308 | 308 | 308 | 316 | 378 | 378 | 378 | 378 | 405 | 395 |
| 3 | S2R3 | 291 | 291 | 291 | 291 | 291 | 306 | 624 | 624 | 624 | 624 | 624 | 624 |
| 3 | S3R1 | 524 | 524 | 524 | 524 | 524 | 524 | 244 | 244 | 244 | 244 | 260 | 245 |
| 3 | S3R2 | 272 | 272 | 272 | 272 | 272 | 342 | 358 | 358 | 358 | 358 | 370 | 370 |
| 3 | S3R3 | 322 | 322 | 322 | 322 | 322 | 335 | 533 | 533 | 533 | 533 | 533 | 542 |

Note: The bold numbers denote that the problem is solved optimally or reaches the best-known solution by the corresponding algorithm.
Table 8. Best results for instances with n = 40 and n = 50.

| m | Instance | CPLEX (n = 40) | HLNS | IG | SA | GA | ACO | CPLEX (n = 50) | HLNS | IG | SA | GA | ACO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | S1R1 | 129 | 129 | 129 | 135 | 146 | 130 | 173 | 152 | 152 | 155 | 176 | 178 |
| 2 | S1R2 | 161 | 161 | 161 | 161 | 164 | 164 | 233 | 212 | 212 | 216 | 220 | 226 |
| 2 | S1R3 | 454 | 454 | 454 | 454 | 454 | 454 | 541 | 541 | 541 | 541 | 541 | 541 |
| 2 | S2R1 | 623 | 623 | 623 | 623 | 626 | 628 | 1097 | 1097 | 1097 | 1097 | 1097 | 1097 |
| 2 | S2R2 | 933 | 933 | 933 | 933 | 933 | 953 | 1017 | 1017 | 1017 | 1017 | 1017 | 1017 |
| 2 | S2R3 | 737 | 737 | 737 | 737 | 737 | 737 | 1066 | 1030 | 1030 | 1030 | 1030 | 1030 |
| 2 | S3R1 | 778 | 778 | 778 | 778 | 778 | 778 | 590 | 590 | 590 | 590 | 604 | 590 |
| 2 | S3R2 | 514 | 514 | 514 | 514 | 514 | 514 | 827 | 827 | 827 | 827 | 827 | 827 |
| 2 | S3R3 | 581 | 581 | 581 | 581 | 581 | 583 | 732 | 728 | 728 | 728 | 728 | 728 |
| 3 | S1R1 | 131 | 125 | 125 | 130 | 130 | 130 | 146 | 146 | 146 | 146 | 146 | 148 |
| 3 | S1R2 | 223 | 223 | 223 | 241 | 248 | 223 | 259 | 259 | 259 | 259 | 263 | 263 |
| 3 | S1R3 | 604 | 604 | 604 | 604 | 604 | 604 | 734 | 734 | 734 | 734 | 734 | 734 |
| 3 | S2R1 | 489 | 457 | 457 | 458 | 473 | 473 | 1038 | 1038 | 1038 | 1038 | 1038 | 1038 |
| 3 | S2R2 | 533 | 521 | 521 | 534 | 523 | 564 | 703 | 701 | 701 | 701 | 710 | 761 |
| 3 | S2R3 | 725 | 701 | 701 | 716 | 709 | 733 | 848 | 841 | 841 | 841 | 841 | 841 |
| 3 | S3R1 | 397 | 397 | 397 | 397 | 402 | 416 | 586 | 586 | 586 | 586 | 586 | 591 |
| 3 | S3R2 | 550 | 550 | 550 | 550 | 550 | 550 | 644 | 628 | 632 | 644 | 644 | 670 |
| 3 | S3R3 | 612 | 612 | 612 | 612 | 612 | 612 | 818 | 812 | 812 | 818 | 818 | 829 |

Note: The bold numbers denote that the problem is solved optimally or reaches the best-known solution by the corresponding algorithm.
Table 9. Best results for instances with n = 100 and n = 150.

| m | Instance | HLNS (n = 100) | IG | SA | GA | ACO | HLNS (n = 150) | IG | SA | GA | ACO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 3 | S1R1 | 291 | 303 | 320 | 320 | 311 | 399 | 412 | 428 | 428 | 421 |
| 3 | S1R2 | 537 | 537 | 537 | 537 | 537 | 772 | 772 | 782 | 772 | 772 |
| 3 | S1R3 | 1503 | 1503 | 1503 | 1503 | 1503 | 2289 | 2289 | 2289 | 2289 | 2289 |
| 3 | S2R1 | 1628 | 1628 | 1639 | 1628 | 1653 | 2015 | 2016 | 2030 | 2051 | 2070 |
| 3 | S2R2 | 1723 | 1723 | 1723 | 1723 | 1776 | 2410 | 2410 | 2410 | 2410 | 2440 |
| 3 | S2R3 | 1918 | 1918 | 1918 | 1918 | 1932 | 2512 | 2512 | 2512 | 2540 | 2562 |
| 3 | S3R1 | 989 | 986 | 1093 | 1033 | 1066 | 1414 | 1421 | 1512 | 1485 | 1588 |
| 3 | S3R2 | 1566 | 1566 | 1566 | 1566 | 1578 | 1469 | 1474 | 1501 | 1544 | 1555 |
| 3 | S3R3 | 1581 | 1581 | 1581 | 1581 | 1581 | 2342 | 2342 | 2364 | 2381 | 2381 |
| 4 | S1R1 | 346 | 346 | 353 | 352 | 352 | 506 | 506 | 511 | 506 | 511 |
| 4 | S1R2 | 662 | 662 | 662 | 662 | 662 | 994 | 994 | 994 | 994 | 994 |
| 4 | S1R3 | 1935 | 1935 | 1935 | 1935 | 1935 | 2930 | 2930 | 2930 | 2930 | 2930 |
| 4 | S2R1 | 1921 | 1921 | 1941 | 1921 | 1931 | 3225 | 3225 | 3225 | 3225 | 3348 |
| 4 | S2R2 | 1552 | 1552 | 1599 | 1565 | 1664 | 3173 | 3173 | 3173 | 3173 | 3213 |
| 4 | S2R3 | 2180 | 2180 | 2180 | 2180 | 2207 | 3094 | 3094 | 3094 | 3094 | 3102 |
| 4 | S3R1 | 1016 | 1030 | 1050 | 1049 | 1100 | 2335 | 2335 | 2357 | 2377 | 2387 |
| 4 | S3R2 | 1297 | 1297 | 1297 | 1320 | 1422 | 2318 | 2318 | 2318 | 2318 | 2411 |
| 4 | S3R3 | 2026 | 2026 | 2026 | 2047 | 2026 | 3088 | 3088 | 3088 | 3088 | 3171 |
| 5 | S1R1 | 415 | 415 | 415 | 416 | 415 | 626 | 626 | 626 | 626 | 626 |
| 5 | S1R2 | 800 | 800 | 800 | 800 | 800 | 1221 | 1221 | 1221 | 1221 | 1221 |
| 5 | S1R3 | 2389 | 2389 | 2389 | 2389 | 2389 | 3610 | 3610 | 3610 | 3610 | 3610 |
| 5 | S2R1 | 1057 | 1052 | 1106 | 1120 | 1157 | 1410 | 1407 | 1459 | 1460 | 1529 |
| 5 | S2R2 | 956 | 949 | 1010 | 1025 | 1019 | 1333 | 1333 | 1435 | 1483 | 1500 |
| 5 | S2R3 | 2535 | 2535 | 2535 | 2535 | 2543 | 3643 | 3643 | 3643 | 3643 | 3643 |
| 5 | S3R1 | 693 | 701 | 763 | 767 | 738 | 1197 | 1213 | 1290 | 1355 | 1375 |
| 5 | S3R2 | 963 | 963 | 1043 | 1063 | 1050 | 1269 | 1256 | 1256 | 1297 | 1299 |
| 5 | S3R3 | 2430 | 2430 | 2430 | 2430 | 2430 | 3606 | 3606 | 3606 | 3606 | 3606 |

Note: The bold numbers denote that the problem is solved optimally or reaches the best-known solution by the corresponding algorithm.
Table 10. Best results for instances with n = 200 and n = 250.

| m | Instance | HLNS (n = 200) | IG | SA | GA | ACO | HLNS (n = 250) | IG | SA | GA | ACO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 3 | S1R1 | 542 | 565 | 592 | 566 | 594 | 659 | 686 | 681 | 696 | 687 |
| 3 | S1R2 | 1024 | 1024 | 1024 | 1024 | 1024 | 1265 | 1265 | 1266 | 1265 | 1265 |
| 3 | S1R3 | 2998 | 2998 | 2998 | 2998 | 2998 | 3779 | 3779 | 3779 | 3779 | 3779 |
| 3 | S2R1 | 2720 | 2723 | 2783 | 2785 | 2808 | 3994 | 3999 | 4026 | 4012 | 4077 |
| 3 | S2R2 | 2404 | 2407 | 2431 | 2516 | 2568 | 3570 | 3571 | 3606 | 3659 | 3679 |
| 3 | S2R3 | 3236 | 3224 | 3247 | 3285 | 3332 | 4090 | 4073 | 4079 | 4096 | 4113 |
| 3 | S3R1 | 2121 | 2135 | 2222 | 2224 | 2247 | 2621 | 2647 | 2716 | 2711 | 2747 |
| 3 | S3R2 | 2333 | 2335 | 2366 | 2386 | 2435 | 2770 | 2785 | 2825 | 2864 | 2915 |
| 3 | S3R3 | 3104 | 3104 | 3116 | 3116 | 3142 | 3781 | 3781 | 3781 | 3781 | 3781 |
| 4 | S1R1 | 698 | 701 | 703 | 714 | 706 | 849 | 850 | 851 | 857 | 849 |
| 4 | S1R2 | 1303 | 1303 | 1303 | 1303 | 1303 | 1647 | 1647 | 1647 | 1647 | 1647 |
| 4 | S1R3 | 3871 | 3871 | 3871 | 3871 | 3871 | 4880 | 4880 | 4880 | 4880 | 4880 |
| 4 | S2R1 | 3616 | 3616 | 3616 | 3616 | 3684 | 3884 | 3887 | 3915 | 3931 | 3981 |
| 4 | S2R2 | 4507 | 4507 | 4507 | 4507 | 4507 | 4169 | 4170 | 4179 | 4199 | 4374 |
| 4 | S2R3 | 3935 | 3935 | 3935 | 3935 | 3935 | 4812 | 4812 | 4812 | 4824 | 4874 |
| 4 | S3R1 | 3184 | 3184 | 3184 | 3184 | 3265 | 4172 | 4172 | 4172 | 4172 | 4172 |
| 4 | S3R2 | 2955 | 2955 | 2968 | 2990 | 3078 | 3730 | 3730 | 3761 | 3761 | 3832 |
| 4 | S3R3 | 3970 | 3970 | 3970 | 3991 | 3970 | 5031 | 5031 | 5031 | 5031 | 5031 |
| 5 | S1R1 | 810 | 810 | 810 | 811 | 810 | 1008 | 1008 | 1008 | 1008 | 1008 |
| 5 | S1R2 | 1615 | 1615 | 1615 | 1615 | 1615 | 2022 | 2022 | 2022 | 2022 | 2022 |
| 5 | S1R3 | 4804 | 4804 | 4804 | 4804 | 4804 | 5981 | 5981 | 5981 | 5981 | 5981 |
| 5 | S2R1 | 1982 | 1991 | 2099 | 2174 | 2230 | 2627 | 2645 | 2686 | 2910 | 3039 |
| 5 | S2R2 | 1970 | 1966 | 2044 | 2167 | 2231 | 2210 | 2226 | 2353 | 2572 | 2488 |
| 5 | S2R3 | 4775 | 4775 | 4777 | 4819 | 4801 | 6114 | 6114 | 6114 | 6141 | 6114 |
| 5 | S3R1 | 1424 | 1447 | 1609 | 1748 | 1671 | 1735 | 1767 | 2004 | 2047 | 2010 |
| 5 | S3R2 | 1633 | 1633 | 1633 | 1693 | 1700 | 2113 | 2150 | 2282 | 2355 | 2331 |
| 5 | S3R3 | 4854 | 4854 | 4854 | 4854 | 4854 | 6001 | 6001 | 6001 | 6001 | 6001 |

Note: The bold numbers denote that the problem is solved optimally or reaches the best-known solution by the corresponding algorithm.
Table 11. Comparison results of LNS, TLNS, and HLNS.

| n | Instance | Best (LNS) | Best (TLNS) | Best (HLNS) | Avg (LNS) | Avg (TLNS) | Avg (HLNS) | SD (LNS) | SD (TLNS) | SD (HLNS) | AvgTime (LNS) | AvgTime (TLNS) | AvgTime (HLNS) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 20 | S1R1 | 107.0 | 107.0 | 107.0 | 107.7 | 107.0 | 107.0 | 0.9 | 0.0 | 0.0 | 2.7 | 2.9 | 3.1 |
| 20 | S1R2 | 133.5 | 133.5 | 133.5 | 133.5 | 133.6 | 133.6 | 0.0 | 0.2 | 0.2 | 1.1 | 1.2 | 1.3 |
| 20 | S1R3 | 272.5 | 272.5 | 272.5 | 272.5 | 272.5 | 272.5 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 20 | S2R1 | 371.0 | 371.0 | 371.0 | 372.8 | 372.2 | 371.6 | 1.9 | 1.6 | 1.2 | 1.8 | 1.9 | 2.1 |
| 20 | S2R2 | 363.0 | 363.0 | 363.0 | 363.0 | 363.0 | 363.0 | 0.0 | 0.0 | 0.0 | 1.9 | 1.9 | 2.1 |
| 20 | S2R3 | 338.5 | 338.5 | 338.5 | 338.5 | 338.5 | 338.5 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 20 | S3R1 | 388.5 | 388.5 | 388.5 | 388.5 | 388.5 | 388.5 | 0.0 | 0.0 | 0.0 | 1.7 | 1.8 | 2.0 |
| 20 | S3R2 | 275.0 | 275.0 | 275.0 | 275.0 | 275.0 | 275.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 20 | S3R3 | 314.5 | 314.5 | 314.5 | 314.6 | 314.5 | 314.5 | 0.2 | 0.0 | 0.0 | 1.7 | 1.9 | 2.0 |
| 250 | S1R1 | 861.0 | 857.0 | 859.3 | 866.5 | 863.7 | 866.4 | 5.5 | 4.8 | 5.8 | 78.8 | 82.0 | 128.5 |
| 250 | S1R2 | 1646.3 | 1646.3 | 1646.3 | 1646.3 | 1646.3 | 1646.3 | 0.0 | 0.0 | 0.0 | 25.5 | 27.4 | 41.4 |
| 250 | S1R3 | 4880.0 | 4880.0 | 4880.0 | 4880.0 | 4880.0 | 4880.0 | 0.0 | 0.0 | 0.0 | 0.1 | 0.1 | 0.1 |
| 250 | S2R1 | 3509.0 | 3505.0 | 3509.0 | 3519.8 | 3517.4 | 3516.6 | 6.9 | 7.3 | 6.3 | 258.6 | 264.4 | 379.5 |
| 250 | S2R2 | 3321.7 | 3319.0 | 3315.0 | 3343.4 | 3334.9 | 3332.6 | 13.1 | 11.1 | 10.4 | 261.6 | 266.5 | 395.4 |
| 250 | S2R3 | 5008.7 | 5008.7 | 5003.7 | 5012.6 | 5011.3 | 5010.2 | 2.0 | 1.8 | 2.8 | 184.9 | 186.0 | 232.7 |
| 250 | S3R1 | 2858.0 | 2851.3 | 2852.7 | 2868.7 | 2864.6 | 2863.5 | 6.0 | 8.0 | 7.1 | 201.2 | 207.5 | 380.3 |
| 250 | S3R2 | 2884.3 | 2881.7 | 2881.7 | 2900.9 | 2896.0 | 2897.0 | 11.8 | 6.9 | 7.8 | 198.7 | 208.1 | 359.9 |
| 250 | S3R3 | 4937.7 | 4937.7 | 4937.7 | 4937.7 | 4937.7 | 4937.7 | 0.0 | 0.0 | 0.0 | 231.4 | 150.8 | 295.0 |

Note: The bold numbers denote that the problem is solved optimally or reaches the best-known solution by the corresponding algorithm.

Share and Cite

MDPI and ACS Style

Ji, B.; Xiao, X.; Yu, S.S.; Wu, G. A Hybrid Large Neighborhood Search Method for Minimizing Makespan on Unrelated Parallel Batch Processing Machines with Incompatible Job Families. Sustainability 2023, 15, 3934. https://doi.org/10.3390/su15053934
