Article

Variable Neighborhood Search for Minimizing the Makespan in a Uniform Parallel Machine Scheduling

Department of Industrial Engineering, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421, Saudi Arabia
* Author to whom correspondence should be addressed.
Systems 2024, 12(6), 221; https://doi.org/10.3390/systems12060221
Submission received: 16 May 2024 / Revised: 11 June 2024 / Accepted: 17 June 2024 / Published: 20 June 2024
(This article belongs to the Special Issue Management and Simulation of Digitalized Smart Manufacturing Systems)

Abstract

This paper investigates a uniform parallel machine scheduling problem for makespan minimization. Due to the problem’s NP-hardness, much effort from researchers has been directed toward proposing heuristic and metaheuristic algorithms that can find an optimal or a near-optimal solution in a reasonable amount of time. This work proposes two versions of a variable neighborhood search (VNS) algorithm with five neighborhood structures, differing in their initial solution generation strategy. The first uses the longest processing time (LPT) rule, while the second introduces a novel element by utilizing a randomized longest processing time (RLPT) rule. The neighborhood structures for both versions were modified from the literature to account for the variable processing times in uniform parallel machines. We evaluated the performance of both VNS versions using a numerical example, comparing them against a genetic algorithm and a tabu search from the existing literature. Results showed that the proposed VNS algorithms were competitive and obtained the optimal solution with much less effort. Additionally, we assessed the performance of the VNS algorithms on randomly generated instances. For small-sized instances, we compared their performance against the optimal solution obtained from a mathematical formulation; for larger instances, we compared it against lower bounds derived from the literature. Computational results showed that the VNS version with the randomized LPT rule (RLPT) as the initial solution (RVNS) outperformed that with the LPT rule as the initial solution (LVNS). Moreover, RVNS found the optimal solution in 90.19% of the small instances and yielded an average relative gap of about 0.15% for all cases.

1. Introduction

In today’s competitive manufacturing landscape, increasing efficiency and productivity and minimizing production costs are essential for businesses to remain competitive and sustainable. As a result, optimizing resource allocation and minimizing production completion times through scheduling is becoming increasingly important. Furthermore, efficient scheduling can improve sustainability by reducing energy consumption in the manufacturing industry. This is important since these industries account for a significant proportion of global energy consumption [1]. In China, for instance, approximately 70% of total emissions are related to industrial sectors [2]. From this standpoint, scheduling is becoming essential to improve productivity and energy efficiency.
In this paper, we study a uniform parallel machine scheduling problem for makespan minimization, denoted by Qm||Cmax according to Graham et al.’s notation [3]. In the uniform parallel machine environment, a set of n independent jobs, J = {1, 2, …, j, …, n}, with processing times p_j such that p_1 ≥ p_2 ≥ … ≥ p_n, is to be scheduled on m machines, M = {1, 2, …, i, …, m}, which may differ in job processing speeds v_i, such that v_1 ≤ v_2 ≤ … ≤ v_i ≤ … ≤ v_m, with machine m being the fastest machine. It is assumed that p_j is the processing time of job j on the slowest machine. Therefore, the processing time of each job on each machine is p_ij = p_j × v_1/v_i. The uniform parallel machine problem reduces to the identical parallel machine scheduling problem when the machine speeds are equal, i.e., v_1 = v_2 = … = v_m; the identical parallel machine scheduling problem is therefore a special case of the studied problem. Each job is assumed to be available at the beginning of scheduling, and no preemption is allowed. The problem is NP-hard in the strong sense [4]. Hence, heuristic and metaheuristic algorithms have been developed to obtain good-quality solutions in shorter computation times. This problem is of practical importance since parallel operations are frequent in the manufacturing and service industries. For instance, consider a manufacturing facility with m machines, where some are newer and more efficient than others; the newer machines process jobs faster than the older ones [5]. This configuration represents a uniform parallel machine environment. The goal is to balance the load among all machines by minimizing the maximum completion time of the last job in a schedule, which is known as the makespan.
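To make the notation concrete, the per-machine processing times p_ij = p_j × v_1/v_i and the makespan of a given job-to-machine assignment can be computed as follows. This is an illustrative Python sketch, not code from the paper; the function names and data layout are our own:

```python
def processing_time(p_j, v_1, v_i):
    """Processing time of a job on machine i: p_ij = p_j * v_1 / v_i,
    where p_j is the job's time on the slowest machine (speed v_1)."""
    return p_j * v_1 / v_i

def makespan(assignment, p, v):
    """assignment[i] lists the job indices on machine i; p[j] is job j's
    processing time on the slowest machine; v[i] is machine i's speed.
    The makespan is the largest machine completion time."""
    v1 = min(v)
    loads = [sum(processing_time(p[j], v1, v[i]) for j in jobs)
             for i, jobs in enumerate(assignment)]
    return max(loads)
```

For example, with p = [6, 4, 2] and speeds v = [1, 2], placing job 0 alone on the slow machine and jobs 1 and 2 on the fast machine gives loads of 6 and 3, so the makespan is 6.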
The rest of this paper is organized as follows: Section 2 reviews the existing literature on the uniform parallel machine scheduling problem. Section 3 describes the methodology used to solve the problem under consideration. Section 4 presents the computational results. Finally, Section 5 concludes the paper and provides directions for future work.

2. Literature Review

This section will review the research on the uniform parallel machine scheduling problem and examine various scheduling algorithms and approaches to address this combinatorial problem. Among the various approaches, the longest processing time (LPT) rule, introduced by Graham [6], stands out as one of the earliest and simplest heuristics, specifically designed for a special case of uniform machines with equal processing speeds. For identical parallel machine scheduling, the LPT rule assigns jobs in a non-increasing order of processing time to the least occupied machine. In contrast, for uniform parallel machine scheduling, the adapted LPT rule prioritizes jobs in a non-increasing order of processing time to a machine that completes processing the job sooner [7].
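The adapted LPT rule just described — jobs in non-increasing order of processing time, each assigned to the machine that would finish it soonest — can be sketched as follows. This is our own illustrative implementation, not the authors' code; tie-breaking (lowest machine index) is an assumption:

```python
def adapted_lpt(p, v):
    """Adapted LPT for uniform machines: sort jobs by non-increasing base
    processing time p[j] (time on the slowest machine, speed min(v)), then
    append each job to the machine that completes it earliest."""
    m, v1 = len(v), min(v)
    completion = [0.0] * m
    schedule = [[] for _ in range(m)]
    for j in sorted(range(len(p)), key=lambda j: -p[j]):
        # machine on which job j would finish soonest
        i = min(range(m), key=lambda i: completion[i] + p[j] * v1 / v[i])
        schedule[i].append(j)
        completion[i] += p[j] * v1 / v[i]
    return schedule, max(completion)
```

With equal speeds this reduces to the classic LPT rule for identical machines.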
The LPT worst-case performance ratio for identical parallel machine scheduling is 4/3 − 1/3m [6]. This ratio was improved by Della Croce and Scatamacchia [8] to (4/3 − 1/(3(m − 1))) for m ≥ 3. For the uniform parallel machine, Gonzalez et al. [7] demonstrated that the adapted LPT rule obtained a makespan at most twice the optimal solution. Friesen [9] showed that the worst-case bound for the adapted LPT rule is within (1.52, 1.67). Mireault et al. [10] derived an explicit function of the speed of the fastest machine as the worst-case ratio for LPT on two uniform parallel machines. Koulamas and Kyparisis [11] studied a modified LPT rule for two uniform parallel machines that optimally scheduled the three longest jobs and then applied the LPT rule to the remaining jobs. This modified LPT rule obtained a worst-case ratio of 1.2247. Massabò et al. [12] developed a worst-case performance ratio for the LPT heuristic in the scheduling problem with two uniform parallel machines. The performance ratio is based on the index of the latest job inserted in the machine with the minimum completion time. For the LPT rule in uniform parallel machine scheduling, Mitsunobu et al. [13] determined tight approximation ratios of 1.38, 1.43, and 1.46 for problems with 3, 4, and 5 processors, respectively. Friesen and Langston [14] and Chen [15] provided worst-case bounds for a MULTIFIT algorithm for uniform parallel machine scheduling.
In addition to the LPT heuristic, other heuristics have been proposed for uniform parallel machine scheduling with a makespan objective. Li and Zhang [16] studied this problem under release time constraints and proposed a heuristic based on the longest processing time and largest release date (LPT-LRD) rule. To further improve the solution quality, the authors developed a variable neighborhood search algorithm with two neighborhood structures. Sivasankaran et al. [17] proposed a heuristic and evaluated its performance by solving small instances and comparing the results with a mathematical model. Li and Yang [18] developed heuristic algorithms for scheduling uniform parallel machines with head and tails to minimize the makespan. De Giovanni et al. [19] developed a modified longest processing time rule algorithm and an iterated local search algorithm for the uniform parallel machine scheduling with a makespan objective. Song et al. [20] studied the uniform parallel machine scheduling for a green manufacturing system to minimize the makespan and maximize the maximum lateness while incorporating an upper bound constraint on the total energy cost. The authors proposed a polynomial-time algorithm for a preemptive uniform parallel machine scheduling and a two-approximation algorithm for a non-preemptive uniform parallel machine scheduling.
Metaheuristics also play a crucial role in obtaining good solutions for the considered problem. Balin [21] adapted a genetic algorithm (GA) for non-identical parallel machine scheduling with a makespan objective. The author tested its performance on a numerical example with only four uniform machines and nine jobs. Noman et al. [22] proposed a tabu search (TS) algorithm for the same problem. They compared its performance with the GA developed by Balin [21] using the same numerical example.
Several researchers have investigated the effectiveness of the VNS algorithm for parallel machine scheduling. Li and Cheng [23] proposed a variable neighborhood search (VNS) algorithm to minimize the makespan of a uniform parallel machine with release dates. The proposed VNS algorithm makes use of two distinct initial solutions based on the problem’s size. The algorithm also employs two neighborhood structures to obtain improved solutions. Sevkli and Uysa [24] proposed a modified variable neighborhood search algorithm with two neighborhood structures for makespan minimization in identical parallel machine scheduling. The authors compared the proposed algorithm with the solutions of both the genetic algorithm and the LPT rule. Jing and Jun-qing [25] studied makespan minimization on the identical parallel machine scheduling problem and proposed a VNS algorithm with two distinct initial solutions and four neighborhood structures that are applied between a machine whose completion time equals the makespan and another machine whose completion time is less than the makespan. The work of Alharkan et al. [26] added a fifth neighborhood structure to the work of Jing and Jun-qing [25] and studied the order effect of these neighborhood structures in minimizing the makespan. Recently, Gharbi and Bamatraf [27] applied the VNS algorithm with an LPT-based initial solution as an upper bound for an improved arc flow formulation for identical parallel machine scheduling with a makespan objective. Cheng et al. [28] studied the identical parallel machine scheduling problem with step-deteriorating jobs and proposed a variable neighborhood search algorithm that utilized five neighborhood structures to minimize the total completion time. Senthilkumar and Narayanan [29,30] proposed four different genetic algorithms and three variations of simulated annealing algorithms to minimize the makespan in the uniform parallel machine scheduling problem.
Kaabi [31] investigated scheduling on uniform parallel machines with machine unavailability, aiming to minimize the makespan. The author proposed a quadratic model for optimal solutions to small- and medium-sized problems and a novel two-phase algorithm for large-sized problems. To tackle the problem of scheduling on the uniform parallel machine with machine eligibility, job splitting, sequence-dependent setup time, and multiple servers, Kim and Lee [32] proposed a mathematical formulation and four lower bounds. Additionally, they further developed a heuristic algorithm suited for handling large-sized problems.
Due to the NP-hardness of the problem, exact algorithms can still be employed to find optimal solutions for small instances. However, for large instances, metaheuristics are crucial for finding high-quality solutions in a reasonable time. Horowitz and Sahni [33] proposed dynamic programming for Qm||Cmax. De and Morton [34] presented a mathematical formulation and proposed a branch and bound algorithm for Qm||Cmax capable of solving small-sized instances, with up to 5 jobs on 8 machines or 10 jobs on 4 machines. Liao and Lin [35] proposed an optimal algorithm for two uniform parallel machines with a makespan minimization objective. The algorithm was evaluated over sets of jobs ranging from 10 up to 1000. Later, Lin and Liao [36] extended the previous work and proposed an optimal algorithm for the uniform parallel machine scheduling problem with a makespan objective. The optimal algorithm incorporated a procedure to determine all the quasi-optimal solutions, a theorem to provide an improved lower bound, an algorithm to determine whether a quasi-optimal solution can be achieved, and another theorem to accelerate the search speed. The optimal algorithm was implemented on randomly generated instances with 3, 4, and 5 machines and sets of jobs ranging from 10 up to 1000 jobs. Popenko et al. [37] proposed a sufficient condition for schedule optimality on uniform parallel machines according to specific criteria. Their work focused on minimizing the makespan among these criteria. Berndt et al. [38] proposed improved support size bounds for integer linear programs (ILPs), leading to a faster approximation algorithm for makespan minimization on uniform parallel machine scheduling. Mallek and Boudhar [39] addressed scheduling on uniform parallel machines with conflict graphs. The authors proposed a mixed-integer linear program (MILP) formulation, along with lower and upper bounds, to minimize the makespan.
The work of Soper and Strusevich [40] analyzed schedules with at most one preemption compared to the optimal schedule that allows preemption for minimizing the makespan on three uniform machines. The authors derived tight bounds based on machine speeds. For more literature on parallel machines and uniform parallel machine scheduling, readers are referred to the works in [41,42].
This paper contributes to uniform parallel machine scheduling by proposing two versions of the VNS algorithm to minimize the makespan: the VNS algorithm with the LPT schedule as an initial solution (LVNS) and the VNS algorithm with a random LPT schedule as an initial solution (RVNS). LVNS is inspired by existing VNS applications on identical parallel machines, but its neighborhood structures are modified to account for the variable processing times on uniform parallel machines. The second version, RVNS, introduces a novel element by utilizing a randomized longest processing time (RLPT) rule for initial solution generation alongside the standard LPT rule used in LVNS. Moreover, a comparison of the proposed VNS algorithms with algorithms from the literature on a numerical example shows that the optimal solution can be obtained with much less effort with our proposed methods. Due to the problem’s complexity, the effectiveness of each algorithm is evaluated against the optimal solution obtained from a mathematical formulation adapted from the literature for instances with up to 20 jobs. The remaining instances are evaluated against a lower bound from the literature. Furthermore, the impact of the upper bound obtained by LVNS on reducing the solution time in a mathematical formulation is evaluated. This is achieved by comparing the CPU time required with LVNS’s upper bound to the CPU time without any specified upper bound in the mathematical formulation.
In summary, the uniform parallel machine scheduling problem is pervasive in industries, including manufacturing and logistics. The proposed VNS algorithms, LVNS and RVNS, offer a practical solution that balances computational efficiency and solution quality. This balance allows them to find near-optimal solutions in a reasonable amount of time, potentially leading to reduced production costs or improved resource utilization in real-world scheduling practices.

3. Research Methodology

The methodology applied in this paper for solving the uniform parallel machine scheduling problem with the makespan objective involved first calculating a lower bound. Three lower bounds from the literature were calculated, and the maximum of these values was taken as the best lower bound. Furthermore, an exact mathematical formulation from the literature was employed. Then, the proposed VNS algorithms were described. We evaluated the performance of the VNS algorithms by comparing them against a genetic algorithm (GA) and tabu search (TS) using a benchmark example. For small-sized instances with up to 20 jobs, we compared the VNS algorithms against the optimal solution obtained from the mathematical formulation. For larger instances, where the number of jobs exceeded 20, the best lower bound was used for evaluation. This enabled a comprehensive assessment of the VNS algorithms’ effectiveness across different problem scales. The following subsections detail each of the abovementioned steps.

3.1. Lower Bounds

Assuming that the first machine was the slowest, the following are the lower bounds used in this paper:
$$ LB_1 = \frac{\sum_{j=1}^{n} p_{1j}}{\sum_{i=1}^{m} v_i} \tag{1} $$
$$ LB_2 = p_{m1} \tag{2} $$
The lower bound in Equation (1) was simply calculated as the total processing times of jobs on the slowest machine divided by the total speeds of the machines. The speed of each machine was determined by dividing the processing time of a job on the slowest machine by the processing time of that job on the machine of interest. For simplicity, the first job was selected to calculate the speed for each machine, as follows:
$$ v_i = \frac{p_{11}}{p_{i1}}, \quad \forall i \in \{1, \dots, m\} \tag{3} $$
Equation (2) provides a lower bound on the makespan equal to the processing time of the job with the largest processing time when it is scheduled on the fastest machine.
In addition to these lower bounds, Lin and Liao [36] proposed an improvement of the lower bound in Equation (2), as follows. Let $P = \sum_{j=1}^{n} p_{1j}$ be the total processing time of the jobs on the slowest machine, $V = \sum_{i=1}^{m} v_i$ the total machine speed, $C_{LPT}$ the upper bound obtained by the longest processing time rule, and $w_i = \lfloor v_i \times LB_1 \rfloor$ the largest workload on machine i that keeps the completion time of its last job no more than LB1; assigning a larger workload to a machine pushes its completion time above LB1. $W = \sum_{i=1}^{m} w_i$ is the total workload assigned across all machines. A completion time that exceeds LB1 but does not exceed the upper bound $C_{LPT}$ is called a quasi-optimal solution. The number of quasi-optimal solutions for each machine is calculated as:
$$ q_i = \left\lceil v_i \times C_{LPT} - w_i \right\rceil, \quad \forall i \in \{1, \dots, m\} \tag{4} $$
The possible quasi-optimal solutions for each machine are expressed as:
$$ C_{i,k} = \frac{w_i + k}{v_i}, \quad i \in \{1, \dots, m\}, \; k \in \{1, \dots, q_i\} \tag{5} $$
All quasi-optimal solutions are sorted in non-decreasing order and denoted $C_{[1]} \le C_{[2]} \le \dots \le C_{[\sum_i q_i]}$, with $C_{[0]} = LB_1$, the lower bound obtained from Equation (1). The improved lower bound is the quasi-optimal solution at position $\lceil P - W \rceil$. Therefore, the best lower bound for Qm||Cmax is the maximum of the calculated bounds:
$$ LB = \max\left( LB_1, LB_2, C_{[\lceil P - W \rceil]} \right) \tag{6} $$
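Under our reading of the floor and ceiling placement in these formulas (the rendered brackets are lost in the source), the three bounds and their maximum could be sketched as follows. This is a hedged illustration, not the authors' code; the function name and argument layout are our own:

```python
import math

def lower_bounds(p1, v, c_lpt):
    """p1[j]: processing time of job j on the slowest machine, with p1[0]
    the longest job; v[i]: machine speeds; c_lpt: LPT makespan (upper bound).
    Returns max(LB1, LB2, improved bound) per Equations (1)-(6)."""
    P, V = sum(p1), sum(v)
    lb1 = P / V                          # Equation (1)
    lb2 = p1[0] * min(v) / max(v)        # Equation (2): longest job, fastest machine
    w = [math.floor(vi * lb1) for vi in v]
    W = sum(w)
    # all quasi-optimal completion times C_{i,k}, Equation (5), sorted ascending
    cs = sorted((w[i] + k) / v[i]
                for i in range(len(v))
                for k in range(1, math.ceil(v[i] * c_lpt - w[i]) + 1))
    pos = math.ceil(P - W)
    c_imp = cs[pos - 1] if 1 <= pos <= len(cs) else lb1
    return max(lb1, lb2, c_imp)          # Equation (6)
```

For two identical machines with jobs of 4 and 2 time units and C_LPT = 4, the job-based bound LB2 = 4 dominates.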

3.2. Mathematical Formulation

This section presents the mathematical formulation for Qm||Cmax given by De and Morton [34]. Due to the NP-hard nature of the problem, this formulation can solve instances with a small number of machines and jobs to optimality in a reasonable time; as the problem size increases, the solution time grows exponentially. A key benefit of presenting the mathematical formulation is that it allows an efficient evaluation of the proposed VNS algorithms’ performance across a range of problem sizes. The notation of the mathematical formulation is presented in Table 1.
Objective function:
$$ \min \; C_{max} \tag{7} $$
Constraints:
$$ C_{max} \ge \sum_{j \in J} p_{ij} x_{ij}, \quad \forall i \in M \tag{8} $$
$$ \sum_{i \in M} x_{ij} = 1, \quad \forall j \in J \tag{9} $$
$$ x_{ij} \in \{0, 1\}, \quad \forall i \in M, \; j \in J \tag{10} $$
The objective function in Equation (7) minimizes the makespan. Constraint (8) ensures that the makespan is at least the total processing time assigned to each machine. Constraint (9) ensures that each job is assigned to exactly one machine. Constraint (10) defines the binary decision variables.
To impose valid bounds on the makespan, constraint (11) was added to the mathematical formulation presented by De and Morton [34]:
$$ LB \le C_{max} \le UB \tag{11} $$
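The paper solves this MILP with CPLEX. Purely to illustrate the formulation's semantics, the assignment variables x_ij of (7)–(10) can be enumerated by brute force for tiny instances; this sketch is our own and is not the authors' implementation:

```python
from itertools import product

def exact_makespan(p, v):
    """Brute-force counterpart of MILP (7)-(10): enumerate every assignment
    satisfying (9) (each job on exactly one machine) and return the minimum
    makespan. Exponential in the number of jobs, like the exact model it
    mirrors, so only usable on very small instances.
    p[j]: processing time on the slowest machine; v[i]: machine speed."""
    m, v1 = len(v), min(v)
    best = float('inf')
    for assign in product(range(m), repeat=len(p)):  # machine of each job
        loads = [0.0] * m
        for j, i in enumerate(assign):
            loads[i] += p[j] * v1 / v[i]             # p_ij = p_j * v1 / v_i
        best = min(best, max(loads))                 # constraint (8) + objective (7)
    return best
```

With p = [4, 4] and speeds v = [1, 2], the optimum is 4 (either split the jobs or put both on the fast machine).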

3.3. Variable Neighborhood Search Algorithms

The variable neighborhood search (VNS) algorithm, developed by Mladenović and Hansen in 1997 [43], is a metaheuristic that aims to escape local optima by applying different neighborhood structures, $N_k(S)$, $k \in \{1, \dots, k_{max}\}$, to an initial solution S. Starting from the first neighborhood structure, N1, if an improvement to the current solution is found within N1, the algorithm updates the solution and restarts the exploration from N1. If no improvement is found in N1, the algorithm moves on to the next neighborhood structure, N2. This process continues until all neighborhood structures have been explored without finding an improvement, at which point the algorithm terminates. The choice of initial solution method and neighborhood structures significantly impacts the quality of the VNS algorithm’s solutions. Therefore, when the initial solution is generated randomly or by a heuristic with some randomization, it is recommended to run the VNS algorithm multiple times; the best solution among all iterations is then selected as the final result.
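The loop just described can be written generically. This is an illustrative skeleton, not the paper's implementation; the convention that a neighborhood callback returns an improved solution or None is our own assumption:

```python
def vns(initial, neighborhoods):
    """Generic VNS loop: try N1 first; on improvement, accept the new
    solution and restart from N1; otherwise advance to the next structure.
    Terminate once every structure fails to improve the current solution.
    Each entry of `neighborhoods` maps a solution to a strictly better
    one, or returns None when no improving move exists."""
    s, k = initial, 0
    while k < len(neighborhoods):
        improved = neighborhoods[k](s)
        if improved is not None:
            s, k = improved, 0   # restart from the first neighborhood
        else:
            k += 1               # move to the next structure
    return s
```

As a toy check, a single "decrement while positive" neighborhood drives any starting integer down to 0.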
The variable neighborhood search algorithm is a single-solution-based metaheuristic that iteratively improves one current solution by exploring its neighborhoods. One advantage of the VNS algorithm is its ease of implementation and versatility, which allow it to be applied to a wide range of complex optimization problems. Another advantage lies in its systematic use of neighborhood structures, which can lead to fast convergence to good solutions and potentially shorter computational times. Other metaheuristics, such as the genetic algorithm and simulated annealing, require careful parameter tuning and can be slower: the GA must evaluate every solution in its population, and simulated annealing may converge more slowly than the VNS algorithm. Nevertheless, VNS has some disadvantages; the main one arises when the neighborhoods fail to improve the current solution and the algorithm gets stuck in a local optimum [28].
The flowchart of the proposed VNS algorithms is shown in Figure 1.

3.3.1. Initial Solution

This paper introduces two versions of the VNS algorithm. The first utilizes the longest processing time (LPT) rule to generate an initial solution. The second adds randomization to the LPT rule; the randomized LPT heuristic is denoted RLPT. Both versions of the VNS algorithm apply five neighborhood structures, which are explained in the following subsection.
The LPT rule prioritizes jobs in a non-increasing order of processing time and then assigns each job to the first available machine [7]. The proposed RLPT rule works by iteratively assigning jobs to machines. At each iteration, it randomly selects one of the two jobs with the largest processing time and assigns that job to the machine with the minimum available time. The process continues until all jobs have been assigned to machines.
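One possible reading of the RLPT rule in code follows; this sketch is our own (the random source, tie-breaking, and data layout are assumptions, not the authors' specification):

```python
import random

def rlpt(p, v, rng=random):
    """Randomized LPT: at each step, pick one of the two longest
    unassigned jobs at random and place it on the machine with the
    minimum available (completion) time.
    p[j]: processing time on the slowest machine; v[i]: machine speed."""
    m, v1 = len(v), min(v)
    remaining = sorted(range(len(p)), key=lambda j: -p[j])  # longest first
    completion = [0.0] * m
    schedule = [[] for _ in range(m)]
    while remaining:
        # choose randomly among the (up to) two longest remaining jobs
        j = remaining.pop(rng.randrange(min(2, len(remaining))))
        i = min(range(m), key=lambda i: completion[i])
        schedule[i].append(j)
        completion[i] += p[j] * v1 / v[i]
    return schedule, max(completion)
```

Running it several times typically yields different schedules, which is why the RVNS version performs multiple iterations.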

3.3.2. Neighborhood Structures

Neighborhood structures attempt to improve the current solution by relocating jobs to machines. In the uniform parallel machine problem with the makespan objective, each neighborhood structure focuses on reducing the completion time of a machine that determines the overall makespan. This machine is referred to as a “problem machine (pm)”. The other machines with completion times less than the makespan are called “non-problem machines (npm)”.
Five neighborhood structures (kmax) are proposed in this paper. Each neighborhood structure (k) is detailed as follows:
  • Move (j): a job j is moved from pm to npm if the processing time of job j on the non-problem machine, p_npm,j, is less than the difference in completion times between pm and npm, i.e., p_npm,j < (C_pm − C_npm).
  • Exchange (j, k): a job j from pm is exchanged with a job k from npm if the resulting completion time of each machine stays below the makespan of pm, C_pm. That is, (C_pm − p_pm,j + p_pm,k) < C_pm and (C_npm − p_npm,k + p_npm,j) < C_pm.
  • Exchange (jk, l): two jobs, j and k, from pm are exchanged with one job, l, from npm if (C_pm − p_pm,j − p_pm,k + p_pm,l) < C_pm and (C_npm − p_npm,l + p_npm,j + p_npm,k) < C_pm.
  • Exchange (j, kl): one job, j, from pm is exchanged with two jobs, k and l, from npm if (C_pm − p_pm,j + p_pm,k + p_pm,l) < C_pm and (C_npm − p_npm,k − p_npm,l + p_npm,j) < C_pm.
  • Exchange (jk, lt): two jobs, j and k, from pm are exchanged with two jobs, l and t, from npm if (C_pm − p_pm,j − p_pm,k + p_pm,l + p_pm,t) < C_pm and (C_npm − p_npm,l − p_npm,t + p_npm,j + p_npm,k) < C_pm.
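All five acceptance conditions reduce to simple inequality checks on completion times; a minimal sketch with our own helper names (illustrative, not the authors' code):

```python
def move_ok(p_npm_j, c_pm, c_npm):
    """Move(j): job j may leave the problem machine for a non-problem
    machine when its processing time there is less than the completion-time
    gap C_pm - C_npm."""
    return p_npm_j < c_pm - c_npm

def exchange_ok(c_pm, c_npm, out_pm, in_pm, out_npm, in_npm):
    """Generalized check covering Exchange(j,k) through Exchange(jk,lt):
    out_*/in_* are the summed processing times leaving/entering each
    machine. Both new completion times must stay below the current
    makespan C_pm."""
    return (c_pm - out_pm + in_pm < c_pm) and (c_npm - out_npm + in_npm < c_pm)
```

For instance, with C_pm = 10 and C_npm = 8, swapping 4 units off the problem machine for 2 units is accepted only if the non-problem machine also stays below 10 after the swap.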

3.3.3. Example

Consider three uniform parallel machines with six jobs. The processing times for each job on each machine and the speed of each machine are presented in Table 2.
The lower bounds were calculated as LB1 = P/V = 4070.511 and LB2 = p3,1 = 2100, and the upper bound as CLPT = 4700. The Gantt chart for the LPT schedule is shown in Figure 2.
An improvement on LB1 is obtained by first calculating the largest workload on each machine that keeps the completion time of its last job no more than LB1: w_1 = ⌊v_1 × LB_1⌋ = ⌊1 × 4070.511⌋ = 4070, w_2 = ⌊1.3382 × 4070.511⌋ = 5447, and w_3 = 7408. Therefore, the total assigned workload is W = 16,925. The numbers of quasi-optimal solutions between LB1 and C_LPT for each machine are q_1 = ⌈v_1 × C_LPT − w_1⌉ = ⌈1 × 4700 − 4070⌉ = 630, q_2 = ⌈1.3382 × 4700 − 5447⌉ = 843, and q_3 = 1164. The position of the improved lower bound is the index ⌈P − W⌉ = ⌈16,926 − 16,925⌉ = 1, so one quasi-optimal solution is calculated for each machine. The quasi-optimal solutions for machines 1, 2, and 3 are C_1,1 = (w_1 + k)/v_1 = (4070 + 1)/1 = 4071, C_2,1 = 4071.14, and C_3,1 = 4070.879, respectively. Arranging all quasi-optimal solutions in non-decreasing order, starting with C_[0] = LB_1, the improved lower bound is C_[⌈P−W⌉] = C_[1] = 4070.879. The best lower bound among the calculated lower bounds is LB = max(LB_1, LB_2, C_[⌈P−W⌉]) = 4070.879.
Applying the LVNS algorithm (as detailed in Figure 1), we found that only the exchange (jk) neighborhood structure could be applied to the LPT schedule. This neighborhood structure allowed the swapping of jobs between machines. In this case, job 1 on problem machine 3 was exchanged with job 3 on non-problem machine 1. The resulting schedule is shown in Figure 3, with a makespan of 4200. This makespan obtained by the LVNS algorithm is equal to the optimal solution found by the mathematical formulation presented in Section 3.2.

4. Computational Results and Discussion

The mathematical formulation and the proposed VNS algorithms were coded in C++ using the CPLEX 12.10 optimization library. The experiments were conducted on an Intel® Core i7-4930K CPU @ 3.40 GHz with 34.0 GB of RAM (Intel Corporation, Santa Clara, CA, USA). A set of instances was generated to evaluate the performance of the proposed algorithms. In the data generation process, four parameters were considered:
(i) The number of machines, m.
(ii) The ratio of the number of jobs, n, to the number of machines, m (n/m).
(iii) The processing time of each job j on the fastest machine m, p_m,j.
(iv) The speed factor of each machine i relative to the fastest machine, S_i.
Four numbers of machines were selected: m ∈ {3, 4, 5, 10}. The ratios considered were n/m ∈ {2, 3, 4, 5, 10, 20, 30}. The processing times of the jobs on the fastest machine were generated from a discrete uniform distribution, U[1, Pmax], with three levels of the maximum processing time: Pmax ∈ {25, 50, 100}. Finally, the speed factor of each machine relative to the fastest machine, Si, was generated from a uniform distribution on the interval [1, Smax], with three levels: Smax ∈ {3, 5, 7}; each generated speed value was rounded to two decimal digits. Ten instances were generated for each combination of (m, n/m, Pmax, Smax). Consequently, the total number of instances generated was 2520.
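The generation scheme above can be sketched as follows. This is illustrative code following our reading of the text (the rounding step and the function's interface are our own assumptions):

```python
import random

def generate_instance(m, ratio, p_max, s_max, rng=random):
    """One instance of the scheme described above: n = ratio * m jobs with
    processing times on the fastest machine drawn from discrete U[1, p_max],
    and per-machine speed factors drawn from U[1, s_max], rounded to two
    decimal places."""
    n = ratio * m
    p_fast = [rng.randint(1, p_max) for _ in range(n)]
    speeds = [round(rng.uniform(1, s_max), 2) for _ in range(m)]
    return p_fast, speeds
```

Looping this over all combinations of (m, n/m, Pmax, Smax) with ten replicates reproduces the 2520-instance design.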
The heuristics for generating the initial solution (LPT and RLPT) were compared first. Then, the VNS algorithms were evaluated against GA and TS from the literature. After that, the LVNS algorithm was compared with the RVNS algorithm on the generated instances. The relative percentage deviation (RPD) for the makespan of the VNS algorithms, C 1 , was evaluated against the optimal solution or the lower bound, C 2 , as shown in Equation (12):
$$ RPD = \frac{C_1 - C_2}{C_2} \times 100 \tag{12} $$
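Equation (12) is a one-line computation; the helper name below is our own:

```python
def rpd(c_alg, c_ref):
    """Relative percentage deviation of an algorithm's makespan c_alg
    from the reference value c_ref (optimal solution or lower bound),
    as in Equation (12)."""
    return (c_alg - c_ref) / c_ref * 100
```

A makespan of 6 against a reference of 4 gives an RPD of 50%, and matching the reference gives 0%.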
The comparison of the VNS algorithms was carried out against the mathematical formulation for instances with up to 20 jobs. However, for the remaining instances, the VNS algorithms were evaluated against the lower bound.
The parameters of the VNS algorithms were the number of neighborhood structures and the number of iterations. The number of neighborhood structures was set to five; these neighborhoods are described in Section 3.3.2. The number of iterations was set to one for the LVNS algorithm and ten for the RVNS algorithm. The LVNS algorithm requires only one iteration since every iteration starts from the same initial schedule, whereas each iteration of the RVNS algorithm can generate a different schedule. To determine the number of iterations for the RVNS algorithm, we balanced the trade-off between solution quality and computational time: we solved all the generated instances with 100 replicates and recorded the best solution among these replicates for each instance. In almost 95% of the instances, the best solution was reached within 10 iterations. Therefore, we adopted 10 iterations to obtain good solutions with minimal computation time. The computational results are presented in the following subsections.

4.1. Comparison of the LPT Rule and RLPT Rule

In this section, we focus on the LPT and RLPT algorithms, investigating their performance in generating initial solutions for the VNS algorithm. Table 3 presents the relative percent deviation (RPD) for both algorithms.
The results in Table 3 demonstrate that the RLPT algorithm outperformed the LPT algorithm in terms of the RPD. In almost all combinations of (m, n) and (S, p), the RLPT algorithm had a better average RPD than the LPT algorithm. Considering each combination of (S, p), the maximum overall average RPD across all machine–job combinations (m, n) for the RLPT algorithm was 1.470, compared with 1.856 for the LPT algorithm. This indicates that the randomization introduced into the LPT algorithm helped minimize the makespan and provided better schedules. Additionally, for both algorithms, the RPD tended to decrease as the job-to-machine ratio (n/m) increased.
Furthermore, for each processing time interval, the overall average RPD for both algorithms decreased as the interval of the speed of machines increased, i.e., from S~[1, 3] to S~[1, 7]. There was one exception: for the LPT rule with processing times in p~[1, 50], the overall average RPD increased slightly, by about 1.73%, when the speed range went from S~[1, 3] to S~[1, 5]. However, it then decreased as the speed range went to S~[1, 7]. Figure 4 shows a visual representation of the overall average RPD for the LPT and RLPT algorithms for each combination of machine speed and processing time intervals.
To evaluate the performance of LPT and RLPT for each combination of (m, n/m, Pmax, Smax), we further analyzed the results over the ten instances generated for each combination. Table 4 and Figure 5 summarize the percentage of times each algorithm obtained a better, worse, or equal makespan compared to the other. The results show the superior performance of the RLPT heuristic over the LPT heuristic across all processing time and machine speed intervals. Across all instance combinations, the percentage of times the RLPT heuristic provided a better makespan than the LPT heuristic was at least 33.2% and reached up to 62.5%, whereas the percentage of times LPT outperformed RLPT never exceeded 24.3%. The proportion of cases in which both algorithms reached the same solution was high, up to 51.1% for instances with processing times between 1 and 25, but decreased to 16.1% as the processing time interval increased.
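On uniform machines, the LPT rule assigns jobs in non-increasing order of processing requirement, each to the machine on which it would finish earliest. The sketch below implements this rule together with one plausible randomization of the job-selection step (drawing uniformly among the k longest unscheduled jobs); the paper's exact RLPT randomization is described in Section 3 and may differ.

```python
import random

def lpt(p, s):
    """LPT on uniform machines: longest job first, placed on the machine that
    would complete it earliest. p: processing requirements; s: machine speeds."""
    loads = [0.0] * len(s)
    assign = []
    for job in sorted(p, reverse=True):
        i = min(range(len(s)), key=lambda m: loads[m] + job / s[m])
        loads[i] += job / s[i]
        assign.append((job, i))
    return max(loads), assign

def rlpt(p, s, k=3, rng=random):
    """Illustrative randomized LPT: pick the next job among the k longest remaining."""
    remaining = sorted(p, reverse=True)
    loads = [0.0] * len(s)
    while remaining:
        job = remaining.pop(rng.randrange(min(k, len(remaining))))
        i = min(range(len(s)), key=lambda m: loads[m] + job / s[m])
        loads[i] += job / s[i]
    return max(loads)
```

On the nine-job example of Table 5 (requirements 30, 26, 24, 22, 20, 18, 16, 14, 14 and speeds 1, 2, 4, 5), this LPT sketch yields a makespan of 17.2, above the example's optimum of 15.6, which is the kind of gap the VNS improvement step then closes.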

4.2. Performance of the LVNS and RVNS Algorithms

This section presents a comparative analysis of the two proposed variable neighborhood search (VNS) algorithms: the LVNS algorithm, which employed the LPT heuristic as an initial solution, and the RVNS algorithm, which employed the RLPT heuristic as an initial solution. First, the proposed VNS algorithms were compared with GA and TS on a numerical example from the literature. Then, the performance of LVNS and RVNS was evaluated on the instances generated in this study.
Table 5 presents a numerical example from Balin [21] involving nine jobs to be scheduled on four uniform parallel machines. The table details the processing times of each job, listed in descending order, and the speed of each machine.
The makespan obtained by the LVNS, RVNS, GA, and TS algorithms, the schedule for each algorithm, and the maximum iterations are presented in Table 6.
Table 6 shows that the proposed VNS algorithms (LVNS and RVNS) and the TS algorithm achieved the minimum makespan (15.6), which is the optimal solution for this example as found by the mathematical formulation. GA obtained a suboptimal makespan of 16 within 12 iterations, as reported by Balin [21]. While the maximum number of iterations used for the TS algorithm was 5000, the authors did not specify the number of iterations required to reach the optimal solution. In contrast, the LVNS algorithm obtained the optimal solution in one iteration, and RVNS reached the optimal solution more than once within ten iterations. For both the GA and TS algorithms, the time taken to reach the best solution was not reported by the authors.
Interestingly, both LVNS and RVNS algorithms converged to the optimal solution, with execution times of 0.0012 s and 0.0069 s, respectively.
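The LVNS and RVNS schedules reported in Table 6 can be verified directly from the data of Table 5, dividing each job's base processing requirement by the speed of its assigned machine:

```python
# Base processing requirements from Table 5 (speed-1 machine), keyed by job number.
p = {4: 30, 8: 26, 3: 24, 7: 22, 6: 20, 1: 18, 5: 16, 2: 14, 9: 14}
s = {1: 1, 2: 2, 3: 4, 4: 5}  # machine speeds

def makespan(schedule):
    """Makespan of a schedule given as machine -> list of assigned jobs."""
    return max(sum(p[j] for j in jobs) / s[i] for i, jobs in schedule.items())

# Schedules from Table 6.
lvns = {1: [2], 2: [4], 3: [5, 6, 8], 4: [1, 3, 7, 9]}
rvns = {1: [9], 2: [4], 3: [5, 6, 8], 4: [1, 2, 3, 7]}

print(makespan(lvns), makespan(rvns))  # → 15.6 15.6, the optimal makespan
```

Machine loads come out as 14, 15, 15.5, and 15.6 for both schedules, matching the completion times listed in Table 6.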
Since both the GA and TS algorithms were tested only on a small numerical example with nine jobs and four uniform parallel machines, we further tested the performance of the proposed VNS algorithms on large instances with up to 10 machines and 300 jobs.
The findings of the comparison between the LVNS and RVNS algorithms on the instances generated in this study can be analyzed in two parts: instances with up to 20 jobs and instances with more than 20 jobs.
For instances with up to 20 jobs, Table 7 compares the number of times each of the two VNS algorithms, LVNS and RVNS, found the optimal solution identified by the mathematical formulation presented in Section 3.2.
Table 7 demonstrates that, for each combination of (n, m) and (S, p) comprising 10 instances, the RVNS algorithm generally outperformed the LVNS algorithm, reaching the optimal solution more frequently. On average, for each combination of (S, p) across all machine–job combinations (m, n), the LVNS algorithm found the optimal solution in 47.5% to 77.5% of the instances, whereas the RVNS algorithm did so in 85.8% to 94.2%. Overall, considering all instances with up to 20 jobs, the LVNS algorithm found the optimal solution in 64.35% of cases, while RVNS achieved a significantly higher 90.19%. This indicates that RVNS was more effective at finding better schedules for small-sized problems.
The relative percent deviation (RPD) for each of the VNS algorithms across all instances is presented in Table 8. The table shows that, for instances with a ratio of n/m ≤ 5, the RVNS algorithm generally achieved a lower average RPD than the LVNS algorithm, with the two algorithms performing equally in a few cases. The table also reveals only one combination in which the LVNS algorithm achieved an RPD of 0 while the RVNS algorithm achieved an RPD of 0.1.
Conversely, for instances with a ratio of n / m > 5 , both algorithms exhibited similar RPD values. This suggests that the initial solution had a significant impact on RPD for small-sized problems. However, this impact diminished as the size of the problem increased.
Figure 6 depicts the average RPD of the LVNS and RVNS algorithms across all machine–job combinations for the different machine speed and processing time intervals. The RVNS algorithm achieved both a lower minimum average RPD (0.080) and a lower maximum average RPD (0.243) than the LVNS algorithm (minimum: 0.240, maximum: 0.479). The average RPD across all instances was also lower for the RVNS algorithm (about 0.15) than for the LVNS algorithm (about 0.32).
To evaluate the performance of each algorithm in obtaining a better, worse, or equal makespan compared to the other, we analyzed the 10 instances of each combination of m, n/m, Pmax, and Smax. Table 9 shows that the percentage of instances in which the LVNS algorithm obtained a lower makespan than RVNS did not exceed 1.1% across all instance combinations. Conversely, the minimum percentage of instances in which the RVNS algorithm outperformed the LVNS algorithm was 11.1%; this percentage grew with the processing time interval, reaching a maximum of 28.6%. The case in which both algorithms obtained an equal makespan was the most frequent, ranging from 70.4% up to 88.6%. Figure 7 visualizes the outperformance of each VNS algorithm.
Table 10 shows the average CPU time (in seconds) for the LVNS and RVNS algorithms. Overall, RVNS generally required more CPU time than LVNS. This is largely because only one iteration was run per instance with LVNS, whereas ten iterations were run with RVNS. The extra time for RVNS can also stem from its randomized initial solutions, which required visiting the neighborhood structures many times to improve the schedule. Whenever an improvement occurred, the VNS algorithm re-evaluated all machines and neighborhood structures until no further improvement was possible, at which point it terminated. The average CPU time for LVNS was consistently 0.001 s, while for RVNS it varied between 0.155 s and 0.311 s.
As observed in the preceding comparison, RVNS outperformed LVNS on instances with up to 20 jobs, while the two algorithms performed equally when the job-to-machine ratio n/m exceeded 5. This suggests that applying the RVNS algorithm to small instances is beneficial despite the slightly higher CPU time. For instances with a job-to-machine ratio greater than 5, however, RVNS incurred a higher CPU time yet achieved the same average RPD as the much faster LVNS algorithm.
Figure 8 examines the reduction in CPU time obtained by supplying the LVNS solution as an upper bound when solving the mathematical formulation with CPLEX, for the instances with at most 20 jobs. The figure shows that using the VNS solution as an upper bound reduced the CPU time required to solve the scheduling problem, with minimum and maximum percentage reductions of 13.49% and 41.44%, respectively. This improvement occurred because the upper bound gave CPLEX a tighter estimate of the optimal solution, allowing it to prune more irrelevant branches of the search tree and converge more efficiently. Note, however, that for some instances, specifying an upper bound for CPLEX does not guarantee a reduction in CPU time compared to solving without one.
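The pruning effect described above can be reproduced with a toy exact search (a plain depth-first branch and bound, not CPLEX): any partial assignment whose machine load already reaches the incumbent upper bound is cut off, so seeding the search with a heuristic makespan such as the LPT or VNS value shrinks the tree. The code below illustrates the mechanism only; it is not the CPLEX model of Section 3.2.

```python
def branch_and_bound(p, s, ub=float('inf')):
    """Exact minimum-makespan search on uniform machines.

    p: base processing requirements (sorted longest-first for stronger pruning);
    s: machine speeds; ub: initial upper bound (e.g., from an LPT/VNS schedule).
    Returns (best makespan found, number of explored nodes)."""
    jobs = sorted(p, reverse=True)
    loads = [0.0] * len(s)
    best = [ub]
    nodes = [0]

    def dfs(k):
        nodes[0] += 1
        if k == len(jobs):
            best[0] = min(best[0], max(loads))
            return
        for i in range(len(s)):
            new = loads[i] + jobs[k] / s[i]
            if new < best[0] - 1e-9:  # prune branches that cannot improve the incumbent
                loads[i] = new
                dfs(k + 1)
                loads[i] -= jobs[k] / s[i]

    dfs(0)
    return best[0], nodes[0]

p = [30, 26, 24, 22, 20, 18, 16, 14, 14]   # the Table 5 instance
s = [1, 2, 4, 5]
loose = branch_and_bound(p, s)              # no initial upper bound
tight = branch_and_bound(p, s, ub=17.2)     # seeded with the LPT makespan
print(tight[0], tight[1] <= loose[1])       # optimal makespan (≈15.6); never more nodes
```

Any node pruned under the tighter bound can only contain schedules at least as long as the incumbent, so seeding the bound never changes the optimum, only the amount of search.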

5. Conclusions and Future Work

In this paper, we studied the scheduling problem of uniform parallel machines with a makespan objective. Two versions of the variable neighborhood search (VNS) algorithm were proposed, differing in their initial solution generation strategy. The first version (LVNS) utilized the longest processing time (LPT) rule for generating the initial solution, while the second version (RVNS) introduced a new variant of the LPT rule that incorporates randomization in job selection. The proposed VNS algorithms were compared with a genetic algorithm and a tabu search on a numerical example from the literature; the results showed that the proposed VNS algorithms were competitive and obtained the optimal solution with much less effort. Furthermore, a mathematical formulation and a lower bound were presented to assess the performance of the initial solution heuristics and the two VNS algorithms on randomly generated instances. Computational results showed that the randomized longest processing time rule outperformed the longest processing time rule. In addition, the RVNS algorithm outperformed the LVNS algorithm, finding the optimal solution in 90.19% of small-sized instances and achieving an average relative percent deviation of about 0.15% across all instances; on large-sized instances, both algorithms performed equally. We also demonstrated that incorporating an upper bound in the mathematical formulation can significantly reduce the solution time of the CPLEX solver. This research opens several avenues for future work. An efficient exact formulation, such as an arc flow formulation, could be developed for the studied uniform parallel machine problem with a makespan objective, leveraging the VNS algorithm's solutions as an estimate of the upper bound. Furthermore, future work could explore incorporating machine learning into the VNS algorithms to predict promising neighborhood structures during local search, leading to better convergence.
Additionally, the VNS algorithm can be extended to handle multiple, competing objectives like minimizing makespan, flow time, tardiness, or energy consumption, making it more relevant to real-world scheduling scenarios with various objectives.

Author Contributions

Conceptualization, K.B. and A.G.; methodology, K.B. and A.G.; software, K.B.; validation, K.B. and A.G.; formal analysis, K.B. and A.G.; investigation, K.B. and A.G.; resources, K.B. and A.G.; data curation, K.B. and A.G.; writing—original draft preparation, K.B.; writing—review and editing, K.B. and A.G.; visualization, K.B. and A.G.; supervision, A.G.; project administration, A.G.; funding acquisition, A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Researchers Supporting Project (number RSPD2024R1039), King Saud University, Riyadh, Saudi Arabia.

Data Availability Statement

The data are available upon request from the corresponding author.

Acknowledgments

The authors extend their appreciation to the Researchers Supporting Project (number RSPD2024R1039), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lee, J.-H.; Jang, H. Uniform parallel machine scheduling with dedicated machines, job splitting and setup resources. Sustainability 2019, 11, 7137. [Google Scholar] [CrossRef]
  2. Huang, J.; Wu, J.; Tang, Y.; Hao, Y. The influences of openness on China’s industrial CO2 intensity. Environ. Sci. Pollut. Res. 2020, 27, 15743–15757. [Google Scholar] [CrossRef] [PubMed]
  3. Graham, R.L.; Lawler, E.L.; Lenstra, J.K.; Kan, A.R. Optimization and approximation in deterministic sequencing and scheduling: A survey. In Annals of Discrete Mathematics; Elsevier: Amsterdam, The Netherlands, 1979; Volume 5, pp. 287–326. [Google Scholar]
  4. Garey, M.R.; Johnson, D.S. Computers and Intractability; Freeman: San Francisco, CA, USA, 1979; Volume 174. [Google Scholar]
  5. Li, K.; Leung, J.T.; Cheng, B.Y. An agent-based intelligent algorithm for uniform machine scheduling to minimize total completion time. Appl. Soft Comput. 2014, 25, 277–284. [Google Scholar] [CrossRef]
  6. Graham, R.L. Bounds on multiprocessing timing anomalies. SIAM J. Appl. Math. 1969, 17, 416–429. [Google Scholar] [CrossRef]
  7. Gonzalez, T.; Ibarra, O.H.; Sahni, S. Bounds for LPT schedules on uniform processors. SIAM J. Comput. 1977, 6, 155–166. [Google Scholar] [CrossRef]
  8. Della Croce, F.; Scatamacchia, R. The longest processing time rule for identical parallel machines revisited. J. Sched. 2020, 23, 163–176. [Google Scholar] [CrossRef]
  9. Friesen, D.K. Tighter bounds for LPT scheduling on uniform processors. SIAM J. Comput. 1987, 16, 554–560. [Google Scholar] [CrossRef]
  10. Mireault, P.; Orlin, J.B.; Vohra, R.V. A parametric worst case analysis of the LPT heuristic for two uniform machines. Oper. Res. 1997, 45, 116–125. [Google Scholar] [CrossRef]
  11. Koulamas, C.; Kyparisis, G.J. A modified LPT algorithm for the two uniform parallel machine makespan minimization problem. Eur. J. Oper. Res. 2009, 196, 61–68. [Google Scholar] [CrossRef]
  12. Massabò, I.; Paletta, G.; Ruiz-Torres, A.J. A note on longest processing time algorithms for the two uniform parallel machine makespan minimization problem. J. Sched. 2016, 19, 207–211. [Google Scholar] [CrossRef]
  13. Mitsunobu, T.; Suda, R.; Suppakitpaisarn, V. Worst-case analysis of LPT scheduling on a small number of non-identical processors. Inf. Process. Lett. 2024, 183, 106424. [Google Scholar] [CrossRef]
  14. Friesen, D.K.; Langston, M.A. Bounds for multifit scheduling on uniform processors. SIAM J. Comput. 1983, 12, 60–70. [Google Scholar] [CrossRef]
  15. Chen, B. Tighter bound for MULTIFIT scheduling on uniform processors. Discret. Appl. Math. 1991, 31, 227–260. [Google Scholar] [CrossRef]
  16. Li, K.; Zhang, S.-C. Heuristics for uniform parallel machine scheduling problem with minimizing makespan. In Proceedings of the 2008 IEEE International Conference on Automation and Logistics, Qingdao, China, 1–3 September 2008; IEEE: New York, NY, USA, 2018; pp. 273–278. [Google Scholar]
  17. Sivasankaran, P.; Kumar, M.R.; Senthilkumar, P.; Panneerselvam, R. Heuristic to minimize makespan in uniform parallel machines scheduling problem. Udyog Pragati 2009, 33, 1–15. [Google Scholar]
  18. Li, K.; Yang, S. Heuristic algorithms for scheduling on uniform parallel machines with heads and tails. J. Syst. Eng. Electron. 2011, 22, 462–467. [Google Scholar] [CrossRef]
  19. De Giovanni, D.; Ho, J.C.; Paletta, G.; Ruiz-Torres, A.J. Heuristics for Scheduling Uniform Machines. In Proceedings of the International MultiConference of Engineers and Computer Scientists, Hong Kong, China, 14–16 March 2018; Volume 2. [Google Scholar]
  20. Song, J.; Miao, C.; Kong, F. Uniform-machine scheduling problems in green manufacturing system. Math. Found. Comput. 2024. [Google Scholar] [CrossRef]
  21. Balin, S. Non-identical parallel machine scheduling using genetic algorithm. Expert Syst. Appl. 2011, 38, 6814–6821. [Google Scholar] [CrossRef]
  22. Noman, M.A.; Alatefi, M.; Al-Ahmari, A.M.; Ali, T. Tabu Search Algorithm Based on Lower Bound and Exact Algorithm Solutions for Minimizing the Makespan in Non-Identical Parallel Machines Scheduling. Math. Probl. Eng. 2021, 2021, 1856734. [Google Scholar] [CrossRef]
  23. Li, K.; Cheng, B.-Y. Variable neighborhood search for uniform parallel machine makespan scheduling problem with release dates. In Proceedings of the 2010 International Symposium on Computational Intelligence and Design, Hangzhou, China, 29–31 October 2010; IEEE: New York, NY, USA, 2010; Volume 2, pp. 43–46. [Google Scholar]
  24. Sevkli, M.; Uysal, H. A modified variable neighborhood search for minimizing the makespan on identical parallel machines. In Proceedings of the 2009 International Conference on Computers & Industrial Engineering, Troyes, France, 6–9 July 2009; IEEE: New York, NY, USA, 2009; pp. 108–111. [Google Scholar]
  25. Chen, J.; Li, J.-Q. Efficient variable neighborhood search for identical parallel machines scheduling. In Proceedings of the 31st Chinese Control Conference, Hefei, China, 25–27 July 2012; IEEE: New York, NY, USA, 2012; pp. 7228–7232. [Google Scholar]
  26. Alharkan, I.; Bamatraf, K.; Noman, M.A.; Kaid, H.; Nasr, E.S.A.; El-Tamimi, A.M. An order effect of neighborhood structures in variable neighborhood search algorithm for minimizing the makespan in an identical parallel machine scheduling. Math. Probl. Eng. 2018, 2018, 3586731. [Google Scholar] [CrossRef]
  27. Gharbi, A.; Bamatraf, K. An Improved Arc Flow Model with Enhanced Bounds for Minimizing the Makespan in Identical Parallel Machine Scheduling. Processes 2022, 10, 2293. [Google Scholar] [CrossRef]
  28. Cheng, W.; Guo, P.; Zhang, Z.; Zeng, M.; Liang, J. Variable neighborhood search for parallel machines scheduling problem with step deteriorating jobs. Math. Probl. Eng. 2012, 2012, 928312. [Google Scholar] [CrossRef]
  29. Senthilkumar, P.; Narayanan, S. GA Based Heuristic to Minimize Makespan in Single Machine Scheduling Problem with Uniform Parallel Machines. Intell. Inf. Manag. 2011, 3, 204–214. [Google Scholar] [CrossRef]
  30. Senthilkumar, P.; Narayanan, S. Simulated annealing algorithm to minimize makespan in single machine scheduling problem with uniform parallel machines. Intell. Inf. Manag. 2011, 3, 22. [Google Scholar] [CrossRef]
  31. Kaabi, J. Modeling and solving scheduling problem with m uniform parallel machines subject to unavailability constraints. Algorithms 2019, 12, 247. [Google Scholar] [CrossRef]
  32. Kim, H.-J.; Lee, J.-H. Scheduling uniform parallel dedicated machines with job splitting, sequence-dependent setup times, and multiple servers. Comput. Oper. Res. 2021, 126, 105115. [Google Scholar] [CrossRef]
  33. Horowitz, E.; Sahni, S. Exact and approximate algorithms for scheduling nonidentical processors. J. ACM 1976, 23, 317–327. [Google Scholar] [CrossRef]
  34. De, P.; Morton, T.E. Scheduling to minimize makespan on unequal parallel processors. Decis. Sci. 1980, 11, 586–602. [Google Scholar] [CrossRef]
  35. Liao, C.-J.; Lin, C.-H. Makespan minimization for two uniform parallel machines. Int. J. Prod. Econ. 2003, 84, 205–213. [Google Scholar] [CrossRef]
  36. Lin, C.-H.; Liao, C.-J. Makespan minimization for multiple uniform machines. Comput. Ind. Eng. 2008, 54, 983–992. [Google Scholar] [CrossRef]
  37. Popenko, V.; Sperkach, M.; Zhdanova, O.; Kokosiński, Z. On Optimality Conditions for Job Scheduling on Uniform Parallel Machines. In Proceedings of the International Conference on Computer Science, Engineering and Education Applications, Kiev, Ukraine, 26–27 January 2019; Springer: New York, NY, USA, 2019; pp. 103–112. [Google Scholar]
  38. Berndt, S.; Brinkop, H.; Jansen, K.; Mnich, M.; Stamm, T. New support size bounds for integer programming, applied to makespan minimization on uniformly related machines. arXiv 2023, arXiv:2305.08432. [Google Scholar]
  39. Mallek, A.; Boudhar, M. Scheduling on uniform machines with a conflict graph: Complexity and resolution. Int. Trans. Oper. Res. 2024, 31, 863–888. [Google Scholar] [CrossRef]
  40. Soper, A.J.; Strusevich, V.A. Parametric analysis of the quality of single preemption schedules on three uniform parallel machines. Ann. Oper. Res. 2021, 298, 469–495. [Google Scholar] [CrossRef]
  41. Mokotoff, E. Parallel machine scheduling problems: A survey. Asia-Pac. J. Oper. Res. 2001, 18, 193. [Google Scholar]
  42. Senthilkumar, P.; Narayanan, S. Literature review of single machine scheduling problem with uniform parallel machines. Intell. Inf. Manag. 2010, 2, 457–474. [Google Scholar] [CrossRef]
  43. Mladenović, N.; Hansen, P. Variable neighborhood search. Comput. Oper. Res. 1997, 24, 1097–1100. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the proposed VNS algorithms.
Figure 2. Gantt chart schedule for LPT.
Figure 3. Gantt chart schedule for LVNS.
Figure 4. Average RPD for the LPT and RLPT heuristics.
Figure 5. Bar chart for the outperformance comparison between LPT and RLPT algorithms.
Figure 6. Average RPD for the LVNS and RVNS algorithms.
Figure 7. Bar chart for the outperformance comparison of LVNS and RVNS algorithms.
Figure 8. Average CPU time reduction with the LVNS upper bound.
Table 1. Notations of mathematical formulation.
Variable | Definition
Indices
i | Index of a machine
j | Index of a job
Sets
M | Set of machines
J | Set of jobs
Parameters
p_ij | Processing time of job j on machine i; i ∈ M, j ∈ J
LB | Lower bound
UB | Upper bound
Decision variables
x_ij | 1 if job j is processed on machine i, 0 otherwise; i ∈ M, j ∈ J
C_max | Makespan of a schedule
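Consistent with the notation in Table 1, the assignment formulation presumably takes the standard form (the exact model appears in Section 3.2):

```latex
\begin{aligned}
\min\ & C_{\max} \\
\text{s.t.}\ & \sum_{i \in M} x_{ij} = 1, && \forall j \in J, \\
& \sum_{j \in J} p_{ij}\, x_{ij} \le C_{\max}, && \forall i \in M, \\
& \mathrm{LB} \le C_{\max} \le \mathrm{UB}, \\
& x_{ij} \in \{0, 1\}, && \forall i \in M,\ j \in J.
\end{aligned}
```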
Table 2. Processing times of jobs for each machine.
Machine | Speed | Job 1 | Job 2 | Job 3 | Job 4 | Job 5 | Job 6
1 | 1.0000 | 3822 | 3458 | 2912 | 2730 | 2002 | 2002
2 | 1.3382 | 2856 | 2584 | 2176 | 2040 | 1496 | 1496
3 | 1.8200 | 2100 | 1900 | 1600 | 1500 | 1100 | 1100
Table 3. Relative percentage deviation comparison between LPT and RLPT.
Each cell shows LPT/RLPT average RPD. Columns 4–6: S~[1,3]; columns 7–9: S~[1,5]; columns 10–12: S~[1,7].
n/m | m | n | p~[1,25] | p~[1,50] | p~[1,100] | p~[1,25] | p~[1,50] | p~[1,100] | p~[1,25] | p~[1,50] | p~[1,100]
2 | 3 | 6 | 4.031/2.966 | 3.149/3.068 | 2.738/1.655 | 3.206/2.030 | 1.159/2.540 | 2.378/1.373 | 1.404/1.622 | 1.493/1.385 | 3.454/1.316
2 | 4 | 8 | 4.442/2.969 | 4.071/4.864 | 3.933/2.877 | 1.912/2.835 | 2.772/1.624 | 4.809/3.727 | 5.787/4.083 | 4.364/2.053 | 4.119/2.043
2 | 5 | 10 | 5.291/5.315 | 3.538/3.707 | 4.839/3.306 | 6.543/4.975 | 3.200/2.814 | 3.863/3.518 | 2.608/1.137 | 3.529/3.433 | 3.309/3.919
2 | 10 | 20 | 6.747/7.354 | 5.936/7.111 | 5.591/5.332 | 6.009/4.570 | 5.457/4.763 | 6.285/5.900 | 5.088/4.958 | 5.236/5.163 | 5.886/5.486
3 | 3 | 9 | 2.351/1.646 | 3.942/1.117 | 2.374/1.000 | 3.156/2.491 | 2.834/1.768 | 4.790/1.859 | 2.262/1.336 | 3.675/1.833 | 2.276/0.675
3 | 4 | 12 | 4.277/2.329 | 3.398/2.787 | 5.576/3.898 | 2.837/1.989 | 3.064/2.368 | 2.759/1.926 | 2.442/1.759 | 2.577/1.570 | 2.693/2.441
3 | 5 | 15 | 4.351/2.762 | 5.129/3.557 | 3.635/2.500 | 2.645/2.453 | 6.305/3.904 | 4.907/4.193 | 2.665/1.764 | 2.951/2.613 | 2.960/2.841
3 | 10 | 30 | 4.628/3.266 | 2.997/2.735 | 3.873/3.844 | 5.035/4.699 | 4.429/4.007 | 3.227/2.513 | 4.154/3.634 | 3.230/2.735 | 4.030/4.113
4 | 3 | 12 | 1.940/0.434 | 1.804/1.521 | 2.218/1.411 | 1.599/1.094 | 2.245/1.099 | 1.265/0.944 | 1.405/0.735 | 2.636/1.354 | 1.828/1.085
4 | 4 | 16 | 1.382/1.086 | 2.146/1.549 | 3.113/2.191 | 1.464/1.235 | 2.335/1.251 | 3.028/1.315 | 1.268/0.936 | 1.384/1.586 | 2.178/1.450
4 | 5 | 20 | 1.939/1.412 | 2.266/1.756 | 2.898/1.844 | 1.310/1.143 | 2.156/1.585 | 1.572/1.201 | 1.527/1.514 | 1.654/1.432 | 2.339/1.877
4 | 10 | 40 | 2.247/1.796 | 1.874/1.872 | 1.589/1.616 | 2.532/1.792 | 2.644/1.901 | 2.400/1.950 | 2.428/1.928 | 2.281/1.962 | 2.063/2.273
5 | 3 | 15 | 1.159/0.441 | 1.167/0.704 | 1.471/0.974 | 1.387/0.592 | 1.664/1.027 | 1.478/0.661 | 0.985/0.512 | 0.647/0.501 | 1.443/0.972
5 | 4 | 20 | 0.599/0.381 | 0.941/0.669 | 1.334/0.997 | 0.557/0.241 | 1.320/0.582 | 1.362/0.945 | 1.010/0.668 | 1.232/0.636 | 1.466/0.876
5 | 5 | 25 | 1.822/1.034 | 0.915/1.027 | 1.672/1.441 | 1.095/0.957 | 2.324/1.233 | 1.102/1.190 | 1.389/1.058 | 1.468/1.295 | 1.371/1.286
5 | 10 | 50 | 1.390/1.275 | 1.349/1.246 | 1.614/1.451 | 1.702/1.316 | 1.451/1.163 | 1.400/1.074 | 1.634/1.360 | 1.487/1.242 | 1.342/1.142
10 | 3 | 30 | 0.480/0.303 | 0.523/0.365 | 0.329/0.239 | 0.465/0.338 | 0.426/0.209 | 0.272/0.232 | 0.648/0.402 | 0.433/0.159 | 0.346/0.185
10 | 4 | 40 | 0.566/0.370 | 0.366/0.279 | 0.591/0.401 | 0.478/0.400 | 0.567/0.365 | 0.487/0.280 | 0.439/0.295 | 0.494/0.320 | 0.493/0.300
10 | 5 | 50 | 0.630/0.382 | 0.396/0.259 | 0.472/0.318 | 0.407/0.314 | 0.427/0.281 | 0.397/0.337 | 0.428/0.351 | 0.462/0.378 | 0.294/0.243
10 | 10 | 100 | 0.539/0.442 | 0.464/0.378 | 0.415/0.314 | 0.364/0.341 | 0.321/0.274 | 0.401/0.251 | 0.412/0.346 | 0.439/0.331 | 0.337/0.320
20 | 3 | 60 | 0.154/0.106 | 0.116/0.084 | 0.089/0.058 | 0.157/0.112 | 0.135/0.058 | 0.143/0.052 | 0.150/0.135 | 0.140/0.067 | 0.101/0.048
20 | 4 | 80 | 0.213/0.131 | 0.089/0.069 | 0.114/0.079 | 0.179/0.179 | 0.079/0.061 | 0.100/0.059 | 0.132/0.126 | 0.086/0.079 | 0.085/0.069
20 | 5 | 100 | 0.164/0.164 | 0.134/0.122 | 0.102/0.100 | 0.190/0.155 | 0.176/0.104 | 0.094/0.080 | 0.164/0.159 | 0.118/0.087 | 0.099/0.082
20 | 10 | 200 | 0.215/0.171 | 0.099/0.093 | 0.095/0.070 | 0.172/0.172 | 0.141/0.098 | 0.110/0.095 | 0.168/0.168 | 0.109/0.098 | 0.114/0.085
30 | 3 | 90 | 0.066/0.066 | 0.053/0.052 | 0.045/0.031 | 0.103/0.103 | 0.055/0.052 | 0.041/0.024 | 0.117/0.094 | 0.088/0.045 | 0.043/0.031
30 | 4 | 120 | 0.121/0.121 | 0.061/0.044 | 0.047/0.041 | 0.097/0.097 | 0.071/0.045 | 0.035/0.035 | 0.082/0.082 | 0.048/0.050 | 0.043/0.037
30 | 5 | 150 | 0.097/0.097 | 0.070/0.055 | 0.036/0.045 | 0.116/0.109 | 0.058/0.053 | 0.043/0.037 | 0.089/0.089 | 0.065/0.051 | 0.048/0.036
30 | 10 | 300 | 0.120/0.120 | 0.063/0.060 | 0.059/0.040 | 0.116/0.109 | 0.064/0.064 | 0.053/0.042 | 0.120/0.120 | 0.066/0.057 | 0.049/0.038
Average | | | 1.856/1.391 | 1.681/1.470 | 1.816/1.360 | 1.637/1.316 | 1.710/1.260 | 1.743/1.279 | 1.465/1.121 | 1.514/1.161 | 1.600/1.260
Table 4. Outperformance comparison of LPT and RLPT algorithms.
Columns 2–4: p~[1,25]; columns 5–7: p~[1,50]; columns 8–10: p~[1,100].
Speeds | LPT<RLPT | LPT>RLPT | LPT=RLPT | LPT<RLPT | LPT>RLPT | LPT=RLPT | LPT<RLPT | LPT>RLPT | LPT=RLPT
S~[1,3] | 11.8% | 44.3% | 43.9% | 20.7% | 47.1% | 32.1% | 24.3% | 57.9% | 17.9%
S~[1,5] | 14.3% | 33.2% | 52.5% | 13.2% | 54.3% | 32.5% | 21.4% | 62.5% | 16.1%
S~[1,7] | 12.1% | 36.8% | 51.1% | 18.2% | 49.6% | 32.1% | 23.6% | 55.4% | 21.1%
Table 5. Processing times of nine jobs on four uniform parallel machines [21].
Machine | Speed | Job 4 | Job 8 | Job 3 | Job 7 | Job 6 | Job 1 | Job 5 | Job 2 | Job 9
1 | 1 | 30 | 26 | 24 | 22 | 20 | 18 | 16 | 14 | 14
2 | 2 | 15 | 13 | 12 | 11 | 10 | 9 | 8 | 7 | 7
3 | 4 | 7.5 | 6.5 | 6 | 5.5 | 5 | 4.5 | 4 | 3.5 | 3.5
4 | 5 | 6 | 5.2 | 4.8 | 4.4 | 4 | 3.6 | 3.2 | 2.8 | 2.8
Table 6. Comparison of the proposed versions of VNS algorithms with GA [21] and TS [22].
Algorithm | Machine | Schedule of jobs | Completion time | Max. iterations
GA | 1 | 5 | 14 | 12
 | 2 | 1, 9 | 16 |
 | 3 | 2, 6, 8 | 15.5 |
 | 4 | 4, 7, 3 | 15.2 |
TS | 1 | 2 | 14 | 5000
 | 2 | 4 | 15 |
 | 3 | 1, 3, 6 | 15.5 |
 | 4 | 5, 7, 8, 9 | 15.6 |
LVNS | 1 | 2 | 14 | 1
 | 2 | 4 | 15 |
 | 3 | 5, 6, 8 | 15.5 |
 | 4 | 1, 3, 7, 9 | 15.6 |
RVNS | 1 | 9 | 14 | 10
 | 2 | 4 | 15 |
 | 3 | 5, 6, 8 | 15.5 |
 | 4 | 1, 2, 3, 7 | 15.6 |
Table 7. Comparison of the number of optimal solutions for small instances.
S~[1,3]S~[1,5]S~[1,7]
n/mmnp~[1,25]p~[1,50]p~[1,100]p~[1,25]p~[1,50]p~[1,100]p~[1,25]p~[1,50]p~[1,100]
LVNSRVNSLVNSRVNSLVNSRVNSLVNSRVNSLVNSRVNSLVNSRVNSLVNSRVNSLVNSRVNSLVNSRVNS
236910101091081010109101091010910
4881091071081089910910910610
510698105105968810610710610
1020020312140311121512
33969695106104106108101010810
41261068210410510389103919
515610691841048277961027
4312910810691010810799108959
416910910198105861061091057
5201010710599106105881071039
5315910910710101091051010101010710
420101091081081010109101010610510
Average0.7330.9170.7250.9080.4750.8920.6750.9420.6250.8830.5830.8580.7750.9170.7170.9420.4830.858
Table 8. Relative percentage deviation comparison between LVNS and RVNS.
Each cell shows LVNS/RVNS average RPD. Columns 4–6: S~[1,3]; columns 7–9: S~[1,5]; columns 10–12: S~[1,7].
n/m | m | n | p~[1,25] | p~[1,50] | p~[1,100] | p~[1,25] | p~[1,50] | p~[1,100] | p~[1,25] | p~[1,50] | p~[1,100]
2 | 3 | 6 | 0.179/0.000 | 0.000/0.000 | 0.055/0.000 | 0.651/0.000 | 0.000/0.000 | 0.374/0.000 | 0.000/0.100 | 0.000/0.000 | 0.066/0.000
2 | 4 | 8 | 0.606/0.000 | 0.580/0.000 | 0.297/0.000 | 0.306/0.000 | 0.405/0.211 | 0.296/0.000 | 0.233/0.000 | 0.233/0.000 | 1.794/0.000
2 | 5 | 10 | 0.623/0.023 | 0.807/0.000 | 1.373/0.000 | 2.866/0.267 | 0.318/0.154 | 0.102/0.000 | 0.430/0.000 | 0.581/0.000 | 0.573/0.000
2 | 10 | 20 | 2.654/0.790 | 1.833/0.512 | 1.955/0.674 | 2.034/0.386 | 1.468/0.703 | 1.683/0.800 | 1.998/1.073 | 1.206/0.525 | 2.868/0.809
3 | 3 | 9 | 0.496/0.149 | 0.409/0.060 | 0.524/0.000 | 0.438/0.000 | 0.704/0.000 | 0.688/0.000 | 0.586/0.000 | 0.000/0.000 | 0.100/0.000
3 | 4 | 12 | 0.408/0.000 | 0.286/0.096 | 0.779/0.000 | 0.873/0.000 | 0.337/0.000 | 1.018/0.234 | 0.071/0.000 | 0.633/0.033 | 0.955/0.027
3 | 5 | 15 | 0.495/0.000 | 0.092/0.009 | 0.465/0.024 | 0.605/0.000 | 0.533/0.123 | 0.600/0.043 | 0.177/0.030 | 0.251/0.000 | 0.452/0.115
3 | 10 | 30 | 1.247/1.065 | 1.059/0.665 | 0.740/0.386 | 1.154/1.021 | 0.777/0.697 | 0.693/0.386 | 1.367/1.178 | 1.110/0.545 | 0.847/0.534
4 | 3 | 12 | 0.041/0.000 | 0.104/0.000 | 0.091/0.012 | 0.000/0.000 | 0.138/0.000 | 0.037/0.004 | 0.057/0.000 | 0.067/0.023 | 0.188/0.014
4 | 4 | 16 | 0.046/0.000 | 0.024/0.000 | 0.421/0.007 | 0.102/0.000 | 0.083/0.075 | 0.175/0.000 | 0.184/0.000 | 0.034/0.000 | 0.097/0.034
4 | 5 | 20 | 0.000/0.000 | 0.040/0.000 | 0.094/0.017 | 0.042/0.000 | 0.153/0.000 | 0.104/0.028 | 0.053/0.000 | 0.157/0.000 | 0.342/0.018
4 | 10 | 40 | 0.889/0.889 | 0.538/0.478 | 0.328/0.233 | 0.820/0.820 | 0.492/0.447 | 0.348/0.226 | 0.891/0.858 | 0.545/0.502 | 0.384/0.237
5 | 3 | 15 | 0.058/0.000 | 0.053/0.000 | 0.022/0.000 | 0.000/0.000 | 0.062/0.000 | 0.135/0.000 | 0.000/0.000 | 0.000/0.000 | 0.072/0.000
5 | 4 | 20 | 0.000/0.000 | 0.010/0.000 | 0.011/0.000 | 0.076/0.000 | 0.000/0.000 | 0.018/0.000 | 0.000/0.000 | 0.082/0.000 | 0.071/0.000
5 | 5 | 25 | 0.635/0.610 | 0.381/0.348 | 0.167/0.167 | 0.586/0.586 | 0.288/0.287 | 0.219/0.148 | 0.655/0.655 | 0.395/0.309 | 0.202/0.139
5 | 10 | 50 | 0.663/0.663 | 0.418/0.395 | 0.187/0.172 | 0.654/0.654 | 0.342/0.340 | 0.200/0.176 | 0.704/0.704 | 0.351/0.326 | 0.191/0.172
10 | 3 | 30 | 0.253/0.253 | 0.134/0.134 | 0.069/0.069 | 0.269/0.269 | 0.136/0.136 | 0.082/0.082 | 0.303/0.303 | 0.125/0.125 | 0.076/0.076
10 | 4 | 40 | 0.345/0.345 | 0.165/0.165 | 0.074/0.074 | 0.292/0.292 | 0.166/0.166 | 0.089/0.089 | 0.270/0.270 | 0.155/0.155 | 0.074/0.074
10 | 5 | 50 | 0.328/0.328 | 0.158/0.158 | 0.075/0.075 | 0.297/0.297 | 0.159/0.159 | 0.080/0.080 | 0.332/0.332 | 0.157/0.157 | 0.087/0.087
10 | 10 | 100 | 0.429/0.429 | 0.158/0.158 | 0.092/0.092 | 0.338/0.338 | 0.183/0.183 | 0.099/0.099 | 0.335/0.335 | 0.162/0.162 | 0.085/0.085
20 | 3 | 60 | 0.106/0.106 | 0.067/0.067 | 0.032/0.032 | 0.112/0.112 | 0.058/0.058 | 0.031/0.031 | 0.135/0.135 | 0.067/0.067 | 0.028/0.028
20 | 4 | 80 | 0.131/0.131 | 0.068/0.068 | 0.031/0.031 | 0.173/0.173 | 0.061/0.061 | 0.032/0.032 | 0.126/0.126 | 0.058/0.058 | 0.035/0.035
20 | 5 | 100 | 0.164/0.164 | 0.080/0.080 | 0.036/0.036 | 0.137/0.137 | 0.082/0.082 | 0.044/0.044 | 0.159/0.159 | 0.078/0.078 | 0.046/0.046
20 | 10 | 200 | 0.165/0.165 | 0.084/0.084 | 0.044/0.044 | 0.172/0.172 | 0.079/0.079 | 0.045/0.045 | 0.168/0.168 | 0.089/0.089 | 0.042/0.042
30 | 3 | 90 | 0.066/0.066 | 0.040/0.040 | 0.017/0.017 | 0.103/0.103 | 0.052/0.052 | 0.022/0.022 | 0.094/0.094 | 0.037/0.037 | 0.022/0.022
30 | 4 | 120 | 0.121/0.121 | 0.044/0.044 | 0.022/0.022 | 0.097/0.097 | 0.045/0.045 | 0.023/0.023 | 0.082/0.082 | 0.045/0.045 | 0.026/0.026
30 | 5 | 150 | 0.097/0.097 | 0.053/0.053 | 0.024/0.024 | 0.109/0.109 | 0.051/0.051 | 0.027/0.027 | 0.089/0.089 | 0.051/0.051 | 0.025/0.025
30 | 10 | 300 | 0.120/0.120 | 0.060/0.060 | 0.030/0.030 | 0.109/0.109 | 0.064/0.064 | 0.029/0.029 | 0.120/0.120 | 0.057/0.057 | 0.031/0.031
Average | | | 0.406/0.233 | 0.277/0.131 | 0.288/0.080 | 0.479/0.212 | 0.259/0.149 | 0.260/0.095 | 0.343/0.243 | 0.240/0.119 | 0.349/0.096
Table 9. Performance comparison of LVNS and RVNS algorithms.
Columns 2–4: p~[1,25]; columns 5–7: p~[1,50]; columns 8–10: p~[1,100].
Speeds | LVNS<RVNS | LVNS>RVNS | LVNS=RVNS | LVNS<RVNS | LVNS>RVNS | LVNS=RVNS | LVNS<RVNS | LVNS>RVNS | LVNS=RVNS
S~[1,3] | 0.0% | 11.4% | 88.6% | 0.4% | 17.1% | 82.5% | 1.1% | 28.6% | 70.4%
S~[1,5] | 0.4% | 13.9% | 85.7% | 0.7% | 17.9% | 81.4% | 1.1% | 26.1% | 72.9%
S~[1,7] | 0.4% | 11.1% | 88.6% | 0.0% | 16.4% | 83.6% | 1.1% | 27.9% | 71.1%
Table 10. Comparison of average CPU time (in seconds) between LVNS and RVNS.
Each cell shows LVNS/RVNS average CPU time in seconds. Columns 4–6: S~[1,3]; columns 7–9: S~[1,5]; columns 10–12: S~[1,7].
n/m | m | n | p~[1,25] | p~[1,50] | p~[1,100] | p~[1,25] | p~[1,50] | p~[1,100] | p~[1,25] | p~[1,50] | p~[1,100]
2 | 3 | 6 | 0.000/0.000 | 0.000/0.000 | 0.000/0.000 | 0.000/0.000 | 0.000/0.000 | 0.000/0.000 | 0.000/0.000 | 0.000/0.000 | 0.000/0.000
2 | 4 | 8 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001
2 | 5 | 10 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001
2 | 10 | 20 | 0.000/0.004 | 0.000/0.005 | 0.000/0.005 | 0.000/0.004 | 0.000/0.005 | 0.000/0.006 | 0.000/0.005 | 0.000/0.005 | 0.000/0.005
3 | 3 | 9 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.000 | 0.000/0.000 | 0.000/0.000 | 0.000/0.001 | 0.000/0.000 | 0.000/0.000
3 | 4 | 12 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001
3 | 5 | 15 | 0.000/0.001 | 0.000/0.002 | 0.000/0.002 | 0.000/0.001 | 0.000/0.002 | 0.000/0.002 | 0.000/0.001 | 0.000/0.002 | 0.000/0.002
3 | 10 | 30 | 0.000/0.005 | 0.000/0.006 | 0.000/0.009 | 0.000/0.006 | 0.000/0.008 | 0.000/0.008 | 0.000/0.006 | 0.000/0.008 | 0.000/0.009
4 | 3 | 12 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001
4 | 4 | 16 | 0.000/0.001 | 0.000/0.001 | 0.000/0.002 | 0.000/0.001 | 0.000/0.001 | 0.000/0.002 | 0.000/0.001 | 0.000/0.001 | 0.000/0.002
4 | 5 | 20 | 0.000/0.002 | 0.000/0.002 | 0.000/0.003 | 0.000/0.002 | 0.000/0.002 | 0.000/0.002 | 0.000/0.002 | 0.000/0.002 | 0.000/0.003
4 | 10 | 40 | 0.000/0.006 | 0.000/0.008 | 0.000/0.011 | 0.000/0.006 | 0.000/0.009 | 0.000/0.012 | 0.000/0.007 | 0.000/0.010 | 0.000/0.013
5 | 3 | 15 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001 | 0.000/0.001
5 | 4 | 20 | 0.000/0.001 | 0.000/0.002 | 0.000/0.002 | 0.000/0.002 | 0.000/0.002 | 0.000/0.002 | 0.000/0.002 | 0.000/0.002 | 0.000/0.002
5 | 5 | 25 | 0.000/0.002 | 0.000/0.003 | 0.000/0.004 | 0.000/0.002 | 0.000/0.003 | 0.000/0.004 | 0.000/0.003 | 0.000/0.003 | 0.000/0.004
5 | 10 | 50 | 0.000/0.008 | 0.000/0.010 | 0.000/0.015 | 0.000/0.009 | 0.000/0.012 | 0.000/0.016 | 0.000/0.008 | 0.000/0.014 | 0.000/0.018
10 | 3 | 30 | 0.000/0.005 | 0.000/0.006 | 0.000/0.007 | 0.000/0.005 | 0.000/0.006 | 0.000/0.007 | 0.000/0.006 | 0.000/0.006 | 0.000/0.007
10 | 4 | 40 | 0.000/0.011 | 0.000/0.011 | 0.000/0.014 | 0.000/0.012 | 0.000/0.013 | 0.000/0.014 | 0.000/0.011 | 0.000/0.014 | 0.000/0.016
10 | 5 | 50 | 0.000/0.015 | 0.000/0.014 | 0.000/0.023 | 0.000/0.014 | 0.000/0.015 | 0.000/0.025 | 0.000/0.016 | 0.000/0.023 | 0.000/0.025
10 | 10 | 100 | 0.000/0.035 | 0.001/0.058 | 0.001/0.079 | 0.000/0.044 | 0.000/0.062 | 0.001/0.084 | 0.000/0.061 | 0.001/0.080 | 0.001/0.120
20 | 3 | 60 | 0.000/0.074 | 0.000/0.081 | 0.000/0.084 | 0.000/0.060 | 0.000/0.059 | 0.000/0.084 | 0.000/0.060 | 0.000/0.069 | 0.000/0.075
20 | 4 | 80 | 0.001/0.124 | 0.000/0.133 | 0.001/0.165 | 0.000/0.132 | 0.000/0.116 | 0.001/0.166 | 0.000/0.141 | 0.001/0.161 | 0.001/0.185
20 | 5 | 100 | 0.001/0.178 | 0.001/0.212 | 0.001/0.264 | 0.001/0.257 | 0.001/0.220 | 0.001/0.234 | 0.001/0.215 | 0.001/0.247 | 0.001/0.275
20 | 10 | 200 | 0.002/0.451 | 0.002/0.508 | 0.002/0.585 | 0.002/0.490 | 0.002/0.488 | 0.003/0.811 | 0.003/0.802 | 0.003/0.816 | 0.005/1.291
30 | 3 | 90 | 0.001/0.349 | 0.001/0.396 | 0.001/0.389 | 0.001/0.312 | 0.001/0.361 | 0.001/0.348 | 0.001/0.329 | 0.001/0.328 | 0.001/0.386
30 | 4 | 120 | 0.003/0.640 | 0.002/0.522 | 0.002/0.788 | 0.002/0.630 | 0.002/0.708 | 0.003/0.833 | 0.003/0.819 | 0.002/0.732 | 0.003/0.952
30 | 5 | 150 | 0.003/0.883 | 0.002/0.733 | 0.003/1.008 | 0.003/0.747 | 0.002/0.696 | 0.003/1.046 | 0.003/1.089 | 0.004/1.164 | 0.005/1.531
30 | 10 | 300 | 0.007/1.536 | 0.008/2.540 | 0.011/3.200 | 0.008/2.329 | 0.012/3.528 | 0.012/4.005 | 0.013/4.093 | 0.017/5.027 | 0.011/3.793
Average | | | 0.001/0.155 | 0.001/0.188 | 0.001/0.238 | 0.001/0.181 | 0.001/0.226 | 0.001/0.276 | 0.001/0.274 | 0.001/0.311 | 0.001/0.311

Share and Cite


Bamatraf, K.; Gharbi, A. Variable Neighborhood Search for Minimizing the Makespan in a Uniform Parallel Machine Scheduling. Systems 2024, 12, 221. https://doi.org/10.3390/systems12060221


