Article

Developing New Bounds for the Performance Guarantee of the Jump Neighborhood for Scheduling Jobs on Uniformly Related Machines

by Felipe T. Muñoz 1,*, Guillermo Latorre-Núñez 1 and Mario Ramos-Maldonado 2
1 Departamento de Ingeniería Industrial, Facultad de Ingeniería, Universidad del Bío-Bío, Concepción 4051381, Chile
2 Departamento de Ingeniería en Maderas, Facultad de Ingeniería, Universidad del Bío-Bío, Concepción 4051381, Chile
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(1), 6; https://doi.org/10.3390/math12010006
Submission received: 20 October 2023 / Revised: 13 November 2023 / Accepted: 20 November 2023 / Published: 19 December 2023

Abstract

This study investigates the worst-case performance guarantee of locally optimal solutions to minimize the total weighted completion time on uniformly related parallel machines. The investigated neighborhood structure is Jump, also called insertion or move. This research focuses on establishing the local optimality condition, expressed as an inequality, and on a mapping that embeds a schedule into an inner product space so that the norm of the image is closely related to the total weighted completion time of the schedule. We determine two new upper bounds for the performance guarantee, expressed in terms of parameters that describe the family of instances: the speed of the fastest machine, the speed of the slowest machine, and the number of machines. These new bounds outperform the parametric upper bound previously established in the literature and enable a better understanding of the performance of the solutions obtained with the Jump neighborhood for this scheduling problem, according to the parameters that describe the family of instances.

1. Introduction

Scheduling problems, which deal with the optimal allocation of limited resources to tasks to minimize or maximize certain objectives, have been a central research focus in the combinatorial optimization literature. The significance of these problems is evident, given their diverse range of practical applications, from job scheduling in manufacturing systems to resource management in complex projects. As Graham et al. highlighted in their influential work [1], scheduling problems present intricate challenges, and their computational complexity makes them essential subjects of study for the optimization community.
Considering the NP-hard complexity of many of these problems, exact solutions frequently become impractical in real-world scenarios. Therefore, the development and analysis of approximate solution methods become imperative. In recent studies, scheduling problems in parallel machine environments have been investigated across various industries, including semiconductor manufacturing [2,3], metalworking [4], textile manufacturing [5], automotive manufacturing [6,7], chemical processes [8], offshore oil and gas extraction [9], as well as manufacturing scheduling for contingent new requirements [10,11,12,13,14,15]. Additionally, scheduling challenges within service-related sectors have been studied, encompassing healthcare [7,16,17], audit [18], transport and cross-docking [19,20], hospital waste collection [21], emergency response for forest fire extinguishing [22], and drone scheduling problems [23], among others.

1.1. Parallel Machine Environments

In problems related to the parallel machine scheduling environment, we are presented with a set of n jobs to be scheduled on a set of m machines. Typically, jobs are scheduled without interruption on a single machine, and each machine can process only one job at a time. Each job has a known weight (relative importance) and a processing requirement (processing time). Following the standard three-field scheduling notation [1,24], these problems are represented as α | β | γ , where α describes the machine environment, β describes the job characteristics and constraints, and γ presents the objective function. The most straightforward scenario involves identical parallel machines (P), where the processing time of a job is the same on all machines. When machines have varying processing speeds, we enter the domain of the uniformly related parallel machine environment (Q). In contrast, when the processing time of a job depends arbitrarily on the machine where it will be processed, we have the environment of unrelated parallel machines (R).
A solution to these problems, or schedule, is an assignment and sequence of the jobs for each machine. Given a schedule, the completion time of job $j$ can be determined, represented by $C_j$, with which two of the most studied objective functions can be established: the Total Weighted Completion Time, the objective of which is to minimize $\sum_j w_j C_j$, where $w_j \ge 0$ is the weight of job $j$, and the Makespan, the objective of which is to determine a schedule that minimizes the maximum completion time over all jobs, expressed as $C_{\max} = \max_j C_j$.
In the context of computational complexity, minimizing the total completion time ($\sum C_j$) in a parallel machine environment can be accomplished in polynomial time. To be more specific, the problems $P||\sum C_j$ and $Q||\sum C_j$ can be efficiently solved using the Shortest Processing Time (SPT) rule, with complexities of $O(n \log n)$ and $O(n \log mn)$, respectively [25,26]. Moreover, the $R||\sum C_j$ problem can be efficiently solved using appropriate bipartite matching techniques [27]. Nevertheless, if the goal is to minimize the total weighted completion time or the makespan, these problems are NP-hard [28]. This holds true even in scenarios with two identical machines [29,30]. Moreover, if the number of machines is part of the input, these problems become strongly NP-hard [26,31,32]. Given the inherent complexity of these problems, it is usual to look for approximate solution approaches, such as Polynomial Time Approximation Schemes (PTASs). For the $P||\sum w_j C_j$ and $Q||\sum w_j C_j$ problems, PTASs are reported in [33,34]. However, for the $R||\sum w_j C_j$ problem, the best-reported approximation ratio is $3/2 - \delta$, where $\delta$ is a small positive constant [35,36]. Another commonly used approximate solution approach is Local Search.

1.2. Local Search

According to Williamson and Shmoys [37], the study of approximation algorithms provides a mathematically rigorous basis for the study of heuristics. Traditionally, heuristics and metaheuristics are studied empirically, demonstrating satisfactory performance. However, gaining a comprehensive understanding of their efficacy is essential. Thus, the study of performance guarantees brings mathematical rigor to heuristics, enabling us to comprehend the quality of the solutions obtained for all instances of the problem or to generate insights into the families of instances where the heuristic may not perform well.
Local search approaches are commonly employed for addressing scheduling problems, demonstrating notable empirical performance, but our understanding of their worst-case theoretical performance is still limited. For a comprehensive review of performance guarantees and other theoretical considerations regarding local search across a broad spectrum of combinatorial problems, encompassing scheduling problems, readers are directed to [38,39]. The efficiency of a local search algorithm depends on two critical factors: the size of the neighborhood and the quality of the locally optimal solutions, as outlined by Ahuja et al. [40]. One approach to assessing the quality of a local optimum is through worst-case analysis, which can be quantified using the performance guarantee.
According to [41], the performance guarantee for a minimization criterion is defined as the maximum achievable ratio between a locally optimal solution and the global optimum. Specifically, this performance guarantee can be formally expressed as:
$$ pg(\mathcal{P}, \mathcal{N}) = \sup_{k \in I}\ \sup_{\sigma \in L_k} \frac{cost(\sigma)}{opt(k)}, $$
where $\mathcal{P}$ represents the problem, $\mathcal{N}$ represents the neighborhood structure, $I$ represents the set of all instances of $\mathcal{P}$, $L_k$ represents the set of all locally optimal solutions of instance $k$ for neighborhood $\mathcal{N}$, and $cost(\sigma)$ and $opt(k)$ are the values of the objective function for solution $\sigma$ and for the optimal solution of instance $k$, respectively.

1.3. Aim and Scope

This study delves into analyzing the worst-case performance guarantee of solutions determined by local search approaches. Specifically, we investigate the Jump neighborhood, also known as move or insertion, where a job is relocated or reassigned from one machine to another until a predetermined stopping criterion is met. We focus on the scheduling problem of minimizing the total weighted completion time in a setting of uniformly related parallel machines, denoted as $Q||\sum w_j C_j$. We aim to establish an improved parametric upper bound for the performance guarantee.

1.4. Related Works

Regarding the performance guarantee of the studied problem, the literature sets a lower bound of 1.423 and an upper bound of 2.618. The lower bound is derived from an instance with three jobs and three machines [41]. In contrast, the upper bound is determined via the performance guarantee of the problems $R||\sum w_j C_j$ and $Q|M_j|\sum w_j C_j$, both problems being generalizations of the problem studied [42]. Additionally, the literature introduces a parametric upper bound, which establishes an upper bound for the performance guarantee based on parameters such as the speed of the fastest machine, the speed of the slowest machine, and the number of machines [41]. This expression enables the establishment of upper bounds for the performance guarantee lower than 2.618 for certain families of instances.
Table 1 displays the performance guarantee of the Jump neighborhood for scheduling jobs on parallel machine environments to minimize the total weighted completion time. The performance guarantee is tight for problems where a single value is reported. Specifically, this represents the worst-case performance guarantee value for a locally optimal solution within the Jump neighborhood. For problems where a range of values is reported, these values represent the lower and upper bounds for the worst-case performance guarantee. These open gaps highlight a research opportunity to establish a tight performance guarantee for the problem or narrow the gap between the upper and lower bounds. However, the challenge seems complicated since some gaps have remained open for over fifteen years, such as the gap of the $P||\sum w_j C_j$ problem [43].
The performance guarantee of locally optimal solutions for the Jump neighborhood has also been studied for makespan minimization. Specifically, the problem $P||C_{\max}$ has been investigated in [44,45], the problem $Q||C_{\max}$ in [45,46], and the problem $R||C_{\max}$ in [45]. Problems involving machine eligibility restrictions were also studied for the P and Q environments in [47,48]. The analysis of performance guarantees in parallel machine environments has also extended to other neighborhood structures, primarily focusing on minimizing the makespan. These neighborhoods include lexjump [45,47,48,49], push [45,47], multi-exchange [45,50], and split [49]. Lexjump and push are polynomial-sized neighborhoods, while multi-exchange and split are exponential. Moreover, the efficiency of local search for the Jump neighborhood has also been studied [45,51].

1.5. Our Results

Our main result establishes that the performance guarantee for locally optimal solutions under the Jump neighborhood for the $Q||\sum w_j C_j$ problem does not exceed the following upper bounds:
$$ 2 + \left(s_m - 1 - \frac{s_m}{m}\right)\mathbb{1}\!\left\{s_m > \frac{m}{m-1}\right\}, $$
$$ \frac{1}{2}\left(\frac{s_m}{s_1} + s_m + 1 - \frac{s_m}{m}\right), $$
where $s_m$ and $s_1$ represent the speeds of the fastest and slowest machines, respectively, $m$ represents the number of machines, and $\mathbb{1}\{\cdot\}$ represents the unit step (indicator) function. These results improve upon the parametric upper bound reported in [41]:
$$ \frac{2}{1+\frac{2}{m}}\left(\frac{s_m}{s_1} + \frac{1}{2m}\right). \qquad (4) $$
Our proof technique is similar to the one employed in [41], as it establishes a local optimality condition for the solutions obtained using the Jump neighborhood. Subsequently, we utilize some properties of the solutions to the problem to establish an upper bound for the performance guarantee of the problem. The main difference in the development compared to [41] lies in the lower bound of the total weighted completion time for the optimal schedule. In [41], a lower bound is derived based on the inequality proposed by Eastman et al. [52]. In contrast, in this study, we define a lower bound based on a property obtained through a transformation. This transformation maps the set of feasible schedules of the problem to a specific inner product space, designed such that the norm closely corresponds with the total weighted completion time of the schedule.
The remainder of this paper is structured as follows. The scheduling problem and the local search neighborhood are introduced in Section 2. In Section 3, we present the properties of feasible and optimal solutions to the problem. The performance guarantee of the Jump neighborhood is studied in Section 4. Section 5 conducts the discussion. Finally, the main conclusions of this study are summarized in Section 6.

2. Preliminaries

This section offers a comprehensive description of the scheduling problem addressed in this study and the notation employed for its analysis. Furthermore, we present an overview of the studied neighborhood structure, which will aid in establishing the local optimality condition for the solutions studied in this work.

2.1. Problem Statement

The problem addressed in this study consists of minimizing the total weighted completion time on uniform parallel machines. In scheduling notation [1,24], this problem is represented by $Q||\sum w_j C_j$. Let $J$ represent the set of $n$ jobs and $M$ denote the set of $m \ge 2$ machines. For each job $j \in J$, let $p_j$ be the non-negative processing requirement, while $w_j$ represents the non-negative weight of job $j$. Each job must be scheduled without interruption on a single machine. Each machine can only process one job at a time. Let $s_i$ denote the processing speed of machine $i \in M$. Hence, if job $j$ is assigned to machine $i$, a processing time $p_j/s_i$ is required. Without loss of generality, we assume that the machines are indexed based on their speed, and we rescale the machine speeds so that:
$$ s_1 \le s_2 \le \dots \le s_m, \qquad (5) $$
$$ \sum_{i \in M} s_i = m. \qquad (6) $$
Following the notation in [41,42], a schedule, denoted as $v$, represents a solution to the problem and establishes the assignment of jobs to machines. Let $v_j$ indicate the machine to which job $j$ is assigned. More precisely, $v_j = i$ indicates that job $j$ is assigned to machine $i$ in schedule $v$.
The sequence in which the jobs assigned to machine $i$ should be processed is determined by the Weighted Shortest Processing Time (WSPT) rule [53] (Thm. 3). This rule arranges jobs in decreasing order of the $w_j/p_j$ ratio, with ties broken arbitrarily. For clarity and to avoid confusion, we use the ≺, ≻, ⪯, and ⪰ notation to describe the precedence relationship between jobs induced via the WSPT rule (the non-strict relations ⪯ and ⪰ include the job itself). Then, representing the set of jobs assigned to machine $i$ in schedule $v$ as $J_i(v)$, for $v_j = i$, we can express the completion time of job $j$ as:
$$ C_j(v) = \sum_{\substack{k \in J_i(v) \\ k \preceq j}} \frac{p_k}{s_i} = \sum_{\substack{k \in J_{v_j}(v) \\ k \preceq j}} \frac{p_k}{s_{v_j}}. \qquad (7) $$
The total weighted completion time and the weighted sum of processing times for schedule v are defined as follows:
$$ C(v) = \sum_{j \in J} w_j C_j(v) = \sum_{i \in M} \sum_{j \in J_i(v)} w_j C_j(v), \qquad (8) $$
$$ \eta(v) = \sum_{i \in M} \sum_{j \in J_i(v)} \frac{w_j p_j}{s_i} = \sum_{j \in J} \frac{w_j p_j}{s_{v_j}}. \qquad (9) $$
With the previous definitions, we have the following identities:
$$ C(v) = \sum_{i \in M} \sum_{j \in J_i(v)} \sum_{\substack{k \in J_i(v) \\ k \preceq j}} \frac{w_j p_k}{s_i} = \sum_{j \in J} \sum_{\substack{k \in J_{v_j}(v) \\ k \preceq j}} \frac{w_j p_k}{s_{v_j}} = \eta(v) + \sum_{j \in J} \sum_{\substack{k \in J_{v_j}(v) \\ k \prec j}} \frac{w_j p_k}{s_{v_j}}, \qquad (10) $$
$$ C(v) = \sum_{i \in M} \sum_{j \in J_i(v)} \sum_{\substack{k \in J_i(v) \\ k \succeq j}} \frac{w_k p_j}{s_i} = \sum_{j \in J} \sum_{\substack{k \in J_{v_j}(v) \\ k \succeq j}} \frac{w_k p_j}{s_{v_j}} = \eta(v) + \sum_{j \in J} \sum_{\substack{k \in J_{v_j}(v) \\ k \succ j}} \frac{w_k p_j}{s_{v_j}}. \qquad (11) $$
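To make these definitions concrete, the following minimal sketch (ours, not part of the original article) computes $C_j(v)$, $C(v)$, and $\eta(v)$ for a schedule given as an assignment vector; the job data, speeds, and helper names are illustrative assumptions.

```python
from typing import List, Tuple

def wspt_order(jobs: List[int], p: List[float], w: List[float]) -> List[int]:
    # WSPT rule: decreasing order of w_j / p_j (ties broken arbitrarily by sort stability).
    return sorted(jobs, key=lambda j: w[j] / p[j], reverse=True)

def schedule_costs(v: List[int], p: List[float], w: List[float],
                   s: List[float]) -> Tuple[float, float, List[float]]:
    """Return (C, eta, C_j) for an assignment v, where v[j] is the machine of job j."""
    C_j = [0.0] * len(v)
    for i in range(len(s)):
        t = 0.0
        for j in wspt_order([j for j in range(len(v)) if v[j] == i], p, w):
            t += p[j] / s[i]        # completion time accumulates on machine i
            C_j[j] = t
    C = sum(w[j] * C_j[j] for j in range(len(v)))            # total weighted completion time
    eta = sum(w[j] * p[j] / s[v[j]] for j in range(len(v)))  # weighted sum of processing times
    return C, eta, C_j

# Example: 4 jobs and 2 machines (speeds rescaled so that they sum to m = 2).
p, w, s = [2.0, 1.0, 3.0, 4.0], [1.0, 2.0, 1.0, 3.0], [0.8, 1.2]
print(schedule_costs([0, 1, 0, 1], p, w, s))
```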

2.2. Jump Neighborhood

The Jump neighborhood, also called insertion or move, is a polynomial-size neighborhood [49]. A Jump move is characterized by relocating a single job from one machine to another. The success of a Jump move is determined by the reduction in the objective function (total weighted completion time). Given a solution, if it is impossible to make Jump moves that improve the value of the objective function, this solution is a local optimum. We call this solution Jump-Opt.
Figure 1 illustrates a Jump move of a schedule denoted as $x$. In this scheme, job $j$, currently assigned to machine $i$, is moved or reassigned to machine $h$. The figure also indicates the sets of jobs scheduled before and after job $j$ on both machines. Consider $\delta_j(x)$ as the reduction in the total weighted completion time when job $j$ is excluded from machine $x_j = i$. Furthermore, let $\delta_j(x, h)$ denote the increase in the total weighted completion time if job $j$ is reassigned to machine $h$. Thus,
$$ \delta_j(x) = w_j C_j(x) + \sum_{\substack{k \in J_{x_j}(x) \\ k \succ j}} \frac{w_k p_j}{s_{x_j}}, \qquad (12) $$
$$ \delta_j(x, h) = \frac{w_j p_j}{s_h} + \sum_{\substack{k \in J_h(x) \\ k \prec j}} \frac{w_j p_k}{s_h} + \sum_{\substack{k \in J_h(x) \\ k \succ j}} \frac{w_k p_j}{s_h}. \qquad (13) $$
Consequently, the schedule $x$ will be a Jump-Opt solution if, and only if,
$$ \delta_j(x) \le \delta_j(x, h) \quad \text{for all } j \in J,\ h \in M. \qquad (14) $$
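As an illustration of condition (14), the sketch below (ours, reusing the `schedule_costs` helper and example data from the previous sketch) applies improving Jump moves until none remains; instead of evaluating $\delta_j(x)$ and $\delta_j(x, h)$ explicitly, it compares full objective values, which is equivalent but less efficient.

```python
def jump_local_search(v, p, w, s):
    """Apply improving Jump moves until the schedule is Jump-Opt (a local optimum)."""
    v = list(v)
    best, _, _ = schedule_costs(v, p, w, s)
    improved = True
    while improved:
        improved = False
        for j in range(len(v)):
            for h in range(len(s)):
                if h == v[j]:
                    continue
                trial = v[:j] + [h] + v[j + 1:]   # Jump move: relocate job j to machine h
                cost, _, _ = schedule_costs(trial, p, w, s)
                if cost < best - 1e-12:           # accept only strict improvements
                    v, best, improved = trial, cost, True
    return v, best

print(jump_local_search([0, 1, 0, 1], p, w, s))
```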

3. Properties of the Optimal Schedules

This section describes some properties of feasible and optimal solutions to the $Q||\sum w_j C_j$ problem. Guided by the insights of Cole et al. [54], we develop a mapping from the set of schedules to a specific inner product space. This mapping is designed such that the norm closely corresponds to the total weighted completion time of the schedule. Let $\varphi: M^J \to L_2([0,\infty))^M$ be the function that associates every feasible schedule $v$ with a vector of functions, one for each machine. If $f = \varphi(v)$, then for each machine $i \in M$, we define:
$$ f_i(y) = \sum_{\substack{j \in J_i(v):\, p_j/w_j \ge y}} \frac{w_j}{\sqrt{s_i}}. $$
Lemma 1. 
Let $v$ be a schedule of the $Q||\sum w_j C_j$ problem, and let $f = \varphi(v)$. Then,
$$ \|\varphi(v)\|^2 = 2\,C(v) - \eta(v). $$
Proof. 
For $f = \varphi(v)$, the norm is calculated as:
$$ \|\varphi(v)\|^2 = \sum_{i \in M} \int_0^\infty f_i(y)^2\, dy = \sum_{i \in M} \int_0^\infty \Bigg(\sum_{\substack{j \in J_i(v):\, p_j/w_j \ge y}} \frac{w_j}{\sqrt{s_i}}\Bigg) \Bigg(\sum_{\substack{k \in J_i(v):\, p_k/w_k \ge y}} \frac{w_k}{\sqrt{s_i}}\Bigg) dy = \sum_{i \in M} \sum_{j \in J_i(v)} \sum_{k \in J_i(v)} \frac{w_j w_k}{s_i} \int_0^\infty \mathbb{1}\Big\{y \le \tfrac{p_j}{w_j}\Big\}\, \mathbb{1}\Big\{y \le \tfrac{p_k}{w_k}\Big\}\, dy = \sum_{i \in M} \sum_{j \in J_i(v)} \sum_{k \in J_i(v)} \bigg( \frac{w_j p_k}{s_i}\, \mathbb{1}\Big\{\tfrac{p_k}{w_k} \le \tfrac{p_j}{w_j}\Big\} + \frac{w_k p_j}{s_i}\, \mathbb{1}\Big\{\tfrac{p_k}{w_k} > \tfrac{p_j}{w_j}\Big\} \bigg). $$
By utilizing the ≺, ≻, ⪯, and ⪰ notation, as introduced through the application of the WSPT rule, we obtain
$$ \|\varphi(v)\|^2 = \sum_{i \in M} \sum_{j \in J_i(v)} \sum_{\substack{k \in J_i(v) \\ k \preceq j}} \frac{w_j p_k}{s_i} + \sum_{i \in M} \sum_{j \in J_i(v)} \sum_{\substack{k \in J_i(v) \\ k \succ j}} \frac{w_k p_j}{s_i}. $$
By using Equations (9)–(11), the proof is concluded. □
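Lemma 1 can be checked numerically on small instances (our sketch, reusing the helpers and example data above): the squared norm is evaluated from the closed form obtained in the proof and compared against $2C(v) - \eta(v)$.

```python
def phi_norm_sq(v, p, w, s):
    """||phi(v)||^2 via the closed form derived in the proof of Lemma 1."""
    total = 0.0
    for i in range(len(s)):
        jobs_i = [j for j in range(len(v)) if v[j] == i]
        for j in jobs_i:
            for k in jobs_i:
                if p[k] / w[k] <= p[j] / w[j]:
                    total += w[j] * p[k] / s[i]
                else:
                    total += w[k] * p[j] / s[i]
    return total

v = [0, 1, 0, 1]
C, eta, _ = schedule_costs(v, p, w, s)
print(phi_norm_sq(v, p, w, s), 2 * C - eta)   # the two printed values coincide
```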
For subsequent analysis, it is necessary to quantify the total weighted completion time of a schedule in which all jobs are assigned to a single machine that operates at a speed equal to 1. The total weighted completion time of this schedule is
$$ Z_1 = \sum_{j \in J} \sum_{\substack{k \in J \\ k \preceq j}} w_j p_k = \sum_{j \in J} \sum_{\substack{k \in J \\ k \succeq j}} w_k p_j. \qquad (17) $$
Another solution of particular interest is the one where all jobs are assigned to a single machine that operates at speed m. Remember that according to assumption (6), the sum of the speeds of the machines is equal to m. We represent this schedule by z , and its total weighted completion time is
$$ C(z) = \frac{Z_1}{m}. \qquad (18) $$
In the following lemma, the mapping $\varphi$ is utilized to establish another property of solutions to the $Q||\sum w_j C_j$ problem.
Lemma 2. 
Let $v$ be a schedule for the $Q||\sum w_j C_j$ problem, where $\sum_{i \in M} s_i = m$, and let $z$ denote a schedule in which all jobs are assigned to a single machine operating at speed $m$ (the total number of machines). For $f = \varphi(v)$ and $f_z = \varphi(z)$, it holds that
$$ \|\varphi(z)\|^2 \le \|\varphi(v)\|^2. $$
Before proceeding with the proof of Lemma 2, a graphical illustration of its argument is presented. Figure 2 depicts the graph associated with the mapping $\varphi$ for an instance with $n = 4$ jobs and $m = 2$ machines. For this instance, we have the schedule $v = (1, 2, 1, 2)$, which indicates that jobs $j_1$ and $j_3$ are assigned to machine 1, while jobs $j_2$ and $j_4$ are assigned to machine 2. The representation of this schedule is presented in Figure 2b,c. Here, it can be observed that each job is represented by a rectangle, with its height determined by $w_j/\sqrt{s_i}$ and its length determined by $p_j/w_j$. The order in which the jobs are arranged is defined by the WSPT rule; specifically, they are ordered in decreasing order of the ratio $w_j/p_j$. In Figure 2b,c, the value of the function related to the mapping $\varphi$ for each machine is depicted with red lines, denoted as $f_1(y)$ and $f_2(y)$, respectively. In Figure 2a, the sum of these functions is depicted with a dotted red line. The value of the function related to the mapping $\varphi$ for the schedule where all jobs are assigned to a single machine operating at speed $m$ is depicted by a blue line. It is evident from this graph that the value of the function for this schedule is always less than or equal to $f_1(y) + f_2(y)$ for any value of $y \ge 0$. This result can be generalized to any instance. Next, we proceed with the proof of the lemma.
Proof of Lemma 2. 
By applying the mapping $\varphi$ to schedule $z$, we obtain a single function,
$$ f_z(y) = \sum_{\substack{j \in J:\, p_j/w_j \ge y}} \frac{w_j}{\sqrt{m}}. \qquad (20) $$
The norm for $f_z$ is:
$$ \|\varphi(z)\|^2 = \int_0^\infty f_z(y)^2\, dy = \int_0^\infty \Bigg(\sum_{\substack{j \in J:\, p_j/w_j \ge y}} \frac{w_j}{\sqrt{m}}\Bigg)^2 dy = \int_0^\infty \frac{1}{m} \Bigg(\sum_{\substack{j \in J:\, p_j/w_j \ge y}} w_j\Bigg)^2 dy = \int_0^\infty \frac{1}{m} \Bigg(\sum_{i \in M}\ \sum_{\substack{j \in J_i(v):\, p_j/w_j \ge y}} w_j\Bigg)^2 dy. \qquad (21) $$
In the last equality, based on the schedule $v$, the set of all jobs is partitioned into $m$ disjoint sets of jobs. Thus, for this final expression, the Cauchy–Bunyakovsky–Schwarz inequality is utilized to conclude that:
$$ \|\varphi(z)\|^2 \le \int_0^\infty \sum_{i \in M} \Bigg(\sum_{\substack{j \in J_i(v):\, p_j/w_j \ge y}} w_j\Bigg)^2 dy = \sum_{i \in M} \int_0^\infty \Bigg(\sum_{\substack{j \in J_i(v):\, p_j/w_j \ge y}} w_j\Bigg)^2 dy. \qquad (22) $$
However, the norm for $f = \varphi(v)$ is:
$$ \|\varphi(v)\|^2 = \sum_{i \in M} \int_0^\infty f_i(y)^2\, dy = \sum_{i \in M} \int_0^\infty \Bigg(\sum_{\substack{j \in J_i(v):\, p_j/w_j \ge y}} \frac{w_j}{\sqrt{s_i}}\Bigg)^2 dy = \sum_{i \in M} \frac{1}{s_i} \int_0^\infty \Bigg(\sum_{\substack{j \in J_i(v):\, p_j/w_j \ge y}} w_j\Bigg)^2 dy. \qquad (23) $$
To simplify the notation, we define:
$$ \alpha_i = \int_0^\infty \Bigg(\sum_{\substack{j \in J_i(v):\, p_j/w_j \ge y}} w_j\Bigg)^2 dy. $$
Without loss of generality, we assume that the job processing requirements and weights take values such that:
$$ \alpha_i \ge 1, \quad \forall i \in M. $$
Next, we define the difference between $\|\varphi(z)\|^2$ and $\|\varphi(v)\|^2$ and utilize Equations (20)–(23):
$$ L = \|\varphi(z)\|^2 - \|\varphi(v)\|^2 \le \sum_{i \in M} \alpha_i \left(1 - \frac{1}{s_i}\right) \le \sum_{i \in M} \left(1 - \frac{1}{s_i}\right). \qquad (24) $$
To conclude the proof, it is necessary to demonstrate that the right-hand side of Equation (24) is less than or equal to zero. To achieve this, we formulate the following problem:
$$ \max_{s \in \mathbb{R}_+^m}\ \sum_{i \in M} \left(1 - \frac{1}{s_i}\right), \quad \text{subject to} \quad \sum_{i \in M} s_i = m. \qquad (25) $$
Note that the objective function of Problem (25) is concave, and the constraint is linear. Hence, applying the Lagrange method enables the identification of a global optimum for the problem. The Lagrangian function for Problem (25) is:
$$ \mathcal{L}(s, \lambda) = \sum_{i \in M} \left(1 - \frac{1}{s_i}\right) - \lambda \left(\sum_{i \in M} s_i - m\right). $$
By solving the first-order necessary conditions, we find the optimal solution: $\lambda = 1$ and $s_i = 1$, $\forall i \in M$. This implies that the objective function value in Problem (25) equals zero, thereby demonstrating that
$$ L = \|\varphi(z)\|^2 - \|\varphi(v)\|^2 \le 0. \qquad \square $$
The following theorem is established by applying Lemma 2 to the optimal schedule.
Theorem 1. 
For an optimal schedule $x^*$ of the $Q||\sum w_j C_j$ problem, where $\sum_{i \in M} s_i = m$, the following expression holds:
$$ \frac{2 Z_1}{m} - \frac{1}{m} \sum_{j \in J} w_j p_j \le 2\,C(x^*) - \eta(x^*). $$
Proof. 
The proof comes from Lemmas 1 and 2 and Equation (18). □
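On small instances, Theorem 1 can also be verified by brute force; the sketch below (ours, reusing the earlier helpers, and assuming the machine speeds have been rescaled so that they sum to $m$) enumerates all assignments to find an optimal schedule and compares both sides of the inequality.

```python
from itertools import product

def check_theorem_1(p, w, s):
    """Check 2*Z1/m - sum(w_j*p_j)/m <= 2*C(x*) - eta(x*) by enumeration."""
    n, m = len(p), len(s)
    Z1, _, _ = schedule_costs([0] * n, p, w, [1.0])     # all jobs on one unit-speed machine
    lhs = 2 * Z1 / m - sum(wj * pj for wj, pj in zip(w, p)) / m
    best = min(product(range(m), repeat=n),
               key=lambda a: schedule_costs(list(a), p, w, s)[0])
    C_opt, eta_opt, _ = schedule_costs(list(best), p, w, s)
    return lhs, 2 * C_opt - eta_opt                      # lhs must not exceed the second value

print(check_theorem_1(p, w, s))
```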
Next, we introduce an additional property that will be used to establish the main result of this study.
Lemma 3. 
For any schedule $v$ of the $Q||\sum w_j C_j$ problem, the weighted sum of processing times satisfies
$$ \frac{1}{s_m} \sum_{j \in J} w_j p_j \le \eta(v) \le \frac{1}{s_1} \sum_{j \in J} w_j p_j. $$
Proof. 
The proof is established from Equation (9) and assumption (5). □

4. Performance Guarantee

The following two lemmas provide parametric upper bounds for the performance guarantee of locally optimal solutions for the Jump neighborhood. Here, $x^*$ represents the optimal schedule, and $x$ represents the Jump-Opt schedule.
Lemma 4. 
Given an instance of the $Q||\sum w_j C_j$ problem, where $\sum_{i \in M} s_i = m$, the performance guarantee of Jump-Opt solutions is, at most,
$$ 2 + \left(s_m - 1 - \frac{s_m}{m}\right) \mathbb{1}\!\left\{s_m > \frac{m}{m-1}\right\}. $$
Proof. 
From the local optimality condition, Equation (14), we have:
$$ w_j C_j(x) + \sum_{\substack{k \in J_{x_j}(x) \\ k \succ j}} \frac{w_k p_j}{s_{x_j}} \le \frac{w_j p_j}{s_h} + \sum_{\substack{k \in J_h(x) \\ k \prec j}} \frac{w_j p_k}{s_h} + \sum_{\substack{k \in J_h(x) \\ k \succ j}} \frac{w_k p_j}{s_h}, \quad \forall j \in J,\ h \in M. $$
Next, we multiply both sides of the inequality by $s_h/m$ and sum over all $h \in M$. Note that, as established in Equation (6), $\sum_{h \in M} s_h/m = 1$. Consequently, we derive the following valid inequality:
$$ w_j C_j(x) + \sum_{\substack{k \in J_{x_j}(x) \\ k \succ j}} \frac{w_k p_j}{s_{x_j}} \le w_j p_j + \sum_{h \in M} \sum_{\substack{k \in J_h(x) \\ k \prec j}} \frac{w_j p_k}{m} + \sum_{h \in M} \sum_{\substack{k \in J_h(x) \\ k \succ j}} \frac{w_k p_j}{m} = w_j p_j + \sum_{\substack{k \in J \\ k \prec j}} \frac{w_j p_k}{m} + \sum_{\substack{k \in J \\ k \succ j}} \frac{w_k p_j}{m} = w_j p_j + \sum_{\substack{k \in J \\ k \preceq j}} \frac{w_j p_k}{m} + \sum_{\substack{k \in J \\ k \succeq j}} \frac{w_k p_j}{m} - \frac{2\, w_j p_j}{m}, \quad \forall j \in J. $$
By summing over all $j \in J$ and utilizing Equations (11) and (17), while grouping certain terms, we obtain the following expression:
$$ 2\,C(x) \le \eta(x) + \left(1 - \frac{2}{m}\right) \sum_{j \in J} w_j p_j + \frac{2 Z_1}{m}. $$
Utilizing Theorem 1, it follows that:
$$ 2\,C(x) \le \eta(x) + \left(1 - \frac{1}{m}\right) \sum_{j \in J} w_j p_j + 2\,C(x^*) - \eta(x^*). \qquad (29) $$
Using Equation (10) and Lemma 3, we have $\eta(x) \le C(x)$ and $\sum_{j \in J} w_j p_j \le s_m\, \eta(x^*)$, respectively. Therefore,
$$ C(x) \le 2\,C(x^*) + \eta(x^*) \left(s_m - \frac{s_m}{m} - 1\right). \qquad (30) $$
From Equation (30), it can be determined that there are two cases depending on the sign of the term that multiplies $\eta(x^*)$. The term is non-positive for $s_m \le m/(m-1)$. In this case, we have
$$ C(x) \le 2\,C(x^*). $$
Conversely, if the term that multiplies $\eta(x^*)$ is positive, and given that $\eta(x^*) \le C(x^*)$, we have
$$ C(x) \le C(x^*) \left(1 + s_m - \frac{s_m}{m}\right). $$
Finally, the proof is concluded by isolating $C(x)/C(x^*)$. □
Lemma 5. 
Given an instance of the $Q||\sum w_j C_j$ problem, where $\sum_{i \in M} s_i = m$, the performance guarantee of Jump-Opt solutions is, at most,
$$ \frac{1}{2} \left(\frac{s_m}{s_1} + s_m + 1 - \frac{s_m}{m}\right). $$
Proof. 
This proof begins with Equation (29):
$$ 2\,C(x) \le \eta(x) + \left(1 - \frac{1}{m}\right) \sum_{j \in J} w_j p_j + 2\,C(x^*) - \eta(x^*). $$
According to Lemma 3, we deduce that $\eta(x) \le \frac{1}{s_1} \sum_{j \in J} w_j p_j$. Therefore,
$$ 2\,C(x) \le \left(1 - \frac{1}{m} + \frac{1}{s_1}\right) \sum_{j \in J} w_j p_j + 2\,C(x^*) - \eta(x^*). $$
Furthermore, according to Lemma 3, we have that $\sum_{j \in J} w_j p_j \le s_m\, \eta(x^*)$. Hence,
$$ 2\,C(x) \le \left(s_m - \frac{s_m}{m} + \frac{s_m}{s_1} - 1\right) \eta(x^*) + 2\,C(x^*). $$
Equation (10) makes it evident that $\eta(x^*) \le C(x^*)$. Further, considering the non-negativity of the term multiplying $\eta(x^*)$, we have:
$$ 2\,C(x) \le \left(s_m - \frac{s_m}{m} + \frac{s_m}{s_1} + 1\right) C(x^*). $$
Finally, the proof is concluded by isolating $C(x)/C(x^*)$. □
The subsequent lemma illustrates that the parametric upper bound introduced in Lemma 5 provides a tighter upper bound compared to the one presented in [41].
Lemma 6. 
The proposed parametric upper bound for the performance guarantee of Jump-Opt solutions for the $Q||\sum w_j C_j$ problem is better than the Muñoz and Pinochet parametric upper bound [41].
Proof. 
By considering Equation (4) and Lemma 5, we determine the difference between the two parametric upper bounds. Let $D$ represent this difference:
$$ D = \frac{2}{1+\frac{2}{m}} \left(\frac{s_m}{s_1} + \frac{1}{2m}\right) - \frac{1}{2} \left(\frac{s_m}{s_1} + s_m + 1 - \frac{s_m}{m}\right) = \frac{s_m}{s_1} \left(\frac{3m-2}{2(m+2)} - \frac{m}{2(m+2)}\, \frac{s_1}{s_m} - \frac{m-1}{2m}\, s_1\right). $$
To establish the lemma, it is sufficient to demonstrate that $D \ge 0$. Given that $s_1/s_m$ and $s_1$ are upper-bounded by 1, we establish the following inequality:
$$ D \ge \frac{s_m}{s_1} \left(\frac{3m-2}{2(m+2)} - \frac{m}{2(m+2)} - \frac{m-1}{2m}\right) = \frac{s_m}{s_1}\, \frac{m^2 - 3m + 2}{2m(m+2)} = \frac{s_m}{s_1}\, \frac{(m-1)(m-2)}{2m(m+2)}. $$
This final expression shows that $D \ge 0$ holds true for all $m \ge 2$. □
In the following theorem, we present our main result, utilizing the parametric upper bounds established in Lemmas 4 and 5, in conjunction with the constant performance guarantee for the $Q|M_j|\sum w_j C_j$ problem (refer to Table 1). It is important to note that the $Q|M_j|\sum w_j C_j$ problem is a generalization of the $Q||\sum w_j C_j$ problem. In the $Q|M_j|\sum w_j C_j$ problem, the presence of machine eligibility restrictions is indicated by $M_j$, meaning each job $j$ can be processed by a subset $M_j \subseteq M$ of machines. This problem is considered a generalization since $|M_j|$ can be equal to $m$ for all jobs. Consequently, the performance guarantee for Jump-Opt solutions in the $Q|M_j|\sum w_j C_j$ problem serves as an upper bound for the performance guarantee of Jump-Opt solutions in the $Q||\sum w_j C_j$ problem.
Theorem 2. 
Given an instance of the $Q||\sum w_j C_j$ problem, where $\sum_{i \in M} s_i = m$, the performance guarantee of Jump-Opt solutions is, at most,
$$ \min\left\{2.618,\ \ 2 + \left(s_m - 1 - \frac{s_m}{m}\right) \mathbb{1}\!\left\{s_m > \frac{m}{m-1}\right\},\ \ \frac{1}{2}\left(\frac{s_m}{s_1} + s_m + 1 - \frac{s_m}{m}\right)\right\}. $$
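The bound of Theorem 2 is straightforward to evaluate from the instance parameters alone. The following sketch (ours, not from the paper) computes the three terms and their minimum; as a usage example, it reproduces the values reported for combination 2 of Table 2.

```python
def theorem2_bound(speeds):
    """Upper bound of Theorem 2 for machine speeds rescaled so that they sum to m."""
    m = len(speeds)
    s1, sm = min(speeds), max(speeds)
    ub1 = 2.618
    ub2 = 2 + (sm - 1 - sm / m) * (1 if sm > m / (m - 1) else 0)
    ub3 = 0.5 * (sm / s1 + sm + 1 - sm / m)
    return min(ub1, ub2, ub3), (ub1, ub2, ub3)

# Combination 2 of Table 2: four machines of speed 0.935 and one of speed 1.260.
print(theorem2_bound([0.935, 0.935, 0.935, 0.935, 1.260]))
# -> (1.678, (2.618, 2.008, 1.678)), up to rounding
```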

5. Discussion

Based on the results presented in Section 4, it becomes evident that the bound introduced by Muñoz and Pinochet [41] is outperformed by the bound proposed in Lemma 5. This performance difference, demonstrated in Lemma 6, is the key reason why the upper bound from [41] is not incorporated into Theorem 2.
Next, we will discuss the complementarity of the three upper bounds presented in Theorem 2. We will examine this complementarity in the context of two machines and the scenario where $m$ tends to infinity. We will use $ub_2$ and $ub_3$ to refer to the upper bounds of Lemmas 4 and 5, respectively.
For environments with $m = 2$ machines and given that $s_1 + s_2 = 2$, we have:
$$ ub_2 = 2 + \left(s_2 - 1 - \frac{s_2}{2}\right) \mathbb{1}\{s_2 > 2\} = 2, \qquad ub_3 = \frac{1}{2}\left(\frac{s_2}{s_1} + s_2 + 1 - \frac{s_2}{2}\right) = \frac{1}{s_1} + \frac{1}{2} - \frac{s_1}{4}. $$
The first observation for this case is that the fixed upper bound of 2.618 is dominated by the constant value of 2 provided by $ub_2$. To assess the complementarity of $ub_2$ and $ub_3$, we introduce the difference $R_2 = ub_3 - ub_2$. A positive value of $R_2$ implies that the bound $ub_2$ is tighter than $ub_3$, while a negative value implies the opposite. Thus,
$$ R_2 = \frac{1}{s_1} - \frac{s_1}{4} - \frac{3}{2} = \frac{\left(\sqrt{13} - 3 - s_1\right)\left(s_1 + 3 + \sqrt{13}\right)}{4\, s_1}. $$
Note that $R_2 \ge 0$ if, and only if, $s_1 \le \sqrt{13} - 3 \approx 0.6056$. Then, $ub_2$ is a better bound than $ub_3$ for $s_1 \le 0.6056$, while for $s_1 > 0.6056$, $ub_3$ is a better bound than $ub_2$.
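The two-machine crossover can be confirmed numerically; in this small sketch (ours, assuming $s_1 + s_2 = 2$ so that $ub_2 = 2$), $R_2$ changes sign at $s_1 = \sqrt{13} - 3$.

```python
import math

def r2(s1):
    # R_2 = ub_3 - ub_2 for m = 2 machines with s_1 + s_2 = 2 (so ub_2 = 2).
    return 1 / s1 + 0.5 - s1 / 4 - 2

crossover = math.sqrt(13) - 3   # approximately 0.6056
print(r2(crossover - 0.01), r2(crossover), r2(crossover + 0.01))  # positive, ~0, negative
```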
For $m \to \infty$, it should be noted that the condition of $ub_2$, $s_m > m/(m-1) \to 1$, holds true consistently in a uniform parallel machine environment. Thus,
$$ ub_2 = s_m + 1, \qquad ub_3 = \frac{1}{2}\left(\frac{s_m}{s_1} + s_m + 1\right). $$
Analogously to the analysis of the previous case, we use the difference to analyze the complementarity of the upper bounds. Let $R_m = ub_3 - ub_2$ be the difference for this case,
$$ R_m = \frac{1}{2}\left(\frac{s_m}{s_1} - s_m - 1\right). $$
The sign that $R_m$ takes depends on the values of $s_1$ and $s_m$. Therefore, both upper bounds must be considered. In other words, neither of the two upper bounds is dominated by the other.
To illustrate the complementarity of the three upper bounds included in Theorem 2, we provide examples. Table 2 presents five illustrative parameter combinations for the $Q||\sum w_j C_j$ problem, along with the values obtained from the upper bounds $ub_2$ and $ub_3$, in addition to the upper bound for the generalization of the problem [42], denoted as $ub_1$. The results provide evidence of the usefulness of all the upper bounds, contingent upon the specific parameter combination. These examples underscore how Theorem 2 enables the establishment of better upper bounds than those attainable through each upper bound individually. Note that in combinations 1 to 3, $s_5 > m/(m-1) = 1.25$. In contrast, in combinations 4 and 5, $s_5 \le 1.25$. This discrepancy has implications for determining $ub_2$.
Finally, we extend the result of Lemma 5 to a particular case. The problem under study, $Q||\sum w_j C_j$, is a generalization of the problem in an environment with identical parallel machines, represented by $P||\sum w_j C_j$. For this problem, Brueggemann et al. [43] determined that the performance guarantee for solutions obtained using the Jump neighborhood lies within the interval $[1.2,\ (3m-1)/(2m)]$. Regarding this result, it can be observed that the parametric upper bound for the performance guarantee of the Jump neighborhood proposed in Lemma 5 coincides with the value reported in [43] when $s_1 = s_m = 1$. Note that these values refer to the scenario of identical machines operating at unitary speed.

6. Conclusions

This study presents two new parametric upper bounds for the worst-case performance guarantee of Jump-Opt solutions for the problem of scheduling jobs in a uniformly related parallel machine environment to minimize the total weighted completion time, a recognized NP-hard combinatorial optimization problem.
The research focused on establishing the local optimality condition for the Jump neighborhood and on developing a mapping to represent a schedule within an inner product space, where the norm closely corresponds to the total weighted completion time of the schedule. The determined upper bounds express the performance guarantee in terms of the parameters that describe an instance family: the number of machines and the speeds of the fastest and slowest machines.
The noteworthy findings of this study include the complementarity of the developed parametric upper bounds with the fixed performance guarantee of a generalization of the problem under study. Additionally, the new bounds outperformed the parametric upper bound previously reported in the literature.

Author Contributions

Conceptualization, F.T.M.; Methodology, F.T.M. and G.L.-N.; Validation, G.L.-N. and M.R.-M.; Formal analysis, M.R.-M.; Investigation, F.T.M.; Writing—original draft, F.T.M.; Writing—review & editing, G.L.-N. and M.R.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Universidad del Bío-Bío grant number 2060240 IF/R.

Data Availability Statement

There are no data used for the above study.

Acknowledgments

The authors would like to thank the editor and anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Graham, R.L.; Lawler, E.L.; Lenstra, J.K.; Rinooy Kan, A.H.G. Optimization and approximation in deterministic sequencing and scheduling: A survey. Ann. Discret. Math. 1979, 5, 287–326. [Google Scholar]
  2. Chen, H.; Guo, P.; Jimenez, J.; Dong, Z.S.; Cheng, W. Unrelated parallel machine photolithography scheduling problem with dual resource constraints. IEEE Trans. Semicond. Manuf. 2022, 36, 100–112. [Google Scholar] [CrossRef]
  3. Ji, B.; Xiao, X.; Yu, S.S.; Wu, G. A hybrid large neighborhood search method for minimizing makespan on unrelated parallel batch processing machines with incompatible job families. Sustainability 2023, 15, 3934. [Google Scholar] [CrossRef]
  4. Siemiatkowski, M.S.; Deja, M. Planning optimised multi-tasking operations under the capability for parallel machining. J. Manuf. Syst. 2021, 61, 632–645. [Google Scholar] [CrossRef]
  5. Berthier, A.; Yalaoui, A.; Chehade, H.; Yalaoui, F.; Amodeo, L.; Bouillot, C. Unrelated parallel machines scheduling with dependent setup times in textile industry. Comput. Ind. Eng. 2022, 174, 108736. [Google Scholar] [CrossRef]
  6. Hidri, L.; Alqahtani, A.; Gazdar, A.; Ben Youssef, B. Green scheduling of identical parallel machines with release date, delivery time and no-idle machine constraints. Sustainability 2021, 13, 9277. [Google Scholar] [CrossRef]
  7. Vázquez-Serrano, J.I.; Cárdenas-Barrón, L.E.; Peimbert-García, R.E. Agent scheduling in unrelated parallel machines with sequence- and agent–machine–dependent setup time problem. Mathematics 2021, 9, 2955. [Google Scholar] [CrossRef]
  8. Mahafzah, B.; Jabri, R.; Murad, O. Multithreaded scheduling for program segments based on chemical reaction optimizer. Soft Comput. 2021, 25, 2741–2766. [Google Scholar] [CrossRef]
  9. Abu-Marrul, V.; Martinelli, R.; Hamacher, S.; Gribkovskaia, I. Matheuristics for a parallel machine scheduling problem with non-anticipatory family setup times: Application in the offshore oil and gas industry. Comput. Oper. Res. 2021, 128, 105162. [Google Scholar] [CrossRef]
  10. Antunes, A.R.; Matos, M.A.; Rocha, A.M.A.C.; Costa, L.A.; Varela, L.R. A statistical comparison of metaheuristics for unrelated parallel machine scheduling problems with setup times. Mathematics 2022, 10, 2431. [Google Scholar] [CrossRef]
  11. Durasević, M.; Jakobović, D. Heuristic and metaheuristic methods for the parallel unrelated machines scheduling problem: A survey. Artif. Intell. Rev. 2023, 56, 3181–3289. [Google Scholar] [CrossRef]
  12. Komari Alaei, M.R.; Soysal, M.; Elmi, A.; Banaitis, A.; Banaitiene, N.; Rostamzadeh, R.; Javanmard, S. A Bender’s algorithm of decomposition used for the parallel machine problem of robotic cell. Mathematics 2021, 9, 1730. [Google Scholar] [CrossRef]
  13. Módos, I.; Šucha, P.; Hanzálek, Z. On parallel dedicated machines scheduling under energy consumption limit. Comput. Ind. Eng. 2021, 159, 107209. [Google Scholar] [CrossRef]
  14. Sterna, M. Late and early work scheduling: A survey. Omega-Int. J. Manag. Sci. 2021, 104, 102453. [Google Scholar] [CrossRef]
  15. Xiao, Y.; Zheng, Y.; Yu, Y.; Zhang, L.; Lin, X.; Li, B. A branch and bound algorithm for a parallel machine scheduling problem in green manufacturing industry considering time cost and power consumption. J. Clean Prod. 2021, 320, 128867. [Google Scholar] [CrossRef]
  16. Fathollahi-Fard, A.M.; Ahmadi, A.; Goodarzian, F.; Cheikhrouhou, N. A bi-objective home healthcare routing and scheduling problem considering patients’ satisfaction in a fuzzy environment. Appl. Soft. Comput. 2020, 93, 106385. [Google Scholar] [CrossRef]
  17. Sepúlveda, I.A.; Aguayo, M.M.; De la Fuente, R.; Latorre-Núñez, G.; Obreque, C.; Orrego, C.V. Scheduling mobile dental clinics: A heuristic approach considering fairness among school districts. Health Care Manag. Sci. 2022, 1–26. [Google Scholar] [CrossRef]
  18. Çanakoğlu, E.; Muter, İ. Identical parallel machine scheduling with discrete additional resource and an application in audit scheduling. Int. J. Prod. Res. 2021, 59, 5321–5336. [Google Scholar] [CrossRef]
  19. Rivera, G.; Porras, R.; Sanchez-Solis, J.P.; Florencia, R.; García, V. Outranking-based multi-objective PSO for scheduling unrelated parallel machines with a freight industry-oriented application. Eng. Appl. Artif. Intell. 2022, 108, 104556. [Google Scholar] [CrossRef]
  20. Theophilus, O.; Dulebenets, M.A.; Pasha, J.; Lau, Y.Y.; Fathollahi-Fard, A.M.; Mazaheri, A. Truck scheduling optimization at a cold-chain cross-docking terminal with product perishability considerations. Comput. Ind. Eng. 2021, 156, 107240. [Google Scholar] [CrossRef]
  21. Linfati, R.; Gatica, G.; Escobar, J.W. A mathematical model for scheduling and assignment of customers in hospital waste collection routes. Appl. Sci. 2021, 11, 10557. [Google Scholar] [CrossRef]
  22. Tian, G.; Fathollahi-Fard, A.M.; Ren, Y.; Li, Z.; Jiang, X. Multi-objective scheduling of priority-based rescue vehicles to extinguish forest fires using a multi-objective discrete gravitational search algorithm. Inf. Sci. 2022, 608, 578–596. [Google Scholar] [CrossRef]
  23. Pasha, J.; Elmi, Z.; Purkayastha, S.; Fathollahi-Fard, A.M.; Ge, Y.E.; Lau, Y.Y.; Dulebenets, M.A. The drone scheduling problem: A systematic state-of-the-art review. IEEE Trans. Intell. Transp. Syst. 2022, 23, 14224–14247. [Google Scholar] [CrossRef]
  24. Pinedo, M.L. Scheduling: Theory, Algorithms, and Systems, 5th ed.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 13–21. [Google Scholar]
  25. Conway, R.W.; Maxwell, W.L.; Miller, L.W. Theory of Scheduling; Addison-Wesley: Boston, MA, USA, 1967; pp. 74–79. [Google Scholar]
  26. Horowitz, E.; Sahni, S. Exact and approximate algorithms for scheduling nonidentical processors. J. ACM 1976, 23, 317–327. [Google Scholar] [CrossRef]
  27. Horn, W.A. Minimizing average flow time with parallel machines. Oper. Res. 1973, 21, 846–847. [Google Scholar] [CrossRef]
  28. Garey, M.; Johnson, D. Computers and Intractability: A Guide to the Theory of NP-Completeness; WH Freeman and Co.: San Francisco, CA, USA, 1979. [Google Scholar]
  29. Bruno, J.; Coffman, E.G., Jr.; Sethi, R. Scheduling independent tasks to reduce mean finishing time. Commun. ACM 1974, 17, 382–387. [Google Scholar] [CrossRef]
  30. Lenstra, J.K.; Rinooy Kan, A.H.G.; Brucker, P. Complexity of machine scheduling problems. Ann. Discret. Math. 1977, 1, 343–362. [Google Scholar]
  31. Garey, M.; Johnson, D. Strong NP-Completeness results: Motivation, examples, and implications. J. ACM 1978, 25, 499–508. [Google Scholar] [CrossRef]
  32. Lawler, E.L.; Lenstra, J.K.; Rinnooy Kan, A.H.G.; Shmoys, D.B. Chapter 9 Sequencing and scheduling: Algorithms and complexity. In Logistics of Production and Inventory, Handbooks in Operations Research and Management Science; Graves, S.C., Rinnooy Kan, A.H.G., Zipkin, P.H., Eds.; Elsevier: Amsterdam, The Netherlands, 1993; Volume 4, pp. 445–522. [Google Scholar]
  33. Epstein, L.; Sgall, J. Approximation schemes for scheduling on uniformly related and identical parallel machines. Algorithmica 2004, 39, 43–57. [Google Scholar]
  34. Skutella, M.; Woeginger, G.J. A PTAS for minimizing the total weighted completion time on identical parallel machines. Math. Oper. Res. 2000, 25, 63–75. [Google Scholar] [CrossRef]
  35. Bansal, N.; Srinivasan, A.; Svensson, O. Lift-and-round to improve weighted completion time on unrelated machines. In Proceedings of the Forty-Eighth Annual ACM Symposium on Theory of Computing, Cambridge, MA, USA, 19–21 June 2016; pp. 156–167. [Google Scholar]
  36. Li, S. Scheduling to minimize total weighted completion time via time-indexed linear programming relaxations. SIAM J. Comput. 2020, 49, 409–440. [Google Scholar] [CrossRef]
  37. Williamson, D.P.; Shmoys, D.B. The Design of Approximation Algorithms; Cambridge University Press: Cambridge, UK, 2011; pp. 14–15. Available online: https://www.designofapproxalgs.com/index.php (accessed on 8 November 2023).
  38. Angel, E. A survey of approximation results for local search algorithms. In Efficient Approximation and Online Algorithms. Lecture Notes in Computer Science; Bampis, E., Jansen, K., Kenyon, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3484, pp. 30–73. [Google Scholar]
  39. Michiels, W.; Aarts, E.; Korst, J. Theoretical Aspects of Local Search; Springer Science & Business Media: Berlin, Germany, 2007. [Google Scholar]
  40. Ahuja, R.K.; Ergun, Ö.; Orlin, J.B.; Punnen, A.P. A survey of very large-scale neighborhood search techniques. Discret. Appl. Math. 2002, 123, 75–102. [Google Scholar] [CrossRef]
  41. Muñoz, F.T.; Pinochet, A.A. Performance guarantee of the jump neighborhood for scheduling jobs on uniformly related machines. Rairo-Oper. Res. 2022, 56, 1079–1088. [Google Scholar] [CrossRef]
  42. Correa, J.R.; Muñoz, F.T. Performance guarantees of local search for minsum scheduling problems. Math. Program. 2022, 191, 847–869. [Google Scholar] [CrossRef]
  43. Brueggemann, T.; Hurink, J.L.; Kern, W. Quality of move-optimal schedules for minimizing total weighted completion time. Oper. Res. Lett. 2006, 34, 583–590. [Google Scholar] [CrossRef]
  44. Finn, G.; Horowitz, E. A linear time approximation algorithm for multiprocessor scheduling. Bit 1979, 19, 312–320. [Google Scholar] [CrossRef]
  45. Schuurman, P.; Vredeveld, T. Performance guarantees of local search for multiprocessor scheduling. INFORMS J. Comput. 2007, 19, 52–63. [Google Scholar] [CrossRef]
  46. Cho, Y.; Sahni, S. Bounds for list schedules on uniform processors. SIAM J. Comput. 1980, 9, 91–103. [Google Scholar] [CrossRef]
  47. Recalde, D.; Rutten, C.; Schuurman, P.; Vredeveld, T. Local search performance guarantees for restricted related parallel machine scheduling. In LATIN 2010: Theoretical Informatics. Lecture Notes in Computer Science; López-Ortiz, A., Ed.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6034, pp. 108–119. [Google Scholar]
  48. Rutten, C.; Recalde, D.; Schuurman, P.; Vredeveld, T. Performance guarantees of jump neighborhoods on restricted related parallel machines. Oper. Res. Lett. 2012, 40, 287–291. [Google Scholar] [CrossRef]
  49. Brueggemann, T.; Hurink, J.L.; Vredeveld, T.; Woeginger, G.J. Exponential size neighborhoods for makespan minimization scheduling. Nav. Res. Logist. 2011, 58, 795–803. [Google Scholar] [CrossRef]
  50. Frangioni, A.; Necciari, E.; Scutellà, M.G. A multi-exchange neighborhood for minimum makespan parallel machine scheduling problems. J. Comb. Optim. 2004, 8, 195–220. [Google Scholar] [CrossRef]
  51. Brucker, P.; Hurink, J.; Werner, F. Improving local search heuristics for some scheduling problems. Part II. Discret. Appl. Math. 1997, 72, 47–69. [Google Scholar] [CrossRef]
  52. Eastman, W.L.; Even, S.; Isaacs, I.M. Bounds for the optimal scheduling of n jobs on m processors. Manage. Sci. 1964, 11, 268–279. [Google Scholar] [CrossRef]
  53. Smith, W.E. Various optimizers for single-stage production. Nav. Res. Logist. Q. 1956, 3, 59–66. [Google Scholar] [CrossRef]
  54. Cole, R.; Correa, J.R.; Gkatzelis, V.; Mirrokni, V.; Olver, N. Decentralized utilitarian mechanisms for scheduling games. Games Econ. Behav. 2015, 92, 306–326. [Google Scholar] [CrossRef]
Figure 1. Schematic for a Jump move.
Figure 2. A mapping of schedules z and v for an instance involving four jobs and two machines.
Table 1. Performance guarantee of the Jump neighborhood for scheduling jobs on parallel machine environments to minimize the total weighted completion time.

Problem | Jump Performance | References
$R||\sum w_j C_j$ | 2.618 | [42]
$Q|M_j|\sum w_j C_j$ ^1 | 2.618 | [42]
$P|M_j|\sum w_j C_j$ | [1.75, 1.809] | [42]
$Q||\sum w_j C_j$ | [1.423, 2.618] | [41,42]
$Q||\sum w_j C_j$ ^2,3 | $ub = \frac{2}{1+\frac{2}{m}}\left(\frac{s_m}{s_1} + \frac{1}{2m}\right)$ | [41]
$P||\sum w_j C_j$ | $[6/5,\ 3/2 - 1/(2m)]$ | [43]
$R||\sum C_j$ | 2 | [42]
$Q|M_j|\sum C_j$ | 2 | [42]
$P|M_j|\sum C_j$ | [1.525, 1.618] | [42]

^1 $M_j$ denotes machine eligibility restrictions. ^2 $s_m$ and $s_1$ represent the speeds of the fastest and slowest machines, respectively, and $m$ represents the number of machines. ^3 $ub$ represents the parametric upper bound for the performance guarantee.
Table 2. Examples of the performance of the upper bounds for different parameter combinations in 5-machine instances.

Machine Speeds | $s_5/s_1$ | Bounds | Upper Bound | Best Bound
Comb. 1: $s_1 = 0.050$, $s_2 = 0.100$, $s_3 = 0.100$, $s_4 = 0.150$, $s_5 = 4.600$ | 92 | $ub_1 = 2.618$, $ub_2 = 4.680$, $ub_3 = 48.34$ | 2.618 | $ub_1$
Comb. 2: $s_1 = s_2 = s_3 = s_4 = 0.935$, $s_5 = 1.260$ | 1.348 | $ub_1 = 2.618$, $ub_2 = 2.008$, $ub_3 = 1.678$ | 1.678 | $ub_3$
Comb. 3: $s_1 = 0.050$, $s_2 = s_3 = s_4 = 1.230$, $s_5 = 1.260$ | 25.20 | $ub_1 = 2.618$, $ub_2 = 2.008$, $ub_3 = 13.60$ | 2.008 | $ub_2$
Comb. 4: $s_1 = s_2 = s_3 = s_4 = 0.975$, $s_5 = 1.100$ | 1.128 | $ub_1 = 2.618$, $ub_2 = 2.000$, $ub_3 = 1.504$ | 1.504 | $ub_3$
Comb. 5: $s_1 = 0.470$, $s_2 = 1.000$, $s_3 = 1.100$, $s_4 = 1.180$, $s_5 = 1.250$ | 2.660 | $ub_1 = 2.618$, $ub_2 = 2.000$, $ub_3 = 2.330$ | 2.000 | $ub_2$

$ub_1 = 2.618$. $ub_2 = 2 + \left(s_m - 1 - \frac{s_m}{m}\right)\mathbb{1}\{s_m > \frac{m}{m-1}\}$. $ub_3 = \frac{1}{2}\left(\frac{s_m}{s_1} + s_m + 1 - \frac{s_m}{m}\right)$.