Article

Single Machine Vector Scheduling with General Penalties

1 School of Information Science and Engineering, Yunnan University, Kunming 650500, China
2 School of Electronic Engineering and Computer Science, Peking University, Beijing 100871, China
3 School of Mathematics and Statistics, Yunnan University, Kunming 650500, China
* Authors to whom correspondence should be addressed.
Mathematics 2021, 9(16), 1965; https://doi.org/10.3390/math9161965
Submission received: 7 June 2021 / Revised: 11 August 2021 / Accepted: 11 August 2021 / Published: 17 August 2021

Abstract
In this paper, we study the single machine vector scheduling problem (SMVS) with general penalties, in which each job is characterized by a d-dimensional vector and can either be accepted and processed on the machine or be rejected. The objective is to minimize the sum of the maximum load, over all dimensions, of the total vector of the accepted jobs and the rejection penalty of the rejected jobs, which is determined by a set function. We perform the following work in this paper. First, we prove a lower bound showing that no α(n)-approximation algorithm exists for SMVS with general penalties, where α(n) is any positive polynomial function of n. We then consider a special case in which both the diminishing-return ratio of the set function and the minimum load over all dimensions of any job are larger than zero, and we design an approximation algorithm based on the projected subgradient method. Second, we consider another special case in which the penalty set function is submodular. We propose a noncombinatorial e/(e − 1)-approximation algorithm and a combinatorial min{r, d}-approximation algorithm, where r is the maximum ratio of the maximum load to the minimum load over the d-dimensional vectors.

1. Introduction

Parallel machine scheduling has had a long history since the pioneering work of Graham [1]. Given n jobs J_1, J_2, …, J_n and m parallel machines M_1, M_2, …, M_m, where each job J_j requires the same processing time p_j on any machine, each job should be processed on one of the machines, and the objective is to minimize the makespan, i.e., the maximum total processing time over all machines. Graham [1] proved that parallel machine scheduling is strongly NP-hard and designed the classical list scheduling (LS) algorithm, which achieves a worst-case guarantee of 2. Hochbaum and Shmoys [2] designed a polynomial time approximation scheme (PTAS), which was improved to an efficient polynomial time approximation scheme (EPTAS) by Alon et al. [3]. The best known algorithm for this problem is the EPTAS designed by Jansen et al. [4].
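For illustration, the LS rule can be sketched in a few lines of Python (the job list and machine count below are illustrative, not from the paper): each job, in the given order, goes to the currently least-loaded machine.

```python
import heapq

def list_scheduling(processing_times, m):
    """Graham's list scheduling: scan the jobs in the given order and
    assign each one to the machine with the smallest current load.
    Returns the makespan (maximum machine load)."""
    loads = [0] * m                      # machine loads, kept as a min-heap
    heapq.heapify(loads)
    for p in processing_times:
        least = heapq.heappop(loads)     # least-loaded machine
        heapq.heappush(loads, least + p)
    return max(loads)
```

The makespan produced by this rule is at most (2 − 1/m) times the optimum, which gives the worst-case guarantee of 2 mentioned above.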
However, in many practical make-to-order production systems, due to limited production capacity or tight delivery requirements, a manufacturer may accept only some jobs and reject the others, scheduling the accepted jobs on the machines for processing. With this motivation, parallel machine scheduling with rejection was proposed by Bartal et al. [5], where a job J_j may be rejected at the cost of a rejection penalty w_j. The objective of parallel machine scheduling with rejection is to minimize the makespan plus the total rejection penalty. Bartal et al. [5] proposed a 2-approximation algorithm with running time O(n log n) and a polynomial time approximation scheme (PTAS). Then, Ou et al. [6] proposed a (3/2 + ε)-approximation algorithm with running time O(n log n + n/ε), where ε is a given small positive constant. This result was further improved by Ou and Zhong [7], who designed a (4/3 + ε)-approximation algorithm with running time O(m n² / ε²). For the case of a single machine, Shabtay [8] considered four different problem variations. Zhang et al. [9] considered single machine scheduling with release dates and rejection; they proved that this problem is NP-hard and presented a fully polynomial time approximation scheme (FPTAS). Recently, He et al. [10] and Ou et al. [11] independently designed an improved approximation algorithm with a running time of O(n log n). More related results can be found in the surveys [12,13,14,15,16,17,18].
The vector scheduling problem, proposed by Chekuri and Khanna [19] and studied for a long time, is a generalization of parallel machine scheduling in which each job J_j is associated with a d-dimensional vector p_j = (p_j^1, p_j^2, …, p_j^d). The problem is to schedule n d-dimensional jobs on m machines so as to minimize the maximum load over all dimensions and all machines. When d is part of the input, Chekuri and Khanna [19] presented a lower bound showing that no constant-factor approximation algorithm exists for the vector scheduling problem, and they presented an O(ln² d)-approximation algorithm. Later, Meyerson et al. [20] proposed a better O(log d)-approximation algorithm, which was further improved to an O(log d / log log d)-approximation by Im et al. [21]. When d is a constant, Chekuri and Khanna [19] presented a PTAS, which was improved to an EPTAS by Bansal et al. [22]. More related results can be found in the surveys [19,21,23].
Li and Cui [24] considered the single machine vector scheduling problem (SMVS) with rejection, which aims to minimize the maximum load over all dimensions plus the sum of the penalties of the rejected jobs. They proved that this problem is NP-hard and designed a combinatorial d-approximation algorithm and an e/(e − 1)-approximation algorithm based on randomized rounding. Then, Dai and Li [25] studied the vector scheduling problem with rejection on two machines and designed a combinatorial 3-approximation algorithm and a 2.54-approximation algorithm based on randomized rounding.
In recent years, nonlinear combinatorial optimization has attracted increasing attention. One important field in this area is set function optimization, and the corresponding research is application-driven, arising in areas such as machine learning and data mining [26,27,28,29,30]. In particular, a set function is submodular if it has decreasing marginal returns. Submodular functions are used in various fields, such as operations research, economics, engineering, computer science, and management science [31,32,33]. For instance, the rejection penalty can be regarded as the loss of the manufacturer's prestige; in economics, it is common for the marginal penalty to become smaller as the number of rejected jobs increases, which means that the penalty function is submodular. Combinatorial optimization problems with submodular penalties have been proposed and studied. Recently, Zhang et al. [34] proposed a 3-approximation algorithm for precedence-constrained scheduling with submodular rejection on parallel machines. Liu et al. [35] proposed a combinatorial O(n)-approximation algorithm for the submodular load balancing problem with submodular penalties. Liu and Li [36] proposed a combinatorial (2 − 1/m)-approximation algorithm for parallel machine scheduling with submodular penalties. Liu and Li [14] proposed a 2-approximation algorithm for the single machine scheduling problem with release dates and submodular rejection penalties.
However, in the real world, complicated interpersonal relationships may cause the rejection set function not to be submodular. Thus, in this paper, we consider single machine vector scheduling (SMVS) with general penalties, a generalization of SMVS with rejection in which the rejection penalty is determined by a normalized and nondecreasing set function. As shown in Table 1, for SMVS with general penalties, we first present a lower bound showing that there is no α(n)-approximation algorithm, where α(n) is any positive polynomial function of n. Then, we consider a special case in which p_j^i > 0 for any dimension i of any job J_j and the diminishing-return ratio γ of the penalty set function satisfies γ > 0, and we design a combinatorial approximation algorithm that outputs a solution whose objective value is no more than (1/γ)·Z* + (1 − 1/r)·L + ε, where Z* is the optimal value of this problem, r = +∞ if min_i p_j^i = 0 and r = max_{J_j ∈ J} (max_i p_j^i / min_i p_j^i) otherwise, L = max_i Σ_{j ∈ [n]} p_j^i is the maximum load of the ground set J, and ε > 0 is a given parameter. Then, we consider another special case in which the penalty set function is submodular, and we propose a noncombinatorial e/(e − 1)-approximation algorithm and a combinatorial min{r, d}-approximation algorithm. If the rejection set function is submodular and d = 1, then SMVS with general penalties is exactly the single machine scheduling problem with submodular rejection penalties. If the rejection set function is linear, then SMVS with general penalties is exactly SMVS with rejection. If the penalty of every nonempty job set is +∞, then SMVS with general penalties is exactly single machine vector scheduling. If the rejection set function is linear and d = 1, then SMVS with general penalties is exactly single machine scheduling with rejection.
The difficulty of SMVS with general penalties lies in how to use the characteristics of the rejection penalty set function and of the load, which is determined by the relevant d-dimensional vectors, to minimize the objective value of the feasible solution. Since both the rejection penalty set function and the load are nonlinear, a solution generated by the standard techniques for either the single machine scheduling problem with submodular penalties or single machine vector scheduling with rejection may accept many more jobs, or reject many more jobs, than needed. In this paper, according to the characteristics of the set function and the load, we find a balanced relationship between them to design our approximation algorithms.
The remainder of this paper is structured as follows: In Section 2, we provide basic definitions and a formal problem statement. In Section 3, we first prove the hardness result for SMVS with general penalties. Next, we consider a special case of SMVS with general penalties and propose an approximation algorithm. In Section 4, we address the submodular case and propose a noncombinatorial approximation algorithm and a combinatorial approximation algorithm. We provide a brief conclusion in Section 5.

2. Preliminaries

Let J = {J_1, J_2, …, J_n} be a given set of n jobs, and let w(·): 2^J → R_{≥0} be a real-valued set function defined on all subsets of J. A set function w(·) is called nondecreasing if
w(S) ≤ w(T), ∀ S ⊆ T ⊆ J.
A set function w ( · ) is called normalized if
w ( Ø ) = 0 .
In particular, if
w(S) + w(T) ≥ w(S ∪ T) + w(S ∩ T), ∀ S, T ⊆ J,
then set function w ( · ) is submodular; if
w(S) + w(T) = w(S ∪ T) + w(S ∩ T), ∀ S, T ⊆ J,
then set function w ( · ) is modular.
Let
w(J_j | S) = w(S ∪ {J_j}) − w(S)
be the marginal gain of w(·) with respect to J_j and S. If the set function w(·) is nondecreasing, then w(J_j | S) ≥ 0 for all S ⊆ J \ {J_j}. For any S ⊆ T and J_j ∈ J \ T, if w(·) is submodular, then w(J_j | S) ≥ w(J_j | T); if w(·) is modular, then w(J_j | S) = w(J_j | T) = w(J_j | Ø) = w({J_j}).
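For small ground sets, these marginal-gain conditions can be verified exhaustively. The following Python sketch (the example functions are illustrative, not from the paper) checks the decreasing-marginal-gain characterization of submodularity:

```python
from itertools import chain, combinations

def subsets(V):
    """All subsets of the iterable V, as tuples."""
    V = list(V)
    return chain.from_iterable(combinations(V, k) for k in range(len(V) + 1))

def marginal(w, S, j):
    """Marginal gain w(J_j | S) = w(S ∪ {J_j}) − w(S)."""
    return w(frozenset(S) | {j}) - w(frozenset(S))

def is_submodular(w, V):
    """Brute-force check: w(j | S) >= w(j | T) for all S ⊆ T ⊆ V, j ∉ T."""
    for T in map(frozenset, subsets(V)):
        for S in map(frozenset, subsets(T)):
            for j in V - T:
                if marginal(w, S, j) < marginal(w, T, j) - 1e-9:
                    return False
    return True

V = frozenset(range(4))
cover = lambda S: min(len(S), 2)    # truncated cardinality: submodular
square = lambda S: len(S) ** 2      # increasing marginal gains: not submodular
```

Modular functions, such as S ↦ |S|, pass the check with equality everywhere.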
Single machine vector scheduling (SMVS) with general penalties is defined as follows: We are given a single machine and a set of n jobs J = {J_1, J_2, …, J_n}. Each job J_j ∈ J is associated with a d-dimensional vector p_j = (p_j^1, p_j^2, …, p_j^d), where p_j^i ≥ 0 is the amount of resource i needed by job J_j. The penalty set function w(·): 2^J → R_{≥0} is normalized and nondecreasing. A pair (A, R) is called a feasible schedule if A ∪ R = J and A ∩ R = Ø, where A is the set of jobs that are accepted and processed on the machine and R is the set of rejected jobs. The objective value of a feasible schedule (A, R) is defined as l(A) + w(R), where
l(A) = max_i Σ_{j: J_j ∈ A} p_j^i    (1)
is the maximum load of set A J . The objective of SMVS with general penalties is to find a feasible schedule with minimum objective value.
Instead of assuming an explicit description of the function w(·), we access it through an oracle. In other words, for any subset S ⊆ J, w(S) can be computed in polynomial time, where “polynomial” is with respect to the size n.
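Given oracle access to w(·), the objective value l(A) + w(R) of a feasible schedule is straightforward to evaluate. A minimal Python sketch (the three-job instance and the modular penalty oracle below are illustrative) is:

```python
def objective(jobs, accepted, w):
    """Objective l(A) + w(R): maximum load over the d dimensions of the
    total vector of the accepted jobs, plus the oracle penalty of the
    rejected set."""
    d = len(next(iter(jobs.values())))
    loads = [sum(jobs[j][i] for j in accepted) for i in range(d)]
    rejected = frozenset(jobs) - frozenset(accepted)
    return max(loads) + w(rejected)

jobs = {1: (2, 1), 2: (1, 3), 3: (4, 4)}   # d = 2
w = lambda R: 2 * len(R)                   # an illustrative modular penalty oracle
```

For example, accepting jobs 1 and 2 gives the load vector (3, 4) and one rejected job, hence objective value 4 + 2 = 6.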

3. SMVS with General Penalties

In this section, we first prove that set function minimization is a special case of SMVS with general penalties and present a lower bound showing that there is no α(n)-approximation algorithm, where α(n) is any positive polynomial function of n. Next, we address a special case in which p_j^i > 0 for any i ∈ [d] and any J_j ∈ J and the diminishing-return ratio γ of the set function satisfies γ > 0, and we design an approximation algorithm.

3.1. Hardness Result

Before proving that set function minimization is a special case of SMVS with general penalties, we introduce some characteristics of set functions.
Lemma 1
([37]). Any set function f(·) defined on all subsets of V can be expressed as the difference between two normalized nondecreasing submodular functions g(·) and h(·); i.e., f(S) = g(S) − h(S) for any S ⊆ V.
Based on Lemma 1, we have the following:
Lemma 2.
Any set function f(·) defined on all subsets of V can be expressed as the difference between a normalized nondecreasing set function q(·) and a normalized nondecreasing modular function m(·); i.e., f(S) = q(S) − m(S) for any S ⊆ V.
Proof. 
By Lemma 1, we can construct two normalized nondecreasing submodular functions g(·) and h(·) defined on all subsets of V satisfying f(S) = g(S) − h(S) for any S ⊆ V.
Then, we define the set function m ( · ) as follows:
m(S) = 0 if S = Ø, and m(S) = Σ_{v ∈ S} max{0, max_{T ⊆ V\{v}} (h(v | T) − g(v | T))} if Ø ⊂ S ⊆ V.
Thus, for any S ⊆ V \ {v}, we have
m(v | S) = m(S ∪ {v}) − m(S) = max{0, max_{T ⊆ V\{v}} (h(v | T) − g(v | T))} = m({v}) ≥ 0.
This implies that m ( · ) is a normalized nondecreasing modular function. Moreover, let q ( · ) be the set function satisfying q ( S ) = g ( S ) h ( S ) + m ( S ) for any S V . Thus, we have
q(Ø) = g(Ø) − h(Ø) + m(Ø) = 0
and, for any set T and any v V \ T ,
q(v | T) = q(T ∪ {v}) − q(T) = g(v | T) − h(v | T) + m({v}) ≥ 0,
where the inequality follows from m({v}) = max{0, max_{T ⊆ V\{v}} (h(v | T) − g(v | T))} ≥ h(v | T) − g(v | T); i.e., q(·) is a normalized nondecreasing set function. Therefore, the lemma holds. □
For completeness, we provide the formal statement for set function minimization and its hardness result.
Definition 1
(Set Function Minimization). Given an instance (V, f(·)), where V is the ground set and f(·): 2^V → R_{≥0} is a set function defined on all subsets of V, set function minimization aims to find a set S ⊆ V with f(S) ≤ f(T) for any T ⊆ V.
Theorem 1
([37]). Let n (= |V|) be the size of an instance of set function minimization, and let α(n) > 0 be any positive polynomial function of n. Unless P = NP, there cannot exist any α(n)-approximation algorithm for set function minimization.
Then, we obtain the following theorem by proving that the set function minimization can be reduced to a special case of SMVS with general penalties:
Theorem 2.
Unless P = N P , there cannot exist any α ( n ) -approximation algorithm for SMVS with general penalties, where n is the number of jobs and α ( n ) > 0 is any positive polynomial function of n.
Proof. 
Given any instance (V, f(·)) of set function minimization, where V = {v_1, …, v_n}, by Lemma 2, f(·) can be expressed as the difference between a normalized nondecreasing set function q(·) and a normalized nondecreasing modular function m(·) satisfying f(S) = q(S) − m(S) for any S ⊆ V.
We construct a corresponding instance α(V, f(·)) of SMVS with general penalties and 1-dimensional vectors, where the job set is J = {J_1, …, J_n}. For any subset S ⊆ J, let
S_V = {v_j | J_j ∈ S}.    (2)
Furthermore, the penalty set function w ( · ) satisfies
w(S) = q(S_V), ∀ S ⊆ J,
and each job J j J is associated with a 1-dimensional vector
p_j^1 = m({v_j}).
For convenience, let
M = m(V) = Σ_{j: J_j ∈ J} p_j^1
be the value of the ground set V on the modular function m(·); note that m(S_V) = Σ_{j: J_j ∈ S} m({v_j}) = Σ_{j: J_j ∈ S} p_j^1 for any S ⊆ J.
Given a feasible solution S_V ⊆ V of instance (V, f(·)), by equality (2), we can construct a schedule (A, R) of instance α(V, f(·)) with R = S = {J_j | v_j ∈ S_V} and A = J \ S. The objective value Z of (A, R) is
Z = Σ_{j: J_j ∈ A} p_j^1 + w(R) = Σ_{j: J_j ∈ J\S} p_j^1 + q(S_V) = m(V) − m(S_V) + q(S_V) = f(S_V) + M.
Conversely, given a schedule (A, R) of instance α(V, f(·)), by equality (2), we can construct a feasible solution S_V = R_V of instance (V, f(·)). Thus, we have
f(S_V) = q(S_V) − m(S_V) = w(R) − Σ_{j: J_j ∈ R} p_j^1 = w(R) + Σ_{j: J_j ∈ J\R} p_j^1 − Σ_{j: J_j ∈ J} p_j^1 = Z − M,
where Z is the objective value of (A, R) and A = J \ R.
These statements imply that the above two problems are equivalent. By Theorem 1, the theorem holds. □

3.2. Approximation Algorithm for a Special Case

In the following part of this subsection, we assume that p_j^i > 0 for any i ∈ [d] and J_j ∈ J and that the diminishing-return ratio γ of the set function satisfies γ > 0. Using the projected subgradient method (PSM), we can find an approximation schedule (A, R) of SMVS with general penalties whose objective value is no more than (1/γ)·Z* + (1 − 1/r)·L + ε, where Z* is the optimal value of SMVS with general penalties,
r = max_{J_j ∈ J} (max_i p_j^i / min_i p_j^i)    (3)
is the maximum ratio of the maximum load to the minimum load over the d-dimensional vectors,
L = max_i Σ_{j: J_j ∈ J} p_j^i,
and ε > 0 is a given parameter.
The PSM, proposed by Halabi and Jegelka [26], is an approximation algorithm for minimizing the difference between two set functions f_1(·) and f_2(·). The key idea of the PSM is to treat the difference of the two functions approximately as a submodular function, using the Lovász extension to provide (approximate) subgradients. In particular, when f_1(·) is an α-weakly DR-submodular function and f_2(·) is a β-weakly DR-supermodular function, the PSM is an optimal approximation algorithm [26] satisfying the following theorem, where weakly DR-submodular and weakly DR-supermodular functions are described in Definition 2.
Theorem 3
([26]). Given ε > 0 and two nondecreasing set functions f_1, f_2: 2^J → R_{≥0}, where f_1(·) is an α-weakly DR-submodular function and f_2(·) is a β-weakly DR-supermodular function, for minimizing the difference between f_1(·) and f_2(·), the PSM achieves a set S satisfying
f_1(S) − f_2(S) ≤ (1/α)·f_1(S*) − β·f_2(S*) + ε,
where S* is an optimal solution of this problem.
Before introducing the definition of weakly DR-submodular/supermodular functions, we recall the notion of diminishing returns (DR, for short). The diminishing-return property, proposed by Lehmann et al. [38], characterizes submodular functions; i.e., for any submodular function f(·),
f(J_j | S) ≥ f(J_j | T), ∀ S ⊆ T ⊆ J and J_j ∉ T.
The diminishing-return property is generalized to arbitrary set functions through the diminishing-return ratio [28,39]: if α ∈ (0, 1] is the diminishing-return ratio of a set function f(·), then for any S ⊆ T ⊆ J and J_j ∉ T, we have
f(J_j | S) ≥ α · f(J_j | T).    (4)
Such a function is also called α-weakly DR-submodular in [26].
Definition 2
(Weakly DR-submodular and weakly DR-supermodular). A set function f(·): 2^J → R_{≥0} is an α-weakly DR-submodular function, with α > 0, if inequality (4) holds for any S ⊆ T and J_j ∉ T. Similarly, a set function f(·): 2^J → R_{≥0} is a β-weakly DR-supermodular function, with β > 0, if for all S ⊆ T and J_j ∉ T,
f(J_j | T) ≥ β · f(J_j | S).
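On small instances, the largest valid α and β can be computed by brute force over all pairs S ⊆ T. The sketch below is purely illustrative (exponential time, and the helper names are ours):

```python
from itertools import chain, combinations

def subsets(V):
    V = list(V)
    return chain.from_iterable(combinations(V, k) for k in range(len(V) + 1))

def weak_dr_ratios(f, V):
    """Largest alpha with f(j|S) >= alpha*f(j|T) and largest beta with
    f(j|T) >= beta*f(j|S), over all S ⊆ T ⊆ V and j ∉ T."""
    alpha = beta = float("inf")
    for T in map(frozenset, subsets(V)):
        for S in map(frozenset, subsets(T)):
            for j in V - T:
                gS = f(S | {j}) - f(S)   # marginal gain w.r.t. the smaller set
                gT = f(T | {j}) - f(T)   # marginal gain w.r.t. the larger set
                if gT > 0:
                    alpha = min(alpha, gS / gT)
                if gS > 0:
                    beta = min(beta, gT / gS)
    return alpha, beta
```

For a modular function both ratios equal 1; for a submodular function the computed α is at least 1, so its diminishing-return ratio is 1.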
To find an approximation schedule of SMVS with general penalties, we construct an auxiliary set function d(·): 2^J → R_{≥0} as follows: For any S ⊆ J, let
d(S) = max_i Σ_{j: J_j ∈ J} p_j^i − max_i Σ_{j: J_j ∈ J\S} p_j^i = L − l(J \ S),    (5)
where
L = l(J) = max_i Σ_{j: J_j ∈ J} p_j^i
is the maximum load of the ground set J and
l(S) = max_i Σ_{j: J_j ∈ S} p_j^i, ∀ S ⊆ J,
is the same as the definition in (1). Thus, we have the following:
Lemma 3.
The set function d(·) is a normalized nondecreasing (1/r)-weakly DR-supermodular function.
Proof. 
Let
i_S = arg max_i Σ_{j: J_j ∈ J\S} p_j^i, ∀ S ⊆ J,
be a resource index attaining the maximum load of the set J \ S.
By the definition of d(·), for any J_j ∈ J and any T ⊆ J \ {J_j}, we have
d(J_j | T) = d(T ∪ {J_j}) − d(T) = [L − l(J \ (T ∪ {J_j}))] − [L − l(J \ T)] = −Σ_{k: J_k ∈ J\(T∪{J_j})} p_k^{i_{T∪{J_j}}} + Σ_{k: J_k ∈ J\T} p_k^{i_T} ≤ −Σ_{k: J_k ∈ J\(T∪{J_j})} p_k^{i_T} + Σ_{k: J_k ∈ J\T} p_k^{i_T} = p_j^{i_T} ≤ max_i p_j^i,
and
d(J_j | T) = −Σ_{k: J_k ∈ J\(T∪{J_j})} p_k^{i_{T∪{J_j}}} + Σ_{k: J_k ∈ J\T} p_k^{i_T} ≥ −Σ_{k: J_k ∈ J\(T∪{J_j})} p_k^{i_{T∪{J_j}}} + Σ_{k: J_k ∈ J\T} p_k^{i_{T∪{J_j}}} = p_j^{i_{T∪{J_j}}} ≥ min_i p_j^i.
Therefore, for any J_j ∈ J and any T ⊆ J \ {J_j}, we have
min_i p_j^i ≤ d(J_j | T) ≤ max_i p_j^i.    (6)
Since p_j^i > 0 for any i ∈ [d] and J_j ∈ J, we have
d(J_j | T) ≥ min_i p_j^i ≥ (min_i p_j^i / max_i p_j^i) · d(J_j | S) ≥ (1/r) · d(J_j | S),
where the second inequality follows from inequality (6) and the last inequality follows from the definition of r in (3). This implies that d(·) is a (1/r)-weakly DR-supermodular function. By d(Ø) = L − l(J \ Ø) = 0 and d(J_j | T) ≥ min_i p_j^i > 0, the function d(·) is normalized and nondecreasing. Thus, d(·) is a normalized nondecreasing (1/r)-weakly DR-supermodular function. □
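The bounds established in the proof of Lemma 3 can be checked numerically on a small instance. The sketch below (the three-job instance is illustrative) computes d(S) = L − l(J \ S) and verifies that every marginal gain lies between the minimum and maximum coordinate of the added job:

```python
from itertools import chain, combinations

def load(jobs, S):
    """l(S): maximum, over the d dimensions, of the total vector of S."""
    d = len(next(iter(jobs.values())))
    return max(sum(jobs[j][i] for j in S) for i in range(d))

def d_aux(jobs, S):
    """Auxiliary set function d(S) = L - l(J \\ S), with L = l(J)."""
    J = set(jobs)
    return load(jobs, J) - load(jobs, J - set(S))

jobs = {1: (2, 1), 2: (1, 3), 3: (4, 4)}
J = set(jobs)
all_T = chain.from_iterable(combinations(J, k) for k in range(len(J) + 1))
for T in map(set, all_T):
    for j in J - T:
        gain = d_aux(jobs, T | {j}) - d_aux(jobs, T)
        # min_i p_j^i <= d(J_j | T) <= max_i p_j^i
        assert min(jobs[j]) <= gain <= max(jobs[j])
```

Here L = max(7, 8) = 8, and, for example, d({3}) = 8 − l({1, 2}) = 8 − 4 = 4, which indeed lies between min_i p_3^i = 4 and max_i p_3^i = 4.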
For SMVS with general penalties, we design an approximation algorithm based on the PSM; the detailed algorithm is described below.
Let Z be the objective value of the schedule (A, R) generated by Algorithm 1, and let Z* be the optimal value of SMVS with general penalties.
Algorithm 1: AASC
1: Construct the auxiliary set function d(·) defined in (5).
2: Using the PSM, find a subset S minimizing the difference between the functions w(·) and d(·).
3: Let A = J \ S and R = S. Output the feasible schedule (A, R).
Theorem 4.
Z ≤ (1/γ)·Z* + (1 − 1/r)·L + ε.
Proof. 
Let (A*, R*) be an optimal solution of SMVS with general penalties; its objective value is
Z* = max_i Σ_{j: J_j ∈ A*} p_j^i + w(R*) = l(A*) + w(R*) = −(L − l(A*)) + w(R*) + L = w(R*) − d(R*) + L.
This implies that
w(R*) − d(R*) = Z* − L.
Let S* be an optimal solution of the problem of minimizing the difference between w(·) and d(·). Thus, we have
w(S*) − d(S*) ≤ w(R*) − d(R*) = Z* − L.    (7)
By the assumption γ > 0, w(·) is a normalized nondecreasing γ-weakly DR-submodular function. By Lemma 3, d(·) is a normalized nondecreasing (1/r)-weakly DR-supermodular function. Together with Theorem 3, these imply that
w(S) − d(S) ≤ (1/γ)·w(S*) − (1/r)·d(S*) + ε,    (8)
where ε > 0 is a given parameter.
Since A = J \ S and R = S, the objective value of (A, R) is
Z = max_i Σ_{j: J_j ∈ A} p_j^i + w(R) = l(A) + w(R) = −(L − l(A)) + w(R) + L = w(S) − d(S) + L
  ≤ (1/γ)·w(S*) − (1/r)·d(S*) + ε + L
  = (1/γ)·(w(S*) − d(S*)) + (1/γ − 1/r)·(L − l(J \ S*)) + L + ε
  ≤ (1/γ)·(w(R*) − d(R*)) + (1 + 1/γ − 1/r)·L + ε
  = (1/γ)·(Z* − L) + (1 + 1/γ − 1/r)·L + ε
  = (1/γ)·Z* + (1 − 1/r)·L + ε,
where the first inequality follows from inequality (8), and the second inequality follows from inequality (7) and l(S) ≥ 0 for any S ⊆ J. □

4. SMVS with Submodular Penalties

In this section, we consider SMVS with submodular penalties, in which the general penalty set function w(·) is restricted to be submodular. We introduce two types of binary variables:
z_S = 1 if S is the rejected set, and z_S = 0 otherwise,
and
x_j = 1 if J_j is processed on the machine, and x_j = 0 otherwise.
Let A be the set of accepted and processed jobs; the load of resource i of A is
l_i(A) = Σ_{j: J_j ∈ A} p_j^i = Σ_{j: J_j ∈ J} p_j^i · x_j.
Thus, we can formulate SMVS with submodular penalties as a natural integer program and relax the integrality constraints to obtain the following linear program:
min  l(A) + Σ_{R: R ⊆ J} w(R)·z_R
s.t. l_i(A) = Σ_{j: J_j ∈ J} p_j^i · x_j ≤ l(A), ∀ i ∈ [d] = {1, 2, …, d},
     x_j + Σ_{R: J_j ∈ R} z_R ≥ 1, ∀ J_j ∈ J,
     x_j, z_R ∈ [0, 1], ∀ J_j ∈ J, R ⊆ J.    (9)
The second constraint in (9) states that every job J j J must either be processed on the machine or be rejected.
Using the Lovász extension, linear program (9) is equivalent to a convex program that can be solved in polynomial time by the ellipsoid method; let (x̂, ẑ) be the optimal solution of the convex program, where x̂ = {x̂_j}_{j: J_j ∈ J} and ẑ = {ẑ_R}_{R: R ⊆ J}. Inspired by the general rounding framework of Li et al. [40], given a constant α ∈ (0, 1), we draw a threshold β uniformly at random from [0, α]. Then, we construct a feasible pair (Â, R̂), where Â = {J_j ∈ J | x̂_j > 1 − β} is the accepted set and R̂ = J \ Â is the rejected set.
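The rounding step itself is a one-liner once the fractional solution is available. A hedged Python sketch (the fractional values fed to it are made up, not from an actual convex program) is:

```python
import random

E = 2.718281828459045

def threshold_round(x_hat, alpha=1 - 1 / E):
    """Draw beta uniformly from [0, alpha] and accept exactly the jobs
    whose fractional value x_hat_j exceeds 1 - beta; reject the rest."""
    beta = random.uniform(0.0, alpha)
    accepted = {j for j, xj in x_hat.items() if xj > 1 - beta}
    return accepted, set(x_hat) - accepted
```

Jobs with x̂_j ≤ 1 − α can never be accepted; with α = 1 − 1/e, the expected cost is within e/(e − 1) of the relaxation's optimum, as Lemma 5 below shows.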
Similar to the proof of Lemma 2.2 in Li et al. [40], we have the following:
Lemma 4.
The expected rejection penalty of the pair (Â, R̂) is no more than
(1/α) · Σ_{R: R ⊆ J} w(R) · ẑ_R.
Based on Lemma 4, we have the following:
Lemma 5.
The expected cost of the pair (Â, R̂) is no more than
(e/(e − 1)) · Z*,
where Z* is the optimal value of SMVS with submodular penalties.
Proof. 
For any job J_j ∈ J, if x̂_j > 1 − β, then J_j is accepted and processed by the machine; otherwise, J_j is rejected. Let l_i(Â) = Σ_{j: J_j ∈ Â} p_j^i be the load of resource i in the pair (Â, R̂). Therefore, for every resource i = 1, 2, …, d, we have
E(l_i(Â)) = E(Σ_{j: J_j ∈ Â} p_j^i · 1) + E(Σ_{j: J_j ∈ R̂} p_j^i · 0) ≤ E(Σ_{j: J_j ∈ Â} p_j^i · x̂_j/(1 − β)) + E(Σ_{j: J_j ∈ R̂} p_j^i · x̂_j/(1 − β)) = Σ_{j: J_j ∈ J} p_j^i · x̂_j · E(1/(1 − β)) = l̂_i · ∫_0^α (1/(1 − β)) · (1/α) dβ = (1/α)·ln(1/(1 − α)) · l̂_i,
where the inequality follows from x̂_j > 1 − β for each J_j ∈ Â and x̂_j/(1 − β) ≥ 0 for each J_j ∈ R̂, and l̂_i = Σ_{j: J_j ∈ J} p_j^i · x̂_j is the load of resource i in the solution (x̂, ẑ).
This statement and Lemma 4 imply that the expected cost of the pair (Â, R̂) is
E(max_i l_i(Â)) + E(w(R̂)) ≤ (1/α)·ln(1/(1 − α)) · max_i l̂_i + (1/α) · Σ_{R: R ⊆ J} w(R)·ẑ_R ≤ max{(1/α)·ln(1/(1 − α)), 1/α} · (max_i l̂_i + Σ_{R: R ⊆ J} w(R)·ẑ_R) ≤ (e/(e − 1)) · Z*,
where the last inequality follows from the fact that (x̂, ẑ) is the optimal solution of the convex program and α = 1 − 1/e. □
Now, we present a noncombinatorial e/(e − 1)-approximation algorithm for SMVS with submodular penalties. Without loss of generality, assume that ẑ_1 ≥ ẑ_2 ≥ … ≥ ẑ_n. For any i ∈ {0, 1, …, n}, we construct a feasible schedule (A_i, R_i), where R_i = {J_j ∈ J | j ≤ i} is the rejected set and A_i = J \ R_i is the accepted set. We select from these n + 1 schedules the one with the smallest objective value. By Lemma 5, we have the following:
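The selection step can be sketched as follows; here `cost(A, R)` is a hypothetical evaluator for l(A) + w(R), and the jobs are assumed to be indexed in the nonincreasing order above (the 1-dimensional instance and modular penalty are illustrative):

```python
def best_prefix_schedule(jobs_sorted, cost):
    """Enumerate the n + 1 prefix rejection sets R_i and return the
    feasible schedule (A_i, R_i) with the smallest objective value."""
    best = None
    for i in range(len(jobs_sorted) + 1):
        R = set(jobs_sorted[:i])        # reject the first i jobs
        A = set(jobs_sorted[i:])        # accept the rest
        value = cost(A, R)
        if best is None or value < best[0]:
            best = (value, A, R)
    return best

# Illustrative 1-dimensional instance with a modular penalty:
p = {1: 5, 2: 1, 3: 1}
cost = lambda A, R: sum(p[j] for j in A) + 2 * len(R)
```

On this toy instance, rejecting only job 1 (load 5, penalty 2) is the cheapest of the four candidate schedules.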
Theorem 5.
There is a noncombinatorial e/(e − 1)-approximation algorithm for SMVS with submodular penalties.
Since this algorithm needs to solve convex programs, it is noncombinatorial; it is hard to implement and has a high time complexity. Therefore, we also present a combinatorial approximation algorithm for this problem.
As above, let
r = max_{J_j ∈ J} (max_i p_j^i / min_i p_j^i)
be the maximum ratio of the maximum load to the minimum load over the d-dimensional vectors. In particular, we set r = +∞ if min_i p_j^i = 0 for some job J_j.
We construct an auxiliary processing time p_j for each job J_j ∈ J, where
p_j = min_i p_j^i if r < d, and p_j = Σ_{i=1}^d p_j^i otherwise.
We introduce an auxiliary “dual” variable y_j for each job J_j ∈ J to determine whether job J_j is accepted or rejected. Let F be the frozen set. Initially, F = Ø and y_j = 0 for each job J_j ∈ J. The variables {y_j}_{j: J_j ∈ J\F} increase simultaneously until either some set or some job becomes tight, where a set S is tight if Σ_{j: J_j ∈ S} y_j = w(S) and a job J_j is tight if y_j = p_j. If a set becomes tight, it is added to the rejected set R and to the frozen set F; otherwise, the tight job is added to the frozen set F. The variables of jobs in the frozen set F no longer increase, and the process is iterated until all jobs are frozen. Finally, let A = J \ R be the accepted set and output the pair (A, R). The detailed algorithm is described below.
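The freezing process can be sketched directly in Python. This is only an illustration for tiny instances: the tight-set search is done by brute force over all subsets instead of the polynomial-time submodular minimization used in the paper, and the instance and penalty function in the test are made up.

```python
from itertools import chain, combinations

def cs_msp(p, w, eps=1e-9):
    """Primal-dual freezing process: raise the duals y_j of all unfrozen
    jobs together; a job freezes when y_j = p_j, and a set freezes (and
    is rejected) when sum_{J_j in S} y_j = w(S)."""
    J = list(p)
    y = {j: 0.0 for j in J}
    F, R = set(), set()
    all_sets = lambda: chain.from_iterable(
        combinations(J, k) for k in range(1, len(J) + 1))
    while F != set(J):
        free = [j for j in J if j not in F]
        d1 = min(p[j] - y[j] for j in free)          # slack until a job is tight
        d2 = float("inf")                            # slack until a set is tight
        for S in map(set, all_sets()):
            k = len(S - F)
            if k:  # only sets with unfrozen jobs can still become tight
                d2 = min(d2, (w(frozenset(S)) - sum(y[j] for j in S)) / k)
        delta = min(d1, d2)
        for j in free:
            y[j] += delta
        if d1 <= d2:                                 # a job became tight: freeze it
            F |= {j for j in free if y[j] >= p[j] - eps}
        else:                                        # a set became tight: reject and freeze it
            for S in map(set, all_sets()):
                if sum(y[j] for j in S) >= w(frozenset(S)) - eps:
                    F |= S
                    R |= S
    return set(J) - R, R
```

On a two-job instance with p = {a: 3, b: 1} and penalty w(S) = 2|S|, the process rejects the expensive job a and accepts b.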
Lemma 6.
Algorithm 2 can be implemented in polynomial time.
Proof. 
Considering line 3 of Algorithm 2, for any F ⊆ J, we have |J \ F| ≤ n. This implies that the value of Δ_1 can be found in linear time.
Then, for any F ⊆ J, we analyze the time needed to compute Δ_2. Let y(S) = −Σ_{j: J_j ∈ S∩F} y_j and k(S) = |S \ F| for any subset S with S \ F ≠ Ø; it is easy to verify that y(·) and k(·) are modular functions. Since w(·) is a submodular function, w(S) + y(S) is also submodular. Therefore, for any F ⊆ J, the value of
Δ_2 = min_{S: S ⊆ J, S\F ≠ Ø} (w(S) − Σ_{j: J_j ∈ S∩F} y_j) / |S \ F| = min_{S: S ⊆ J, S\F ≠ Ø} (w(S) + y(S)) / k(S)
can be computed in polynomial time using the combinatorial algorithm proposed by Fleischer and Iwata [41] for minimizing the ratio of a submodular function to a modular function.
Therefore, Δ_1 and Δ_2 can be computed in polynomial time in each iteration of the while loop. In each iteration, if Δ_1 ≤ Δ_2, one job is added to F; otherwise, at least one job is added to F. This means that the number of iterations of the while loop is at most n. Thus, Algorithm 2 can be implemented in polynomial time. □
Algorithm 2: CSMSP
1: Initialize F ← Ø, R ← Ø, and y_j ← 0 for each J_j ∈ J.
2: while F ≠ J do
3:   Compute Δ_1 = min_{J_j ∈ J\F} (p_j − y_j).
4:   Compute Δ_2 = min_{S: S ⊆ J, S\F ≠ Ø} (w(S) − Σ_{j: J_j ∈ S∩F} y_j)/|S \ F|.
5:   if Δ_1 ≤ Δ_2 then y_j ← y_j + Δ_1 for each J_j ∈ J \ F, and add one tight job to F;
6:   else
7:     y_j ← y_j + Δ_2 for each J_j ∈ J \ F, and add a tight set T to both R and F.
8: Output the feasible schedule (A, R) with A = J \ R.
For any job J_j ∈ J, the variable y_j increases from 0 until job J_j is added to the frozen set. If job J_j is added to the frozen set because Δ_1 ≤ Δ_2 in line 5, the variable y_j is fixed at p_j. Thus, we have
y_j ≤ p_j, ∀ J_j ∈ J;    (11)
otherwise, the variable y_j is fixed by some set T ⊆ J in line 7, and the variables of all jobs in T are fixed at the same time. Thus, we have
Σ_{j: J_j ∈ S} y_j ≤ w(S), ∀ S ⊆ J.    (12)
In particular, for any job J_j ∈ A = J \ R and any set S with S ∩ {J_j} ≠ Ø, we have Σ_{k: J_k ∈ S} y_k < w(S); otherwise, job J_j would have been added to the rejected set R in line 7. This implies that job J_j is added to the frozen set in line 5 and
y_j = p_j, ∀ J_j ∈ A.    (13)
By inequalities (11) and (12) and equality (13), we have the following:
Lemma 7.
The rejected job set R produced by Algorithm 2 satisfies
w(R) = Σ_{j: J_j ∈ R} y_j.
Proof. 
Consider any two different subsets S_1 and S_2 generated in line 7; we have
w(S_1) = Σ_{j: J_j ∈ S_1} y_j and w(S_2) = Σ_{j: J_j ∈ S_2} y_j.
Therefore, we have
Σ_{j: J_j ∈ S_1∪S_2} y_j = Σ_{j: J_j ∈ S_1} y_j + Σ_{j: J_j ∈ S_2} y_j − Σ_{j: J_j ∈ S_1∩S_2} y_j = w(S_1) + w(S_2) − Σ_{j: J_j ∈ S_1∩S_2} y_j ≥ w(S_1∪S_2) + w(S_1∩S_2) − Σ_{j: J_j ∈ S_1∩S_2} y_j ≥ w(S_1∪S_2),
where the first inequality follows from the submodularity of w(·) and the second inequality follows from inequality (12). By inequality (12), we also have Σ_{j: J_j ∈ S_1∪S_2} y_j ≤ w(S_1∪S_2). These imply that
Σ_{j: J_j ∈ S_1∪S_2} y_j = w(S_1∪S_2).
Since R is the union of all subsets generated in line 7, the lemma follows. □
Let Z = max_i Σ_{j: J_j ∈ A} p_j^i + w(R) be the objective value of the schedule (A, R) generated by Algorithm 2, and let Z* be the optimal value of SMVS with submodular penalties.
Theorem 6.
Z ≤ min{r, d} · Z*, where r = max_{J_j ∈ J} (max_i p_j^i / min_i p_j^i).
Proof. 
Let (A*, R*) be an optimal solution of SMVS with submodular penalties; its objective value is
Z* = max_i Σ_{j: J_j ∈ A*} p_j^i + w(R*).
By Lemma 7, we have
w(R) = Σ_{j: J_j ∈ R} y_j = Σ_{j: J_j ∈ R∩A*} y_j + Σ_{j: J_j ∈ R∩R*} y_j ≤ Σ_{j: J_j ∈ R∩A*} p_j + Σ_{j: J_j ∈ R∩R*} y_j,
where the inequality follows from inequality (11). By equality (13), we have
Σ_{j: J_j ∈ A} p_j = Σ_{j: J_j ∈ A∩A*} p_j + Σ_{j: J_j ∈ A∩R*} p_j = Σ_{j: J_j ∈ A∩A*} p_j + Σ_{j: J_j ∈ A∩R*} y_j.
Thus, we have
Σ_{j: J_j ∈ A} p_j + w(R) ≤ Σ_{j: J_j ∈ A*} p_j + Σ_{j: J_j ∈ R*} y_j.    (14)
Case 1. If r < d, then p_j = min_i p_j^i. The objective value of (A, R) satisfies
Z = max_i Σ_{j: J_j ∈ A} p_j^i + w(R) ≤ Σ_{j: J_j ∈ A} r · min_i p_j^i + w(R) = r · Σ_{j: J_j ∈ A} p_j + w(R) ≤ r · (Σ_{j: J_j ∈ A} p_j + w(R)) ≤ r · (Σ_{j: J_j ∈ A*} p_j + Σ_{j: J_j ∈ R*} y_j) ≤ r · (Σ_{j: J_j ∈ A*} min_i p_j^i + w(R*)) ≤ r · (max_i Σ_{j: J_j ∈ A*} p_j^i + w(R*)) = r · Z* = min{r, d} · Z*,
where the first inequality follows from the definition of r; the second inequality follows from r ≥ 1 and the nonnegativity of the penalty; the third inequality follows from inequality (14); the fourth inequality follows from inequality (12); and the last inequality follows from p_j^i ≥ min_i p_j^i.
Case 2: $d \le r$. In this case, $p_j = \sum_{i=1}^d p_j^i$, and the objective value of $(A, R)$ satisfies
$$\begin{aligned} Z = \max_i \sum_{j: J_j \in A} p_j^i + w(R) &\le \sum_{i=1}^d \sum_{j: J_j \in A} p_j^i + w(R) = \sum_{j: J_j \in A} p_j + w(R) \\ &\le \sum_{j: J_j \in A^*} p_j + \sum_{j: J_j \in R^*} y_j = \sum_{j: J_j \in A^*} \sum_{i=1}^d p_j^i + \sum_{j: J_j \in R^*} y_j \\ &\le \sum_{j: J_j \in A^*} \sum_{i=1}^d p_j^i + w(R^*) \le d \cdot \Big( \max_i \sum_{j: J_j \in A^*} p_j^i + w(R^*) \Big) \\ &= d \cdot Z^* = \min\{r, d\} \cdot Z^*, \end{aligned}$$
where the second inequality follows from inequality (14), the third follows from inequality (12), and the last follows from $\sum_{i=1}^d \sum_{j: J_j \in A^*} p_j^i \le d \cdot \max_i \sum_{j: J_j \in A^*} p_j^i$ and $d \ge 1$.
Therefore, the theorem holds. □
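The quantities in Theorem 6 can be illustrated numerically. The sketch below uses a hypothetical three-job instance with $d = 2$ and a modular (hence submodular) penalty, chosen only for illustration; it computes $r$, the exact optimum $Z^*$ by enumerating all accept/reject partitions, and the bound $\min\{r, d\} \cdot Z^*$ that the output of Algorithm 2 is guaranteed to meet.

```python
from itertools import chain, combinations

# Hypothetical toy instance: three jobs with d = 2 dimensional load vectors
# and a modular (hence submodular) rejection penalty w(S) = sum of c_j.
p = {1: (2, 1), 2: (1, 3), 3: (4, 2)}
c = {1: 2, 2: 2, 3: 3}
d = 2

def w(S):
    return sum(c[j] for j in S)

def objective(A):
    # Maximum load over all dimensions of the accepted jobs, plus the
    # rejection penalty of the remaining jobs.
    R = set(p) - set(A)
    load = max(sum(p[j][i] for j in A) for i in range(d)) if A else 0
    return load + w(R)

def subsets(U):
    return chain.from_iterable(combinations(U, k) for k in range(len(U) + 1))

# Exact optimum Z* by enumerating all accept/reject partitions.
Z_opt = min(objective(set(A)) for A in subsets(set(p)))

# r is the largest ratio of maximum to minimum load over a job's dimensions.
r = max(max(v) / min(v) for v in p.values())
bound = min(r, d) * Z_opt
print(f"Z* = {Z_opt}, r = {r}, d = {d}, guarantee = {bound}")

# On this instance, even the trivial accept-all schedule meets the bound.
assert objective(set(p)) <= bound
```

Brute-force enumeration is exponential in the number of jobs, so it only serves to check small instances; the point of the combinatorial algorithm is to achieve the $\min\{r, d\}$ factor without it.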

5. Conclusions

In this paper, we investigate the problem of SMVS with general penalties and SMVS with submodular penalties.
For SMVS with general penalties, we first present a lower bound showing that no $\alpha(n)$-approximation algorithm exists, where $\alpha(n)$ is any positive polynomial function of $n$. Then, using the projected subgradient method (PSM), we design an approximation algorithm for the case in which $\gamma > 0$ and $p_j^i > 0$ for any $J_j \in J$ and $i \in [d]$. This algorithm finds a schedule $(A, R)$ satisfying $Z \le \frac{1}{\gamma} \cdot Z^* + (1 - \frac{1}{r}) \cdot L + \varepsilon$, where $Z$ and $Z^*$ are the objective value of $(A, R)$ and the optimal value, respectively; $\gamma$ is the diminishing-return ratio of the set function $w(\cdot)$; $r = \max_{J_j \in J} \frac{\max_i p_j^i}{\min_i p_j^i}$; $L = \max_i \sum_{J_j \in J} p_j^i$; and $\varepsilon > 0$ is a given parameter.
For SMVS with submodular penalties, we first design a noncombinatorial $\frac{e}{e-1}$-approximation algorithm and then propose a combinatorial $\min\{r, d\}$-approximation algorithm, where $r = \max_{J_j \in J} \frac{\max_i p_j^i}{\min_i p_j^i}$; in most instances, $\min\{r, d\} = d$.
For SMVS with linear penalties, an EPTAS is known. For SMVS with submodular penalties, it remains open whether an EPTAS or an otherwise better algorithm can be designed, and finding a stronger lower bound is another interesting direction. Moreover, the assumptions we impose for SMVS with general penalties may not hold in practice, so designing approximation algorithms under more realistic assumptions is important.
Furthermore, when there are multiple machines rather than a single machine, algorithms for this setting could be further developed.

Author Contributions

Conceptualization, X.L. and W.L.; methodology, W.L.; software, X.L. and Y.Z.; validation, X.L., Y.Z. and W.L.; formal analysis, X.L.; investigation, W.L.; resources, W.L.; data curation, X.L. and Y.Z.; writing—original draft preparation, X.L.; writing—review and editing, W.L.; visualization, X.L.; supervision, W.L.; project administration, W.L.; funding acquisition, W.L. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported in part by the National Natural Science Foundation of China (No. 61662088), Program for Excellent Young Talents of Yunnan University, Training Program of National Science Fund for Distinguished Young Scholars, IRTSTYN, and Key Joint Project of the Science and Technology Department of Yunnan Province and Yunnan University (No. 2018FY001(-014)).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Table 1. The results in this paper.

| Case | Setting | Our Result |
|---|---|---|
| SMVS with general penalties | rejection set function is normalized and nondecreasing | lower bound is $\alpha(n)$ |
| SMVS with general penalties | $p_j^i > 0$ for any dimension $i$ of any job $J_j$, and the diminishing-return ratio $\gamma > 0$ | $Z \le \frac{1}{\gamma} \cdot Z^* + (1 - \frac{1}{r}) \cdot L + \varepsilon$ (combinatorial algorithm) |
| SMVS with submodular penalties | rejection set function is submodular | $\frac{e}{e-1}$ (noncombinatorial algorithm); $\min\{r, d\}$ (combinatorial algorithm) |

Liu, X.; Li, W.; Zhu, Y. Single Machine Vector Scheduling with General Penalties. Mathematics 2021, 9, 1965. https://doi.org/10.3390/math9161965
