Mathematics · Article · Open Access
20 July 2023

Study on Convex Resource Allocation Scheduling with a Time-Dependent Learning Effect

School of Science, Shenyang Aerospace University, Shenyang 110136, China
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Systems Engineering, Control, and Automation

Abstract

In classical scheduling problems, the actual processing time of a job is a fixed constant; in real production processes, however, the processing time of a job is affected by a variety of factors, two of which are the learning effect and resource allocation. In this paper, single-machine scheduling problems with resource allocation and a time-dependent learning effect are investigated. The actual processing time of a job depends on the sum of the normal processing times of the previous jobs and on the amount of a non-renewable resource allocated to it. Under a convex resource consumption function, the goal is to determine the optimal schedule and the optimal resource allocation. Three problems arising from two criteria (i.e., the total resource consumption cost and the scheduling cost) are studied. For some special cases of the problems, we prove that they can be solved in polynomial time. More generally, we propose some exact and intelligent algorithms to solve these problems.

1. Introduction

In many real-world industrial processes, job (task) processing times may be variable due to learning effects or resource allocation. Learning effects appear in, for example, the way that workers’ repeated processing of similar jobs improves their skills (see Azzouz et al. [1], Sun et al. [2], Zhao [3], Wang et al. [4], Chen et al. [5], Ren et al. [6], Wang et al. [7]). The processing times of jobs can be controlled by allocating a common limited resource, such as fuel, the financial budget, energy, or manpower (see Guan et al. [8], Wang and Cheng [9], Shabtay and Steiner [10], Zhang et al. [11], Wang et al. [12], Wang et al. [13], Liu and Wang [14]).
In addition, in many real-life situations, the simultaneous occurrence of learning effects and resource allocation can be found; e.g., in the chemical industry. Recently, Lu et al. [15] explored single-machine scheduling with learning effects, group technology, and resource allocation. The objective was to minimize the makespan subject to limited resource availability. To solve the problem, the authors proposed heuristic and branch-and-bound algorithms. Wang et al. [16] studied the resource allocation scheduling problem with learning and deterioration effects with a single machine. For linear resource allocation, they showed that some regular objective function minimizations can be solved in polynomial time. Liu and Jiang [17] considered due date assignment scheduling problems with learning effects and resource allocation. Zhao [18] addressed due window assignment flow shop scheduling problems with learning effects and resource allocation. Wang et al. [19] considered single-machine resource allocation scheduling with truncated learning effects. For the scheduling cost (i.e., the total weighted completion time) and total resource consumption cost, they provided a bicriteria analysis. They proved that some special cases of the problem are solvable in polynomial time. To solve the problem more generally, they proposed a heuristic and a branch-and-bound algorithm. Yan et al. [20] studied single-machine group scheduling with resource allocation and learning effects. For the minimization of the total completion time subject to limited resource availability, they proposed heuristic, tabu search, and branch-and-bound algorithms.
Biskup [21] considered the position-dependent learning effect, in which the actual processing time of job $\dot J_j$ in position $r$ is $p_{jr}^{A} = \bar p_j r^{\alpha}$, where $\bar p_j$ is the normal processing time of job $\dot J_j$ and $\alpha \le 0$ is the learning factor. Kuo and Yang [22] studied the time-dependent learning effect, i.e., $p_{jr}^{A} = \bar p_j \left(1+\sum_{h=1}^{r-1}\bar p_{[h]}\right)^{\alpha}$, where $\alpha \le 0$ is the learning factor and $[h]$ denotes the job scheduled in the $h$th position. Wang et al. [23] studied the following model: $p_{jr}^{A}(\tilde u_j) = \left(\bar p_j r^{a}/\tilde u_j\right)^{\beta}$, where $\beta > 0$ is a given constant and $\tilde u_j$ is the resource allocated to job $\dot J_j$. The above articles considered resource allocation scheduling problems with position-dependent learning effects. However, in general, there are two approaches to modeling the learning effect: one is the position-dependent learning effect, and the other is the time-dependent (sum-of-processing-times) learning effect (Azzouz et al. [1]). Hence, in this paper, the work on resource allocation scheduling is continued by studying the time-dependent learning effect (see Table 1). This paper's contributions and novelties are as follows:
Table 1. Models studied.
  • Single-machine scheduling with convex resource allocation and a time-dependent learning effect is modeled and studied;
  • The solution algorithms for three versions of the total resource consumption cost and the scheduling cost are presented;
  • The computational results of the proposed algorithms are analyzed.
The paper is organized as follows. Section 2 formulates the model. Section 3 gives the basic properties of the problems. Section 4 describes the solution algorithms developed to solve the problems. Section 5 focuses on the computational experiments with the algorithms. Section 6 presents the conclusions.

2. Problem Statement

In this paper, the notations used are listed in Table 2, and we consider the problem of scheduling $\check n$ jobs $\dot J_1, \dot J_2, \ldots, \dot J_{\check n}$ on a single machine, with all the jobs available at time 0. If the job schedule is $(\dot J_1, \dot J_2, \dot J_3, \ldots, \dot J_{\check n})$, then the Gantt chart of the single-machine scheduling is as in Figure 1:
Table 2. Notations.
Figure 1. Gantt chart of the problem.
In this paper, the model we consider is given as follows:

$p_{jr}^{A}(\tilde u_j) = \left(\frac{\bar p_j \left(1+\sum_{h=1}^{r-1}\bar p_{[h]}\right)^{\alpha}}{\tilde u_j}\right)^{\beta},$   (1)

where $\alpha \le 0$ is the learning factor and $\beta > 0$ is a given constant. The goal of this article is to determine the optimal schedule and the optimal resource allocation. The first problem of this paper is to minimize
$F(\tilde u_{[j]}) = \delta \sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A} + \eta \sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]}.$   (2)
Using the three-field notation, the first problem (denoted by $\bar P_1$) can be denoted as

$1 \,\Big|\, p_{jr}^{A}(\tilde u_j) = \left(\bar p_j \left(1+\sum_{h=1}^{r-1}\bar p_{[h]}\right)^{\alpha}\Big/\tilde u_j\right)^{\beta} \,\Big|\, \delta \sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A} + \eta \sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]}.$   (3)
The second problem is to minimize $\sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A}$ subject to the constraint that the total resource consumption cost cannot exceed an upper bound, i.e., $\sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]} \le \breve U$. This problem (denoted by $\bar P_2$) is

$1 \,\Big|\, p_{jr}^{A}(\tilde u_j) = \left(\bar p_j \left(1+\sum_{h=1}^{r-1}\bar p_{[h]}\right)^{\alpha}\Big/\tilde u_j\right)^{\beta},\; \sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]} \le \breve U \,\Big|\, \sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A},$   (4)
where $\breve U > 0$ is an upper bound on $\sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]}$. The last problem is the complementary problem of $\bar P_2$; i.e., the third problem (denoted by $\bar P_3$) is

$1 \,\Big|\, p_{jr}^{A}(\tilde u_j) = \left(\bar p_j \left(1+\sum_{h=1}^{r-1}\bar p_{[h]}\right)^{\alpha}\Big/\tilde u_j\right)^{\beta},\; \sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A} \le \breve V \,\Big|\, \sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]},$   (5)
where $\breve V > 0$ is an upper bound on $\sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A}$.
Obviously, special cases of the scheduling cost $\sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A}$ include the following:
  • the makespan of all jobs, $C_{\max} = \sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A}$, where $\vartheta_j = 1$;
  • the total completion time, $\sum_{j=1}^{\check n} C_j = \sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A}$, where $\vartheta_j = \check n - j + 1$;
  • the total absolute differences in completion times, $TADC = \sum_{i=1}^{\check n}\sum_{j=i}^{\check n} |C_i - C_j| = \sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A}$, where $\vartheta_j = (j-1)(\check n - j + 1)$ (see Kanet [24]);
  • the total absolute differences in waiting times, $TADW = \sum_{i=1}^{\check n}\sum_{j=i}^{\check n} |W_i - W_j| = \sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A}$, where $\vartheta_j = j(\check n - j)$ and $W_j = C_j - p_j^{A}$ is the waiting time of job $\dot J_j$ (see Bagchi [25]).
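The positional-weight identities above are easy to check numerically. The sketch below (Python, with hypothetical processing-time data) computes each objective directly from completion times and compares it with the weighted sum $\sum_j \vartheta_j p_{[j]}$; the weight formulas are the ones stated above, shifted to 0-based indexing.

```python
def completion_times(p):
    """Completion times when jobs run consecutively from time 0."""
    C, t = [], 0.0
    for pj in p:
        t += pj
        C.append(t)
    return C

def weighted_cost(p, theta):
    """Scheduling cost sum_j theta_j * p_[j] for positional weights theta."""
    return sum(tj * pj for tj, pj in zip(theta, p))

def check_positional_weights(p):
    """Verify the four weight choices against the directly computed objectives."""
    n = len(p)
    C = completion_times(p)
    W = [C[j] - p[j] for j in range(n)]                  # waiting times W_j = C_j - p_j
    tadc = sum(abs(C[i] - C[j]) for i in range(n) for j in range(i, n))
    tadw = sum(abs(W[i] - W[j]) for i in range(n) for j in range(i, n))
    # positional weights; the paper's j is 1-based, Python's is 0-based
    th_cmax = [1.0] * n                                   # theta_j = 1
    th_tct = [float(n - j) for j in range(n)]             # theta_j = n - j + 1
    th_tadc = [float(j * (n - j)) for j in range(n)]      # theta_j = (j-1)(n-j+1)
    th_tadw = [float((j + 1) * (n - j - 1)) for j in range(n)]  # theta_j = j(n-j)
    assert abs(weighted_cost(p, th_cmax) - C[-1]) < 1e-9
    assert abs(weighted_cost(p, th_tct) - sum(C)) < 1e-9
    assert abs(weighted_cost(p, th_tadc) - tadc) < 1e-9
    assert abs(weighted_cost(p, th_tadw) - tadw) < 1e-9
    return True
```

The check works for any list of (actual) processing times, since the identities are purely positional.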

3. Basic Properties

In this section, some lemmas are given, and the optimal resource allocations of the three problems mentioned above are derived.

3.1. Problem P 1 ¯

Lemma 1.
For the problem $\bar P_1$, the optimal resource allocation is a function of the job schedule, i.e.,

$\tilde u_{[j]}^{*} = \left(\frac{\delta \beta \vartheta_j}{\eta g_{[j]}}\right)^{\frac{1}{1+\beta}} \left(\bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{1+\beta}}.$   (6)
Proof. 
For any fixed job schedule $\bar\psi = (\dot J_{[1]}, \dot J_{[2]}, \ldots, \dot J_{[\check n]})$, from Equations (1) and (2), we have

$F(\tilde u_{[j]}) = \delta \sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A} + \eta \sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]} = \delta \sum_{j=1}^{\check n} \vartheta_j \left(\frac{\bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}}{\tilde u_{[j]}}\right)^{\beta} + \eta \sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]}.$   (7)
Taking the derivative of Equation (7) with respect to $\tilde u_{[j]}$ and setting it equal to zero, we have

$\frac{\partial F(\tilde u_{[j]})}{\partial \tilde u_{[j]}} = -\delta \beta \vartheta_j \left(\bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\beta} \tilde u_{[j]}^{-(\beta+1)} + \eta g_{[j]} = 0.$   (8)
From Equation (8), we have

$\tilde u_{[j]}^{*} = \left(\frac{\delta \beta \vartheta_j}{\eta g_{[j]}}\right)^{\frac{1}{1+\beta}} \left(\bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{1+\beta}},$

i.e., Equation (6) holds.    □
By substituting Equation (6) into Equation (7), we have

$F(\tilde u_{[j]}^{*}) = \left(\beta^{\frac{1}{\beta+1}} + \beta^{-\frac{\beta}{\beta+1}}\right) \delta^{\frac{1}{\beta+1}} \eta^{\frac{\beta}{\beta+1}} \sum_{j=1}^{\check n} (\vartheta_j)^{\frac{1}{1+\beta}} \left(g_{[j]} \bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{1+\beta}}.$   (9)
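As a numerical sanity check on Lemma 1, the sketch below (hypothetical parameter values, not from the paper's experiments) evaluates the allocation of Equation (6), confirms that the resulting cost $F$ of Equation (7) matches the closed form (9), and checks that a perturbed allocation does no better.

```python
def P_factor(p, alpha, j):
    """bar p_[j] * (1 + sum_{h<j} bar p_[h])^alpha for 0-based position j."""
    return p[j] * (1.0 + sum(p[:j])) ** alpha

def u_star(p, theta, g, alpha, beta, delta, eta):
    """Optimal resource allocation of Equation (6) for a fixed sequence."""
    return [
        (delta * beta * theta[j] / (eta * g[j])) ** (1.0 / (1.0 + beta))
        * P_factor(p, alpha, j) ** (beta / (1.0 + beta))
        for j in range(len(p))
    ]

def cost_F(p, theta, g, alpha, beta, delta, eta, u):
    """F of Equation (7): weighted scheduling cost plus resource cost."""
    return sum(
        delta * theta[j] * (P_factor(p, alpha, j) / u[j]) ** beta
        + eta * g[j] * u[j]
        for j in range(len(p))
    )

def cost_F_closed(p, theta, g, alpha, beta, delta, eta):
    """Closed form (9), obtained by substituting (6) into (7)."""
    coef = beta ** (1.0 / (beta + 1.0)) + beta ** (-beta / (beta + 1.0))
    coef *= delta ** (1.0 / (beta + 1.0)) * eta ** (beta / (beta + 1.0))
    return coef * sum(
        theta[j] ** (1.0 / (1.0 + beta))
        * (g[j] * P_factor(p, alpha, j)) ** (beta / (1.0 + beta))
        for j in range(len(p))
    )
```

Since $F$ is strictly convex in each $\tilde u_{[j]}$, any deviation from the allocation of Equation (6) strictly increases the cost, which the perturbation check below exploits.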

3.2. Problem P 2 ¯

Lemma 2.
For the problem $\bar P_2$, the optimal resource allocation is a function of the job schedule, i.e.,

$\tilde u_{[j]}^{*} = \frac{(\vartheta_j)^{\frac{1}{\beta+1}} \left(\bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{\beta+1}}}{(g_{[j]})^{\frac{1}{\beta+1}} \sum_{k=1}^{\check n} (\vartheta_k)^{\frac{1}{\beta+1}} \left(g_{[k]} \bar p_{[k]} \left(1+\sum_{h=1}^{k-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{\beta+1}}} \times \breve U, \quad j = 1, 2, \ldots, \check n.$   (10)
Proof. 
For any fixed job schedule $\bar\psi = (\dot J_{[1]}, \dot J_{[2]}, \ldots, \dot J_{[\check n]})$, the Lagrangian function is

$\tilde L(\tilde u_{[j]}, \tilde\lambda) = \sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A} + \tilde\lambda \left(\sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]} - \breve U\right) = \sum_{j=1}^{\check n} \vartheta_j \left(\frac{\bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}}{\tilde u_{[j]}}\right)^{\beta} + \tilde\lambda \left(\sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]} - \breve U\right),$   (11)
where $\tilde\lambda \ge 0$ is the Lagrangian multiplier. Differentiating (11) with respect to $\tilde u_{[j]}$ and $\tilde\lambda$, we have

$\frac{\partial \tilde L(\tilde u_{[j]}, \tilde\lambda)}{\partial \tilde u_{[j]}} = \tilde\lambda g_{[j]} - \beta \vartheta_j \left(\bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\beta} \tilde u_{[j]}^{-(\beta+1)} = 0$   (12)
and
$\frac{\partial \tilde L(\tilde u_{[j]}, \tilde\lambda)}{\partial \tilde\lambda} = \sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]} - \breve U = 0.$   (13)
From Equation (12), we have

$\tilde u_{[j]} = \left(\frac{\beta \vartheta_j}{\tilde\lambda g_{[j]}}\right)^{\frac{1}{\beta+1}} \left(\bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{\beta+1}}.$   (14)
Substituting Equation (14) into Equation (13), we have

$\tilde\lambda^{\frac{1}{\beta+1}} = \frac{\beta^{\frac{1}{\beta+1}} \sum_{j=1}^{\check n} (\vartheta_j)^{\frac{1}{\beta+1}} \left(g_{[j]} \bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{\beta+1}}}{\breve U}.$   (15)
From Equations (14) and (15), Equation (10) holds.    □
By substituting Equation (10) into $\sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A} = \sum_{j=1}^{\check n} \vartheta_j \left(\bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha} / \tilde u_{[j]}\right)^{\beta}$, we have

$\sum_{j=1}^{\check n} \vartheta_j p_{[j]}^{A} = \frac{\left[\sum_{j=1}^{\check n} (\vartheta_j)^{\frac{1}{1+\beta}} \left(g_{[j]} \bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{1+\beta}}\right]^{\beta+1}}{\breve U^{\beta}}.$   (16)
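Lemma 2 can be checked in the same way. The sketch below (hypothetical data) computes the allocation of Equation (10), verifies that it spends exactly the budget $\breve U$, and verifies that the resulting scheduling cost matches the closed form (16).

```python
def P_factor(p, alpha, j):
    """bar p_[j] * (1 + sum_{h<j} bar p_[h])^alpha for 0-based position j."""
    return p[j] * (1.0 + sum(p[:j])) ** alpha

def S_sum(p, theta, g, alpha, beta):
    """The schedule-dependent sum appearing in Equations (15), (16), and (19)."""
    return sum(
        theta[j] ** (1.0 / (beta + 1.0))
        * (g[j] * P_factor(p, alpha, j)) ** (beta / (beta + 1.0))
        for j in range(len(p))
    )

def u_star_P2(p, theta, g, alpha, beta, U):
    """Allocation of Equation (10): a proportional split of the budget U."""
    S = S_sum(p, theta, g, alpha, beta)
    return [
        theta[j] ** (1.0 / (beta + 1.0))
        * P_factor(p, alpha, j) ** (beta / (beta + 1.0))
        / g[j] ** (1.0 / (beta + 1.0)) * U / S
        for j in range(len(p))
    ]
```

Because the allocation scales linearly with $\breve U$, the optimal scheduling cost in Equation (16) falls off as $\breve U^{-\beta}$, the classic budget/cost trade-off of convex resource models.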

3.3. Problem P 3 ¯

Lemma 3.
For the problem $\bar P_3$, the optimal resource allocation is:

$\tilde u_{[j]}^{*} = \breve V^{-\frac{1}{\beta}} \left[\sum_{k=1}^{\check n} (\vartheta_k)^{\frac{1}{\beta+1}} \left(g_{[k]} \bar p_{[k]} \left(1+\sum_{h=1}^{k-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{\beta+1}}\right]^{\frac{1}{\beta}} \frac{(\vartheta_j)^{\frac{1}{\beta+1}} \left(\bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{\beta+1}}}{(g_{[j]})^{\frac{1}{\beta+1}}}, \quad j = 1, 2, \ldots, \check n.$   (17)
Proof. 
Similar to Lemma 2.    □
By substituting Equation (17) into $\sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]}$, we have:

$\sum_{j=1}^{\check n} g_{[j]} \tilde u_{[j]} = \breve V^{-\frac{1}{\beta}} \left[\sum_{j=1}^{\check n} (\vartheta_j)^{\frac{1}{1+\beta}} \left(g_{[j]} \bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{1+\beta}}\right]^{1+\frac{1}{\beta}}.$   (18)
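The symmetric check for Lemma 3 (again with hypothetical data): the allocation of Equation (17) meets the scheduling-cost bound $\breve V$ exactly, and the resource spend matches Equation (18).

```python
def P_factor(p, alpha, j):
    """bar p_[j] * (1 + sum_{h<j} bar p_[h])^alpha for 0-based position j."""
    return p[j] * (1.0 + sum(p[:j])) ** alpha

def u_star_P3(p, theta, g, alpha, beta, V):
    """Allocation of Equation (17): the cheapest spend achieving cost <= V."""
    S = sum(
        theta[k] ** (1.0 / (beta + 1.0))
        * (g[k] * P_factor(p, alpha, k)) ** (beta / (beta + 1.0))
        for k in range(len(p))
    )
    return [
        V ** (-1.0 / beta) * S ** (1.0 / beta)
        * theta[j] ** (1.0 / (beta + 1.0))
        * P_factor(p, alpha, j) ** (beta / (beta + 1.0))
        / g[j] ** (1.0 / (beta + 1.0))
        for j in range(len(p))
    ]
```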

4. Algorithms

Since $\beta$, $\delta$, $\eta$, $\breve U$ and $\breve V$ are given parameters, from Equations (9), (16), and (18), it can be shown that solving $\bar P_1$, $\bar P_2$ and $\bar P_3$ is equivalent to minimizing

$M = \sum_{j=1}^{\check n} (\vartheta_j)^{\frac{1}{1+\beta}} \left(g_{[j]} \bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{1+\beta}}.$   (19)
If $\vartheta_j = 1$, $g_j = 1$ ($j = 1, 2, \ldots, \check n$), we will show that neither the SPT rule nor the LPT rule can find the optimal schedule for $\bar P_1$, $\bar P_2$ and $\bar P_3$.
Example 1.
Assume that $\check n = 3$, $\alpha = -0.5$, $\beta = 1$, and the processing times of the jobs are $\bar p_1 = 2$, $\bar p_2 = 3$, $\bar p_3 = 4$.
According to the SPT order, $M = 4.0082$.
Under the LPT rule, $M = 3.9992$.
Therefore, SPT is not an optimal schedule for the case of $\vartheta_j = 1$, $g_j = 1$ ($j = 1, 2, \ldots, \check n$).
Example 2.
Assume that $\check n = 3$, $\alpha = -0.2$, $\beta = 3$, and the processing times of the jobs are $\bar p_1 = 2$, $\bar p_2 = 4$, $\bar p_3 = 7$.
According to the LPT rule, $M = 7.5325$.
Under the SPT order, $M = 7.2946$.
Therefore, LPT is not an optimal schedule for the case of $\vartheta_j = 1$, $g_j = 1$ ($j = 1, 2, \ldots, \check n$).
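Both examples can be reproduced directly from Equation (19) with $\vartheta_j = g_j = 1$. The sketch below does this and also enumerates all $3! = 6$ sequences; for Example 1, the enumeration confirms that neither SPT nor LPT is optimal. (The learning factors are taken as $\alpha = -0.5$ and $\alpha = -0.2$, consistent with the model's requirement $\alpha \le 0$; these values reproduce the stated $M$ figures.)

```python
from itertools import permutations

def M_value(p_seq, alpha, beta):
    """Equation (19) with theta_j = g_j = 1 for a given processing order."""
    total, acc = 0.0, 0.0
    for pj in p_seq:
        total += (pj * (1.0 + acc) ** alpha) ** (beta / (1.0 + beta))
        acc += pj
    return total

def best_schedule(p, alpha, beta):
    """Exact optimum by full enumeration (fine for tiny n)."""
    return min(permutations(p), key=lambda s: M_value(s, alpha, beta))

# Example 1: n = 3, alpha = -0.5, beta = 1
m_spt1 = M_value((2, 3, 4), -0.5, 1)   # ~4.0082
m_lpt1 = M_value((4, 3, 2), -0.5, 1)   # ~3.9992
# Example 2: n = 3, alpha = -0.2, beta = 3
m_lpt2 = M_value((7, 4, 2), -0.2, 3)   # ~7.5325
m_spt2 = M_value((2, 4, 7), -0.2, 3)   # ~7.2946
```

For Example 1, the enumerated optimum is the sequence $(3, 4, 2)$, which beats both SPT and LPT.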

4.1. Polynomial Time Solvable Cases

4.1.1. Case 1

If $\bar p_j = \bar p$ ($j = 1, 2, \ldots, \check n$), we have:
Theorem 1.
If $\bar p_j = \bar p$ ($j = 1, 2, \ldots, \check n$), then for $\bar P_1$, $\bar P_2$ and $\bar P_3$, the optimal schedule can be obtained in $O(\check n \log \check n)$ time.
Proof. 
If $\bar p_j = \bar p$ ($j = 1, 2, \ldots, \check n$), from Equation (19), we have

$M = \sum_{j=1}^{\check n} (\vartheta_j)^{\frac{1}{1+\beta}} \left(g_{[j]} \bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{1+\beta}} = \sum_{j=1}^{\check n} (\vartheta_j)^{\frac{1}{1+\beta}} \left[\bar p \left(1+(j-1)\bar p\right)^{\alpha}\right]^{\frac{\beta}{1+\beta}} (g_{[j]})^{\frac{\beta}{1+\beta}}.$   (20)

Let $X_j = (\vartheta_j)^{\frac{1}{1+\beta}} \left[\bar p \left(1+(j-1)\bar p\right)^{\alpha}\right]^{\frac{\beta}{1+\beta}}$ and $Y_{[j]} = (g_{[j]})^{\frac{\beta}{1+\beta}}$. Obviously, Equation (20) can be minimized by the HLP rule (see Hardy et al. [26]) in $O(\check n \log \check n)$ time, i.e., place the largest $Y_j$ on the smallest $X_j$, the second largest $Y_j$ on the second smallest $X_j$, and so forth.    □
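The HLP matching used in the proof has a one-line implementation: sort one vector ascending and the other descending. The sketch below confirms, by brute force over all assignments on a small hypothetical instance, that this pairing minimizes $\sum_j X_j Y_{[j]}$.

```python
from itertools import permutations

def hlp_min_sum(X, Y):
    """HLP rule: pair the largest Y with the smallest X; returns the
    minimized sum of products sum_j X_j * Y_[j]."""
    return sum(x * y for x, y in zip(sorted(X), sorted(Y, reverse=True)))

def brute_min_sum(X, Y):
    """Exhaustive minimum over all assignments of Y-values to X-positions."""
    return min(sum(x * y for x, y in zip(X, perm)) for perm in permutations(Y))
```

The two sorts give the stated $O(\check n \log \check n)$ running time; the brute-force check is for validation only.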

4.1.2. Case 2

If α = 0 , we have:
Theorem 2.
If $\alpha = 0$, then for the problems $\bar P_1$, $\bar P_2$ and $\bar P_3$, the optimal schedule can be obtained in $O(\check n \log \check n)$ time.
Proof. 
If $\alpha = 0$, from Equation (19),

$M = \sum_{j=1}^{\check n} (\vartheta_j)^{\frac{1}{1+\beta}} \left(g_{[j]} \bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{1+\beta}} = \sum_{j=1}^{\check n} (\vartheta_j)^{\frac{1}{1+\beta}} \left(g_{[j]} \bar p_{[j]}\right)^{\frac{\beta}{1+\beta}}.$   (21)

Let $X_j = (\vartheta_j)^{\frac{1}{1+\beta}}$ and $Y_{[j]} = \left(g_{[j]} \bar p_{[j]}\right)^{\frac{\beta}{1+\beta}}$. Obviously, Equation (21) can be minimized by the HLP rule in $O(\check n \log \check n)$ time.    □

4.1.3. Case 3

If $\vartheta_j = \vartheta$ ($j = 1, 2, \ldots, \check n$), and $g_k \bar p_k \le g_j \bar p_j$ implies $\bar p_k \ge \bar p_j$ for all jobs $\dot J_k$ and $\dot J_j$ (equivalently, $g_k \bar p_k \ge g_j \bar p_j$ implies $\bar p_k \le \bar p_j$), we have:
Theorem 3.
If $\vartheta_j = \vartheta$ ($j = 1, 2, \ldots, \check n$), and $g_k \bar p_k \le g_j \bar p_j$ implies $\bar p_k \ge \bar p_j$ (equivalently, $g_k \bar p_k \ge g_j \bar p_j$ implies $\bar p_k \le \bar p_j$), then for $\bar P_1$, $\bar P_2$ and $\bar P_3$, the optimal schedule can be obtained in $O(\check n \log \check n)$ time, i.e., by sequencing the jobs in non-decreasing order of $g_j \bar p_j$ (equivalently, non-increasing order of $\bar p_j$).
Proof. 
If $\vartheta_j = \vartheta$ ($j = 1, 2, \ldots, \check n$), from Equation (19), we have

$M = \vartheta^{\frac{1}{1+\beta}} \sum_{j=1}^{\check n} \left(g_{[j]} \bar p_{[j]} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\alpha}\right)^{\frac{\beta}{1+\beta}}.$   (22)

Let $\bar\psi = (\dot J_{[1]}, \ldots, \dot J_{[r-1]}, \dot J_k, \dot J_j, \dot J_{[r+2]}, \ldots, \dot J_{[\check n]})$ and $\bar\psi' = (\dot J_{[1]}, \ldots, \dot J_{[r-1]}, \dot J_j, \dot J_k, \dot J_{[r+2]}, \ldots, \dot J_{[\check n]})$ be two job schedules, where $g_k \bar p_k \le g_j \bar p_j$ and $\bar p_k \ge \bar p_j$. The first $r-1$ terms of Equation (22) are identical under $\bar\psi$ and $\bar\psi'$, and so are the terms from position $r+2$ onwards, since the swapped pair contributes the same total processing time $\bar p_k + \bar p_j$. Hence, writing $A = 1 + \sum_{h=1}^{r-1} \bar p_{[h]}$, to show that $\bar\psi$ dominates $\bar\psi'$, it suffices to show that the $r$th and $(r+1)$th terms satisfy

$(g_k \bar p_k)^{\frac{\beta}{1+\beta}} A^{\frac{\alpha\beta}{1+\beta}} + (g_j \bar p_j)^{\frac{\beta}{1+\beta}} (A + \bar p_k)^{\frac{\alpha\beta}{1+\beta}} \le (g_j \bar p_j)^{\frac{\beta}{1+\beta}} A^{\frac{\alpha\beta}{1+\beta}} + (g_k \bar p_k)^{\frac{\beta}{1+\beta}} (A + \bar p_j)^{\frac{\alpha\beta}{1+\beta}}.$

Obviously, if $\bar p_k \ge \bar p_j$ and $\alpha \le 0$, we have $(A + \bar p_k)^{\frac{\alpha\beta}{1+\beta}} \le (A + \bar p_j)^{\frac{\alpha\beta}{1+\beta}}$; then

$(g_k \bar p_k)^{\frac{\beta}{1+\beta}} A^{\frac{\alpha\beta}{1+\beta}} + (g_j \bar p_j)^{\frac{\beta}{1+\beta}} (A + \bar p_k)^{\frac{\alpha\beta}{1+\beta}} - (g_j \bar p_j)^{\frac{\beta}{1+\beta}} A^{\frac{\alpha\beta}{1+\beta}} - (g_k \bar p_k)^{\frac{\beta}{1+\beta}} (A + \bar p_j)^{\frac{\alpha\beta}{1+\beta}} \le \left((g_k \bar p_k)^{\frac{\beta}{1+\beta}} - (g_j \bar p_j)^{\frac{\beta}{1+\beta}}\right) \left(A^{\frac{\alpha\beta}{1+\beta}} - (A + \bar p_j)^{\frac{\alpha\beta}{1+\beta}}\right) \le 0,$

since $g_k \bar p_k \le g_j \bar p_j$ and $A^{\frac{\alpha\beta}{1+\beta}} \ge (A + \bar p_j)^{\frac{\alpha\beta}{1+\beta}}$.
   □

4.2. Lower Bound

Let $\bar\psi = (\bar\psi_P, \bar\psi_U)$ be a schedule, where $\bar\psi_P$ ($\bar\psi_U$) is the scheduled (unscheduled) part, and there are $\eta$ jobs in $\bar\psi_P$. Let $\bar p_{\min} = \min\{\bar p_j \mid j \in \bar\psi_U\}$; from Equation (19), we have:

$M(\bar\psi_P, \bar\psi_U) = \sum_{j=1}^{\eta} (\vartheta_j)^{\frac{1}{1+\beta}} (g_{[j]} \bar p_{[j]})^{\frac{\beta}{1+\beta}} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\frac{\alpha\beta}{1+\beta}} + \sum_{j=\eta+1}^{\check n} (\vartheta_j)^{\frac{1}{1+\beta}} (g_{[j]} \bar p_{[j]})^{\frac{\beta}{1+\beta}} \left(1+\sum_{h=1}^{\eta}\bar p_{[h]} + \sum_{h=\eta+1}^{j-1}\bar p_{[h]}\right)^{\frac{\alpha\beta}{1+\beta}} \ge \sum_{j=1}^{\eta} (\vartheta_j)^{\frac{1}{1+\beta}} (g_{[j]} \bar p_{[j]})^{\frac{\beta}{1+\beta}} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\frac{\alpha\beta}{1+\beta}} + \sum_{j=\eta+1}^{\check n} (\vartheta_j)^{\frac{1}{1+\beta}} (g_{[j]})^{\frac{\beta}{1+\beta}} (\bar p_{\min})^{\frac{\beta}{1+\beta}} \left(1+\sum_{h=1}^{\eta}\bar p_{[h]} + (j-1-\eta)\bar p_{\min}\right)^{\frac{\alpha\beta}{1+\beta}}.$   (23)
Observe that the terms $1+\sum_{h=1}^{\eta}\bar p_{[h]}$ and $\sum_{j=1}^{\eta} (\vartheta_j)^{\frac{1}{1+\beta}} (g_{[j]} \bar p_{[j]})^{\frac{\beta}{1+\beta}} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\frac{\alpha\beta}{1+\beta}}$ are known, and a lower bound can be obtained by minimizing $\sum_{j=\eta+1}^{\check n} (\vartheta_j)^{\frac{1}{1+\beta}} (g_{[j]})^{\frac{\beta}{1+\beta}} (\bar p_{\min})^{\frac{\beta}{1+\beta}} \left(1+\sum_{h=1}^{\eta}\bar p_{[h]} + (j-1-\eta)\bar p_{\min}\right)^{\frac{\alpha\beta}{1+\beta}}$. From Theorem 1, we have the first lower bound:

$M(\bar\psi_P, \bar\psi_U) \ge LB_1 = \sum_{j=1}^{\eta} (\vartheta_j)^{\frac{1}{1+\beta}} (g_{[j]} \bar p_{[j]})^{\frac{\beta}{1+\beta}} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\frac{\alpha\beta}{1+\beta}} + \sum_{j=\eta+1}^{\check n} X_j Y_{[j]},$   (24)

where $X_j = (\vartheta_j)^{\frac{1}{1+\beta}} (\bar p_{\min})^{\frac{\beta}{1+\beta}} \left(1+\sum_{h=1}^{\eta}\bar p_{[h]} + (j-1-\eta)\bar p_{\min}\right)^{\frac{\alpha\beta}{1+\beta}}$, $Y_{[j]} = (g_{[j]})^{\frac{\beta}{1+\beta}}$, and $\sum_{j=\eta+1}^{\check n} X_j Y_{[j]}$ can be minimized by the HLP rule.
Similarly, let $\vartheta_{\min} = \min\{\vartheta_j \mid j = \eta+1, \eta+2, \ldots, \check n\}$; from Theorem 3, we have the second lower bound

$M(\bar\psi_P, \bar\psi_U) \ge LB_2 = \sum_{j=1}^{\eta} (\vartheta_j)^{\frac{1}{1+\beta}} (g_{[j]} \bar p_{[j]})^{\frac{\beta}{1+\beta}} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\frac{\alpha\beta}{1+\beta}} + (\vartheta_{\min})^{\frac{1}{1+\beta}} \sum_{j=\eta+1}^{\check n} \left(g_{(j)} \bar p_{(j)}\right)^{\frac{\beta}{1+\beta}} \left(1+\sum_{h=1}^{\eta}\bar p_{[h]} + \sum_{h=\eta+1}^{j-1}\bar p_{\langle h\rangle}\right)^{\frac{\alpha\beta}{1+\beta}},$   (25)

where $g_{(\eta+1)} \bar p_{(\eta+1)} \le g_{(\eta+2)} \bar p_{(\eta+2)} \le \cdots \le g_{(\check n)} \bar p_{(\check n)}$ and $\bar p_{\langle\eta+1\rangle} \ge \bar p_{\langle\eta+2\rangle} \ge \cdots \ge \bar p_{\langle\check n\rangle}$ (note that $g_{(j)} \bar p_{(j)}$ and $\bar p_{\langle j\rangle}$ do not necessarily correspond to the same job).
Combining Equations (24) and (25), the lower bound for $\bar P_1$, $\bar P_2$ and $\bar P_3$ is

$M(\bar\psi_P, \bar\psi_U) \ge LB = \max\{LB_1, LB_2\}.$   (26)

4.3. Upper Bound

From Section 4.1, we can propose the following heuristic algorithm as an upper bound (UB) for $\bar P_1$, $\bar P_2$ and $\bar P_3$ (see Algorithm 1).
We then use a branch-and-bound algorithm, abbreviated B&B, which is organized around a search tree. The original problem is the root node of the search tree. Branching divides a large problem into smaller subproblems: the large problem is the parent node, and each subproblem separated from it is a child node, so the branching process adds children to the tree. Bounding checks the upper and lower bounds of each subproblem during branching; if a subproblem cannot produce a better solution than the current best one, its branch is cut. The algorithm ends when none of the subproblems yields a better solution.
Algorithm 1: (UB)
Step 1.
Sequence the jobs by the HLP rule, where $X_j = (\vartheta_j)^{\frac{1}{1+\beta}} j^{\frac{\alpha\beta}{1+\beta}}$ and $Y_{[j]} = (g_{[j]})^{\frac{\beta}{1+\beta}}$.
Step 2.
Sequence the jobs by the HLP rule, where $X_j = (\vartheta_j)^{\frac{1}{1+\beta}}$ and $Y_{[j]} = \left(g_{[j]} \bar p_{[j]}\right)^{\frac{\beta}{1+\beta}}$ (cf. Theorem 2).
Step 3.
Sequence the jobs in non-decreasing order of g j p ¯ j .
Step 4.
Sequence the jobs in non-increasing order of p ¯ j .
Step 5.
Choose the schedule with the minimal value of $M = \sum_{j=1}^{\check n} (\vartheta_j)^{\frac{1}{1+\beta}} (g_{[j]} \bar p_{[j]})^{\frac{\beta}{1+\beta}} \left(1+\sum_{h=1}^{j-1}\bar p_{[h]}\right)^{\frac{\alpha\beta}{1+\beta}}$ from Steps 1–4.
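A hedged sketch of Algorithm 1 in Python (the instance data below are hypothetical). Each step produces one candidate sequence: Steps 1 and 2 realize the HLP pairings as concrete job orders, Steps 3 and 4 are simple sorts, and Step 5 keeps the candidate with the smallest $M$ of Equation (19). A brute-force optimum is included only to verify that the heuristic value is a valid upper bound.

```python
from itertools import permutations

def M_full(order, p, theta, g, alpha, beta):
    """Equation (19) for jobs processed in the given order of job indices."""
    total, acc = 0.0, 0.0
    for pos, j in enumerate(order):
        total += theta[pos] ** (1.0 / (1.0 + beta)) * (
            g[j] * p[j] * (1.0 + acc) ** alpha) ** (beta / (1.0 + beta))
        acc += p[j]
    return total

def hlp_order(x_by_pos, y_by_job):
    """Assign the job with the largest Y to the position with the smallest X."""
    n = len(x_by_pos)
    order = [None] * n
    for pos, job in zip(sorted(range(n), key=lambda r: x_by_pos[r]),
                        sorted(range(n), key=lambda j: -y_by_job[j])):
        order[pos] = job
    return order

def ub_heuristic(p, theta, g, alpha, beta):
    """Algorithm 1 (UB): best of four candidate sequences under Equation (19)."""
    n = len(p)
    e1, e2 = 1.0 / (1.0 + beta), beta / (1.0 + beta)
    cands = [
        hlp_order([theta[r] ** e1 * (r + 1) ** (alpha * e2) for r in range(n)],
                  [g[j] ** e2 for j in range(n)]),                  # Step 1
        hlp_order([theta[r] ** e1 for r in range(n)],
                  [(g[j] * p[j]) ** e2 for j in range(n)]),         # Step 2
        sorted(range(n), key=lambda j: g[j] * p[j]),                # Step 3
        sorted(range(n), key=lambda j: -p[j]),                      # Step 4
    ]
    return min(cands, key=lambda o: M_full(o, p, theta, g, alpha, beta))  # Step 5

def brute_force_best(p, theta, g, alpha, beta):
    """Exact optimum by full enumeration (for checking only; tiny n)."""
    return min((list(o) for o in permutations(range(len(p)))),
               key=lambda o: M_full(o, p, theta, g, alpha, beta))
```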

4.4. Complex Algorithms

From Nawaz et al. [27], the NEH heuristic (i.e., Algorithm 2) can be proposed for $\bar P_1$, $\bar P_2$ and $\bar P_3$, and two NEH variants are designed.
Algorithm 2: (NEH)
Step 1.
  
Step 1.1. An initial solution sorted by the SPT rule of g j p ¯ j (denoted by NEH[SPT]);
Step 1.2. An initial solution sorted by the LPT rule of p ¯ j (denoted by NEH[LPT]).
Step 2.
Pick the two jobs from the first and second positions of the list of Step 1, and find the best schedule for these two jobs by calculating M for the two possible schedules. Do not change the relative positions of these two jobs with respect to each other in the remaining steps of the algorithm. Set h = 3 .
Step 3.
Pick the job in the hth position of the list generated in Step 1 and find the best schedule by inserting it at each of the h possible positions in the partial schedule found in the previous step, without changing the relative order of the already assigned jobs. The number of sequences evaluated at this step equals h.
Step 4.
If n ˇ = h , STOP, otherwise set h = h + 1 and go to Step 3.
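Steps 2–4 above can be sketched compactly as follows (hypothetical data; partial sequences are scored with Equation (19) restricted to the already placed jobs, which is exact for the unit-weight case $\vartheta_j = 1$ used here).

```python
def M_seq(order, p, theta, g, alpha, beta):
    """Equation (19) for a (possibly partial) sequence of job indices."""
    total, acc = 0.0, 0.0
    for pos, j in enumerate(order):
        total += theta[pos] ** (1.0 / (1.0 + beta)) * (
            g[j] * p[j] * (1.0 + acc) ** alpha) ** (beta / (1.0 + beta))
        acc += p[j]
    return total

def neh(p, theta, g, alpha, beta, initial_order):
    """NEH (Nawaz-Enscore-Ham): take jobs in the given initial order and
    insert each at the position of the partial schedule minimizing M,
    keeping the relative order of already placed jobs fixed (Steps 2-4)."""
    seq = []
    for j in initial_order:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: M_seq(s, p, theta, g, alpha, beta))
    return seq
```

NEH[SPT] and NEH[LPT] differ only in `initial_order`: non-decreasing $g_j \bar p_j$ versus non-increasing $\bar p_j$.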
In addition, the tabu search (TS) algorithm can be proposed for P 1 ¯ , P 2 ¯ and P 3 ¯ . The initial schedule used in the TS algorithm is chosen by the SPT rule of g j p ¯ j (denoted by TS[SPT]) and the LPT rule of p ¯ j (denoted by TS[LPT]), and the maximum number of iterations for the TS algorithm is set at 1000 n ˇ . The steps of the TS heuristic are given in Wu et al. [28].
From the upper bound (i.e., Algorithm 1) and lower one (see Equation (26)), the standard branch-and-bound algorithm (denoted by B&B) can be proposed, and the depth first search strategy is used.

5. Computational Experiments

This section tests the accuracy and efficiency of the proposed algorithms UB, NEH, TS and B&B. Detailed programming and testing configurations are as follows.
  • Java version: Oracle JDK-11.01; the maximum memory allowed was restricted to 64 GB.
  • Testing computer: one desktop computer with an Intel Core i5-10500 3.1 GHz CPU, 8 GB RAM, and Windows 10, via VC++ 6.0.
We assume that ϑ j = 1 (i.e., for the makespan C max ), and the following parameters are given:
(1)
$\alpha = -0.25, -0.3, -0.35, -0.4$;
(2)
β = 1 , 2 , 3 , 4 ;
(3)
p ¯ j is uniformly distributed over [1, 100];
(4)
g j is uniformly distributed over [1, 50];
(5)
$\check n$ = 13, 14, 15, 16 (small-scale instances; the global optimum can be obtained by the B&B);
(6)
$\check n$ = 100, 150, 200, 250, 300 (large-scale instances; the B&B was disabled).
Twenty instances were generated for each combination of $\check n$, $\alpha$, and $\beta$. For small-scale instances, based on Equation (19), the error of the algorithms is calculated by

$\frac{M(\bar\psi) - M(\bar\psi^{*})}{M(\bar\psi^{*})} \times 100\%,$   (27)
where $\bar\psi$ is a schedule obtained by UB, NEH[SPT], NEH[LPT], TS[SPT], or TS[LPT], and the optimal schedule $\bar\psi^{*}$ is obtained by the B&B. The running time of UB, NEH[SPT], NEH[LPT], TS[SPT], TS[LPT], and B&B is reported as "CPU time", in milliseconds (ms). Table 3 compares the average and maximal running times of the above algorithms; the maximal running time of the B&B was 2,996,279 ms ($\check n \le 16$). Table 4 compares the errors of the above algorithms; NEH[SPT] performs very well, with a maximal error of 9.2% for $\check n \le 16$. For large-scale instances, the running time (i.e., "CPU time", ms) is reported, and the error of the algorithms is:

$\frac{M(UB) - M(\bar\psi)}{M(UB)} \times 100\%.$   (28)
From Table 5, Table 6, Table 7 and Table 8, we can see that NEH performs considerably better than TS and UB.
Table 3. CPU time (ms) results for n ˇ = 13 , 14 , 15 , 16 .
Table 4. Error results (%) for n ˇ = 13 , 14 , 15 , 16 .
Table 5. CPU time (ms) results for n ˇ = 100 , 150 , 200 .
Table 6. CPU time (ms) results for n ˇ = 250 , 300 .
Table 7. Error results (%) for n ˇ = 100 , 150 , 200 .
Table 8. Error results (%) for n ˇ = 250 , 300 .
As can be seen from Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, the efficiency of the algorithms is mainly related to the number of jobs; the learning factor does not affect their efficiency. As the number of jobs increases, the execution times of the algorithms increase significantly; as the learning factor increases or decreases, the execution time fluctuates, and the number of nodes increases when $\alpha < -0.25$. For the small-scale experiments (Table 3 and Table 4), the errors of NEH[SPT] are the smallest among the heuristics, indicating that NEH[SPT] is highly accurate. The running time of the B&B increases significantly with the number of jobs, as does its number of nodes. When the number of jobs exceeds 16, the average running time of the B&B exceeds 3600 s, which indicates that the B&B is suitable, and efficient, only for small-scale experiments. For the large-scale experiments (Table 5, Table 6, Table 7 and Table 8), the B&B is not applicable. Comparing the polynomial-time algorithm (i.e., NEH) with the intelligent algorithm (i.e., TS), the polynomial-time algorithm is clearly more efficient; NEH[SPT] and NEH[LPT] have similar errors, both better than those of TS[SPT] and TS[LPT]. This implies that NEH is more suitable for large-scale experiments.

6. Conclusions

In this paper, we studied single-machine scheduling problems with a time-dependent learning effect and convex resource allocation. For three versions of the scheduling cost and the resource consumption cost, we provided a bicriteria analysis. We proved that some special cases of the problems can be solved in polynomial time. For the general case of the problems, we proposed a heuristic algorithm, a branch-and-bound algorithm, an NEH algorithm, and a TS algorithm. Future research should further address the computational complexity of the problems $\bar P_1$, $\bar P_2$ and $\bar P_3$, explore flow shop or parallel-machine scheduling (see Sterna and Czerniachowska [29]) with the time-dependent learning effect and resource allocation, or consider flow shop scheduling with deteriorating effects (see Sun and Geng [30]).

Author Contributions

Methodology, Y.-C.W. and J.-B.W.; investigation, J.-B.W.; writing—original draft preparation, Y.-C.W.; writing—review and editing, Y.-C.W. and J.-B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the LiaoNing Revitalization Talents Program (grant no. XLYC2002017) and the Science Research Foundation of the Educational Department of Liaoning Province (LJKMZ20220527).

Data Availability Statement

The data used to support the findings of this paper are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Azzouz, A.; Ennigrou, M.; Said, L.B. Scheduling problems under learning effects: Classification and cartography. Int. J. Prod. Res. 2018, 56, 1642–1661. [Google Scholar] [CrossRef]
  2. Sun, X.; Geng, X.N.; Liu, F. Flow shop scheduling with general position weighted learning effects to minimise total weighted completion time. J. Oper. Res. Soc. 2021, 72, 2674–2689. [Google Scholar] [CrossRef]
  3. Zhao, S. Scheduling jobs with general truncated learning effects including proportional setup times. Comput. Appl. Math. 2022, 41, 146. [Google Scholar] [CrossRef]
  4. Wang, J.-B.; Zhang, L.-H.; Lv, Z.-G.; Lv, D.-Y.; Geng, X.-N.; Sun, X. Heuristic and exact algorithms for single-machine scheduling problems with general truncated learning effects. Comput. Appl. Math. 2022, 41, 417. [Google Scholar] [CrossRef]
  5. Chen, K.; Cheng, T.C.E.; Huang, H.; Ji, M.; Yao, D. Single-machine scheduling with autonomous and induced learning to minimize the total weighted number of tardy jobs. Eur. J. Oper. Res. 2023, 309, 24–34. [Google Scholar] [CrossRef]
  6. Ren, N.; Lv, D.Y.; Wang, J.B.; Wang, X.Y. Solution algorithms for single-machine scheduling with learning effects and exponential past-sequence-dependent delivery times. J. Ind. Manag. Optim. 2023, 19, 8429–8450. [Google Scholar] [CrossRef]
  7. Wang, S.-H.; Lv, D.-Y.; Wang, J.-B. Research on position-dependent weights scheduling with delivery times and truncated sum-of-processing-times-based learning effect. J. Ind. Manag. Optim. 2023, 19, 2824–2837. [Google Scholar] [CrossRef]
  8. Guan, X.H.; Zhai, Q.Z.; Lai, F. New lagrangian relaxation based algorithm for resource scheduling with homogeneous subproblems. J. Optim. Theory Appl. 2002, 113, 65–82. [Google Scholar] [CrossRef]
  9. Wang, X.; Cheng, T.C.E. Single machine scheduling with resource dependent release times and processing times. Eur. J. Oper. Res. 2005, 162, 727–739. [Google Scholar] [CrossRef]
  10. Shabtay, D.; Steiner, G. A survey of scheduling with controllable processing times. Discret. Appl. Math. 2007, 155, 1643–1666. [Google Scholar] [CrossRef]
  11. Zhang, L.-H.; Lv, D.-Y.; Wang, J.-B. Two-agent slack due-date assignment scheduling with resource allocations and deteriorating jobs. Mathematics 2023, 11, 2737. [Google Scholar] [CrossRef]
  12. Wang, Y.-C.; Wang, S.-H.; Wang, J.-B. Resource allocation scheduling with position-dependent weights and generalized earliness-tardiness cost. Mathematics 2023, 11, 222. [Google Scholar] [CrossRef]
  13. Wang, J.B.; Lv, D.Y.; Wang, S.Y.; Jiang, C. Resource allocation scheduling with deteriorating jobs and position-dependent workloads. J. Ind. Manag. Optim. 2023, 19, 1658–1669. [Google Scholar] [CrossRef]
  14. Liu, W.; Wang, X. Group technology scheduling with due-date assignment and controllable processing times. Processes 2023, 11, 1271. [Google Scholar] [CrossRef]
  15. Lu, Y.Y.; Wang, J.B.; Ji, P.; He, H. A note on resource allocation scheduling with group technology and learning effects on a single machine. Eng. Optim. 2017, 49, 1621–1632. [Google Scholar] [CrossRef]
  16. Wang, J.B.; Liu, M.; Yin, N.; Ji, P. Scheduling jobs with controllable processing time, truncated job-dependent learning and deterioration effects. J. Ind. Manag. Optim. 2017, 13, 1025–1039. [Google Scholar] [CrossRef]
  17. Liu, W.W.; Jiang, C. Flow shop resource allocation scheduling with due date assignment, learning effect and position-dependent weights. Asia-Pacific J. Oper. Res. 2020, 37, 2050014. [Google Scholar] [CrossRef]
  18. Zhao, S. Resource allocation flowshop scheduling with learning effect and slack due window assignment. J. Ind. Manag. Optim. 2021, 17, 2817–2835. [Google Scholar] [CrossRef]
  19. Wang, J.B.; Lv, D.Y.; Xu, J.; Ji, P.; Li, F. Bicriterion scheduling with truncated learning effects and convex controllable processing times. Int. Trans. Oper. Res. 2021, 28, 1573–1593. [Google Scholar] [CrossRef]
  20. Yan, J.-X.; Ren, N.; Bei, H.-B.; Bao, H.; Wang, J.-B. Study on resource allocation scheduling problem with learning factors and group technology. J. Ind. Manag. Optim. 2023, 19, 3419–3435. [Google Scholar] [CrossRef]
  21. Biskup, D. Single-machine scheduling with learning considerations. Eur. J. Oper. Res. 1999, 115, 173–178. [Google Scholar] [CrossRef]
  22. Kuo, W.H.; Yang, D.L. Minimizing the total completion time in a single-machine scheduling problem with a time-dependent learning effect. Eur. J. Oper. Res. 2006, 174, 1184–1190. [Google Scholar] [CrossRef]
  23. Wang, D.; Wang, M.Z.; Wang, J.B. Single–Machine scheduling with learning effect and resource-dependent processing times. Comput. Ind. Eng. 2010, 59, 458–462. [Google Scholar] [CrossRef]
  24. Kanet, J.J. Minimizing variation of flow time in single machine systems. Manag. Sci. 1981, 27, 1453–1459. [Google Scholar] [CrossRef]
  25. Bagchi, U.B. Simultaneous minimization of mean and variation of flow-time and waiting time in single machine systems. Oper. Res. 1989, 37, 118–125. [Google Scholar] [CrossRef]
  26. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Inequalities; Cambridge University Press: Cambridge, UK, 1967. [Google Scholar]
  27. Nawaz, M.; Enscore Jr, E.E.; Ham, I. A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega 1983, 11, 91–95. [Google Scholar] [CrossRef]
  28. Wu, C.C.; Wu, W.H.; Hsu, P.H.; Yin, Y.; Xu, J. A single-machine scheduling with a truncated linear deterioration and ready times. Inf. Sci. 2014, 256, 109–125. [Google Scholar] [CrossRef]
  29. Sterna, M.; Czerniachowska, K. Polynomial time approximation scheme for two parallel machines scheduling with a common due date to maximize early work. J. Optim. Theory Appl. 2017, 174, 927–944. [Google Scholar] [CrossRef]
  30. Sun, X.; Geng, X.N. Single-machine scheduling with deteriorating effects and machine maintenance. Int. J. Prod. Res. 2019, 57, 3186–3199. [Google Scholar] [CrossRef]
