
Single-Machine Scheduling with Simultaneous Learning Effects and Delivery Times

School of Economics and Management, Shenyang Aerospace University, Shenyang 110136, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(16), 2522; https://doi.org/10.3390/math12162522
Submission received: 30 June 2024 / Revised: 4 August 2024 / Accepted: 12 August 2024 / Published: 15 August 2024

Abstract

This paper studies single-machine scheduling problems with a truncated learning effect, time-dependent processing times, and past-sequence-dependent delivery times. The delivery time is the time required for a job to reach the customer after its processing is complete. The goal is to determine an optimal job schedule that minimizes the total weighted completion time or the maximum tardiness. To solve the general case of these problems, we propose a branch-and-bound algorithm together with several heuristic algorithms, and computational experiments confirm the effectiveness of the given algorithms.

1. Introduction

A common assumption when solving traditional scheduling problems in the field of operations and production management is that the processing time required for each job remains a fixed, predetermined constant. However, this simplistic view often fails to align with the intricate and dynamic processing environments of the real world. Essentially, the assumption of constant processing time ignores a fundamental aspect of the production process: the learning effect, which refers to the phenomenon that, as workers or machines repeatedly perform a job, their proficiency and familiarity with the task gradually increase. Therefore, when constructing a scheduling strategy, in-depth consideration of the impact of the learning effect is key to ensuring that the strategy is both accurate and adaptable to the ever-changing production environment.
In addition to the extensive research on learning effects in scheduling, it is also important to account for the time at which a job reaches the customer, i.e., the delivery time. Delivery time is one of the key metrics for measuring the efficiency of a company’s operations, supply chain management, and customer satisfaction. A shorter delivery time usually means that an enterprise can respond faster to market demand, improve customer satisfaction, and potentially increase its market competitiveness. Therefore, many enterprises are committed to shortening delivery times by optimizing production processes, increasing productivity, improving logistics management, and taking other measures.
However, the learning effect does not continue indefinitely: beyond a certain level, learning efficiency no longer increases significantly and may plateau or even decline slightly. For example, once workers or machines reach a certain level of proficiency or optimization, further improvement is limited by other factors, such as physiological fatigue and machine wear. This observation motivates the truncated learning effect considered in this paper.

2. Related Work

The learning effect scheduling problem has been widely investigated during the last few decades (see Wu et al. [1], Azzouz et al. [2], Sun et al. [3], Lv and Wang [4], Zhao [5], Wang and Wang [6], and Paredes-Astudillo et al. [7]). Recent studies include Ma et al. [8], who investigated a regular single-machine online scheduling problem based on the positional learning effect. Mor et al. [9] explored the application of the learning effect to flow shop scheduling to minimize the makespan, total completion time, and total load. Chen et al. [10] developed a polynomial time algorithm to optimize the total cost, including penalty and investment costs related to the due-window. Zhang et al. [11] considered single-machine resource allocation scheduling with exponential time-dependent learning effects. Lv and Wang [12] considered two-machine flow shop scheduling with a truncated learning effect; for total completion time minimization subject to release dates, they proposed heuristic and branch-and-bound algorithms. Further applications of learning effects in scheduling include the combination of learning and deterioration effects (Lin [13]), the joint study of learning effects and setup times (Zhu et al. [14] and Jiang et al. [15]), position-dependent weights (Wang et al. [16]), machine maintenance (Wu et al. [17]), and other production settings.
The importance of delivery time as a key factor in scheduling problems cannot be overstated. For example, Wang et al. [18] investigated single-machine scheduling problems with past-sequence-dependent delivery times. Ji et al. [19] simultaneously considered the single-machine scheduling problem with controllable processing times and due-window assignment, in which the processing time of a job is a function of the learning effect and resource allocation. In addition, many studies explore the combination of delivery time and due-date or due-window assignment, such as Ahmadizar and Farhadi [20], Zhang et al. [21], and Pan et al. [22]. The joint study of delivery time and the deterioration effect includes Qian and Han [23], who studied the delivery time scheduling problem with the deterioration effect and introduced corresponding polynomial time algorithms, as well as Mao et al. [24,25], Zhao et al. [26], and Lu et al. [27]. Research on the relationship between delivery time and the learning effect includes Ren et al. [28], Wang et al. [29], and Qian et al. [30]. Toksari et al. [31] showed that single-machine scheduling problems with exponential past-sequence-dependent delivery times and a learning effect can be solved in polynomial time for minimizing the makespan, total completion time, total weighted completion time, and maximum tardiness.
The truncated learning effect has also attracted significant attention from scholars. Wang et al. [32] conducted research on single-machine scheduling problems with a truncated learning effect and resource allocation. Li et al. [33] considered flow shop scheduling with general truncated learning effects; for makespan minimization, they proposed a branch-and-bound algorithm and some heuristics. Ren et al. [34], on the other hand, considered both the truncated learning effect and delivery times. Wang and Zhang [35] explored single-machine scheduling problems with a truncated learning effect and past-sequence-dependent (psd) delivery times and provided worst-case ratios for the total weighted completion time and maximum tardiness criteria.
Building on Wang and Zhang [35], this paper introduces a specific non-increasing convex function of the start time t, namely $\frac{1}{t+1}$, into the processing time model, and proposes heuristic, branch-and-bound, Tabu search, and simulated annealing algorithms to find optimal and near-optimal solutions. The structure of this article is as follows: Section 3 describes the studied problem. Section 4 provides the required optimality properties and lower bounds for the two problems. Section 5 provides upper bounds for the two problems. Section 6 introduces several other commonly used heuristic algorithms. Section 7 presents computational experiments. The last section contains some conclusions.

3. Problem Statement

There are n independent, non-preemptive jobs available for processing on a single machine, and all jobs are available at time zero. Let $\tilde{p}_{jr}^{A}$ denote the actual processing time of job $J_j$ scheduled at position r; that is,
$$\tilde{p}_{jr}^{A}(t)=\tilde{p}_{j}\,\hat{\iota}(t)\max\{r^{\delta},\eta\},\quad(1)$$
where r is the position of job $J_j$ in the processing sequence, t is the starting time of job $J_j$, $\tilde{p}_j$ is the basic processing time (i.e., the job processing time without the learning effect) of job $J_j$, $\hat{\iota}(t)$ is a non-increasing convex function of the starting time t satisfying $0<\hat{\iota}(t)\leq 1$, $\hat{\iota}(0)=1$, and $\hat{\iota}'(t)<0$ (where $\hat{\iota}'(t)$ is the derivative of $\hat{\iota}(t)$), $\delta$ ($\delta<0$) is the learning index, and $\eta$ ($0<\eta<1$) is a given truncation parameter. Let $\tilde{q}_j$ denote the past-sequence-dependent delivery time of job $J_j$, which can be denoted by $q_{psd}$. As in Koulamas and Kyparisis [36], $\tilde{q}_j$ is
$$\tilde{q}_{[j]}=\theta\sum_{l=1}^{j-1}\tilde{p}_{[l]},\quad(2)$$
where $\theta\geq 0$. The goal is to find an optimal schedule that minimizes the total weighted completion time $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$, where $\tilde{w}_j$ (resp. $\tilde{C}_j$) is the weight (resp. completion time) of job $J_j$, or the maximum tardiness $\tilde{T}_{\max}=\max_{j=1,\ldots,n}\{\max\{0,\tilde{C}_{j}-\tilde{d}_{j}\}\}$. Using the three-field notation, this problem can be denoted as
$$1\mid \tilde{p}_{jr}^{A}(t)=\tilde{p}_{j}\,\hat{\iota}(t)\max\{r^{\delta},\eta\},\ q_{psd}\mid G,$$
where $G\in\{\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j},\ \tilde{T}_{\max}\}$. In this paper, we only consider the special case $\hat{\iota}(t)=\frac{1}{t+1}$.
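To make the model concrete, the following sketch (a minimal illustration, not code from the paper) evaluates both criteria for a given job sequence with $\hat{\iota}(t)=\frac{1}{t+1}$; all names are illustrative, and the assumption that the delivery time accumulates the basic processing times of the preceding jobs follows Equation (2) as reconstructed above.

```python
def evaluate_schedule(seq, p, w, d, delta, eta, theta):
    """Return (total weighted completion time, maximum tardiness) of `seq`.

    p, w, d map a job index to its basic processing time, weight, and due
    date; the actual processing time at position r with start time t is
    p[j] * (1 / (t + 1)) * max(r ** delta, eta), per Equation (1).
    """
    t = 0.0          # machine completion time (start time of the next job)
    q_base = 0.0     # sum of basic processing times of the jobs already done
    total_wc, t_max = 0.0, 0.0
    for r, j in enumerate(seq, start=1):
        t += p[j] * (1.0 / (t + 1.0)) * max(r ** delta, eta)
        completion = t + theta * q_base   # Equation (2): psd delivery time
        total_wc += w[j] * completion
        t_max = max(t_max, completion - d[j], 0.0)
        q_base += p[j]
    return total_wc, t_max
```

For example, with p = [3, 1, 2], w = [1, 2, 1], d = [2, 4, 6], delta = -0.25, eta = 0.5, and theta = 1, the call evaluate_schedule([1, 2, 0], p, w, d, -0.25, 0.5, 1) returns both criteria for the sequence (J2, J3, J1).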

4. Lower Bounds

4.1. Optimal Properties

The following properties can be established under anti-consistency (agreeable) conditions on the job parameters.
Lemma 1
(Wang and Zhang [35]). For the problem $1\mid \tilde{p}_{jr}^{A}(t)=\tilde{p}_{j}\,\hat{\iota}(t)\max\{r^{\delta},\eta\},\ q_{psd}\mid \sum\tilde{w}_{j}\tilde{C}_{j}$, if $\tilde{p}_{k}\leq\tilde{p}_{h}$ implies $\tilde{w}_{k}\geq\tilde{w}_{h}$, an optimal job sequence $\pi^{*}$ can be obtained by sequencing the jobs in non-decreasing order of $\tilde{p}_{k}/\tilde{w}_{k}$ (the WSPT rule).
Lemma 2
(Wang and Zhang [35]). For the problem $1\mid \tilde{p}_{jr}^{A}(t)=\tilde{p}_{j}\,\hat{\iota}(t)\max\{r^{\delta},\eta\},\ q_{psd}\mid \tilde{T}_{\max}$, if $\tilde{p}_{k}\leq\tilde{p}_{h}$ implies $\tilde{d}_{k}\leq\tilde{d}_{h}$, an optimal job sequence $\pi^{*}$ can be obtained by sequencing the jobs in non-decreasing order of $\tilde{d}_{k}$ (i.e., the EDD rule).
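Both rules are plain sorts; a minimal sketch (illustrative names, jobs indexed from 0) is:

```python
def wspt_order(p, w):
    # Lemma 1: non-decreasing order of p_k / w_k (the WSPT rule)
    return sorted(range(len(p)), key=lambda k: p[k] / w[k])

def edd_order(d):
    # Lemma 2: non-decreasing order of the due dates d_k (the EDD rule)
    return sorted(range(len(d)), key=lambda k: d[k])
```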

4.2. Criterion $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$

Let $\lambda=(\tilde{\lambda}^{s},\tilde{\lambda}^{u})$ be a sequence in which $\tilde{\lambda}^{s}$ is the scheduled part and $\tilde{\lambda}^{u}$ is the unscheduled part. Assume that there are s jobs in the scheduled part, so the remaining $(n-s)$ jobs all belong to $\tilde{\lambda}^{u}$. The objective function can be divided into a known part and an unknown part, written as
$$W=\sum_{j=1}^{s}\tilde{w}_{j}\tilde{C}_{j}+\sum_{j=s+1}^{n}\tilde{w}_{[j]}\left[\sum_{l=1}^{s}\tilde{p}_{l}^{A}+\sum_{l=s+1}^{j}\tilde{p}_{[l]}\,\frac{\max\{l^{\delta},\eta\}}{\sum_{l=1}^{s}\tilde{p}_{l}^{A}+\sum_{m=s+1}^{l-1}\tilde{p}_{[m]}^{A}+1}+\theta\Big(\sum_{l=1}^{s}\tilde{p}_{l}+\sum_{l=s+1}^{j-1}\tilde{p}_{[l]}\Big)\right].\quad(3)$$
Since $\sum_{j=1}^{s}\tilde{w}_{j}\tilde{C}_{j}$ and $\sum_{l=1}^{s}\tilde{p}_{l}^{A}$ are fixed constants, it follows that
$$W\geq\sum_{j=1}^{s}\tilde{w}_{j}\tilde{C}_{j}+\sum_{j=s+1}^{n}\tilde{w}_{\min}\left[\sum_{l=1}^{s}\tilde{p}_{l}^{A}+\sum_{l=s+1}^{j}\tilde{p}_{[l]}\,\frac{\max\{l^{\delta},\eta\}}{\sum_{l=1}^{s}\tilde{p}_{l}^{A}+\sum_{m=s+1}^{l-1}\tilde{p}_{[m]}^{A}+1}+\theta\Big(\sum_{l=1}^{s}\tilde{p}_{l}+\sum_{l=s+1}^{j-1}\tilde{p}_{[l]}\Big)\right],$$
where $\tilde{w}_{\min}=\min\{\tilde{w}_{s+1},\tilde{w}_{s+2},\ldots,\tilde{w}_{n}\}$. Then, the first lower bound can be calculated as
$$LB^{1}_{\sum\tilde{w}_{j}\tilde{C}_{j}}=\sum_{j=1}^{s}\tilde{w}_{j}\tilde{C}_{j}+\sum_{j=s+1}^{n}\tilde{w}_{\min}\left[\sum_{l=1}^{s}\tilde{p}_{l}^{A}+\sum_{l=s+1}^{j}\tilde{p}_{<l>}\,\frac{\max\{l^{\delta},\eta\}}{\sum_{l=1}^{s}\tilde{p}_{l}^{A}+\sum_{m=s+1}^{l-1}\tilde{p}_{<m>}^{A}+1}+\theta\Big(\sum_{l=1}^{s}\tilde{p}_{l}+\sum_{l=s+1}^{j-1}\tilde{p}_{<l>}\Big)\right],\quad(4)$$
in which $\tilde{p}_{<s+1>}\leq\tilde{p}_{<s+2>}\leq\cdots\leq\tilde{p}_{<n>}$, and $\tilde{p}_{<m>}^{A}$ ($m=s+1,s+2,\ldots,n$) is the actual processing time at the mth position under this order.
Let $\tilde{p}_{\min}=\min\{\tilde{p}_{s+1},\tilde{p}_{s+2},\ldots,\tilde{p}_{n}\}$; then, the second lower bound can be obtained similarly to the first as
$$LB^{2}_{\sum\tilde{w}_{j}\tilde{C}_{j}}=\sum_{j=1}^{s}\tilde{w}_{j}\tilde{C}_{j}+\sum_{j=s+1}^{n}\tilde{w}_{(j)}\left[\sum_{l=1}^{s}\tilde{p}_{l}^{A}+\sum_{l=s+1}^{j}\tilde{p}_{\min}\,\frac{\max\{l^{\delta},\eta\}}{\sum_{l=1}^{s}\tilde{p}_{l}^{A}+\sum_{m=s+1}^{l-1}\tilde{p}_{\min}^{A}+1}+\theta\Big(\sum_{l=1}^{s}\tilde{p}_{l}+\sum_{l=s+1}^{j-1}\tilde{p}_{\min}\Big)\right],\quad(5)$$
where $\tilde{w}_{(s+1)}\geq\tilde{w}_{(s+2)}\geq\cdots\geq\tilde{w}_{(n)}$ according to Lemma 1.
The third lower bound is
$$LB^{3}_{\sum\tilde{w}_{j}\tilde{C}_{j}}=\sum_{j=1}^{s}\tilde{w}_{j}\tilde{C}_{j}+\sum_{j=s+1}^{n}\tilde{w}_{(j)}\left[\sum_{l=1}^{s}\tilde{p}_{l}^{A}+\sum_{l=s+1}^{j}\tilde{p}_{<l>}\,\frac{\max\{l^{\delta},\eta\}}{\sum_{l=1}^{s}\tilde{p}_{l}^{A}+\sum_{m=s+1}^{l-1}\tilde{p}_{<m>}^{A}+1}+\theta\Big(\sum_{l=1}^{s}\tilde{p}_{l}+\sum_{l=s+1}^{j-1}\tilde{p}_{<l>}\Big)\right],\quad(6)$$
where $\tilde{w}_{(s+1)}\geq\cdots\geq\tilde{w}_{(n)}$ and $\tilde{p}_{<s+1>}\leq\cdots\leq\tilde{p}_{<n>}$, and where $\tilde{p}_{<m>}^{A}$ is the actual processing time at the mth position under this order. Note that $\tilde{w}_{(k)}$ and $\tilde{p}_{<k>}$ may not correspond to the same job.
To make the lower bound tighter, the largest of Equations (4)–(6) is selected as the lower bound; that is,
$$LB_{\sum\tilde{w}_{j}\tilde{C}_{j}}=\max\left\{LB^{1}_{\sum\tilde{w}_{j}\tilde{C}_{j}},\,LB^{2}_{\sum\tilde{w}_{j}\tilde{C}_{j}},\,LB^{3}_{\sum\tilde{w}_{j}\tilde{C}_{j}}\right\}.\quad(7)$$
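A sketch of how the first bound (4) can be evaluated at a node is given below; the function name and the way the scheduled part is summarized (its exact weighted completion time, machine completion time, and sum of basic processing times) are assumptions of this illustration.

```python
def lower_bound_wc(scheduled_wc, t_s, q_s, rest_p, rest_w, s, delta, eta, theta):
    """Lower bound (4): exact scheduled part, plus the unscheduled part priced
    at the minimum remaining weight with remaining basic times in SPT order."""
    w_min = min(rest_w)
    t, q, bound = t_s, q_s, scheduled_wc
    for i, p_basic in enumerate(sorted(rest_p)):
        r = s + 1 + i                       # position in the full sequence
        t += p_basic * (1.0 / (t + 1.0)) * max(r ** delta, eta)
        bound += w_min * (t + theta * q)    # completion incl. delivery time
        q += p_basic
    return bound
```

Under the same assumptions, the bounds (5) and (6) are obtained by replacing the sorted basic times with $\tilde{p}_{\min}$ and the constant weight with the non-increasingly sorted remaining weights, respectively; the node's bound is then the maximum of the three, as in Equation (7).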

4.3. Criterion $\tilde{T}_{\max}$

Similarly to the criterion $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$, it follows that
$$\tilde{T}_{[s+1]}=\max\Big\{0,\ \tilde{C}_{s}+\tilde{p}_{[s+1]}\,\frac{\max\{(s+1)^{\delta},\eta\}}{\sum_{l=1}^{s-1}\tilde{p}_{l}^{A}+\tilde{p}_{s}^{A}+1}+\theta\tilde{p}_{s}-\tilde{d}_{[s+1]}\Big\};$$
that is,
$$\tilde{T}_{[s+j]}=\max\Big\{0,\ \tilde{C}_{s}+\sum_{l=1}^{j}\tilde{p}_{[s+l]}\,\frac{\max\{(s+l)^{\delta},\eta\}}{\sum_{l=1}^{s-1}\tilde{p}_{l}^{A}+\sum_{m=s}^{s+l-1}\tilde{p}_{[m]}^{A}+1}+\theta\sum_{m=s}^{s+j-1}\tilde{p}_{[m]}-\tilde{d}_{[s+j]}\Big\}.$$
Thus, defining $\tilde{d}_{\max}=\max\{\tilde{d}_{s+1},\tilde{d}_{s+2},\ldots,\tilde{d}_{n}\}$, the following lower bound can be obtained:
$$LB^{1}_{\tilde{T}_{\max}}=\max\Big\{\max\{0,\tilde{C}_{1}-\tilde{d}_{1}\},\ldots,\max\{0,\tilde{C}_{s}-\tilde{d}_{s}\},\ \max\Big\{0,\ \tilde{C}_{s}+\sum_{l=1}^{n-s}\tilde{p}_{<s+l>}\,\frac{\max\{(s+l)^{\delta},\eta\}}{\sum_{l=1}^{s-1}\tilde{p}_{l}^{A}+\sum_{m=s}^{s+l-1}\tilde{p}_{<m>}^{A}+1}+\theta\sum_{m=s}^{n-1}\tilde{p}_{<m>}-\tilde{d}_{\max}\Big\}\Big\},\quad(8)$$
where $\tilde{p}_{<s+1>}\leq\tilde{p}_{<s+2>}\leq\cdots\leq\tilde{p}_{<n>}$ can be obtained by Lemma 2, and $\tilde{p}_{<m>}^{A}$ ($m=s+1,s+2,\ldots,n$) is the actual processing time at the mth position under this order.
The second lower bound can also be calculated as
$$LB^{2}_{\tilde{T}_{\max}}=\max_{j=1,\ldots,n-s}\Big\{\max\{0,\tilde{C}_{1}-\tilde{d}_{1}\},\ldots,\max\{0,\tilde{C}_{s}-\tilde{d}_{s}\},\ \max\Big\{0,\ \tilde{C}_{s}+\sum_{l=1}^{j}\tilde{p}_{<s+l>}\,\frac{\max\{(s+l)^{\delta},\eta\}}{\sum_{l=1}^{s-1}\tilde{p}_{l}^{A}+\sum_{m=s}^{s+l-1}\tilde{p}_{<m>}^{A}+1}+\theta\sum_{m=s}^{s+j-1}\tilde{p}_{<m>}-\tilde{d}_{(s+j)}\Big\}\Big\},\quad(9)$$
where $\tilde{p}_{<s+1>}\leq\cdots\leq\tilde{p}_{<n>}$, $\tilde{d}_{(s+1)}\leq\tilde{d}_{(s+2)}\leq\cdots\leq\tilde{d}_{(n)}$, and $\tilde{p}_{<m>}^{A}$ is the actual processing time at the mth position under the order $\tilde{p}_{<s+1>}\leq\cdots\leq\tilde{p}_{<n>}$. Note that $\tilde{p}_{<k>}$ and $\tilde{d}_{(k)}$ may not correspond to the same job.
To make the lower bound tighter, the larger of Equations (8) and (9) is selected as the lower bound, which can be denoted as
$$LB_{\tilde{T}_{\max}}=\max\left\{LB^{1}_{\tilde{T}_{\max}},\,LB^{2}_{\tilde{T}_{\max}}\right\}.\quad(10)$$

5. Upper Bounds

The following methods are given to calculate the upper bounds that complement the lower bounds derived above.

5.1. Criterion $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$

Firstly, the upper bound (UB) calculation method for $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$, based on Lemma 1, is given as Algorithm 1.
Algorithm 1: Upper Bound for $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$
Step 1. Obtain the sequence $\pi^{1}$ by sorting the jobs in non-decreasing order of $\tilde{p}_{k}$ ($k=1,2,\ldots,n$);
Step 2. Obtain the sequence $\pi^{2}$ by sorting the jobs in non-increasing order of $\tilde{w}_{k}$ ($k=1,2,\ldots,n$);
Step 3. Obtain the sequence $\pi^{3}$ by sorting the jobs in non-decreasing order of $\tilde{p}_{k}/\tilde{w}_{k}$ ($k=1,2,\ldots,n$);
Step 4. Among the sequences from Steps 1–3, select the one with the smallest objective value $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$ as the initial sequence $\pi^{0}$;
Step 5. Set s = 2. Take the first two jobs of $\pi^{0}$ and select the better of the two possible sequences;
Step 6. Set s = s + 1. Insert the sth job of $\pi^{0}$ into each of the s possible positions of the current partial sequence and keep the best resulting partial sequence. Next, generate all partial sequences obtained by interchanging the jobs in positions k and j of this partial sequence for all k, j with 1 ≤ k < j ≤ s, and select, among these s(s − 1)/2 candidates, the partial sequence with the minimum value of $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$;
Step 7. If s = n, then stop; otherwise, return to Step 6.
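A compact sketch of Algorithm 1 is given below, reusing the evaluate_schedule routine sketched in Section 3; the names and the exact neighborhood bookkeeping are illustrative.

```python
def upper_bound_wc(p, w, d, delta, eta, theta):
    """Algorithm 1 sketch: best of three seed orders, then insertion plus
    pairwise-interchange improvement, growing the sequence one job at a time."""
    jobs = list(range(len(p)))
    obj = lambda seq: evaluate_schedule(seq, p, w, d, delta, eta, theta)[0]
    seeds = [sorted(jobs, key=lambda k: p[k]),           # Step 1: SPT
             sorted(jobs, key=lambda k: -w[k]),          # Step 2: weight order
             sorted(jobs, key=lambda k: p[k] / w[k])]    # Step 3: WSPT
    pi0 = min(seeds, key=obj)                            # Step 4
    best = min([pi0[:2], pi0[1::-1]], key=obj)           # Step 5
    for s in range(2, len(jobs)):                        # Steps 6 and 7
        job = pi0[s]
        best = min((best[:k] + [job] + best[k:] for k in range(s + 1)), key=obj)
        swaps = [best]                                   # include the no-swap case
        for k in range(len(best) - 1):
            for j in range(k + 1, len(best)):
                cand = best[:]
                cand[k], cand[j] = cand[j], cand[k]
                swaps.append(cand)
        best = min(swaps, key=obj)
    return best, obj(best)
```

The $\tilde{T}_{\max}$ variant (Algorithm 2 below) differs only in its seed orders (SPT and EDD) and in taking the second component returned by evaluate_schedule.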

5.2. Criterion $\tilde{T}_{\max}$

The upper bound (UB) for $\tilde{T}_{\max}$ is calculated in the same way; based on Lemma 2, the following algorithm (Algorithm 2) is obtained:
Algorithm 2: Upper Bound for $\tilde{T}_{\max}$
Step 1. Obtain the sequence $\pi^{1}$ by sorting the jobs in non-decreasing order of $\tilde{p}_{k}$ ($k=1,2,\ldots,n$);
Step 2. Obtain the sequence $\pi^{2}$ by sorting the jobs in non-decreasing order of $\tilde{d}_{k}$ ($k=1,2,\ldots,n$);
Step 3. Among the sequences from Steps 1–2, select the one with the smallest objective value $\tilde{T}_{\max}$ as the initial sequence $\pi^{0}$;
Step 4. Set s = 2. Take the first two jobs of $\pi^{0}$ and select the better of the two possible sequences;
Step 5. Set s = s + 1. Insert the sth job of $\pi^{0}$ into each of the s possible positions of the current partial sequence and keep the best resulting partial sequence. Next, generate all partial sequences obtained by interchanging the jobs in positions k and j of this partial sequence for all k, j with 1 ≤ k < j ≤ s, and select, among these s(s − 1)/2 candidates, the partial sequence with the minimum value of $\tilde{T}_{\max}$;
Step 6. If s = n, then stop; otherwise, return to Step 5.

5.3. Branch-and-Bound Algorithm

Based on the above lower and upper bounds, a branch-and-bound (B&B) algorithm whose central idea is implicit enumeration is proposed. The algorithm follows a depth-first strategy, assigning jobs in a forward manner starting from the first position (each node assigns a job to a position). It can be used as an exact procedure for the problems posed in this study, following the steps described below (Algorithm 3).
Algorithm 3: Branch-and-Bound
Step 1. (Upper bound) Calculate the initial sequence and the upper bound by Algorithm 1 for $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$ (Algorithm 2 for $\tilde{T}_{\max}$);
Step 2. (Bounding) Calculate the lower bound $LB_{\sum\tilde{w}_{j}\tilde{C}_{j}}$ or $LB_{\tilde{T}_{\max}}$ (see Equations (7) and (10)) for the node. If the lower bound of a node exceeds the current upper bound, the node and all of its descendants are eliminated; whenever a complete sequence improves on the incumbent, it replaces the current solution;
Step 3. (Termination) Continue until all nodes have been explored.
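A depth-first sketch of Algorithm 3 for the $\sum\tilde{w}_{j}\tilde{C}_{j}$ criterion follows, reusing the earlier sketches; the node representation (a prefix list plus a set of remaining jobs) is an illustrative choice.

```python
def branch_and_bound_wc(p, w, d, delta, eta, theta):
    """Algorithm 3 sketch: DFS over partial sequences with lower-bound pruning."""
    best_seq, best_val = upper_bound_wc(p, w, d, delta, eta, theta)  # Step 1

    def node_lb(prefix, rest):
        # exact objective of the prefix plus the Section 4 bound on the rest
        t = q = wc = 0.0
        for r, j in enumerate(prefix, start=1):
            t += p[j] * (1.0 / (t + 1.0)) * max(r ** delta, eta)
            wc += w[j] * (t + theta * q)
            q += p[j]
        if not rest:
            return wc
        return lower_bound_wc(wc, t, q, [p[j] for j in rest],
                              [w[j] for j in rest], len(prefix),
                              delta, eta, theta)

    def visit(prefix, rest):
        nonlocal best_seq, best_val
        if not rest:                          # complete sequence: new incumbent
            best_seq, best_val = prefix, node_lb(prefix, rest)
            return
        for j in sorted(rest):
            if node_lb(prefix + [j], rest - {j}) < best_val:   # Step 2: prune
                visit(prefix + [j], rest - {j})

    visit([], frozenset(range(len(p))))        # Step 3: explore all nodes
    return best_seq, best_val
```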

6. Other Heuristic Algorithms

In addition to the algorithms mentioned above, this study also employs the following heuristic algorithms, which are widely used for scheduling problems.

6.1. Tabu Search

In this subsection, a Tabu search (TS) algorithm is used to find a near-optimal solution. Tabu search is a metaheuristic designed to escape local optima. The initial sequence used in the TS algorithm is obtained from Algorithm 1 (Algorithm 2 for $\tilde{T}_{\max}$), and the maximum number of iterations is set to 100n, where n is the number of jobs. As in Wu et al. [37] and Lv et al. [38], the implementation of the TS algorithm is given below (Algorithm 4):
Algorithm 4: Tabu Search
Step 1. Let the tabu list be empty and the iteration number be 0;
Step 2. Let the sequence obtained from Algorithm 1 for $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$ (Algorithm 2 for $\tilde{T}_{\max}$) be the initial sequence and record its objective value. Set the current schedule as the best solution $\pi^{*}$;
Step 3. Search the neighborhood of the current schedule and determine whether it contains a schedule $\pi^{**}$ that has the smallest objective value among the neighbors and is not in the tabu list;
Step 4. If $\pi^{**}$ is better than $\pi^{*}$, set $\pi^{*}=\pi^{**}$. Update the tabu list and the number of iterations;
Step 5. If no schedule in the neighborhood is outside the tabu list, or if the maximum number of iterations is reached, output the local optimal sequence $\pi^{*}$ and the corresponding objective value. Otherwise, update the tabu list and return to Step 3.
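A sketch of Algorithm 4 follows; the swap neighborhood, the tabu attribute (recently swapped position pairs), and the tenure of 7 are illustrative choices not fixed by the text, while the 100n iteration cap follows the description above.

```python
from collections import deque

def tabu_search(seq0, objective, n_jobs, tenure=7):
    """Algorithm 4 sketch: best-improvement swap moves with a tabu list."""
    cur = best = list(seq0)
    best_val = objective(best)
    tabu = deque(maxlen=tenure)               # recently swapped position pairs
    for _ in range(100 * n_jobs):             # iteration cap from the paper
        move, cand, cand_val = None, None, float("inf")
        for k in range(len(cur) - 1):         # Step 3: scan the neighborhood
            for j in range(k + 1, len(cur)):
                if (k, j) in tabu:
                    continue
                nxt = cur[:]
                nxt[k], nxt[j] = nxt[j], nxt[k]
                val = objective(nxt)
                if val < cand_val:
                    move, cand, cand_val = (k, j), nxt, val
        if cand is None:                      # Step 5: whole neighborhood tabu
            break
        cur = cand
        tabu.append(move)
        if cand_val < best_val:               # Step 4: keep the incumbent
            best, best_val = cand, cand_val
    return best, best_val
```

Here objective can be, for example, lambda s: evaluate_schedule(s, p, w, d, delta, eta, theta)[0] for the weighted completion time criterion.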

6.2. Simulated Annealing

Simulated annealing (SA) is another metaheuristic that can escape local optima; its initial sequence is also obtained from Algorithm 1 (resp. Algorithm 2) for $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$ (resp. $\tilde{T}_{\max}$). The specific steps are as follows (Algorithm 5):
Algorithm 5: Simulated Annealing
Step 1. Calculate the initial sequence by Algorithm 1 for $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$ (Algorithm 2 for $\tilde{T}_{\max}$);
Step 2. Use the pairwise-interchange neighborhood generation method to obtain new solutions;
Step 3. (Acceptance probability) If the objective value of the new schedule is smaller than that of the current schedule, it is accepted automatically. If the new objective value is larger, the schedule may still be accepted, with a probability that decreases as the process progresses. The acceptance probability is determined by the exponential function
$$P(\mathrm{accept})=\exp\big(-a\times\Delta\textstyle\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}\big)\quad\Big(\text{or}\ P(\mathrm{accept})=\exp\big(-a\times\Delta\tilde{T}_{\max}\big)\Big),$$
where a is a control parameter and $\Delta\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$ ($\Delta\tilde{T}_{\max}$) is the change in the objective value. The parameter a is updated at the lth iteration as
$$a=l^{\vartheta},$$
where $\vartheta$ is an experimental constant; in this experiment, $\vartheta=1$. If the objective $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$ ($\tilde{T}_{\max}$) increases as a result of a random pairwise interchange, the new sequence is accepted when $P(\mathrm{accept})>\varepsilon$, where $\varepsilon$ is randomly sampled from the uniform distribution on [0, 1);
Step 4. (Stopping condition) Our preliminary trials indicated that the quality of the solution stabilizes after 300n iterations (see Wu et al. [37]), so the search stops after 300n iterations.
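A sketch of Algorithm 5 follows; the negative sign in the exponent is assumed so that the acceptance probability stays below one and decreases as a = l^ϑ grows, consistent with the description above.

```python
import math
import random

def simulated_annealing(seq0, objective, n_jobs, vartheta=1.0):
    """Algorithm 5 sketch: random pairwise interchanges with acceptance
    probability exp(-a * delta_obj), where a = l ** vartheta at iteration l."""
    cur, cur_val = list(seq0), objective(seq0)
    best, best_val = cur, cur_val
    for l in range(1, 300 * n_jobs + 1):           # Step 4: 300n iteration budget
        k, j = random.sample(range(len(cur)), 2)   # Step 2: pairwise interchange
        nxt = cur[:]
        nxt[k], nxt[j] = nxt[j], nxt[k]
        val = objective(nxt)
        delta_obj = val - cur_val
        # Step 3: always accept improvements; accept a worse schedule when
        # P(accept) = exp(-(l ** vartheta) * delta_obj) exceeds a uniform draw
        if delta_obj <= 0 or math.exp(-(l ** vartheta) * delta_obj) > random.random():
            cur, cur_val = nxt, val
            if cur_val < best_val:
                best, best_val = cur, cur_val
    return best, best_val
```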

7. Computational Experiments

To show the efficiency of each algorithm, we set $\hat{\iota}(t)=\frac{1}{t+1}$ and performed computational experiments. The procedures were coded in Visual Studio 2022 (v17.1.0) and run on a HUAWEI (Shenzhen, China) personal computer with an Intel® Core™ i5-7200U CPU @ 2.50–2.70 GHz, 4.00 GB of RAM, and the Windows 10 operating system. By systematically varying the parameter values and observing their effect on the output, we gradually narrowed the feasible range of parameters until we found the intervals that produced stable, reliable, and expected results (see Table 1).
Based on the given data, we conducted small-scale data simulations. The results are summarized in Tables 2–9: Tables 2–5 report the mean and maximum values of the CPU time (in milliseconds) and of the relative error, while Tables 6–9 report the corresponding p99 values, where p99 denotes the data point at the 99th percentile position after the data set is sorted in ascending order. The relative error of the solution produced by UB (TS, SA) is calculated as
$$\frac{\tilde{Q}(X)-\tilde{Q}^{*}}{\tilde{Q}^{*}}\times 100\%,$$
where $\tilde{Q}(X)$ is the objective function value generated by algorithm $X\in\{UB,TS,SA\}$, and $\tilde{Q}^{*}$ is the optimal value generated by the B&B algorithm.
From Tables 2 and 3, for $\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$, the CPU time grows with n, and the UB and SA algorithms are substantially faster than the B&B and TS algorithms. Specifically, for n = 10–11 the TS algorithm requires a longer CPU time than the B&B algorithm, whereas for n = 12–14 the TS algorithm outperforms the B&B algorithm in terms of CPU time. Additionally, the UB, TS, and SA algorithms perform very well in terms of error (all 0%). Based on these observations, it can be inferred that, when $\tilde{p}_{k},\tilde{w}_{k}\in[1,50]$ or $[1,100]$, the UB, TS, and SA algorithms perform remarkably well.
From Tables 4 and 5, for $\tilde{T}_{\max}$, the CPU times of the UB, SA, and TS algorithms are much shorter than that of the B&B algorithm. Comparing the tables, the maximum error of the UB (TS, SA) algorithm is less than 2.8% (1.55%, 2.78%). It can be concluded that, for the $\tilde{T}_{\max}$ criterion with parameters drawn from [1, 50] or [1, 100], the algorithms (i.e., UB, TS, and SA) also perform well.
Overall, when the parameters are drawn from [1, 50] or [1, 100], the UB, TS, and SA algorithms perform remarkably well. Moreover, the computational results also show that the UB algorithm has a shorter CPU time than the TS and SA algorithms, but its error performance is not as good as that of the TS and SA algorithms.
Based on the data obtained above, Figure 1 compares the CPU times of the different algorithms for $\tilde{Q}=\tilde{T}_{\max}$ ($\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$), where $\tilde{p}_{k},\tilde{w}_{k}\in(1,50)/(1,100)$. The UB algorithm clearly stands out with the shortest running time, never exceeding 5 ms. The SA algorithm also has a relatively short running time, with a maximum of only 70 ms, and it performs well in terms of error. Among the three, the TS algorithm has the longest CPU time, reaching 30,211 ms.
For further investigation of the performance of the UB, SA, and TS algorithms, statistical test results are presented in Table 10. As the results in Tables 4 and 5 suggest that SA and TS potentially outperform UB, statistical hypothesis tests are implemented to compare the effectiveness of SA, TS, and UB. As an example, the $\tilde{T}_{\max}$ instances with parameters in [1, 100] for n = 11–14 are considered. The t-test statistics are
$$t_{1}=\frac{\bar{X}_{UB}-\bar{X}_{SA}}{S_{w}\sqrt{1/m_{UB}+1/m_{SA}}},\qquad t_{2}=\frac{\bar{X}_{UB}-\bar{X}_{TS}}{S_{w}\sqrt{1/m_{UB}+1/m_{TS}}},$$
where $S_{w}^{2}=\frac{(m_{UB}-1)S_{UB}^{2}+(m_{SA(TS)}-1)S_{SA(TS)}^{2}}{m_{UB}+m_{SA(TS)}-2}$ and $\bar{X}$ denotes the mean error. The corresponding hypotheses are configured as $H_{0}:\mu_{UB}>\mu_{SA}$ versus $H_{1}:\mu_{UB}\leq\mu_{SA}$, and $H_{3}:\mu_{UB}>\mu_{TS}$ versus $H_{4}:\mu_{UB}\leq\mu_{TS}$. A type I error of 1% is used, with $t_{critical}=2.5$. The results in Table 10 show that the hypotheses $H_{0}:\mu_{UB}>\mu_{SA}$ and $H_{3}:\mu_{UB}>\mu_{TS}$ cannot be rejected statistically at the 1% level.
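For reference, the pooled two-sample t statistic used for Table 10 can be computed as in the following sketch, assuming the pooled denominator is $m_{UB}+m_{SA(TS)}-2$ as in the formula above.

```python
import math

def pooled_t(errors_ub, errors_alt):
    """t statistic comparing mean relative errors of UB and of SA (or TS)."""
    m1, m2 = len(errors_ub), len(errors_alt)
    mean1 = sum(errors_ub) / m1
    mean2 = sum(errors_alt) / m2
    var1 = sum((x - mean1) ** 2 for x in errors_ub) / (m1 - 1)
    var2 = sum((x - mean2) ** 2 for x in errors_alt) / (m2 - 1)
    s_w = math.sqrt(((m1 - 1) * var1 + (m2 - 1) * var2) / (m1 + m2 - 2))
    return (mean1 - mean2) / (s_w * math.sqrt(1 / m1 + 1 / m2))
```

A computed t above $t_{critical}=2.5$ supports retaining the corresponding hypothesis that the mean UB error exceeds that of SA (or TS).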

8. Conclusions

This article investigates learning effect scheduling problems with past-sequence-dependent delivery times, where the processing time of a job depends on a non-increasing convex function of its starting time combined with a truncated learning effect. The aim is to find the optimal job sequence minimizing the total weighted completion time or the maximum tardiness. Analysis shows that both problems are NP-hard in the general case. The corresponding lower bounds are derived from the proposed optimality properties, and heuristic algorithms are provided to calculate upper bounds that complement them. The branch-and-bound algorithm and two other metaheuristics commonly used for scheduling problems are applied under the given bounds, and the effectiveness of the proposed algorithms is verified by data simulations. Future research could extend this work to flow shop scheduling (see Rossit et al. [39], Panwalkar and Koulamas [40], Koulamas and Kyparisis [41], Khatami et al. [42], and Lv and Wang [43]) and to other optimization problems with deteriorating jobs (see Huang [44], Liu et al. [45], and Sun et al. [46]).

Author Contributions

Methodology, Z.L. and J.-B.W.; writing—original draft preparation, Z.L.; writing—review and editing, Z.L. and J.-B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Science Research Foundation of the Educational Department of Liaoning Province (JYTMS20230278) and the Fundamental Research Funds for the Universities of Liaoning Province.

Data Availability Statement

The data used to support the findings of this paper are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wu, C.-C.; Yin, Y.; Cheng, S.-R. Single-machine and two-machine flowshop scheduling problems with truncated position-based learning functions. J. Oper. Res. Soc. 2013, 64, 147–156. [Google Scholar] [CrossRef]
  2. Azzouz, A.; Ennigrou, M.; Said, L.B. Scheduling problems under learning effects: Classification and cartography. Int. J. Prod. Res. 2018, 56, 1642–1661. [Google Scholar] [CrossRef]
  3. Sun, X.Y.; Geng, X.-N.; Liu, F. Flow shop scheduling with general position weighted learning effects to minimise total weighted completion time. J. Oper. Res. Soc. 2021, 72, 2674–2689. [Google Scholar] [CrossRef]
  4. Lv, D.-Y.; Wang, J.-B. Study on resource-dependent no-wait flow shop scheduling with different due-window assignment and learning effects. Asia-Pac. J. Oper. Res. 2021, 38, 2150008. [Google Scholar] [CrossRef]
  5. Zhao, S. Scheduling jobs with general truncated learning effects including proportional setup times. Comput. Appl. Math. 2022, 41, 146. [Google Scholar] [CrossRef]
  6. Wang, Y.-C.; Wang, J.-B. Study on convex resource allocation scheduling with a time-dependent learning effect. Mathematics 2023, 11, 3179. [Google Scholar] [CrossRef]
  7. Paredes-Astudillo, Y.A.; Botta-Genoulaz, V.; MontoyaTorres, J.R. Impact of learning effect modelling in flowshop scheduling with makespan minimisation based on the Nawaz-Enscore-Ham algorithm. Int. J. Prod. Res. 2024, 62, 1999–2014. [Google Scholar] [CrossRef]
  8. Ma, S.; Guo, S.; Zhang, X. An optimal online algorithm for single-processor scheduling problem with learning effect. Theor. Comput. Sci. 2022, 928, 1–12. [Google Scholar] [CrossRef]
  9. Mor, B.; Mosheiov, G.; Shapira, D. Flowshop scheduling with learning effect and job rejection. J. Sched. 2020, 23, 631–641. [Google Scholar] [CrossRef]
  10. Chen, K.; Han, S.Q.; Huang, H.L.; Ji, M. A group-dependent due window assignment scheduling problem with controllable learning effect. Asia-Pac. J. Oper. Res. 2023, 40, 2250025. [Google Scholar] [CrossRef]
  11. Zhang, Y.; Sun, X.-Y.; Liu, T.; Wang, J.Y.; Geng, X.-N. Single-machine scheduling simultaneous consideration of resource allocations and exponential time-dependent learning effects. J. Oper. Res. Soc. 2024. [Google Scholar] [CrossRef]
  12. Lv, D.-Y.; Wang, J.-B. Research on two-machine flow shop scheduling problem with release dates and truncated learning effects. Eng. Optim. 2024. [Google Scholar] [CrossRef]
  13. Lin, S.S. Due-window assignment scheduling with learning and deterioration effects. J. Ind. Manag. Optim. 2022, 18, 2567–2578. [Google Scholar] [CrossRef]
  14. Zhu, Z.G.; Chu, F.; Yu, Y.G.; Sun, L.Y. Single-machine past-sequence-dependent setup times scheduling with resource allocation and learning effect. RAIRO Oper. Res. 2016, 50, 733–748. [Google Scholar] [CrossRef]
  15. Jiang, Y.J.; Zhang, Z.; Gong, X.; Yin, Y. An exact solution method for solving seru scheduling problems with past-sequence-dependent setup time and learning effect. Comput. Ind. Eng. 2021, 158, 107354. [Google Scholar] [CrossRef]
  16. Wang, J.-B.; Lv, D.-Y.; Wan, C. Proportionate flow shop scheduling with job-dependent due windows and position-dependent weights. Asia-Pac. J. Oper. Res. 2024, 2450011. [Google Scholar] [CrossRef]
  17. Wu, W.; Lv, D.-Y.; Wang, J.-B. Two due-date assignment scheduling with location-dependent weights and a deteriorating maintenance activity. Systems 2023, 11, 150. [Google Scholar] [CrossRef]
  18. Wang, J.-B.; Cui, B.; Ji, P.; Liu, W.W. Research on single-machine scheduling with position-dependent weights and past-sequence-dependent delivery times. J. Comb. Optim. 2021, 41, 290–303. [Google Scholar] [CrossRef]
  19. Ji, M.; Yao, D.L.; Ge, J.J.; Cheng, T.C.E. Single-machine slack due-window assignment and scheduling with past-sequence-dependent delivery times and controllable job processing times. Eur. J. Ind. Eng. 2015, 9, 794–818. [Google Scholar] [CrossRef]
  20. Ahmadizar, F.; Farhadi, S. Single-machine batch delivery scheduling with job release dates, due windows and earliness, tardiness, holding and delivery costs. Comput. Oper. Res. 2015, 53, 194–205. [Google Scholar] [CrossRef]
  21. Zhang, C.; Li, Y.T.; Cao, J.H.; Yang, Z.; Coelho, L.C. Exact and matheuristic methods for the parallel machine scheduling and location problem with delivery time and due date. Comput. Oper. Res. 2022, 147, 105936. [Google Scholar] [CrossRef]
  22. Pan, L.; Sun, X.; Wang, J.-B.; Zhang, L.H.; Lv, D.-Y. Due date assignment single-machine scheduling with delivery times, position-dependent weights and deteriorating jobs. J. Comb. Optim. 2023, 45, 100. [Google Scholar] [CrossRef]
  23. Qian, J.; Han, H. The due date assignment scheduling problem with the deteriorating jobs and delivery time. J. Appl. Math. Comput. 2021, 68, 1–14. [Google Scholar] [CrossRef]
  24. Mao, R.-R.; Wang, Y.-C.; Lv, D.-Y.; Wang, J.-B.; Lu, Y.-Y. Delivery times scheduling with deterioration effects in due window assignment environments. Mathematics 2023, 11, 3983. [Google Scholar] [CrossRef]
  25. Mao, R.-R.; Lv, D.-Y.; Ren, N.; Wang, J.-B. Supply chain scheduling with deteriorating jobs and delivery times. J. Appl. Math. Comput. 2024, 70, 2285–2312. [Google Scholar] [CrossRef]
  26. Zhao, T.; Lu, S.J.; Cheng, H.; Ren, M.Y.; Liu, X.B. Coordinated production and delivery scheduling for service-oriented manufacturing systems with deterioration effect. Int. J. Prod. Res. 2023. [Google Scholar] [CrossRef]
  27. Lu, Y.-Y.; Zhang, S.; Tao, J.-Y. Earliness-tardiness scheduling with delivery times and deteriorating jobs. Asia-Pac. J. Oper. Res. 2024, 2450009. [Google Scholar] [CrossRef]
  28. Ren, N.; Lv, D.-Y.; Wang, J.-B.; Wang, X.-Y. Solution algorithms for single-machine scheduling with learning effects and exponential past-sequence-dependent delivery times. J. Ind. Manag. Optim. 2023, 19, 8429–8450. [Google Scholar] [CrossRef]
  29. Wang, S.-H.; Lv, D.-Y.; Wang, J.-B. Research on position-dependent weights scheduling with delivery times and truncated sum-of-processing-times-based learning effect. J. Ind. Manag. Optim. 2023, 19, 2824–2837. [Google Scholar] [CrossRef]
  30. Qian, J.; Chang, G.; Zhang, X. Single-machine common due-window assignment and scheduling with position-dependent weights, delivery time, learning effect and resource allocations. J. Appl. Math. Comput. 2024, 70, 1965–1994. [Google Scholar] [CrossRef]
  31. Toksari, M.D.; Aydogan, E.K.; Atalay, B.; Sari, S. Some scheduling problems with sum of logarithm processing times based learning effect and exponential past sequence dependent delivery times. J. Ind. Manag. Optim. 2022, 18, 1795–1807. [Google Scholar] [CrossRef]
  32. Wang, X.Y.; Liu, W.; Li, L.; Zhao, P.Z.; Zhang, R.F. Resource dependent scheduling with truncated learning effects. Math. Biosci. Eng. 2022, 19, 5957–5967. [Google Scholar] [CrossRef] [PubMed]
  33. Li, M.-H.; Lv, D.-Y.; Zhang, L.-H.; Wang, J.-B. Permutation flow shop scheduling with makespan objective and truncated learning effects. J. Appl. Math. Comput. 2024, 70, 2907–2939. [Google Scholar] [CrossRef]
  34. Ren, N.; Wang, J.-B.; Wang, E.S. Research on delivery times with truncated learning effects. Comput. Appl. Math. 2023, 42, 243. [Google Scholar] [CrossRef]
  35. Wang, S.Z.; Zhang, X.G. The supply chain scheduling problem based on truncated learning effect and time dependence. J. Southwest Univ. 2020, 42, 44–50. (In Chinese) [Google Scholar]
  36. Koulamas, C.; Kyparisis, G.J. Single-machine scheduling problems with past-sequence-dependent delivery times. Int. J. Prod. Econ. 2010, 126, 264–266. [Google Scholar] [CrossRef]
  37. Wu, C.-C.; Wu, W.-H.; Wu, W.-H.; Hsu, P.-H.; Yin, Y.; Xu, J. A single-machine scheduling with a truncated linear deterioration and ready times. Inform. Sci. 2014, 256, 109–125. [Google Scholar] [CrossRef]
  38. Lv, Z.-G.; Zhang, L.-H.; Wang, X.-Y.; Wang, J.-B. Single machine scheduling proportionally deteriorating jobs with ready times subject to the total weighted completion time minimization. Mathematics 2024, 12, 610. [Google Scholar] [CrossRef]
  39. Rossit, D.A.; Tohmé, F.; Frutos, M. The non-permutation flow-shop scheduling problem: A literature review. Omega 2018, 77, 143–153. [Google Scholar] [CrossRef]
  40. Panwalkar, S.S.; Koulamas, C. Analysis of flow shop scheduling anomalies. Eur. J. Oper. Res. 2020, 280, 25–33. [Google Scholar] [CrossRef]
  41. Koulamas, C.; Kyparisis, G.J. Flow shop scheduling with two distinct job due dates. Comput. Indus. Eng. 2022, 163, 107835. [Google Scholar] [CrossRef]
  42. Khatami, M.; Salehipour, A.; Cheng, T.C.E. Flow-shop scheduling with exact delays to minimize makespan. Comput. Indus. Eng. 2023, 183, 109456. [Google Scholar] [CrossRef]
  43. Lv, D.-Y.; Wang, J.-B. No-idle flow shop scheduling with deteriorating jobs and common due date under dominating machines. Asia-Pac. J. Oper. Res. 2024, 2450003. [Google Scholar] [CrossRef]
  44. Huang, X. Bicriterion scheduling with group technology and deterioration effect. J. Appl. Math. Comput. 2019, 60, 455–464. [Google Scholar] [CrossRef]
  45. Liu, F.; Yang, J.; Lu, Y.-Y. Solution algorithms for single-machine group scheduling with ready times and deteriorating jobs. Eng. Optim. 2019, 51, 862–874. [Google Scholar] [CrossRef]
  46. Sun, X.Y.; Liu, T.; Geng, X.-N.; Hu, Y.; Xu, J.-X. Optimization of scheduling problems with deterioration effects and an optional maintenance activity. J. Sched. 2023, 26, 251–266. [Google Scholar] [CrossRef]
Figure 1. CPU times of the different algorithms.
Table 1. Numerical parameters.

| Serial Number | Parameter | Value |
|---|---|---|
| 1 | δ | −0.05, −0.15, −0.25, −0.35, −0.45 |
| 2 | η | (0.5, 1) |
| 3 | θ | [1, 5] |
| 4 | $\tilde{p}_{k}$ | [1, 50], [1, 100] |
| 5 | $\tilde{w}_{k}$ | [1, 50], [1, 100] |
| 6 | $\tilde{d}_{k}$ | [1, $C_{\max}$] |
Note: $C_{\max}$ is the makespan obtained under the non-decreasing order of $\tilde{p}_{k}$ (i.e., the SPT rule; Wang and Zhang [35]).
Table 2. Results of $\tilde{Q}=\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$ for $\tilde{p}_{k},\tilde{w}_{k}\in[1,50]$.

| n | δ | BB CPU Mean | BB CPU Max | UB CPU Mean | UB CPU Max | TS CPU Mean | TS CPU Max | SA CPU Mean | SA CPU Max | UB Err Mean | UB Err Max | TS Err Mean | TS Err Max | SA Err Mean | SA Err Max |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | −0.05 | 619.20 | 1861.00 | 2.20 | 4.00 | 6046.00 | 6469.00 | 28.90 | 34.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 10 | −0.15 | 263.90 | 521.00 | 2.30 | 4.00 | 6194.00 | 6497.00 | 29.20 | 34.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 10 | −0.25 | 753.70 | 2113.00 | 2.20 | 4.00 | 6506.40 | 7065.00 | 30.00 | 34.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 10 | −0.35 | 536.90 | 1248.00 | 2.40 | 3.00 | 6401.50 | 6601.00 | 30.10 | 36.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 10 | −0.45 | 522.80 | 2327.00 | 2.30 | 3.00 | 6211.90 | 6599.00 | 30.00 | 36.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 11 | −0.05 | 2529.40 | 5997.00 | 3.00 | 4.00 | 10,315.50 | 10,639.00 | 35.00 | 42.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 11 | −0.15 | 4240.50 | 10,626.00 | 2.60 | 4.00 | 9490.30 | 9616.00 | 33.10 | 36.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 11 | −0.25 | 2350.10 | 7252.00 | 2.60 | 3.00 | 9533.70 | 9655.00 | 34.10 | 41.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 11 | −0.35 | 2862.90 | 6677.00 | 2.50 | 3.00 | 9494.90 | 9608.00 | 33.00 | 38.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 11 | −0.45 | 2840.10 | 6384.00 | 2.50 | 3.00 | 9554.90 | 9781.00 | 34.80 | 39.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 12 | −0.05 | 11,829.70 | 34,021.00 | 3.00 | 4.00 | 13,648.40 | 13,981.00 | 38.50 | 43.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 12 | −0.15 | 20,299.50 | 51,703.00 | 3.20 | 4.00 | 13,871.60 | 14,202.00 | 38.10 | 43.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 12 | −0.25 | 35,064.80 | 72,411.00 | 3.20 | 5.00 | 13,926.70 | 14,377.00 | 40.60 | 47.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 12 | −0.35 | 17,803.20 | 37,647.00 | 3.30 | 4.00 | 14,608.00 | 14,781.00 | 38.90 | 42.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 12 | −0.45 | 22,831.10 | 80,466.00 | 3.10 | 4.00 | 14,471.10 | 15,071.00 | 38.30 | 41.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.05 | 265,656.90 | 923,120.00 | 3.50 | 4.00 | 20,448.90 | 20,785.00 | 46.90 | 61.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.15 | 249,231.90 | 1,473,884.00 | 3.40 | 4.00 | 20,387.90 | 21,140.00 | 44.00 | 55.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.25 | 112,572.70 | 302,798.00 | 3.20 | 5.00 | 19,454.30 | 20,821.00 | 42.90 | 53.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.35 | 268,053.80 | 946,366.00 | 3.00 | 4.00 | 18,161.90 | 19,484.00 | 40.60 | 43.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.45 | 160,286.90 | 745,930.00 | 3.20 | 4.00 | 20,307.30 | 20,550.00 | 46.10 | 52.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.05 | 1,172,122.50 | 7,567,940.00 | 3.90 | 5.00 | 28,585.20 | 30,211.00 | 48.50 | 54.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.15 | 1,424,138.70 | 4,481,564.00 | 4.00 | 5.00 | 28,565.70 | 29,477.00 | 64.10 | 70.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.25 | 929,375.40 | 2,344,144.00 | 3.70 | 4.00 | 28,179.60 | 28,865.00 | 49.60 | 53.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.35 | 833,666.90 | 1,431,689.00 | 3.80 | 5.00 | 26,882.00 | 28,400.00 | 47.00 | 51.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.45 | 736,525.30 | 1,193,341.00 | 3.90 | 5.00 | 27,906.00 | 29,114.00 | 48.50 | 54.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |

Note: CPU times are in milliseconds; Err denotes the relative error $(\tilde{Q}(X)-\tilde{Q}(\mathrm{Opt}))/\tilde{Q}(\mathrm{Opt})$, in %.
Table 3. Results of $\tilde{Q}=\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$ for $\tilde{p}_{k},\tilde{w}_{k}\in[1,100]$.

| n | δ | BB CPU Mean | BB CPU Max | UB CPU Mean | UB CPU Max | TS CPU Mean | TS CPU Max | SA CPU Mean | SA CPU Max | UB Err Mean | UB Err Max | TS Err Mean | TS Err Max | SA Err Mean | SA Err Max |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | −0.05 | 841.80 | 1846.00 | 2.50 | 3.00 | 6338.80 | 6439.00 | 29.50 | 32.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 10 | −0.15 | 349.40 | 1058.00 | 2.30 | 3.00 | 6381.50 | 6447.00 | 29.00 | 31.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 10 | −0.25 | 289.70 | 559.00 | 2.50 | 3.00 | 6300.20 | 6468.00 | 29.60 | 33.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 10 | −0.35 | 369.40 | 681.00 | 2.50 | 4.00 | 6392.40 | 6497.00 | 28.90 | 33.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 10 | −0.45 | 326.30 | 1137.00 | 2.70 | 4.00 | 6393.20 | 6583.00 | 29.90 | 35.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 11 | −0.05 | 2277.50 | 7576.00 | 2.80 | 4.00 | 9555.20 | 10,574.00 | 32.90 | 40.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 11 | −0.15 | 3059.00 | 7235.00 | 2.60 | 4.00 | 9623.40 | 10,245.00 | 35.10 | 40.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 11 | −0.25 | 2820.40 | 7941.00 | 2.90 | 4.00 | 9622.40 | 10,029.00 | 35.00 | 41.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 11 | −0.35 | 1825.00 | 4495.00 | 3.00 | 4.00 | 9438.90 | 9636.00 | 34.60 | 39.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 11 | −0.45 | 3975.40 | 8359.00 | 3.10 | 4.00 | 9510.50 | 9574.00 | 32.40 | 37.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 12 | −0.05 | 43,663.80 | 249,222.00 | 3.00 | 4.00 | 13,429.20 | 13,721.00 | 37.90 | 42.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 12 | −0.15 | 15,348.60 | 48,956.00 | 3.00 | 3.00 | 13,489.70 | 13,579.00 | 38.50 | 43.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 12 | −0.25 | 11,382.60 | 31,237.00 | 3.00 | 4.00 | 13,531.00 | 13,761.00 | 37.40 | 41.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 12 | −0.35 | 15,998.20 | 41,520.00 | 3.20 | 4.00 | 13,551.90 | 13,629.00 | 36.80 | 40.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 12 | −0.45 | 11,888.50 | 23,289.00 | 2.90 | 4.00 | 13,488.40 | 13,764.00 | 36.60 | 40.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.05 | 109,898.30 | 275,375.00 | 3.20 | 5.00 | 19,035.90 | 19,623.00 | 41.10 | 45.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.15 | 105,751.30 | 351,782.00 | 3.20 | 4.00 | 20,933.70 | 21,178.00 | 58.40 | 67.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.25 | 154,626.70 | 558,105.00 | 3.50 | 5.00 | 20,908.20 | 21,192.00 | 44.70 | 50.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.35 | 126,965.30 | 388,984.00 | 3.60 | 5.00 | 21,161.30 | 21,623.00 | 45.40 | 56.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.45 | 86,593.80 | 179,924.00 | 3.20 | 4.00 | 20,846.10 | 21,103.00 | 45.80 | 50.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.05 | 1,975,604.70 | 6,520,424.00 | 4.00 | 5.00 | 28,643.70 | 29,639.00 | 50.40 | 61.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.15 | 627,934.70 | 1,950,400.00 | 3.80 | 5.00 | 28,375.90 | 29,788.00 | 50.70 | 57.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.25 | 1,348,825.50 | 5,293,208.00 | 3.70 | 5.00 | 27,533.80 | 28,386.00 | 50.40 | 57.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.35 | 1,584,079.50 | 3,760,187.00 | 3.80 | 4.00 | 27,597.90 | 28,497.00 | 47.50 | 52.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.45 | 525,962.40 | 2,133,680.00 | 3.60 | 5.00 | 27,711.00 | 28,777.00 | 50.80 | 57.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Table 4. Results of $\tilde{Q}=\tilde{T}_{\max}$ for $\tilde{p}_{k},\tilde{w}_{k}\in[1,50]$.

| n | δ | BB CPU Mean | BB CPU Max | UB CPU Mean | UB CPU Max | TS CPU Mean | TS CPU Max | SA CPU Mean | SA CPU Max | UB Err Mean | UB Err Max | TS Err Mean | TS Err Max | SA Err Mean | SA Err Max |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | −0.05 | 10,305.20 | 36,694.00 | 0.50 | 1.00 | 6015.20 | 6323.00 | 28.20 | 31.00 | 0.04218 | 0.39472 | 0.03749 | 0.34789 | 0.00283 | 0.02818 |
| 10 | −0.15 | 10,480.30 | 34,742.00 | 0.80 | 1.00 | 6166.50 | 6907.00 | 28.80 | 31.00 | 0.08256 | 0.43024 | 0.03495 | 0.15143 | 0.02095 | 0.15143 |
| 10 | −0.25 | 9196.60 | 32,804.00 | 0.60 | 1.00 | 6415.30 | 7057.00 | 29.90 | 33.00 | 0.07226 | 0.55399 | 0.00010 | 0.00044 | 0.00005 | 0.00027 |
| 10 | −0.35 | 4954.30 | 11,796.00 | 0.50 | 1.00 | 6459.70 | 6808.00 | 28.50 | 31.00 | 0.18064 | 0.87068 | 0.16134 | 0.80792 | 0.06763 | 0.54899 |
| 10 | −0.45 | 4865.40 | 22,349.00 | 0.60 | 1.00 | 6241.00 | 6538.00 | 28.90 | 32.00 | 0.18803 | 1.50585 | 0.06177 | 0.43674 | 0.07752 | 0.43663 |
| 11 | −0.05 | 91,394.00 | 622,506.00 | 0.80 | 1.00 | 10,001.10 | 10,646.00 | 34.00 | 37.00 | 0.25495 | 2.30240 | 0.14385 | 1.27138 | 0.13366 | 1.30486 |
| 11 | −0.15 | 56,224.40 | 204,366.00 | 0.60 | 1.00 | 9461.60 | 9772.00 | 34.70 | 43.00 | 0.06988 | 0.23636 | 0.01290 | 0.08721 | 0.00456 | 0.03986 |
| 11 | −0.25 | 12,090.70 | 38,411.00 | 0.90 | 1.00 | 9562.80 | 9686.00 | 35.40 | 38.00 | 0.09294 | 0.69462 | 0.08318 | 0.69308 | 0.06598 | 0.42501 |
| 11 | −0.35 | 13,685.00 | 31,216.00 | 0.80 | 1.00 | 9459.70 | 9690.00 | 33.00 | 39.00 | 0.02092 | 0.20837 | 0.00825 | 0.08166 | 0.01607 | 0.15985 |
| 11 | −0.45 | 18,151.40 | 38,034.00 | 0.60 | 1.00 | 4864.90 | 9833.00 | 33.30 | 37.00 | 0.11889 | 0.91132 | 0.07830 | 0.78286 | 0.10588 | 0.91320 |
| 12 | −0.05 | 157,757.60 | 810,455.00 | 0.90 | 3.00 | 13,496.00 | 14,138.00 | 37.90 | 40.00 | 0.04467 | 0.36133 | 0.00726 | 0.07235 | 0.04465 | 0.36133 |
| 12 | −0.15 | 127,675.80 | 1,065,652.00 | 0.70 | 1.00 | 13,645.70 | 14,137.00 | 40.60 | 47.00 | 0.06921 | 0.53511 | 0.05355 | 0.53497 | 0.02415 | 0.14151 |
| 12 | −0.25 | 296,353.00 | 2,182,038.00 | 0.80 | 1.00 | 13,741.40 | 14,244.00 | 40.30 | 42.00 | 0.09690 | 0.39529 | 0.06874 | 0.32601 | 0.05331 | 0.39529 |
| 12 | −0.35 | 88,240.90 | 538,187.00 | 0.90 | 1.00 | 14,561.60 | 14,750.00 | 40.30 | 51.00 | 0.20473 | 1.21564 | 0.10062 | 0.45672 | 0.10604 | 0.45672 |
| 12 | −0.45 | 413,334.20 | 1,836,635.00 | 1.00 | 2.00 | 14,099.60 | 15,139.00 | 38.90 | 45.00 | 0.08252 | 0.41016 | 0.02552 | 0.24349 | 0.01177 | 0.09664 |
| 13 | −0.05 | 1,029,283.60 | 7,508,050.00 | 1.50 | 4.00 | 20,240.20 | 20,774.00 | 44.00 | 48.00 | 0.03795 | 0.22730 | 0.00474 | 0.02579 | 0.03795 | 0.22730 |
| 13 | −0.15 | 683,977.20 | 3,227,291.00 | 1.10 | 2.00 | 19,994.40 | 20,716.00 | 47.30 | 53.00 | 0.10531 | 0.62497 | 0.06028 | 0.42515 | 0.09941 | 0.62497 |
| 13 | −0.25 | 628,087.20 | 2,123,450.00 | 1.40 | 2.00 | 18,789.50 | 20,336.00 | 43.90 | 51.00 | 0.05224 | 0.40890 | 0.02712 | 0.17042 | 0.01294 | 0.10057 |
| 13 | −0.35 | 2,514,477.00 | 16,455,978.00 | 0.90 | 1.00 | 17,355.60 | 18,137.00 | 40.90 | 45.00 | 0.07098 | 0.36171 | 0.02734 | 0.12875 | 0.07098 | 0.36171 |
| 13 | −0.45 | 614,653.20 | 2,136,786.00 | 0.90 | 2.00 | 19,857.20 | 20,682.00 | 46.60 | 55.00 | 0.12851 | 0.44154 | 0.03852 | 0.22742 | 0.04689 | 0.22950 |
| 14 | −0.05 | 4,243,698.70 | 16,271,598.00 | 1.40 | 3.00 | 27,734.50 | 30,028.00 | 50.80 | 54.00 | 0.00384 | 0.03828 | 0.00000 | 0.00001 | 0.00384 | 0.03828 |
| 14 | −0.15 | 2,984,049.70 | 20,082,425.00 | 1.20 | 2.00 | 27,863.80 | 29,586.00 | 50.80 | 55.00 | 0.16317 | 1.04992 | 0.02443 | 0.13516 | 0.16317 | 1.04992 |
| 14 | −0.25 | 7,974,330.60 | 38,573,838.00 | 1.40 | 2.00 | 27,222.70 | 29,202.00 | 51.50 | 64.00 | 0.08036 | 0.79983 | 0.03371 | 0.33332 | 0.08036 | 0.79983 |
| 14 | −0.35 | 4,460,957.40 | 37,780,840.00 | 1.10 | 2.00 | 26,115.40 | 28,408.00 | 50.00 | 67.00 | 0.09733 | 0.46055 | 0.01941 | 0.19361 | 0.09733 | 0.46055 |
| 14 | −0.45 | 1,992,466.30 | 7,281,374.00 | 1.20 | 2.00 | 27,364.00 | 29,196.00 | 51.40 | 61.00 | 0.22166 | 1.66465 | 0.18160 | 1.54309 | 0.20138 | 1.66465 |
Table 5. Results of $\tilde{Q}=\tilde{T}_{\max}$ for $\tilde{p}_{k},\tilde{w}_{k}\in[1,100]$.

| n | δ | BB CPU Mean | BB CPU Max | UB CPU Mean | UB CPU Max | TS CPU Mean | TS CPU Max | SA CPU Mean | SA CPU Max | UB Err Mean | UB Err Max | TS Err Mean | TS Err Max | SA Err Mean | SA Err Max |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | −0.05 | 11,898.80 | 57,335.00 | 0.60 | 1.00 | 6197.00 | 6556.00 | 29.60 | 35.00 | 0.09524 | 0.83987 | 0.08399 | 0.83987 | 0.08420 | 0.83987 |
| 10 | −0.15 | 10,861.50 | 28,751.00 | 0.40 | 1.00 | 6283.70 | 6477.00 | 31.60 | 37.00 | 0.05090 | 0.50884 | 0.03943 | 0.39433 | 0.00000 | 0.00000 |
| 10 | −0.25 | 13,352.90 | 72,680.00 | 0.50 | 1.00 | 6161.80 | 6561.00 | 29.60 | 34.00 | 0.05048 | 0.24549 | 0.00818 | 0.08170 | 0.00000 | 0.00000 |
| 10 | −0.35 | 7534.20 | 46,164.00 | 0.70 | 1.00 | 6379.10 | 6586.00 | 30.00 | 33.00 | 0.07483 | 0.36011 | 0.05767 | 0.35996 | 0.02587 | 0.13494 |
| 10 | −0.45 | 3355.30 | 15,258.00 | 0.60 | 1.00 | 6582.30 | 7616.00 | 30.00 | 33.00 | 0.23372 | 2.33720 | 0.13830 | 1.38300 | 0.00000 | 0.00000 |
| 11 | −0.05 | 11,727.30 | 27,595.00 | 0.50 | 1.00 | 9158.10 | 9709.00 | 33.20 | 38.00 | 0.05618 | 0.21427 | 0.03154 | 0.17124 | 0.01724 | 0.17150 |
| 11 | −0.15 | 49,113.70 | 122,708.00 | 0.80 | 1.00 | 9471.50 | 9669.00 | 33.50 | 36.00 | 0.06573 | 0.44529 | 0.06012 | 0.41753 | 0.00284 | 0.02826 |
| 11 | −0.25 | 25,227.10 | 147,330.00 | 0.50 | 1.00 | 9627.00 | 10,188.00 | 35.70 | 42.00 | 0.12450 | 1.02758 | 0.08730 | 0.87293 | 0.00004 | 0.00024 |
| 11 | −0.35 | 36,752.00 | 275,376.00 | 0.60 | 1.00 | 9292.10 | 9659.00 | 33.50 | 39.00 | 0.05718 | 0.31665 | 0.03415 | 0.19332 | 0.03172 | 0.31665 |
| 11 | −0.45 | 4012.80 | 12,740.00 | 0.60 | 2.00 | 9527.90 | 9687.00 | 34.40 | 38.00 | 0.09081 | 0.66967 | 0.04638 | 0.23765 | 0.04376 | 0.19914 |
| 12 | −0.05 | 228,804.40 | 1,216,417.00 | 1.00 | 1.00 | 13,300.50 | 13,812.00 | 38.70 | 43.00 | 0.04202 | 0.23287 | 0.01873 | 0.18729 | 0.04202 | 0.23287 |
| 12 | −0.15 | 163,787.10 | 895,682.00 | 1.00 | 1.00 | 13,050.80 | 13,526.00 | 39.50 | 44.00 | 0.00922 | 0.06557 | 0.00001 | 0.00004 | 0.00000 | 0.00002 |
| 12 | −0.25 | 155,448.80 | 790,989.00 | 1.00 | 1.00 | 13,125.80 | 13,624.00 | 38.70 | 43.00 | 0.10681 | 0.51907 | 0.06011 | 0.38198 | 0.09175 | 0.46834 |
| 12 | −0.35 | 316,255.90 | 2,364,817.00 | 0.90 | 1.00 | 13,364.60 | 13,898.00 | 38.10 | 42.00 | 0.11751 | 0.54056 | 0.02235 | 0.22345 | 0.00147 | 0.01435 |
| 12 | −0.45 | 496,839.00 | 2,609,338.00 | 1.00 | 2.00 | 13,124.00 | 13,872.00 | 38.40 | 44.00 | 0.04466 | 0.23933 | 0.02020 | 0.20190 | 0.03430 | 0.23933 |
| 13 | −0.05 | 393,103.50 | 2,633,680.00 | 1.00 | 2.00 | 18,654.40 | 19,578.00 | 44.20 | 48.00 | 0.09319 | 0.73525 | 0.02453 | 0.16912 | 0.06215 | 0.42484 |
| 13 | −0.15 | 1,614,567.50 | 7,339,914.00 | 1.10 | 2.00 | 20,351.20 | 21,565.00 | 46.50 | 48.00 | 0.29944 | 1.28499 | 0.07542 | 0.41595 | 0.17030 | 1.28499 |
| 13 | −0.25 | 1,028,580.60 | 5,506,700.00 | 1.10 | 2.00 | 20,091.00 | 21,060.00 | 48.10 | 53.00 | 0.04309 | 0.19368 | 0.00735 | 0.07347 | 0.00857 | 0.08571 |
| 13 | −0.35 | 649,025.30 | 3,329,984.00 | 1.20 | 2.00 | 20,733.00 | 22,232.00 | 45.60 | 54.00 | 0.09363 | 0.45811 | 0.05546 | 0.45640 | 0.03537 | 0.17480 |
| 13 | −0.45 | 986,772.90 | 4,045,625.00 | 0.90 | 1.00 | 20,052.60 | 21,235.00 | 45.60 | 58.00 | 0.00000 | 0.00003 | 0.00000 | 0.00001 | 0.00000 | 0.00003 |
| 14 | −0.05 | 1,396,713.10 | 4,382,671.00 | 1.40 | 2.00 | 27,991.30 | 29,138.00 | 52.50 | 62.00 | 0.03093 | 0.13401 | 0.01705 | 0.13049 | 0.03093 | 0.13401 |
| 14 | −0.15 | 2,599,787.50 | 10,633,928.00 | 1.00 | 1.00 | 27,803.70 | 29,593.00 | 52.90 | 65.00 | 0.02042 | 0.20398 | 0.01966 | 0.19642 | 0.01966 | 0.19642 |
| 14 | −0.25 | 3,708,771.50 | 20,829,998.00 | 1.50 | 2.00 | 27,054.60 | 28,450.00 | 50.10 | 53.00 | 0.32720 | 2.78410 | 0.09193 | 0.91912 | 0.30721 | 2.78481 |
| 14 | −0.35 | 1,896,808.60 | 9,928,279.00 | 1.50 | 2.00 | 26,980.30 | 28,288.00 | 49.90 | 60.00 | 0.12860 | 0.60568 | 0.04617 | 0.24369 | 0.08606 | 0.36122 |
| 14 | −0.45 | 740,966.10 | 3,267,537.00 | 1.50 | 2.00 | 27,695.10 | 29,096.00 | 48.70 | 56.00 | 0.02891 | 0.12388 | 0.00038 | 0.00326 | 0.02623 | 0.11463 |
Table 6. The p99 values of $\tilde{Q}=\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$ for $\tilde{p}_{k},\tilde{w}_{k}\in[1,50]$.

| n | δ | BB CPU | UB CPU | TS CPU | SA CPU | UB Err | TS Err | SA Err |
|---|---|---|---|---|---|---|---|---|
| 10 | −0.05 | 1779.64 | 3.91 | 6453.79 | 33.82 | 0.00 | 0.00 | 0.00 |
| 10 | −0.15 | 511.82 | 3.91 | 6484.40 | 34.00 | 0.00 | 0.00 | 0.00 |
| 10 | −0.25 | 2088.97 | 3.82 | 7051.41 | 33.82 | 0.00 | 0.00 | 0.00 |
| 10 | −0.35 | 1192.92 | 3.00 | 6597.76 | 35.91 | 0.00 | 0.00 | 0.00 |
| 10 | −0.45 | 2167.61 | 3.00 | 6584.69 | 35.73 | 0.00 | 0.00 | 0.00 |
| 11 | −0.05 | 5947.50 | 3.91 | 10,635.04 | 41.55 | 0.00 | 0.00 | 0.00 |
| 11 | −0.15 | 10,449.60 | 3.91 | 9614.38 | 35.82 | 0.00 | 0.00 | 0.00 |
| 11 | −0.25 | 7076.59 | 3.00 | 9648.97 | 40.64 | 0.00 | 0.00 | 0.00 |
| 11 | −0.35 | 6598.16 | 3.00 | 9607.01 | 37.73 | 0.00 | 0.00 | 0.00 |
| 11 | −0.45 | 6278.25 | 3.00 | 9780.46 | 38.82 | 0.00 | 0.00 | 0.00 |
| 12 | −0.05 | 32,919.31 | 3.91 | 13,972.90 | 42.91 | 0.00 | 0.00 | 0.00 |
| 12 | −0.15 | 51,645.49 | 4.00 | 14,191.74 | 42.91 | 0.00 | 0.00 | 0.00 |
| 12 | −0.25 | 72,109.32 | 4.91 | 14,365.03 | 46.73 | 0.00 | 0.00 | 0.00 |
| 12 | −0.35 | 37,495.17 | 4.00 | 14,773.35 | 41.82 | 0.00 | 0.00 | 0.00 |
| 12 | −0.45 | 78,260.73 | 4.00 | 15,051.20 | 41.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.05 | 884,032.91 | 4.00 | 20,776.09 | 60.28 | 0.00 | 0.00 | 0.00 |
| 13 | −0.15 | 1,391,338.25 | 4.00 | 21,100.76 | 54.55 | 0.00 | 0.00 | 0.00 |
| 13 | −0.25 | 294,466.97 | 4.91 | 20,785.18 | 52.37 | 0.00 | 0.00 | 0.00 |
| 13 | −0.35 | 913,411.51 | 3.91 | 19,462.94 | 43.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.45 | 695,887.03 | 4.00 | 20,539.02 | 52.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.05 | 7,023,622.79 | 4.91 | 30,137.74 | 53.91 | 0.00 | 0.00 | 0.00 |
| 14 | −0.15 | 4,427,725.64 | 4.91 | 29,468.18 | 69.82 | 0.00 | 0.00 | 0.00 |
| 14 | −0.25 | 2,335,338.31 | 4.00 | 28,844.84 | 53.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.35 | 1,422,758.39 | 4.91 | 28,392.26 | 50.82 | 0.00 | 0.00 | 0.00 |
| 14 | −0.45 | 1,190,194.87 | 4.91 | 29,046.14 | 53.73 | 0.00 | 0.00 | 0.00 |
Table 7. The p99 values of $\tilde{Q}=\sum_{j=1}^{n}\tilde{w}_{j}\tilde{C}_{j}$ for $\tilde{p}_{k},\tilde{w}_{k}\in[1,100]$.

| n | δ | BB CPU | UB CPU | TS CPU | SA CPU | UB Err | TS Err | SA Err |
|---|---|---|---|---|---|---|---|---|
| 10 | −0.05 | 36,582.04 | 1.00 | 6322.55 | 31.00 | 0.36 | 0.32 | 0.03 |
| 10 | −0.15 | 32,939.03 | 1.00 | 6852.55 | 30.91 | 0.41 | 0.15 | 0.14 |
| 10 | −0.25 | 31,930.10 | 1.00 | 7045.39 | 33.00 | 0.52 | 0.00 | 0.00 |
| 10 | −0.35 | 11,768.73 | 1.00 | 6800.17 | 30.91 | 0.86 | 0.80 | 0.51 |
| 10 | −0.45 | 21,185.84 | 1.00 | 6527.83 | 32.00 | 1.40 | 0.41 | 0.43 |
| 11 | −0.05 | 577,820.91 | 1.00 | 10,637.54 | 37.00 | 2.11 | 1.17 | 1.19 |
| 11 | −0.15 | 195,931.92 | 1.00 | 9759.13 | 42.64 | 0.23 | 0.08 | 0.04 |
| 11 | −0.25 | 37,683.62 | 1.00 | 9683.48 | 38.00 | 0.65 | 0.64 | 0.40 |
| 11 | −0.35 | 31,184.32 | 1.00 | 9688.47 | 38.73 | 0.19 | 0.07 | 0.15 |
| 11 | −0.45 | 37,845.27 | 1.00 | 9832.91 | 36.91 | 0.84 | 0.71 | 0.84 |
| 12 | −0.05 | 766,120.01 | 2.82 | 14,102.90 | 40.00 | 0.34 | 0.07 | 0.34 |
| 12 | −0.15 | 975,232.51 | 1.00 | 14,136.91 | 46.91 | 0.50 | 0.49 | 0.14 |
| 12 | −0.25 | 2,034,697.47 | 1.00 | 14,243.64 | 42.00 | 0.39 | 0.32 | 0.37 |
| 12 | −0.35 | 500,028.35 | 1.00 | 14,749.82 | 50.55 | 1.14 | 0.44 | 0.44 |
| 12 | −0.45 | 1,828,255.73 | 1.91 | 15,127.21 | 44.64 | 0.41 | 0.22 | 0.09 |
| 13 | −0.05 | 6,949,396.78 | 3.82 | 20,755.73 | 48.00 | 0.22 | 0.03 | 0.22 |
| 13 | −0.15 | 3,105,394.01 | 1.91 | 20,709.07 | 52.82 | 0.60 | 0.40 | 0.60 |
| 13 | −0.25 | 2,088,152.09 | 2.00 | 20,256.08 | 50.64 | 0.38 | 0.16 | 0.09 |
| 13 | −0.35 | 15,534,130.68 | 1.00 | 18,127.91 | 44.64 | 0.35 | 0.13 | 0.35 |
| 13 | −0.45 | 2,028,775.74 | 1.91 | 20,673.90 | 54.46 | 0.44 | 0.22 | 0.22 |
| 14 | −0.05 | 16,002,957.72 | 2.91 | 29,994.79 | 54.00 | 0.03 | 0.00 | 0.03 |
| 14 | −0.15 | 18,625,896.32 | 2.00 | 29,536.95 | 55.00 | 0.98 | 0.13 | 0.98 |
| 14 | −0.25 | 36,543,648.06 | 2.00 | 29,187.33 | 63.01 | 0.73 | 0.30 | 0.73 |
| 14 | −0.35 | 34,893,804.61 | 1.91 | 28,402.06 | 66.01 | 0.46 | 0.18 | 0.46 |
| 14 | −0.45 | 7,111,942.52 | 2.00 | 29,151.18 | 60.28 | 1.54 | 1.42 | 1.54 |
Table 8. The p99 values of $\tilde{Q}=\tilde{T}_{\max}$ for $\tilde{p}_{k},\tilde{w}_{k}\in[1,50]$.

| n | δ | BB CPU | UB CPU | TS CPU | SA CPU | UB Err | TS Err | SA Err |
|---|---|---|---|---|---|---|---|---|
| 10 | −0.05 | 1779.64 | 3.91 | 6453.79 | 33.82 | 0.00 | 0.00 | 0.00 |
| 10 | −0.15 | 511.82 | 3.91 | 6484.40 | 34.00 | 0.00 | 0.00 | 0.00 |
| 10 | −0.25 | 2088.97 | 3.82 | 7051.41 | 33.82 | 0.00 | 0.00 | 0.00 |
| 10 | −0.35 | 1192.92 | 3.00 | 6597.76 | 35.91 | 0.00 | 0.00 | 0.00 |
| 10 | −0.45 | 2167.61 | 3.00 | 6584.69 | 35.73 | 0.00 | 0.00 | 0.00 |
| 11 | −0.05 | 5947.50 | 3.91 | 10,635.04 | 41.55 | 0.00 | 0.00 | 0.00 |
| 11 | −0.15 | 10,449.60 | 3.91 | 9614.38 | 35.82 | 0.00 | 0.00 | 0.00 |
| 11 | −0.25 | 7076.59 | 3.00 | 9648.97 | 40.64 | 0.00 | 0.00 | 0.00 |
| 11 | −0.35 | 6598.16 | 3.00 | 9607.01 | 37.73 | 0.00 | 0.00 | 0.00 |
| 11 | −0.45 | 6278.25 | 3.00 | 9780.46 | 38.82 | 0.00 | 0.00 | 0.00 |
| 12 | −0.05 | 32,919.31 | 3.91 | 13,972.90 | 42.91 | 0.00 | 0.00 | 0.00 |
| 12 | −0.15 | 51,645.49 | 4.00 | 14,191.74 | 42.91 | 0.00 | 0.00 | 0.00 |
| 12 | −0.25 | 72,109.32 | 4.91 | 14,365.03 | 46.73 | 0.00 | 0.00 | 0.00 |
| 12 | −0.35 | 37,495.17 | 4.00 | 14,773.35 | 41.82 | 0.00 | 0.00 | 0.00 |
| 12 | −0.45 | 78,260.73 | 4.00 | 15,051.20 | 41.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.05 | 884,032.91 | 4.00 | 20,776.09 | 60.28 | 0.00 | 0.00 | 0.00 |
| 13 | −0.15 | 1,391,338.25 | 4.00 | 21,100.76 | 54.55 | 0.00 | 0.00 | 0.00 |
| 13 | −0.25 | 294,466.97 | 4.91 | 20,785.18 | 52.37 | 0.00 | 0.00 | 0.00 |
| 13 | −0.35 | 913,411.51 | 3.91 | 19,462.94 | 43.00 | 0.00 | 0.00 | 0.00 |
| 13 | −0.45 | 695,887.03 | 4.00 | 20,539.02 | 52.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.05 | 7,023,622.79 | 4.91 | 30,137.74 | 53.91 | 0.00 | 0.00 | 0.00 |
| 14 | −0.15 | 4,427,725.64 | 4.91 | 29,468.18 | 69.82 | 0.00 | 0.00 | 0.00 |
| 14 | −0.25 | 2,335,338.31 | 4.00 | 28,844.84 | 53.00 | 0.00 | 0.00 | 0.00 |
| 14 | −0.35 | 1,422,758.39 | 4.91 | 28,392.26 | 50.82 | 0.00 | 0.00 | 0.00 |
| 14 | −0.45 | 1,190,194.87 | 4.91 | 29,046.14 | 53.73 | 0.00 | 0.00 | 0.00 |
Table 9. The p99 values of $\tilde{Q}=\tilde{T}_{\max}$ for $\tilde{p}_{k},\tilde{w}_{k}\in[1,100]$.

| n | δ | BB CPU | UB CPU | TS CPU | SA CPU | UB Err | TS Err | SA Err |
|---|---|---|---|---|---|---|---|---|
| 10 | −0.05 | 54,600.71 | 1.00 | 6552.04 | 34.82 | 0.77 | 0.76 | 0.76 |
| 10 | −0.15 | 28,288.04 | 1.00 | 6474.21 | 36.82 | 0.46 | 0.36 | 0.00 |
| 10 | −0.25 | 68,076.05 | 1.00 | 6559.65 | 33.82 | 0.24 | 0.07 | 0.00 |
| 10 | −0.35 | 42,965.58 | 1.00 | 6583.21 | 32.91 | 0.35 | 0.34 | 0.13 |
| 10 | −0.45 | 14,337.84 | 1.00 | 7547.78 | 32.91 | 2.13 | 1.26 | 0.00 |
| 11 | −0.05 | 27,395.02 | 1.00 | 9704.86 | 37.91 | 0.21 | 0.17 | 0.16 |
| 11 | −0.15 | 119,386.46 | 1.00 | 9665.94 | 36.00 | 0.42 | 0.40 | 0.03 |
| 11 | −0.25 | 140,723.82 | 1.00 | 10,179.72 | 41.73 | 0.95 | 0.79 | 0.00 |
| 11 | −0.35 | 253,145.91 | 1.00 | 9658.82 | 38.91 | 0.31 | 0.19 | 0.29 |
| 11 | −0.45 | 12,473.06 | 1.91 | 9679.62 | 37.91 | 0.63 | 0.23 | 0.20 |
| 12 | −0.05 | 1,168,389.31 | 1.00 | 13,804.71 | 42.91 | 0.23 | 0.17 | 0.23 |
| 12 | −0.15 | 843,280.49 | 1.00 | 13,525.73 | 43.91 | 0.06 | 0.00 | 0.00 |
| 12 | −0.25 | 748,983.66 | 1.00 | 13,607.80 | 42.82 | 0.50 | 0.36 | 0.45 |
| 12 | −0.35 | 2,185,049.74 | 1.00 | 13,887.38 | 41.91 | 0.53 | 0.20 | 0.01 |
| 12 | −0.45 | 2,564,437.00 | 1.91 | 13,865.61 | 43.91 | 0.22 | 0.18 | 0.22 |
| 13 | −0.05 | 2,436,347.89 | 1.91 | 19,569.36 | 47.91 | 0.68 | 0.16 | 0.40 |
| 13 | −0.15 | 6,954,946.38 | 1.91 | 21,516.94 | 48.00 | 1.27 | 0.40 | 1.19 |
| 13 | −0.25 | 5,214,892.01 | 1.91 | 21,056.49 | 52.91 | 0.19 | 0.07 | 0.08 |
| 13 | −0.35 | 3,150,038.27 | 2.00 | 22,156.76 | 53.64 | 0.44 | 0.42 | 0.17 |
| 13 | −0.45 | 3,882,834.17 | 1.00 | 21,227.35 | 57.28 | 0.00 | 0.00 | 0.00 |
| 14 | −0.05 | 4,372,111.75 | 2.00 | 29,095.25 | 61.46 | 0.13 | 0.12 | 0.13 |
| 14 | −0.15 | 10,141,020.50 | 1.00 | 29,550.43 | 64.10 | 0.19 | 0.18 | 0.18 |
| 14 | −0.25 | 19,428,368.78 | 2.00 | 28,446.40 | 53.00 | 2.57 | 0.84 | 2.56 |
| 14 | −0.35 | 9,678,404.17 | 2.00 | 28,284.49 | 59.46 | 0.58 | 0.24 | 0.35 |
| 14 | −0.45 | 3,203,748.33 | 2.00 | 29,062.16 | 56.00 | 0.12 | 0.00 | 0.11 |
Table 10. Calculated t values for the hypothesis tests.

| n | δ | $t_1$ | $t_2$ |
|---|---|---|---|
| 11 | −0.05 | 2.565 | 2.619 |
| 11 | −0.15 | 2.597 | 2.909 |
| 11 | −0.25 | 2.569 | 2.813 |
| 11 | −0.35 | 2.618 | 2.877 |
| 11 | −0.45 | 2.583 | 2.867 |
| 12 | −0.05 | 2.562 | 2.814 |
| 12 | −0.15 | 2.648 | 2.956 |
| 12 | −0.25 | 2.639 | 2.946 |
| 12 | −0.35 | 2.578 | 2.916 |
| 12 | −0.45 | 2.618 | 2.879 |
| 13 | −0.05 | 2.618 | 2.965 |
| 13 | −0.15 | 2.642 | 3.014 |
| 13 | −0.25 | 2.561 | 2.829 |
| 13 | −0.35 | 2.631 | 3.009 |
| 13 | −0.45 | 2.634 | 2.967 |
| 14 | −0.05 | 2.650 | 3.047 |
| 14 | −0.15 | 2.617 | 2.999 |
| 14 | −0.25 | 2.561 | 2.933 |
| 14 | −0.35 | 2.564 | 2.813 |
| 14 | −0.45 | 2.627 | 2.914 |