*Article* **Single-Machine Group Scheduling Model with Position-Dependent and Job-Dependent DeJong's Learning Effect**

**Jin Qian and Yu Zhan \***

College of Science, Northeastern University, Shenyang 110819, China; qianjin@mail.neu.edu.cn **\*** Correspondence: zhanyu@mail.neu.edu.cn

**Abstract:** This paper considers single-machine group scheduling models with Pegels' and DeJong's learning effects and with Pegels' and DeJong's aging effects. In classical scheduling models, Pegels' and DeJong's learning effects are constant or position-dependent, whereas in this paper the learning and aging effects are job-dependent. Compared with the classical learning and aging scheduling models, the proposed models are therefore more general and realistic. The objective functions are to minimize the total completion time and the makespan. We propose polynomial-time methods for all the studied problems.

**Keywords:** single-machine scheduling; aging effect; learning effect; group technology

**MSC:** 90B35; 90B36

### **1. Introduction**

In the traditional scheduling field, the processing time of a job is a constant, whereas in many actual situations processing times change. One way processing times decrease is through learning: a steady decline in processing times usually takes place when the same task is performed repeatedly. This is called the learning effect. For instance, if a worker repeatedly processes the same jobs, the processing times become shorter due to improved production technology and accumulated experience. In other cases, the processing times of jobs may become longer: as a machine ages, jobs take longer to process. This is called the aging effect.

This paper considers single-machine group scheduling models with Pegels' and DeJong's learning effect and with Pegels' and DeJong's aging effect. In classical scheduling models, Pegels' and DeJong's learning effect is constant or position-dependent, whereas here the learning and aging effects are job-dependent, which makes the proposed models more general and realistic than the classical learning and aging models. The objective functions are to minimize the total completion time and the makespan, and we propose polynomial-time methods for all the studied problems.

#### **2. Literature Review**

The learning effect model was first proposed by Biskup [1]. Subsequently, many researchers have worked on the learning effect and the aging effect. Mosheiov [2] explained the aging effect and deteriorating jobs; under the aging effect model, the SPT rule no longer yields an optimal solution. Yang [3] studied a single-machine scheduling problem with maintenance times and maintenance activities; this was an aging effect model that considered linear and exponential deterioration of the processing time. Zhang and Yan [4] considered a single-machine scheduling problem with setup time whose learning effect was based on group and position; the objective functions were to minimize the total completion time and the makespan, and they gave polynomial-time algorithms. Li et al. [5] considered a single-machine scheduling problem with linear or convex resource allocation, where the learning effect on the processing time was position-based; they dealt with the slack due-window problem by an assignment method. Lu et al. [6] considered a single-machine scheduling problem with setup time and convex resource allocation whose learning effect was based on the group position and the position within the group; the optimal sequence sorts the jobs in each group according to the SPT rule. Mustu and Eren [7] considered a single-machine learning scheduling problem with position-based setup time; the objective was to minimize the total tardiness, an NP-hard problem, for which they proposed heuristic algorithms. A single-machine group scheduling problem with setup time under linear and convex resource allocation was considered by Yin et al. [8], Zhang et al. [9], Zhao et al. [10] and Li et al. [11].

**Citation:** Qian, J.; Zhan, Y. Single-Machine Group Scheduling Model with Position-Dependent and Job-Dependent DeJong's Learning Effect. *Mathematics* **2022**, *10*, 2454. https://doi.org/10.3390/math10142454

Academic Editors: Xiang Li, Shuo Zhang and Wei Zhang

Received: 30 May 2022; Accepted: 12 July 2022; Published: 14 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

This paper studies Pegels' and DeJong's learning effect scheduling models and Pegels' and DeJong's aging effect scheduling models. De Jong [12] proposed a learning scheduling model. Wang et al. [13] considered a single-machine scheduling problem whose setup time was a linear function of the previous actual processing time, with a learning effect similar to DeJong's based on the sum of previous processing times; for minimizing the total completion time and the makespan, they proved that the optimal schedule follows the SPT rule. Okołowski and Gawiejnowicz [14] studied a parallel-machine scheduling problem with DeJong's learning effect and the makespan objective; since *Pm*||*C*max is an NP-hard problem, this problem is also NP-hard, and they proposed two branch-and-bound algorithms to solve it. Ji et al. [15] considered the machine scheduling problem with DeJong's learning effect; for the single-machine case with the total completion time and makespan objectives, they proved that the optimal schedule follows the SPT rule. Ji et al. [16] studied a parallel-machine deteriorating scheduling problem with DeJong's learning effect under the same two objectives; for the total completion time, the jobs are sorted according to the SPT rule on each machine. Zhang et al. [17] considered both Pegels' and DeJong's learning effect and aging effect scheduling models with the same two objectives; the optimal solution sorts the jobs according to the SPT rule in each group, and the group order depends on the objective function. Sun et al. [18] considered a single-machine scheduling problem with a learning effect similar to DeJong's, based on the sum of previous processing times and on positions; for minimizing the total completion time and the makespan, they proved that the jobs in each group follow the SPT rule and gave conclusions on the group order under the different objective functions.

To increase efficiency, similarly designed or processed products are processed in groups. This phenomenon is known as group technology in the literature. Ji et al. considered a single-machine group scheduling and job-dependent due-window assignment problem [19]. Sun et al. considered a single-machine group scheduling with learning effect and resource allocation [20]. There have also been many scholars who have done a lot of work on group technology [21–25].

The problem is described in Section 3. The proofs of the polynomial-time algorithms for the four problems are given in Section 4. A summary is given in Section 5.

#### **3. Notation and Problem Statement**

Some notation used in this paper is introduced below (a bracketed index $[u][v]$ refers to the job scheduled at position $u$ of the group scheduled $v$th):

- $l$: the number of jobs; $t$: the number of groups;
- $l_v$: the number of jobs in group $G_v$; $t_v$ (also written $t_{[v]}$): the setup time of group $G_v$;
- $p_{[u][v]}$: the normal processing time of the job at position $u$ in group $v$; $p^{a}_{[u][v]}$: its actual processing time;
- $s$: the position of a job within its group;
- $K$: DeJong's factor of incompressibility; $\alpha_{[u][v]} < 0$ and $0 < \beta_{[u][v]} < 1$: job-dependent learning/aging indices;
- $C_{[u][v]}$: the completion time of the job at position $u$ in group $v$; $C_{\max}$: the makespan.


Suppose there are *l* independent jobs *J* = {*J*1, . . . , *Jl*} processed on a machine. The *l* jobs are divided into *t* groups *G* = {*G*1, . . . , *Gt*}, 1 ≤ *t* ≤ *l*. There is a sequence-dependent setup time before each group is processed. The jobs are processed continuously in each group. The machine can only process one job at a time. The actual processing time is

$$p^{a}_{[u][v]} = p_{[u][v]}\big[K + (1-K)s^{\alpha_{[u][v]}}\big], \quad u = 1, \dots, l_v;\; v = 1, \dots, t; \tag{1}$$

$$p^{a}_{[u][v]} = p_{[u][v]}\big[K + (1-K)\beta_{[u][v]}^{\,s-1}\big], \quad u = 1, \dots, l_v;\; v = 1, \dots, t, \tag{2}$$

where $K \ge 0$ is a constant, $\alpha_{[u][v]} < 0$ and $0 < \beta_{[u][v]} < 1$. When $K = 0$, the actual processing time $p^{a}_{[u][v]}$ depends only on the position $s$. When $0 < K < 1$, the actual processing time of each job is less than its normal processing time; these are the learning scheduling models. When $K = 1$, the actual processing time equals the normal processing time. When $K > 1$, the actual processing time of each job exceeds its normal processing time; these are the aging scheduling models.
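Models (1) and (2) are straightforward to evaluate numerically. The snippet below is a minimal sketch (the function names are ours, not the paper's) that computes the actual processing time under both models and illustrates the learning regime ($0 < K < 1$, times shrink with position) and the aging regime ($K > 1$, times grow):

```python
def actual_time_pos(p, K, alpha, s):
    """Model (1): p_a = p * (K + (1 - K) * s**alpha), s = position in the group."""
    return p * (K + (1 - K) * s ** alpha)

def actual_time_beta(p, K, beta, s):
    """Model (2): p_a = p * (K + (1 - K) * beta**(s - 1))."""
    return p * (K + (1 - K) * beta ** (s - 1))

# Learning regime (0 < K < 1): actual times decrease toward K * p.
print([round(actual_time_pos(10, 0.5, -0.5, s), 3) for s in (1, 2, 3)])
# Aging regime (K > 1): actual times increase with the position.
print([round(actual_time_beta(10, 1.5, 0.8, s), 3) for s in (1, 2, 3)])
```

For $K = 1$ both functions return the normal processing time, matching the discussion above.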

Our objectives are to minimize the total completion time and the makespan. Using the three-field notation, the four DeJong's and Pegels' problems studied here can be written as

$$\begin{split} &1\,\big|\,p^{a}_{[u][v]} = p_{[u][v]}\big[K + (1-K)s^{\alpha_{[u][v]}}\big],\, GT,\, t_{v}\,\big|\,C_{\max}, \\ &1\,\big|\,p^{a}_{[u][v]} = p_{[u][v]}\big[K + (1-K)\beta_{[u][v]}^{\,s-1}\big],\, GT,\, t_{v}\,\big|\,C_{\max}, \\ &1\,\big|\,p^{a}_{[u][v]} = p_{[u][v]}\big[K + (1-K)s^{\alpha_{[u][v]}}\big],\, GT,\, t_{v}\,\big|\,\sum_{v=1}^{t}\sum_{u=1}^{l_{v}} C_{[u][v]}, \\ &1\,\big|\,p^{a}_{[u][v]} = p_{[u][v]}\big[K + (1-K)\beta_{[u][v]}^{\,s-1}\big],\, GT,\, t_{v}\,\big|\,\sum_{v=1}^{t}\sum_{u=1}^{l_{v}} C_{[u][v]}. \end{split}$$

#### **4. Research Method**

The parameters of the traditional Pegels' and DeJong's learning effect are constants which are job-independent. This paper considers Pegels' and DeJong's models whose parameters are job-dependent. When 0 < *K* < 1, they are the learning scheduling models. When *K* > 1, they are the aging scheduling models.

#### *4.1. Makespan Minimization*

**Theorem 1.** *For the* $1\,|\,p^{a}_{[u][v]} = p_{[u][v]}[K + (1-K)s^{\alpha_{[u][v]}}],\, GT,\, t_{v}\,|\,C_{\max}$ *problem, the optimal solution is that the job sequence within each group is found by the assignment method and the order of the groups is arbitrary.*

**Proof.** Suppose there are $l_v$ jobs in the $v$th group, $l_1 + \cdots + l_t = l$, $1 \le v \le t$. The completion time of each group is

$$C_{[l_1][1]} = t_{[1]} + \sum_{u=1}^{l_1} p_{[u][1]}\big[K + (1-K)u^{\alpha_{[u][1]}}\big],\tag{3}$$

$$C_{[l_{v+1}][v+1]} = C_{[l_v][v]} + t_{[v+1]} + \sum_{u=1}^{l_{v+1}} p_{[u][v+1]}\big[K + (1-K)u^{\alpha_{[u][v+1]}}\big],\tag{4}$$

$$C_{\max} = C_{[l_t][t]} = \sum_{v=1}^{t} t_{[v]} + \sum_{v=1}^{t}\sum_{u=1}^{l_v} p_{[u][v]}\big[K + (1-K)u^{\alpha_{[u][v]}}\big].\tag{5}$$

It can be seen from $C_{\max}$ that the order of the groups is arbitrary and that the job sequence within each group can be found by the assignment method, so the jobs in each group can be sequenced in polynomial time by solving the following assignment model:

$$\begin{array}{ll}\min & \sum_{u=1}^{l_{[v]}}\sum_{h=1}^{l_{[v]}} p_{[u][v]}\big[K + (1-K)h^{\alpha_{[u][v]}}\big]\, e_{u[v]h},\\ \text{s.t.} & \sum_{h=1}^{l_{[v]}} e_{u[v]h} = 1, \quad v = 1,\dots,t;\; u = 1,\dots,l_{[v]},\\ & \sum_{u=1}^{l_{[v]}} e_{u[v]h} = 1, \quad v = 1,\dots,t;\; h = 1,\dots,l_{[v]},\\ & e_{u[v]h} \in \{0, 1\}, \quad v = 1,\dots,t;\; u, h = 1,\dots,l_{[v]}.\end{array}\tag{6}$$
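The per-group assignment model (6) can be solved by the Hungarian method in $O(n^3)$ time. As a self-contained illustration (with hypothetical data; exhaustive search over permutations stands in for the Hungarian algorithm, which is fine for small groups):

```python
from itertools import permutations

def solve_group_assignment(p, alpha, K):
    """Solve model (6) for one group by exhaustive search: placing job j at
    position h (1-indexed) costs p[j] * (K + (1 - K) * h**alpha[j]).
    Returns (order, total); order[h-1] is the job index put at position h."""
    n = len(p)
    best_order, best_total = None, float("inf")
    for order in permutations(range(n)):
        total = sum(p[j] * (K + (1 - K) * (h + 1) ** alpha[j])
                    for h, j in enumerate(order))
        if total < best_total:
            best_order, best_total = order, total
    return best_order, best_total

# Hypothetical group: three jobs with job-dependent learning indices.
order, total = solve_group_assignment([6.0, 4.0, 2.0], [-0.3, -0.7, -0.2], 0.5)
print(order, round(total, 3))
```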

The algorithm is summarized as follows. It is easy to show that the total time for Algorithm 1 is $O(n^3)$.

**Algorithm 1** $1\,|\,p^{a}_{[u][v]} = p_{[u][v]}[K + (1-K)s^{\alpha_{[u][v]}}],\, GT,\, t_{v}\,|\,C_{\max}$

**Require:** $t$, $K$, $l_v$, $t_v$, $p_{uv}$, $\alpha_{uv}$

**Ensure:** The job sequence within the group and $C_{\max}$

1: **Step 1:** For each group $G_v$, $v = 1, \dots, t$, solve the assignment model (6) to obtain the job sequence within the group.

2: **Step 2:** Arrange the groups in an arbitrary order.

3: **Last step:** Calculate $C_{\max}$.

**Example 1.** *If there are five jobs in total, they are divided into two groups. l* = 5*, t* = 2*, G*<sup>1</sup> = {*job*11, *job*21, *job*31}*, G*<sup>2</sup> = {*job*12, *job*22}*, K* = 0.5*, t*<sup>1</sup> = 2*, t*<sup>2</sup> = 3*, p*<sup>11</sup> = 7*, p*<sup>21</sup> = 5*, p*<sup>31</sup> = 3*, p*<sup>12</sup> = 4*, p*<sup>22</sup> = 2*, α*<sup>11</sup> = −0.5*, α*<sup>21</sup> = −0.1*, α*<sup>31</sup> = −0.6*, α*<sup>12</sup> = −0.2*, α*<sup>22</sup> = −0.8*.*

#### **Solution:**

The processing times of the jobs in *G*1 and *G*2 at different positions are shown in Table 1 and Table 2, respectively.

**Table 1.** Processing time of the group *G*1 jobs in different positions.

| Job | Position 1 | Position 2 | Position 3 |
|------|------------|------------|------------|
| *job*11 | 7 | 5.97 | 5.52 |
| *job*21 | 5 | 4.83 | 4.74 |
| *job*31 | 3 | 2.49 | 2.28 |

By the assignment method, the order of the jobs is *job*31 → *job*21 → *job*11 in *G*1.

**Table 2.** Processing time of the group *G*2 jobs in different positions.

| Job | Position 1 | Position 2 |
|------|------------|------------|
| *job*12 | 4 | 3.74 |
| *job*22 | 2 | 1.57 |

By the assignment method, the order of the jobs is *job*12 → *job*22 in *G*2.

$$\begin{split} C_{\max} &= \sum_{v=1}^{2} t_{[v]} + \sum_{v=1}^{2}\sum_{u=1}^{l_v} p_{[u][v]}\big[0.5 + 0.5\times u^{\alpha_{[u][v]}}\big] \\ &= 2 + 3 + 5(0.5 + 0.5\times 2^{-0.1}) + 7(0.5 + 0.5\times 3^{-0.5}) + 3 + 4 + 2(0.5 + 0.5\times 2^{-0.8}) \\ &= 23.924. \end{split}\tag{7}$$
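The makespan of Example 1 can be rechecked by evaluating equation (5) directly; the helper below is our own sketch, summing the actual processing times of a group for a given job order:

```python
def group_time(p_seq, alpha_seq, K):
    """Total actual processing time of one group, with jobs listed in their
    scheduled order; the position u is 1-indexed as in equation (5)."""
    return sum(p * (K + (1 - K) * u ** a)
               for u, (p, a) in enumerate(zip(p_seq, alpha_seq), start=1))

# Example 1: G1 ordered job31 -> job21 -> job11, G2 ordered job12 -> job22.
g1 = group_time([3, 5, 7], [-0.6, -0.1, -0.5], 0.5)
g2 = group_time([4, 2], [-0.2, -0.8], 0.5)
cmax = (2 + 3) + g1 + g2  # setup times plus group processing times
print(round(cmax, 3))
```

Evaluated exactly, this gives about 23.928; the small gap to the 23.924 reported in equation (7) appears to come from rounding of intermediate table values.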

**Theorem 2.** *For the* $1\,|\,p^{a}_{[u][v]} = p_{[u][v]}[K + (1-K)\beta_{[u][v]}^{\,s-1}],\, GT,\, t_{v}\,|\,C_{\max}$ *problem, the optimal solution is that the job sequence within each group is found by the assignment method and the order of the groups is arbitrary.*

**Proof.** The completion time of each group is

$$C_{[l_1][1]} = t_{[1]} + \sum_{u=1}^{l_1} p_{[u][1]}\big[K + (1-K)\beta_{[u][1]}^{\,u-1}\big],\tag{8}$$

$$C_{[l_{v+1}][v+1]} = C_{[l_v][v]} + t_{[v+1]} + \sum_{u=1}^{l_{v+1}} p_{[u][v+1]}\big[K + (1-K)\beta_{[u][v+1]}^{\,u-1}\big],\tag{9}$$

$$C_{\max} = C_{[l_t][t]} = \sum_{v=1}^{t} t_{[v]} + \sum_{v=1}^{t}\sum_{u=1}^{l_v} p_{[u][v]}\big[K + (1-K)\beta_{[u][v]}^{\,u-1}\big].\tag{10}$$

It can be seen from $C_{\max}$ that the order of the groups is arbitrary and that the job sequence within each group can be found by the assignment method, so the jobs in each group can be sequenced in polynomial time by solving the following assignment model:

$$\begin{array}{ll}\min & \sum_{u=1}^{l_{[v]}}\sum_{h=1}^{l_{[v]}} p_{[u][v]}\big[K + (1-K)\beta_{[u][v]}^{\,h-1}\big]\, e_{u[v]h},\\ \text{s.t.} & \sum_{h=1}^{l_{[v]}} e_{u[v]h} = 1, \quad v = 1,\dots,t;\; u = 1,\dots,l_{[v]},\\ & \sum_{u=1}^{l_{[v]}} e_{u[v]h} = 1, \quad v = 1,\dots,t;\; h = 1,\dots,l_{[v]},\\ & e_{u[v]h} \in \{0, 1\}, \quad v = 1,\dots,t;\; u, h = 1,\dots,l_{[v]}.\end{array}\tag{11}$$

The algorithm is summarized as follows. It is easy to show that the total time for Algorithm 2 is $O(n^3)$.

**Algorithm 2** $1\,|\,p^{a}_{[u][v]} = p_{[u][v]}[K + (1-K)\beta_{[u][v]}^{\,s-1}],\, GT,\, t_{v}\,|\,C_{\max}$

**Require:** $t$, $K$, $l_v$, $t_v$, $p_{uv}$, $\beta_{uv}$

**Ensure:** The job sequence within the group and $C_{\max}$

1: **Step 1:** For each group $G_v$, $v = 1, \dots, t$, solve the assignment model (11) to obtain the job sequence within the group.

2: **Step 2:** Arrange the groups in an arbitrary order.

3: **Last step:** Calculate $C_{\max}$.
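Algorithm 2 differs from Algorithm 1 only in the position factor, $\beta_{[u][v]}^{\,s-1}$ instead of $s^{\alpha_{[u][v]}}$. A minimal sketch of its within-group step (hypothetical data; exhaustive search stands in for the assignment method):

```python
from itertools import permutations

def solve_group_beta(p, beta, K):
    """Within-group step of Algorithm 2: placing job j at position h
    (1-indexed) costs p[j] * (K + (1 - K) * beta[j]**(h - 1))."""
    n = len(p)
    best_order, best_total = None, float("inf")
    for order in permutations(range(n)):
        total = sum(p[j] * (K + (1 - K) * beta[j] ** h)
                    for h, j in enumerate(order))  # h = position - 1
        if total < best_total:
            best_order, best_total = order, total
    return best_order, best_total

# Hypothetical group: three jobs with job-dependent learning rates beta.
order, total = solve_group_beta([6.0, 4.0, 2.0], [0.9, 0.6, 0.8], 0.5)
print(order, round(total, 3))
```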
*4.2. Total Completion Time Minimization*

**Theorem 3.** *For the* $1\,|\,p^{a}_{[u][v]} = p_{[u][v]}[K + (1-K)s^{\alpha_{[u][v]}}],\, GT,\, t_{v}\,|\,\sum_{v=1}^{t}\sum_{u=1}^{l_v} C_{[u][v]}$ *problem, the optimal solution is that the job sequence within each group can be found by the assignment method and the groups are arranged in nondecreasing order of*

$$\frac{t_b + \sum_{u=1}^{l_b} p_{[u]b}\big[K + (1-K)u^{\alpha_{[u]b}}\big]}{l_b}.$$

**Proof.** Suppose there are $l_v$ jobs in the $v$th group, $l_1 + \cdots + l_t = l$, $1 \le v \le t$. The completion time of each job is

$$C_{[u][1]} = t_{[1]} + \sum_{k=1}^{u} p_{[k][1]}\big[K + (1-K)k^{\alpha_{[k][1]}}\big], \quad 1 \le u \le l_1,\tag{12}$$

$$C_{[u][v+1]} = C_{[l_v][v]} + t_{[v+1]} + \sum_{k=1}^{u} p_{[k][v+1]}\big[K + (1-K)k^{\alpha_{[k][v+1]}}\big], \quad 1 \le u \le l_{v+1}.\tag{13}$$

Therefore, the total completion time is

$$\sum_{v=1}^{t}\sum_{u=1}^{l_v} C_{[u][v]} = \sum_{v=1}^{t} l_v t_{[v]} + \sum_{v=2}^{t} l_v C_{[l_{v-1}][v-1]} + \sum_{v=1}^{t}\sum_{u=1}^{l_v} (l_v - u + 1)\, p_{[u][v]}\big[K + (1-K)u^{\alpha_{[u][v]}}\big].\tag{14}$$

The first term in the above formula is a constant. The second term is

$$\sum_{v=2}^{t} l_v C_{[l_{v-1}][v-1]} = \sum_{v=2}^{t} l_v \sum_{k=1}^{v-1}\Big\{t_{[k]} + \sum_{u=1}^{l_k} p_{[u][k]}\big[K + (1-K)u^{\alpha_{[u][k]}}\big]\Big\}.\tag{15}$$

We prove that the second term obtains the optimal solution by the adjacent exchange method. Let *S*<sup>1</sup> = (*θ*1, *G<sup>b</sup>* , *Gc*, *θ*2) and *S*<sup>2</sup> = (*θ*1, *Gc*, *G<sup>b</sup>* , *θ*2) be two job sequences with the same sequence except *G<sup>b</sup>* and *Gc*. *G<sup>b</sup>* is at the *γ*th position in the *S*<sup>1</sup> sequence, and *G<sup>c</sup>* is at the (*γ* + 1)th position in the *S*<sup>1</sup> sequence. There are *l<sup>b</sup>* jobs in group *G<sup>b</sup>* , and there are *l<sup>c</sup>* jobs in group *Gc*.

$$\begin{split} &\sum_{v=2}^{t} l_v C_{[l_{v-1}][v-1]}(S_1) - \sum_{v=2}^{t} l_v C_{[l_{v-1}][v-1]}(S_2) \\ =\; & l_b \sum_{v=1}^{\gamma-1}\Big\{t_{[v]} + \sum_{u=1}^{l_v} p_{[u][v]}\big[K + (1-K)u^{\alpha_{[u][v]}}\big]\Big\} + l_c \sum_{v=1}^{\gamma}\Big\{t_{[v]} + \sum_{u=1}^{l_v} p_{[u][v]}\big[K + (1-K)u^{\alpha_{[u][v]}}\big]\Big\} \\ & - l_c \sum_{v=1}^{\gamma-1}\Big\{t_{[v]} + \sum_{u=1}^{l_v} p_{[u][v]}\big[K + (1-K)u^{\alpha_{[u][v]}}\big]\Big\} - l_b \sum_{v=1}^{\gamma}\Big\{t_{[v]} + \sum_{u=1}^{l_v} p_{[u][v]}\big[K + (1-K)u^{\alpha_{[u][v]}}\big]\Big\} \\ =\; & l_c\Big\{t_b + \sum_{u=1}^{l_b} p_{[u]b}\big[K + (1-K)u^{\alpha_{[u]b}}\big]\Big\} - l_b\Big\{t_c + \sum_{u=1}^{l_c} p_{[u]c}\big[K + (1-K)u^{\alpha_{[u]c}}\big]\Big\}. \end{split}\tag{16}$$

If *S*<sup>1</sup> is better than *S*2, the above formula is less than 0. Then,

$$l_c\Big\{t_b + \sum_{u=1}^{l_b} p_{[u]b}\big[K + (1-K)u^{\alpha_{[u]b}}\big]\Big\} < l_b\Big\{t_c + \sum_{u=1}^{l_c} p_{[u]c}\big[K + (1-K)u^{\alpha_{[u]c}}\big]\Big\},\tag{17}$$

$$\frac{t_b + \sum_{u=1}^{l_b} p_{[u]b}\big[K + (1-K)u^{\alpha_{[u]b}}\big]}{l_b} < \frac{t_c + \sum_{u=1}^{l_c} p_{[u]c}\big[K + (1-K)u^{\alpha_{[u]c}}\big]}{l_c}.\tag{18}$$

The third term can be solved by the assignment method.

$$\begin{array}{ll}\min & \sum_{u=1}^{l_{[v]}}\sum_{h=1}^{l_{[v]}} (l_{[v]} - h + 1)\, p_{[u][v]}\big[K + (1-K)h^{\alpha_{[u][v]}}\big]\, e_{u[v]h},\\ \text{s.t.} & \sum_{h=1}^{l_{[v]}} e_{u[v]h} = 1, \quad v = 1,\dots,t;\; u = 1,\dots,l_{[v]},\\ & \sum_{u=1}^{l_{[v]}} e_{u[v]h} = 1, \quad v = 1,\dots,t;\; h = 1,\dots,l_{[v]},\\ & e_{u[v]h} \in \{0, 1\}, \quad v = 1,\dots,t;\; u, h = 1,\dots,l_{[v]}.\end{array}\tag{19}$$

The algorithm is summarized as follows. It is easy to show that the total time for Algorithm 3 is $O(n^3)$.

**Algorithm 3** $1\,|\,p^{a}_{[u][v]} = p_{[u][v]}[K + (1-K)s^{\alpha_{[u][v]}}],\, GT,\, t_{v}\,|\,\sum_{v=1}^{t}\sum_{u=1}^{l_v} C_{[u][v]}$

**Require:** $t$, $K$, $l_v$, $t_v$, $p_{uv}$, $\alpha_{uv}$

**Ensure:** The job sequence within the group, the group sequence and $\sum_{v=1}^{t}\sum_{u=1}^{l_v} C_{[u][v]}$

1: **Step 1:** For each group, solve the assignment model (19) to obtain the job sequence within the group.

2: **Step 2:** Arrange the groups in nondecreasing order of $\Big(t_b + \sum_{u=1}^{l_b} p_{[u]b}\big[K + (1-K)u^{\alpha_{[u]b}}\big]\Big)/l_b$.

3: **Last step:** Calculate $\sum_{v=1}^{t}\sum_{u=1}^{l_v} C_{[u][v]}$.

**Example 2.** *The conditions are the same as in Example 1 and the objective function is changed from C*max *to* ∑ *t <sup>v</sup>*=<sup>1</sup> ∑ *lv u*=1 *C*[*u*][*v*] *.*

#### **Solution:**

By the assignment method, the order of the jobs is *job*<sup>31</sup> → *job*<sup>21</sup> → *job*<sup>11</sup> in *G*<sup>1</sup> and the order of the jobs is *job*<sup>12</sup> → *job*<sup>22</sup> in *G*2.

$$f(G_1) = \frac{t_1 + \sum_{u=1}^{l_1} p_{[u]1}\big[K + (1-K)u^{\alpha_{[u]1}}\big]}{l_1} = \frac{2 + \sum_{u=1}^{3} p_{[u]1}\big[0.5 + 0.5\times u^{\alpha_{[u]1}}\big]}{3} = 5.12,\tag{20}$$

$$f(G_2) = \frac{t_2 + \sum_{u=1}^{l_2} p_{[u]2}\big[K + (1-K)u^{\alpha_{[u]2}}\big]}{l_2} = \frac{3 + \sum_{u=1}^{2} p_{[u]2}\big[0.5 + 0.5\times u^{\alpha_{[u]2}}\big]}{2} = 4.29.$$

Therefore, the order of the groups is *G*<sup>2</sup> → *G*1.

$$\sum_{v=1}^{2}\sum_{u=1}^{l_v} C_{[u][v]} = (3 + 4) + 8.57 + 13.57 + 18.4 + 23.92 = 71.46.\tag{21}$$
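The group-ordering rule of Theorem 3 is easy to reproduce in code. The sketch below (the helper name is ours) computes the indices $f(G_1)$ and $f(G_2)$ of Example 2 and confirms the order $G_2 \rightarrow G_1$:

```python
def group_index(setup, p_seq, alpha_seq, K):
    """Theorem 3's group priority: (t_b + total actual group time) / l_b,
    with jobs listed in their within-group scheduled order."""
    total = setup + sum(p * (K + (1 - K) * u ** a)
                        for u, (p, a) in enumerate(zip(p_seq, alpha_seq), 1))
    return total / len(p_seq)

# Example 2 data, within-group orders taken from Example 1.
f1 = group_index(2, [3, 5, 7], [-0.6, -0.1, -0.5], 0.5)
f2 = group_index(3, [4, 2], [-0.2, -0.8], 0.5)
print(round(f1, 2), round(f2, 2))  # f2 < f1, so G2 is scheduled first
```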

The proof for the $1\,|\,p^{a}_{[u][v]} = p_{[u][v]}[K + (1-K)\beta_{[u][v]}^{\,s-1}],\, GT,\, t_{v}\,|\,\sum_{v=1}^{t}\sum_{u=1}^{l_v} C_{[u][v]}$ problem is the same as that of Theorem 3.

**Theorem 4.** *For the* $1\,|\,p^{a}_{[u][v]} = p_{[u][v]}[K + (1-K)\beta_{[u][v]}^{\,s-1}],\, GT,\, t_{v}\,|\,\sum_{v=1}^{t}\sum_{u=1}^{l_v} C_{[u][v]}$ *problem, the optimal solution is that the job sequence within each group can be found by the assignment method and the groups are arranged in nondecreasing order of*

$$\frac{t_b + \sum_{u=1}^{l_b} p_{[u]b}\big[K + (1-K)\beta_{[u]b}^{\,u-1}\big]}{l_b}.$$

The algorithm is summarized as follows. It is easy to show that the total time for Algorithm 4 is $O(n^3)$.

**Algorithm 4** $1\,|\,p^{a}_{[u][v]} = p_{[u][v]}[K + (1-K)\beta_{[u][v]}^{\,s-1}],\, GT,\, t_{v}\,|\,\sum_{v=1}^{t}\sum_{u=1}^{l_v} C_{[u][v]}$

**Require:** $t$, $K$, $l_v$, $t_v$, $p_{uv}$, $\beta_{uv}$

**Ensure:** The job sequence within the group, the group sequence and $\sum_{v=1}^{t}\sum_{u=1}^{l_v} C_{[u][v]}$

1: **Step 1:** For each group, solve the assignment model analogous to (19), with $h^{\alpha_{[u][v]}}$ replaced by $\beta_{[u][v]}^{\,h-1}$, to obtain the job sequence within the group.

2: **Step 2:** Arrange the groups in nondecreasing order of $\Big(t_b + \sum_{u=1}^{l_b} p_{[u]b}\big[K + (1-K)\beta_{[u]b}^{\,u-1}\big]\Big)/l_b$.

3: **Last step:** Calculate $\sum_{v=1}^{t}\sum_{u=1}^{l_v} C_{[u][v]}$.

#### **5. Conclusions**

This paper considered single-machine group scheduling models with Pegels' and DeJong's learning effects and with Pegels' and DeJong's aging effects. When 0 < *K* < 1, they are learning scheduling models; when *K* > 1, they are aging scheduling models. In classical scheduling models, Pegels' and DeJong's learning effects are constant or position-dependent, whereas in this paper the learning and aging effects are job-dependent; compared with the classical learning and aging scheduling models, the proposed models are therefore more general and realistic. The objective functions were to minimize the total completion time and the makespan, and we proposed polynomial-time methods for all the studied problems.

In the future, we can also consider multi-machine Pegels' and DeJong's learning scheduling.

#### **6. Limitations**

The problem comes from a real production scheduling setting: in some practical single-machine group scheduling problems, the learning effect and aging effect are job-dependent. We studied this issue in depth to determine whether each problem is NP-hard or polynomial-time solvable, and after a long period of exploration the present results were achieved.

**Author Contributions:** The work presented here was performed in collaboration among all authors. Conceptualization, J.Q.; methodology, J.Q.; validation, Y.Z.; investigation, Y.Z.; writing—original draft preparation, J.Q.; writing—review and editing, Y.Z.; supervision, Y.Z. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Fundamental Research Funds for the Central Universities, grant numbers N2105020 (J. Qian) and N2105021 (Y. Zhan), and by the Natural Science Foundation of Liaoning Province Project, grant number 2021-MS-102 (Y. Zhan).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** We thank the editor and the anonymous reviewers for their helpful comments and insights that significantly improved our paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**

