**1. Introduction**

We consider single-machine scheduling problems, where a set of *n* jobs *N* = {1, 2, ... , *n*} has to be processed on a single machine starting at time *τ*. For each job *j*, a release date *rj*, a processing time *pj* and a due date *dj* are given. Many scheduling problems require the minimization of some maximum term. Denote by *Cj* the completion time of job *j*, then the minimization of the makespan (i.e., the maximum completion time of the jobs)

$$C\_{\max} = \max\_{j=1,\dots,n} C\_j,$$

or the minimization of maximum lateness

$$L\_{\max} = \max\_{j=1,\dots,n} \{ C\_j - d\_j \},$$

are well-known examples of such an optimization criterion.

In this paper, we consider two problems related to such a min-max problem, namely the dual problem and an inverse problem of the single-machine scheduling problem with given release dates and the objective of minimizing the maximum penalty. While the original problem 1|*rj*|*Lmax* is *NP*-hard in the strong sense [1], we prove that both the dual and the inverse problem can be solved in polynomial time.

Due to the *NP*-hardness of the problem 1|*rj*|*Lmax*, several branch and bound algorithms have been developed and special cases of the problem have been considered, see e.g., [2–9]. In [9], it has been shown that if the release dates of all jobs are contained in the interval [*dj* − *pj* − *A*, *dj* − *A*] for some constant *A*, the problem can be solved in *O*(*n* log *n*) time if no machine idle times are allowed and in *O*(*n*<sup>2</sup> log *n*) time if machine idle times are allowed. Another important special case of this problem has been considered in [10]. In that paper, it is shown that for naturally bounded job data, the problem can be polynomially solved. More precisely, a polynomial time solution of the variant is given when the maximal job processing time and the differences between the job release dates are bounded by a constant. The binary search procedure presented in this work determines an optimal solution in *O*(*n*<sup>2</sup> log *n* log *pmax*) or *O*(*dmaxn* log *n* log *pmax*) time, where *pmax* is the maximal processing time and *dmax* is the maximal due date.

In [11], some computational experiences and applications for the more general case with additional precedence constraints are reported. Their algorithm turned out to be superior to earlier algorithms. Some applications to job shop scheduling are also discussed. A more recent branch and bound algorithm for this single-machine problem with release dates and precedence constraints has been given in [12]. This algorithm uses four heuristics for finding initial upper bounds and a variable neighborhood search procedure. It solved most considered instances within one minute of CPU time. Several approximation schemes for four variants of the problem with additional non-availability or deadline constraints have been derived in [13]. An approximation algorithm for this single-machine problem with an additional workload-dependent maintenance duration has been presented in [14]. This algorithm is even optimal for some special cases of the problem. A hybrid metaheuristic search algorithm for the single-machine problem with given release dates and precedence constraints has been developed in [15]. Computational tests were made on the authors' own instance sets with 100 jobs and on instances from [12] with up to 1000 jobs. The hybridization of the electromagnetism algorithm with tabu search leads to a tradeoff between diversification and intensification strategies. The metric approach is another recent possibility for solving the problem 1|*rj*|*Lmax* approximately with a guaranteed maximal error, see e.g., [16]. The introduced metric delivers an upper bound on the absolute error of the objective function value. Taking a given instance of some problem and using the introduced metric, the nearest instance is determined for which a polynomial or pseudo-polynomial algorithm is known. Then a schedule is constructed for this instance and applied to the original instance.

Problems with several optimization criteria have also been considered. In [17], the single-machine problem with the primary criterion of minimizing maximum lateness and the secondary criterion of minimizing the maximum job completion time has been investigated. The author gives dominance properties and conditions under which the Pareto-optimal set can be found in polynomial time. The derived properties allow extension of the basic framework to exponential implicit enumeration schemes and polynomial approximation algorithms. The problem of finding the Pareto-optimal set for two criteria in the case of constraints on the source data has been considered in [18,19]. In [18], the idea of the dual approach is considered in detail, but there is no sufficient experimental study of the effectiveness of this approach. Lazarev et al. [20] considered the problem of minimizing maximum lateness and the makespan in the case of equal processing times and proposed a polynomial time approach for finding the Pareto-optimal set of feasible solutions. They presented two approaches, the efficiency of which depends on the number of jobs and the accuracy of the input-output parameters.

The dual and inverse problems considered in this paper are maximization problems. In the literature, there exist some works on other single-machine maximization problems, usually under the assumption of no inserted idle times between the processing of jobs on the machine. Maximization problems in single-machine problems were considered e.g., in [21,22]. The complexity and some algorithms for single-machine total tardiness maximization problems have been discussed in [23]. In [24], a pseudo-polynomial algorithm for the single-machine total tardiness maximization problem has been transformed by a graphical algorithm into a polynomial one.

The remainder of this paper is as follows. In Section 2, we introduce the dual problem of the single-machine problem, where the maximum penalty term of a job should be minimized. Section 3 considers an inverse problem, where the minimum of the penalty terms should be maximized. The solution of this dual problem is embedded into a branch and bound algorithm for the original

problem. Some computational results for this branch and bound algorithm are given in Section 4. The paper finishes with some concluding remarks.

#### **2. The Dual Problem**

Let us consider the general formulation of the *NP*-hard problem of minimizing the maximum penalty or cost *ϕ*max for a set of jobs on a single machine, i.e., problem 1 | *rj* | *ϕ*max. The machine cannot process more than one job of the set *N* = {1, ... , *n*} at a time. Preemptions of the processing of a job are prohibited. Let *ϕjk* (*Cjk* (*π*)) denote the penalty for the job *jk* if it is processed as the *k*-th job in the sequence *π* = (*j*1, *j*2, ... , *jk*, ... , *jn*). We assume that all *ϕjk* (*Cjk* (*π*)), *k* = 1, 2, ... , *n*, are arbitrary non-decreasing penalty functions.

By *μ*∗ we denote the optimal value of the objective function:

$$\mu^\* = \min\_{\pi \in \Pi(N)} \max\_{k=1,\ldots,n} \varphi\_{j\_k}(C\_{j\_k}(\pi)),\tag{1}$$

where Π(*N*) = {*π*1, *π*2, ... , *πn*!} denotes the set of all permutations (schedules) of the jobs of the set *N*.

In the scheduling literature, many special cases of the following general dual problem are considered. One wishes to find an optimal job sequence *π*∗ and the corresponding objective function value *ν*∗ such that

$$\nu^\* = \max\_{k=1,\ldots,n} \min\_{\pi \in \Pi(N)} \varphi\_{j\_k}(C\_{j\_k}(\pi)). \tag{2}$$

For convenience, we introduce a notation that takes into account the position of the job in the schedule. Let a schedule (job sequence) *π* ∈ Π(*N*) be given by *π* = (*j*1, *j*2, ... , *jn*). For the job, which is processed as the *k*-th job in the sequence, *k* = 1, 2, . . . , *n*, under a schedule *π*, we denote:

$$\nu\_k = \min\_{\pi \in \Pi(N)} \varphi\_{j\_k}(C\_{j\_k}(\pi)), \quad k = 1, 2, \dots, n. \tag{3}$$

Obviously,

$$\nu^\* = \max\_{k=1,\ldots,n} \nu\_k. \tag{4}$$

**Lemma 1** ([25])**.** *Let ϕj*(*t*), *j* = 1, 2, ... , *n*, *be arbitrary non-decreasing penalty functions in the problem* 1 | *rj* | *ϕ*max*. Then we have νn* ≥ *νk for all k* = 1, 2, . . . , *n, i.e., ν*<sup>∗</sup> = *νn*.

**Proof.** Suppose that there exists a number *k*, *k* < *n*, such that *νk* > *νn*. Assume that *πn* = (*j*1, *j*2, ... , *jn*) is a schedule for which the value *νn* is obtained. Then we consider the schedule

$$\pi = (j\_{n-k+1}, j\_{n-k+2}, \dots, j\_n, j\_1, j\_2, \dots, j\_{n-k}).$$

Please note that under the schedule *π*, the job *jn* will be carried out as the *k*-th job in the sequence. Since *Cjn* (*πn*) ≥ *Cjn* (*π*), we have

$$\nu\_n = \varphi\_{j\_n}(C\_{j\_n}(\pi\_n)) \ge \varphi\_{j\_n}(C\_{j\_n}(\pi)) \ge \nu\_k.$$

Due to the assumption *νk* > *νn*, we obtain the inequality

$$\nu\_k > \nu\_n \ge \nu\_k,$$

which is a contradiction. The lemma has been proved.

Thus, the solution of the dual problem of 1|*rj*|*ϕ*max is reduced to the problem of finding the value *νn*. We enumerate all jobs in order of non-decreasing release dates: *ri*1 ≤ *ri*2 ≤ ··· ≤ *rin*. Due to

$$\nu\_n = \min\_{\pi \in \Pi(N)} \varphi\_{j\_n}(C\_{j\_n}(\pi)),$$

we will put each of the jobs *j* of the set *N* onto the last (i.e., the *n*-th) position. The other *n* − 1 jobs of the set *N* \ {*j*} are arranged in their original order starting at time *τ*. This gives the earliest completion time of processing the jobs from the set *N* \ {*j*}. This procedure is formally summarized in Algorithm 1. The input data of Algorithm 1 is the pair (*N*, *τ*), where *τ* is used to calculate *Cj*.

**Algorithm 1:** Solution of the dual problem of the problem 1 | *rj* | *ϕ*max


The complexity of Algorithm 1 can be estimated as follows. We need *O*(*n* log *n*) operations to construct the schedule *π<sup>r</sup>*. We need *O*(*n*) operations to find each of the *n* values *ϕik* (*Cik* (*π<sup>k</sup>*)). Therefore, to determine the value *ν*<sup>∗</sup> and the corresponding job *ik*, for which the value *ν*<sup>∗</sup> is obtained, no more than *O*(*n*<sup>2</sup>) operations are required.
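The procedure of Algorithm 1 can be sketched in Python as follows; the function and field names (`solve_dual`, `r`, `p`, `phi`) are our own illustrative choices, not notation from the paper.

```python
def solve_dual(jobs, tau=0):
    """Sketch of Algorithm 1 for the dual problem of 1|r_j|phi_max.

    `jobs` is a list of dicts with keys 'r' (release date), 'p'
    (processing time) and 'phi' (a non-decreasing penalty function of
    the completion time).  Returns (nu_star, last_job_index).
    """
    # Enumerate the jobs in non-decreasing order of release dates.
    order = sorted(range(len(jobs)), key=lambda j: jobs[j]['r'])

    def completion(seq, start):
        # Earliest completion time when the jobs in `seq` are processed
        # in the given order, starting not before `start`.
        t = start
        for j in seq:
            t = max(t, jobs[j]['r']) + jobs[j]['p']
        return t

    best_val, best_job = None, None
    for last in order:
        # Put job `last` onto the n-th position; the other n-1 jobs are
        # arranged in release-date order starting at time tau.
        rest = [j for j in order if j != last]
        c_last = max(completion(rest, tau), jobs[last]['r']) + jobs[last]['p']
        val = jobs[last]['phi'](c_last)
        if best_val is None or val < best_val:
            best_val, best_job = val, last
    return best_val, best_job
```

With the lateness penalties *ϕj*(*t*) = *t* − *dj*, the sketch returns the dual bound *ν*<sup>∗</sup> together with the job placed last; each of the *n* candidate schedules costs *O*(*n*), matching the *O*(*n*<sup>2</sup>) estimate above.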

**Theorem 1** ([25])**.** *Let ϕj*(*t*), *j* = 1, 2, ... , *n*, *be arbitrary non-decreasing penalty functions for the problem* 1 | *rj* | *ϕ*max*. Then the inequality μ*<sup>∗</sup> ≥ *ν*<sup>∗</sup> *holds.*

**Proof.** Suppose the opposite, i.e., there exists an instance of the problem 1 | *rj* | *ϕ*max, for which the inequality *μ*<sup>∗</sup> < *ν*<sup>∗</sup> holds. Let *π*<sup>∗</sup> = (*j*1, *j*2, ... , *jn*) be an optimal schedule for this instance. Then we have

$$\varphi\_{j\_n}(C\_{j\_n}(\pi^\*)) \le \mu^\* < \nu^\*,$$

which contradicts equality (3):

$$\nu^\* = \nu\_n = \min\_{\pi \in \Pi(N)} \varphi\_{j\_n}(C\_{j\_n}(\pi)).$$

The theorem has been proved.

The obtained estimate can be efficiently used in constructing schemes of a branch and bound method for solving the problem 1 | *rj* | *ϕ*max, and for estimating the error of approximate solutions when the branch and bound algorithm stops without finding an optimal solution.
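On small instances, the relation *μ*<sup>∗</sup> ≥ *ν*<sup>∗</sup> of Theorem 1 can be verified by complete enumeration. The following sketch (the name `verify_theorem1` is ours) does this for the lateness penalty *ϕj*(*t*) = *t* − *dj*:

```python
from itertools import permutations

def verify_theorem1(jobs, tau=0):
    """Brute-force check of mu* >= nu* (Theorem 1) on a small instance.

    `jobs` is a list of (r, p, d) triples; the penalty of job j is its
    lateness phi_j(t) = t - d_j.  Returns (mu_star, nu_star).
    """
    def lateness_list(pi):
        t, lats = tau, []
        for j in pi:
            r, p, d = jobs[j]
            t = max(t, r) + p
            lats.append(t - d)
        return lats

    all_pi = list(permutations(range(len(jobs))))
    # (1): mu* = min over all schedules of the maximum lateness.
    mu = min(max(lateness_list(pi)) for pi in all_pi)
    # By Lemma 1, nu* = nu_n = min over all schedules of the lateness
    # of the job in the last position.
    nu = min(lateness_list(pi)[-1] for pi in all_pi)
    return mu, nu

mu, nu = verify_theorem1([(0, 2, 3), (1, 3, 6), (0, 4, 9)])
assert mu >= nu  # Theorem 1
```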

We denote by {*N*′, *τ*′, *ν*′, *π*′, *B*′} the sub-problem of processing the jobs of the set *N*′ ⊆ *N* from time *τ*′ ≥ *τ* on according to some partial sequence *π*′ for the jobs of the set *N* \ *N*′, where *ν*′ is the lower bound obtained by solving the dual problem of this instance, *τ*′ = *C*max(*π*′, *τ*) is the start time of the planning horizon for the jobs from the set *N*′, which is equal to the makespan value for the sequence *π*′, and *τ* is the time when the machine is ready to process the jobs from the set *N*. *B*′ is the set of jobs that cannot be placed on the corresponding first position of the schedule.

The subsequent Algorithm 2 implements one of the possible schemes of the branch and bound method using the solution of the dual problem. The branching in Algorithm 2 is carried out as a result of dividing the current sub-instance into two instances: put the job *f* (the job with the smallest due date from the set of jobs ready for processing) at the next position in the schedule and prohibit the inclusion of the job *f* at the next position in the schedule (by increasing the possible start time of job *f*). Denote

$$r\_j(\tau) = \max\{r\_j, \tau\}, \quad r(N,\tau) = \min\_{j\in N} r\_j(\tau).$$

Let |*N*| > 1. We choose a job *f* = *f*(*N*, *τ*) from the set *N* such that

$$f(N,\tau) = \arg\min\_{j \in N \backslash B} \left\{ d\_j \mid r\_j(\tau) = r(N,\tau) \right\}.$$

If *N* = {*i*}, then *f*(*N*, *τ*) = *i* for all *τ*. Here *B* is a set of jobs that cannot be placed on the current first position. Denote by *π*∗ the currently best schedule constructed for all jobs.

We note that a deeper comparative discussion of the characteristics of the particular search strategies can be found in [26,27].

**Algorithm 2:** Solution of the problem 1 | *rj* | *L*max by the branch and bound method based on the solution of the dual problem

**1. Initial step**

Let *π*<sup>∗</sup> = ∅. The list of instances contains the original instance {*N*, *τ*, *ν*, ∅, ∅}, where *ν* is a lower bound on the optimal objective function value obtained by solving the dual problem of this instance.

#### **2. Main step**

- *N*1 = *N*′ \ { *f* }, *τ*1 = max{*rf* , *τ*′} + *pf* , *B*1 = ∅, *π*1 = (*π*′, *f*), *ν*1 is a lower bound obtained by solving the dual problem of this instance by Algorithm 3;
- *N*2 = *N*′, *τ*2 = *τ*′, *B*2 = *B*′ ∪ { *f* }, *π*2 = *π*′, *ν*2 is a lower bound obtained by solving the dual problem of this instance by Algorithm 3.

If the list of instances is empty, STOP, otherwise repeat the main step 2.
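The branching scheme described above can be sketched compactly in Python. This is our own illustrative implementation (names such as `branch_and_bound` and `dual_bound` are not from the paper): it assumes the lateness objective of the problem 1 | *rj* | *L*max, explores the instance list depth-first, and uses the modified dual problem (in the spirit of Algorithm 3) as the lower bound.

```python
def branch_and_bound(jobs, tau=0):
    """Hedged sketch of Algorithm 2 for 1|r_j|L_max.

    `jobs` is a list of (r, p, d) triples.  A sub-instance is the tuple
    (remaining jobs N', start time tau', forbidden set B', partial
    sequence pi'); the dual problem supplies the lower bound.
    """
    n = len(jobs)

    def extend(t, j):                      # completion of j started after t
        r, p, _ = jobs[j]
        return max(t, r) + p

    def lmax(seq, start):                  # maximum lateness of a sequence
        t, worst = start, float('-inf')
        for j in seq:
            t = extend(t, j)
            worst = max(worst, t - jobs[j][2])
        return worst

    def dual_bound(rest, start, forbidden):
        # Modified dual problem: the first job may not come from B'.
        best = float('inf')
        for last in rest:
            others = sorted((j for j in rest if j != last),
                            key=lambda j: jobs[j][0])
            admissible = [j for j in others if j not in forbidden]
            if others and not admissible:
                continue                   # C_{i_k} = +infinity in this case
            if admissible:
                first = min(admissible, key=lambda j: jobs[j][0])
                others.remove(first)
                others.insert(0, first)    # admissible job on the first position
            t = start
            for j in others:
                t = extend(t, j)
            best = min(best, extend(t, last) - jobs[last][2])
        return best

    best_val, best_pi = float('inf'), None
    stack = [(frozenset(range(n)), tau, frozenset(), ())]
    while stack:
        rest, t, forb, pi = stack.pop()
        if not rest:
            val = lmax(pi, tau)
            if val < best_val:
                best_val, best_pi = val, list(pi)
            continue
        bound = max(lmax(pi, tau), dual_bound(rest, t, forb))
        if bound >= best_val:
            continue                       # prune by the dual lower bound
        cand = [j for j in rest if j not in forb]
        if not cand:
            continue                       # every job is forbidden: dead end
        # f: smallest due date among the earliest-available allowed jobs.
        rmin = min(max(jobs[j][0], t) for j in cand)
        f = min((j for j in cand if max(jobs[j][0], t) == rmin),
                key=lambda j: jobs[j][2])
        if len(cand) > 1:                  # branch 2: forbid f at this position
            stack.append((rest, t, forb | {f}, pi))
        # branch 1: schedule f next (explored first, depth-first)
        stack.append((rest - {f}, extend(t, f), frozenset(), pi + (f,)))
    return best_val, best_pi
```

Exploring branch 1 first makes the initial incumbent an EDD-style greedy schedule, after which the dual bound prunes most of the remaining nodes.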

To find the value *ν*′ in step 2(c), we need to modify Algorithm 1 taking into account a list *B* of jobs that cannot be placed on the current position. The input data of Algorithm 3 is the triplet (*N*, *τ*, *B*), where *τ* is used to calculate *Cj*.

**Algorithm 3:** Modification of the solution algorithm for the dual problem of the problem 1 | *rj* | *ϕ*max with respect to a list of jobs that cannot be on the first position


$$i\_l = \arg\min\_{j \in N \backslash (B \cup \{i\_k\})} r\_j,$$

and construct the schedule *π<sup>k</sup>* = (*il*, *π<sup>r</sup>* \ {*il*, *ik*}, *ik*). Find the value *ϕik* (*Cik* (*π<sup>k</sup>*)). If *N* \ (*B* ∪ {*ik*}) = ∅, then we set *Cik* = +∞.

3. Find the value *ν*<sup>∗</sup> = min*k*=1,...,*n* *ϕik* (*Cik* (*π<sup>k</sup>*)) and the job *ik* which gives the value *ν*<sup>∗</sup>.

It is easy to see that this algorithm can be used to solve the more general problem 1 | *rj* | *ϕ*max. In addition, if the algorithm is stopped without an empty list of instances due to a time limit, the current schedule *π*∗ can be taken as an approximate solution of the problem.

Hence, although the original problem 1 | *rj* | *ϕ*max is *NP*-hard in the strong sense (recall that problem 1 | *rj* | *L*max is *NP*–hard in the strong sense), the dual problem turned out to be polynomially solvable.

If precedence relations are specified between the jobs by an acyclic graph *G*, then the dual problem of the problem 1 | *rj*, *prec* | *ϕ*max can also be solved in a similar way. Since the argumentation is similar, we skip the details. Here, the core is to solve the problem 1 | *rj*, *prec* | *C*max. Jobs without successors according to the precedence graph *G* will be put one-by-one to the last positions in the job sequence. Thus, the dual problem of the problem 1 | *rj*, *prec* | *ϕ*max is also polynomially solvable.

For problems with *m* > 1 machines, e.g., the problem *Pm* | *rj*, *prec* | *ϕ*max, the core of the dual problem consists of solving a partition problem. This dual problem is *NP*-hard in the ordinary sense.

Thus, although in mathematical programming the original and dual problems have usually the same complexity status, it turned out that the dual problems of the scheduling problems considered in this paper have a lower complexity than the original problems. This interesting fact should be investigated further in more detail also for other scheduling problems.

#### **3. The Inverse Problem of the Maximum Lateness Problem**

The inverse problem of the *NP*-hard problem of minimizing maximum lateness 1 | *rj* | *L*max consists of finding a schedule *π*, which reaches the maximum minimal lateness and finding the value

$$\lambda^\* = \max\_{\pi \in \Pi(N)} \min\_{k=1,\ldots,n} L\_{j\_k}(C\_{j\_k}(\pi)).\tag{5}$$

Please note that for this problem, inserted idle times of the machine are prohibited.

This problem has previously been solved only for the case of simultaneous availability of the set *N* for processing, i.e., *rj* = 0 for all *j* ∈ *N*, see [28]. We consider the general case of the problem 1 | *rj* | max *L*min.

**Lemma 2.** *There exists an optimal schedule π* = (*i*1,..., *in*) *for the problem* 1 | *rj* | max *L*min*, for which*

$$d\_{i\_k} - p\_{i\_k} \le d\_{i\_{k+1}} - p\_{i\_{k+1}}, \quad k = 2, 3, \dots, n - 1,\tag{6}$$

*and*

$$\lambda^\* = \min\_{k=1,\dots,n} L\_{i\_k}(C\_{i\_k}(\pi)).$$

**Proof.** Assume that at least one of the inequalities (6) is not satisfied for an optimal schedule *π*′ = (*j*1,..., *jn*) and let

$$\lambda^\* = \min\_{k=1,\dots,n} L\_{j\_k}(C\_{j\_k}(\pi')).$$

In what follows, the proof will consist of two stages, which can be repeated several times.

**Step 1.** If there are no machine idle times in the schedule *π*′, then go to step 2. Otherwise, consider the last idle time in the schedule *π*′:

$$C\_{j\_k} < r\_{j\_{k+1}} \quad \text{and} \quad r\_{j\_m} \le C\_{j\_{m-1}}, \quad m = k+2, \dots, n.$$

Construct the schedule *π*″ = (*jk*+1, *j*1,..., *jk*, *jk*+2,..., *jn*). Since

$$C\_j(\pi'') \ge C\_j(\pi') \quad \text{for all } j \in N,$$

the value of the minimal lateness will not decrease. There will be no idle time under the schedule *π*″, and the optimal value *λ*<sup>∗</sup> is preserved. Set *π*′ := *π*″ and go to step 2.

**Step 2.** If the schedule *π*′ meets the conditions of Lemma 2, the proof is completed. If there exist two jobs *jl*, *jl*+1, for which

$$d\_{j\_l} - p\_{j\_l} > d\_{j\_{l+1}} - p\_{j\_{l+1}},$$

then exchange the jobs *jl* and *jl*+1, which yields the schedule

$$\pi'' = (j\_1, \dots, j\_{l-1}, j\_{l+1}, j\_l, j\_{l+2}, \dots, j\_n).$$

As there are no machine idle times under the schedule *π*′, we have

$$r\_{j\_l} \le C\_{j\_{l-1}}(\pi').$$

There are the following possible cases:

(1) Let *rjl*+1 ≤ *Cjl*−1(*π*′). Obviously, in this case we have

$$C\_{j\_k}(\pi') = C\_{j\_k}(\pi''), \quad k = 1, 2, \dots, l - 1, l + 2, \dots, n. \tag{7}$$

According to the assumptions, inequality

$$C\_{j\_{l-1}}(\pi') + p\_{j\_l} + p\_{j\_{l+1}} - d\_{j\_{l+1}} > C\_{j\_{l-1}}(\pi') + p\_{j\_l} - d\_{j\_l}\tag{8}$$

holds. Moreover, we have

$$C\_{j\_{l-1}}(\pi') + p\_{j\_{l+1}} - d\_{j\_{l+1}} > C\_{j\_{l-1}}(\pi') + p\_{j\_l} - d\_{j\_l};\tag{9}$$

$$C\_{j\_{l-1}}(\pi') + p\_{j\_{l+1}} + p\_{j\_l} - d\_{j\_l} > C\_{j\_{l-1}}(\pi') + p\_{j\_l} - d\_{j\_l}.\tag{10}$$

Formulas (7)–(10) show that the minimal lateness is not reduced. Set *π*′ := *π*″ and repeat step 2.

(2) Let *rjl*+1 > *Cjl*−1(*π*′). In this case, we have

$$C\_{j\_k}(\pi'') = C\_{j\_k}(\pi'), \quad k = 1, 2, \dots, l - 1,\tag{11}$$

$$C\_{j\_k}(\pi'') > C\_{j\_k}(\pi'), \quad k = l + 2, \ldots, n. \tag{12}$$

According to the assumptions, we have

$$C\_{j\_{l+1}}(\pi'') - d\_{j\_{l+1}} > C\_{j\_l}(\pi') - d\_{j\_l};\tag{13}$$

$$C\_{j\_l}(\pi'') - d\_{j\_l} > C\_{j\_{l+1}}(\pi') - d\_{j\_{l+1}}.\tag{14}$$

Formulas (8) and (11)–(14) show that the minimal lateness is not reduced. Set *π*′ := *π*″ and go to step 1.

In a finite number of steps, we construct an optimal schedule satisfying the conditions of the lemma. The lemma has been proved.

Algorithm 4 constructs *n* schedules, one of which satisfies the conditions of the lemma.


	- (a) construct the schedule *π<sup>k</sup>* = (*k*, 1, . . . , *k* − 1, *k* + 1, . . . , *n*) and
	- (b) determine *λ<sup>k</sup>* = min *j*=1,...,*n Lj*(*Cj*(*πk*)).

*O*(*n* log *n*) operations are needed for renumbering the jobs of the set *N*. *O*(*n*) operations are needed for constructing the schedule *π<sup>k</sup>* and calculating the value *λk*, *k* = 1, ... , *n*. Thus, no more than *O*(*n*<sup>2</sup>) operations are needed to find the value *λ*<sup>∗</sup> = max*k*=1,...,*n* *λk*.
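Algorithm 4, as described above, admits the following Python sketch (the name `solve_inverse` and the tuple representation of jobs are ours); it assumes that the machine waits only when the next job has not yet been released, i.e., no idle time is inserted deliberately:

```python
def solve_inverse(jobs, tau=0):
    """Sketch of Algorithm 4 for the problem 1|r_j|max L_min.

    `jobs` is a list of (r, p, d) triples.  Returns the value
    lambda_star together with the schedule attaining it.
    """
    n = len(jobs)
    # Renumber the jobs according to Lemma 2: non-decreasing d_j - p_j.
    order = sorted(range(n), key=lambda j: jobs[j][2] - jobs[j][1])

    def min_lateness(pi):
        # Minimal lateness of the sequence pi, idling only when forced
        # by a release date.
        t, lats = tau, []
        for j in pi:
            r, p, d = jobs[j]
            t = max(t, r) + p
            lats.append(t - d)
        return min(lats)

    best_val, best_pi = float('-inf'), None
    for k in range(n):
        # pi_k = (k, 1, ..., k-1, k+1, ..., n) in the renumbered order.
        pi = [order[k]] + order[:k] + order[k + 1:]
        lam = min_lateness(pi)
        if lam > best_val:
            best_val, best_pi = lam, pi
    return best_val, best_pi
```

Sorting costs *O*(*n* log *n*) and each of the *n* candidate schedules is evaluated in *O*(*n*) time, matching the *O*(*n*<sup>2</sup>) bound stated above.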

The objective function value of a solution of the problem of maximizing minimal lateness 1 | *rj* | max *L*min is a lower bound on the optimal objective function value for the original problem 1 | *rj* | *L*max.

**Theorem 2** ([18])**.** *For the optimal function values of the problem* 1 | *rj* | *L*max *and the corresponding inverse problem* 1 | *rj* | max *L*min*, the inequality μ*<sup>∗</sup> ≥ *λ*<sup>∗</sup> *holds.*

**Proof.** Denote by *π*′ and *π*″ optimal schedules for the problems 1 | *rj* | *L*max and 1 | *rj* | max *L*min, respectively. There exist jobs *k*′, *k*″ ∈ *N*, for which the following inequalities hold:

$$\mu^\* = C\_{k'}(\pi') - d\_{k'} \ge C\_j(\pi') - d\_j \quad \text{for all } j \in N; \tag{15}$$

$$C\_j(\pi'') - d\_j \ge C\_{k''}(\pi'') - d\_{k''} = \lambda^\* \quad \text{for all } j \in N. \tag{16}$$

Please note that the jobs *k*′ and *k*″ can be identical. Let *π*″ = (*j*1, ... , *jn*). Obviously, the following inequality is satisfied for job *j*1:

$$C\_{j\_1}(\pi') - d\_{j\_1} \ge C\_{j\_1}(\pi'') - d\_{j\_1}.\tag{17}$$

According to inequalities (15)–(17), we get

$$\mu^\* = C\_{k'}(\pi') - d\_{k'} \ge C\_{j\_1}(\pi') - d\_{j\_1} \ge C\_{j\_1}(\pi'') - d\_{j\_1} \ge C\_{k''}(\pi'') - d\_{k''} = \lambda^\*,$$

i.e., *μ*<sup>∗</sup> ≥ *λ*∗. The theorem has been proved.

#### **4. Computational Results**

In this section, we present some results of the numerical experiments carried out on randomly generated test instances. The numerical experiments were carried out on a PC with an Intel® Core™ i5-4210U CPU @ 1.70 GHz, 4 cores, and 8 GB DDR4 RAM.

Various methods of generating test instances for different types of scheduling problems are described in [29]. For the problem 1|*rj*|*Lmax* with *n* jobs, the authors suggest the following generation scheme:


Here, *λ*, *μ*, *σ* and *k* are generation parameters that can be fixed by the user. Applying this generation scheme, we generate release dates that are independent of the processing times, while the due dates correlate with the processing times, as usually happens in real problems. We set

$$
\lambda = \frac{1}{100}, \quad \mu = 100, \quad \sigma = 40, \quad k = 1
$$

and generated 15 instances for each *n* ∈ {3, 4, ..., 20}. The results are shown in Table 1, where the number of branching points is given for each of the 15 instances for every value of *n*.


**Table 1.** Number of branching points in Algorithm 2 for the test instances generated according to [29].

An asterisk (\*) in Table 1 means that the solution could not be found within 15 min. According to this table, a large part of the instances can be solved with a number of branching points not greater than the number of jobs. However, some instances turned out to be hard and required a much larger number of branching points. It can be observed that these hard instances generated according to [29] display rather different solution behavior and need very different numbers of branching points for their exact solution. This interesting phenomenon deserves further detailed investigation, which is planned as future work by the authors.

Next, we consider an instance of the problem as a point in the 3*n*-dimensional space, where each value of *ri*, *pi*, *di* represents one of the dimensions. We consider the vector from the zero point to the point of the instance. The complexity of an instance is defined by the direction of this vector, but not by its length. Therefore, to explore the complexity of different instances, we can take points on the surface of the 3*n*-dimensional cube. For the processing times and the release dates, we consider only non-negative values. As a result, we obtain points on the quarter of the surface of the 3*n*-dimensional cube with *ri* ≥ 0 and *pi* ≥ 0. Let the size of the cube be 100. We generated 300,000 points on the surface for problems with 4, 5, 6, 7, 8 and 9 jobs (i.e., in the 12-, 15-, 18-, 21-, 24- and 27-dimensional spaces, respectively). All instances have been solved, and we counted the number of iterations (i.e., branching points) for each problem instance. The results are shown in Figure 1 for the instances with 4 jobs, in Figure 2 for the instances with 5 jobs, in Figure 3 for the instances with 6 jobs, in Figure 4 for the instances with 7 jobs, in Figure 5 for the instances with 8 jobs and in Figure 6 for the instances with 9 jobs. The *y*-axis gives how often a particular number of branching points (Figure 1) or an interval for the number of branching points (Figures 2–6) occurred among the 300,000 instances for each number *n* of jobs. For example, in Figure 1, one can see that among the 300,000 solved instances with 4 jobs, there were 72,897 instances with 3 branching points, 16,131 instances with 4 branching points, 29,342 instances with 5 branching points, and so on. It can be observed that the maximal number of 20 branching points was reached for 1294 instances, which is approximately 0.4% of all instances.

In Figure 2, the numbers of branching points are grouped in intervals of 5 in each column, i.e., there were 86,026 instances with a number of branching points between 0 and 4 (actually 4, because this is the minimum possible number of branching points for instances with 5 jobs), 28,566 instances with a number of branching points between 5 and 9, etc. For the instances with 5 jobs, the maximum number of branching points was 93, and it was reached for only 33 instances. As can be seen in the figures, most of the instances can be solved with a small number of branchings. For a larger number of jobs, one detects a smaller share of hard instances with a large number of branching points. Thus, for the instances with 9 jobs, among the 300,000 solved instances, there were only two instances with more than 180,000 branching points: one with 184,868 and the other with 191,887.

**Figure 1.** Number of branching points for the instances with 4 jobs.

**Figure 2.** Number of branching points for the instances with 5 jobs.

**Figure 3.** Number of branching points for the instances with 6 jobs.

**Figure 4.** Number of branching points for the instances with 7 jobs.

**Figure 5.** Number of branching points for the instances with 8 jobs.

**Figure 6.** Number of branching points for the instances with 9 jobs.

#### **5. Conclusions**

For solving the *NP*-hard problem 1 | *rj* | *ϕ*max for arbitrary non-decreasing penalty functions, an algorithm has been proposed which implements the branch and bound method. For each sub-instance to be considered, a lower bound on the optimal function value is determined using a solution of the dual problem. The proposed algorithm for solving the dual problem finds a solution in a number of operations not exceeding *O*(*n*<sup>2</sup>). The proposed branch and bound algorithm found an optimal solution within a time limit of one second for about 98% of the instances with 8 jobs and for about 85% of the instances with 9 jobs. Although there are a few instances with a large number of branching points, most instances can be solved very fast by the proposed algorithm. For the hard instances, the execution of the algorithm can be interrupted at any moment, and the current objective function value with the corresponding schedule *π*<sup>∗</sup> can be used as an approximate solution of the instance. However, some generated instances turned out to be very hard. At the moment, we cannot explain this interesting phenomenon; it requires deeper additional investigations, which are planned for the future.

In addition to the dual problem, the inverse problem has also been solved for the lateness objective function. The algorithm for solving the inverse problem requires *O*(*n*<sup>2</sup>) operations. While in the problem of minimizing maximum lateness one tries to 'equalize' the lateness values by minimizing the maximum, in the inverse problem the lateness values are 'equalized' by maximizing the minimum, provided that inserted machine idle times are prohibited.

**Author Contributions:** The results described in this paper were obtained by communications and mutual works of the authors. Conceptualization, A.A.L. and N.P.; data curation, A.A.L. and N.P.; formal analysis, A.A.L., N.P. and F.W.; investigation, A.A.L., N.P. and F.W.; methodology, A.A.L., N.P. and F.W.; project administration, A.A.L.; software, N.P.; supervision, A.A.L. and F.W.; validation, A.A.L., N.P. and F.W.; visualization, N.P.; writing—original draft, A.A.L. and N.P.; writing—review & editing, F.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** The research was partially supported by RFBR project 18-07-00656 and partially supported by the Basic Research Program of the National Research University Higher School of Economics.

**Conflicts of Interest:** The authors declare no conflict of interest.
