Article

Scheduling Disjoint Setups in a Single-Server Permutation Flow Shop Manufacturing Process

by
Andrzej Gnatowski
,
Jarosław Rudy
* and
Radosław Idzikowski
Department of Control Systems and Mechatronics, Wrocław University of Science and Technology, 50-370 Wrocław, Poland
*
Author to whom correspondence should be addressed.
Processes 2022, 10(9), 1837; https://doi.org/10.3390/pr10091837
Submission received: 7 August 2022 / Revised: 31 August 2022 / Accepted: 7 September 2022 / Published: 13 September 2022
(This article belongs to the Section Manufacturing Processes and Systems)

Abstract

In this paper, a manufacturing process for a single-server permutation Flow Shop Scheduling Problem with sequence-dependent, disjoint setups and makespan minimization is considered. The full problem is divided into two levels, and the lower level, aimed at finding an optimal order of setups for a given fixed order of jobs, is tackled. The mathematical model of the problem is presented along with a solution representation. Several problem properties pertaining to the problem solution space are formulated. The connection between the number of feasible solutions and the Catalan numbers is demonstrated and a Dynamic Programming-based algorithm for counting feasible solutions is proposed. An elimination property is proved, which allows one to disregard up to 99.99% of the solution space for instances with 10 jobs and 4 machines. A refinement procedure allowing us to improve the solution in the time required to evaluate it is shown. To illustrate how the properties can be used, two solving methods were proposed: a Mixed-Integer Linear Programming formulation and a Tabu Search metaheuristic. The proposed methods were then tested in a computer experiment using a set of instances based on Taillard’s benchmark; the results demonstrated their effectiveness even under a short time limit, proving that they could be used to build algorithms for the full problem.

1. Introduction

The permutation Flow Shop Scheduling Problem (or FSSP) with makespan minimization is one of the oldest [1] classic discrete optimization problems. In the problem, there are n jobs processed in a system made up of m machines, and each job is processed on each machine in the same, predefined order. The task is to find an order in which the jobs are fed into the system that minimizes the time required to process them all (makespan). FSSP is commonly used to model and optimize various real-life manufacturing processes in industry or even other non-manufacturing multi-stage processes.
Since its introduction, FSSP has been extended and modified in many ways to model specific restrictions found in real-life production processes [2,3,4]. One of its most popular and earliest (see, e.g., a paper by Johnson [1]) extensions is considering setup times. Setups are meant to model additional activities performed on a machine between processing subsequent jobs. Such activities can include cleaning, re-fueling, inspection, change of machine settings (including different tool tips, etc.) or simply removal of the processed item and insertion of another.
Usually [5,6], it is assumed that setups can be performed independently, i.e., in an m-machine system, up to m setups can be performed at the same time, provided each is conducted on a different machine. This is generally because either the machines are maintenance-free or there are separate crews available for each machine. However, in some production processes encountered in practice, it is not possible to perform setups simultaneously. This leads to a class of problems referred to as single-server problems [7], where a single server performs all setups in the system.
An example of such a restriction was found in a production process in a company located in Poland. The process involves metal forming of parts for various home appliances with the use of large hydraulic presses. Due to the size of the presses, heavy specialized carts are required to reconfigure them for different product types. The company owns only a single cart (server), due to its cost and the crew required to operate it. As a result, only a single machine can undergo a setup at any given time, i.e., the setups are disjoint.
We consider a special case of FSSP with sequence-dependent setup times, where only one setup crew is available (called hereinafter a full problem), and, at most, one setup can be performed in a system at a time (hence the setups are disjoint, or in other words, there is a single server). The motivation to tackle this problem is that despite the fact that disjoint setups are a practical constraint in the context of FSSP, they have not been adequately researched in the literature. Most of the previous work focused on theoretical complexity analysis, lacking a dedicated solving method or problem properties (see review in Section 2). The full problem can be dissected into two levels: (1) finding the order in which jobs are processed in the system (the upper level); and (2) finding the order of setups (the lower level). Such a decomposition into upper and lower problems is an existing literature practice when tackling complex scheduling problems (see, for example, Pongchairerks [8] or Bożejko et al. [9]). The full problem is relatively complex, as its NP-hardness has been proven even for several simplified cases (refer to the paper by Brucker et al. [10]). Thus, we decided to focus exclusively on the part introducing new constraints compared to the relatively well-researched FSSP. Therefore, in this paper, we consider only the second level of the full problem, i.e., the order of jobs is fixed and only the order of setups is to be determined. We aim to identify problem properties and introduce new solving methods that can be used later to tackle the full problem, e.g., using a two-level approach.
The main objectives of this paper are to: (1) introduce a formal model of the researched problem; (2) formulate properties concerning the solution space of the problem: its size and methods of limiting the portion of the space that needs to be examined; (3) use the properties to propose solving methods that can be used in a two-level algorithm for the full problem. We achieved the goals by building a lattice path representation of a solution and identifying problem properties that allow us to explore the solutions space more efficiently. The main contributions of this paper are as follows:
  • Proposing a mathematical model for the researched problem, i.e., finding the order of setups in FSSP with a single server (see Section 3). The model can be easily extended to describe the full problem.
  • Showing a relation between the number of feasible solutions and Catalan numbers for m = 2 machines (see Section 4.2) and providing a method based on Dynamic Programming for counting feasible solutions for any m > 1 .
  • Formulating an elimination property that allows us to quickly disregard the majority of the problem solution space (see Section 4.5), without losing optimality (see Section 4.3).
  • Formulating a refinement procedure, based on rearranging continuous blocks of setups performed on the same machine. Such a procedure can improve some solutions in the time required to calculate the objective function (see Section 4.4).
  • Introducing a Mixed Integer Linear Programming (MILP) formulation and a dedicated, heuristic solving method for the researched problem (see Section 5) that can be used in a two-level algorithm for the full problem. The experiments demonstrated that the heuristic algorithm (using the proposed problem properties) outperforms the MILP formulation for larger instances (see Section 6).
The remainder of this paper is structured as follows. In Section 2, we present the related literature, with the main emphasis on FSSP and setup times. In Section 3, we formally define the researched problem and introduce a compact solution representation. Then, we follow with several problem properties in Section 4. The solving algorithms are described in Section 5 and tested in Section 6. Finally, the concluding remarks are in Section 7.

Notation

Throughout the paper, we will adopt the following notation. Boldfaced lowercase letters (x, σ) denote vectors (or sequences), with the k-th element of x denoted x_k, and 0 being the zero vector. Ordinary letters, especially lowercase, usually denote scalars. Sets are generally denoted with calligraphic type (J, S, X), with the exception of Σ and Z. Symbols i and j are used to refer to jobs (e.g., when iterating). Similarly, a and b are used to refer to machines, while k and l are used to refer to setups. The most important notation is summarized in Table 1.

2. Related Work

Scheduling problems are of interest to both scientists and practitioners due to their high complexity and a multitude of possible applications. Although many applications are related to typical production lines, scheduling is also used in healthcare for operating room planning [11], software testing [12], scheduling of heat parallel furnaces [13] or even cryptography [14]. To provide proper context for the researched problem, we surveyed the vast literature on scheduling from three perspectives: (1) FSSP and its variants; (2) setups as a scheduling constraint; and (3) disjoint setups.

2.1. Flow Shop Scheduling Problem

FSSP is one of the oldest scheduling problems, dating back to the 1950s. In one of the earliest papers, Johnson [1] tackled FSSP with setup times. The author considered 2- and 3-machine systems and proved the optimality of the introduced decision algorithm (called Johnson’s rule) for the 2-machine case. Due to its practical applications and mathematical simplicity, a wide variety of extensions of FSSP have been formulated since its introduction. Despite being around for over six decades, FSSP remains an active point of interest for scientists in the field of operations research across the world. Below, we mention several modern approaches to FSSP before moving on to FSSP with setups.
An FSSP variant with uncertain processing, transport and setup times was considered in a paper by Rudy [15]. The author employed fuzzy sets in order to model the problem and then proposed a fuzzy-aware Simulated Annealing method that was tested against several fuzzy-oblivious variants, demonstrating the superiority of the fuzzy approach. Zeng et al. [16] considered material wastage in flexible FSSP with batch processing. The problem was solved by a hybrid, non-dominated sorting Genetic Algorithm (GA). Yu et al. [17] considered machine eligibility in their GA approach to FSSP, while Dios et al. [18] took the possibility of missing operations into account. The two mentioned papers are also examples of hybrid scheduling problems, where features of more than one problem are combined to model the given problem. A distributed FSSP was tackled by Ruiz et al. [19]. In such an FSSP variant, multiple factories are assumed, and jobs have to be assigned to factories in addition to deciding job orders for each factory. The authors proposed an advanced version of a simple non-nature-inspired Iterated Greedy (IG) metaheuristic. The method demonstrated promising results despite the fact that little problem-specific knowledge was built into it. In a paper by Bożejko et al. [20], a Tabu Search (TS) method for a two-machine FSSP with a weighted tardiness goal function was proposed. The method employed a variant of the so-called block elimination property in order to accelerate the neighborhood search for the TS method.

2.2. Setups in Scheduling Problems

Due to the real-life need to model retrofitting, cleaning or otherwise preparing the machine between jobs, setups have become a commonly used concept in scheduling, and their importance is emphasized in a review by Allahverdi et al. [21]. Various types of setups exist. Their classification was proposed in several review papers, e.g., on FSSP by Cheng et al. [22] as well as Reza Hejazi and Saghafian [23], a more general Job Shop Scheduling Problem (JSSP) review by Sharma and Jain [24], or lotsizing problems reviewed by Zhu and Wilhelm [25]. We will discuss some of the most common variants below.
The setup time can depend only on the job following the setup on that machine (Sequence-Independent Setup Times, SIST) or on both the next and previous job (Sequence-Dependent Setup Times, SDST). Sequence-independent setups, being less general, were mostly considered in the past. Gupta and Tunc [26] considered flexible FSSP (parallel machines at each stage) with SIST and proposed four simple solving heuristics. Rajendran and Ziegler [27] proposed several heuristics to solve FSSP with SIST. Bożejko et al. [28] considered the FSSP variant with a machine time couplings concept, which can be used to model various dependencies between operation starting and completion times, including setup times and their generalizations. In the paper by Belabid et al. [29], permutation FSSP with SIST was solved by MILP and two heuristics based on Johnson’s and NEH algorithms, an iterative local search algorithm and the IG algorithm.
On the other hand, sequence-dependent setups are more commonly researched. One of the first works on FSSP with SDST is the paper by Gupta and Darrow [30], where a 2-machine variant was considered. The authors proved its NP-completeness and proposed some heuristic methods, while also demonstrating that non-permutation approaches are not always optimal. Next, Ruiz and Stützle [31] proposed an IG local search method for both makespan and total weighted tardiness goal functions for FSSP, demonstrating good results. Zandieh and Karimi [32] considered a multi-criteria hybrid FSSP problem with SDST, which they solved using a multi-population GA method. A 2-machine robotic cell was considered in a paper by Fazel Zarandi et al. [33], where the authors took into account loading and unloading times, SDST and total cycle flow time minimization. Similarly, Majumder and Laha [34] proposed a cuckoo search solving method for a 2-machine robotic cell with SDST. Finally, in a paper by Burcin Ozsoydan and Sağir [35], a case study for hybrid flexible FSSP with SDST was presented. The authors proposed an IG metaheuristic with four phases, which included a descent neighborhood search, an NEH-like starting solution and a destruction mechanism for creating perturbations. The method was then tested against several competitive algorithms with promising results.
It is also common to distinguish job batches, groups or families, which leads to two types of setup times: “small” (between jobs within a job family) and “large” (between jobs from different families). Often, “small” setups have a setup time radically different from “large” setups, leading to different dispatching rules or disregarding “small” setups altogether as negligible. In a paper by Cheng et al. [36], a two-machine, flexible FSSP with a single operator was considered, where setups are performed only when the type of operation changes. The authors proposed a heuristic for the problem and analyzed its worst-case performance. A no-wait flowline manufacturing cell scheduling problem with sequence-dependent family setup times and makespan minimization was tackled in the paper by Lin and Ying [37]. An efficient metaheuristic for solving the problem was proposed and managed to obtain some optimal solutions for the tested instances in a reasonable computational time. Rudy and Smutnicki [12] considered a problem of online scheduling of software tests in a parallel-machine cloud environment. Fixed-time setups occur only when the next operation (test case) comes from a different job (test suite) than the previous one; the authors also predicted processing times based on system history.

2.3. Disjoint Setups (Single Server)

The literature on scheduling problems with a limited number of simultaneous setups allowed is relatively sparse. Brucker et al. [10] provided some theoretical results for special cases of FSSP with a single server. In particular, 2-machine, 3-machine and arbitrary numbers of machines were considered. Specific cases included identical or unit processing and setup times. For 12 cases, the authors proved that they are polynomially solvable, while proving or citing NP-hardness for another 9 cases. In a paper by Lim et al. [38], a two-machine FSSP with sequence-independent setups and a single server was considered. The authors discussed several special cases (e.g., identical processing times, short operations, etc.), identified their properties and proposed lower bounds for the problem. A two-machine FSSP with no-wait, separable setups and a single server was considered in a paper by Su and Lee [39]. The authors proved that the problem can be reduced to a two-machine FSSP with no-wait and setup times if processing times on the first machine are greater than setup times on the second machine. Moreover, several properties were formulated to accelerate the convergence of the proposed methods: a dedicated heuristic and a Branch and Bound method. In a paper by Samarghandi [40], several special cases of the same problem were analyzed. Hybrid Variable Neighbourhood Search and TS solving methods were proposed for the generic case. Similarly, a single-server 2-machine FSSP was considered in the paper by Cheng and Kovalyov [41]. Machine-dependent setups were assumed, but only when the server switched machines. The authors demonstrated that the problem can be reduced to a single-machine batching problem. For some special cases (minimizing maximum lateness or total completion time), efficient O(n log n) solving methods were proposed. Several other cases were proved to be NP-hard or strongly NP-hard (with special O(n³) and O(n⁴) cases when all operations on the machine have equal processing times). Two pseudopolynomial dynamic programming algorithms were also proposed.
In a paper by Cheng et al. [36], only one of two machines can undergo a setup at a time due to the One-Worker-Multiple-Machine model. The authors considered a cyclic moving pattern of the operator. A worst-case analysis and an NP-completeness proof for the considered FSSP SDST variant were provided. The authors also proposed a few heuristic algorithms. Similarly, Gnatowski et al. [42] considered an FSSP with sequence-dependent disjoint setups, complete with a model, several properties and two solving methods: a MILP formulation and a greedy algorithm. The authors restricted themselves to a 2-machine variant only. Bożejko et al. [9,43] tackled FSSP with two-machine robotic cells, where a single robotic arm is used to perform both operations and setups in each cell. This results not only in disjoint setups, but also in at most one machine being active in a cell at any moment. The authors demonstrated that, in a special case, assigning operations and setups to machines can be conducted optimally in polynomial time. However, the paper does not consider more than 2 machines per cell. Moreover, in a paper by Iravani and Teo [44], the server handled both operations and setups, so at most one operation can be processed at a time. The authors demonstrated that, for the minimization of setup and holding costs, there exists an easily implementable class of asymptotically optimal schedules.
For non-FSSP problems, Vlk et al. [45] considered a Parallel Machines Scheduling Problem variant with disjoint setups and introduced Integer Linear Programming, Constraint Programming and hybrid methods. Okubo et al. [46] proposed an approach to Resource Constrained Process Scheduling, in which setups require additional resources. This can lead to some setup operations being delayed when said resources are used by another setup. The authors proposed two solving methods, Integer Programming and Constraint Programming formulations, as well as a heuristic mask calculation algorithm for the efficient restriction of modes. Finally, Tempelmeier and Buschkühl [47] considered a capacitated lotsizing problem with sequence-dependent setups and a setup operator shared by 5–10 machines. Good results were obtained in under a few minutes using a standard CPLEX solver. A parallel identical machines scheduling problem with sequence-independent setups and a single server was researched in a paper by Glass et al. [48]. Several cases were analyzed and approximation algorithms were proposed.

3. Problem Formulation

In this section, we will formally state the researched problem, and formulate its mathematical model. We will discuss different solution definitions and point to the one that limits the size of a search space.
The problem is based on the classic FSSP with setups. Each job from the set J = {1, 2, …, n} must be processed on each machine from the set M = {1, 2, …, m}. We assume that the order of jobs is fixed. The processing of a job i ∈ J on a machine a ∈ M is called an operation and takes p_{i,a} > 0 time. Between each two consecutive jobs i, i+1, i ∈ J \ {n}, performed on the same machine a ∈ M, a setup must be performed. A setup cannot be interrupted and takes s_{i,a} > 0 time. The constraints on the system can be summarized as follows:
  • Each job is processed on each machine in the natural order 1 → 2 → … → m.
  • Each machine processes jobs in the natural order 1 → 2 → … → n.
  • Between each two consecutive operations on each machine, there is a setup.
  • Operations and setups cannot be interrupted.
  • Each machine can process at most one job at any time.
  • At most one setup can be performed in the system at any time (the system has single-server or, equivalently, setups are disjoint).
Thus, at any given time, a machine is in one of three possible states: (1) processing a single operation, (2) undergoing a single setup, or (3) idling. The total number of disjoint setups in the system is (n−1)m (there are n−1 setups on each of m machines). To shorten various equations, we also define an auxiliary set S = {1, 2, …, (n−1)m}, which is used to enumerate data structures and notation related to setups in different contexts later on.
The solution of the problem can be represented by a schedule. A schedule is a set of starting and completion times for all operations and setups. By S^π_{i,a} and C^π_{i,a}, a ∈ M, i ∈ J, we will denote the starting and completion time of the operation of job i processed on machine a. Analogously, by S^σ_{i,a} and C^σ_{i,a}, a ∈ M, i ∈ J \ {n}, we will denote the starting and completion time of the setup performed on machine a after job i. With the introduced notation, the problem constraints can be formalized as
∀ a ∈ M, ∀ i ∈ J:  C^π_{i,a} = S^π_{i,a} + p_{i,a},  (1)
∀ a ∈ M, ∀ i ∈ J \ {n}:  C^σ_{i,a} = S^σ_{i,a} + s_{i,a},  (2)
∀ a ∈ M \ {m}, ∀ i ∈ J:  C^π_{i,a} ≤ S^π_{i,a+1},  (3)
∀ a ∈ M, ∀ i ∈ J \ {n}:  C^σ_{i,a} ≤ S^π_{i+1,a},  (4)
∀ a ∈ M, ∀ i ∈ J \ {n}:  C^π_{i,a} ≤ S^σ_{i,a},  (5)
∀ a, b ∈ M, ∀ i, j ∈ J \ {n}, (a, i) ≠ (b, j):  (C^σ_{i,a} ≤ S^σ_{j,b}) ⊻ (C^σ_{j,b} ≤ S^σ_{i,a}),  (6)
where “⊻” is an exclusive OR. Constraints (1) and (2) guarantee that operations and setups cannot be interrupted. Constraint (3) represents the sequential processing of a job (technological constraints). Constraints (4) and (5) guarantee that setups and operations performed on the same machine do not overlap and are performed in the predefined order (machine constraints). Finally, Constraint (6) assures that at most one setup can be performed in the system at any time. A schedule is called feasible if it satisfies all the Constraints (1)–(6). The problem is to find such a feasible schedule that the makespan C_max = C^π_{n,m} − S^π_{1,1} is minimal. Without losing generality, we can fix the time the first job starts being processed; thus, we set S^π_{1,1} := 0. This simplifies the computation of the makespan to C_max = C^π_{n,m}.
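Constraints (1)–(6) translate directly into a feasibility check for a candidate schedule. The sketch below is an illustration only (the job-major, 0-based table layout and the function name are our assumptions, not the paper's); Constraint (6) is checked for every pair of distinct setups, i.e., exactly one of the two setups must finish before the other starts:

```python
def is_feasible(Sp, Cp, Ss, Cs, p, s):
    """Check Constraints (1)-(6) for a schedule, 0-based indices.

    Sp[i][a]/Cp[i][a]: start/completion of the operation of job i on machine a.
    Ss[i][a]/Cs[i][a]: start/completion of the setup after job i on machine a.
    """
    n, m = len(Sp), len(Sp[0])
    ok = all(Cp[i][a] == Sp[i][a] + p[i][a] for i in range(n) for a in range(m))       # (1)
    ok &= all(Cs[i][a] == Ss[i][a] + s[i][a] for i in range(n - 1) for a in range(m))  # (2)
    ok &= all(Cp[i][a] <= Sp[i][a + 1] for i in range(n) for a in range(m - 1))        # (3)
    ok &= all(Cs[i][a] <= Sp[i + 1][a] for i in range(n - 1) for a in range(m))        # (4)
    ok &= all(Cp[i][a] <= Ss[i][a] for i in range(n - 1) for a in range(m))            # (5)
    # (6): any two distinct setups must not overlap (exactly one precedes the other)
    ok &= all((Cs[i][a] <= Ss[j][b]) != (Cs[j][b] <= Ss[i][a])
              for a in range(m) for b in range(m)
              for i in range(n - 1) for j in range(n - 1) if (a, i) != (b, j))
    return ok
```

Such a checker is useful mainly for validating schedules produced by solving methods in tests.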
A schedule representation of a solution leads to an infinite set of possible solutions. Therefore, representing the solution by precedence constraints is usually better suited for both theoretical analysis and solving algorithms. Even though the order in which the jobs are processed is fixed, the order of the setups is not. Let τ ∈ T describe an order in which setups are performed, where τ_k, k ∈ S, denotes the k-th setup performed in the system and T is the set of all possible setup orders. Setups are identified by a pair of numbers (a, i), where a ∈ M is the machine on which the setup is performed and i ∈ J \ {n} is the job after which the setup is performed. The number of all possible setup orders is |T| = ((n−1)m)!. We say that an order of setups is feasible if it describes at least one feasible schedule. Since the order of setups on each machine is fixed, each feasible order of setups can be described unambiguously by the order of machines on which the setups are performed, σ ∈ Σ, where σ_k, k ∈ S, is the machine on which the k-th setup is performed. When describing a setup order, we will refer to one of the two representations, σ or τ, depending on the context. In order to transform any feasible τ ∈ T into a unique σ ∈ Σ, one can simply disregard the job number from each pair describing a setup. The reverse procedure is also possible, by assigning consecutive jobs to the setups performed on the same machine (refer to Example 1). Interestingly, sometimes an infeasible τ can be transformed by this method into a feasible σ (as observed in the example). The number of different σ, denoted |Σ|, is smaller than |T| and equals
|Σ| = |T| / ((n−1)!)^m = ((n−1)m)! / ((n−1)!)^m.
The set of all feasible σ is denoted by Σ feas .
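The counts |T| and |Σ| can be evaluated numerically; the following small sketch (the function name is ours) applies the two formulas above:

```python
from math import factorial

def count_orders(n, m):
    """Number of all setup orders |T| and machine-only orders |Sigma| (a sketch)."""
    t = factorial((n - 1) * m)          # |T| = ((n-1)m)!
    sigma = t // factorial(n - 1) ** m  # |Sigma| = ((n-1)m)! / ((n-1)!)^m
    return t, sigma
```

For example, for n = 4 jobs and m = 2 machines there are |T| = 6! = 720 setup orders but only |Σ| = 720 / (3!)² = 20 distinct machine sequences.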
Example 1.
Consider a setup order for an instance with n = 4 and m = 2:
τ1 = ((1, 3), (1, 2), (1, 1), (2, 3), (2, 2), (2, 1)).
The setup order τ1 is infeasible: e.g., its first element (1, 3) schedules the setup after job 3 on machine 1 first, while operation 1 of job 3 cannot be performed yet, as the setup (1, 2) is not completed. Now, consider the setup order σ1 = (1, 1, 1, 2, 2, 2) that was built from τ1. This setup order is feasible and can be transformed into a feasible setup order
τ2 = ((1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)).
Now, consider an infeasible setup order
τ3 = ((2, 1), (2, 2), (2, 3), (1, 1), (1, 2), (1, 3)).
Its corresponding setup order σ2 = (2, 2, 2, 1, 1, 1) is also infeasible.
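The two conversions used in Example 1 can be sketched as follows (function names are ours; machines and jobs are numbered from 1, as in the text):

```python
def tau_to_sigma(tau):
    """Drop the job number from each (machine, job) pair of a setup order tau."""
    return tuple(a for a, _ in tau)

def sigma_to_tau(sigma):
    """Assign consecutive jobs to the setups performed on each machine."""
    counts = {}
    tau = []
    for a in sigma:
        counts[a] = counts.get(a, 0) + 1  # next setup on machine a follows this job
        tau.append((a, counts[a]))
    return tuple(tau)
```

Applied to τ1 from the example, `tau_to_sigma` yields σ1 = (1, 1, 1, 2, 2, 2), and `sigma_to_tau(σ1)` yields the feasible τ2, illustrating how an infeasible τ can map to a feasible σ.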
For a given setup order σ to become a solution representation, a quick method for transforming a feasible σ ∈ Σ_feas into a unique feasible schedule is required. We will limit our discussion to left-shifted schedules, that is, schedules where no setup or operation can be performed earlier without changing the order of setups. It can easily be demonstrated that each feasible solution σ describes exactly one feasible, left-shifted schedule, and each left-shifted schedule is described by exactly one σ. This schedule can be built similarly to how a schedule is built based on an order of operations in the Job Shop Scheduling Problem (e.g., in a paper by Nowicki and Smutnicki [49]), since Constraints (1)–(6) can also be represented as a sparse, acyclic, weighted graph. Thus, for any σ ∈ Σ_feas, the corresponding schedule can be built in O(nm) time. We define the makespan of σ as the makespan of the corresponding left-shifted schedule and denote it by C_max(σ). Since each optimal schedule can be transformed into a left-shifted one without affecting its makespan, the considered problem can be rewritten as finding an optimal setup order σ*, such that
C_max(σ*) = min_{σ ∈ Σ_feas} C_max(σ).
Hereinafter, unless otherwise stated, we will only use the setup order representation σ ∈ Σ.
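The left-shifted schedule and its makespan can also be obtained by a direct simulation that processes setups in the order given by σ, lazily computing operation completion times. The sketch below is an illustration (0-based indexing, data layout and names are our assumptions), not the graph-based O(nm) construction referenced above; it additionally detects infeasible σ, since an infeasible order requires a setup whose prerequisite setup has not been scheduled yet:

```python
def makespan(sigma, p, s):
    """Makespan of the left-shifted schedule for setup order sigma, or None if infeasible.

    sigma: machines (0-based) of consecutive setups; p[i][a]/s[i][a]: processing/setup times.
    """
    n, m = len(p), len(p[0])
    c_setup = {}  # (a, i) -> completion time of the setup after job i on machine a
    cop = {}      # (i, a) -> completion time of the operation of job i on machine a

    def op_completion(i, a):
        # Lazy evaluation: valid once all prerequisite setups are scheduled.
        if (i, a) not in cop:
            start = op_completion(i, a - 1) if a > 0 else 0
            if i > 0:
                start = max(start, c_setup[(a, i - 1)])  # KeyError => sigma infeasible
            cop[(i, a)] = start + p[i][a]
        return cop[(i, a)]

    server_free, counts = 0, [0] * m
    try:
        for a in sigma:       # schedule the next setup on machine a
            i = counts[a]     # it is the setup after job i
            start = max(op_completion(i, a), server_free)
            server_free = start + s[i][a]
            c_setup[(a, i)] = server_free
            counts[a] += 1
        return op_completion(n - 1, m - 1)
    except KeyError:
        return None
```

For the infeasible σ2 = (2, 2, 2, 1, 1, 1) from Example 1 (machines shifted to 0-based), the simulation fails on the second setup, mirroring the feasibility discussion above.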
Example 2.
Consider the instance with n = 4, m = 3 and processing times and setup times from Table 2. The order of setups σ = (1, 1, 1, 2, 3, 2, 3, 2, 3) is feasible for that instance. A left-shifted schedule for solution σ, with C_max = 17, is shown as a Gantt chart in Figure 1. Note that σ is not an optimal solution. For example, the first setup on machine 2 could be performed earlier (during the third operation on machine 1).

4. Problem Properties

In this section, we will formulate several problem properties, particularly the ones regarding the solution space. We start by demonstrating the solution representation using mathematical concepts of lattice paths. We then discuss the number of feasible solutions and the ways to compute it. We formulate two theorems that can be employed in solving methods: an elimination property for skipping certain solutions and a refinement procedure that can improve solutions. Finally, we discuss what portion of the solution space can be skipped by use of the formulated properties.

4.1. Lattice Path Solution Representation

As it was demonstrated in the previous section, some orders of setups are infeasible. To better illustrate the nuances of the solution feasibility in the researched problem, we will introduce the lattice path representation of a solution.
Formally, a lattice path L = (x_0, x_1, …, x_k) is a sequence of points in Z^d, where the difference between any pair of consecutive points, x_{i+1} − x_i, i ∈ {0, …, k−1}, belongs to a predefined set. Elements of the set are called steps. Here, the set is the standard basis E_m = {e_1, e_2, …, e_m} and d = m. One can associate each step e_a with an act of a setup being performed on machine a. Consider a lattice path L from (0, 0, …, 0) to (n−1, n−1, …, n−1), consisting of (n−1)m steps in E_m. Each point in this path, x_i = (x_1, x_2, …, x_m), can be interpreted as a state of the production system with x_a, a ∈ M, setups already performed on machine a. Then, since there are n−1 steps to be performed on each machine, each L represents a unique sequence of setups.
Example 3.
Consider a problem size n = 3, m = 3 and a setup order σ = (2, 1, 1, 3, 3, 2). The transformation from σ to the corresponding L can be performed as follows. Start building the path from (0, 0, 0) (the number of dimensions d = m = 3). Then, for each element σ_i in σ, sequentially build a new point in L by adding 1 to coordinate σ_i of the previous point. Refer to the equations below:
σ = (2, 1, 1, 3, 3, 2),
L = ((0, 0, 0), (0, 1, 0), (1, 1, 0), (2, 1, 0), (2, 1, 1), (2, 1, 2), (2, 2, 2)).
The reasoning can be reversed to calculate the σ corresponding to L.
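Both directions of this transformation can be sketched in a few lines (function names are ours; machines are numbered 1..m as in the text):

```python
def sigma_to_path(sigma, m):
    """Build the lattice path L for a setup order sigma."""
    point = [0] * m
    path = [tuple(point)]
    for a in sigma:
        point[a - 1] += 1  # one more setup performed on machine a
        path.append(tuple(point))
    return path

def path_to_sigma(path):
    """Recover sigma from L: each step increments exactly one coordinate."""
    return tuple(1 + next(k for k in range(len(y)) if y[k] > x[k])
                 for x, y in zip(path, path[1:]))
```

Running `sigma_to_path((2, 1, 1, 3, 3, 2), 3)` reproduces the path L from Example 3, and `path_to_sigma` inverts it.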
The introduced lattice paths can describe not only feasible setup orders, but also infeasible ones. The problem Constraints (1)–(6) can be directly translated into the domain of lattice paths, by limiting admissible paths to ones consisting of elements in
X_m = { x ∈ Z^m | ∀ a ∈ M \ {1}: x_a ≤ 1 + min_{1 ≤ j < a} x_j }.  (13)
The interpretation of Equation (13) is that, at any time, the number of setups performed on machine a cannot be greater than the number of setups performed on previous machines j = 1, 2, …, a−1 plus one. Note that X_m does not put direct constraints on the maximum and minimum number of setups performed on any machine (i.e., the constraint ∀ i ∈ M: 0 ≤ x_i < n). Such constraints are represented in the set of feasible steps, as well as in the described lattice paths’ start- and endpoints.
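The membership test for X_m is a one-liner; the sketch below (function name is ours) checks the condition of Equation (13) for a single lattice point:

```python
def in_X_m(x):
    """Membership test for X_m (Eq. (13)): x_a <= 1 + min_{1 <= j < a} x_j for every a > 1."""
    return all(x[a] <= 1 + min(x[:a]) for a in range(1, len(x)))
```

Every point of the path L from Example 3 passes this test, while, e.g., (0, 2) does not, which matches the infeasibility of σ2 = (2, 2, 2, 1, 1, 1) from Example 1.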

4.2. Counting Feasible Solutions

With the lattice path solution representation introduced, one can easily calculate the number of different solutions for a given n and m = 2. The following Lemma 1 was first demonstrated by Gnatowski et al. [42]. Here, however, we will show a proof derived from the general case described in (13).
Lemma 1
(Number of feasible solutions, m = 2 [42]). Consider a problem instance with m = 2 machines and n > 1 jobs. The number of different feasible setup orders is given by the n-th Catalan number
$$|\Sigma_{feas}| = C_n = \frac{1}{n+1}\binom{2n}{n} = \frac{(2n)!}{(n+1)!\,n!}. \quad (14)$$
Proof. 
The relation between Catalan numbers and the number of feasible setup orders can be explained in multiple ways. One can use an analogy to the problem of counting legal sequences of parentheses, to several Dyck-word-related problems, or to more or less general lattice path analyses (see, for example, a paper by Stanley [50]). Here, we will explore the last possibility, as it also provides some insight into the multi-machine variant of the problem.
For m = 2 , Equation (13) becomes
$$X_2 = \left\{ \mathbf{x} \in \mathbb{Z}^2 \mid x_2 \leq 1 + x_1 \right\}. \quad (15)$$
Equation (15) limits the admissible lattice paths from (0, 0) to (n − 1, n − 1) in $\mathbb{Z}^2$ to the ones staying weakly below the line $x_2 = x_1 + 1$. Such paths will be called $L_2$ paths (as shown in Figure 2).
Consider lattice paths $L'_2$ from (−1, 0) to (n − 1, n), with steps in $E_m$, staying weakly below $x_2 = x_1 + 1$. In any $L'_2$, the first and the last steps are fixed (bold arrows in Figure 2). Therefore, the number of different $L'_2$ paths is equal to the number of different $L_2$ paths. Now, by translating the first axis by 1, $L'_2$ starts in (0, 0) and ends in (n, n), staying weakly below $x_2 = x_1$. The problem of finding the number of paths defined as such is well known and has the solution $C_n$. For the proof of this fact, as well as other occurrences of Catalan numbers in mathematics, refer to [50].    □
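Lemma 1 can be cross-checked by brute force for small n (an illustrative sketch; `count_paths_m2` recursively enumerates the admissible paths of Equation (15)):

```python
from math import comb

def count_paths_m2(n):
    """Count lattice paths from (0,0) to (n-1,n-1) with unit steps,
    staying weakly below the line x2 = x1 + 1 (Equation (15))."""
    def rec(x1, x2):
        if (x1, x2) == (n - 1, n - 1):
            return 1
        total = 0
        if x1 < n - 1:
            total += rec(x1 + 1, x2)            # step (1, 0)
        if x2 < n - 1 and x2 + 1 <= x1 + 1:
            total += rec(x1, x2 + 1)            # step (0, 1), if admissible
        return total
    return rec(0, 0)

def catalan(n):
    """The n-th Catalan number, as in Equation (14)."""
    return comb(2 * n, n) // (n + 1)
```

For every small n, the enumeration agrees with the n-th Catalan number, as the lemma states.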
While for m = 2 , the number of different feasible solutions can be computed using (14), the case of m > 2 is much harder. The reasoning from the proof of Lemma 1 cannot be trivially applied for any m > 2 , even though generalizations for multi-dimensional Catalan numbers exist (see, for example, papers by Haglund [51] or Vera-López et al. [52]).
The constraints on lattice paths defined by X_m are relatively complex and—to our knowledge—cannot be addressed by lattice path analysis methods (refer to, e.g., a paper by Krattenthaler [53]) to obtain a useful closed-form expression for computing |Σ_feas|. On the other hand, a recursive formula can be easily obtained and used as the basis for a Dynamic Programming (DP) algorithm calculating the number of feasible solutions for any m > 1.
An outline of the proposed DP method is shown in Algorithm 1. A subproblem given by any $\mathbf{x} \in X_m$ is defined as finding the number of different lattice paths from $\mathbf{0}$ to $\mathbf{x}$ satisfying (13). Observe that it is equal to the sum, over every point $\mathbf{y} \in X_m$ that can be stepped back to from $\mathbf{x}$ (i.e., $\mathbf{x} - \mathbf{y} \in E_m$), of the number of different paths from $\mathbf{0}$ to $\mathbf{y}$. For example, for $\mathbf{x} = (2, 3, 2)$, the admissible $\mathbf{y} \in \{(2, 2, 2), (2, 3, 1)\}$. Different candidates for $\mathbf{y}$ are created in line 7, while in line 8, it is determined whether they are in $X_m$. Lastly, the degenerate subproblem $\mathbf{x} = \mathbf{0}$ has a solution of 1 and constitutes the stop criterion for the recursion (line 3).
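The recursion behind Algorithm 1 can be sketched compactly as follows (an illustrative Python sketch using dictionary-style memoization instead of the flat-array indexing analyzed in the proof of Property 1; names and structure are ours):

```python
from functools import lru_cache

def count_feasible(n, m):
    """DP over lattice points: the number of admissible paths from the
    origin to x is the sum over all in-neighbours y of x that also lie
    in X_m (a sketch of the recursion of Algorithm 1)."""
    def in_Xm(x):
        # Equation (13) plus the per-machine setup count bounds
        return all(x[a] <= 1 + min(x[:a]) for a in range(1, m)) \
            and all(0 <= v <= n - 1 for v in x)

    @lru_cache(maxsize=None)
    def count(x):
        if all(v == 0 for v in x):
            return 1                      # degenerate subproblem x = 0
        total = 0
        for a in range(m):
            if x[a] > 0:
                y = x[:a] + (x[a] - 1,) + x[a + 1:]   # step back on machine a
                if in_Xm(y):
                    total += count(y)
        return total

    return count((n - 1,) * m)
```

For m = 2, the result agrees with the Catalan numbers of Lemma 1; for m = 3 and n = 2, all six permutations of (1, 2, 3) are feasible.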
Property 1.
Algorithm 1 runs in $O(mn^m)$ time, using $O(n^m)$ memory.
Proof. 
First, let us count the number of recursive calls of CountSolution. Since the result for each unique $\mathbf{x}$ is eventually stored in mem, the number of calls and the memory complexity of mem cannot be greater than the cardinality of
$$X_m \cap \left\{ \mathbf{x} \in \mathbb{Z}^m : \forall_{a \in M}\ 0 \leq x_a \leq n - 1 \right\}, \quad (16)$$
where the latter set reflects the origin and destination points of the lattice path L. In other words, mem must be able to store the solutions to the subproblems corresponding to each point that can appear in L. A simple upper bound for this cardinality is $O(n^m)$, the cardinality of the latter set. This allows mem to be a contiguous block of $n^m$ memory registers, since it takes $O(n^m)$ time to initialize them (line 1). The memory block can then be indexed similarly to C++ arrays, i.e., mem($\mathbf{x}$) is understood as mem($x^i$), where
$$x^i := \sum_{a \in M} x_a \cdot n^{a-1}. \quad (17)$$
Equation (17) is a simple one-to-one mapping of all possible nodes of the considered lattice paths to the numbers $\{0, 1, \ldots, n^m - 1\}$. The mapping is analogous to a flat array index for multidimensional arrays. The index notation is used to accelerate some computations, taking advantage of the unbounded capacity of a single memory cell (each memory cell of the assumed abstract machine can hold any integer). In constant time, one can not only access or modify any element of $\mathbf{x}$ represented by $x^i$, but also copy $x^i$, which is not possible for $\mathbf{x}$ directly.
Algorithm 1: Counting feasible solutions
Data: x , a point in X m .
Result: | Σ feas | : number of feasible solutions (paths from 0 to x in X m ).
Processes 10 01837 i001
Next, let us derive the computational complexity of a single CountSolution call. Lines 3–6 can be conducted in O(m) time. Although the computation of the index $x^i$ of $\mathbf{x}$ takes O(m) time, any change in a constant number of elements of $\mathbf{x}$ can be reflected in $x^i$ in constant time. Therefore, lines 9 and 15 take O(1) time. Line 10 can be conducted in O(1) time by using the cumulative minimum from line 6. Memory accesses in lines 11 and 13 can also be conducted in O(1) time using the precomputed index $x^i$—resulting in an overall time complexity of O(m). The memory complexity of a single CountSolution run is O(m).
Overall, the computational complexity of the proposed algorithm is
$$O(n^m \cdot m) = O(mn^m).$$
The memory complexity is
$$O(\max\{n^m, m\}) = O(n^m).$$
A tighter bound can probably be demonstrated for the cardinality of (16) (smaller by a factor of $\frac{1}{m!}$). Then, however, one could not use the simple index $x^i$ to access memory, leading to a potentially worse computational complexity.
Because of the time and memory complexities, Algorithm 1 performs best when $m \ll n$. Such an assumption is usually realistic. Even then, the algorithm can only be used to compute |Σ_feas| for small and medium instances. Table 3 summarizes the differences between |Σ_feas| and |Σ| for several n and m combinations. In particular, for m = 2, the ratio $|\Sigma|/|\Sigma_{feas}|$ can be calculated directly:
$$\frac{|\Sigma|}{|\Sigma_{feas}|} = \frac{(2(n-1))!}{((n-1)!)^2} \cdot \frac{(n+1)!\,n!}{(2n)!} = \frac{(n+1)n}{2(2n-1)} \in O(n).$$
Clearly, the number of feasible solutions is significantly smaller than the total number of solutions.
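The closed form above can be verified with exact rational arithmetic (an illustrative sketch; for m = 2, |Σ| is the multinomial count of all setup orders and |Σ_feas| the n-th Catalan number):

```python
from fractions import Fraction
from math import comb, factorial

def ratio_direct(n):
    """|Sigma| / |Sigma_feas| for m = 2: all orders of 2(n-1) setups with
    (n-1) per machine, divided by the n-th Catalan number."""
    total = Fraction(factorial(2 * (n - 1)), factorial(n - 1) ** 2)
    feasible = Fraction(comb(2 * n, n), n + 1)
    return total / feasible

def ratio_closed_form(n):
    """The closed form (n+1)n / (2(2n-1))."""
    return Fraction((n + 1) * n, 2 * (2 * n - 1))
```

Both expressions agree for all tested n, confirming the linear growth of the ratio.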

4.3. Elimination Property

In this section, we will discuss the elimination property, which can be used for a quick detection of suboptimal solutions or to potentially improve a solution (as shown in Section 4.4). The property relates to the consecutive setups performed after the same job—constituting a block.
Definition 1.
For a solution σ, let $\mathbf{x}^k = (x_1^k, x_2^k, \ldots, x_m^k)$ be the k-th node of the lattice path corresponding to σ, i.e., $x_a^k$ denotes how many setups were performed on machine a after setup $\sigma_k$ was completed. Let δ be an h-element subsequence of consecutive setups from σ starting at element k. Subsequence δ will be called a block of setups in solution σ (in short, a block) if and only if:
1. 
All setups in δ are performed on a different machine:
$$\forall_{u, v \in \{1, 2, \ldots, h\},\, v \neq u}\quad \delta_u \neq \delta_v.$$
2. 
All setups in δ are performed after the same job:
$$\exists_{i}\ \forall_{u \in \{1, 2, \ldots, h\}}\quad x_{\delta_u}^{k-1} = i.$$
3. 
The length h of the block δ is maximal, i.e., a subsequence δ′ of length h + 1 starting at element k is not a block.
4. 
Blocks partition the sequence σ , i.e., each element σ k belongs to exactly one block.
We will illustrate the above definitions with an example.
Example 4.
Consider the following solution:
σ = ( 3 , 2 , 1 , 1 , 4 , 2 , 4 , 1 , 3 , 3 , 2 , 4 ) .
We start with lattice path node $\mathbf{x} = (0, 0, 0, 0)$. We then perform $\delta_1 = (3, 2, 1)$. All setups are on different machines and after the same job $(i = 0)$; thus, $\delta_1$ is the first block. Note that we cannot define the first block as (3, 2, 1, 1), because the setups are not all on different machines. We are now on lattice node $\mathbf{x} = (1, 1, 1, 0)$. The next block cannot be $\delta_2 = (1, 4)$, because $x_1 \neq x_4$. Thus, $\delta_2 = (1)$, leading to $\mathbf{x} = (2, 1, 1, 0)$. Similarly, $\delta_3 = (4)$ and not (4, 2), because $x_4 \neq x_2$. This leads to node $\mathbf{x} = (2, 1, 1, 1)$. Next, the block cannot be (2, 4, 1) because $x_2 \neq x_1$, but it can be (2, 4), leading to node $\mathbf{x} = (2, 2, 1, 2)$. Next, we cannot have a block (1, 3, 3) (machines are not different) or (1, 3) ($x_1 \neq x_3$); thus, the next two blocks will be $\delta_5 = (1)$ and $\delta_6 = (3)$, leading to nodes $\mathbf{x} = (3, 2, 1, 2)$ and then (3, 2, 2, 2). Finally, we have $\delta_7 = (3, 2, 4)$, as all elements are different and $x_3 = x_2 = x_4$, leading to $\mathbf{x} = (3, 3, 3, 3)$. Thus, solution σ contains seven blocks, as marked by the brackets below.
$$\sigma = (\underbrace{3, 2, 1}_{\delta_1}, \underbrace{1}_{\delta_2}, \underbrace{4}_{\delta_3}, \underbrace{2, 4}_{\delta_4}, \underbrace{1}_{\delta_5}, \underbrace{3}_{\delta_6}, \underbrace{3, 2, 4}_{\delta_7}).$$
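The block decomposition traced in Example 4 can be sketched as follows (an illustrative Python sketch of Definition 1; the function name is ours):

```python
def partition_into_blocks(sigma, m):
    """Partition a setup order into blocks (Definition 1): maximal runs of
    setups on pairwise different machines, all performed after the same job
    (equal per-machine setup counts at the start of the run)."""
    x = [0] * m               # setups performed so far on each machine
    blocks, current, level = [], [], None
    for a in sigma:           # a is a 1-indexed machine number
        # close the current block if machine a repeats or its setup count
        # differs from the block's common count (the "same job" condition)
        if current and (a in current or x[a - 1] != level):
            blocks.append(current)
            current = []
        if not current:
            level = x[a - 1]  # common setup count of the new block
        current.append(a)
        x[a - 1] += 1
    if current:
        blocks.append(current)
    return blocks
```

On the solution of Example 4, the sketch reproduces the seven blocks marked above.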
With the blocks defined, the elimination property can be formulated. The property is a generalization of the result presented by Gnatowski et al. [42], for any m > 1 .
Theorem 1
(Elimination property). Let $\sigma \in \Sigma_{feas}$ be a feasible solution with two consecutive setups $\sigma_k = b$, $\sigma_{k+1} = a$ in a block of job $j \in J \setminus \{n\}$, such that $a < b$. Then, the solution $\hat{\sigma}$, where the order of the two setups is reversed,
$$\forall_{i \in S}\quad \hat{\sigma}_i = \begin{cases} a & \text{for } i = k, \\ b & \text{for } i = k + 1, \\ \sigma_i & \text{otherwise}, \end{cases}$$
is feasible and $C_{max}(\hat{\sigma}) \leq C_{max}(\sigma)$.
Proof. 
The proof is based on an analysis of the left-shifted schedules built from both σ and $\hat{\sigma}$. The elements of the schedule built from $\hat{\sigma}$ are denoted with hats ($\hat{C}_{\sigma}$, $\hat{C}_{\pi}$, etc.). Moreover, let γ denote the completion time of setup $\sigma_{k-1}$, which is unaffected by the swap. The notation is presented in Figure 3.
First, let us identify the elements of the schedule not affected by the change in the order of setups. Since the k-th setup in the original solution is performed on machine b, the first b operations of job j must already be completed before the setup starts (marked in bold in the figure). Therefore, $C_{\pi_j^a}$ and $C_{\pi_j^b}$ remain constant for both setup orders. As a result, the completion times of setups in $\hat{\sigma}$ performed after job j on machines $\{1, 2, \ldots, b\} \setminus \{a, b\}$ can only be affected by a change in the completion time of the (k+1)-th setup. The change can be expressed as $C_{\sigma_j^a} - \hat{C}_{\sigma_j^b}$, where
$$C_{\sigma_j^a} = s_j^a + \max\left\{ s_j^b + \max\{\gamma, C_{\pi_j^b}\},\ C_{\pi_j^a} \right\} = s_j^a + s_j^b + \max\{\gamma, C_{\pi_j^b}\},$$
$$\hat{C}_{\sigma_j^b} = s_j^b + \max\left\{ s_j^a + \gamma,\ s_j^a + C_{\pi_j^a},\ C_{\pi_j^b} \right\} = s_j^b + s_j^a + \max\left\{ \gamma,\ C_{\pi_j^a},\ C_{\pi_j^b} - s_j^a \right\}.$$
Then,
$$C_{\sigma_j^a} - \hat{C}_{\sigma_j^b} = \max\{\gamma, C_{\pi_j^b}\} - \max\left\{ \gamma,\ C_{\pi_j^a},\ C_{\pi_j^b} - s_j^a \right\}.$$
Since $C_{\pi_j^a} < C_{\pi_j^b}$, we have $C_{\sigma_j^a} - \hat{C}_{\sigma_j^b} \geq 0$ and the completion time of the (k+1)-th setup cannot increase in $\hat{\sigma}$. As a result, the operations of job j + 1 preceding the operations on machines a and b cannot be delayed; thus, $\hat{C}_{\pi_{j+1}^{a-1}} \leq C_{\pi_{j+1}^{a-1}}$ and $\hat{C}_{\pi_{j+1}^{b-1}} \leq C_{\pi_{j+1}^{b-1}}$. Therefore, if the operations of job j + 1 performed on machines a and b can also only start earlier, then the makespan can also only decrease. We check that next.
Let us discuss the change in completion time of job j + 1 on machine a,
$$C_{\pi_{j+1}^a} - \hat{C}_{\pi_{j+1}^a} = \max\left\{ s_j^a + s_j^b + \max\{\gamma, C_{\pi_j^b}\},\ C_{\pi_{j+1}^{a-1}} \right\} - \max\left\{ s_j^a + \max\{\gamma, C_{\pi_j^a}\},\ \hat{C}_{\pi_{j+1}^{a-1}} \right\}.$$
Knowing that $C_{\pi_{j+1}^{a-1}} \geq \hat{C}_{\pi_{j+1}^{a-1}}$, we only need to calculate
$$s_j^a + s_j^b + \max\{\gamma, C_{\pi_j^b}\} - \left( s_j^a + \max\{\gamma, C_{\pi_j^a}\} \right) = s_j^b + \max\{\gamma, C_{\pi_j^b}\} - \max\{\gamma, C_{\pi_j^a}\}.$$
Since $C_{\pi_j^b} > C_{\pi_j^a}$, the value of (28) is non-negative; therefore, $C_{\pi_{j+1}^a} - \hat{C}_{\pi_{j+1}^a} \geq 0$ and $\hat{C}_{\pi_{j+1}^a} \leq C_{\pi_{j+1}^a}$.
Next, let us discuss the change in the completion time of job j + 1 on machine b. Since $C_{\sigma_j^b} < C_{\sigma_j^a} < C_{\pi_{j+1}^a} \leq C_{\pi_{j+1}^{b-1}}$, we have
$$C_{\pi_{j+1}^b} = p_{j+1}^b + \max\left\{ C_{\sigma_j^b},\ C_{\pi_{j+1}^{b-1}} \right\} = p_{j+1}^b + C_{\pi_{j+1}^{b-1}},$$
$$\hat{C}_{\pi_{j+1}^b} = p_{j+1}^b + \max\left\{ \hat{C}_{\sigma_j^b},\ \hat{C}_{\pi_{j+1}^{b-1}} \right\}.$$
Since we know that $C_{\pi_{j+1}^{b-1}} \geq \hat{C}_{\pi_{j+1}^{b-1}}$ and $C_{\pi_{j+1}^{b-1}} > C_{\sigma_j^a} \geq \hat{C}_{\sigma_j^b}$, then also $C_{\pi_{j+1}^b} \geq \hat{C}_{\pi_{j+1}^b}$.
To sum up, it was proven that in the left-shifted schedule built for $\hat{\sigma}$, the completion times of jobs 1, 2, …, j + 1 on machines 1, 2, …, b can only decrease compared to the schedule built for σ. The same can be said about the completion time of the (k+1)-th setup. Therefore, all further jobs and setups can only be performed earlier, resulting in $C_{max}(\hat{\sigma}) \leq C_{max}(\sigma)$.    □
Theorem 1 allows one to detect potentially suboptimal solutions. Since the described $\sigma \to \hat{\sigma}$ transformation cannot lead to an increase in the makespan, any $\sigma \in \Sigma_{feas}$ that fulfills the conditions of Theorem 1 (i.e., satisfies the elimination property) can be safely eliminated from the solution space without the risk of removing all optimal solutions.

4.4. Refinement Procedure

Theorem 1 provides tools for reducing the size of the solution space. However, it can also be viewed as a way to potentially improve a solution, by applying a swap move to setups satisfying the elimination property. A single solution can contain multiple such setup pairs. Moreover, a swap can create a new pair of setups satisfying the elimination property, as shown in Example 5.
Example 5.
Consider a problem of size m = 3 and n = 2. The solution σ = (3, 2, 1) is feasible; however, two setup pairs, (3, 2) and (2, 1), satisfy the elimination property. By applying the swap move to the first pair, we obtain the solution σ′ = (2, 3, 1)—now containing a single, new pair (3, 1) satisfying the property. By applying the swap move again, we obtain σ″ = (2, 1, 3). Once again, the solution contains such a pair, (2, 1). Finally, the swap move can be applied a third time to obtain σ‴ = (1, 2, 3). The setup order σ‴ does not satisfy the condition from Theorem 1 and cannot be eliminated. The procedure can be summarized as follows:
$$\sigma = (3, 2, 1) \xrightarrow{\text{swap } (3,2)} \sigma' = (2, 3, 1) \xrightarrow{\text{swap } (3,1)} \sigma'' = (2, 1, 3) \xrightarrow{\text{swap } (2,1)} \sigma''' = (1, 2, 3).$$
The observation above is the basis of a refinement procedure. Given a solution σ, the procedure performs setup swaps until Theorem 1 can no longer be applied. The resulting solution $\sigma^N$ is said to be refined, and
$$C_{max}(\sigma^N) \leq C_{max}(\sigma).$$
Of course, there exist solutions that do not contain any setup pairs satisfying the elimination property. In such a case, no swaps are performed. In Example 5, the solution ( 1 , 2 , 3 ) is the result of refining the solution ( 3 , 2 , 1 ) .
A few questions arise with regard to the refinement procedure, namely, which swaps should be performed and in what order. Moreover, we would like to know how fast the procedure can be performed. We answer those questions in the following theorem, using the fact that the procedure resembles the bubble sort algorithm.
Theorem 2
(Block property). For any feasible solution σ, the refinement procedure can be completed in $O(nm)$ time.
Proof. 
First, observe that rearranging setups within a single block changes neither the total number of blocks nor the contents of other blocks. Therefore, the refinement can be applied to each block separately. Consider a block of length l. There is only a single order of setups for which no pair of setups in the block satisfies the elimination theorem—the setups ordered increasingly by the machine on which they are performed. The block can be sorted by applying at most $O(l^2)$ swap moves chosen according to the elimination theorem (refer to the bubble sort results by Cormen et al. [54] (p. 40)). As a result, the worst-case time complexity of the procedure would be $O(nm^2)$. However, the sorting can be performed faster, in $O(nm)$, by using Algorithm 2, which sorts all blocks simultaneously. □
Algorithm 2: Refinement procedure
Data: σ : solution to be refined.
Result: σ : refined solution.
Processes 10 01837 i002
The refinement procedure can be utilized to potentially improve the quality of solutions as a part of a larger solving algorithm (e.g., similarly to the individual learning procedure in the memetic algorithms proposed by Moscato [55]). Its computational complexity of $O(nm)$ is equal to the complexity of calculating the objective function; thus, the refinement procedure can be used frequently.
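The refinement procedure can be sketched by combining the block decomposition of Definition 1 with per-block sorting, as in the proof of Theorem 2 (an illustrative sketch using built-in sorting for brevity, rather than the simultaneous pass of Algorithm 2):

```python
def refine(sigma, m):
    """Refinement sketch: within each block, the only order admitting no
    swap from Theorem 1 is the machines sorted increasingly, so the refined
    solution simply sorts every block of the setup order."""
    x = [0] * m               # setups performed so far on each machine
    blocks, current, level = [], [], None
    for a in sigma:           # a is a 1-indexed machine number
        # a new block starts when machine a repeats or its setup count
        # differs from the block's common count (Definition 1)
        if current and (a in current or x[a - 1] != level):
            blocks.append(current)
            current = []
        if not current:
            level = x[a - 1]
        current.append(a)
        x[a - 1] += 1
    if current:
        blocks.append(current)
    # sorting a block ascending realizes all swaps of Theorem 1 within it
    return [a for block in blocks for a in sorted(block)]
```

On Example 5, the sketch turns (3, 2, 1) into the refined order (1, 2, 3), and already refined solutions are left unchanged.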

4.5. Counting Eliminated Solutions

In order to estimate how much of the solution space can be eliminated, we first discuss the number of setup orders that cannot be eliminated (preserved solutions). For m = 2, the number is given by a known closed-form expression, first proposed (without proof) in a lemma by Gnatowski et al. [42]. Here, we present a proof of the lemma.
Lemma 2
(Number of preserved solutions, m = 2 [42]). Consider a problem of size m = 2, n > 1. The number of feasible solutions not eliminated by Theorem 1 is given by the (n−1)-th Catalan number
$$C_{n-1} = \frac{1}{n}\binom{2n-2}{n-1} = \frac{(2n-2)!}{n!\,(n-1)!}.$$
Proof. 
For the elimination property to be applicable, two consecutive setups must be performed after the same job, but on different machines. For m = 2, this means that the first setup must be performed on machine 2 and the second on machine 1. For that to be possible, all the setups after the previous jobs must already be performed. Such a condition corresponds, in the domain of lattice paths, to the points on the $x_1 = x_2$ line. Then, step (0, 1) (a setup performed on machine 2, against the elimination property) moves the lattice path above $x_1 = x_2$, and step (1, 0) keeps the lattice path weakly below $x_1 = x_2$. Therefore, the elimination property limits the admissible lattice paths from (0, 0) to (n − 1, n − 1) to those staying weakly below the $x_1 = x_2$ line. The number of such paths is given by $C_{n-1}$.    □
Lemma 2 allows one to easily obtain the number of preserved solutions for m = 2, and thus the number of eliminated solutions.
Theorem 3
(Eliminated solutions for m = 2 [42]). Consider a problem size m = 2 , n > 1 . Theorem 1 eliminates 50% to 75% of feasible solutions.
The proof will be omitted, as it was demonstrated in [42].
Since a closed-form expression is not known even for calculating |Σ_feas| when m > 2, to compute the number of preserved solutions we will resort to the DP approach again. Algorithm 1 must be modified to check both the elimination property condition and feasibility. For the modified procedure, refer to Algorithm 3.
Algorithm 3: Counting preserved solutions
Data: $\mathbf{x}$: point in $X_m$.
Result: $|\Sigma^*|$: number of unique preserved solutions.
Processes 10 01837 i003
To allow for checking the elimination property, the machine on which the last setup was performed must be added to the definition of a subproblem. For example, for n > 3 and $\mathbf{x} = (2, 2, 2)$, the last setup could have been performed on any of the three machines (1, 2, or 3). In Algorithm 3, the machine on which the last setup was performed is stored in $l_m$.
Property 2.
Algorithm 3 runs in $O(m^2 n^m)$ time, using $O(mn^m)$ memory.
Proof. 
The proof is similar to the proof of Property 1. However, the number of subproblems is now m times larger, since each $\mathbf{x} \in X_m$ can potentially be matched with any $l_m \in M$. This results in both the computational and memory complexities being greater by a factor of m.    □
Algorithm 3 was used to compute the number of unique preserved solutions, denoted by $|\Sigma^*|$, for $m \in \{1, 2, \ldots, 8\}$ and $n \in \{2, 3, \ldots, 10\}$. Then, the elimination ratio
$$ER = \frac{|\Sigma_{feas}|}{|\Sigma^*|},$$
was calculated. The results are shown in Figure 4. For n = 2, only a single setup order is preserved, resulting in high elimination ratios that can be calculated from Equation (14). For n > 2, ER increases with n, eventually overtaking the value for n = 2. For instances with a larger number of machines, the elimination ratio continues to increase steadily, reaching almost $10^5$ for m = 8.

5. Solving Algorithms

In this section, three solving algorithms are described: a MILP-based method, a simple greedy heuristic, and a Tabu Search metaheuristic. The first algorithm will be used later on as a reference to assess the heuristics. All of them can also serve as part of a two-level algorithm solving the extended problem with a varying job order.

5.1. Greedy Heuristic

The greedy algorithm was inspired by the NEH algorithm proposed in the paper by Nawaz et al. [56] for the FSSP. In this case, however, the result of the algorithm is the order of setups σ for a fixed order of jobs. The method is based on the step-by-step building of a partial solution. First, we set the completion times for all operations of the first job, as those require no setups. Then, we perform the first setup on the first machine ($\sigma_1 = 1$) and the operation after it, while also calculating their completion times. Then, we proceed to insert the remaining elements into σ as follows.
Each time, we choose the value of $\sigma_k$ by considering all values $a \in M$ that are feasible (given the partial setup order up to $\sigma_{k-1}$). For each such candidate value a, we compute the completion time of the corresponding setup; to do this, we store the completion time of the last scheduled setup throughout the algorithm. For each candidate setup, we then calculate the completion time of the operation after it. Finally, as $\sigma_k$, we choose the value a for which the computed operation completion time is the smallest. The computational complexity of the algorithm depends on the number of setups and machines and is given by $O(m|S|) = O(nm^2)$.
Example 6.
Consider the instance from Table 2. For this instance, the greedy solution is constructed as follows. In the first step, we determine the completion time of all three operations of the first job, since they can be processed without any prior setups:
$$C_{\pi_1^1} = 1,$$
$$C_{\pi_1^2} = C_{\pi_1^1} + 2 = 3,$$
$$C_{\pi_1^3} = C_{\pi_1^2} + 1 = 4.$$
The first setup σ 1 in the greedy heuristic is always performed after the first operation on the first machine. We perform this setup, which allows us to perform the second operation on the first machine, and we calculate the appropriate completion times:
$$\sigma_1 = 1,$$
$$C_{\sigma_1^1} = C_{\pi_1^1} + 1 = 2,$$
$$C_{\pi_2^1} = C_{\sigma_1^1} + 1 = 3.$$
The next setup $\sigma_2$ can be performed on any machine; thus, we have three candidates, $a \in \{1, 2, 3\}$. For each candidate, we evaluate the resulting completion times:
$$C_{\pi_3^1} = \max\{C_{\pi_2^1}, C_{\sigma_1^1}\} + 1 + 1 = 5,$$
$$C_{\pi_2^2} = \max\{C_{\pi_1^2}, C_{\sigma_1^1}\} + 1 + 2 = 6,$$
$$C_{\pi_2^3} = \max\{C_{\pi_1^3}, C_{\sigma_1^1}\} + 1 + 2 = 7.$$
Out of those three possibilities, the minimal value is $C_{\pi_3^1}$, so $\sigma_2 = 1$, $C_{\sigma_2^1} = 4$ and $C_{\pi_3^1} = 5$. We repeat this procedure for the subsequent decisions $\sigma_3$ through $\sigma_9$. Finally, we obtain the solution σ = (1, 1, 2, 1, 2, 3, 2, 3, 3), shown as a Gantt chart in Figure 5, for which $C_{max} = 16$.
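The greedy construction and the underlying completion-time computations can be sketched as follows (an illustrative Python sketch; since the full data of Table 2 is not reproduced here, the tests use a small hypothetical instance, and candidates whose preceding operation is not yet schedulable are evaluated optimistically, which is our simplification of the rule):

```python
def evaluate(sigma, p, s):
    """Makespan of the left-shifted schedule for a complete setup order.
    Data is 0-indexed: p[j][a] is the processing time of job j on machine a,
    s[j][a] the setup performed after job j on machine a."""
    n, m = len(p), len(p[0])
    S = [[0] * m for _ in range(n - 1)]   # setup completion times
    C = [[None] * m for _ in range(n)]    # operation completion times

    def op(j, a):
        # an operation starts after its machine's setup and the job's
        # previous operation are both finished
        if C[j][a] is None:
            after_setup = S[j - 1][a] if j > 0 else 0
            after_prev = op(j, a - 1) if a > 0 else 0
            C[j][a] = p[j][a] + max(after_setup, after_prev)
        return C[j][a]

    gamma, x = 0, [0] * m                 # single-server time, setup counts
    for a1 in sigma:
        a = a1 - 1
        j = x[a]                          # this is the setup after job j
        gamma = S[j][a] = s[j][a] + max(gamma, op(j, a))
        x[a] += 1
    return max(op(n - 1, a) for a in range(m))


def greedy(p, s):
    """Greedy heuristic sketch: fix sigma_1 = 1, then repeatedly append the
    feasible machine whose setup yields the earliest completion of the
    operation following it. Where that operation's predecessor is not yet
    schedulable, it is optimistically ignored (our simplification)."""
    n, m = len(p), len(p[0])
    C = [[None] * m for _ in range(n)]
    acc = 0
    for a in range(m):                    # job 1 requires no setups
        acc = C[0][a] = acc + p[0][a]
    sigma, x = [1], [0] * m               # the first setup is forced
    x[0] = 1
    gamma = s[0][0] + C[0][0]             # completion of the first setup
    C[1][0] = p[1][0] + gamma
    for _ in range(m * (n - 1) - 1):
        best = None
        for a in range(m):
            if x[a] >= n - 1 or (a > 0 and x[a] > min(x[:a])):
                continue                  # machine a is not a feasible choice
            j = x[a]
            cs = s[j][a] + max(gamma, C[j][a])
            prev = C[j + 1][a - 1] if a > 0 else 0
            co = p[j + 1][a] + max(cs, prev if prev is not None else 0)
            if best is None or co < best[0]:
                best = (co, a, cs)
        co, a, cs = best
        sigma.append(a + 1)
        gamma, x[a] = cs, x[a] + 1
        C[x[a]][a] = co
    return sigma
```

On a hypothetical 3 × 3 instance sharing the known values of Example 6 (p₁ = (1, 2, 1), s₁¹ = 1, …) with the remaining entries set to 1, the sketch reproduces the first two decisions σ₁ = σ₂ = 1 and the candidate values 5, 6, 7 from the example.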

5.2. MILP Formulation

Mixed-Integer Linear Programming is a popular method for solving optimization problems, especially if a dedicated algorithm does not exist. The MILP formulation used in this paper as a reference algorithm was first introduced in [42]. Although the aforementioned paper considers a two-machine problem, the formulation can be used for any m > 1. It utilizes a relative order to encode the setup order σ, resulting in $m(n-1) \cdot (m-1)(n-1)$ binary variables (the relative order of setups performed on the same machine is fixed). We improved the method by utilizing a warm start and providing the solver with upper and lower bounds on the objective function value of an optimal solution.

5.2.1. Warm Start

To improve the performance of the algorithm, we used a feasible solution as a starting point. This is especially important for large instances (with m > 3), where, without a feasible initial solution provided, the solver struggled to find any feasible solution at all within the time limit. The starting solution was chosen as the better of the following two:
  • Natural setup order
$$(\underbrace{1, 2, \ldots, m,\ 1, 2, \ldots, m,\ \ldots,\ 1, 2, \ldots, m}_{n-1 \text{ times}}).$$
  • Result of a greedy heuristic from Section 5.1.

5.2.2. Objective Function Bounds

The upper bound of the objective function was assigned based on the initial solution from the warm start. The lower bound was calculated from the expression
$$LB = \sum_{i=1}^{n-1} \left( p_i^1 + s_i^1 \right) + \sum_{i=1}^{m} p_n^i.$$
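The bound above can be sketched as follows (a minimal sketch assuming 0-indexed arrays `p[j][a]` and `s[j][a]` for processing and setup times, which is our convention, not the paper's):

```python
def lower_bound(p, s):
    """Lower bound on the makespan: the first machine must process jobs
    1..n-1 and their setups sequentially, after which the last job still
    has to traverse all m machines."""
    n, m = len(p), len(p[0])
    first_machine = sum(p[j][0] + s[j][0] for j in range(n - 1))
    last_job = sum(p[n - 1][a] for a in range(m))
    return first_machine + last_job
```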

5.3. Tabu Metaheuristic

Tabu Search (TS) is a well-known local search metaheuristic proposed by Glover [57,58], which uses a short-term and (optionally) a long-term memory to avoid being trapped in a local optimum. Due to its deterministic nature and good performance in practice, it is one of the most commonly used metaheuristic methods, with applications ranging from scheduling (Bożejko et al. [9]) and the knapsack problem (Lai et al. [59]) to model selection (Marcoulides [60]) and even data replication in cloud environments (Ebadi and Jafari Navimipour [61]). A general pseudocode of our TS is shown in Algorithm 4. Below, we describe the most important features of our implementation.
Algorithm 4: Tabu Search pseudocode
Data: σ : initial solution.
Result: σ : best solution found.
Processes 10 01837 i004

5.3.1. Initial Solution and Stopping Condition

The initial solution was the same as for the MILP (the better of the “natural” and “greedy” constructive heuristics). The choice of the initial solution is not very impactful; TS can generally provide similar results starting from a worse initial solution, provided the number of iterations is somewhat higher. The algorithm stops when the time limit MaxTime is reached.

5.3.2. Neighbourhoods

The neighborhood of a solution σ is defined as the set of solutions that can be created from σ by applying a pre-defined move. In our implementation, we considered the insert move $ins(a, b)$. When this move is applied to σ, it creates a neighboring solution σ′ by removing element $\sigma_a$ and inserting it before element $\sigma_b$. For example, performing the move $ins(3, 6)$ on the solution from Example 2 would result in:
$$\sigma' = (1, 1, 2, 2, 3, 1, 3, 2, 3).$$
Normally, the insert neighborhood contains $O(n^2m^2)$ solutions. However, we apply several rejection rules, which may limit the number of neighbors we need to evaluate:
  • We ignore moves where a = b , as those moves do not change σ .
  • We ignore moves where $|b - a| > W$, where $W \leq |S|$ is an algorithm parameter that defines the width of the neighborhood. We include this parameter because a full neighborhood search is very costly (i.e., $O(n^3m^3)$).
  • We reject insert moves that result in infeasible solutions and (optionally) those that satisfy the elimination property and can thus be eliminated. This check is faster than evaluating the objective function and can potentially be conducted in constant time, assuming that the lattice path corresponding to the initial solution is known.
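The insert move and the windowed neighborhood can be sketched as follows (an illustrative sketch with 1-indexed positions; the feasibility and elimination filters are omitted for brevity):

```python
def apply_insert(sigma, a, b):
    """Insert move ins(a, b): remove element sigma_a and re-insert it
    before the element that was sigma_b (1-indexed positions)."""
    sigma = list(sigma)
    moved = sigma.pop(a - 1)
    # after removal, positions past a shift left by one
    target = b - 1 if b <= a else b - 2
    sigma.insert(target, moved)
    return sigma

def neighbourhood_moves(sigma, W):
    """Candidate moves after the first two rejection rules: skip a == b
    and pairs further apart than the neighborhood width W."""
    S = len(sigma)
    return [(a, b) for a in range(1, S + 1) for b in range(1, S + 1)
            if a != b and abs(b - a) <= W]
```

A TS iteration would evaluate `apply_insert(sigma, a, b)` for every pair returned by `neighbourhood_moves`, discarding infeasible or eliminable neighbors as described above.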
The move leading to the solution with the lowest makespan is chosen (unless it is forbidden, see the next paragraph for details). Optionally, the solution is then refined by the procedure, as described previously in Section 4.4.

5.3.3. Tabu List

We employed a short-term tabu list memory that stores forbidden moves. Solutions created by such moves are ignored unless they are better than the current best solution $\sigma^*$. The tabu list was implemented as a matrix T of size $(nm)^2$, enabling all tabu list operations in O(1) time. The algorithm parameter C, called the cadence, determines for how many iterations a move stays forbidden. Based on preliminary research, we chose $C = nm$. Note that when move $ins(k, l)$ is performed, the reverse move $ins(l, k)$ also becomes forbidden.

5.3.4. Backjumps

Even with the tabu list mechanism in place, the algorithm can still enter a cycle. To alleviate this problem, we used a long-term memory in the form of a list. Every time $\sigma^*$ is updated, we add to the list a triple of: (1) a copy of the current solution σ; (2) a copy of the current tabu list; and (3) the move that led to $\sigma^*$ (not its reverse). If we reach MaxNoImprove iterations without improving $\sigma^*$, then we perform a backjump. This is conducted by replacing σ and the tabu list with their copies from the most recent element of the long-term memory list. The algorithm then proceeds normally, except that during the next iteration, we cannot choose the move that was saved in the memory (this prevents TS from following the same branch). After each backjump, we remove the last element of the long-term memory list. If MaxNoImprove is reached and the list is empty, then instead of performing a backjump, we simply restart the search by setting σ to a random feasible solution. Based on preliminary research, we chose MaxNoImprove = 200.

6. Experimental Results

In this section, we describe the computer experiments on the effectiveness of the proposed solving methods on different instance types.

6.1. Experimental Setup

We conducted tests on the following three algorithms described earlier:
MILP 
a commercial solver using the MILP formulation of the problem from Section 5.2;
TabuA 
the TS method from Section 5.3 that does not use the elimination property to reject neighbors, but uses the refinement procedure to improve the best neighbor;
TabuB 
the TS method from Section 5.3 that uses the elimination property to reject neighbors.
The TabuA and TabuB algorithms were written in C++ programming language and compiled using g++ version 9.3.0 (with -O3 compilation flag). The MILP formulation was written in Julia programming language version 1.5.3 and employed the Gurobi solver version 9.1.1.rc0 with the default parameters and presolver turned on.
The experiments were run on a machine with 64-core AMD Ryzen Threadripper 3990X processor with 2.9 GHz clock and 64 GB of RAM. Each algorithm used only one CPU core. The experiment was run under a Windows 10 Pro operating system with Windows Subsystem for Linux.

6.1.1. Test Instances

We prepared 120 test instances using a modified version of the FSSP instance generator proposed by Taillard [62]. We use the same random number generator as Taillard, which provides integer numbers from the uniform distribution U(1, 99). Both operation processing times and setup times were drawn from this distribution. We first generated nm processing times, identically to Taillard. Then, we generated—without reseeding the random generator—all $n^2m$ setup times, starting with the first machine. We generated setup times for each possible job pair; however, only a single pair is used, as the order of jobs is fixed. We generated 10 instances for each of the following size groups (n × m): 5 × 2, 10 × 2, 10 × 3, 15 × 2, 15 × 3, 20 × 3, 20 × 5, 20 × 10, 30 × 5, 30 × 10, 40 × 5, 40 × 10. All instances were given identifiers from 1 to 120. All generated instances are available in the Supplementary Materials of this paper.

6.1.2. Evaluation Method

To measure the quality of the solutions provided by the algorithms, we calculated the percentage relative differences between their quality and the quality of reference solutions—called gaps for short. In other words, the gap for a solution σ is defined as:
$$\mathrm{gap}(\sigma) = \frac{C_{max}(\sigma) - C_{max}(\sigma_{ref})}{C_{max}(\sigma_{ref})} \cdot 100\%,$$
where σ ref is a reference solution. Generally, we chose σ ref to be the best known solution in a given context.

6.2. Width of the Neighborhood

First, we investigated how the width of the neighborhood W affects the performance of the TS algorithm. We ran the TabuA algorithm with MaxTime = 10 s for all 120 instances. For each instance, the algorithm was run 100 times, each time with W set to a different value, equal to a given fraction of the maximum possible neighborhood width. For example, for n = 30 and m = 10, the value of W is in [1, 290]; a 50% width then corresponds to W = 145. The reference solution for computing the gap was the best among all 100 runs for each instance. The resulting gaps, averaged over all instances, are shown in Figure 6.
Higher values of W provided considerably worse results, i.e., a larger neighborhood size does not compensate for the longer time required to evaluate it. Thus, lower values of W are preferred. However, a very small neighborhood size also leads to a larger gap. Such results are to be expected: in high-quality solutions, most setups are close to their initial positions in the order σ generated by the starting procedure, and wide inserts are rarely required. On the other hand, a very narrow neighborhood requires multiple TS iterations to perform any significant change in σ. Based on this observation, we set W to 10% of the maximum neighborhood width in the following experiments.

6.3. Performance of the Algorithms

In this experiment, we evaluated the performance of the MILP formulation and of the TabuA and TabuB metaheuristics, which utilize the problem properties. To compare such different solving methods fairly, we used the same stopping condition: a time limit. Since we intend the solving methods to be applicable as sub-procedures for solving a two-level problem, a short running time is crucial. Thus, we decided to test several short time limits
t ∈ {1, 2, 5, 10, 20, 50, 100} (47)
seconds. Although the time limits t ≤ 10 s are the most practical in the context of a two-level problem, we considered times up to 100 s to better evaluate how the solving methods converge. For each instance, the reference solution was the best one found by any algorithm within 100 s (the makespans of the reference solutions can be found in the Supplementary Materials). The results obtained are shown in Table 4.
For smaller instances (up to 3 machines and under around 60 operations), the MILP formulation is consistently better than or on par with either TS algorithm, regardless of the time limit. In fact, the solver frequently reported the returned solutions to be optimal and therefore unbeatable by TS. However, for larger, industry-size instances, the TS algorithms perform better than MILP. Once again, this effect is consistent over all time limits.
Comparing the two TS variants, TabuB (which makes extensive use of Theorem 1) is consistently better than or on par with TabuA (which uses only the refinement procedure). For longer running times (t ≥ 50 s), both TS algorithms start to provide similar results. This demonstrates that, given enough running time, both versions of the TS algorithm converge to solutions of a similar quality. However, the TabuB variant is superior in convergence speed, outperforming TabuA for t ≤ 20 s and staying on par with it for t > 20 s. This is easily observed in the last rows of the table, showing the difference (gain) between the t = 1 s and t = 100 s time limits. For TabuB, the values are close to 0, meaning the algorithm gains little from a longer running time, while still equaling or outperforming TabuA.
Finally, to better visualize the improvements each method makes within the 100 s, Figures 7 and 8 show the relation between the best solution found and time for 6 exemplary instances. The performance of MILP was recorded for seven different time limits (see (47)) set within the solver, while for the TS algorithms, a timestamp of each improvement was recorded. The plots confirm that the TabuB method converges much faster than TabuA, while the MILP formulation still makes improvements when both TS algorithms have almost converged. Moreover, for some large instances (e.g., instance 78 in the figure), even 100 s is not enough for the MILP formulation to improve its starting solution, while the TS algorithms converge in under 2 s. On the other hand, in several cases all algorithms failed to make any significant improvements; this, however, mostly happened for smaller instances, where the initial solutions are near-optimal.

7. Conclusions and Future Work

In this paper, we considered a single-server permutation Flow Shop manufacturing process. We divided the full problem into two levels and tackled the lower level, i.e., finding an optimal order of disjoint setups for a given, fixed order of jobs.
We presented a mathematical model of the considered problem, including a compact solution representation. Then, we formulated several problem properties. We demonstrated an interesting connection between Catalan numbers and the number of feasible solutions for two machines. We also discussed the challenge of generalizing this result to more machines, despite several existing generalizations of the Catalan numbers. Lacking a closed-form expression, we proposed a Dynamic Programming algorithm for the task, with a time complexity of O(m·n^m). Furthermore, we formulated an elimination property that allows one to detect and skip suboptimal solutions. For 10 jobs and 6 machines, the property allows one to disregard almost 99.9% of the solution space. This property was then used to develop an efficient refinement procedure, which can potentially improve any feasible solution in a time as short as that required to evaluate it.
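The two-machine count can be reproduced with a small lattice-path Dynamic Programming sketch. The admissible region used below (monotone paths on an (n − 1) × (n − 1) grid that never rise more than one step above the diagonal) is our reading of the region in Figure 2, chosen because it reproduces the m = 2 values in Table 3, which coincide with the Catalan numbers; the function names are ours.

```python
from math import comb

def count_feasible_m2(n):
    """Count feasible setup orders for m = 2 as monotone lattice paths from
    (0, 0) to (n-1, n-1) that never exceed the line y = x + 1 (assumed region)."""
    k = n - 1
    dp = [[0] * (k + 1) for _ in range(k + 1)]
    dp[0][0] = 1
    for x in range(k + 1):
        for y in range(k + 1):
            if y > x + 1 or (x == 0 and y == 0):
                continue  # outside the admissible region, or the start cell
            if x > 0:
                dp[x][y] += dp[x - 1][y]  # arrive by a step to the right
            if y > 0:
                dp[x][y] += dp[x][y - 1]  # arrive by a step up
    return dp[k][k]

def catalan(n):
    """n-th Catalan number, C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)
```

For example, count_feasible_m2(5) returns 42 and count_feasible_m2(10) returns 16796, matching the |Σ_feas| column for m = 2 in Table 3.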
To solve the problem, we proposed three algorithms: a Mixed-Integer Linear Programming (MILP) formulation and two variants of the Tabu Search (TS) metaheuristic, implementing the identified properties. The solving methods were then tested empirically on instances based on Taillard's generation scheme. The MILP formulation was the best for smaller instances (up to 50–60 operations), allowing us to obtain optimal or near-optimal solutions. For larger, industry-size instances, the TS algorithms outperformed MILP (which was sometimes unable to improve the starting solution at all). Between the two TS variants, the one utilizing the elimination property converged faster than the variant using only the refinement procedure, usually finding good solutions in under 1 s. Good performance combined with a short running time proves the usefulness of the proposed methods as part of a larger, two-level heuristic for the full problem.
For future work, we consider three research directions. First, we want to tackle the full, two-level problem, optimizing both the order of setups and the order of jobs. Second, we want to generalize the problem by allowing a fixed number of setups to be performed at the same time (multi-server). Third, we want to extend our research on the connection between the number of feasible setup orders and lattice paths, in order to obtain a closed-form formula for the size of the solution space.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/pr10091837/s1.

Author Contributions

Conceptualization, A.G. and J.R.; methodology, A.G., J.R. and R.I.; software, A.G., J.R. and R.I.; validation, A.G. and J.R.; formal analysis, A.G.; investigation, A.G., J.R. and R.I.; resources, A.G.; data curation, A.G., J.R. and R.I.; writing—original draft preparation, A.G., J.R. and R.I.; writing—review and editing, A.G., J.R. and R.I.; visualization, A.G. and R.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FSSP: Flow Shop Scheduling Problem
SIST: Sequence-Independent Setup Times
SDST: Sequence-Dependent Setup Times
MILP: Mixed-Integer Linear Programming
GA: Genetic Algorithm
TS: Tabu Search
JSSP: Job Shop Scheduling Problem
IG: Iterated Greedy
DP: Dynamic Programming

References

1. Johnson, S.M. Optimal two- and three-stage production schedules with setup times included. Nav. Res. Logist. Q. 1954, 1, 61–68.
2. Neufeld, J.S.; Schulz, S.; Buscher, U. A systematic review of multi-objective hybrid flow shop scheduling. Eur. J. Oper. Res. 2022.
3. Komaki, G.M.; Sheikh, S.; Malakooti, B. Flow shop scheduling problems with assembly operations: A review and new trends. Int. J. Prod. Res. 2019, 57, 2926–2955.
4. Miyata, H.H.; Nagano, M.S. The blocking flow shop scheduling problem: A comprehensive and conceptual review. Expert Syst. Appl. 2019, 137, 130–156.
5. Rossit, D.A.; Tohmé, F.; Frutos, M. The Non-Permutation Flow-Shop scheduling problem: A literature review. Omega 2018, 77, 143–153.
6. Ruiz, R.; Maroto, C.; Alcaraz, J. Solving the flowshop scheduling problem with sequence dependent setup times using advanced metaheuristics. Eur. J. Oper. Res. 2005, 165, 34–54.
7. Babou, N.; Rebaine, D.; Boudhar, M. Two-machine open shop problem with a single server and set-up time considerations. Theor. Comput. Sci. 2021, 867, 13–29.
8. Pongchairerks, P. A Two-Level Metaheuristic Algorithm for the Job-Shop Scheduling Problem. Complexity 2019, 2019, 8683472.
9. Bożejko, W.; Gnatowski, A.; Idzikowski, R.; Wodecki, M. Cyclic flow shop scheduling problem with two-machine cells. Arch. Control Sci. 2017, 27, 151–167.
10. Brucker, P.; Knust, S.; Wang, G. Complexity results for flow-shop problems with a single server. Eur. J. Oper. Res. 2005, 165, 398–407.
11. Hamid, M.; Nasiri, M.M.; Werner, F.; Sheikhahmadi, F.; Zhalechian, M. Operating room scheduling by considering the decision-making styles of surgical team members: A comprehensive approach. Comput. Oper. Res. 2019, 108, 166–181.
12. Rudy, J.; Smutnicki, C. Online scheduling for a Testing-as-a-Service system. Bull. Pol. Acad. Sci. Tech. Sci. 2020, 68, 869–882.
13. Baykasoğlu, A.; Ozsoydan, F.B. Dynamic scheduling of parallel heat treatment furnaces: A case study at a manufacturing system. J. Manuf. Syst. 2018, 46, 152–162.
14. Rudy, J.; Rodwald, P. Job Scheduling with Machine Speeds for Password Cracking Using Hashtopolis. In Theory and Applications of Dependable Computer Systems; Zamojski, W., Mazurkiewicz, J., Sugier, J., Walkowiak, T., Kacprzyk, J., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 523–533.
15. Rudy, J. Cyclic Scheduling Line with Uncertain Data. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2016; pp. 311–320.
16. Zeng, Z.; Hong, M.; Man, Y.; Li, J.; Zhang, Y.; Liu, H. Multi-object optimization of flexible flow shop scheduling with batch process—Consideration total electricity consumption and material wastage. J. Clean. Prod. 2018, 183, 925–939.
17. Yu, C.; Semeraro, Q.; Matta, A. A genetic algorithm for the hybrid flow shop scheduling with unrelated machines and machine eligibility. Comput. Oper. Res. 2018, 100, 211–229.
18. Dios, M.; Fernandez-Viagas, V.; Framinan, J.M. Efficient heuristics for the hybrid flow shop scheduling problem with missing operations. Comput. Ind. Eng. 2018, 115, 88–99.
19. Ruiz, R.; Pan, Q.K.; Naderi, B. Iterated Greedy methods for the distributed permutation flowshop scheduling problem. Omega 2019, 83, 213–222.
20. Bożejko, W.; Uchroński, M.; Wodecki, M. Blocks for two-machines total weighted tardiness flow shop scheduling problem. Bull. Pol. Acad. Sci. Tech. Sci. 2020, 68, 31–41.
21. Allahverdi, A.; Gupta, J.N.; Aldowaisan, T. A review of scheduling research involving setup considerations. Omega 1999, 27, 219–239.
22. Cheng, T.C.E.; Gupta, J.N.D.; Wang, G. A review of flowshop scheduling research with setup times. Prod. Oper. Manag. 2000, 9, 262–282.
23. Reza Hejazi, S.; Saghafian, S. Flowshop-scheduling problems with makespan criterion: A review. Int. J. Prod. Res. 2005, 43, 2895–2929.
24. Sharma, P.; Jain, A. A review on job shop scheduling with setup times. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2016, 230, 517–533.
25. Zhu, X.; Wilhelm, W.E. Scheduling and lot sizing with sequence-dependent setup: A literature review. IIE Trans. (Inst. Ind. Eng.) 2006, 38, 987–1007.
26. Gupta, J.N.; Tunc, E.A. Scheduling a two-stage hybrid flowshop with separable setup and removal times. Eur. J. Oper. Res. 1994, 77, 415–428.
27. Rajendran, C.; Ziegler, H. Heuristics for scheduling in a flowshop with setup, processing and removal times separated. Prod. Plan. Control 1997, 8, 568–576.
28. Bożejko, W.; Idzikowski, R.; Wodecki, M. Flow Shop Problem with Machine Time Couplings. In Engineering in Dependability of Computer Systems and Networks; Zamojski, W., Mazurkiewicz, J., Sugier, J., Walkowiak, T., Kacprzyk, J., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 80–89.
29. Belabid, J.; Aqil, S.; Allali, K. Solving Permutation Flow Shop Scheduling Problem with Sequence-Independent Setup Time. J. Appl. Math. 2020, 2020, 7132469.
30. Gupta, J.N.; Darrow, W.P. The two-machine sequence dependent flowshop scheduling problem. Eur. J. Oper. Res. 1986, 24, 439–446.
31. Ruiz, R.; Stützle, T. An Iterated Greedy heuristic for the sequence dependent setup times flowshop problem with makespan and weighted tardiness objectives. Eur. J. Oper. Res. 2008, 187, 1143–1159.
32. Zandieh, M.; Karimi, N. An adaptive multi-population genetic algorithm to solve the multi-objective group scheduling problem in hybrid flexible flowshop with sequence-dependent setup times. J. Intell. Manuf. 2011, 22, 979–989.
33. Fazel Zarandi, M.H.; Mosadegh, H.; Fattahi, M. Two-machine robotic cell scheduling problem with sequence-dependent setup times. Comput. Oper. Res. 2013, 40, 1420–1434.
34. Majumder, A.; Laha, D. A new cuckoo search algorithm for 2-machine robotic cell scheduling problem with sequence-dependent setup times. Swarm Evol. Comput. 2016, 28, 131–143.
35. Burcin Ozsoydan, F.; Sağir, M. Iterated greedy algorithms enhanced by hyper-heuristic based learning for hybrid flexible flowshop scheduling problem with sequence dependent setup times: A case study at a manufacturing plant. Comput. Oper. Res. 2021, 125, 105044.
36. Cheng, T.C.; Wang, G.; Sriskandarajah, C. One-operator-two-machine flowshop scheduling with setup and dismounting times. Comput. Oper. Res. 1999, 26, 715–730.
37. Lin, S.W.; Ying, K.C. Makespan optimization in a no-wait flowline manufacturing cell with sequence-dependent family setup times. Comput. Ind. Eng. 2019, 128, 1–7.
38. Lim, A.; Rodrigues, B.; Wang, C. Two-machine flow shop problems with a single server. J. Sched. 2006, 9, 515–543.
39. Su, L.H.; Lee, Y.Y. The two-machine flowshop no-wait scheduling problem with a single server to minimize the total completion time. Comput. Oper. Res. 2008, 35, 2952–2963.
40. Samarghandi, H.; ElMekkawy, T.Y. An efficient hybrid algorithm for the two-machine no-wait flow shop problem with separable setup times and single server. Eur. J. Ind. Eng. 2011, 5, 111–131.
41. Cheng, T.C.; Kovalyov, M.Y. Scheduling a single server in a two-machine flow shop. Computing 2003, 70, 167–180.
42. Gnatowski, A.; Rudy, J.; Idzikowski, R. On two-machine Flow Shop Scheduling Problem with disjoint setups. In Proceedings of the 2020 IEEE 15th International Conference of System of Systems Engineering (SoSE), Budapest, Hungary, 2–4 June 2020; pp. 277–282.
43. Bożejko, W.; Gnatowski, A.; Klempous, R.; Affenzeller, M.; Beham, A. Cyclic scheduling of a robotic cell. In Proceedings of the 2016 7th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Wroclaw, Poland, 16–18 October 2016; pp. 379–384.
44. Iravani, S.M.; Teo, C.P. Asymptotically optimal schedules for single-server flow shop problems with setup costs and times. Oper. Res. Lett. 2005, 33, 421–430.
45. Vlk, M.; Novak, A.; Hanzalek, Z. Makespan Minimization with Sequence-dependent Non-overlapping Setups. In Proceedings of the 8th International Conference on Operations Research and Enterprise Systems, SCITEPRESS—Science and Technology Publications, Prague, Czech Republic, 19–21 February 2019; pp. 91–101.
46. Okubo, H.; Miyamoto, T.; Yoshida, S.; Mori, K.; Kitamura, S.; Izui, Y. Project scheduling under partially renewable resources and resource consumption during setup operations. Comput. Ind. Eng. 2015, 83, 91–99.
47. Tempelmeier, H.; Buschkühl, L. Dynamic multi-machine lotsizing and sequencing with simultaneous scheduling of a common setup resource. Int. J. Prod. Econ. 2008, 113, 401–412.
48. Glass, C.A.; Shafransky, Y.M.; Strusevich, V.A. Scheduling for Parallel Dedicated Machines with a Single Server. Nav. Res. Logist. 2000, 47, 304–328.
49. Nowicki, E.; Smutnicki, C. A fast taboo search algorithm for the job shop problem. Manag. Sci. 1996, 42, 797–813.
50. Stanley, R.P. Catalan Numbers; Cambridge University Press: Cambridge, UK, 2015.
51. Haglund, J. The q,t-Catalan Numbers and the Space of Diagonal Harmonics; American Mathematical Society: Providence, RI, USA, 2008.
52. Vera-López, A.; García-Sánchez, M.; Basova, O.; Vera-López, F. A generalization of Catalan numbers. Discret. Math. 2014, 332, 23–39.
53. Krattenthaler, C. Unimodality, Log-concavity, Real-rootedness And Beyond. In Handbook of Enumerative Combinatorics; Chapman and Hall/CRC: Boca Raton, FL, USA, 2015; pp. 461–508.
54. Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms, 3rd ed.; The MIT Press: Cambridge, MA, USA; London, UK, 2009.
55. Moscato, P. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms. In Caltech Concurrent Computation Program, C3P Report. 1989. Available online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.27.9474&rep=rep1&type=pdf (accessed on 6 August 2022).
56. Nawaz, M.; Enscore, E.E.; Ham, I. A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega 1983, 11, 91–95.
57. Glover, F. Tabu Search—Part I. ORSA J. Comput. 1989, 1, 190–206.
58. Glover, F. Tabu Search—Part II. ORSA J. Comput. 1990, 2, 4–32.
59. Lai, X.; Hao, J.K.; Yue, D. Two-stage solution-based tabu search for the multidemand multidimensional knapsack problem. Eur. J. Oper. Res. 2019, 274, 35–48.
60. Marcoulides, K.M. Latent growth curve model selection with Tabu search. Int. J. Behav. Dev. 2020, 45, 153–159.
61. Ebadi, Y.; Jafari Navimipour, N. An energy-aware method for data replication in the cloud environments using a Tabu search and particle swarm optimization algorithm. Concurr. Comput. Pract. Exp. 2019, 31, e4757.
62. Taillard, E. Benchmarks for basic scheduling problems. Eur. J. Oper. Res. 1993, 64, 278–285.
Figure 1. The left-shifted schedule for the instance from Example 2 and σ = ( 1 , 1 , 1 , 2 , 3 , 2 , 3 , 2 , 3 ) .
Figure 2. Relation between a number of feasible σ and Catalan numbers. Grey area represents admissible points in L 2 paths.
Figure 3. Illustration for the proof of Theorem 1. Solid arrows represent technological and machine constraints. Dashed arrows represent setup order (red for σ and blue for σ ^ ). Dark grey circles represent jobs, while light grey circles represent setups.
Figure 4. Elimination ratio for different instance sizes.
Figure 5. The left-shifted schedule provided by the greedy algorithm for the instance from Table 2.
Figure 6. Impact of the neighborhood width on the gap.
Figure 7. C max with regards to running time for the solving methods and 3 exemplary problem instances. For TabuA and TabuB, each mark represents a new best solution found, while for MILP, marks correspond to time limits set for the solver.
Figure 8. C max with regards to running time for the solving methods and 3 exemplary problem instances. For TabuA and TabuB, each mark represents a new best solution found, while for MILP, marks correspond to time limits set for the solver.
Table 1. Selected notation.
Symbol: Meaning
J, n: set of jobs; number of jobs
M, m: set of machines; number of machines
S: set of numbers from 1 to the number of setups
p_i^a: processing time of job i on machine a
s_i^a: setup time after job i on machine a
Sπ_i^a, Cπ_i^a: start and completion times of job i on machine a
Sσ_i^a, Cσ_i^a: start and completion times of the setup after job i on machine a
τ: order of setups (explicit representation)
T: set of all possible τ
σ: order of setups (short representation)
Σ: set of all possible σ
Σ_feas: set of feasible σ
C_max(σ): makespan for setup order σ
C_k: k-th Catalan number
Table 2. Problem instance from Example 2 for n = 4 and m = 3 .
i | p_i1 p_i2 p_i3 | s_i1 s_i2 s_i3
1 |  1    2    1   |  1    1    1
2 |  1    2    2   |  1    1    1
3 |  1    1    1   |  1    2    2
4 |  2    1    1   |  -    -    -
Table 3. Relation between the number of feasible setup orders and the total number of setup orders.
n     | |Σ_feas| (m=2)  |Σ| (m=2)     | |Σ_feas| (m=3)  |Σ| (m=3)
2     | 2.00            2.00          | 6.00            6.00
5     | 4.20×10^1       7.00×10^1     | 9.48×10^3       3.46×10^4
10    | 1.68×10^4       4.86×10^4     | 1.42×10^10      2.28×10^11
20    | 6.56×10^9       3.53×10^10    | 2.46×10^23      2.25×10^25
50    | 1.98×10^27      2.55×10^28    | 6.62×10^64      7.67×10^67
100   | 8.97×10^56      2.28×10^58    | 1.63×10^135     1.41×10^139
200   | 5.12×10^116     2.58×10^118   | 1.44×10^277     9.60×10^281
500   | 5.39×10^296     6.76×10^298   | 9.65×10^704     9.83×10^710
1000  | 2.05×10^597     5.12×10^599   | -               -

n     | |Σ_feas| (m=4)  |Σ| (m=4)     | |Σ_feas| (m=5)  |Σ| (m=5)
2     | 2.40×10^1       2.40×10^1     | 1.20×10^2       1.20×10^2
5     | 6.56×10^6       6.31×10^7     | 1.06×10^10      3.06×10^11
10    | 1.52×10^17      2.15×10^19    | 1.08×10^25      1.90×10^28
20    | 2.28×10^39      8.61×10^42    | 1.22×10^57      3.88×10^62
50    | 7.13×10^108     3.71×10^114   | 4.46×10^157     4.14×10^166
100   | 1.22×10^227     3.35×10^234   | -               -
200   | 4.95×10^465     7.84×10^474   | -               -
Table 4. Average gap [%] for the MILP, TabuA and TabuB methods for different instance size groups and time limits. Best values for each instance size and time limit are underlined. Rows “Diff” contain performance differences (gains) between running times 100 and 1.
Time [s] | Alg.  |  5×2  10×2  10×3  15×2  15×3  20×3  20×5  20×10 30×5  30×10 40×5  40×10 | Avg.
1        | MILP  |  0.00 0.00  0.00  0.00  0.84  2.77  2.34  1.53  2.15  0.93  1.81  0.45  | 1.07
         | TabuA |  0.00 0.20  1.40  0.33  0.94  2.13  0.25  0.55  0.34  0.47  0.33  0.18  | 0.60
         | TabuB |  0.00 0.45  1.43  0.48  1.44  2.01  0.06  0.13  0.12  0.18  0.02  0.01  | 0.54
2        | MILP  |  0.00 0.00  0.00  0.00  0.37  2.05  1.11  1.53  2.15  0.93  1.81  0.45  | 0.87
         | TabuA |  0.00 0.20  1.40  0.33  0.93  1.96  0.12  0.45  0.34  0.41  0.10  0.11  | 0.54
         | TabuB |  0.00 0.45  1.43  0.48  1.44  1.88  0.06  0.00  0.12  0.18  0.02  0.01  | 0.52
5        | MILP  |  0.00 0.00  0.00  0.00  0.27  0.77  0.56  1.53  1.28  0.93  1.65  0.45  | 0.62
         | TabuA |  0.00 0.20  1.40  0.33  0.93  1.68  0.06  0.45  0.11  0.41  0.08  0.06  | 0.49
         | TabuB |  0.00 0.45  1.43  0.48  1.44  1.77  0.06  0.00  0.08  0.05  0.02  0.01  | 0.49
10       | MILP  |  0.00 0.00  0.00  0.00  0.13  0.52  0.23  1.53  0.94  0.93  1.41  0.45  | 0.51
         | TabuA |  0.00 0.20  1.40  0.33  0.93  1.68  0.06  0.45  0.11  0.41  0.08  0.06  | 0.48
         | TabuB |  0.00 0.45  1.43  0.48  1.44  1.77  0.06  0.00  0.08  0.05  0.00  0.00  | 0.49
20       | MILP  |  0.00 0.00  0.00  0.00  0.11  0.31  0.15  1.11  0.89  0.93  0.88  0.45  | 0.41
         | TabuA |  0.00 0.20  1.40  0.33  0.93  1.68  0.06  0.39  0.08  0.41  0.02  0.06  | 0.47
         | TabuB |  0.00 0.45  1.43  0.48  1.44  1.77  0.06  0.00  0.08  0.05  0.00  0.00  | 0.49
50       | MILP  |  0.00 0.00  0.00  0.00  0.01  0.08  0.12  0.35  0.48  0.93  0.60  0.45  | 0.25
         | TabuA |  0.00 0.20  1.40  0.33  0.93  1.68  0.06  0.10  0.08  0.18  0.00  0.06  | 0.42
         | TabuB |  0.00 0.45  1.43  0.48  1.44  1.77  0.06  0.00  0.08  0.05  0.00  0.00  | 0.49
100      | MILP  |  0.00 0.00  0.00  0.00  0.00  0.00  0.01  0.17  0.22  0.90  0.40  0.45  | 0.18
         | TabuA |  0.00 0.20  1.40  0.33  0.93  1.68  0.06  0.08  0.07  0.09  0.00  0.06  | 0.41
         | TabuB |  0.00 0.45  1.43  0.48  1.44  1.77  0.06  0.00  0.08  0.05  0.00  0.00  | 0.49
Diff     | MILP  |  0.00 0.00  0.00  0.00  0.84  2.77  2.33  1.36  1.94  0.03  1.41  0.00  | 0.89
         | TabuA |  0.00 0.00  0.00  0.00  0.01  0.45  0.19  0.47  0.27  0.37  0.32  0.12  | 0.19
         | TabuB |  0.00 0.00  0.00  0.00  0.00  0.24  0.00  0.13  0.04  0.12  0.02  0.01  | 0.05
Optimality of “0.00” entries: All the instances solved optimally. The majority of the instances solved optimally.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Gnatowski, A.; Rudy, J.; Idzikowski, R. Scheduling Disjoint Setups in a Single-Server Permutation Flow Shop Manufacturing Process. Processes 2022, 10, 1837. https://doi.org/10.3390/pr10091837
