Article

A Robust Heuristics for the Online Job Shop Scheduling Problem

1
Faculty of Mechanical Engineering, Aškerčeva cesta 6, 1000 Ljubljana, Slovenia
2
Rudolfovo—Science and Technology Centre Novo Mesto, Podbreznik 15, 8000 Novo Mesto, Slovenia
*
Author to whom correspondence should be addressed.
Algorithms 2024, 17(12), 568; https://doi.org/10.3390/a17120568
Submission received: 31 October 2024 / Revised: 8 December 2024 / Accepted: 10 December 2024 / Published: 12 December 2024
(This article belongs to the Special Issue Scheduling: Algorithms and Real-World Applications)

Abstract

The job shop scheduling problem (JSSP) is a popular NP-hard problem in combinatorial optimization, owing to its theoretical appeal and its importance in applications. In practice, the online version is much closer to the needs of smart manufacturing in Industry 4.0 and 5.0. Here, the online version of the JSSP is solved by a heuristic that governs local queues at the machines. This enables a distributed implementation, i.e., a digital twin can be maintained by local processors, which can result in high-speed, real-time operation. The heuristic, formulated as probabilistic rules for running the local queues, is experimentally shown to provide solutions whose quality is within acceptable approximation ratios of the best known solutions obtained by the best online algorithms. The probabilistic rule defines a model not unlike the spin glass models that are closely related to quantum computing. Major advantages of the approach are its inherent parallelism and its robustness, which promise natural and likely successful application to other variations of the JSSP. Experimental results show that the heuristic, although designed for the online version, can provide near-optimal and often even optimal solutions for many benchmark instances of the offline JSSP. It is also demonstrated that the best solutions of the new heuristic clearly improve on the results obtained by heuristics based on standard dispatching rules. Of course, there is a trade-off between computational time and the quality of the results in terms of the makespan criterion.

1. Introduction

Practical optimization problems are often very complex because many circumstances essentially influence the quality of solutions. Modern market conditions force industry to respond promptly to changes, including adapting product quantities and even tailoring particular product features to each customer. Just-in-time production for known customers is often important [1]. This implies that simple theoretical models (and the corresponding optimization problems) must be generalized to be of practical use. However, studying a general approach on the basic version of a problem may shed more light on its properties. An obvious disadvantage of applying an approach early to a specific version that models various practically important features is that the many constraints, and possibly adapted criteria, can make it harder to observe some important basic phenomena.
The motivation for this work comes from practical needs, as some of the authors closely cooperate with industrial partners trying to improve solutions in real-time production. The heuristic here is designed with the aim of developing a framework that can naturally be used in real-time practical applications. The online version of the problem is, therefore, considered. Furthermore, we note that, with some obvious changes, the approach can naturally be implemented in environments with changing features, e.g., changes of orders, failure of machine(s), etc.
A similar approach may also be used when the problem is offline in the sense that the complete list of jobs is known, but there is very little time to find a solution because production must start promptly. In other words, there is no profit if the optimal-makespan solution is found after the production process has already finished.
On the other hand, we emphasize that we focus on the basic JSSP here because we want to evaluate the performance of the basic heuristic on the popular datasets. The performance, however, has to be judged keeping in mind that online algorithms clearly have a handicap compared with offline algorithms.
The main contributions of the work reported here are the following:
  • We design a model for randomized heuristics based on priority rules that can approximately solve online problems;
  • We provide a definition of a novel type of priority rule based on the remaining processing time of jobs, using a combination of two parameters;
  • We give experimental evidence that the approach can provide, although it is an online algorithm, optimal or near-optimal solutions on a set of benchmark problems for the offline JSSP. Furthermore, we provide evidence that the approach is robust with regard to both parameters in the sense that near-optimal solutions are reported for values of parameters within reasonably large intervals.
We wish to note that the approach of using priority rules at the machines is not new; however, to the best of our knowledge, the applications are limited to standard dispatching rules such as first-in-first-out (FIFO), shortest processing time (SPT), and longest processing time (LPT), which are easy to use but not accurate enough for solving tough scheduling problems [2]. The randomization we introduce is a novel idea that combines the positive features of the standard rules with the diversity that comes from random decisions based on the Boltzmann distribution. A comparison is provided showing that restarted simulations using the randomized dispatching rules proposed here clearly outperform the standard deterministic dispatching rules.
We believe that a potential importance of the findings reported here is that we show the competitiveness of a relatively simple approach that is robust. First, it is robust in the sense that, although designed for solving the online JSSP, it provides good solutions for offline benchmark instances. Second, contrary to many metaheuristics, it does not depend heavily on lucky choices of parameters. This implies that fast parameter tuning is likely, and we therefore believe the approach may also be very competitive when applied to variations of the basic problem. For example, dynamic events can be handled naturally, nearly without altering the algorithm.
The rest of the paper is organized as follows. In the next section, we give some further arguments why JSSP is still a challenging optimization problem after more than 70 years of research. In Section 3, we recall the formal definition of the basic problem and continue with a brief discussion of its many variants, in particular those relevant for smart manufacturing. Heuristics and metaheuristics, in particular those that motivated the present work, are discussed in Section 3.3. The preliminary section concludes with a brief outline of related work. The main ideas of our approach are outlined in Section 4. In Section 5, we provide evidence of the promising behavior of our ideas on benchmark datasets. The conclusions are summarized in Section 6.

2. Motivation

The job shop scheduling problem (JSSP) is a well-known intractable combinatorial optimization problem [3] that has been studied extensively since the 1950s [4,5]. Besides the original application, the problem naturally appears in other contexts, including scheduling and rescheduling in train traffic [6,7]. Various scheduling strategies and intelligent algorithms have been proposed to solve it. JSSP deals with scheduling a set of jobs on a set of machines under natural constraints, e.g., a given order of operations of jobs on machines. Popular performance measures for JSSP include makespan, throughput time, earliness, tardiness, and due-date cost, among others. Makespan, the time at which the processing of all jobs is completed, is the most commonly used performance measure. Although a considerable amount of research has been dedicated to developing efficient solution methods for JSSP, many were found unable to solve even relatively small instances; e.g., a 10 × 10 instance proposed before 1963 [8] remained unsolved to optimality for more than a quarter of a century [9]. Even now, several benchmark instances of moderate size have no known optimal solutions. Industry 4.0, with smart and distributed manufacturing systems, brings new challenges for extensive research [10].
It is well known that even some of the most simply stated optimization problems are NP-hard, which, roughly speaking, means that there is no practical, i.e., quick and exact, optimization algorithm, assuming that the famous P ≠ NP conjecture is correct. The conjecture is among the most challenging theoretical problems and was included in the list of seven millennium problems [11]. Knowing that the problem of interest is computationally intractable implies that we may, and should, use heuristic approaches, aiming for nearly optimal solutions for which sometimes not even approximation guarantees can be given. It is believed [12] that the best results are obtained when a special heuristic is designed and tuned for each particular problem, based on considerations of the problem and perhaps also on the properties of the most likely instances. On the other hand, it is useful to work within the framework of (one or more) metaheuristics, which can be seen as general strategies for attacking an optimization problem. Several examples are given, for example, in [13]. Hundreds of research papers and many textbooks have been published on general and even on particular heuristics. For an extensive discussion of the meaning of the terms heuristics and heuristic algorithms, see [14]. For a history of metaheuristics, we refer to [15]. In short, heuristics are problem-solving methods that are not guaranteed to find the optimal solution, but are instead intended to find a solution that is good enough in a reasonable amount of time. Metaheuristics are general strategies for attacking optimization problems using heuristics.
For modeling practical problems in production, a number of variants of JSSP have been defined and studied [16]. Due to the hardness of the basic problem and its variants, exact algorithms, mostly based on the branch-and-bound method, can solve to optimality only instances of very limited size. Therefore, many popular heuristics and metaheuristics have been applied, and more have been designed, to find near-optimal solutions of the JSSP or its variants. We provide a sample of recent references in later sections.
The simulation-based approach is widely adopted in industry, as dispatching rules are readily implemented, computationally efficient, and robust to the variability and uncertainty of a job shop [17]. Furthermore, the approach can be used for both the online and offline versions of the problem. In the offline version of JSSP, it is assumed that all input data are available and that we have (a lot of) time to find an optimal, or at least near-optimal, solution. The situation is essentially different for the online version: new jobs arrive while the algorithm is already running, and thus complete information is not available a priori. It is well known that online versions are usually harder to handle than offline versions of the same problem; see, e.g., [18]. The online version is of particular interest in real-time optimization of production processes.
A usual approach to controlling industrial production and performing real-time optimization of production processes is to use digital twins. A digital twin is a virtual replica of a physical system that can be used to simulate and optimize the system's performance in real time. Using the real-time data, it may be profitable to run a real-time optimizer parallel to, or shortly ahead of, the actual production process. This involves continuously updating the production schedule as new jobs arrive and using real-time data to decide how best to allocate resources, for example by using machine learning algorithms to predict future job arrivals, or by using heuristic algorithms to quickly find near-optimal solutions under time constraints. A promising idea that we elaborate here is to upgrade the digital twin with a heuristic that locally manages the queues. This allows the system to adapt quickly to changing conditions and to find near-optimal solutions under time constraints. Overall, the key to tackling the online JSSP in real-time optimization of production processes is a combination of digital twins, real-time optimizers, and heuristics.

3. Preliminaries

In this section, we recall the formal definition of the job shop scheduling problem. Then, we discuss some variations; in particular, we emphasize the relevance of the online versions for Industry 4.0 and 5.0. We also provide some further background on heuristics and metaheuristics, both to explain the motivation for the ideas used and to provide arguments for the good performance of the approach on the tested instances. We wish to emphasize here that the results of the online computations are compared with the best known solutions provided by offline algorithms. In the last subsection, some related previous work is briefly recalled.

3.1. The Basic Job Shop Scheduling Problem

The formal definition of the basic JSSP is as follows. A finite set of $n$ jobs $J = \{J_1, J_2, \ldots, J_n\}$ and a finite set of $m$ machines $M = \{M_1, M_2, \ldots, M_m\}$ are given. Each job $J_j$ consists of a set of operations $O_{j,1}, O_{j,2}, \ldots, O_{j,m}$ which need to be processed in a specific order (known as precedence constraints). Each operation needs to be processed on a specific machine; at most one operation of a job can be processed at a given time, and each machine can process at most one operation at a time (machines have capacity one). The operation $O_{jk}$ takes time $\tau_{jk}$ and may not be interrupted. The task to be solved is to assign starting times $t_{jk}$ to all operations such that all precedence and capacity constraints are satisfied. The total time in which all operations of all jobs are completed is referred to as the makespan $C_{\max}$, and the optimal solution is the minimal makespan of a feasible schedule, formally:
$$C^* = \min\, C_{\max} = \min \Bigl\{ \max_{j,k}\,(t_{jk} + \tau_{jk}) \;\Bigm|\; \text{all jobs, all operations, schedule feasible} \Bigr\}.$$
This is the basic version of JSSP. Due to practical importance of the problem, many variants have been studied in the literature.
The dimensionality of an instance of JSSP is the number of operations that have to be scheduled. When each of the $n$ jobs needs to use each of the $m$ machines, the dimensionality is $m \times n$.
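As a concrete illustration of the definition above, the makespan of a given feasible schedule can be computed directly from the starting times and durations. The following is a minimal sketch, not taken from the paper; the data layout (jobs as lists of machine/duration pairs) is an assumption made for illustration only:

```python
def makespan(jobs, start):
    """C_max = max over all operations O_jk of (t_jk + tau_jk)."""
    return max(
        start[j][k] + duration
        for j, ops in enumerate(jobs)
        for k, (_machine, duration) in enumerate(ops)
    )

# A tiny 2x2 instance (dimensionality m x n = 4 operations):
jobs = [
    [(0, 3), (1, 2)],   # J1: 3 time units on M1, then 2 on M2
    [(1, 4), (0, 3)],   # J2: 4 time units on M2, then 3 on M1
]
# One feasible schedule: no machine processes two operations at once,
# and each job's operations run in the prescribed order.
start = [
    [0, 4],   # O_11 starts at t=0 on M1, O_12 at t=4 on M2
    [0, 4],   # O_21 starts at t=0 on M2, O_22 at t=4 on M1
]
print(makespan(jobs, start))  # 7
```

Minimizing this quantity over all feasible choices of `start` yields the optimal makespan $C^*$.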
Obviously, instead of makespan, one can optimize other objectives. Some other objectives may be closer to practical needs, and in fact, multiobjective versions of the problem may be the models of choice in industrial applications. We wish to note that our approach is robust in the sense that it can be applied to other objective functions either readily or with some natural changes. However, for comparison of the results, we use the makespan objective, as it is used for the majority of benchmark instances.
Several other features of the model can be naturally relaxed or generalized. For example, the capacity of machines may be more than one, and operations may be performed on a set of machines. Below, we discuss some variations.

3.2. Variations of the JSSP

The job shop scheduling problem is among the most popular optimization problems due to its importance in practical situations. After the first paper on the job shop scheduling problem [19], countless studies have been published; see, e.g., a survey on deterministic job shop scheduling [20] and another on fuzzy versions of the problem [21]. The survey paper [22], in a recent special issue [23], dealt with the design, balancing, and scheduling of assembly and production lines under inaccurate data. Various applications have led to generalizations and variations of the original problem [24]. The importance of the problem and its known complexity have motivated the application of many heuristics and metaheuristics. In [16], a general framework for the automatic design of shop scheduling strategies based on hyper-heuristics is constructed, and various state-of-the-art technical points in the development process are summarized. Existing types of shop scheduling strategies are also summarized and classified using a new classification method.
In particular, smart manufacturing and Industry 4.0 have reinforced the need to solve real-time scheduling problems. The authors of the review [16] proposed a classification of JSSP variants. For example, DFJSPP stands for the dynamic flexible job shop scheduling problem. This variation nicely illustrates some of the many challenges that practical scheduling adds to the basic problem. Two further examples are minimum-energy scheduling [25] and bi-objective dynamic open shop scheduling [26]. Due to the high complexity of practical problems, the actual shop production environment may change at any time (e.g., order cancellation, addition, advance of delivery) [27]; thus, for each disturbance, a solver that is not adapted to the flexible JSSP may spend much time on rescheduling.
The digital twin is one of the main concepts associated with the Industry 4.0 wave [28]. It provides virtual representations of systems along their life cycle. Optimization and decision making then rely on data that are updated in real time through synchronization with the physical system, enabled by sensors. The corresponding optimization tools therefore cannot rely on full information given a priori; hence, in most cases, the online versions of the optimization problems have to be solved in this context. As explained later, our approach is based on a heuristic that governs local queues at the machines. In principle, this enables a distributed implementation, i.e., a digital twin can be maintained by local processors, which can result in high-speed, real-time operation. Furthermore, the approach is robust in the sense that various changes can be handled without severe consequences.
However, in order to evaluate the approach, we have used some popular benchmark instances for the offline version of the problem. The reason is that there are no standard benchmark instances for the online version, while the offline datasets are well known [29] and can be used for both versions. We wish to emphasize that the online versions of problems are usually more complex than the offline versions. Usually, competitive analysis [30] is used to obtain competitive ratios between the two versions. For example, Hurink and Paulus [31] show that 2 is a tight lower bound on the competitive ratio for a problem with two machines and the objective of minimizing the makespan.

3.3. Heuristics and Metaheuristics

NP-hard problems are usually addressed using heuristic methods and approximation algorithms. Combinatorial optimization problems can in many cases be viewed as searching for the best element of some set of discrete items; therefore, in principle, any sort of search algorithm or metaheuristic can be used to solve them. However, generic search algorithms are not guaranteed to find an optimal solution, nor are they guaranteed to run quickly (in polynomial time). Assuming that the conjecture P ≠ NP is true, there is no efficient (i.e., polynomial time) algorithm for any NP-hard problem [11]. Here, by the term algorithm we mean any precisely defined sequence of actions (decisions) used when solving an optimization problem. If the problem to be solved is NP-hard, then either the algorithm has superpolynomial time complexity or its results cannot be expected to be exact. Such algorithms are called heuristic algorithms. We first give a brief summary of what is understood by heuristics and metaheuristics in this work. A metaheuristic is a higher-level procedure that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity. In contrast to heuristics, metaheuristics often make fewer or even no assumptions about the optimization problem being solved, and so they may be usable for a variety of problems, while heuristics are usually designed for a particular problem or even a particular type of instance. Compared to exact optimization algorithms, metaheuristics do not guarantee that a globally optimal solution can be found. We say that a heuristic searches for so-called near-optimal solutions: on some occasions, we strongly hope that the solution will be of good quality, but in general, we have no approximation guarantee.
Many books and survey papers have been published on the subject, see for example [13,32,33,34].
Most studies on metaheuristics are experimental, describing empirical results based on computer experiments with various implementations of heuristic algorithms. The literature is enormous, and we will therefore later give only a short list of recent references related to the JSSP, which is of main interest in this work. Some formal theoretical results on heuristics and metaheuristics are also available, often on convergence and the possibility of finding the global optimum. Theoretical results are often rather involved and at the same time of limited practical importance. As an example, we mention the very basic fact that any local search, and even random guessing, asymptotically outperforms the simulated annealing algorithm, provided that the temperature schedule of the latter satisfies the convergence constraints [35]. Simulated annealing (SA) [34] can be understood as a random relaxation of the iterative improvement algorithm [32]. It was a very popular heuristic in the 1990s, and it is still a frequently used relaxation of iterative improvement. Based on both theoretical results and a vast number of computational studies, several researchers have come to the conclusion that simple heuristics are often very competitive [12,15], which is of course another rediscovery of the well-known Occam's razor principle [36]. In the last two decades, a tsunami of "novel" metaheuristics based on metaphors of natural or human-made processes has appeared [37]. However, the beauty of a metaphor does not imply that the corresponding metaheuristic is a valuable contribution to science and/or useful knowledge on heuristics for optimization. For a clarification of this view, see, for example, [15].

3.4. A Brief Outline of Related Previous Work—Heuristics for JSSP

As the JSSP is NP-hard [3], exact algorithms are expected to be very time consuming. Among the many exact approaches, the branch-and-bound technique is quite efficient, but optimal solutions can be found only for small-scale JSSP instances, because exact approaches are incapable of solving large-scale problems due to unreasonable computational expense [38].
Not surprisingly, the tsunami of metaphor-based heuristics has also hit the JSSP. For example, a recent survey [39] provided over 250 references related to genetic programming in machine learning for JSSP. In the survey paper [20], the authors observed that, at the time, the best methods appeared to be hybrid systems, such as local search techniques embedded within a metastrategy that employs a simple neighborhood structure combined with a mechanism allowing non-improving moves. Well-known examples of the latter are simulated annealing (SA) [40] and threshold accepting [41], and in some sense also tabu search (TS) [38]. For more recent local search-based heuristics, see, for example, [42,43,44] and the references therein.
While the question of which (meta)heuristics are best is not well-grounded, because it is always possible to tune parameters for the chosen datasets such that any reasonable heuristic performs well or even excellently, it is worth emphasizing again that this fact favors simple (but not too simple!) and robust approaches. Here, we do not intend to survey previous work; we only use its results as information on the best known solutions for benchmark instances.
In the field of job shop scheduling, a heuristic can often be regarded as a scheduling strategy at the level of priority queues at the machines. This has been well known for a long time [7,45,46,47,48,49,50,51]. Examples of often-used simple rules are FIFO (first in, first out, i.e., first served) and largest first, among others (e.g., [7,52]). We may see an analogy between this approach and greedy construction methods for other problems in combinatorial optimization.

4. Our Approach

In the research reported here, our aim was to design a heuristic for the online version of the problem that may be implemented as an optimizer running parallel to the digital twin. This implies that an approach allowing massively parallel, real-time operation is desired, which in turn suggests focusing on priority dispatching rules. Another limitation is that decisions must be made promptly, so we need simple, i.e., quickly computable, rules. In this context, we recall that randomized construction heuristics may be a method of choice in such situations [12]. As far as time allows, multistart of such heuristics may be used to choose the best among the decisions checked so far. Regarding the probability distributions, it is clear that choices which improve the solution quality should be preferred; compare simulated annealing, tabu search, and the generalized Boltzmann machine neural network. More precisely, the probabilistic priority dispatching rules that we use later are motivated by our positive past experience with graph coloring heuristics [53,54], which show behavior that "is not unlike the phenomenon of phase-transition which occurs in the Ising model, Potts model and other models of statistical mechanics" [55].
The heuristic method we use for solving the JSSP is explained below.

4.1. Priority Queues at Machines

Our approach is based on the idea of using a buffer $B_i$ at each machine $M_i$ to hold the remaining operations while the machine is processing an operation. The buffer allows operations to be prioritized based on their weight $\omega_{ji}$ and their arrival time in the buffer.
In our method, each operation $O_{ji}$ of job $J_j$ that comes to a specific machine $M_i$ first enters the priority queue $B_i$. If the buffer $B_i$ at machine $M_i$ holds multiple orders (operations to be performed), or the machine $M_i$ is occupied or nonoperational, the orders have to wait in the queue $B_i$ for a change in the state of the machine. The queue is a priority queue in the sense that operations in the buffer can overtake each other based on the weight $\omega_{ji}$ and the arrival time of each operation. The lower the weight, the higher the priority.
The choice of the weights assigned to each job in the buffer is crucial for the performance of the system. Traditionally, this weight has been assigned using a uniform random function. However, recent research [56] has shown that incorporating the remaining processing time of each operation on all machines, and the sum of the remaining processing times of all waiting operations, can lead to better results.
To this end, a new formula has been proposed for calculating the weight of each operation. It takes into account the remaining processing time, at time $t$, of the job $J_j$ whose operation is waiting to be processed by machine $M_i$, denoted by $p_{ji}(t)$. Note that these values are known in advance, as they depend only on the times needed to perform the operations of job $J_j$; hence $p_{ji}(t) = p_{ji}$. Formally, the weight of job $J_j$ at machine $M_i$ is defined as follows. Assume that operation $O_{ji}$ of job $J_j$ at machine $M_i$ is in the $k$-th position of job $J_j$; its remaining processing time is then:
$$p_{ji} = \sum_{l=k}^{m} \tau_{jl}.$$
At the time the next job is chosen from the queue $B_i$, the weight of operation $O_{ji}$ (job $J_j$) is:
$$\omega_{ji}(t) = \frac{p_{ji}}{\sum_{J_l \in B_i} p_{li}(t)},$$
where the sum in the denominator runs over all operations in the queue $B_i$ at time $t$. The weight thus changes over time, depending on the current status of the jobs. The weights are used to define the probability of choosing the next job to be processed. More precisely, let $J_j$ be a job in the buffer at machine $M_i$ at time $t$. Then, the probability of choosing $J_j$ is proportional to:
$$\Omega_{ji}(t) = e^{-\omega_{ji}(t)}.$$
Hence, the probability of choosing job $J_j$ in the queue of machine $M_i$ at time $t$ is $P_j = \frac{1}{S}\, e^{-\omega_{ji}(t)}$, where $S = \sum_l \Omega_{li}(t)$, the sum running over all jobs in the queue of machine $M_i$ at time $t$.
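A small numeric check of these formulas, using hypothetical remaining processing times and assuming the Boltzmann factor penalizes larger weights (so that lower weights are more likely, in line with the priority rule above):

```python
import math

# Two operations waiting at machine M_i with remaining processing
# times p_1 = 6 and p_2 = 2 (hypothetical values, for illustration).
remaining = [6, 2]
total = sum(remaining)                      # 8
weights = [p / total for p in remaining]    # omega = [0.75, 0.25]

# Boltzmann-style scores Omega = exp(-omega); normalizing gives
# the selection probabilities P_j, which favor the smaller weight.
scores = [math.exp(-w) for w in weights]
S = sum(scores)
probs = [s / S for s in scores]
print(probs)  # the second operation (smaller weight) is more likely
```

Note that the weights always sum to one over the queue, so the probabilities depend only on the relative shares of remaining work.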
In the rare event that two or more operations in the buffer $B_i$ have the same weight at the same time, the FIFO (first in, first out) rule is applied: priority is given to the operation that arrived earlier.
In pseudocode, the choice of the next job is made as follows.
Algorithm 1: Pseudocode for Heuristic JSSP with Randomized Selection
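The randomized selection step can be sketched in Python using the weight and Boltzmann formulas defined above. This is a minimal reconstruction, not the authors' implementation; the data layout and the negative exponent are assumptions:

```python
import math
import random

def select_next_operation(buffer, rng=None):
    """Pick the index of the next operation to process from queue B_i.

    `buffer` holds the remaining processing times p_ji of the waiting
    operations (known in advance).  The normalized weight is
    omega_ji = p_ji / sum of all p_li in the queue, and selection is
    proportional to exp(-omega_ji), so operations with lower weight
    are chosen with higher probability.
    """
    rng = rng or random.Random()
    total = sum(buffer)
    weights = [p / total for p in buffer]       # omega_ji(t)
    scores = [math.exp(-w) for w in weights]    # Omega_ji(t)
    return rng.choices(range(len(buffer)), weights=scores, k=1)[0]

# Three waiting operations with remaining processing times 5, 1, and 3.
rng = random.Random(0)
picks = [select_next_operation([5, 1, 3], rng) for _ in range(10_000)]
counts = [picks.count(i) for i in range(3)]
# The operation with the smallest remaining time (index 1) is chosen
# most often, yet every operation keeps a nonzero chance of selection.
print(counts)
```

In a full simulation, this function would be called whenever a machine becomes free, and the chosen operation's job would then forward its next operation to the corresponding machine's queue.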
Example 1.
The proposed heuristic algorithm prioritizes jobs based on calculated weights, allowing for probabilistic selection. Assume there are four jobs (A, B, C, D) in the buffer queue, each with assigned probabilities reflecting their priority (see Figure 1). In this case, the most likely selection order is A, D, B, C, as shown in the initial queue state.
However, due to the randomness of the probabilistic selection mechanism, a lower-priority job may be selected occasionally, thus changing the typical order. In this example, operation C1 of job C, which has a lower probability of being selected, was processed earlier due to the stochastic nature of the algorithm.
The diagrams below illustrate the process:
  • The initial buffer queue state with jobs A, B, C, and D awaiting processing;
  • The expected processing sequence based on priority probabilities: A1 → D1 → B1 → C1;
  • The actual sequence executed due to a possible random selection: A1 → C1 → D1 → B1.
This example highlights the balance achieved by the algorithm between adhering to priority rules and allowing for stochastic exploration, which is essential for avoiding local optima and improving the overall scheduling performance.
Figure 1. Illustration of the probabilistic job selection process. Due to the stochastic nature of the algorithm, job C1 interrupts the expected sequence and is processed earlier than anticipated.
The new formula has been tested on various JSSP instances and has shown promising results. By assigning weights based on the remaining processing time of each operation and on the sum of the remaining processing times of all operations, the proposed method improves scheduling performance and obtains better results than the traditional uniform random selection method.
After implementing the new weight calculation formula, we found that our algorithm performed better than before, but we still saw some room for improvement. We realized that introducing some level of randomness in the weight calculation could help the algorithm explore the search space more effectively.

4.2. Introducing the Temperature Parameter

To achieve this, we borrowed the idea of temperature from the popular simulated annealing heuristic, which is inspired by an analogy with statistical mechanics [34]. Simulated annealing involves gradually decreasing the temperature of a system to allow it to settle into a lower energy state. In our context, we introduce the temperature parameter to control the influence of the weights in the randomized choice of the next job.
In our method, we implement an analogous idea to increase the randomness of the solution search. This is achieved through a temperature parameter, denoted T. Unlike in traditional simulated annealing, where the temperature decreases over time, we set T to a constant value that varies among runs of the simulations. The idea of a fixed temperature schedule is well known [57,58]. The purpose of this approach is to maintain a certain level of randomness throughout the entire search process, which can lead to the discovery of better solutions that would be missed if the randomness were decreased too quickly.
In Formula (4) below, T is the temperature parameter and r_i is a random value drawn from a uniform distribution between 0 and 1. The parameter T can be seen as a change of base of the exponential expression, since e^{w_i/T} = (e^{1/T})^{w_i} = b_T^{w_i}. Clearly, a large T makes the base b_T close to e^0 = 1, implying that the weights w_i tend to be ignored; the priority rule then becomes very similar to uniform random choice. On the other hand, a very small T means that the operation with the largest weight w_i is very likely to be selected. The factor r_i adds random noise to the weight calculation, whose influence decreases as the temperature parameter decreases.
Specifically, we modified the weight calculation Formula (3) to include the temperature parameter, as follows:
\Omega_j^i(t) = r_i \cdot e^{\omega_j^i(t)/T} = r_i \cdot \left(e^{1/T}\right)^{\omega_j^i(t)} = r_i \cdot b_T^{\omega_j^i(t)}. \quad (4)
By introducing this level of randomness, we were able to explore the search space more effectively and found that our algorithm produced even better results than before. We validated the effectiveness of the modified algorithm by comparing its performance to existing methods and found that it is competitive both in terms of solution quality and computation time. By combining the strengths of the weighting system and of simulated annealing, we achieved significant improvements in the flow time of operations across all machines. Overall, the experimental results show that our heuristic method is a powerful and flexible approach for solving complex job shop scheduling problems.
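The temperature-modified score of Formula (4) can be sketched as follows (our own reading; in particular, taking the job with the largest tempered score is one plausible way the scores are used, and all names are ours):

```python
import math
import random

def tempered_scores(weights, T, rng=random):
    # Omega_j = r_j * exp(omega_j / T): a fresh uniform noise factor
    # r_j in [0, 1) multiplies the temperature-scaled exponential weight.
    return {job: rng.random() * math.exp(w / T) for job, w in weights.items()}

def choose_with_temperature(weights, T, rng=random):
    # Process the job with the highest tempered score next.
    scores = tempered_scores(weights, T, rng)
    return max(scores, key=scores.get)
```

At small T the exponential term dominates and the largest weight almost always wins; at large T all exponentials approach 1 and the noise factors r_j make the choice nearly uniform, matching the discussion of the base b_T above.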

4.3. Artificial Machine Status Nonoperational

We extend the heuristic further as follows. We allow the possibility that a machine is idle, i.e., does not start executing an operation even if its buffer is not empty. In other words, the model allows that a machine is sometimes nonoperational; we say that the machine is in the artificial status nonoperational, in contrast to being nonoperational in the real production process for technical reasons. The likelihood of this happening is determined by a specific factor λ that can be adjusted as a parameter of the heuristics. The parameter λ allows us to control the trade-off between the efficiency of the schedule and the workload of each machine. When λ is set to a higher value, the likelihood of a machine not receiving an order is higher, which can lead to a more even distribution of workload among machines. On the other hand, when λ is set to a lower value, machines are more likely to receive orders, which can lead to faster completion times for individual orders but may result in some machines being overloaded. Our approach combines several heuristic techniques into a scheduling algorithm that can handle complex job shop scheduling problems efficiently. By using a weighted buffer state, probabilistic event handling, and a simulated-annealing-style temperature, we optimize the flow time of all orders while ensuring an even distribution of workload among machines.
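A minimal sketch of the artificial nonoperational mechanism, assuming λ acts as a per-decision idle probability (the function name is ours):

```python
import random

def machine_starts_next_operation(lmbda, rng=random):
    """Decide whether a machine with a non-empty buffer starts its next
    operation now. With probability lmbda it stays idle instead, i.e.,
    it enters the artificial 'nonoperational' status for this decision."""
    return rng.random() >= lmbda
```

With λ = 0 the schedule is nondelay (a machine never idles while work is waiting); larger λ introduces deliberate delays, which the experiments show is necessary for some instances whose optimal schedules contain delays.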
To evaluate the effectiveness of our approach, a digital model was developed which incorporates the algorithm described above for job shop scheduling. We ran the model on various well-known benchmarks for the JSSP, in each case using 20,000 different scenarios. The factors T and λ were kept constant for each scenario, but the seed for random numbers was varied to test the performance of our algorithm under different conditions. This approach allowed us to compare our results with existing methods and validate the effectiveness of our approach.

5. Experimental Results

The goal of exhaustive experiments was to investigate the effect of parameters λ and T on the final result. By conducting a large number of scenarios, we aimed to determine the optimal values for these parameters for each problem. The results presented in Table 1 highlight the best objective function values obtained for each λ and T combination, with the value of T at which the best result was first found shown in brackets. Bolded results indicate that the solution obtained is the same as the best known solution (BKS), while results in bold and italic indicate our best result, which is not the same as the BKS. These experiments provide valuable insights into the performance of the algorithm under different parameter settings, which can be used to inform future research and development efforts.
In this study, we present the results of experiments conducted on 17 well-known JSSP problems. For each problem, we conducted a number of scenarios in which two parameters were varied. The parameter λ runs over six values, i.e., 0%, 1%, 2%, 3.5%, 5%, and 10%, and the parameter T over 11 different values, i.e., 0.10, 0.20, 0.40, 0.60, 0.80, 1.00, 2.00, 3.00, 5.00, 7.00, and 10.00. For each combination of λ and T, we conducted 20,000 experiments. Table 1 shows the best objective function values obtained for each λ and T combination.
Note that Table 1 only displays the best results obtained for each value of λ and the corresponding value of T. The full set of results for each scenario is available upon request.
Figure 2 presents the Gantt chart of the best-known solution for the LA01 instance. The solution achieves the optimal makespan of 666.
The results summarized in Table 1 indicate that very different values of parameter T were used in the best runs; note that only one, the first T, is given. Regarding parameter λ , there are several instances for which the best results were obtained with λ = 0 , while there are some other instances where nondelay schedules were inferior. This is expected, as in some instances, it is known that the optimal or best-known solutions must have delays. It may be interesting to note that for some instances (FT10, FT20, ABZ5, ABZ6, LA03), the best solutions were obtained with only one (small nonzero) λ . It is not clear whether this was pure luck or whether there is some reason for such behavior. As the approach is based on the priority rule, which only uses the information on the jobs currently in the local queue, we can write our first observation.
Observation 1.
The heuristics running online (without full information on the instance available a priori) is able to find near-optimal solutions (in some cases even optimal).
In addition to the main experiments, we conducted further experiments to analyze the quality of the obtained solutions. Specifically, we evaluated the distribution of the results relative to the best solution we obtained. In the first experiment, we performed multiple runs of the randomized simulations to see how good the solutions can get, and observed that the best solutions are close to the performance of the best offline algorithms. However, our algorithm is online and, more importantly, designed to run in real time, possibly assisting the production process. Therefore, besides the best solution obtained, it is important to know what can be expected in limited time, i.e., when only a small number of repetitions is performed. To this end, for each problem instance we measured the percentage of runs in which our approach found a solution equal to the best solution we obtained, and the percentage of runs within 1%, 5%, and 10% of that best solution. Below (see Table 2, Table 3 and Table 4), we provide the results for three JSSP instances: LA01, LA05, and FT10. Similar results, available upon request, were obtained for all 17 JSSP instances included in our experiments.
For example, from Table 2, we read that for several choices of parameter T, around 4% of the runs provide a solution less than 10% worse than the best solution. Since repetitions are independent trials, the probability of such a solution being found in k runs is, by the Bernoulli trials formula, equal to:
P = \sum_{i=1}^{k} \binom{k}{i} p^i (1-p)^{k-i} = 1 - (1-p)^k.
Hence, for example, in k = 100 runs such a solution is found with probability P ≈ 98%. Similar results are obtained for the second example; see Table 3. Even for the instance FT10 (Table 4), which appears to be hard for the heuristics, we observe that a small number of runs is likely to come close to the best that can be expected from these heuristics. More precisely, we have P ≈ 74%, a reasonably high probability of finding a near-optimal solution that is at most 5% worse than the best obtained in 20,000 runs. At first sight, these probabilities may not seem very impressive, but recalling the approximate bounds for online algorithms [31], the performance is indeed very good.
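The estimate above can be checked numerically (the function name is ours; p = 0.04 corresponds to roughly 4% of runs landing within 10% of the best solution):

```python
def at_least_one_success(p, k):
    # Probability that at least one of k independent runs succeeds,
    # given per-run success probability p: 1 - (1 - p)^k.
    return 1.0 - (1.0 - p) ** k

# With p = 0.04 and k = 100 independent runs, a near-optimal solution
# is found with probability of about 0.98, as stated in the text.
p_100 = at_least_one_success(0.04, 100)
```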
Observation 2.
The heuristics frequently finds solutions close to the best obtained.
Finally, consider the graphs of four examples (Figure 3, Figure 4 and Figure 5), where the performance of the algorithm is compared for varying values of parameter T. The instance LA01 has already been considered (Table 2), and we have seen that the algorithm frequently finds optimal and near-optimal solutions. The second example, instance LA05, seems to be easier, and the convergence is much faster; see Figure 4. The last two examples are shown in Figure 5. The instance ABZ6 is obviously harder; the solution quality again improves nicely with the number of trials, but the best solutions obtained seem to converge to approximate solutions rather than the optimum. In other words, we can expect the solution to be an approximation with a certain gap that is not expected to vanish in reasonable time. In summary, the figures show that the speed of convergence depends on parameter T, but the differences are not large, as the curves all have similar shapes.
We conclude from the examples that the behavior of the algorithm is rather robust with respect to parameter T, which implies that the algorithm is likely to run successfully with any parameter value from a large interval. Hence, while fine tuning is expected to improve performance, the algorithm may already work very well with parameter values that are not optimized.
Observation 3.
Performance of the heuristics is robust relative to the parameter T.
This means that in practical situations, we can start using the algorithm with some default parameter values and later learn which values are better. In particular, an intelligent learning agent may later be added to the system, allowing the parameter values to be learned and adapted during the production process.
The last observation indicates that it may be of some (theoretical) interest to look more closely at the dependence between the parameter T and the behavior of the heuristics. For example, we can ask what the optimal temperature T is for a given instance, or for a set of instances. However, based on experience with similar questions, we expect that answering such a question is far from easy, recalling the open questions for some simulated-annealing-based heuristics for the traveling salesman problem and the graph coloring problem [54,57,58].
In the last experiment, a performance comparison is conducted with several well-known heuristics for solving the JSSP that are based on various standard dispatching rules, i.e., types of buffer priority queues at the machines. The widely used strategies for prioritizing jobs and managing machine schedules that we include in comparison are described below:
  • FIFO (First In, First Out): Processes jobs in the order they arrive, without prioritizing based on any other criteria;
  • LLT (Longest Processing Time Left): Prioritizes jobs with the longest remaining processing time;
  • SLT (Shortest Processing Time Left): Prioritizes jobs with the shortest remaining processing time;
  • LPT (Longest Processing Time): Prioritizes jobs with the longest operation time first;
  • SPT (Shortest Processing Time): Prioritizes jobs with the shortest operation time;
  • LTT (Longest Total Time): Prioritizes jobs with the longest total time remaining across all operations;
  • STT (Shortest Total Time): Prioritizes jobs with the shortest total time remaining.
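Most of these rules can be expressed as sort keys over the jobs waiting in a buffer. A sketch for a subset of them (the job-record fields are our own assumptions, not from the paper):

```python
# Each job is a dict with (assumed) fields: arrival time, processing
# time of its next operation, and total remaining processing time.
RULES = {
    "FIFO": lambda job: job["arrival"],          # earliest arrival first
    "SPT":  lambda job: job["op_time"],          # shortest next operation
    "LPT":  lambda job: -job["op_time"],         # longest next operation
    "STT":  lambda job: job["remaining_total"],  # shortest total remaining
    "LTT":  lambda job: -job["remaining_total"], # longest total remaining
}

def dispatch(buffer, rule):
    # The next job is the one minimizing the selected rule's key.
    return min(buffer, key=RULES[rule])
```

Unlike these deterministic keys, the proposed heuristics draws the next job from a probability distribution, which is what allows it to escape the fixed orderings the standard rules produce.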
The heuristics were tested on a collection of well-known JSSP benchmark instances available on the website www.jobshoppuzzle.com [59], a platform designed for exploring and testing solutions for the JSSP. This resource provides a practical setting to evaluate heuristics, offering diverse problem sets and a controlled environment for benchmarking.
Table 5 compares results across various JSSP instances. The columns in the table are defined as follows: the first two columns give the instance and its best-known solution (BKS), serving as a benchmark for performance evaluation. The columns FIFO, LLT, SLT, LPT, LTT, and STT show results obtained using the traditional heuristics, followed by the best solution among the standard rules (column 9). The last two columns summarize the results of the heuristics proposed here: column BEST gives the best solution obtained, while column WORST gives the solution obtained by the least successful parameter combination of our heuristics. The results demonstrate that the proposed algorithm frequently matches the best-known solutions and often finds good near-optimal solutions. It also outperforms the traditional heuristics in nearly all cases, as indicated by the BEST results. Furthermore, the robustness of the proposed algorithm is evident, as its WORST results are better than or equal to the best standard-rule results in all but two instances, highlighting its reliability and consistency across diverse problem sets. We summarize these comments as an observation.
Observation 4.
The proposed heuristics provides results better than the results obtained by the standard dispatching rules.

6. Discussion and Conclusions

In this paper, we designed and tested a heuristics for the job shop scheduling problem that is distributed and online: all decisions are made at the local queues at the machine buffers, at the time the next operation (job) is chosen by the priority rule, using only local information. To evaluate the quality of the solutions obtained, we used a set of benchmark instances of the offline job shop scheduling problem, although our heuristics solves the online version of the problem. In principle, this can support the digital twin technology that is essential in modern smart manufacturing. Although our experiments were performed on a classical computer using a simulation tool, the approach allows direct application to an asynchronous parallel implementation in a real industrial environment. This is planned for future work.
Our approach is based on heuristics that govern local queues at the machines, which in principle enables a distributed implementation, i.e., a digital twin can be maintained by local processors, which can result in high-speed, real-time operation. Furthermore, implementation of a version that handles unpredicted events is rather straightforward. For example, a new job can enter the system at any time, and it can also be canceled (removed) even after it is partially processed. A failure of a machine has to change some basic information only at the jobs that need to be processed by that machine, for example, replacement of the machine with another one, or altering the processing time that may be caused by the event. Obviously, there is no rescheduling needed, as the new information will only influence future decisions in the waiting queues. Variants of the algorithm have already been successfully used for versions tailored to industrial partners. As stated in the introduction, this case study focuses on the basic version of JSSP because we hoped to be able to make some basic observations, and because we can use some popular benchmark datasets.
Finally, note that the experimental results are very promising. We wish to emphasize that we use the benchmark instances for which the best known or optimal solutions were found by the best offline algorithms that in some cases are very time consuming. Therefore, in this context, much more important than finding the best known solutions is to observe that very good approximations were found in a short time by our heuristics. Also note that the excellent behavior of the algorithm is observed at various values of parameters, implying that the tuning of the heuristics can be expected to be relatively easy, and can perhaps potentially be performed on the fly by some self-adapting mechanisms. This idea has yet to be tested, and we believe it may be a big challenge for future study.
In this work, we have put forward a novel type of dispatching rule that proved to be an improvement over the standard rules. The standard rules are deterministic, while in our method, the order of jobs to be processed is determined on the basis of certain probability distributions. While the average performance is comparable to the standard dispatching rules, we show that the best solutions obtained clearly outperform the standard deterministic dispatching rules. The results are compared to the best known solutions of standard benchmark problems for the offline problem to obtain a firm impression of the solution quality. A deeper comparison of our method with offline algorithms and metaheuristics is of limited interest, as the methods are very different and it is not clear what reasonable criteria and methods for comparing their performance would be. Therefore, in this study, we did not use any advanced statistical tests or other methods for ranking the approaches.
In conclusion, promising experimental results on the offline benchmark instances and observed robustness of the method made us believe that the following challenges may be tractable research tasks in the future:
  • Variations of JSSP. Application of the method to variations of the JSSP with different objectives including multicriteria optimization and taking into account various additional constraints that appear in specific applications. Due to its robustness, the method is likely to be competitive on variations of the basic problem.
  • Distributed parallel implementation. As the computation is performed at local queues, it seems to be obvious that there is no severe restriction on the dimension of the problems to be solved. The approach can be implemented using full parallelism when local processors are used at each machine, implying the possibility of handling very large instances of the problem.
  • Machine learning for parameter fine tuning. Fine tuning of parameters may be advanced by machine learning techniques, and potentially it can be adapted dynamically on the run, based on the features of recently processed jobs. Namely, it is likely that optimal parameter values for one dataset of instances are not optimal for another. If this hypothesis holds, then it should indeed be profitable to add a machine learning tool that would learn from past and recent experience.

Author Contributions

Conceptualization, N.H. and J.Ž.; software, H.Z.; validation, H.Z., N.H. and J.Ž.; investigation, H.Z.; writing—original draft preparation, H.Z.; writing—review and editing, J.Ž.; supervision, N.H. and J.Ž.; project administration, N.H. and J.Ž. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Slovenian Research and Innovation Agency (ARIS), grants J2-2512, J2-4470 and P2-0248. The research work carried out by the first author was partially supported by the project INNO2MARE (Funded by the European Union under the Horizon Europe Grant no. 101087348). The third author was also partially supported by ARIS through the annual work program of Rudolfovo.

Data Availability Statement

The data are available at various benchmark datasets on web, including the webpage https://www.jobshoppuzzle.com/benchmarks.html, accessed on 4 December 2024.

Acknowledgments

The authors wish to thank the anonymous reviewers for constructive comments that helped us considerably improve this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ahmadian, M.M.; Salehipour, A. The just-in-time job-shop scheduling problem with distinct due-dates for operations. J. Heuristics 2021, 27, 175–204. [Google Scholar] [CrossRef]
  2. Fuladi, S.K.; Kim, C.-S. Dynamic Events in the Flexible Job-Shop Scheduling Problem: Rescheduling with a Hybrid Metaheuristic Algorithm. Algorithms 2024, 17, 142. [Google Scholar] [CrossRef]
  3. Garey, M.R.; Johnson, D.S.; Sethi, R. The Complexity of Flowshop and Jobshop Scheduling. Math. Oper. Res. 1976, 1, 117–129. [Google Scholar] [CrossRef]
  4. Johnson, S.M. Optimal two-and three-stage production schedules with setup times included. Nav. Res. Logist. Quart. 1954, 1, 61–68. [Google Scholar] [CrossRef]
  5. Karush, W.; Moody, L.A. Determination of Feasible Shipping Schedules for a Job Shop. Oper. Res. 1958, 6, 35–55. [Google Scholar] [CrossRef]
  6. Szpigel, B. Optimal Train Scheduling on a Single Line Railway. Oper. Res. 1973, 72, 344–351. [Google Scholar]
  7. Gholami, O.; Törnquist Krasemann, J. A Heuristic Approach to Solving the Train Traffic Re-Scheduling Problem in Real Time. Algorithms 2018, 11, 55. [Google Scholar] [CrossRef]
  8. Muth, J.F.; Thompson, G.L. Industrial Scheduling; Prentice-Hall: Cliffs, NJ, USA, 1963. [Google Scholar]
  9. Zhang, C.Y.; Li, P.; Rao, Y.; Guan, Z. A very fast TS/SA algorithm for the job shop scheduling problem. Comput. Oper. Res. 2008, 35, 282–294. [Google Scholar] [CrossRef]
  10. Zhang, J.; Ding, G.; Zou, Y.; Qin, S.; Fu, J. Review of job shop scheduling research and its new perspectives under Industry 4.0. J. Intell. Manuf. 2019, 30, 1809–1830. [Google Scholar] [CrossRef]
  11. The Millennium Prize Problems, P vs. NP. Available online: https://www.claymath.org/millennium/p-vs-np/ (accessed on 1 August 2023).
  12. Žerovnik, J. Heuristics for NP-hard optimization problems—Simpler is better!? Logist. Supply Chain. Sustain. Glob. Chall. 2015, 6, 1–10. [Google Scholar] [CrossRef]
  13. Talbi, E.-G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  14. Romanycia, M.H.J.; Pelletier, F.J. What is a heuristic? Comput. Intell. 1985, 1, 47–58. [Google Scholar] [CrossRef]
  15. Sörensen, K.; Glover, F.W. Metaheuristics. In Encyclopedia of Operations Research and Management Science; Gass, S.I., Fu, M.C., Eds.; Springer: Boston, MA, USA, 2013. [Google Scholar] [CrossRef]
  16. Guo, H.; Liu, J.; Zhuang, C. Automatic design for shop scheduling strategies based on hyper-heuristics: A systematic review. Adv. Eng. Inform. 2022, 54, 101756. [Google Scholar] [CrossRef]
  17. Blackstone, J.H.; Phillips, D.T.; Hogg, G.L. A state-of-the-art survey of dispatching rules for manufacturing job shop operations. Int. J. Prod. Res. 1982, 20, 27–45. [Google Scholar] [CrossRef]
  18. Liu, M.; Xu, Y.; Chu, C.; Zheng, F. Online scheduling on two uniform machines to minimize the makespan. Theor. Comput. Sci. 2009, 410, 2099–2109. [Google Scholar] [CrossRef]
  19. Graham, R.L. Bounds for certain multiprocessing anomalies. Bell Syst. Tech. J. 1966, 45, 1563–1581. [Google Scholar] [CrossRef]
  20. Jain, A.S.; Meeran, S. Deterministic job-shop scheduling: Past, present and future. Eur. J. Oper. Res. 1999, 113, 390–434. [Google Scholar] [CrossRef]
  21. Abdullah, S.; Abdolrazzagh-Nezhad, M. Fuzzy job-shop scheduling problems: A review. Inf. Sci. 2014, 278, 380–407. [Google Scholar] [CrossRef]
  22. Sotskov, Y. Assembly and Production Line Designing, Balancing and Scheduling with Inaccurate Data: A Survey and Perspectives. Algorithms 2022, 16, 100. [Google Scholar] [CrossRef]
  23. Werner, F. Special Issue “Scheduling: Algorithms and Applications”. Algorithms 2023, 16, 268. [Google Scholar] [CrossRef]
  24. Xiong, H.; Shi, S.; Ren, D.; Hu, J. A survey of job shop scheduling problem: The types and models. Comput. Oper. Res. 2022, 142, 105731. [Google Scholar] [CrossRef]
  25. Olanrewaju, O.A.; Krykhtine, F.L.P.; Mora-Camino, F. Minimum-Energy Scheduling of Flexible Job-Shop Through Optimization and Comprehensive Heuristic. Algorithms 2024, 17, 520. [Google Scholar] [CrossRef]
  26. Abdelmaguid, T.F. Bi-Objective, Dynamic, Multiprocessor Open-Shop Scheduling: A Hybrid Scatter Search–Tabu Search Approach. Algorithms 2024, 17, 371. [Google Scholar] [CrossRef]
  27. Matrenin, P.V. Improvement of Ant Colony Algorithm Performance for the Job-Shop Scheduling Problem Using Evolutionary Adaptation and Software Realization Heuristics. Algorithms 2023, 16, 15. [Google Scholar] [CrossRef]
  28. Negri, E.; Fumagalli, L.; Macchi, M. A Review of the Roles of Digital Twin in CPS-based Production Systems. Procedia Manuf. 2017, 11, 939–948. [Google Scholar] [CrossRef]
  29. van Hoorn, J.J. The Current state of bounds on benchmark instances of the job-shop scheduling problem. J. Sched. 2018, 21, 127–128. [Google Scholar] [CrossRef]
  30. Borodin, A.; El-Yaniv, R. Online Computation and Competitive Analysis; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  31. Hurink, J.L.; Paulus, J.J. Online scheduling of parallel jobs on two machines is 2-competitive. Oper. Res. Lett. 2008, 36, 51–56. [Google Scholar] [CrossRef]
  32. Aarts, E.H.L.; Lenstra, J.K. Local Search Algorithms; John Wiley & Sons: Chichester, UK, 1997. [Google Scholar]
  33. Korte, B.; Vygen, J. Combinatorial Optimization: Theory and Algorithms; Springer: Berlin, Germany, 2002. [Google Scholar]
  34. van Laarhoven, P.J.; Aarts, E.H. Simulated Annealing: Theory and Applications; Kluwer: Dordrecht, The Netherlands, 1987. [Google Scholar]
  35. Ferreira, A.G.; Žerovnik, J. Bounding the probability of success of stochastic methods for global optimization. Comput. Math. Appl. 1993, 25, 1–8. [Google Scholar] [CrossRef]
  36. Occam’s Razor. Available online: https://en.wikipedia.org/wiki/Occam’s_razor (accessed on 1 August 2023).
  37. Sörensen, K. Metaheuristics—The metaphor exposed. Int. Trans. Oper. Res. 2013, 22, 3–18. [Google Scholar] [CrossRef]
  38. Peng, B.; Lü, Z.; Cheng, T. A tabu search/path relinking algorithm to solve the job shop scheduling problem. Comput. Oper. Res. 2015, 53, 154–164. [Google Scholar] [CrossRef]
  39. Zhang, F.; Mei, Y.; Nguyen, S.; Zhang, M. Survey on Genetic Programming and Machine Learning Techniques for Heuristic Design in Job Shop Scheduling. IEEE Trans. Evol. Comput. 2024, 28, 147–167. [Google Scholar] [CrossRef]
  40. Laarhoven, P.J.V.; Aarts, E.H.; Lenstra, J.K. Job shop scheduling by simulated annealing. Oper. Res. 1992, 40, 113–125. [Google Scholar] [CrossRef]
  41. Lee, D.S.; Vassiliadis, V.S.; Park, J.M. A novel threshold accepting meta-heuristic for the job-shop scheduling problem. Comput. Oper. Res. 2004, 31, 2199–2213. [Google Scholar] [CrossRef]
  42. Wang, B.; Wang, X.; Lan, F.; Pan, Q. A hybrid local-search algorithm for robust job-shop scheduling under scenarios. Appl. Soft Comput. 2018, 62, 259–271. [Google Scholar] [CrossRef]
  43. Kuhpfahl, J.; Bierwirth, C. A study on local search neighborhoods for the job shop scheduling problem with total weighted tardiness objective. Comput. Oper. Res. 2016, 66, 44–57. [Google Scholar] [CrossRef]
  44. Ruiz, R.; Stützle, T. A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem. Eur. J. Oper. Res. 2007, 177, 2033–2049. [Google Scholar] [CrossRef]
  45. Palmer, D.S. Sequencing jobs through a multi-stage process in the minimum total time—A quick method of obtaining a near optimum. J. Oper. Res. Soc. 1965, 16, 101–107. [Google Scholar] [CrossRef]
  46. Wilkerson, L.J.; Irwin, J.D. An improved method for scheduling independent tasks. AIIE Trans. 1971, 3, 239–245. [Google Scholar] [CrossRef]
  47. Krone, M.J.; Stieglitz, K. Heuristic-programming solution of a flowshop scheduling problem. Oper. Res. 1974, 22, 629–638. [Google Scholar] [CrossRef]
  48. Fry, T.D.; Vicens, L.; Macleod, K.; Fernandez, S. A Heuristic Solution Procedure to Minimize T on a Single Machine. J. Oper. Res. Soc. 1989, 40, 293–297. [Google Scholar] [CrossRef]
  49. Maccarthy, B.L.; Liu, J. Addressing the gap in scheduling research: A review of optimization and heuristic methods in production scheduling. Int. J. Prod. Res. 1993, 31, 59–79. [Google Scholar] [CrossRef]
  50. Turker, A.K.; Aktepe, A.; Inal, A.F.; Ersoz, O.O.; Das, G.S.; Birgoren, B. A Decision Support System for Dynamic Job-Shop Scheduling Using Real-Time Data with Simulation. Mathematics 2019, 7, 278. [Google Scholar] [CrossRef]
  51. Durasević, M.; Jakobović, D. Creating dispatching rules by simple ensemble combination. J. Heuristics 2019, 25, 959–1013. [Google Scholar] [CrossRef]
  52. Pranzo, M.; Pacciarelli, D. An iterated greedy metaheuristic for the blocking job shop scheduling problem. J. Heuristics 2016, 22, 587–611. [Google Scholar] [CrossRef]
  53. Shawe-Taylor, J.; Žerovnik, J. Boltzmann Machines with Finite Alphabet. In Proceedings of the International Conference on Artificial Neural Networks, ICANN’92, Brighton, UK, 4–7 September 1992; Elsevier Science: Brighton, UK, 1992; pp. 391–394. [Google Scholar]
  54. Shawe-Taylor, J.; Žerovnik, J. Analysis of the Mean Field Annealing Algorithm for Graph Colouring. J. Artif. Neural Netw. 1995, 2, 329–340. [Google Scholar]
  55. Petford, A.; Welsh, D. A Randomised 3-Colouring Algorithm. Discrete Math. 1989, 74, 253–261. [Google Scholar] [CrossRef]
  56. Zupan, H.; Herakovič, N.; Starbek, M.; Kušar, J. Hybrid Algorithm Based on Priority Rules for Simulation of Workshop Production. Int. J. Simul. Model. 2016, 15, 29–41. [Google Scholar] [CrossRef]
  57. Fielding, M. Simulated Annealing with an Optimal Fixed Temperature. SIAM J. Optim. 2000, 11, 289–307. [Google Scholar] [CrossRef]
  58. Žerovnik, J. On Temperature Schedules for Generalized Boltzmann Machine. Neural Netw. World 2000, 10, 495–503. [Google Scholar]
  59. Job Shop Puzzle. Available online: http://www.jobshoppuzzle.com (accessed on 4 December 2024).
Figure 2. Gantt chart of the best solution for LA01 instance with a makespan of 666.
Figure 3. Performance of the algorithm with various values of parameter T. Example LA01.
Figure 4. Performance of the algorithm with various values of parameter T. Example LA05.
Figure 5. Performance of the algorithm with various values of parameter T. Selected examples.
Table 1. Results of the algorithm for different λ values. The value of T at which the best result was first found is given in parentheses. Results in bold are best values obtained by our heuristics that match the BKS; results in bold italic are best values obtained by our heuristics that do not match the BKS.
Our best with λ and (T):

| Instance | Size (n × m) | BKS | λ = 0% | λ = 1% | λ = 2% | λ = 3.5% | λ = 5% | λ = 10% |
|---|---|---|---|---|---|---|---|---|
| FT06 | 6 × 6 | 55 | 58 (0.1) | 55 (0.4) | 55 (0.2) | 55 (0.6) | 55 (0.2) | 55 (0.4) |
| FT10 | 10 × 10 | 930 | 994 (0.2) | 964 (0.1) | 993 (0.6) | 1017 (1) | 1005 (0.6) | 1024 (0.2) |
| FT20 | 20 × 5 | 1165 | 1210 (0.1) | 1223 (0.1) | 1205 (0.4) | 1229 (5) | 1244 (0.4) | 1275 (0.1) |
| ABZ5 | 10 × 10 | 1234 | 1276 (0.1) | 1276 (0.4) | 1264 (7) | 1270 (2) | 1266 (0.4) | 1291 (10) |
| ABZ6 | 10 × 10 | 943 | 971 (0.1) | 951 (5) | 948 (5) | 968 (0.8) | 948 (0.8) | 978 (1) |
| ABZ7 | 20 × 15 | 665 | 740 (0.4) | 743 (7) | 758 (0.2) | 763 (3) | 766 (0.1) | 808 (2) |
| ABZ8 | 20 × 15 | 670 | 754 (0.1) | 762 (2) | 768 (0.8) | 785 (0.6) | 788 (3) | 819 (0.4) |
| ABZ9 | 20 × 15 | 691 | 779 (2) | 782 (3) | 788 (0.8) | 799 (3) | 801 (0.2) | 853 (0.4) |
| LA01 | 10 × 5 | 666 | 666 (0.1) | 666 (0.1) | 666 (0.1) | 666 (0.1) | 666 (0.1) | 666 (0.1) |
| LA02 | 10 × 5 | 655 | 676 (0.8) | 672 (0.6) | 676 (0.8) | 673 (0.1) | 672 (0.2) | 672 (0.6) |
| LA03 | 10 × 5 | 597 | 633 (0.6) | 630 (0.6) | 622 (0.8) | 631 (0.4) | 635 (1) | 623 (0.6) |
| LA04 | 10 × 5 | 590 | 611 (0.2) | 611 (0.8) | 611 (0.2) | 611 (0.4) | 611 (0.2) | 616 (3) |
| LA05 | 10 × 5 | 593 | 593 (0.1) | 593 (0.1) | 593 (0.1) | 593 (0.1) | 593 (0.1) | 593 (0.1) |
| LA06 | 15 × 5 | 926 | 926 (0.1) | 926 (0.1) | 926 (0.1) | 926 (0.1) | 926 (0.1) | 926 (0.2) |
| LA07 | 15 × 5 | 890 | 890 (0.2) | 890 (0.6) | 890 (2) | 897 (0.2) | 896 (0.4) | 932 (0.6) |
| LA08 | 15 × 5 | 863 | 863 (0.4) | 863 (0.1) | 863 (0.2) | 863 (0.6) | 863 (0.2) | 863 (0.6) |
| LA09 | 15 × 5 | 951 | 951 (0.1) | 951 (0.1) | 951 (0.1) | 951 (0.2) | 951 (0.2) | 951 (0.6) |
Table 2. The number and percentage of solutions close to the best solution found, with parameter T varied and blocking fixed to 1%. Results for LA01; each experiment was repeated 20,000 times.
| T | BKS | Best/BKS (%) | No. within 0% | No. within 1% | No. within 5% | No. within 10% | % within 0% | % within 1% | % within 5% | % within 10% |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.10 | 666 | 100.00 | 26 | 61 | 185 | 794 | 0.13 | 0.305 | 0.925 | 3.97 |
| 0.20 | 666 | 100.00 | 18 | 55 | 204 | 860 | 0.09 | 0.275 | 1.02 | 4.3 |
| 0.40 | 666 | 100.00 | 20 | 59 | 214 | 862 | 0.1 | 0.295 | 1.07 | 4.31 |
| 0.60 | 666 | 100.00 | 16 | 59 | 252 | 839 | 0.08 | 0.295 | 1.26 | 4.195 |
| 0.80 | 666 | 100.00 | 22 | 56 | 243 | 855 | 0.11 | 0.28 | 1.215 | 4.275 |
| 1.00 | 666 | 100.00 | 26 | 73 | 275 | 908 | 0.13 | 0.365 | 1.375 | 4.54 |
| 2.00 | 666 | 100.00 | 15 | 52 | 202 | 714 | 0.075 | 0.26 | 1.01 | 3.57 |
| 3.00 | 666 | 100.00 | 10 | 51 | 178 | 662 | 0.05 | 0.255 | 0.89 | 3.31 |
| 5.00 | 666 | 100.00 | 7 | 41 | 157 | 484 | 0.035 | 0.205 | 0.785 | 2.42 |
| 7.00 | 666 | 100.00 | 3 | 26 | 104 | 392 | 0.015 | 0.13 | 0.52 | 1.96 |
| 10.00 | 666 | 100.00 | 2 | 13 | 74 | 288 | 0.01 | 0.065 | 0.37 | 1.44 |
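Tables 2–4 aggregate, for each value of T, how many of the 20,000 runs produced a makespan within a given percentage of the best makespan found. A minimal Python sketch of this kind of aggregation (illustrative only, not the authors' code; the function name and threshold encoding are our assumptions):

```python
# Illustrative sketch: given the makespans from repeated runs, count how many
# fall within x% of the best value found, as tabulated in Tables 2-4.

def within_threshold_stats(makespans, thresholds=(0.0, 0.01, 0.05, 0.10)):
    """Return {threshold: (count, percentage)} of runs whose makespan
    is at most (1 + threshold) times the best makespan found."""
    best = min(makespans)
    n = len(makespans)
    stats = {}
    for t in thresholds:
        # a run counts toward threshold t if its makespan is within t of best
        count = sum(1 for m in makespans if m <= best * (1 + t))
        stats[t] = (count, 100.0 * count / n)
    return stats

# Tiny example with 4 runs and best makespan 666:
print(within_threshold_stats([666, 670, 700, 666]))
```

With 20,000 runs per experiment, the percentage column of the tables is simply the count column divided by 200.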
Table 3. The number and percentage of solutions close to the best solution found, with parameter T varied and blocking fixed to 1%. Results for LA05; each experiment was repeated 20,000 times.
| T | BKS | Best/BKS (%) | No. within 0% | No. within 1% | No. within 5% | No. within 10% | % within 0% | % within 1% | % within 5% | % within 10% |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.10 | 593 | 100.00 | 134 | 143 | 738 | 3186 | 0.67 | 0.715 | 3.69 | 15.93 |
| 0.20 | 593 | 100.00 | 289 | 312 | 1605 | 4403 | 1.445 | 1.56 | 8.025 | 22.015 |
| 0.40 | 593 | 100.00 | 448 | 484 | 2437 | 5679 | 2.24 | 2.42 | 12.185 | 28.395 |
| 0.60 | 593 | 100.00 | 540 | 596 | 2947 | 6550 | 2.7 | 2.98 | 14.735 | 32.75 |
| 0.80 | 593 | 100.00 | 551 | 607 | 3143 | 6952 | 2.755 | 3.035 | 15.715 | 34.76 |
| 1.00 | 593 | 100.00 | 558 | 622 | 3378 | 7438 | 2.79 | 3.11 | 16.89 | 37.19 |
| 2.00 | 593 | 100.00 | 574 | 638 | 3639 | 8459 | 2.87 | 3.19 | 18.195 | 42.295 |
| 3.00 | 593 | 100.00 | 544 | 612 | 3648 | 9007 | 2.72 | 3.06 | 18.24 | 45.035 |
| 5.00 | 593 | 100.00 | 476 | 515 | 3630 | 9476 | 2.38 | 2.575 | 18.15 | 47.38 |
| 7.00 | 593 | 100.00 | 406 | 447 | 3447 | 9498 | 2.03 | 2.235 | 17.235 | 47.49 |
| 10.00 | 593 | 100.00 | 284 | 315 | 3130 | 9637 | 1.42 | 1.575 | 15.65 | 48.185 |
Table 4. The number and percentage of solutions close to the best solution found, with parameter T varied and blocking fixed to 1%. Results for FT10; each experiment was repeated 20,000 times.
| T | Best | Best/BKS (%) | No. within 0% | No. within 1% | No. within 5% | No. within 10% | % within 0% | % within 1% | % within 5% | % within 10% |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.10 | 964 | 103.66 | 1 | 1 | 5 | 74 | 0.005 | 0.005 | 0.025 | 0.37 |
| 0.20 | 1009 | 108.49 | 1 | 3 | 49 | 634 | 0.005 | 0.015 | 0.245 | 3.17 |
| 0.40 | 1014 | 109.03 | 1 | 4 | 59 | 663 | 0.005 | 0.02 | 0.295 | 3.315 |
| 0.60 | 1000 | 107.53 | 1 | 1 | 25 | 347 | 0.005 | 0.005 | 0.125 | 1.735 |
| 0.80 | 992 | 106.67 | 1 | 1 | 11 | 202 | 0.005 | 0.005 | 0.055 | 1.01 |
| 1.00 | 1020 | 109.68 | 1 | 4 | 78 | 759 | 0.005 | 0.02 | 0.39 | 3.795 |
| 2.00 | 1020 | 109.68 | 1 | 4 | 44 | 562 | 0.005 | 0.02 | 0.22 | 2.81 |
| 3.00 | 1006 | 108.17 | 1 | 1 | 13 | 259 | 0.005 | 0.005 | 0.065 | 1.295 |
| 5.00 | 1038 | 111.61 | 1 | 1 | 60 | 682 | 0.005 | 0.005 | 0.3 | 3.41 |
| 7.00 | 1024 | 110.11 | 1 | 2 | 22 | 300 | 0.005 | 0.01 | 0.11 | 1.5 |
| 10.00 | 1032 | 110.97 | 1 | 1 | 19 | 308 | 0.005 | 0.005 | 0.095 | 1.54 |
Table 5. Comparison of our results against heuristics based on standard dispatching rules. The first two columns give the instance and its best known solution (BKS). The columns FIFO, LLT, SLT, LPT, LTT, and STT show results obtained with traditional heuristics, followed by the best solution among the standard heuristics (column 9). Columns BEST and WORST show the results of the best and the least successful parameter combinations of our heuristics.
| Instance | BKS | FIFO | LLT | SLT | LPT | LTT | STT | Best rule | BEST | WORST |
|---|---|---|---|---|---|---|---|---|---|---|
| FT06 | 55 | 60 | 67 | 94 | 73 | 87 | 67 | 60 | 55 | 58 |
| FT10 | 930 | 1210 | 1178 | 1530 | 1534 | 1291 | 1272 | 1178 | 964 | 1024 |
| FT20 | 1165 | 1650 | 1588 | 1513 | 1610 | 1521 | 1544 | 1513 | 1205 | 1275 |
| ABZ5 | 1234 | 1468 | 1451 | 2093 | 1860 | 2090 | 1890 | 1451 | 1264 | 1291 |
| ABZ6 | 943 | 1075 | 1096 | 1397 | 1287 | 1500 | 1306 | 1075 | 948 | 978 |
| ABZ7 | 665 | 773 | 797 | 1082 | 1026 | 1055 | 1035 | 773 | 740 | 808 |
| ABZ8 | 670 | 814 | 900 | 1114 | 1034 | 1034 | 1021 | 814 | 754 | 819 |
| ABZ9 | 691 | 920 | 944 | 1159 | 1096 | 1161 | 1077 | 920 | 779 | 853 |
| LA1 | 666 | 767 | 735 | 1210 | 803 | 920 | 948 | 735 | 666 | 666 |
| LA2 | 655 | 834 | 875 | 966 | 898 | 958 | 945 | 834 | 672 | 676 |
| LA3 | 597 | 747 | 704 | 897 | 748 | 770 | 843 | 704 | 622 | 635 |
| LA4 | 590 | 708 | 790 | 976 | 848 | 916 | 930 | 708 | 611 | 616 |
| LA5 | 593 | 593 | 612 | 905 | 787 | 827 | 689 | 593 | 593 | 593 |
| LA6 | 926 | 926 | 926 | 1498 | 1105 | 1369 | 1120 | 926 | 926 | 926 |
| LA7 | 890 | 1096 | 1031 | 1282 | 1145 | 1128 | 1098 | 1031 | 890 | 932 |
| LA8 | 863 | 980 | 1011 | 1348 | 1061 | 1168 | 1003 | 980 | 863 | 863 |
| LA9 | 951 | 951 | 1066 | 1348 | 1111 | 1332 | 1280 | 951 | 951 | 951 |
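Standard dispatching rules such as those in Table 5 decide which waiting operation a machine processes next by ordering its local queue. FIFO and LPT are classic rules; the remaining abbreviations are defined in the full paper. A minimal sketch of rule-based queue ordering (illustrative only; the function name, the SPT example rule, and the job encoding as (arrival, processing_time) pairs are our assumptions):

```python
# Illustrative sketch of dispatching rules applied to a machine's local queue.
# Jobs are encoded as (arrival_order, processing_time) tuples.

def dispatch(queue, rule="FIFO"):
    """Order a queue of (arrival, processing_time) jobs by a dispatching rule."""
    if rule == "FIFO":   # first in, first out: earliest arrival first
        return sorted(queue, key=lambda j: j[0])
    if rule == "LPT":    # longest processing time first
        return sorted(queue, key=lambda j: -j[1])
    if rule == "SPT":    # shortest processing time first (classic rule, shown for contrast)
        return sorted(queue, key=lambda j: j[1])
    raise ValueError(f"unknown rule: {rule}")

jobs = [(0, 5), (1, 9), (2, 3)]
print(dispatch(jobs, "LPT"))   # [(1, 9), (0, 5), (2, 3)]
```

A deterministic rule always produces the same schedule for a given instance, which is why each standard-rule column of Table 5 contains a single value, while the randomized heuristics yield a distribution of results summarized by BEST and WORST.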

Share and Cite

MDPI and ACS Style

Zupan, H.; Herakovič, N.; Žerovnik, J. A Robust Heuristics for the Online Job Shop Scheduling Problem. Algorithms 2024, 17, 568. https://doi.org/10.3390/a17120568