1. Introduction
Practical optimization problems are often very complex because many circumstances essentially influence the quality of solutions. Modern market conditions force industry to respond promptly to changes, including adapting product quantities and even particular features of the products to each customer. Just-in-time production for known customers is often important [1]. This implies that simple theoretical models (and the corresponding optimization problems) must be generalized to be of practical use. However, studying a general approach on the basic version of a problem may shed more light on its properties. An obvious disadvantage of early application to a specific version that models various practically important features is that the many constraints and possibly adapted criteria can make it harder to observe important basic phenomena.
The motivation for this work comes from practical needs, as some of the authors closely cooperate with industrial partners trying to improve solutions in real-time production. The heuristics here are designed with the aim of developing a framework that may be naturally used in real-time practical applications. The online version of the problem is, therefore, considered. Furthermore, we note that, with some obvious changes, the approach can naturally be implemented in an environment with changing features, e.g., changes of orders, failure of machine(s), etc.
A similar approach may also be used when the problem is offline in the sense that the complete list of jobs is known, but only a very short time is available to find a solution, as production must start promptly. In other words, there is no profit if the optimal-makespan solution is found after the production process is already finished.
On the other hand, we emphasize that we focus on the basic JSSP here because we want to evaluate the performance of the basic heuristics on the popular datasets. The performance, however, has to be judged keeping in mind that online algorithms clearly have a handicap compared with offline algorithms.
The main contributions of the work reported here are the following:
We design a model for randomized heuristics based on priority rules that can approximately solve online problems;
We provide a definition of a novel type of priority rule based on the remaining processing time of jobs, using a combination of two parameters;
We give experimental evidence that the approach can provide, although it is an online algorithm, optimal or near-optimal solutions on a set of benchmark problems for the offline JSSP. Furthermore, we provide evidence that the approach is robust with regard to both parameters in the sense that near-optimal solutions are reported for values of parameters within reasonably large intervals.
We wish to note that the approach using priority rules at the machines is not new; however, to the best of our knowledge, the applications are limited to standard dispatching rules like first-in-first-out (FIFO), shortest processing time (SPT), and longest processing time (LPT) that are easy to use but are not accurate enough for solving tough scheduling problems [2]. Randomization, which we introduce, is a novel idea that combines the positive features of the standard rules with the diversity that comes from random decisions based on the Boltzmann distribution. A comparison is provided showing that restarted simulations using the here-proposed randomized dispatching rules clearly outperform the standard deterministic dispatching rules.
We believe that a potential importance of the findings reported here is that we show the competitiveness of a relatively simple approach that is robust. First, it is robust in the sense that, although designed for solving the online JSSP, it provides good solutions for the offline benchmark instances. Second, contrary to many metaheuristics, it does not depend heavily on lucky choices of parameters. This implies that parameter tuning is likely to be fast, and therefore, we believe that the approach may also be very competitive when applied to variations of the basic problem. For example, dynamic events can be handled naturally, nearly without altering the algorithm.
The rest of the paper is organized as follows. In the next section, we give some further arguments why JSSP is still a challenging optimization problem after more than 70 years of research. In Section 3, we recall the formal definition of the basic problem and continue with a brief discussion of its many variants, in particular those relevant for smart manufacturing. Heuristics and metaheuristics, in particular those that motivated the present work, are discussed in Section 3.3. The preliminary section concludes with a brief outline of related work. Section 4 explains the main ideas of our approach. In Section 5, we provide evidence of the promising behavior of our ideas on benchmark datasets. The conclusions are summarized in Section 6.
2. Motivation
The job shop scheduling problem (JSSP) is a well-known intractable combinatorial optimization problem [3] that has been studied extensively since the 1950s [4,5]. Besides the original application, the problem naturally appears in other contexts, including the scheduling and rescheduling problem in train traffic [6,7]. Various scheduling strategies and intelligent algorithms have been proposed to solve it. JSSP deals with scheduling a set of jobs on a set of machines under natural constraints, e.g., a given order of operations of jobs on machines. Popular performance measures for JSSP include makespan, throughput time, earliness, tardiness, and due-date cost, among others. Makespan, the time at which the processing of all jobs is completed, is the most commonly used performance measure. Although a considerable amount of research has been dedicated to the development of efficient solution methods for JSSP, many of them were found unable to solve even relatively small instances; e.g., an instance of size 10 × 10 proposed before 1963 [8] was not solved to optimality for more than a quarter of a century [9]. Even now, several benchmark instances of moderate size have no known optimal solutions. Industry 4.0, with smart and distributed manufacturing systems, brings new challenges for extensive research [10].
It is well known that many of the most simply stated optimization problems are NP-hard, which, roughly speaking, means that there is no practical, i.e., quick and exact, optimization algorithm, assuming that the famous P ≠ NP conjecture is correct. The conjecture is among the most challenging theoretical problems and was included in the list of seven millennium problems [11]. Knowing that the problem of interest is computationally intractable implies that we may and should use heuristic approaches, aiming to find nearly optimal solutions, for which sometimes even approximation guarantees cannot be given. It is believed [12] that the best results are obtained when a specialized heuristic is designed and tuned for each particular problem. This means that the heuristic should be based on considerations of the particular problem and perhaps also on the properties of the most likely instances. On the other hand, it is useful to work within the framework of (one or more) metaheuristics, which can be seen as general strategies for attacking an optimization problem. Several examples are given, for example, in [13]. Hundreds of research papers and many textbooks have been published on general and even on particular heuristics. For an extensive discussion of the meaning of the terms heuristics and heuristic algorithms, see [14]. For a history of metaheuristics, we refer to [15]. In short, heuristics are problem-solving methods that are not guaranteed to find the optimal solution but are instead intended to find a solution that is good enough in a reasonable amount of time. They are designed and tuned for each particular problem, based on considerations of the problem and the properties of the most likely instances. Metaheuristics are general strategies for attacking optimization problems using heuristics.
For modeling practical problems in production, a number of variants of JSSP have been defined and studied [16]. Due to the hardness of the basic problem and its variants, exact algorithms, mostly based on the branch-and-bound method, can solve to optimality only instances of very limited size. Therefore, many popular heuristics and metaheuristics were applied, and some more were designed, to find near-optimal solutions of the JSSP or its variants. We provide a sample of recent references in later sections.
The simulation-based approach is widely adopted in industry, as dispatching rules are readily implemented, computationally efficient, and robust to the variability and uncertainty of a job shop [17]. Furthermore, the approach can be used for both online and offline versions of the problem. In the offline version of JSSP, it is assumed that all input data are available and that we have (a lot of) time to find an optimal or at least a near-optimal solution. The situation is essentially different for the online version of the problem. In this case, new jobs arrive while the algorithm is already running, and thus the complete information is not available a priori. It is well known that online versions are usually harder to handle than the offline versions of the same problem; see, e.g., [18]. The online version is of particular interest in real-time optimization of production processes.
A usual approach to controlling industrial production and performing real-time optimization of production processes is to use digital twins. A digital twin is a virtual replica of a physical system that can be used to simulate and optimize the system's performance in real time. Using real-time data, it may be profitable to run a real-time optimizer in parallel to, or shortly ahead of, the actual production process. This involves continuously updating the production schedule as new jobs arrive and using real-time data to decide how to best allocate resources, for example by applying machine learning to predict future job arrivals and optimize the schedule accordingly, or by using heuristic algorithms to quickly find near-optimal solutions under time constraints. A promising idea that we elaborate here is to upgrade the digital twin with heuristics that locally manage the queues of jobs waiting to be processed, so that the system can quickly adapt to changing conditions and find near-optimal solutions under time constraints. Overall, the key to tackling the online version of JSSP in real-time optimization of production processes is to combine digital twins, real-time optimizers, and heuristics.
4. Our Approach
In the research reported here, our aim was to design a heuristic for the online version of the problem that may be implemented to run as an optimizer parallel to the digital twin. This implies that an approach allowing massively parallel and real-time operation is desired, which in turn directs us to focus on priority dispatching rules. Another limitation is that decisions must be made promptly, so we need simple, i.e., quickly computable, rules. In this context, we recall that randomized construction heuristics may be a method of choice in such situations [12]. As much as time allows, multistart of such heuristics may be used to choose the best among the decisions checked so far. Regarding the probability distributions, it is clear that choices which improve the solution quality should be preferred; compare simulated annealing, tabu search, and the generalized Boltzmann machine neural network. More precisely, the probabilistic priority dispatching rules that we use later are motivated by our positive past experience with graph coloring heuristics [53,54] that show behavior which "is not unlike the phenomenon of phase-transition which occurs in the Ising model, Potts model and other models of statistical mechanics" [55].
The heuristic method we use for solving the JSSP is explained below.
4.1. Priority Queues at Machines
Our approach is based on the idea of using a buffer for each machine $m$ to hold the waiting operations while the machine is processing an operation. The buffer allows prioritization of operations based on their weights and their arrival times in the buffer.
In our method, each operation of a job that comes to a specific machine $m$ first goes through the buffer, i.e., enters the priority queue $Q_m$. In the case where the buffer at machine $m$ holds multiple orders (operations to be performed) or the machine is occupied or nonoperational, the orders have to wait in the queue for a change in the state of machine $m$. The queue $Q_m$ is a priority queue in the sense that operations in the buffer can overtake each other based on the weight $w_j$ of each operation and its arrival time in the buffer. The lower the weight, the higher the priority.
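To make the buffer concrete, a minimal sketch of such a priority queue follows, ordering waiting operations by weight and breaking ties by arrival time (FIFO). The class and field names are our illustration, not identifiers from the implementation used in the experiments.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedOperation:
    weight: float           # lower weight = higher priority
    arrival: int            # FIFO tie-break for equal weights
    job: str = field(compare=False)

class MachineBuffer:
    """Priority queue Q_m holding the operations waiting at one machine."""
    def __init__(self):
        self._heap = []

    def push(self, op):
        heapq.heappush(self._heap, op)

    def pop(self):
        return heapq.heappop(self._heap)

buf = MachineBuffer()
buf.push(QueuedOperation(weight=0.5, arrival=0, job="A"))
buf.push(QueuedOperation(weight=0.2, arrival=1, job="B"))
buf.push(QueuedOperation(weight=0.5, arrival=2, job="C"))
print(buf.pop().job)  # B: lowest weight first
print(buf.pop().job)  # A: equal weights resolved by earlier arrival
```

The tuple-style ordering of the dataclass fields (weight first, arrival second) is exactly the priority described above.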
The choice of the weights assigned to each job in the buffer is crucial for the performance of the system. Traditionally, this weight has been assigned using a uniform random function. However, recent research has shown [56] that incorporating the remaining processing time of each operation on all machines and the sum of the remaining processing times of all operations on all machines can lead to better results.
To this end, a new formula has been proposed for calculating the weight of each operation. The formula takes into account the remaining processing time, at time $t$, of each job $j$ that has an operation waiting to be processed by machine $m$; this quantity is denoted by $r_j(t)$. Note that these values are known in advance, as they only depend on the times needed to perform the operations of job $j$. Formally, the weight of job $j$ at machine $m$ is defined as follows. Assume that the operation of job $j$ waiting at machine $m$ is at the $k$-th position of job $j$, so the remaining processing time of job $j$ is

$r_j(t) = \sum_{i=k}^{n_j} p_{i,j}$,   (1)

where $p_{i,j}$ denotes the processing time of the $i$-th operation of job $j$ and $n_j$ is the number of operations of job $j$.
At the time the next job is chosen from the queue $Q_m$, the weight of the operation of job $j$ is

$w_j(t) = r_j(t) \big/ \sum_{i \in Q_m(t)} r_i(t)$,   (2)

where the sum in the denominator runs over all operations in the queue $Q_m$ at time $t$. The weight thus changes in time, depending on the current status of the jobs. The weights are used to define the probability of choosing the next job to be processed. More precisely, let $j$ be a job in the buffer at machine $m$ at time $t$. Then, the probability of choosing $j$ is proportional to

$e^{w_j(t)}$.   (3)

Hence, the probability of choosing job $j$ in the queue of machine $m$ at time $t$ is $P_j(t) = e^{w_j(t)}/Z_m(t)$, where $Z_m(t) = \sum_{i \in Q_m(t)} e^{w_i(t)}$, the sum running over all jobs in the queue of machine $m$ at time $t$.
In the rare event that two or more operations in the buffer have the same weight at the same time, the FIFO (first in, first out) rule is applied: priority is given to the operation that arrived earlier.
Coded in a pseudo programming language, the choice of the next job is made as follows.
Algorithm 1: Pseudocode for Heuristic JSSP with Randomized Selection
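As an illustration of the selection step, the following Python sketch computes the remaining-time weights and draws the next operation with probabilities proportional to the exponential of the weights. It is an illustrative reading of the rule described above, not the exact listing used in the experiments; the data layout and function names are our assumptions.

```python
import math
import random

def remaining_time(job, k):
    """r_j(t): sum of processing times of job j's operations from position k on."""
    return sum(job["proc_times"][k:])

def choose_next(queue, rng):
    """Pick the next operation from a machine queue Q_m.

    queue: list of (job, k) pairs, where k is the position of the
    waiting operation within its job. Weights follow the
    remaining-processing-time rule; the choice is randomized with
    probabilities proportional to exp(w_j)."""
    total = sum(remaining_time(job, k) for job, k in queue)
    weights = [remaining_time(job, k) / total for job, k in queue]
    scores = [math.exp(w) for w in weights]
    z = sum(scores)
    r, acc = rng.random() * z, 0.0
    for item, s in zip(queue, scores):
        acc += s
        if r < acc:
            return item
    return queue[-1]  # guard against floating-point rounding

rng = random.Random(42)
jobs = {"A": {"proc_times": [3, 2, 4]}, "B": {"proc_times": [1, 1]}}
queue = [(jobs["A"], 0), (jobs["B"], 0)]
job, k = choose_next(queue, rng)
```

Repeating the call with different seeds reproduces the restarted-simulation setting described later: each run yields a different sequence of choices.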
Example 1. The proposed heuristic algorithm prioritizes jobs based on calculated weights, allowing for probabilistic selection. Assume there are four jobs (A, B, C, D) in the buffer queue, each with assigned probabilities reflecting their priority (see Figure 1). In this case, the most likely selection order is A, D, B, C, as shown in the initial queue state. However, due to the randomness of the probabilistic selection mechanism, a lower-priority job may be selected occasionally, thus changing the typical order. In this example, operation C1 of job C, which has a lower probability of being selected, was processed earlier due to the stochastic nature of the algorithm.
The diagrams below illustrate the process:
The initial buffer queue state with jobs A, B, C, and D awaiting processing;
The expected processing sequence based on priority probabilities: A1 → D1 → B1 → C1;
The actual sequence executed due to a possible random selection: A1 → C1 → D1 → B1.
This example highlights the balance achieved by the algorithm between adhering to priority rules and allowing for stochastic exploration, which is essential for avoiding local optima and improving the overall scheduling performance.
Figure 1.
Illustration of the probabilistic job selection process. Due to the stochastic nature of the algorithm, job C1 interrupts the expected sequence and is processed earlier than anticipated.
The new formula has been tested on various JSSP problems and has shown promising results. By assigning weights based on the remaining processing time of each operation and the sum of remaining processing times of all operations, the proposed method has been able to improve the scheduling performance and obtain better results than the traditional uniform random function method.
After implementing the new weight calculation formula, we found that our algorithm performed better than before, but we still saw some room for improvement. We realized that introducing some level of randomness in the weight calculation could help the algorithm explore the search space more effectively.
4.2. Introducing the Parameter Temperature
To achieve this, we borrowed the idea of temperature used in the popular heuristic simulated annealing, which is inspired by an analogy with statistical mechanics [34]. Simulated annealing involves gradually decreasing the temperature of a system to allow it to settle into a lower energy state. In our context, we introduced the temperature parameter to control the importance of the weights in the randomized choice of the next job.
In our method, we implemented an analogous idea to increase the randomness of the solution search. This is achieved through the use of a temperature parameter, denoted as $T$. Unlike traditional simulated annealing algorithms, where the temperature decreases over time, we set $T$ to a constant value that varies among runs of the simulations. The idea of a fixed temperature schedule is well known [57,58]. The purpose of this approach is to maintain a certain level of randomness throughout the entire search process, which can lead to the discovery of better solutions that might be missed if the randomness decreased too quickly.
In Formula (4) below, $T$ is the temperature parameter, and $\rho$ is a random value drawn from a uniform distribution between 0 and 1. Parameter $T$ can be seen as a change of base of the exponential expression, as $e^{w/T} = (e^{1/T})^w$. Clearly, large $T$ makes the base $e^{1/T}$ close to $1$, implying that the weights $w_j(t)$ tend to be ignored; the priority rule thus becomes very similar to a uniform random choice. On the other hand, very small $T$ means that a larger weight $w_j(t)$ very likely leads to the operation of that job being selected. The factor $\rho$ adds some random noise to the weight calculation, whose influence decreases as the temperature parameter decreases. Specifically, we modified the selection rule of Formula (3) to include the temperature parameter, so that the probability of choosing job $j$ is proportional to

$\rho \cdot e^{w_j(t)/T}$.   (4)
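The flattening effect of the temperature can be seen in a few lines: the ratio of the deterministic parts of the scores of two competing weights shrinks toward 1 as $T$ grows. The function name is our illustration; the weights 0.8 and 0.2 are arbitrary example values.

```python
import math

def base_score(weight, T):
    """exp(w/T): the deterministic part of the temperature-modified score."""
    return math.exp(weight / T)

# Ratio of scores for weights 0.8 vs. 0.2: the advantage of the heavier
# job shrinks as T grows, approaching a uniform random choice.
for T in (0.1, 1.0, 10.0):
    ratio = base_score(0.8, T) / base_score(0.2, T)
    print(f"T={T:5.1f}  score ratio={ratio:10.2f}")
```

At $T = 0.1$ the heavier job dominates by a factor of roughly $e^6$, while at $T = 10$ the two scores are nearly equal, matching the discussion above.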
By introducing this level of randomness, we were able to explore the search space more effectively and found that our algorithm produced even better results than before. We validated the effectiveness of the modified algorithm by comparing its performance to existing methods and found that it is competitive both in terms of solution quality and computation time. By combining the strengths of the weighting system and the fixed-temperature randomization, we achieved significant improvements in the flow time of operations across all machines. Overall, the experimental results indicate that our heuristic method is a powerful and flexible approach for solving complex job shop scheduling problems.
4.3. Artificial Machine Status Nonoperational
We expand the heuristic further as follows. We allow the possibility that a machine is idle, i.e., does not start executing an operation, even if the buffer is not empty. In other words, the model allows that machines are sometimes nonoperational; we may say that the machine is in the artificial status nonoperational, in contrast to being nonoperational in the real production process due to technical reasons. The likelihood of this happening is determined by a specific factor that can be adjusted as a parameter of the heuristics. This parameter allows us to control the trade-off between the efficiency of the schedule and the workload of each machine. When the factor is set to a higher value, the likelihood of a machine not receiving an order is higher, which can lead to a more even distribution of workload among machines. On the other hand, when it is set to a lower value, machines are more likely to receive orders, which can lead to a faster completion time for individual orders but may result in some machines being overloaded. Our approach combines several heuristic techniques to create a scheduling algorithm that can handle complex job shop scheduling problems efficiently. By using a weighted buffer state, probabilistic event handling, and fixed-temperature randomization, we optimize the flow time of all orders while aiming at an even distribution of workload among machines.
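The artificial nonoperational status amounts to one extra coin flip per decision step. A minimal sketch follows; `F` stands in for the paper's factor (whose symbol is not fixed here), and the function name is our placeholder.

```python
import random

def machine_step(buffer, F, rng):
    """One decision step at a machine.

    With probability F the machine stays idle (artificial
    'nonoperational' status) even though work is waiting; otherwise
    the next operation is taken from the buffer."""
    if not buffer:
        return None               # nothing to do
    if rng.random() < F:
        return None               # artificially nonoperational this step
    return buffer.pop(0)          # hand the next operation to the machine

rng = random.Random(1)
buffer = ["op1", "op2"]
started = [machine_step(buffer, F=0.05, rng=rng) for _ in range(2)]
```

Setting `F = 0` recovers nondelay schedules, in which a machine never idles while work is waiting; this corresponds to the zero value of the factor used in the experiments below.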
To evaluate the effectiveness of our approach, a digital model was developed that incorporates the algorithm described above for job shop scheduling. We ran the model on various well-known benchmarks for the JSSP, in each case using 20,000 different scenarios. The temperature T and the nonoperational factor were kept constant within each scenario, but the seed for the random numbers was varied to test the performance of our algorithm under different conditions. This approach allowed us to compare our results with existing methods and validate the effectiveness of our approach.
5. Experimental Results
The goal of the exhaustive experiments was to investigate the effect of the nonoperational factor and the temperature T on the final result. By running a large number of scenarios, we aimed to determine good values of these parameters for each problem. The results presented in Table 1 highlight the best objective function values obtained for each combination of the two parameters, with the value of T at which the best result was first found shown in brackets. Bolded results indicate that the solution obtained equals the best known solution (BKS), while results in bold italic indicate our best result when it differs from the BKS. These experiments provide valuable insights into the performance of the algorithm under different parameter settings, which can inform future research and development efforts.
In this study, we present the results of experiments conducted on 17 well-known JSSP instances. For each instance, we ran a number of scenarios in which the two parameters were varied. The nonoperational factor runs over six values, i.e., 0%, 1%, 2%, 3.5%, 5%, and 10%, and the parameter T over 11 values, i.e., 0.10, 0.20, 0.40, 0.60, 0.80, 1.00, 2.00, 3.00, 5.00, 7.00, and 10.00. For each parameter combination, we conducted 20,000 experiments. Table 1 shows the best objective function values obtained for each combination.
Note that Table 1 only displays the best results obtained for each value of the nonoperational factor and the corresponding value of T. The full set of results for each scenario is available upon request.
Figure 2 presents the Gantt chart of the best-known solution for the LA01 instance. The solution achieves an optimum time of 666.
The results summarized in Table 1 indicate that very different values of the parameter T were used in the best runs; note that only the first such T is given. Regarding the nonoperational factor, there are several instances for which the best results were obtained with the factor equal to zero, while there are other instances where such nondelay schedules were inferior. This is expected, as for some instances it is known that the optimal or best-known solutions must contain delays. It may be interesting to note that for some instances (FT10, FT20, ABZ5, ABZ6, LA03), the best solutions were obtained with only one (small nonzero) value of the factor. It is not clear whether this was pure luck or whether there is some reason for such behavior. As the approach is based on a priority rule that only uses information on the jobs currently in the local queue, we can state our first observation.
Observation 1. The heuristics running online (without full information on the instance available a priori) is able to find near-optimal solutions (in some cases even optimal).
In addition to the main experiments, we also conducted advanced experiments to analyze the quality of the obtained solutions. Specifically, we evaluated the distribution of the results relative to the best known solution we obtained. In the first experiment, we performed multiple runs of the randomized simulations to see how good the solutions found may be, and we observed that the best solutions are close to the performance of the best offline algorithms. However, our algorithm is online and, more importantly, designed to run in real time, possibly assisting the production process. Therefore, besides the best solution obtained, it is important to know what can be expected in limited time, i.e., when only a small number of repetitions is performed. To see this, we measured, for each problem instance, the percentage of runs in which our approach found a solution equal to the best known solution we obtained, and how often the results are within 1%, 5%, and 10% of it. Below (see Table 2, Table 3 and Table 4), we provide the results for three JSSP instances: LA01, LA05, and FT10. We note that similar results were obtained for all 17 JSSP instances included in our experiments; they are available upon request.
For example, from Table 2 we read that, for several choices of the parameter T, around 4% of the runs provide a solution less than 10% worse than the best solution. In this case, since repetitions are independent trials, an estimate for the probability that such a solution is found in $k$ runs is, by the Bernoulli trials formula, equal to $1 - (1-p)^k$, where $p$ is the single-run success probability. Hence, for example, with $p = 0.04$, a hundred independent runs find such a solution with probability $1 - 0.96^{100} \approx 0.98$.
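The Bernoulli-trials estimate can be checked with a few lines of Python; the single-run success probability of 4% is taken from the discussion of Table 2, and the function name is ours.

```python
def hit_probability(p, k):
    """Probability that at least one of k independent runs finds a
    solution of the desired quality, each run succeeding with probability p."""
    return 1.0 - (1.0 - p) ** k

p = 0.04  # ~4% of runs land within 10% of the best solution
for k in (10, 100, 1000):
    print(k, round(hit_probability(p, k), 3))
```

The probability climbs quickly with the number of restarts, which is why even a modest restart budget is effective.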
Similar results are obtained for the second example; see Table 3. Even for the instance FT10 (Table 4), which appears to be hard for the heuristics, we observe that a small number of runs is likely to come close to the best that can be expected from these heuristics; there is a reasonably high probability of finding a near-optimal solution that is at most 5% worse than the best obtained in 20,000 runs. At first sight, these probabilities may not seem very impressive, but recalling the approximation bounds for online algorithms [31], the performance is indeed very good.
Observation 2. The heuristics frequently finds solutions close to the best obtained.
Finally, consider the graphs of four examples (Figure 3, Figure 4 and Figure 5), where the performance of the algorithm is compared for varying values of the parameter T. The instance LA01 has already been considered (Table 2), and we have seen that the algorithm found optimal and near-optimal solutions frequently. The second example, instance LA05, seems to be easier, and the speed of convergence is much faster; see Figure 4. The last two examples are shown in Figure 5. The instance ABZ6 is obviously harder; again, the solution quality nicely improves with the number of trials, but the best solutions obtained seem to converge to approximate solutions rather than the optimum. In other words, we can expect the solution to be an approximation with a certain gap that is not expected to vanish in reasonable time. In summary, the figures show that the speed of convergence depends on the parameter T, but, on the other hand, the differences are not large, as the curves all have similar shapes.
We conclude from the examples that the behavior of the algorithm is rather robust with respect to the parameter T, which implies that the algorithm can likely be run successfully using any parameter value from a large interval. Hence, fine tuning is expected to improve the performance, but the algorithm may already work very well for parameter values that are not optimized.
Observation 3. Performance of the heuristics is robust relative to the parameter T.
This means that in practical situations, we can start using the algorithm with some default values of the parameter(s) and later learn which values are better. In particular, an intelligent learning agent may later be added to the system, allowing the values of the parameters to be learned and adapted during the production process.
The last observation indicates that it may be of some (theoretical) interest to look more closely at the dependence of the behavior of the heuristics on the parameter T. For example, we can ask what the optimal temperature T is for a given instance, or for a set of instances. However, based on experience with similar questions, we expect that answering such a question is far from easy, recalling the open questions in the case of some simulated-annealing-based heuristics for the traveling salesman problem and the graph coloring problem [54,57,58].
In the last experiment, we compare the performance with several well-known heuristics for solving the JSSP based on standard dispatching rules, i.e., types of buffer priority queues at the machines. The widely used strategies for prioritizing jobs and managing machine schedules included in the comparison are described below:
FIFO (First In, First Out): Processes jobs in the order they arrive, without prioritizing based on any other criteria;
LLT (Longest Processing Time Left): Prioritizes jobs with the longest remaining processing time;
SLT (Shortest Processing Time Left): Prioritizes jobs with the shortest remaining processing time;
LPT (Longest Processing Time): Prioritizes jobs with the longest operation time first;
SPT (Shortest Processing Time): Prioritizes jobs with the shortest operation time;
LTT (Longest Total Time): Prioritizes jobs with the longest total time remaining across all operations;
STT (Shortest Total Time): Prioritizes jobs with the shortest total time remaining.
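The rules above differ only in the key by which they order the waiting operations, so they can all be sketched with one table of sort keys. The field names of the operation records are our assumptions for illustration.

```python
# Each rule is a sort key over operations waiting in a machine queue.
# An operation record holds its own processing time, the job's remaining
# processing time from this operation on, the job's total remaining time
# across all operations, and an arrival stamp for FIFO.
RULES = {
    "FIFO": lambda op: op["arrival"],
    "LLT":  lambda op: -op["remaining"],   # longest processing time left first
    "SLT":  lambda op: op["remaining"],    # shortest processing time left first
    "LPT":  lambda op: -op["proc_time"],   # longest operation first
    "SPT":  lambda op: op["proc_time"],    # shortest operation first
    "LTT":  lambda op: -op["total_left"],  # longest total time remaining first
    "STT":  lambda op: op["total_left"],   # shortest total time remaining first
}

def next_operation(queue, rule):
    """Pick the operation a deterministic dispatching rule would start next."""
    return min(queue, key=RULES[rule])

queue = [
    {"job": "A", "arrival": 0, "proc_time": 4, "remaining": 9, "total_left": 9},
    {"job": "B", "arrival": 1, "proc_time": 2, "remaining": 2, "total_left": 2},
]
print(next_operation(queue, "FIFO")["job"])  # A
print(next_operation(queue, "SPT")["job"])   # B
```

Unlike the randomized rule proposed in this paper, each of these rules is deterministic: restarting the simulation changes nothing, which is one reason the restarted randomized rule can outperform them.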
The heuristics were tested on a collection of well-known JSSP benchmark instances available on the website www.jobshoppuzzle.com [59], a platform designed for exploring and testing solutions for JSSP. This resource provides a practical setting to evaluate heuristics, offering diverse problem sets and a controlled environment for benchmarking.
Table 5 compares results across various JSSP instances. The columns in the table are defined as follows: the first two columns give the instance and its best-known solution (BKS), which serves as a benchmark for performance evaluation. The columns FIFO, LLT, SLT, LPT, LTT, and STT show results obtained using the traditional heuristics, followed by the best solution among the standard rules (in column 9). The last two columns summarize the results of the heuristics proposed here. More precisely, column BEST gives the best solution obtained, while column WORST gives the solution obtained by the least successful parameter combination of our heuristics. The results demonstrate that the proposed algorithm frequently matches the best-known solutions and often finds good near-optimal solutions. It also outperforms the traditional heuristics in nearly all cases, as indicated by the BEST results. Furthermore, the robustness of the proposed algorithm is evident: its WORST results are better than or equal to the best standard-rule results in all but two instances, highlighting its reliability and consistency across diverse problem sets. We summarize these comments as an observation.
Observation 4. The proposed heuristics provide better results than those obtained by the standard dispatching rules.
6. Discussion and Conclusions
In this paper, we designed and tested heuristics for solving the job shop scheduling problem that are distributed and online: all decisions are made at the local queues at the machine buffers, at the moment the next operation (job) is chosen by the priority rule, using only local information. To evaluate the quality of the solutions obtained, we used a set of benchmark instances of the offline job shop scheduling problem, although our heuristics can solve the online version of the problem. In principle, this can support the digital twin technology that is essential in modern smart manufacturing. Although our experiments were performed on a classical computer using a simulation tool, the approach allows direct application to an asynchronous parallel implementation in a real industrial environment. This is planned for future work.
Our approach is based on heuristics that govern the local queues at the machines, which in principle enables a distributed implementation; i.e., a digital twin can be maintained by local processors, which can result in high-speed, real-time operation. Furthermore, implementing a version that handles unpredicted events is rather straightforward. For example, a new job can enter the system at any time, and a job can be canceled (removed) even after it is partially processed. A failure of a machine only changes some basic information for the jobs that need to be processed by that machine, for example, replacing the machine with another one, or altering the processing times affected by the event. Obviously, no rescheduling is needed, as the new information only influences future decisions in the waiting queues. Variants of the algorithm have already been successfully used in versions tailored to industrial partners. As stated in the introduction, this case study focuses on the basic version of the JSSP because we hoped to make some basic observations and because popular benchmark datasets are available for it.
Finally, note that the experimental results are very promising. We wish to emphasize that we use benchmark instances for which the best-known or optimal solutions were found by the best offline algorithms, which in some cases are very time-consuming. Therefore, in this context, much more important than finding the best-known solutions is the observation that very good approximations were found in a short time by our heuristics. Also note that the excellent behavior of the algorithm is observed at various parameter values, implying that tuning the heuristics can be expected to be relatively easy, and could perhaps be performed on the fly by some self-adapting mechanism. This idea has yet to be tested, and we believe it may be a big challenge for future study.
In this work, we have put forward a novel type of dispatching rule that proved to be an improvement over the use of the standard rules. The standard rules are deterministic, while in our method the order of jobs to be processed is determined on the basis of certain probability distributions. While the average performance is comparable to the standard dispatching rules, we show that the best solutions obtained clearly outperform the standard deterministic dispatching rules. The results are compared to the best-known solutions on standard benchmark problems for the offline problem to obtain a firm impression of the quality of the solutions. A deeper comparison of our method with offline algorithms and metaheuristics is of limited interest, as the methods are very different, and it is not clear what reasonable criteria and methods for comparing their performance would be. Therefore, in this study, we did not use any advanced statistical tests or other methods for ranking the approaches.
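The randomized selection just described can be sketched as follows. This is only an illustration of the general idea of drawing the next job from a probability distribution rather than deterministically, not the paper's exact distribution; the exponential weighting, the `score` function, and the temperature parameter `beta` are our assumptions:

```python
import random
from math import exp

def randomized_dispatch(buffer, score, beta=1.0, rng=random):
    """Pick a job at random from a machine's buffer, favoring higher scores.

    Each job j receives weight exp(beta * score(j)) and is selected with
    probability proportional to that weight. With beta = 0 the choice is
    uniform; as beta grows, the rule approaches the deterministic
    argmax-score dispatching rule.
    """
    weights = [exp(beta * score(job)) for job in buffer]
    return rng.choices(buffer, weights=weights, k=1)[0]
```

A design note: keeping the deterministic rule as the limiting case of the randomized one makes the comparison in this section natural, since the standard rule is recovered as a special setting of the same mechanism.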
In conclusion, promising experimental results on the offline benchmark instances and observed robustness of the method made us believe that the following challenges may be tractable research tasks in the future:
Variations of JSSP. Application of the method to variations of the JSSP with different objectives, including multicriteria optimization, and taking into account various additional constraints that appear in specific applications. Due to its robustness, the method is likely to be competitive on variations of the basic problem.
Distributed parallel implementation. As the computation is performed at the local queues, there appears to be no severe restriction on the size of the problems that can be solved. The approach can be implemented with full parallelism when local processors are used at each machine, opening the possibility of handling very large instances of the problem.
Machine learning for parameter fine-tuning. Fine-tuning of the parameters may be advanced by machine learning techniques, and could potentially be adapted dynamically on the fly, based on the features of recently processed jobs. Namely, it is likely that the optimal parameter values for one dataset of instances are not optimal for another. If this hypothesis holds, it should indeed be profitable to add a machine learning tool that learns from past and recent experience.