Article

Modified Harris Hawks Optimizer for Solving Machine Scheduling Problems

by
Hamza Jouhari
1,
Deming Lei
1,*,
Mohammed A. A. Al-qaness
2,
Mohamed Abd Elaziz
3,
Robertas Damaševičius
4,5,
Marcin Korytkowski
6 and
Ahmed A. Ewees
7,8
1
School of Automation, Wuhan University of Technology, Wuhan 430070, China
2
State Key Laboratory for Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
3
Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
4
Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
5
Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
6
Department of Intelligent Computer Systems, Czestochowa University of Technology, 42-200 Częstochowa, Poland
7
Department of e-Systems, University of Bisha, Bisha 61922, Saudi Arabia
8
Department of Computer, Damietta University, Damietta 34517, Egypt
*
Author to whom correspondence should be addressed.
Symmetry 2020, 12(9), 1460; https://doi.org/10.3390/sym12091460
Submission received: 21 July 2020 / Revised: 11 August 2020 / Accepted: 17 August 2020 / Published: 5 September 2020

Abstract

Scheduling can be described as a decision-making process. It is applied in various applications, such as manufacturing, airports, and information processing systems. Moreover, symmetry is common in certain types of scheduling problems. Parallel machine scheduling problems (PMSPs) come in three types: uniform, identical, and unrelated parallel machine scheduling problems (UPMSPs). Recently, UPMSPs with setup times have attracted more attention due to their applications in different industries and services. In this study, we present an efficient method to address UPMSPs using a modified Harris hawks optimizer (HHO). The new method, called MHHO, uses the salp swarm algorithm (SSA) as a local search for HHO in order to enhance its performance and to decrease its computation time. To test the performance of MHHO, several experiments are implemented using small and large problem instances. Moreover, the proposed method is compared to several state-of-the-art approaches used for UPMSPs. MHHO shows better performance in both small and large problem cases.

1. Introduction

Parallel machine scheduling plays an important role not only in manufacturing (e.g., the electronic and chemical industries) but also in many other scheduling processes, including transport scheduling (e.g., train schedules), airport management (e.g., landing scheduling), and hospital management (e.g., surgery scheduling). Symmetry is also present in different scheduling problems [1,2]. Parallel machine scheduling can be considered a complex artificial system characterized by randomness, multiple constraints, complexity, and multiple objectives. Job scheduling problems are NP-hard due to their complexity, randomness, multiple objectives, and many variables [3]. Machine job scheduling problems are therefore very important: solving them well increases manufacturing productivity and improves service operations. Moreover, they allocate limited resources to jobs subject to operational constraints (i.e., to obtain the best usage of available resources) [4,5]. Such problems are too complex to solve with an exact algorithm in reasonable time. Therefore, many studies have applied various heuristic algorithms to propose solutions for machine scheduling problems [6,7,8].
In traditional parallel machine scheduling problems (PMSPs), n independent jobs are to be processed on m machines. Each job must be executed on one of the m machines within a fixed time t. There are three well-known types of PMSPs: identical, uniform, and unrelated machines. Identical machines process each job in the same time on every machine. Uniform machines process jobs at different speeds, but each machine's speed is constant across jobs. In UPMSPs, each machine processes different jobs at different rates, so the processing time depends on both the job and the machine [9].
UPMSPs are employed in various applications, such as manufacturing industries, including food processing plants and car factories [10], semiconductor [11,12], tobacco [13], textile [14], petroleum [15], and tire [16] manufacturing. Additionally, they have been applied in multiprocessor computing [17,18,19], multithreading [20], hospital operating rooms [21], human resources management [22], mail facilities [23], printing [24], pharmacy automation [25], vehicular networks [26], and heterogeneous systems that include GPUs and CPUs [27].
In previous literature, numerous meta-heuristic methods were utilized to address scheduling problems in unrelated parallel machines such as ant colony optimization (ACO) [28,29], genetic algorithm (GA) [15], simulated annealing (SA) [30], tabu search (TS) algorithm [31], firefly algorithm (FA) [32], particle swarm optimization (PSO) [33], artificial bee colony (ABC) [34], or their combinations, such as hybridization of GA and SA [35].
In this paper, we design a new model to solve unrelated parallel machine scheduling problems based on the Harris hawks optimizer (HHO) improved by the salp swarm algorithm (SSA). HHO is a nature-inspired optimizer presented by Heidari et al. [36]. It is inspired by the cooperative behavior of Harris' hawks hunting escaping rabbits. Heidari et al. [36] demonstrated that HHO outperforms other nature-inspired techniques on 29 engineering problems. It has been used in various applications, such as feature selection [37], engineering problems [38,39,40,41,42,43], satellite image processing [44], prediction models [45], and scheduling tasks in cloud computing [46]. SSA is also a nature-inspired method that simulates the behavior of the Salpidae family. It has been employed in different fields, such as node localization in wireless sensor networks [47], feature selection [48,49,50], prediction models [51], cloud computing [52], and image segmentation [53,54]. These applications demonstrate the high ability of both HHO and SSA to find optimal solutions for different optimization problems; in most cases, they provided better results than the comparative algorithms. However, each of them still has limitations, which motivated us to combine the advantages of both.
The modified version of HHO is called MHHO; in it, the SSA is used to enhance the performance of HHO when solving different UPMSPs. SSA works as a local search for HHO to improve its performance and decrease its computational time.
This paper provides the following contributions:
  • The improved HHO with SSA is adopted to solve the UPMSP for small and large numbers of jobs on different machines with setup times.
  • The results of numerical experiments are presented to validate the proposed method under different conditions.
  • We compare the MHHO method with known methods in order to demonstrate the superiority of the MHHO over existing methods.
The rest of this paper is organized as follows. Section 2 presents a review of previous works on UPMSPs. Section 3 presents UPMSP preliminaries. Next, the proposed algorithm is described in Section 4, while in Section 5, experiments and results are given to assess its performance. Finally, we conclude the paper and outline future research in Section 6.

2. Literature Review

In this section, previously published literature on UPMSPs is reviewed. We refer the reader to the comprehensive surveys [6,7,8] for more on earlier works in this research direction. Arnaout et al. [28] applied the ACO algorithm to address UPMSPs; the proposed algorithm outperforms the TS algorithm and other meta-heuristic methods. In [17], another metaheuristic based on simple iterated greedy local search was used to solve UPMSPs and produced efficient solutions. In [55], the authors improved the GA by integrating dominant propensities to solve UPMSPs; the proposed hybrid GADP algorithm reached the optimal solution and was compared with previous heuristics. In [56], another algorithm, called GRASP, is presented to deal with unrelated parallel machine scheduling problems; GRASP was compared with existing methods and shown to be superior to existing UPMSP methods. Rodriguez et al. [57] proposed an effective algorithm, namely iterated greedy (IG), to solve large-scale UPMSPs. They compared their method with several previous UPMSP methods and showed that IG outperforms them. In [29], a two-stage ACO scheme is presented to solve UPMSPs and minimize the makespan on unrelated parallel machines with sequence-dependent setup times for 120 jobs and 10 machines.
Bitar et al. [58] presented a memetic algorithm to deal with large-scale UPMSPs in semiconductor manufacturing; the proposed method aims to minimize the flow time and maximize the number of processed products. Memetic algorithms have also been employed to solve various other scheduling problems [59,60,61,62,63]. In [31], the minimization of the makespan for j jobs and m machines in UPMSPs is considered; a hybrid of truncated branch-and-bound and TS is proposed and compared to metaheuristic algorithms from existing approaches. The authors of [64] addressed UPMSPs by employing a power-down scheme to minimize total tardiness and total energy consumption; a mathematical model is formulated, and then an ant colony optimization algorithm is proposed with the Apparent Tardiness Cost (ATC) heuristic rule.
Pacheco et al. [65] presented a metaheuristic algorithm, called Variable Neighborhood Search, to solve UPMSPs. They addressed the problem of job scheduling on one machine and considered two aspects, sequence-dependent setup times and preventive maintenance. Li et al. [66] suggested a polynomial-time approximation algorithm to solve UPMSPs of green manufacturing companies by including the total energy cost. In [67], the authors presented a TS algorithm to solve parallel machine scheduling problems; they developed two mathematical models for scheduling operations that are processed together on a number of parallel machines, with the goal of minimizing the makespan. In [68], an efficient mathematical model and a number of heuristic algorithms are presented to minimize the energy consumption and the tardiness of unrelated parallel machine scheduling problems.
Fanjul-Peyro et al. [69] addressed UPMSPs and presented two models, a mathematical program and a mixed integer linear program; the proposed models reached optimal solutions for UPMSPs with 400 jobs and four machines. Wang et al. [70] addressed large-scale UPMSPs and presented a solution model applying TS components embedded with a multiple-jump strategy; this model solves non-preemptive scheduling problems and reaches locally optimal solutions. In [33], the authors presented a hybrid GA and PSO method, called HPSOGA, to address UPMSPs. They used the Taguchi method to determine and calibrate the optimal parameters; the proposed method outperformed previous solution methods. Ding et al. [71] combined mixed integer linear programming (MILP) with a column generation heuristic to address UPMSPs under time-of-use pricing; the proposed scheme minimizes the total electricity cost under a bounded makespan restriction.
Wu et al. [72] presented an effective method to reduce both the energy consumption and the makespan of unrelated parallel machine scheduling problems. They proposed a memetic differential evolution algorithm and used a meta-Lamarckian learning strategy to enhance it; compared to two existing methods, it demonstrated better performance. Bektur and Tugba [72] considered UPMSPs with a common server. They presented a MILP scheme to solve UPMSPs and, moreover, proposed both SA and TS to address the problem. Arroyo et al. [73] proposed an IG algorithm and a MILP model for unrelated parallel machine scheduling. The proposed solution model was tested on benchmarks of different sizes and compared to existing models, such as ACO, SA, and discrete differential evolution; the IG algorithm outperformed the other three methods on all test benchmarks.

3. Preliminaries

Here, we present a brief definition of the problem, the Harris hawks optimizer (HHO), and the salp swarm algorithm (SSA).

3.1. Problem Definition

Following [74], the problem of the UPMSP is mathematically formulated as a Mixed Integer Programming (MIP) model, whose objective is:
$\min C_{max}$ (1)
where $C_{max}$ indicates the maximum completion time (makespan). Equation (1) is the fitness function of the UPMSP, which requires minimizing the value of $C_{max}$. More details can be found in [74,75].
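As a minimal illustration of the objective in Equation (1), assuming jobs have already been assigned to machines and ignoring setup times for brevity (they are added in Section 4), the makespan is simply the largest machine load. The function and variable names below are illustrative, not from [74]:

```python
# Hedged sketch of Eq. (1): C_max as the largest machine load, setups ignored.

def c_max(assignment, proc):
    """assignment[j] = machine of job j; proc[j][k] = time of job j on machine k."""
    n_machines = max(assignment) + 1
    loads = [0] * n_machines
    for j, k in enumerate(assignment):
        loads[k] += proc[j][k]      # accumulate each machine's total processing time
    return max(loads)               # the makespan is the latest completion time

# Two unrelated machines, three jobs with machine-dependent times.
proc = [[3, 5], [4, 2], [6, 6]]
print(c_max([0, 1, 0], proc))  # machine 0 processes jobs 0 and 2: 3 + 6 = 9
```

Because the machines are unrelated, the same job contributes a different amount to the load depending on where it is placed, which is exactly what makes the assignment hard to optimize.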

3.2. Harris Hawks Optimization

The HHO is a meta-heuristic technique proposed by [36]. It mimics the natural behavior of Harris' hawks while searching for and catching rabbits. HHO performs the optimization process in three stages: exploration, transition from exploration to exploitation, and exploitation. More details can be found in [36].
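The three stages can be sketched roughly as follows. This is a heavily simplified illustration, not the full algorithm of [36]: the escaping energy E drives the transition stage and selects between exploration (|E| ≥ 1) and exploitation (|E| < 1), while the four exploitation strategies of the original method are collapsed here into a single greedy besiege move around the best solution.

```python
# Simplified sketch of one HHO iteration; see [36] for the complete update rules.
import random

def hho_step(pop, best, t, t_max, lb, ub, fitness):
    e0 = random.uniform(-1, 1)
    energy = 2 * e0 * (1 - t / t_max)        # escaping energy: exploration -> exploitation
    new_pop = []
    for x in pop:
        if abs(energy) >= 1:                 # exploration: perch relative to a random hawk
            r = random.choice(pop)
            cand = [ri - random.random() * abs(ri - 2 * random.random() * xi)
                    for ri, xi in zip(r, x)]
        else:                                # exploitation: besiege the prey (best solution)
            cand = [bi - energy * abs(bi - xi) for bi, xi in zip(best, x)]
        cand = [min(max(c, lb), ub) for c in cand]          # keep within search bounds
        new_pop.append(cand if fitness(cand) < fitness(x) else x)  # greedy acceptance
    return new_pop
```

For the scheduling problem, the continuous positions would be rounded to machine indices before evaluation.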

3.3. Salp Swarm Algorithm

The SSA is an optimization method that mimics the natural behavior of salps. The salp chain was mathematically modeled to solve different kinds of global optimization problems [76]. SSA begins by splitting the population into two categories: the leader, at the front of the salp chain, and the followers, the remaining salps. More details can be found in [76].
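A compact sketch of these two roles, following the leader and follower update equations given for SSA in [76] (variable names are ours): the leader moves around the food source (the best solution found so far), and each follower moves to the midpoint between itself and the salp in front of it.

```python
# Sketch of one SSA iteration: leader moves around the food source, followers chain behind.
import math
import random

def ssa_step(pop, food, t, t_max, lb, ub):
    c1 = 2 * math.exp(-(4 * t / t_max) ** 2)       # balances exploration and exploitation
    new = []
    for i, x in enumerate(pop):
        if i == 0:                                  # leader: perturb around the food source
            cand = []
            for fj in food:
                c2, c3 = random.random(), random.random()
                step = c1 * ((ub - lb) * c2 + lb)
                cand.append(fj + step if c3 >= 0.5 else fj - step)
        else:                                       # follower: midpoint with its predecessor
            cand = [(xj + pj) / 2 for xj, pj in zip(x, new[i - 1])]
        new.append([min(max(c, lb), ub) for c in cand])
    return new
```

The chain structure means followers change gradually, which is what makes SSA useful as a gentle local search inside MHHO.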

4. Proposed Method

The proposed method, called Modified HHO (MHHO), to solve UPMSPs with setup times is presented in this section. It applies the SSA as a local search for the HHO algorithm, as shown in Figure 1.
The MHHO algorithm begins by creating random integer vectors to represent the initial UPMSP solutions (X); the population includes a set of N individuals, each encoding n jobs to be processed on m machines. The initial makespan value is calculated for each individual by the fitness function, and the best solution is selected based on the smallest fitness value.
Subsequently, each solution is updated using either the HHO or the SSA, based on a probability computed from the fitness value of each solution. These steps are repeated until the stop condition is met; here, it is set to the maximum number of fitness evaluations, which is 100,000.
The initial and updating solutions are described as follows.

4.1. Solution Representations

The UPMSP solution has $N_J$ dimensions with values in the range $[1, m]$ (i.e., $X = [x_1, x_2, \ldots, x_n]$). For example, if there are three machines and five jobs, one individual can be represented as $P = [2, 1, 2, 1, 3]$, which means the first and third jobs will be scheduled on machine 2, the second and fourth jobs on machine 1, and the fifth job on machine 3.
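This representation can be decoded into per-machine job lists as follows (a minimal sketch; machines and jobs are numbered from 1, as in the paper's example):

```python
# Decode an assignment vector into per-machine job lists.

def decode(p, n_machines):
    """p[j-1] = machine assigned to job j; returns {machine: [jobs...]}."""
    schedule = {k: [] for k in range(1, n_machines + 1)}
    for job, machine in enumerate(p, start=1):
        schedule[machine].append(job)
    return schedule

# The paper's example P = [2, 1, 2, 1, 3] with three machines:
print(decode([2, 1, 2, 1, 3], 3))  # {1: [2, 4], 2: [1, 3], 3: [5]}
```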
Thereafter, the sequence of jobs for each machine is determined. For example, given a matrix T for n jobs and m machines, $T_{ij}$ indicates the sequence-dependent setup time needed to set up the j-th job after the i-th job on the m-th machine.
$T = \begin{bmatrix} 5 & 2 & 6 & 4 & 13 \\ 7 & 1 & 10 & 11 & 15 \\ 3 & 8 & 9 & 12 & 14 \end{bmatrix}$
Subsequently, for machine 1, the sequence-dependent setup times of the five jobs are represented by the first row (5, 2, 6, 4, and 13); the other rows represent the jobs for the remaining machines. Zero values in a machine's row indicate that no jobs will be processed by this machine. In this study, the makespan value is determined using the adjusted processing times matrix $[SP]_k$ [30], which linearly combines P and S. It is formulated as in the following equation:
$SP = S + P$ (2)
where S is $n \times m$ and represents the processing times, and P is $n \times n$ and represents the setup times for each machine. Therefore, this matrix represents the initial solution for the optimization process.
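Since the exact layout of the combined $[SP]_k$ matrix is only partially specified above, the following is an illustrative sketch, not the authors' implementation: for one machine, the completion time of a job sequence sums each job's processing time plus the sequence-dependent setup incurred before it.

```python
# Illustrative evaluation of one machine's sequence with sequence-dependent setups.

def machine_time(sequence, proc, setup):
    """sequence: ordered job ids; proc[j]: processing time of job j on this machine;
    setup[i][j]: setup time needed before job j when it follows job i."""
    total, prev = 0, None
    for j in sequence:
        if prev is not None:
            total += setup[prev][j]   # setup depends on the previously processed job
        total += proc[j]
        prev = j
    return total

proc = {0: 5, 1: 3, 2: 4}
setup = {0: {1: 2, 2: 1}, 1: {0: 2, 2: 3}, 2: {0: 1, 1: 2}}
print(machine_time([0, 2, 1], proc, setup))  # 5 + 1 + 4 + 2 + 3 = 15
```

The makespan of a full schedule is then the maximum of these per-machine times.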

4.2. Updating Stage

The MHHO evaluates the makespan for each solution ($X_i$, $i = 1, 2, \ldots, N$) using the fitness value of each one, as in Equation (1). The smallest makespan $C_{max}$ determines the best solution. After that, the MHHO updates each solution by switching between the HHO and SSA algorithms based on a computed probability ($prob_i$), as in Equation (3).
$prob_i = \frac{fit_i}{\sum_{k=1}^{N} fit_k}$, (3)
where $fit_i$ is the current fitness. If $prob_i > 0.5$, the individual $X_i$ is updated by HHO; otherwise, the SSA is used. These steps are repeated until the stop condition is met. Finally, the best schedule, based on the minimum $C_{max}$ value, is output.
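The switching rule of Equation (3) can be sketched as follows; `hho_update` and `ssa_update` here are placeholders standing in for the actual update operators, not functions from the paper:

```python
# Sketch of the Eq. (3) operator switch: fitness-proportional probability,
# HHO when prob_i > 0.5, SSA otherwise.

def select_operator(fitnesses, i, hho_update, ssa_update):
    prob_i = fitnesses[i] / sum(fitnesses)
    return hho_update if prob_i > 0.5 else ssa_update

fits = [9.0, 2.0, 1.0]
print(select_operator(fits, 0, "HHO", "SSA"))  # 9/12 = 0.75 > 0.5, so "HHO"
print(select_operator(fits, 1, "HHO", "SSA"))  # 2/12 < 0.5, so "SSA"
```

Since the problem is a minimization, solutions with relatively large fitness (poor makespan) get the stronger global HHO update, while better solutions are refined locally by SSA.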

5. Experiments and Results

A set of the UPMSP benchmark dataset is used to assess the performance of the MHHO. In addition, its results are compared with the traditional HHO and SSA methods, and state-of-the-art methods with different numbers of machines and job sequences.

5.1. Dataset Description

In general, this UPMSP dataset has a number of problem instances, each with its own processing and setup times. These instances are randomly constructed according to the discrete uniform distribution U[50, 100].
In this study, the number of machines is set to 2, 4, 6, 8, 10, and 12, while the numbers of jobs are 6, 7, 8, 9, 10, 11, 20, 40, 60, 80, 100, and 120. The jobs are classified into two groups: the small group contains 6, 7, 8, 9, 10, and 11 jobs, and the large group contains 20, 40, 60, 80, 100, and 120 jobs. The dataset used in this paper is publicly available at http://www.schedulingresearch.com.
For each problem, the algorithm is performed 30 times to enable fair comparison and further statistical analysis of the results.
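As a rough illustration of how such instances can be constructed (an assumption about the benchmark's structure, not its actual generator), processing and setup times can be drawn from the discrete uniform distribution U[50, 100] stated above:

```python
# Hedged sketch of instance generation with times drawn from U[50, 100].
import random

def random_instance(n_jobs, n_machines, seed=None):
    """Returns proc[j][k] (processing time of job j on machine k) and
    setup[k][i][j] (setup time of job j after job i on machine k)."""
    rng = random.Random(seed)
    proc = [[rng.randint(50, 100) for _ in range(n_machines)]
            for _ in range(n_jobs)]
    setup = [[[rng.randint(50, 100) for _ in range(n_jobs)]
              for _ in range(n_jobs)]
             for _ in range(n_machines)]
    return proc, setup

proc, setup = random_instance(6, 2, seed=42)  # a small-group instance: 6 jobs, 2 machines
```

Seeding the generator, as above, makes instances reproducible across the 30 repeated runs.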

5.2. Performance Measure

In this section, the metrics used to evaluate the performance of the algorithms are discussed. These metrics, the relative percentage deviation and the improvement ratio, are defined as follows:
  • Relative percentage deviation ($\rho$): defined in Equation (4) to compare the results of all methods.
    $\rho = \frac{C_{max} - LB}{LB} \times 100$ (4)
    where $C_{max}$ represents the average makespan and $LB$ is the lower bound, which represents the average processing time for each job; it is defined in Equation (5).
    $LB = \max(LB_{min}, LB_{max})$ (5)
    where $LB_{min}$ and $LB_{max}$ are the minimum and maximum processing times, respectively, for each job among the machines, and they are defined as:
    $LB_{min} = \frac{1}{N_m} \sum_{j=1}^{N_J} \min_{k=1,\ldots,N_m;\; i=1,\ldots,N_J} [PR_{i,j,k}]$, (6)
    $LB_{max} = \max_{j=1,\ldots,n} \min_{k=1,\ldots,m;\; i=1,\ldots,n} [PR_{i,j,k}]$ (7)
    where $PR$ is the adjusted processing time, and $N_m$ and $N_J$ are the numbers of machines and jobs, respectively.
  • Improvement ratio ($\delta$): measures the enhancement of the proposed algorithm against the others, and it is defined as:
    $\delta = \frac{C_{max}(method) - C_{max}(MHHO)}{C_{max}(MHHO)} \times 100$ (8)
    where $C_{max}(method)$ is the average of the makespan values of the compared method.
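The two measures above transcribe directly into code; the function names are ours:

```python
# Direct transcription of the relative percentage deviation and improvement ratio.

def rpd(c_max_avg, lb):
    """Relative percentage deviation of an average makespan from the lower bound LB."""
    return (c_max_avg - lb) / lb * 100

def improvement_ratio(c_max_method, c_max_mhho):
    """Improvement of MHHO over a compared method, in percent."""
    return (c_max_method - c_max_mhho) / c_max_mhho * 100

print(rpd(110.0, 100.0))                # 10.0: 10% above the lower bound
print(improvement_ratio(120.0, 100.0))  # 20.0: MHHO improves on the method by 20%
```

Note that a positive $\delta$ means the compared method's average makespan is worse (larger) than MHHO's.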

5.3. Parameter Settings

The proposed MHHO is compared with the traditional HHO and SSA and Table 1 shows the parameters for each algorithm. These parameters were tuned and selected experimentally.
The experiments were performed using MATLAB R2014b (MathWorks Inc., Natick, MA, USA) on a computer with a Core i7 CPU and 8 GB RAM running Windows 10 (64 bit). The average $C_{max}$ over 15 different instances was computed for each job count in each problem. The common parameters, such as the maximum number of fitness evaluations (set to 100,000), are the same for all algorithms, whereas the population size depends on the problem size.

5.4. Comparison with Basic HHO and SSA

In this experiment, we compare the performance of the proposed MHHO with the traditional SSA and HHO algorithms, as given in Table 2. The table shows the high performance of the proposed MHHO according to the different metrics. For example, the proposed MHHO has the smallest makespan among all of the tested instances (in both small and large datasets). Moreover, MHHO improves on SSA and HHO by nearly 36.890% and 21.611%, respectively, averaged over all instances. Figure 2 depicts the average improvement for each number of machines and jobs. Note that a high improvement of MHHO over SSA is achieved especially at 2, 10, and 12 machines, and at 40 and 100 jobs. Comparing the results of HHO and MHHO, a large difference in the improvement is achieved at 2 machines, while, when the number of jobs becomes high, as with 100 and 120 jobs, the proposed MHHO outperforms HHO with high improvements.
In terms of $\sigma_{C_{max}}$, the proposed MHHO has the smallest value among all tested instances, followed by the SSA and then the HHO. This indicates the high stability of the proposed MHHO algorithm.
In terms of CPU time, the traditional SSA algorithm is the fastest, followed by the MHHO; HHO takes a long time to reach the stop condition. The main reason MHHO takes this time is that the operators of the traditional HHO need more CPU time.
Figure 3 depicts the CPU time required by the proposed MHHO and the other two methods for each tested number of machines. From these results, it can be observed that HHO consumes more time than the other two methods, while the fastest algorithm is SSA, followed by MHHO. The main reason for the longer time of MHHO is the drawbacks of the traditional HHO, which depends on more operators.

5.5. Results of δ over Small-Size Problems

In this experiment, we compare the $\delta$ values of MHHO with those of other methods on small-size problems, as in Table 3. These methods include the traditional HHO and SSA, SA [75], T9 [74], and T8 [74]. From these results, it can be observed that MHHO outperforms the other methods, especially SA, T9, and T8. Comparing the proposed MHHO with SSA, MHHO performs better on all the small-size problems except at machine 2 and job 9, where the improvement is small. Additionally, comparing the proposed MHHO with HHO, MHHO still provides better results across all the tested problems.

5.6. Comparison with Other MH Methods Using Large-Size Problems

We compare the performance of MHHO with a set of methods, including invasive weeds optimization (IWO) [32], Artificial Bee Colony (ABC) [77], metaheuristic for randomized priority search (Meta-RaPS) [32], improved firefly (IFA) [32], Hybrid ABC (HABC) [77], Tabu Search (TS) [32], RSA [78], partition heuristic (PH) [32], Genetic Algorithm (GA) [32], Ant Colony Optimization (ACO) [32], extended ACO (ACOII) [32], GADP [55], basic Simulated Annealing (SA) [78], FA [32], SOS with the longest processing time first (LPT) rule (SOS-LPT) [79], symbiotic organisms search (SOS) [79], HSOSSA [30], ESA [30], and ESOS [30].
The comparison is performed based on the value of ρ and δ as given in Table 4 and Table 5. Table 6 lists the parameter settings for these algorithms.
From Table 4, one can observe that MHHO has the best $\rho$ values in 28 of 36 instances (shown in boldface), followed by HSOSSA, which achieves the best $\rho$ value in eight instances.
Moreover, Table 5 shows the performance of the algorithms in terms of $\delta$. MHHO achieves high improvements over the other approaches in most of the tested instances, with some exceptions. For example, at machine 2 and job 20, HSOSSA provides better results than MHHO; the same is observed at machine 4 with jobs 20 and 40. At machine 8 and job 40, HSOSSA and ESOS have better values than MHHO. FA, ESA, and ESOS provide better results than MHHO at machine 10 and job 20; in addition, at machine 10 and job 40, HSOSSA and ESOS are better than MHHO. Finally, at machine 20 and job 20, the performance of MHHO is below that of FA, SOS-LPT, and HSOSSA.
Several state-of-the-art methods have been developed but used different measures to assess their performance. Accordingly, we compared our proposed MHHO with those methods at machines 2, 6, and 12 with jobs 20, 40, 60, and 80; the results, given in Table 7, use the minimum, average, and maximum of $C_{max}$. From this table, it can be concluded that the proposed MHHO performs better than the others in terms of the average (Avg.) over all instances, except at machine 12 and job 20, where ESOS is better. In addition, in terms of the best $C_{max}$ value, the proposed MHHO provides the smallest value over all instances and jobs, followed by FA. Moreover, the $\delta$ value of the proposed MHHO is better than that of the others.

6. Conclusions

We presented a new hybrid meta-heuristic approach to address unrelated parallel machine scheduling problems (UPMSPs) with sequence-dependent setup times. The proposed MHHO approach is a hybrid of the Harris hawks optimizer (HHO) and the salp swarm algorithm (SSA). MHHO enhances the performance of HHO using SSA and also avoids premature convergence: the SSA is applied as a local operator to help HHO find optimal solutions and avoid local optima. Moreover, we considered the minimization of the makespan as the performance criterion of the proposed approach. The proposed MHHO was evaluated on a set of instances containing small and large job sizes over six machine counts in a series of experiments. Furthermore, to confirm the superiority of MHHO, we compared it with the original SSA, HHO, and other well-known methods that have been applied to solve UPMSPs. The comparison results showed that MHHO outperforms both SSA and HHO, as well as the other methods, and that it converges better to the optimal solution.
For future work, the MHHO could be employed in various optimization applications, such as data clustering, cloud computing scheduling, image processing, forecasting applications, and others.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ostrowski, J.; Anjos, M.F.; Vannelli, A. Symmetry in Scheduling Problems; Cahier du GERAD G-2010-69; GERAD: Montreal, QC, Canada, 2010. [Google Scholar]
  2. Trindade, R.S.; de Araújo, O.C.B.; Fampa, M.H.C.; Müller, F.M. Modelling and symmetry breaking in scheduling problems on batch processing machines. Int. J. Prod. Res. 2018, 56, 7031–7048. [Google Scholar] [CrossRef]
  3. Zhou, E.; Zhu, J.; Deng, L. Flexible job-shop scheduling based on genetic algorithm and simulation validation. In MATEC Web of Conferences; EDP Sciences: Zhengzhou, China, 2017; Volume 100, p. 02047. [Google Scholar]
  4. Afzalirad, M.; Rezaeian, J. Resource-constrained unrelated parallel machine scheduling problem with sequence dependent setup times, precedence constraints and machine eligibility restrictions. Comput. Ind. Eng. 2016, 98, 40–52. [Google Scholar] [CrossRef]
  5. Li, J.; Qi, Y.; Wei, W.; Lin, J.; Wozniak, M.; Damasevicius, R. dCCPI-predictor: A state-aware approach for effectively predicting cross-core performance interference. Future Gener. Comput. Syst. 2020, 105, 184–195. [Google Scholar] [CrossRef]
  6. Allahverdi, A.; Ng, C.; Cheng, T.E.; Kovalyov, M.Y. A survey of scheduling problems with setup times or costs. Eur. J. Oper. Res. 2008, 187, 985–1032. [Google Scholar] [CrossRef] [Green Version]
  7. Allahverdi, A. The third comprehensive survey on scheduling problems with setup times/costs. Eur. J. Oper. Res. 2015, 246, 345–378. [Google Scholar] [CrossRef]
  8. Allahverdi, A. A survey of scheduling problems with no-wait in process. Eur. J. Oper. Res. 2016, 255, 665–686. [Google Scholar] [CrossRef]
  9. Lin, Y.K.; Pfund, M.E.; Fowler, J.W. Heuristics for minimizing regular performance measures in unrelated parallel machine scheduling problems. Comput. Oper. Res. 2011, 38, 901–916. [Google Scholar] [CrossRef]
  10. Fanjul-Peyro, L.; Perea, F.; Ruiz, R. Models and matheuristics for the unrelated parallel machine scheduling problem with additional resources. Eur. J. Oper. Res. 2017, 260, 482–493. [Google Scholar] [CrossRef]
  11. Şen, H.; Bülbül, K. A strong preemptive relaxation for weighted tardiness and earliness/tardiness problems on unrelated parallel machines. INFORMS J. Comput. 2015, 27, 135–150. [Google Scholar] [CrossRef] [Green Version]
  12. Detienne, B.; Dauzere-Péres, S.; Yugma, C. Scheduling jobs on parallel machines to minimize a regular step total cost function. J. Sched. 2011, 14, 523–538. [Google Scholar] [CrossRef]
  13. Zuo, X.; Tan, W.; Lin, H. Cigarette Production Scheduling by Combining Workflow Model and Immune Algorithm. IEEE Trans. Autom. Sci. Eng. 2014, 11, 251–264. [Google Scholar] [CrossRef]
  14. Silva, C.; Magalhaes, J.M. Heuristic lot size scheduling on unrelated parallel machines with applications in the textile industry. Comput. Ind. Eng. 2006, 50, 76–89. [Google Scholar] [CrossRef] [Green Version]
  15. Sheremetov, L.; Martínez-Muñoz, J.; Chi-Chim, M. Two-stage genetic algorithm for parallel machines scheduling problem: Cyclic steam stimulation of high viscosity oil reservoirs. Appl. Soft Comput. 2018, 64, 317–330. [Google Scholar] [CrossRef]
  16. Yu, S.; Yang, D.; Zhu, K.; Lu, S. Scheduling method for tire building based on heuristic algorithm. In Proceedings of the 2013 25th Chinese Control and Decision Conference (CCDC), Guiyang, China, 25–27 May 2013. [Google Scholar] [CrossRef]
  17. Fanjul-Peyro, L.; Ruiz, R. Iterated greedy local search methods for unrelated parallel machine scheduling. Eur. J. Oper. Res. 2010, 207, 55–69. [Google Scholar] [CrossRef] [Green Version]
  18. Alfa, A.A.; Misra, S.; Ogwueleka, F.N.; Ahuja, R.; Adewumi, A.; Damasevicius, R.; Maskeliunas, R. Implications of Job Loading and Scheduling Structures on Machine Memory Effectiveness. In Lecture Notes in Electrical Engineering; Springer: Singapore, 2019; pp. 383–393. [Google Scholar] [CrossRef]
  19. Alfa, A.A.; Misra, S.; Ogwueleka, F.N.; Ahuja, R.; Adewumi, A.; Damasevicius, R.; Maskeliunas, R. An Effective Instruction Execution and Processing Model in Multiuser Machine Environment. In Advances in Intelligent Systems and Computing; Springer: Singapore, 2020; pp. 805–817. [Google Scholar] [CrossRef]
  20. Połap, D.; Kęsik, K.; Woźniak, M.; Damaševičius, R. Parallel Technique for the Metaheuristic Algorithms Using Devoted Local Search and Manipulating the Solutions Space. Appl. Sci. 2018, 8, 293. [Google Scholar] [CrossRef] [Green Version]
  21. Fanjul-Peyro, L.; Ruiz, R. Scheduling unrelated parallel machines with optional machines and jobs selection. Comput. Oper. Res. 2012, 39, 1745–1753. [Google Scholar] [CrossRef] [Green Version]
  22. Gara-Ali, A.; Finke, G.; Espinouse, M.L. Parallel-machine scheduling with maintenance: Praising the assignment problem. Eur. J. Oper. Res. 2016, 252, 90–97. [Google Scholar] [CrossRef]
  23. Zhang, X.; Bard, J.F. Equipment scheduling at mail processing and distribution centers. IIE Trans. 2005, 37, 175–187. [Google Scholar] [CrossRef]
  24. Huang, S.; Cai, L.; Zhang, X. Parallel dedicated machine scheduling problem with sequence-dependent setups and a single server. Comput. Ind. Eng. 2010, 58, 165–174. [Google Scholar] [CrossRef]
  25. Dauod, H.; Li, D.; Yoon, S.W.; Srihari, K. Multi-objective optimization of the order scheduling problem in mail-order pharmacy automation systems. Int. J. Adv. Manuf. Technol. 2016, 99, 73–83. [Google Scholar] [CrossRef]
  26. Wu, Q.; Fan, X.; Wei, W.; Wozniak, M. Dynamic Scheduling Algorithm for Delay-Sensitive Vehicular Safety Applications in Cellular Network. Inf. Technol. Control 2020, 49, 161–178. [Google Scholar] [CrossRef]
  27. Szenasi, S. Static load balancing on heterogeneous systems containing CPU and GPU. In Proceedings of the 18th International Multidisciplinary Scientific GeoConference SGEM 2018, Albena, Bulgaria, 2–8 July 2018; pp. 717–722. [Google Scholar] [CrossRef]
  28. Arnaout, J.P.; Musa, R.; Rabadi, G. Ant colony optimization algorithm to parallel machine scheduling problem with setups. In Proceedings of the 2008 IEEE International Conference on Automation Science and Engineering, Arlington, VA, USA, 23–26 August 2008; pp. 578–582. [Google Scholar]
  29. Arnaout, J.P.; Musa, R.; Rabadi, G. A two-stage Ant Colony optimization algorithm to minimize the makespan on unrelated parallel machines—Part II: Enhancements and experimentations. J. Intell. Manuf. 2014, 25, 43–53. [Google Scholar] [CrossRef]
  30. Ezugwu, A.E. Enhanced symbiotic organisms search algorithm for unrelated parallel machines manufacturing scheduling with setup times. Knowl. Based Syst. 2019, 172, 15–32. [Google Scholar] [CrossRef]
  31. Sels, V.; Coelho, J.; Dias, A.M.; Vanhoucke, M. Hybrid tabu search and a truncated branch-and-bound for the unrelated parallel machine scheduling problem. Comput. Oper. Res. 2015, 53, 107–117. [Google Scholar] [CrossRef] [Green Version]
  32. Ezugwu, A.E.; Akutsah, F. An Improved Firefly Algorithm for the Unrelated Parallel Machines Scheduling Problem With Sequence-Dependent Setup Times. IEEE Access 2018, 6, 54459–54478. [Google Scholar] [CrossRef]
  33. Mir, M.S.S.; Rezaeian, J. A robust hybrid approach based on particle swarm optimization and genetic algorithm to minimize the total machine load on unrelated parallel machines. Appl. Soft Comput. 2016, 41, 488–504. [Google Scholar]
  34. Lei, D.; Yuan, Y.; Cai, J. An improved artificial bee colony for multi-objective distributed unrelated parallel machine scheduling. Int. J. Prod. Res. 2020, 1–13. [Google Scholar] [CrossRef]
  35. Rajkumar, R.; Robert, R.J. A Hybrid Algorithm for Multi-Objective Optimization of Minimizing Makespan and Total Flow Time in Permutation Flow Shop Scheduling Problems. Inf. Technol. Control 2019, 48. [Google Scholar] [CrossRef] [Green Version]
  36. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris Hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  37. Thaher, T.; Heidari, A.A.; Mafarja, M.; Dong, J.S.; Mirjalili, S. Binary Harris Hawks Optimizer for High-Dimensional, Low Sample Size Feature Selection. In Evolutionary Machine Learning Techniques; Springer: Berlin/Heidelberg, Germany, 2020; pp. 251–272. [Google Scholar]
  38. Yousri, D.; Allam, D.; Eteiba, M.B. Optimal photovoltaic array reconfiguration for alleviating the partial shading influence based on a modified harris hawks optimizer. Energy Convers. Manag. 2020, 206, 112470. [Google Scholar] [CrossRef]
  39. Essa, F.; Abd Elaziz, M.; Elsheikh, A.H. An enhanced productivity prediction model of active solar still using artificial neural network and Harris Hawks optimizer. Appl. Therm. Eng. 2020, 170, 115020. [Google Scholar] [CrossRef]
  40. Shehabeldeen, T.A.; Abd Elaziz, M.; Elsheikh, A.H.; Zhou, J. Modeling of friction stir welding process using adaptive neuro-fuzzy inference system integrated with harris hawks optimizer. J. Mater. Res. Technol. 2019, 8, 5882–5892. [Google Scholar] [CrossRef]
  41. Abd Elaziz, M.; Heidari, A.A.; Fujita, H.; Moayedi, H. A competitive chain-based Harris Hawks Optimizer for global optimization and multi-level image thresholding problems. Appl. Soft Comput. 2020, 106347. [Google Scholar] [CrossRef]
  42. Ewees, A.A.; Abd Elaziz, M. Performance analysis of chaotic multi-verse harris hawks optimization: A case study on solving engineering problems. Eng. Appl. Artif. Intell. 2020, 88, 103370. [Google Scholar] [CrossRef]
  43. Yıldız, A.R.; Yıldız, B.S.; Sait, S.M.; Bureerat, S.; Pholdee, N. A new hybrid harris hawks-Nelder-Mead optimization algorithm for solving design and manufacturing problems. Mater. Test. 2019, 61, 735–743. [Google Scholar] [CrossRef]
  44. Golilarz, N.A.; Gao, H.; Demirel, H. Satellite Image De-Noising with Harris Hawks Meta Heuristic Optimization Algorithm and Improved Adaptive Generalized Gaussian Distribution Threshold Function. IEEE Access 2019, 7, 57459–57468. [Google Scholar] [CrossRef]
  45. Hu, H.; Li, Y.; Bai, Y.; Zhang, J.; Liu, M. The Improved Antlion Optimizer and Artificial Neural Network for Chinese Influenza Prediction. Complexity 2019, 2019, 1480392. [Google Scholar] [CrossRef] [Green Version]
  46. Attiya, I.; Abd Elaziz, M.; Xiong, S. Job scheduling in cloud computing using a modified harris hawks optimization and simulated annealing algorithm. Comput. Intell. Neurosci. 2020, 2020, 3504642. [Google Scholar] [CrossRef] [Green Version]
  47. Kanoosh, H.M.; Houssein, E.H.; Selim, M.M. Salp Swarm Algorithm for Node Localization in Wireless Sensor Networks. J. Comput. Networks Commun. 2019, 2019, 1028723. [Google Scholar] [CrossRef] [Green Version]
  48. Ibrahim, R.A.; Ewees, A.A.; Oliva, D.; Abd Elaziz, M.; Lu, S. Improved salp swarm algorithm based on particle swarm optimization for feature selection. J. Ambient Intell. Humaniz. Comput. 2019, 10, 3155–3169. [Google Scholar] [CrossRef]
  49. Tubishat, M.; Idris, N.; Shuib, L.; Abushariah, M.A.; Mirjalili, S. Improved Salp Swarm Algorithm based on opposition based learning and novel local search algorithm for feature selection. Expert Syst. Appl. 2020, 145, 113122. [Google Scholar] [CrossRef]
  50. Neggaz, N.; Ewees, A.A.; Abd Elaziz, M.; Mafarja, M. Boosting salp swarm algorithm by sine cosine algorithm and disrupt operator for feature selection. Expert Syst. Appl. 2020, 145, 113103. [Google Scholar] [CrossRef]
  51. Abd Elaziz, M.; Ewees, A.A.; Alameer, Z. Improving adaptive neuro-fuzzy inference system based on a modified salp swarm algorithm using genetic algorithm to forecast crude oil price. Nat. Resour. Res. 2019, 29, 1–16. [Google Scholar] [CrossRef]
  52. Alresheedi, S.S.; Lu, S.; Abd Elaziz, M.; Ewees, A.A. Improved multiobjective salp swarm optimization for virtual machine placement in cloud computing. Hum. Centric Comput. Inf. Sci. 2019, 9, 15. [Google Scholar] [CrossRef] [Green Version]
  53. Ibrahim, A.; Ahmed, A.; Hussein, S.; Hassanien, A.E. Fish image segmentation using salp swarm algorithm. In International Conference on Advanced Machine Learning Technologies and Applications; Springer: Berlin/Heidelberg, Germany, 2018; pp. 42–51. [Google Scholar]
  54. Alwerfali, H.S.N.; Elaziz, M.A.; Al-Qaness, M.A.; Abbasi, A.A.; Lu, S.; Liu, F.; Li, L. A Multilevel Image Thresholding Based on Hybrid Salp Swarm Algorithm and Fuzzy Entropy. IEEE Access 2019, 7, 181405–181422. [Google Scholar] [CrossRef]
  55. Chang, P.C.; Chen, S.H. Integrating dominance properties with genetic algorithms for parallel machine scheduling problems with setup times. Appl. Soft Comput. 2011, 11, 1263–1274. [Google Scholar] [CrossRef]
  56. Rodriguez, F.J.; Blum, C.; García-Martínez, C.; Lozano, M. GRASP with path-relinking for the non-identical parallel machine scheduling problem with minimising total weighted completion times. Ann. Oper. Res. 2012, 201, 383–401. [Google Scholar] [CrossRef]
  57. Rodriguez, F.J.; Lozano, M.; Blum, C.; GarcíA-MartíNez, C. An iterated greedy algorithm for the large-scale unrelated parallel machines scheduling problem. Comput. Oper. Res. 2013, 40, 1829–1841. [Google Scholar] [CrossRef]
  58. Bitar, A.; Dauzère-Pérès, S.; Yugma, C.; Roussel, R. A memetic algorithm to solve an unrelated parallel machine scheduling problem with auxiliary resources in semiconductor manufacturing. J. Sched. 2016, 19, 367–376. [Google Scholar] [CrossRef] [Green Version]
  59. González, M.A.; Vela, C.R. An efficient memetic algorithm for total weighted tardiness minimization in a single machine with setups. Appl. Soft Comput. 2015, 37, 506–518. [Google Scholar] [CrossRef]
  60. Wang, S.Y.; Wang, L. An estimation of distribution algorithm-based memetic algorithm for the distributed assembly permutation flow-shop scheduling problem. IEEE Trans. Syst. Man Cybern. Syst. 2016, 46, 139–149. [Google Scholar] [CrossRef]
  61. Shao, W.; Pi, D.; Shao, Z. Memetic algorithm with node and edge histogram for no-idle flow shop scheduling problem to minimize the makespan criterion. Appl. Soft Comput. 2017, 54, 164–182. [Google Scholar] [CrossRef]
  62. Pan, Q.K.; Wang, L.; Sang, H.Y.; Li, J.Q.; Liu, M. A high performing memetic algorithm for the flowshop scheduling problem with blocking. IEEE Trans. Autom. Sci. Eng. 2013, 10, 741–756. [Google Scholar]
  63. Deng, J.; Wang, L. A competitive memetic algorithm for multi-objective distributed permutation flow shop scheduling problem. Swarm Evol. Comput. 2017, 32, 121–131. [Google Scholar] [CrossRef]
  64. Liang, P.; Yang, H.D.; Liu, G.S.; Guo, J.H. An ant optimization model for unrelated parallel machine scheduling with energy consumption and total tardiness. Math. Probl. Eng. 2015, 2015, 907034. [Google Scholar] [CrossRef] [Green Version]
  65. Pacheco, J.; Porras, S.; Casado, S.; Baruque, B. Variable neighborhood search with memory for a single-machine scheduling problem with periodic maintenance and sequence-dependent set-up times. Knowl. Based Syst. 2018, 145, 236–249. [Google Scholar] [CrossRef]
  66. Li, K.; Zhang, X.; Leung, J.Y.T.; Yang, S.L. Parallel machine scheduling problems in green manufacturing industry. J. Manuf. Syst. 2016, 38, 98–106. [Google Scholar] [CrossRef]
  67. Özpeynirci, S.; Gökgür, B.; Hnich, B. Parallel machine scheduling with tool loading. Appl. Math. Model. 2016, 40, 5660–5671. [Google Scholar] [CrossRef]
  68. Li, Z.; Yang, H.; Zhang, S.; Liu, G. Unrelated parallel machine scheduling problem with energy and tardiness cost. Int. J. Adv. Manuf. Technol. 2016, 84, 213–226. [Google Scholar] [CrossRef]
  69. Fanjul-Peyro, L.; Ruiz, R.; Perea, F. Reformulations and an exact algorithm for unrelated parallel machine scheduling problems with setup times. Comput. Oper. Res. 2019, 101, 173–182. [Google Scholar] [CrossRef]
  70. Wang, H.; Alidaee, B. Effective heuristic for large-scale unrelated parallel machines scheduling problems. Omega 2019, 83, 261–274. [Google Scholar] [CrossRef]
  71. Ding, J.Y.; Song, S.; Zhang, R.; Chiong, R.; Wu, C. Parallel machine scheduling under time-of-use electricity prices: New models and optimization approaches. IEEE Trans. Autom. Sci. Eng. 2016, 13, 1138–1154. [Google Scholar] [CrossRef]
  72. Wu, X.; Che, A. A memetic differential evolution algorithm for energy-efficient parallel machine scheduling. Omega 2019, 82, 155–165. [Google Scholar] [CrossRef]
  73. Arroyo, J.E.C.; Leung, J.Y.T.; Tavares, R.G. An iterated greedy algorithm for total flow time minimization in unrelated parallel batch machines with unequal job release times. Eng. Appl. Artif. Intell. 2019, 77, 239–254. [Google Scholar] [CrossRef]
  74. Helal, M.; Rabadi, G.; Al-Salem, A. A tabu search algorithm to minimize the makespan for the unrelated parallel machines scheduling problem with setup times. Int. J. Oper. Res. 2006, 3, 182–192. [Google Scholar]
  75. Jouhari, H.; Lei, D.; Al-qaness, M.A.; Elaziz, M.A.; Ewees, A.A.; Farouk, O. Sine-Cosine Algorithm to Enhance Simulated Annealing for Unrelated Parallel Machine Scheduling with Setup Times. Mathematics 2019, 7, 1120. [Google Scholar] [CrossRef] [Green Version]
  76. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  77. Lin, S.W.; Ying, K.C. ABC-based manufacturing scheduling for unrelated parallel machines with machine-dependent and job sequence-dependent setup times. Comput. Oper. Res. 2014, 51, 172–181. [Google Scholar] [CrossRef]
  78. Ying, K.C.; Lee, Z.J.; Lin, S.W. Makespan minimization for scheduling unrelated parallel machines with setup times. J. Intell. Manuf. 2012, 23, 1795–1803. [Google Scholar] [CrossRef]
  79. Ezugwu, A.E.; Adeleke, O.J.; Viriri, S. Symbiotic organisms search algorithm for the unrelated parallel machines scheduling with sequence-dependent setup times. PLoS ONE 2018, 13, e0200030. [Google Scholar] [CrossRef]
Figure 1. The stages of the proposed method.
Figure 2. Average improvement of the modified HHO (MHHO) by (a) number of machines and (b) number of jobs. SSA is the Salp Swarm Algorithm; HHO is the Harris Hawks Optimizer.
Figure 3. Average CPU time of SSA, HHO, and MHHO over all machine configurations. SSA is the Salp Swarm Algorithm; HHO is the Harris Hawks Optimizer; MHHO is the proposed hybridization of HHO and SSA.
Table 1. Parameter settings for the algorithms.
Parameter | Values
SSA | C2 ∈ [0, 1], C3 ∈ [0, 1]
HHO | E ∈ [0, 2]
MHHO | C2 ∈ [0, 1], C3 ∈ [0, 1], E ∈ [0, 2]
Table 2. Average ρ values, CPU time, and standard deviation of the makespan (σ_Cmax) for the SSA, HHO, and MHHO methods, together with the improvement (δ) achieved by MHHO.
m | Job | SSA ρ | SSA σ_Cmax | SSA Time | HHO ρ | HHO σ_Cmax | HHO Time | MHHO ρ | MHHO σ_Cmax | MHHO Time | δ_SSA | δ_HHO
2 | 20 | 1.27 | 28.44 | 6.11 | 2.70 | 109.23 | 14.11 | 0.34 | 15.50 | 9.96 | 2.75 | 7.00
2 | 40 | 2.68 | 37.01 | 17.57 | 4.82 | 38.82 | 44.79 | 0.48 | 17.80 | 29.96 | 4.61 | 9.10
2 | 60 | 3.18 | 51.32 | 35.26 | 6.90 | 238.64 | 97.38 | 1.32 | 15.50 | 58.34 | 1.41 | 4.23
2 | 80 | 3.48 | 45.87 | 59.12 | 4.73 | 395.73 | 178.21 | 0.86 | 32.30 | 101.24 | 3.03 | 4.48
2 | 100 | 4.39 | 69.76 | 88.67 | 8.24 | 88.46 | 238.95 | 0.02 | 31.95 | 149.14 | 208.91 | 393.22
2 | 120 | 3.34 | 56.10 | 126.00 | 6.60 | 176.09 | 350.64 | 0.04 | 54.01 | 208.26 | 79.14 | 157.41
4 | 20 | 6.39 | 71.97 | 6.88 | 5.33 | 134.79 | 17.34 | 0.40 | 69.27 | 11.68 | 15.04 | 12.40
4 | 40 | 8.39 | 24.54 | 20.00 | 6.58 | 479.95 | 47.68 | 0.84 | 24.33 | 31.94 | 8.99 | 6.83
4 | 60 | 7.87 | 18.80 | 37.07 | 8.31 | 704.47 | 138.16 | 1.09 | 15.18 | 61.37 | 6.22 | 6.62
4 | 80 | 7.00 | 22.99 | 61.27 | 20.38 | 333.94 | 580.06 | 2.55 | 22.99 | 317.26 | 1.74 | 6.98
4 | 100 | 9.77 | 32.66 | 106.28 | 19.88 | 35.08 | 880.00 | 0.82 | 26.47 | 474.33 | 10.96 | 23.33
4 | 120 | 8.26 | 8.49 | 180.86 | 3.54 | 101.79 | 1206.66 | 2.49 | 18.37 | 652.04 | 2.32 | 0.42
6 | 20 | 25.00 | 141.87 | 7.66 | 6.83 | 152.95 | 56.04 | 2.27 | 5.20 | 41.81 | 10.02 | 2.01
6 | 40 | 24.34 | 122.15 | 20.43 | 7.04 | 127.96 | 151.05 | 0.70 | 7.05 | 103.42 | 33.58 | 9.00
6 | 60 | 24.28 | 116.25 | 39.11 | 9.79 | 364.91 | 342.87 | 0.79 | 6.90 | 189.15 | 29.89 | 11.45
6 | 80 | 13.83 | 92.21 | 63.80 | 22.31 | 36.06 | 194.24 | 3.34 | 7.60 | 131.58 | 3.14 | 5.68
6 | 100 | 16.91 | 129.52 | 93.72 | 14.80 | 192.00 | 292.54 | 2.80 | 24.91 | 166.74 | 5.04 | 4.29
6 | 120 | 14.58 | 7.37 | 130.63 | 13.75 | 164.65 | 407.38 | 3.64 | 19.96 | 230.84 | 3.00 | 2.78
8 | 20 | 42.08 | 85.79 | 8.66 | 14.88 | 74.77 | 22.91 | 0.73 | 31.12 | 15.05 | 56.62 | 19.38
8 | 40 | 40.21 | 184.82 | 21.89 | 6.27 | 68.77 | 60.02 | 0.65 | 17.20 | 38.58 | 60.76 | 8.64
8 | 60 | 23.65 | 29.28 | 41.00 | 4.92 | 89.07 | 112.34 | 3.30 | 27.42 | 73.38 | 6.17 | 0.49
8 | 80 | 25.10 | 99.37 | 66.15 | 7.02 | 98.45 | 183.88 | 1.73 | 66.67 | 116.28 | 13.52 | 3.06
8 | 100 | 21.94 | 58.53 | 97.32 | 10.41 | 308.32 | 261.27 | 3.20 | 57.05 | 168.23 | 5.85 | 2.25
8 | 120 | 16.52 | 31.75 | 133.67 | 35.20 | 97.15 | 431.87 | 5.00 | 25.49 | 245.40 | 2.30 | 6.04
10 | 20 | 182.66 | 220.69 | 9.55 | 9.20 | 53.57 | 66.84 | 2.16 | 26.53 | 48.76 | 83.40 | 3.25
10 | 40 | 81.80 | 314.56 | 23.38 | 3.72 | 58.00 | 178.11 | 0.21 | 25.27 | 111.81 | 396.83 | 17.09
10 | 60 | 28.36 | 221.70 | 43.30 | 13.34 | 68.00 | 323.09 | 0.92 | 41.02 | 257.12 | 29.94 | 13.55
10 | 80 | 42.93 | 102.97 | 68.88 | 13.10 | 142.50 | 674.73 | 1.48 | 946.99 | 359.23 | 27.83 | 7.80
10 | 100 | 28.97 | 164.06 | 100.13 | 4.88 | 57.97 | 267.53 | 0.77 | 29.51 | 175.23 | 36.69 | 5.34
10 | 120 | 21.21 | 53.67 | 138.14 | 5.60 | 63.62 | 377.34 | 5.06 | 40.67 | 237.81 | 3.19 | 0.11
12 | 20 | 189.68 | 75.35 | 10.61 | 19.42 | 16.99 | 25.13 | 15.89 | 9.74 | 17.90 | 10.94 | 0.22
12 | 40 | 185.00 | 229.31 | 25.19 | 8.31 | 77.41 | 61.38 | 6.90 | 16.00 | 43.02 | 25.80 | 0.20
12 | 60 | 78.85 | 346.51 | 45.61 | 10.94 | 56.59 | 111.72 | 0.73 | 4.58 | 78.69 | 106.89 | 13.97
12 | 80 | 35.13 | 362.36 | 71.41 | 13.59 | 68.43 | 191.42 | 2.15 | 10.50 | 124.31 | 15.37 | 5.33
12 | 100 | 48.98 | 154.08 | 101.16 | 27.11 | 78.55 | 284.76 | 6.09 | 26.31 | 177.38 | 7.05 | 3.45
12 | 120 | 39.48 | 121.48 | 134.24 | 6.24 | 54.90 | 360.79 | 3.92 | 25.77 | 248.83 | 9.08 | 0.59
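The statistics reported in Table 2 can be reproduced from raw makespan samples. The sketch below uses the usual UPMSP conventions, which are an assumption here since the exact definitions appear earlier in the paper: ρ is the percentage deviation of the obtained makespan from the best-known value, σ_Cmax is the spread of the makespan across runs, and δ is the relative gain of MHHO over a competitor. The run values are hypothetical; only the two ρ averages in the last line are taken from Table 2.

```python
import statistics

def rel_dev(cmax, best):
    """Percentage deviation of a makespan from the best-known value C*.
    Assumed convention: rho = 100 * (Cmax - C*) / C*."""
    return 100.0 * (cmax - best) / best

def improvement(rho_other, rho_mhho):
    """Relative gain of MHHO over a competitor.
    Assumed convention: delta = (rho_other - rho_mhho) / rho_mhho."""
    return (rho_other - rho_mhho) / rho_mhho

# Hypothetical makespans from five runs of one instance (best known: 1160).
runs = [1162, 1170, 1188, 1190, 1211]
best_known = 1160
rho = statistics.mean(rel_dev(c, best_known) for c in runs)
sigma = statistics.stdev(runs)  # the sigma_Cmax column: spread across runs

# Averaged deviations reported for m = 2, Job = 20 in Table 2:
delta_ssa = improvement(1.27, 0.34)
print(round(delta_ssa, 2))  # 2.74, close to the reported 2.75
```

With this δ convention the reported improvement columns are consistent across rows (e.g., for m = 2, Job = 60: (3.18 − 1.32)/1.32 ≈ 1.41), which is why it is used in the sketch.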
Table 3. Results of δ between MHHO and the other comparison methods in small-size problems.
m | Jobs | HHO | SSA | SA | T9 | T8
2 | 6 | 0.130 | 0.253 | 0.045 | 0.144 | 0.139
2 | 7 | 0.012 | 0.234 | 0.165 | 0.244 | 0.226
2 | 8 | 0.026 | 0.100 | 0.130 | 0.149 | 0.147
2 | 9 | 0.014 | 0.006 | 0.141 | 0.123 | 0.110
2 | 10 | 0.005 | 0.045 | 0.215 | 0.140 | 0.133
2 | 11 | 0.020 | 0.024 | 0.217 | 0.119 | 0.116
4 | 6 | 0.009 | 0.116 | 0.184 | 0.192 | 0.203
4 | 7 | 0.008 | 0.099 | 0.329 | 0.294 | 0.318
4 | 8 | 0.015 | 0.500 | 0.748 | 0.437 | 0.445
4 | 9 | 0.007 | 0.278 | 0.394 | 0.287 | 0.282
4 | 10 | 0.016 | 0.108 | 0.289 | 0.175 | 0.165
4 | 11 | 0.025 | 0.154 | 0.347 | 0.164 | 0.165
6 | 8 | 0.004 | 0.253 | 0.189 | 0.144 | 0.167
6 | 9 | 0.028 | 0.359 | 0.223 | 0.116 | 0.139
6 | 10 | 0.012 | 0.304 | 0.368 | 0.180 | 0.201
6 | 11 | 0.011 | 0.291 | 0.238 | 0.030 | 0.060
8 | 10 | 0.002 | 0.305 | 0.113 | 0.016 | 0.024
8 | 11 | 0.018 | 0.326 | 0.416 | 0.006 | 0.016
Table 4. Comparison between MHHO and the state-of-the-art methods using the ρ metric.
m | Job | MHHO | FA | SOS-LPT | SOS | HSOSSA | ESA | ESOS | ACO | GA | IWO | RSA | HABC
2 | 20 | 0.166 | 1.319 | 1.267 | 3.229 | 0.250 | 3.505 | 1.667 | 13.580 | 3.719 | 1.976 | 4.450 | 4.140
2 | 40 | 0.478 | 2.185 | 1.346 | 1.414 | 0.542 | 3.365 | 1.574 | 20.860 | 3.789 | 3.035 | 2.840 | 2.370
2 | 60 | 1.147 | 2.901 | 1.325 | 1.579 | 1.323 | 4.434 | 1.670 | 23.153 | 5.480 | 4.143 | 2.530 | 2.100
2 | 80 | 0.863 | 3.155 | 1.294 | 1.501 | 0.960 | 6.113 | 1.440 | 24.828 | 5.270 | 4.397 | 2.390 | 1.940
2 | 100 | 0.021 | 3.499 | 0.175 | 0.337 | 0.907 | 6.055 | 1.811 | 25.524 | 5.416 | 4.803 | 2.030 | 1.800
2 | 120 | 0.042 | 3.909 | 0.193 | 0.599 | 1.287 | 7.208 | 2.688 | 25.954 | 5.233 | 4.806 | 1.950 | 1.690
4 | 20 | 0.398 | 0.944 | 0.445 | 5.604 | −5.890 | 0.814 | 0.814 | 27.904 | 5.473 | 2.717 | 8.740 | 8.480
4 | 40 | 0.840 | 4.155 | 2.492 | 2.745 | −0.498 | 5.365 | 3.114 | 39.658 | 9.129 | 7.827 | 5.740 | 5.130
4 | 60 | 1.090 | 4.269 | 3.331 | 4.116 | 1.361 | 8.158 | 3.287 | 45.495 | 22.523 | 8.796 | 4.750 | 4.080
4 | 80 | 2.554 | 6.565 | 3.772 | 4.317 | 2.703 | 9.926 | 4.212 | 48.055 | 9.823 | 8.887 | 4.590 | 3.880
4 | 100 | 0.817 | 5.331 | 2.932 | 3.654 | 3.922 | 11.191 | 4.878 | 50.013 | 12.750 | 9.977 | 4.080 | 3.450
4 | 120 | 2.489 | 5.963 | 3.216 | 3.450 | 3.591 | 12.233 | 4.369 | 50.606 | 12.190 | 10.221 | 3.620 | 3.330
6 | 20 | 2.270 | 12.288 | 6.570 | 6.584 | 4.857 | 11.755 | 7.009 | 44.445 | 17.163 | 15.582 | 23.120 | 23.050
6 | 40 | 0.704 | 7.050 | 3.450 | 4.733 | 0.732 | 7.179 | 3.411 | 58.658 | 12.566 | 11.412 | 9.370 | 8.770
6 | 60 | 0.786 | 4.989 | 3.223 | 4.347 | 0.963 | 9.923 | 3.409 | 65.317 | 12.536 | 9.668 | 6.510 | 5.790
6 | 80 | 3.342 | 9.970 | 5.164 | 5.412 | 3.476 | 13.202 | 5.393 | 70.939 | 14.280 | 11.084 | 6.910 | 5.950
6 | 100 | 2.800 | 8.972 | 5.115 | 5.434 | 4.227 | 14.771 | 5.236 | 71.113 | 15.526 | 13.611 | 5.630 | 4.840
6 | 120 | 3.642 | 6.677 | 5.000 | 5.532 | 3.874 | 15.148 | 5.492 | 77.632 | 16.948 | 12.930 | 5.370 | 4.570
8 | 20 | 0.730 | 12.463 | 3.235 | 5.596 | 1.712 | 8.897 | 4.706 | 64.831 | 13.163 | 15.034 | 27.140 | 27.040
8 | 40 | 0.651 | 2.324 | 1.629 | 5.967 | −2.824 | 5.709 | −0.181 | 77.577 | 14.492 | 12.768 | 9.030 | 8.100
8 | 60 | 3.301 | 11.223 | 3.852 | 5.202 | 3.569 | 13.774 | 5.034 | 88.479 | 15.593 | 13.724 | 10.490 | 9.620
8 | 80 | 1.729 | 8.633 | 3.712 | 6.037 | 3.411 | 14.654 | 5.823 | 85.424 | 15.255 | 11.736 | 6.900 | 6.000
8 | 100 | 3.202 | 14.269 | 3.500 | 4.783 | 4.228 | 17.091 | 5.110 | 91.589 | 16.549 | 15.059 | 7.530 | 6.720
8 | 120 | 5.000 | 13.988 | 5.026 | 6.420 | 5.364 | 17.679 | 6.364 | 94.596 | 18.046 | 16.215 | 6.660 | 5.330
10 | 20 | 2.164 | −9.828 | 6.790 | 9.612 | −19.660 | −10.554 | −13.779 | 78.038 | 10.157 | −5.399 | 15.790 | 15.130
10 | 40 | 0.206 | 2.963 | 1.423 | 6.572 | −5.403 | 5.124 | −0.306 | 90.544 | 16.477 | 11.697 | 10.550 | 9.350
10 | 60 | 0.917 | 7.477 | 1.084 | 3.611 | 0.294 | 11.356 | 3.779 | 98.990 | 15.364 | 11.272 | 9.070 | 7.620
10 | 80 | 1.489 | 14.762 | 3.258 | 4.704 | 2.069 | 16.750 | 5.135 | 111.036 | 21.404 | 16.669 | 7.950 | 6.960
10 | 100 | 0.769 | 12.225 | 3.462 | 7.680 | 4.710 | 18.286 | 7.514 | 116.667 | 21.692 | 17.698 | 7.070 | 6.350
10 | 120 | 5.063 | 9.938 | 5.219 | 6.904 | 6.223 | 19.865 | 6.879 | 116.214 | 19.635 | 17.757 | 6.660 | 6.230
12 | 20 | 15.891 | 7.836 | 3.804 | 31.876 | 0.923 | 8.025 | 3.099 | 96.691 | 11.387 | 6.003 | 32.430 | 32.310
12 | 40 | 6.649 | 28.823 | 9.776 | 25.047 | 8.436 | 16.045 | 9.762 | 106.705 | 24.865 | 21.156 | 24.940 | 24.270
12 | 60 | 0.244 | 31.407 | 3.491 | 9.396 | 1.078 | 12.711 | 3.427 | 115.136 | 22.456 | 21.301 | 9.850 | 8.220
12 | 80 | 2.192 | 10.842 | 5.032 | 10.824 | 3.061 | 17.833 | 5.030 | 121.426 | 21.783 | 14.657 | 10.980 | 10.400
12 | 100 | 6.088 | 13.665 | 7.400 | 11.222 | 6.470 | 19.324 | 7.397 | 124.443 | 23.184 | 22.636 | 11.670 | 11.330
12 | 120 | 3.918 | 14.045 | 6.522 | 8.447 | 5.123 | 20.372 | 8.447 | 130.772 | 24.560 | 18.891 | 7.290 | 6.870
Avg. | | 2.380 | 8.367 | 3.570 | 6.514 | 1.594 | 10.647 | 3.742 | 72.025 | 14.330 | 11.243 | 9.073 | 8.423
Table 5. Results of δ between MHHO and the other comparison methods.
m | Job | FA | SOS-LPT | SOS | HSOSSA | ESA | ESOS | ACO | GA | IWO | RSA | HABC
2 | 20 | 6.95 | 6.64 | 18.46 | 0.51 | 20.13 | 9.05 | 80.87 | 21.42 | 10.92 | 25.83 | 23.96
2 | 40 | 3.57 | 1.82 | 1.96 | 0.13 | 6.04 | 2.29 | 42.67 | 6.93 | 5.35 | 4.95 | 3.96
2 | 60 | 1.53 | 0.15 | 0.38 | 0.15 | 2.86 | 0.46 | 19.18 | 3.78 | 2.61 | 1.20 | 0.83
2 | 80 | 2.66 | 0.50 | 0.74 | 0.11 | 6.08 | 0.67 | 27.77 | 5.11 | 4.10 | 1.77 | 1.25
2 | 100 | 166.33 | 7.35 | 15.14 | 42.36 | 288.54 | 85.58 | 1219.46 | 257.97 | 228.66 | 96.07 | 85.07
2 | 120 | 92.84 | 3.62 | 13.39 | 29.89 | 172.04 | 63.52 | 622.02 | 124.61 | 114.38 | 45.81 | 39.57
4 | 20 | 1.37 | 0.12 | 13.07 | −15.79 | 1.04 | 1.04 | 69.07 | 12.74 | 5.82 | 20.95 | 20.29
4 | 40 | 3.95 | 1.97 | 2.27 | −1.59 | 5.39 | 2.71 | 46.24 | 9.87 | 8.32 | 5.84 | 5.11
4 | 60 | 2.92 | 2.06 | 2.78 | 0.25 | 6.48 | 2.02 | 40.74 | 19.66 | 7.07 | 3.36 | 2.74
4 | 80 | 1.57 | 0.48 | 0.69 | 0.06 | 2.89 | 0.65 | 17.82 | 2.85 | 2.48 | 0.80 | 0.52
4 | 100 | 5.52 | 2.59 | 3.47 | 3.80 | 12.70 | 4.97 | 60.22 | 14.61 | 11.21 | 3.99 | 3.22
4 | 120 | 1.40 | 0.29 | 0.39 | 0.44 | 3.91 | 0.76 | 19.33 | 3.90 | 3.11 | 0.45 | 0.34
6 | 20 | 4.41 | 1.89 | 1.90 | 1.14 | 4.18 | 2.09 | 18.58 | 6.56 | 5.87 | 9.19 | 9.16
6 | 40 | 9.01 | 3.90 | 5.72 | 0.04 | 9.20 | 3.85 | 82.32 | 16.85 | 15.21 | 12.31 | 11.46
6 | 60 | 5.35 | 3.10 | 4.53 | 0.23 | 11.62 | 3.34 | 82.10 | 14.95 | 11.30 | 7.28 | 6.37
6 | 80 | 1.98 | 0.55 | 0.62 | 0.04 | 2.95 | 0.61 | 20.23 | 3.27 | 2.32 | 1.07 | 0.78
6 | 100 | 2.20 | 0.83 | 0.94 | 0.51 | 4.28 | 0.87 | 24.40 | 4.55 | 3.86 | 1.01 | 0.73
6 | 120 | 0.83 | 0.37 | 0.52 | 0.06 | 3.16 | 0.51 | 20.32 | 3.65 | 2.55 | 0.47 | 0.25
8 | 20 | 16.07 | 3.43 | 6.66 | 1.34 | 11.18 | 5.44 | 87.78 | 17.02 | 19.59 | 36.16 | 36.03
8 | 40 | 2.57 | 1.50 | 8.17 | −5.34 | 7.77 | −1.28 | 118.17 | 21.26 | 18.61 | 12.87 | 11.44
8 | 60 | 2.40 | 0.17 | 0.58 | 0.08 | 3.17 | 0.53 | 25.81 | 3.72 | 3.16 | 2.18 | 1.91
8 | 80 | 3.99 | 1.15 | 2.49 | 0.97 | 7.48 | 2.37 | 48.42 | 7.83 | 5.79 | 2.99 | 2.47
8 | 100 | 3.46 | 0.09 | 0.49 | 0.32 | 4.34 | 0.60 | 27.60 | 4.17 | 3.70 | 1.35 | 1.10
8 | 120 | 1.80 | 0.01 | 0.28 | 0.07 | 2.54 | 0.27 | 17.92 | 2.61 | 2.24 | 0.33 | 0.07
10 | 20 | −5.54 | 2.14 | 3.44 | −10.08 | −5.88 | −7.37 | 35.06 | 3.69 | −3.49 | 6.30 | 5.99
10 | 40 | 13.41 | 5.92 | 30.96 | −27.28 | 23.92 | −2.49 | 439.38 | 79.14 | 55.89 | 50.31 | 44.48
10 | 60 | 7.16 | 0.18 | 2.94 | −0.68 | 11.39 | 3.12 | 106.99 | 15.76 | 11.30 | 8.89 | 7.31
10 | 80 | 8.91 | 1.19 | 2.16 | 0.39 | 10.25 | 2.45 | 73.57 | 13.37 | 10.19 | 4.34 | 3.67
10 | 100 | 14.91 | 3.50 | 8.99 | 5.13 | 22.79 | 8.78 | 150.79 | 27.22 | 22.03 | 8.20 | 7.26
10 | 120 | 0.96 | 0.03 | 0.36 | 0.23 | 2.92 | 0.36 | 21.96 | 2.88 | 2.51 | 0.32 | 0.23
12 | 20 | −0.51 | −0.76 | 1.01 | −0.94 | −0.49 | −0.80 | 5.08 | −0.28 | −0.62 | 1.04 | 1.03
12 | 40 | 3.34 | 0.47 | 2.77 | 0.27 | 1.41 | 0.47 | 15.05 | 2.74 | 2.18 | 2.75 | 2.65
12 | 60 | 127.90 | 13.33 | 37.56 | 3.42 | 51.17 | 13.07 | 471.53 | 91.16 | 86.42 | 39.43 | 32.74
12 | 80 | 3.95 | 1.30 | 3.94 | 0.40 | 7.14 | 1.30 | 54.40 | 8.94 | 5.69 | 4.01 | 3.75
12 | 100 | 1.24 | 0.22 | 0.84 | 0.06 | 2.17 | 0.22 | 19.44 | 2.81 | 2.72 | 0.92 | 0.86
12 | 120 | 2.58 | 0.66 | 1.16 | 0.31 | 4.20 | 1.16 | 32.38 | 5.27 | 3.82 | 0.86 | 0.75
Table 6. Parameter settings for all algorithms.
Parameter | Values
ACO | Pheromone evaporation = 0.01, Global update rate = 0.081, NP = 60, Pheromone amounts = 1, LocalIter = 31
SADP | Initial temperature = LB + (UB − LB)/10, Final temperature = LB, Temperature reduction rate = 0.99
GADP2 | Crossover rate = 0.6, NP = 50, Mutation rate = 0.5
ESOS | Number of random moves = [0.5 m, 0.9 m], NP = 50
GADP | Crossover rate = 0.6, NP = 50, Mutation rate = 0.5
FA | Selection pressure = 3, NP = 100, Max generation = 1000, Uniform mutation rate = 3, α = 0.2, Light absorption = 1
SOS | Number of random moves = [0.5 m, 0.9 m], NP = 50
IWO | Min seeds = 0, Max seeds = 5, NP = 15, Initial standard deviation = 1, Final standard deviation = 0.001, Variance reduction exponent = 2
GA | Crossover rate = 0.6, Mutation rate = 0.5, NP = 15, Selection pressure = 5
ESA | Initial temperature = 0.025, Temperature reduction rate = 0.99
SOS-LPT | Number of random moves = [0.5 m, 0.9 m], NP = 50
HSOSSA | Number of random moves = [0.5 m, 0.9 m], NP = 50, Initial temperature = 0.025, Temperature reduction rate = 0.99
RSA | Initial temperature = 1.5, Final temperature = 0.02, Iter = (jobs + machines − 1) × 1500, Cooling schedule = 0.98, Boltzmann constant = 0.1, Non-improving = 28
HABC | Initial temperature = 0.25 × (sum of processing time and average setup time for each job on each machine)/(jobs × machines), NP = 10, Limits = 1000, T_max = (jobs × machines × 35)/100, Cooling schedule = 0.90
Table 7. Comparison of the results of MHHO and other state-of-the-art algorithms. The best values are shown in bold.
m | n | ACO [32] Min | Avg. | Max | δ | SADP [55] Min | Avg. | Max | δ | GADP2 [55] Min | Avg. | Max | δ | ESOS [30] Min | Avg. | Max | δ | GADP [55] Min | Avg. | Max | δ
2 | 20 | 1283 | 1347 | 1348 | 13.6 | 1196 | 1255 | 1338 | 5.8 | 1242 | 1254 | 1266 | 5.7 | 1190 | 1213 | 1236 | 2.3 | 1242 | 1255 | 1267 | 5.8
2 | 40 | 2772 | 2834 | 2889 | 20.8 | 2371 | 2462 | 2550 | 5.0 | 2441 | 2459 | 2474 | 4.9 | 2363 | 2378 | 2393 | 1.4 | 2446 | 2469 | 2487 | 5.3
2 | 60 | 4223 | 4323 | 4404 | 23.2 | 3588 | 3680 | 3764 | 4.8 | 3652 | 3675 | 3695 | 4.7 | 3519 | 3575 | 3631 | 1.9 | 3664 | 3695 | 3734 | 5.3
2 | 80 | 5701 | 5823 | 5911 | 24.8 | 4753 | 4879 | 5045 | 4.6 | 4846 | 4872 | 4896 | 4.4 | 4705 | 4730 | 4754 | 1.4 | 4859 | 4892 | 4923 | 4.9
6 | 20 | 466 | 523 | 561 | 44.6 | 441 | 455 | 481 | 25.7 | 448 | 454 | 459 | 25.4 | 381 | 387 | 393 | 6.9 | 448 | 456 | 463 | 26.0
6 | 40 | 992 | 1137 | 1269 | 58.6 | 796 | 841 | 892 | 17.3 | 809 | 831 | 853 | 15.9 | 734 | 742 | 750 | 3.5 | 815 | 838 | 861 | 16.9
6 | 60 | 1171 | 1771 | 1910 | 65.4 | 1210 | 1259 | 1295 | 17.6 | 1219 | 1246 | 1270 | 16.3 | 1100 | 1107 | 1113 | 3.3 | 1225 | 1252 | 1277 | 16.9
6 | 80 | 2200 | 2443 | 2609 | 71.0 | 1606 | 1662 | 1705 | 16.3 | 1622 | 1648 | 1672 | 15.3 | 1500 | 1507 | 1514 | 5.5 | 1623 | 1655 | 1684 | 15.8
12 | 20 | 266 | 343 | 356 | 96.2 | 229 | 241 | 265 | 37.7 | 235 | 239 | 244 | 36.6 | 177 | 183 | 189 | 4.6 | 234 | 240 | 247 | 37.1
12 | 40 | 643 | 717 | 796 | 106.7 | 440 | 466 | 493 | 34.3 | 444 | 455 | 468 | 31.1 | 377 | 382 | 386 | 9.9 | 444 | 456 | 471 | 31.4
12 | 60 | 1047 | 1117 | 1217 | 115.2 | 628 | 669 | 715 | 28.9 | 809 | 649 | 667 | 25.1 | 534 | 538 | 542 | 3.7 | 629 | 652 | 673 | 25.6
12 | 80 | 1352 | 1528 | 1639 | 121.4 | 819 | 891 | 959 | 29.1 | 819 | 849 | 884 | 23.0 | 713 | 723 | 733 | 4.8 | 821 | 852 | 889 | 23.5

m | n | FA [32] Min | Avg. | Max | δ | IWO [32] Min | Avg. | Max | δ | GA [32] Min | Avg. | Max | δ | ESA [30] Min | Avg. | Max | δ | MHHO Min | Avg. | Max | δ
2 | 20 | 1156 | 1201 | 1256 | 1.3 | 1152 | 1209 | 1270 | 2.0 | 1176 | 1224 | 1331 | 3.2 | 1202 | 1235 | 1268 | 4.1 | 1162 | 1188 | 1211 | 0.2
2 | 40 | 2330 | 2396 | 2462 | 2.2 | 2343 | 2416 | 2500 | 3.0 | 2364 | 2434 | 2507 | 3.8 | 2416 | 2428 | 2440 | 3.5 | 2346 | 2355 | 2400 | 0.4
2 | 60 | 3523 | 3612 | 3676 | 2.9 | 3543 | 3656 | 3739 | 4.1 | 3608 | 3703 | 3814 | 5.5 | 3601 | 3667 | 3733 | 4.5 | 3533 | 3556 | 3582 | 1.3
2 | 80 | 4738 | 4813 | 4965 | 3.2 | 4769 | 4870 | 5008 | 4.4 | 4796 | 4911 | 5049 | 5.3 | 4905 | 4959 | 5013 | 6.3 | 4690 | 4705 | 4790 | 0.9
6 | 20 | 386 | 407 | 435 | 12.4 | 392 | 419 | 453 | 15.7 | 390 | 425 | 486 | 17.3 | 397 | 406 | 414 | 12.0 | 369 | 371 | 380 | 2.4
6 | 40 | 752 | 767 | 1192 | 7.0 | 765 | 796 | 825 | 11.1 | 681 | 807 | 891 | 12.5 | 754 | 765 | 776 | 6.7 | 720 | 722 | 733 | 0.6
6 | 60 | 1111 | 1125 | 1142 | 5.0 | 1120 | 1175 | 1208 | 9.7 | 1131 | 1206 | 1284 | 12.6 | 1163 | 1177 | 1190 | 9.9 | 1078 | 1080 | 1094 | 0.8
6 | 80 | 1552 | 2346 | 2388 | 64.2 | 1533 | 1588 | 1661 | 11.1 | 1597 | 1633 | 1672 | 14.3 | 1613 | 1619 | 1624 | 13.3 | 1474 | 1477 | 1505 | 3.4
12 | 20 | 176 | 188 | 205 | 7.6 | 171 | 185 | 206 | 5.8 | 181 | 194 | 214 | 11.1 | 187 | 189 | 191 | 8.0 | 189 | 202 | 212 | 15.6
12 | 40 | 365 | 447 | 538 | 28.8 | 382 | 420 | 467 | 21.1 | 371 | 433 | 488 | 24.8 | 394 | 404 | 413 | 16.3 | 368 | 371 | 402 | 6.9
12 | 60 | 624 | 682 | 775 | 31.5 | 569 | 640 | 848 | 23.3 | 597 | 636 | 719 | 22.5 | 578 | 590 | 601 | 13.6 | 520 | 523 | 530 | 0.8
12 | 80 | 695 | 765 | 793 | 10.9 | 719 | 792 | 846 | 14.7 | 723 | 841 | 991 | 21.9 | 808 | 813 | 818 | 17.8 | 701 | 706 | 731 | 2.3

Jouhari, H.; Lei, D.; Al-qaness, M.A.A.; Elaziz, M.A.; Damaševičius, R.; Korytkowski, M.; Ewees, A.A. Modified Harris Hawks Optimizer for Solving Machine Scheduling Problems. Symmetry 2020, 12, 1460. https://doi.org/10.3390/sym12091460

