Article

A Local Search-Based Generalized Normal Distribution Algorithm for Permutation Flow Shop Scheduling

1 Department of Computer Science, Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Egypt
2 Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
3 Department of Computational Mathematics, Science, and Engineering (CMSE), College of Engineering, Michigan State University, East Lansing, MI 48824, USA
4 Artificial Intelligence and Information Systems Research Group, School of Computing, Engineering and Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK
5 Department of Statistics and Operations Research, College of Science, King Saud University, Riyadh 11451, Saudi Arabia
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(11), 4837; https://doi.org/10.3390/app11114837
Submission received: 1 April 2021 / Revised: 21 May 2021 / Accepted: 22 May 2021 / Published: 25 May 2021
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

This paper studies the performance of the generalized normal distribution optimization algorithm (GNDO) in tackling the permutation flow shop scheduling problem (PFSSP). Because the PFSSP is a discrete problem and GNDO generates continuous values, the largest ranked value rule is used to convert those continuous values into discrete ones so that GNDO becomes applicable to this problem. Additionally, the discrete GNDO is integrated with a local search strategy to improve the quality of the best-so-far solution, yielding a version abbreviated as HGNDO. Moreover, a new improvement based on applying the swap mutation operator to the best-so-far solution, which helps avoid stagnation in local optima and accelerates the convergence speed, is added to HGNDO to propose a new version, namely a hybrid improved GNDO (HIGNDO). Finally, the local search strategy is improved using the scramble mutation operator to exploit each trial as fully as possible, and this improved local search strategy is integrated with the improved GNDO to produce a stronger algorithm abbreviated as IHGNDO. The proposed algorithms are extensively compared with a number of well-established optimization algorithms using various statistical analyses to estimate the optimal makespan for 41 well-known instances in a reasonable time. The findings show the benefits and speedup of both IHGNDO and HIGNDO over all the compared algorithms, in addition to HGNDO.

1. Introduction

The permutation flow shop scheduling problem (PFSSP) is a critical problem that needs to be solved accurately and effectively to minimize the makespan criterion. Solving this problem involves finding the near-optimal permutation of n jobs to be processed sequentially on a set of m machines such that the makespan, i.e., the completion time of the last job on the last machine, is minimized [1]. This problem has significant applications in several fields, especially in industries such as computing design, procurement, and information processing. Owing to its practical significance and its nature, which is normally classified as nondeterministic polynomial time (NP)-hard [1,2,3,4,5,6], several exact, heuristic, and meta-heuristic techniques have been extensively employed for solving this problem. Some of them are surveyed in the rest of this section.
Exact methods such as linear programming [7] and branch and bound [8] can reach the optimal value for small-scale problems, but for medium- and large-scale problems their performance degrades significantly and their computational cost grows exponentially. Therefore, heuristic algorithms have been designed to overcome this expensive computational cost and high dimensionality. Among the heuristic algorithms, the Nawaz-Enscore-Ham (NEH) algorithm employed by Nawaz et al. [9] for solving the PFSSP may be the most effective one, and its results are comparable with those of meta-heuristic algorithms [10,11,12,13], which are used for solving several optimization problems in a reasonable time. Broadly speaking, the image segmentation problem is an indispensable process in image processing, so several image segmentation methods have been suggested, such as clustering, fractal-wavelet techniques [14,15,16,17,18,19,20], region growing, and thresholding; among those techniques, the threshold-based segmentation technique is the most effective, because metaheuristic algorithms can segment images based on this technique with high accuracy [21].
The particle swarm optimization (PSO)-based memetic algorithm (MA) [22], namely PSOMA, has been proposed for tackling the PFSSP as an attempt to find the near-optimal job permutation that minimizes the maximum completion time. In detail, to adapt PSOMA for solving the PFSSP, the authors used a ranked-order rule to convert the continuous values produced by the standard algorithm into discrete ones. In addition, to improve the quality and diversity of the initialized solutions, the NEH algorithm was used. Furthermore, to balance the exploration and exploitation operators, a local search operator was applied to some solutions selected using the roulette wheel mechanism with a specific probability. Ultimately, to avoid being stuck in local minima, PSOMA used simulated annealing with multiple neighborhood search strategies. It is worth mentioning that local search has been combined with PSO for tackling several optimization problems, which confirms that local search has a significant influence on performance after integration; some of those works are comprehensive learning PSO with a local search for multimodal functions [23], PSO with local search [24], and many others [2,25,26,27,28,29].
The cuckoo search-based memetic algorithm (HCS) [30] has been adapted using the largest ranked value rule for tackling the PFSSP. In addition, HCS used the NEH algorithm to initialize the population to achieve better quality and diversity. Furthermore, this algorithm used a fast local search to accelerate the convergence speed in an attempt to improve its exploitation capability. This algorithm was compared with a number of optimization algorithms, namely a hybrid genetic algorithm (HGA), particle swarm optimization with variable neighborhood search, and a differential-evolution-based hybrid algorithm (HDE), on four benchmark instances to verify its efficacy.
The hybrid discrete artificial bee colony algorithm (HDABC) [31] has been adapted for tackling the PFSSP. In HDABC, the initialization step was achieved based on the Greedy Randomized Adaptive Search Procedure (GRASP) with the NEH algorithm to ensure better quality and diversity. After that, discrete operators such as insert, swap, GRASP, and path relinking are used to generate new solutions. Ultimately, a local search strategy was applied to improve the quality of the best-so-far solution in an attempt to improve the searchability of HDABC. HDABC was extensively compared with a number of algorithms: the ant colony system (ACS), PSO embedded with a variable neighborhood search (VNS) (PSOVNS), and PSOMA.
Xie et al. [32] developed a hybrid teaching-learning-based optimization (HTLBO) for tackling the PFSSP. Due to the continuous nature of teaching-learning-based optimization, the largest ranked value rule is used to make it applicable to the PFSSP. In addition, HTLBO used simulated annealing as a local search to improve the quality of the obtained solutions. The differential evolution-based memetic algorithm (ODDE) [33] has also been adapted using the largest ranked value rule for tackling the PFSSP. In the initialization step, ODDE used the NEH algorithm to initialize the solutions with a certain quality and diversity. In ODDE, an approach based on the diversity of the population was used to tune the crossover rate, and the convergence speed of the algorithm was accelerated using opposition-based learning. Finally, ODDE used a local search strategy to avoid being stuck in local minima by improving the best-so-far solution.
The whale optimization algorithm (WOA), integrated with a local search strategy applied to the best solution and with mutation operators, was suggested by Abdel-Basset et al. [34] as a new variant, namely HWA, for tackling the PFSSP. Broadly speaking, HWA used the NEH algorithm in the initialization step to create 10% of the population with a certain diversity and quality in an attempt to avoid being stuck in local minima and reach better outcomes. Afterward, to make WOA applicable to the PFSSP, the LRV rule was used to convert the solutions it generates into forms relevant to this problem. Furthermore, it was integrated with two operators, swap mutation and insert-reversed block, to improve diversity and avoid the local minima problem. Finally, to accelerate the convergence speed toward the optimal solution and avoid being stuck in local minima, it was integrated with a fast local search strategy applied to the best-so-far solution.
In [35], Mishra developed a discrete Jaya optimization algorithm for tackling the PFSSP. Because the standard Jaya algorithm was designed for continuous optimization problems, in contrast to the PFSSP, which is normally classified as a discrete one, the largest order value rule was used to convert those continuous values into discrete ones relevant to the PFSSP. This discrete Jaya algorithm was verified on a set of well-known benchmarks and compared extensively under various statistical analyses with the hybrid genetic algorithm (HGA, 2003), hybrid differential evolution (HDE, 2008), hybrid particle swarm optimization (HPSO, 2008), teaching-learning based optimization (TLBO, 2014), and the hybrid backtracking search algorithm (HBSA, 2015); those competitors are not up to date, and its performance against the recent optimization algorithms published over the last three years is unknown.
The whale optimization algorithm (WOA) [36], improved using a chaos map and then integrated with the NEH algorithm, has been proposed for tackling the PFSSP. In detail, the NEH algorithm and the largest ranked value rule are used in the initialization step of the chaotic WOA (CWA) to initialize the solutions with better quality. After that, CWA used chaotic maps to avoid being stuck in local minima and to accelerate the convergence speed, assisted by two other operators, a crossover operator and reversal-insertion, to improve its exploration capability. Ultimately, CWA used a local search strategy to improve the quality of the best-so-far solution and thereby its exploitation capability. This algorithm was evaluated on various benchmarks and compared with various optimization algorithms to check its superiority.
Further, a new discrete multi-objective approach based on the fireworks algorithm, abbreviated as DMOFWA [37], has recently been proposed for solving the multi-objective flow shop scheduling problem with sequence-dependent setup times (MOFSP-SDST). Within this approach, two machine learning techniques have been integrated: the first, opposition-based learning, was used to improve the exploration operator of the standard algorithm and avert entrapment in local minima, and the second, clustering analysis, was used to cluster the firework individuals.
To overcome the expensive computational costs and local minima problems that most of the above-described algorithms might suffer from, we developed a novel discrete optimization algorithm to tackle the PFSSP in a reasonable time compared to some existing techniques. Recently, a new optimization algorithm, namely generalized normal distribution optimization (GNDO), based on normal distribution theory, was developed by Zhang et al. [38] for tackling the parameter extraction problem of the single-diode and double-diode photovoltaic models. Due to its high ability to estimate the parameter values that minimize the error between the measured and estimated I-V curves, in this paper we investigate its performance for tackling the PFSSP. To make GNDO applicable to the PFSSP, which is a discrete problem in contrast to the continuous problems tackled by the standard GNDO, the largest ranked value (LRV) rule is used to convert those continuous values into job permutations adequate for solving the PFSSP. Furthermore, this discrete GNDO based on the LRV rule is integrated with a local search strategy to avoid being stuck in local minima and reach better outcomes; this version is named hybrid GNDO (HGNDO). In another attempt to improve the quality of HGNDO, it was integrated with the swap mutation operator applied to the best-so-far solution to promote the exploitation capability further; this version is abbreviated as HIGNDO. Finally, to improve the quality of the solutions, the local search strategy is improved using the scramble mutation operator and then integrated with the improved GNDO to produce a new version named IHGNDO. The proposed algorithms, HGNDO, HIGNDO, and IHGNDO, are verified using 41 well-known instances widely used in the literature and compared with a number of recent well-established algorithms to verify their efficacy using various performance metrics. The experimental results affirm the superiority of IHGNDO and HIGNDO over the other algorithms in terms of standard deviation, computational cost, and makespan. Generally, our contributions in this work include the following:
  • Develop a discrete GNDO using the LRV rule for the PFSSP.
  • Improve GNDO using the swap mutation operator to avoid being stuck in local minima.
  • Enhance the local search strategy using the scramble mutation operator to accelerate the convergence speed toward the near-optimal solution.
  • Integrate the improved local search strategy and the standard one with the improved GNDO and GNDO for tackling the PFSSP.
  • The experimental findings show that IHGNDO and HIGNDO are better in terms of standard deviation, computational cost, and final accuracy.
This work is organized as follows: Section 2 explains the PFSSP; Section 3 describes the standard generalized normal distribution optimization algorithm; Section 4 explains the proposed algorithm; Section 5 includes the results and discussion; and Section 6 illustrates our conclusions and future work.

2. Description of the Permutation Flow Shop Scheduling Problem

Assume that n jobs are processed sequentially on m machines in the permutation that minimizes the makespan; this problem is known as the permutation flow shop scheduling problem (PFSSP). The makespan is measured in time units such as seconds or milliseconds. Therefore, to solve this problem, the best permutation c* that minimizes the completion time of the last job on the last machine must be accurately extracted. In general, the following points summarize the PFSSP: (1) on each machine, each job j_b (b = 1, 2, ..., n) can run just once, where n is the number of jobs; (2) only one job can be executed on a machine i_z (z = 1, 2, ..., m) at a time, with processing time PT, where m is the number of machines; (3) each job j_b has a completion time c on a machine i_z, symbolized as c(j_b, i_z); (4) the processing time of each job comprises the set-up time of the machine and the running time; and (5) each job is available at time 0. Mathematically, the PFSSP can be modeled as follows:
$$c(j_1, i_1) = PT_{j_1, i_1} \qquad (1)$$
$$c(j_b, i_1) = c(j_{b-1}, i_1) + PT_{j_b, i_1}, \qquad b = 2, 3, \ldots, n \qquad (2)$$
$$c(j_1, i_z) = c(j_1, i_{z-1}) + PT_{j_1, i_z}, \qquad z = 2, 3, \ldots, m \qquad (3)$$
$$c(j_b, i_z) = \max\left(c(j_{b-1}, i_z),\, c(j_b, i_{z-1})\right) + PT_{j_b, i_z}, \qquad b = 2, \ldots, n, \; z = 2, \ldots, m \qquad (4)$$
In our work, the objective function used by the suggested algorithm to evaluate each solution is described as follows:
$$f(J_i) = c(j_n, i_m) \qquad (5)$$
where J_i is the job permutation of the i-th solution. This objective function is used to evaluate each permutation extracted by the algorithms, and the permutation with the smallest makespan is considered the best.
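For concreteness, the recurrences in Equations (1)-(5) can be evaluated directly. The following minimal Python sketch (not part of the original study; the function and variable names are illustrative) computes the makespan of a given job permutation from an n × m processing-time matrix:

```python
import numpy as np

def makespan(perm, pt):
    """Completion time of the last job on the last machine, Equations (1)-(5).

    perm: a job permutation (sequence of the indices 0..n-1).
    pt:   processing-time matrix of shape (n, m); pt[j, z] is the time of job j on machine z.
    """
    n, m = len(perm), pt.shape[1]
    c = np.zeros((n, m))
    c[0, 0] = pt[perm[0], 0]                       # Equation (1)
    for b in range(1, n):                          # Equation (2): first machine
        c[b, 0] = c[b - 1, 0] + pt[perm[b], 0]
    for z in range(1, m):                          # Equation (3): first job
        c[0, z] = c[0, z - 1] + pt[perm[0], z]
    for b in range(1, n):                          # Equation (4): remaining cells
        for z in range(1, m):
            c[b, z] = max(c[b - 1, z], c[b, z - 1]) + pt[perm[b], z]
    return c[-1, -1]                               # Equation (5): the makespan
```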

3. Standard Algorithm: Generalized Normal Distribution Optimization

Zhang et al. [38] developed a new optimization algorithm based on normal distribution theory to tackle the parameter estimation problem of photovoltaic models (the single-diode and double-diode models); this algorithm is called generalized normal distribution optimization (GNDO). The mathematical model of GNDO is described in the rest of this section.

3.1. Exploitation Operator

This operator is utilized to search extensively around the best-so-far solution X* to check whether better solutions exist, in an attempt to accelerate the convergence speed. In GNDO, this operator is designed based on searching around the mean μ_i of X*, the current i-th solution X_i^t, and the mean M of all solutions at generation t, where M is calculated according to Equation (8) and μ_i is computed using Equation (7). After that, GNDO exploits the solutions around this mean using a step size computed according to Equation (9) and the factor η of Equation (10) to generate a new trial solution T_i^t using Equation (6), which accelerates the convergence speed and improves the quality of the solutions. T_i^t is carried over to the next generation if its objective value is better than that of X_i^t.
$$T_i^t = \mu_i + \delta_i \times \eta, \qquad i = 1, 2, \ldots, N \qquad (6)$$
$$\mu_i = \frac{X_i^t + X^* + M}{3} \qquad (7)$$
$$M = \frac{1}{N}\sum_{i=1}^{N} X_i^t \qquad (8)$$
$$\delta_i = \sqrt{\frac{1}{3}\left[\left(X_i^t - \mu_i\right)^2 + \left(X^* - \mu_i\right)^2 + \left(M - \mu_i\right)^2\right]} \qquad (9)$$
$$\eta = \begin{cases} \sqrt{-\log \lambda_1} \times \cos\left(2\pi \lambda_2\right), & r_1 \le r_2 \\ \sqrt{-\log \lambda_1} \times \cos\left(2\pi \lambda_2 + \pi\right), & r_1 > r_2 \end{cases} \qquad (10)$$
r_1, r_2, λ_1, and λ_2 are four random numbers generated in the interval [0, 1].
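As a complement to Equations (6)-(10), the exploitation move can be vectorized over the whole population. The sketch below is an illustrative Python implementation under the reconstruction given above; it is not the authors' code, and the name gndo_exploitation is ours:

```python
import numpy as np

def gndo_exploitation(X, best):
    """Generate trial solutions T for all individuals using Equations (6)-(10).

    X:    population matrix of shape (N, n) holding continuous decision variables.
    best: the best-so-far continuous solution X* of length n.
    """
    N, n = X.shape
    M = X.mean(axis=0)                                                         # Equation (8)
    mu = (X + best + M) / 3.0                                                  # Equation (7)
    delta = np.sqrt(((X - mu) ** 2 + (best - mu) ** 2 + (M - mu) ** 2) / 3.0)  # Equation (9)
    lam1, lam2 = np.random.rand(N, n), np.random.rand(N, n)
    r1, r2 = np.random.rand(N, 1), np.random.rand(N, 1)
    eta = np.where(r1 <= r2,                                                   # Equation (10)
                   np.sqrt(-np.log(lam1)) * np.cos(2 * np.pi * lam2),
                   np.sqrt(-np.log(lam1)) * np.cos(2 * np.pi * lam2 + np.pi))
    return mu + delta * eta                                                    # Equation (6)
```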

3.2. Exploration Operator

However, μ_i may be a local minimum, in which case searching around it does not improve the quality of the solutions. Therefore, the exploration operator is used to explore the search space as much as possible to avoid being stuck in local minima. In mathematical terms, this operator is formulated as follows:
$$T_i^t = X_i^t + \beta \times \left(\left|\lambda_3\right| \times v_1\right) + (1 - \beta) \times \left(\left|\lambda_4\right| \times v_2\right) \qquad (11)$$
λ_3 and λ_4 are two random numbers drawn from the standard normal distribution; β is a random number generated between 0 and 1. v_1 and v_2 are generated as follows:
$$v_1 = \begin{cases} X_i^t - X_{a_1}^t, & \text{if } f(X_i^t) < f(X_{a_1}^t) \\ X_{a_1}^t - X_i^t, & \text{otherwise} \end{cases}$$
$$v_2 = \begin{cases} X_{a_2}^t - X_{a_3}^t, & \text{if } f(X_{a_2}^t) < f(X_{a_3}^t) \\ X_{a_3}^t - X_{a_2}^t, & \text{otherwise} \end{cases}$$
a_1, a_2, and a_3 are three indices selected randomly from the population such that a_1 ≠ a_2 ≠ a_3 ≠ i. The exploration and exploitation operators are selected at random during the optimization process.
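Similarly, the exploration move of Equation (11), together with the definitions of v_1 and v_2, can be sketched as follows (an illustrative Python version under the same assumptions; the function name is ours):

```python
import numpy as np

def gndo_exploration(X, fit):
    """Generate trial solutions T for all individuals using Equation (11).

    X:   population matrix of shape (N, n).
    fit: objective value (makespan) of each row of X; smaller is better.
    """
    N, n = X.shape
    T = np.empty_like(X)
    for i in range(N):
        # three mutually distinct random indices, all different from i
        a1, a2, a3 = np.random.choice([k for k in range(N) if k != i], 3, replace=False)
        v1 = X[i] - X[a1] if fit[i] < fit[a1] else X[a1] - X[i]
        v2 = X[a2] - X[a3] if fit[a2] < fit[a3] else X[a3] - X[a2]
        beta = np.random.rand()
        lam3, lam4 = np.random.randn(n), np.random.randn(n)   # standard normal values
        T[i] = X[i] + beta * np.abs(lam3) * v1 + (1 - beta) * np.abs(lam4) * v2
    return T
```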

4. The Proposed Work

In this section, the initialization step, the swap and scramble mutation operators, and the improved local search strategy that comprise the proposed algorithms are discussed in detail.

4.1. Initialization

In the beginning, N solutions with n dimensions each are generated and initialized with distinct random integers between 0 and n − 1, i.e., random job permutations. After that, those solutions are evaluated, and the one with the smallest makespan is carried over to the next generation as the best-so-far solution. The end of this phase marks the start of the optimization process used to improve the initial solutions. However, the updated solutions generated by GNDO are continuous, not discrete as required by the PFSSP, so the largest ranked value (LRV) rule is used to convert the continuous values generated by GNDO into a job permutation. The LRV rule ranks the positions of the updated solution by their values, assigning the first order of the job permutation to the largest value, the second order to the second-largest value, and so on. Table 1 presents a simple example illustrating how the LRV rule generates a job permutation from an updated solution T_i^t.
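The LRV conversion itself is a simple ranking operation. A minimal Python sketch (illustrative only) that reproduces the example of Table 1 is:

```python
import numpy as np

def lrv(solution):
    """Largest ranked value rule: map a continuous vector to a job permutation.

    The position holding the largest value receives job 0, the position with the
    second-largest value receives job 1, and so on (cf. Table 1).
    """
    order = np.argsort(-np.asarray(solution))      # positions sorted by descending value
    perm = np.empty(len(solution), dtype=int)
    perm[order] = np.arange(len(solution))
    return perm

# Example from Table 1:
# lrv([0.1, 0.5, 0.8, 0.2, 0.6, 0.7, 0.9]) -> [6, 4, 1, 5, 3, 2, 0]
```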

4.2. Swap Mutation Operator

This mutation operator is extensively used for solving permutation problems by swapping the values of two positions selected randomly in a solution. In the proposed algorithm, this operation is applied to the best-so-far solution 0.1 × n times (see Algorithm 2) to search for other solutions with a smaller makespan than the current best-so-far one. Figure 1 gives an example of the swap mutation operator: Figure 1a shows the order of the positions before using this mutation operator, while Figure 1b shows the order after swapping the value in the third position with the value in the seventh position.

4.3. Scramble Mutation Operator

In this operator, two positions are randomly picked, and the jobs between those two positions are shuffled and re-inserted, as depicted in Table 2.
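Both mutation operators of Sections 4.2 and 4.3 act directly on a job permutation. A minimal Python sketch of the two operators (illustrative names and implementation, not the authors' code) is given below:

```python
import random

def swap_mutation(perm):
    """Exchange the jobs at two randomly selected positions (Section 4.2)."""
    p = list(perm)
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def scramble_mutation(perm):
    """Shuffle the jobs between two randomly selected positions (Section 4.3)."""
    p = list(perm)
    i, j = sorted(random.sample(range(len(p)), 2))
    segment = p[i:j + 1]
    random.shuffle(segment)
    p[i:j + 1] = segment
    return p
```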

4.4. Improved Local Search Strategy (ILSS)

Additionally, in this work, a local search strategy is used to explore the solutions around the best-so-far solution in order to find better ones. According to a specific probability LSP, this strategy tries each job of the best-so-far solution in all positions within this best solution to find a permutation with a better makespan than the current best-so-far one. This strategy is applied only to the best-so-far solution because it might already be close to the optimal solution and need only small changes to reach it. This local search is integrated with the GNDO improved by the swap mutation operator to generate a version for tackling the PFSSP abbreviated as HIGNDO. In addition, in some cases, small changes may consume a large number of iterations without any benefit, so, in this research, a new addition to this LSS is made to apply larger changes to the best-so-far solution in the hope of finding a better solution. This addition is based on using the scramble mutation operator together with the LSS to explore more permutations. This improved local search strategy is abbreviated as ILSS, and its steps are listed in Algorithm 1. In Algorithm 2, the steps of the improved GNDO (IGNDO) using the swap mutation operator, hybridized with the LSS without the scramble mutation operator, are described to produce a version for tackling the PFSSP known as HIGNDO. A new version using ILSS with IGNDO is developed to verify the efficacy of our improvement to the LSS; this version is abbreviated as IHGNDO and is depicted in Figure 2.
Algorithm 1 Improved LSS (ILSS).
Input: X *
  • For i = 1: n
  •    X = X *
  •   For j = 1: n
  •      r: generate a random number between 0 and 1.
  •     If (r < LSP)
  •        X j = X i * (i.e., place the i-th job of X * at position j of X)
  •       Apply the scramble mutation operator to X
  •       Calculate the fitness of X .
  •       Update X * if X is better.
  •     End if
  •   End for
  • End for
Return X *
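The following Python sketch mirrors Algorithm 1 under one reasonable reading of its insertion step (job X*_i of the best-so-far permutation is re-inserted at position j); it is not the authors' implementation, and it reuses the scramble_mutation helper sketched above. With use_scramble=False it reduces to the plain LSS used inside HIGNDO.

```python
import random

def improved_local_search(best, lsp, objective, use_scramble=True):
    """Sketch of Algorithm 1 (ILSS); objective returns the makespan of a permutation."""
    best = list(best)
    best_fit = objective(best)
    n = len(best)
    for i in range(n):
        base = list(best)                  # X = X*
        for j in range(n):
            if random.random() < lsp:      # try this move with probability LSP
                x = list(base)
                job = best[i]
                x.remove(job)              # re-insert job i of X* at position j
                x.insert(j, job)
                if use_scramble:           # Line 7 of Algorithm 1 (omitted in HIGNDO)
                    x = scramble_mutation(x)
                fit = objective(x)
                if fit < best_fit:         # update X* if the trial is better
                    best, best_fit = x, fit
    return best, best_fit
```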
Algorithm 2 HIGNDO.
Input: N, t m a x
  • t = 0
  • Initialization phase.
  • While  t < t m a x
  •   For  i = 1 : N
  •      Create a number α randomly between [0, 1].
  •      Create a number α 1 randomly between [0, 1].
  •     If  α > α 1
  •       Calculate M using Equation (8)
  •       Calculate μ i , δ i , and η using Equations (7), (9), and (10)
  •       Calculate T i t using Equation (6).
  •      If  f ( T i t ) < f ( X i t )
  •          X i t = T i t
  •      End If
  •     Else
  •       Compute T i t according to Equation (11).
  •      If  f ( T i t ) < f ( X i t )
  •          X i t = T i t
  •      End If
  •     End If
  •     For  j = 1 : 0.1 * n
  •        T : Applying the swap mutation on the best-so-far solution.
  •      If  f ( T ) < f ( X * )
  •           X * = T
  •       End If
  •      End for
  •      Apply Algorithm 1 without Line 7 (i.e., the LSS without the scramble mutation operator).
  •    End For
  •     t + + ;
  • End while
Output: return X *
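To show how the components above interact, the following Python skeleton follows the structure of Algorithm 2; it is a hedged sketch rather than the authors' code. It reuses the helper sketches given earlier (makespan, lrv, gndo_exploitation, gndo_exploration, swap_mutation, improved_local_search), the default value of lsp is an assumption, and setting use_scramble=True turns it into the IHGNDO variant.

```python
import numpy as np

def higndo(pt, N=50, t_max=200, lsp=0.02, use_scramble=False):
    """Skeleton of Algorithm 2 (HIGNDO); use_scramble=True gives IHGNDO."""
    n = pt.shape[0]
    obj = lambda p: makespan(p, pt)
    X = np.random.rand(N, n)                                 # continuous positions
    fit = np.array([obj(lrv(x)) for x in X])
    best_perm, best_fit = lrv(X[fit.argmin()]), fit.min()
    for _ in range(t_max):
        x_star = X[fit.argmin()]                             # continuous best-so-far
        exploit = gndo_exploitation(X, x_star)               # Equations (6)-(10)
        explore = gndo_exploration(X, fit)                   # Equation (11)
        pick = np.random.rand(N, 1) > np.random.rand(N, 1)   # alpha > alpha_1
        T = np.where(pick, exploit, explore)
        f_T = np.array([obj(lrv(t)) for t in T])
        better = f_T < fit                                   # greedy replacement
        X[better], fit[better] = T[better], f_T[better]
        for _ in range(max(1, int(0.1 * n))):                # swap mutation on X*
            cand = swap_mutation(best_perm)
            cand_fit = obj(cand)
            if cand_fit < best_fit:
                best_perm, best_fit = cand, cand_fit
        best_perm, best_fit = improved_local_search(best_perm, lsp, obj, use_scramble)
        if fit.min() < best_fit:                             # keep the overall best
            best_perm, best_fit = lrv(X[fit.argmin()]), fit.min()
    return best_perm, best_fit
```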

5. Results and Comparisons

In our experiments, the proposed algorithms are extensively validated on three benchmarks commonly used in the literature: (1) the first is the Carlier dataset, with eight instances whose number of jobs ranges between 7 and 14 and whose number of machines lies between 4 and 9 [39]; (2) the second is the Reeves dataset with 21 instances, where the number of jobs and the number of machines range between 20 and 75, and 5 and 20, respectively [40]; and (3) the third is the Heller dataset, which involves two instances with a number of jobs between 20 and 100 and 10 machines [41]. Those datasets are taken from [42]; their characteristics in terms of the number of jobs and machines and the best-known makespan Z* are given in Table 3. Furthermore, the proposed algorithms are extensively compared with a number of well-established optimization algorithms: the sine cosine algorithm (SCA) [43], salp swarm algorithm (SSA) [44], whale optimization algorithm (WOA) [34], genetic algorithm (GA), equilibrium optimization algorithm (EOA) [45], marine predators optimization algorithm (MPA) [42], and a hybrid tunicate swarm algorithm (HTSA) [46], each integrated with the local search strategy to ensure a fair comparison, and their efficacy is verified in terms of six performance metrics: average relative error (ARE), worst relative error (WRE), best relative error (BRE), average makespan (Avg), standard deviation (SD), and computational cost (time in milliseconds (ms)). BRE indicates how close the best-obtained makespan Z_B is to the best-known one and is formulated as follows:
$$BRE = \frac{\left|Z^* - Z_B\right|}{Z^*}$$
Meanwhile, WRE, calculated using the following equation, is a metric used to assess how far the worst-obtained makespan Z_w is from the best-known one.
$$WRE = \frac{\left|Z^* - Z_w\right|}{Z^*}$$
Regarding ARE, it shows the relative error between the average makespan value over 30 independent runs and the best-known one. Mathematically, ARE is modeled as follows:
$$ARE = \frac{\left|Z^* - Z_{Avg}\right|}{Z^*}$$
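These three metrics can be computed directly from the makespans recorded over the 30 runs of an algorithm on one instance; the short Python helper below (illustrative only) implements the three equations above:

```python
def relative_errors(z_star, makespans):
    """Return (BRE, WRE, ARE) for a list of makespans against the best-known Z*."""
    z_b, z_w = min(makespans), max(makespans)
    z_avg = sum(makespans) / len(makespans)
    bre = abs(z_star - z_b) / z_star
    wre = abs(z_star - z_w) / z_star
    are = abs(z_star - z_avg) / z_star
    return bre, wre, are
```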
The algorithms used in our experiments after integrating the local search are named hybrid SCA (HSCA) [43], hybrid SSA (HSSA) [44], hybrid WOA (HWOA) [34], hybrid GA (HGA), hybrid EOA (HEOA) [45], hybrid MPA (HMPA) [42], and hybrid TSA (HTSA) [46]. Their parameters were assigned after extensive experiments. The EOA has two parameters, a_1 (exploration factor) and a_2 (exploitation factor), which need to be accurately estimated; after several experiments for extracting their optimal values, we noted that all observed values for a_2 converged to similar outcomes, so it is set to 1 as in the standard algorithm, while a_1, which is responsible for the exploration operator, is assigned a value of 2 estimated after several experiments, as pictured in Figure 3a. The SSA is a self-adaptive algorithm, since it does not have parameters to be assigned before beginning the optimization process; on the other hand, the HSCA has one parameter, called a, responsible for determining where the algorithm will search for the near-optimal solution, and its value was set to 3, as shown in Figure 3b. The HMPA has one parameter P, called the scaling factor, which is set in our experiments as cited in the standard algorithm because we found that it has no effect on the performance of the algorithm while solving this problem. Finally, the HTSA has two effective parameters, namely x_max and x_min, representing the initial and subordinate speeds for social interaction; they are assigned to 1 and 2, as described in Figure 3c,d, which depict the outcomes of their tuning using various values. The HGA used values of 0.02 and 0.8 for the mutation and crossover probabilities, respectively, as recommended in [40]. All algorithms were executed under those parameters for 30 independent runs within the same environment, with a maximum number of iterations of 200 and a population size of 50.

5.1. Comparison under Carlier

This section validates the performance of the algorithms on the Carlier instances to show the efficacy of each one. Each algorithm is run 30 independent times on each of the eight instances of the Carlier dataset, and then the various performance metrics are calculated and presented in Table 4, which shows the superiority of IHGNDO, HIGNDO, and HGNDO on most test cases. Broadly speaking, IHGNDO could reach the best-known value for all instances and achieve a value of 0 for ARE, WRE, BRE, and SD, in addition to outperforming the others in the time metric for two instances. Meanwhile, HIGNDO could attain the best-known values of seven instances within all independent runs, while it could not reach the best-known value of the Car04 instance in every run. In addition, HIGNDO is the best for the time metric in five instances. Generally, IHGNDO occupies the first rank for the makespan metric and the second rank after HIGNDO in terms of CPU time. Additionally, Figure 4 presents the average of ARE, WRE, and BRE over all instances, which shows that IHGNDO occupies the first rank for WRE and ARE, while it is competitive with the others in terms of BRE. Regarding the SD, average makespan, and time metrics depicted in Figure 5, HIGNDO comes first, before IHGNDO, for the time metric, while IHGNDO is the best for the SD and Avg metrics. Ultimately, Figure 6, Figure 7 and Figure 8 compare the makespan values obtained by the different algorithms based on boxplots. Those figures show the superiority of IHGNDO in terms of the average makespan. From the above analysis, IHGNDO achieves positive outcomes in a reasonable time, which makes it a strong alternative to the existing algorithms developed for tackling the PFSSP.

5.2. Comparison under Reeves

In this subsection, the proposed algorithms are verified on the Reeves instances and compared to some state-of-the-art algorithms to show their superiority. After running the algorithms and calculating the metrics, the values are introduced in Table 5 and Table 6 to observe the performance of the algorithms. Observing those tables shows the superiority of the proposed algorithms IHGNDO, HIGNDO, and HGNDO for most performance metrics in most test cases. To confirm that, Figure 9 and Figure 10 show the average of each performance metric over all instances in the Reeves benchmark; those figures elaborate the superiority of HIGNDO over the others in terms of BRE, ARE, and Avg makespan, while IHGNDO outperforms in terms of SD and comes in sixth rank for the time metric. Since the proposed algorithms outperform the others in terms of final accuracy in a reasonable time, they are a strong alternative to the existing algorithms adapted for tackling the same problem. In addition, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19 show the boxplots of the makespan values obtained by the various algorithms on the instances from reC01 to reC17, which confirm the superiority of IHGNDO and HIGNDO in comparison to the others.

5.3. Comparison under Heller

Here, the proposed algorithms are compared to the other algorithms on the Heller instances. Table 7 exposes various performance metric values, which show the superiority of IHGNDO in terms of ARE and Z_Avg for the Hel1 instance and its competitiveness with HIGNDO on Hel2 in terms of WRE, ARE, time, SD, and Z_Avg. Furthermore, Figure 20 and Figure 21 show the average of WRE, ARE, SD, time, Avg makespan, and BRE; those figures show that IHGNDO is the best in terms of ARE, WRE, and Avg makespan, HIGNDO is superior for the time and SD metrics, and all algorithms are competitive for the BRE metric. Figure 22 and Figure 23 depict the boxplots of the makespan values produced in 30 independent runs on Hel1 and Hel2 using the various optimization algorithms. From those figures, it is concluded that IHGNDO is the best.

6. Conclusions and Future Work

As a new attempt to produce an algorithm that can tackle the permutation flow shop scheduling problem (PFSSP), in this paper we investigate the performance of a novel optimization algorithm, namely generalized normal distribution optimization (GNDO), for solving this problem. Due to the continuous nature of GNDO and the discreteness of the PFSSP, the largest ranked value (LRV) rule is used to make GNDO applicable for solving this problem. In an attempt to improve the performance of the discrete GNDO, a new version, namely hybrid GNDO (HGNDO), is developed based on applying a local search strategy to improve the quality of the global best solution. In addition, GNDO is further improved by applying the swap mutation operator to the best-so-far solution to find better solutions, and this improvement is integrated with HGNDO to produce a new version, namely HIGNDO. Finally, the scramble mutation operator is integrated with the local search strategy to utilize each attempt made by this local search for improving the best-so-far solution as much as possible; this local search is used with the GNDO improved by the swap mutation operator to produce a strong version abbreviated as IHGNDO for tackling the PFSSP. To validate the performance of the algorithms accurately, 41 common instances widely used in the literature are employed. Additionally, to check the superiority of the proposed algorithms, they are extensively compared with some well-established, recently-published optimization algorithms using various performance metrics. The findings show that HIGNDO and IHGNDO are superior in terms of standard deviation, CPU time, and makespan. Those findings also show that IHGNDO is better than HIGNDO for most performance metrics, which confirms the effectiveness of our improvement to the local search strategy. Our future work involves applying the proposed algorithms to tackle other types of flow shop scheduling problems.

Author Contributions

Conceptualization, M.A.-B., R.M., and M.A.; methodology, M.A.-B., R.M., and M.A.; software, M.A.-B. and R.M.; validation, M.A., M.A.-B., and R.M.; formal analysis, M.A.-B., R.M., and M.A.; investigation, S.S.A., V.C., and M.A.; resources, M.A.-B. and R.M.; data curation, M.A.-B., R.M., and M.A.; writing—original draft preparation, M.A.-B., R.M., and M.A.; writing—review and editing, S.S.A., V.C., and M.A.; visualization, M.A.-B., M.A., and R.M.; supervision, M.A., M.A.-B., and S.S.A.; project administration, M.A.-B., R.M., and M.A.; funding acquisition, S.S.A. All authors have read and agreed to the published version of the manuscript.

Funding

This project is funded by King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

The study did not involve humans or animals.

Informed Consent Statement

The study did not involve humans.

Data Availability Statement

The data sets used in this paper are available online at http://people.brunel.ac.uk/~mastjjb/jeb/orlib/files/flowshop1.txt (accessed on 1 March 2021), hosted by Brunel University London. The file flowshop1.txt contains a set of 31 FSP test instances, contributed to OR-Library by Dirk C. Mattfeld (email dirk@uni-bremen.de) and Rob J.M. Vaessens (email robv@win.tue.nl).

Acknowledgments

Research Supporting Project number (RSP−2021/167), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sayadi, M.; Ramezanian, R.; Ghaffari-Nasab, N. A discrete firefly meta-heuristic with local search for makespan minimization in permutation flow shop scheduling problems. Int. J. Ind. Eng. Comput. 2010, 1, 1–10. [Google Scholar] [CrossRef]
  2. Ali, A.B.; Luque, G.; Alba, E. An efficient discrete PSO coupled with a fast local search heuristic for the DNA fragment assembly problem. Inf. Sci. 2020, 512, 880–908. [Google Scholar]
  3. Li, Y.; He, Y.; Liu, X.; Guo, X.; Li, Z. A novel discrete whale optimization algorithm for solving knapsack problems. Appl. Intell. 2020, 50, 3350–3366. [Google Scholar] [CrossRef]
  4. Diab, A.A.; Sultan, H.M.; Do, T.D.; Kamel, O.M.; Mossa, M.A. Coyote optimization algorithm for parameters estimation of various models of solar cells and PV modules. IEEE Access 2020, 8, 111102–111140. [Google Scholar] [CrossRef]
  5. Fidanova, S. Hybrid Ant Colony Optimization Algorithm for Multiple Knapsack Problem. In Proceedings of the 2020 5th IEEE International Conference on Recent Advances and Innovations in Engineering (ICRAIE), Jaipur, India, 1–3 December 2020. [Google Scholar]
  6. Gokalp, O.; Tasci, E.; Ugur, A. A novel wrapper feature selection algorithm based on iterated greedy metaheuristic for sentiment classification. Expert Syst. Appl. 2020, 146, 113176. [Google Scholar] [CrossRef]
  7. Tseng, F.T.; Stafford, E.F., Jr. New MILP models for the permutation flowshop problem. J. Oper. Res. Soc. 2008, 59, 1373–1386. [Google Scholar] [CrossRef]
  8. Madhushini, N.; Rajendran, C. Branch-and-bound algorithms for scheduling in an m-machine permutation flowshop with a single objective and with multiple objectives. Eur. J. Ind. Eng. 2011, 5, 361–387. [Google Scholar] [CrossRef]
  9. Nawaz, M.; Enscore, E.E., Jr.; Ham, I. A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega 1983, 11, 91–95. [Google Scholar] [CrossRef]
  10. Al-Habob, A.A.; Dobre, O.A.; Armada, A.G.; Muhaidat, S. Task scheduling for mobile edge computing using genetic algorithm and conflict graphs. IEEE Trans. Veh. Technol. 2020, 69, 8805–8819. [Google Scholar] [CrossRef]
  11. Montoya, O.; Gil-González, W.; Grisales-Noreña, L. Sine-cosine algorithm for parameters’ estimation in solar cells using datasheet information. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2020. [Google Scholar]
  12. Xiong, L.; Tang, G.; Chen, Y.C.; Hu, Y.X.; Chen, R.S. Color disease spot image segmentation algorithm based on chaotic particle swarm optimization and FCM. J. Supercomput. 2020, 22, 1–15. [Google Scholar] [CrossRef]
  13. Sharma, M.; Garg, R. HIGA: Harmony-inspired genetic algorithm for rack-aware energy-efficient task scheduling in cloud data centers. Eng. Sci. Technol. Int. J. 2020, 23, 211–224. [Google Scholar] [CrossRef]
  14. Berry, M.V.; Lewis, Z.V.; Nye, J.F. On the Weierstrass-Mandelbrot fractal function. Math. Phys. Sci. 1980, 370, 459–484. [Google Scholar]
  15. Guariglia, E.J.E. Entropy and fractal antennas. Entropy 2016, 18, 84. [Google Scholar] [CrossRef]
  16. Yang, L.; Su, H.; Zhong, C.; Meng, Z.; Luo, H.; Li, X.; Tang, Y.Y.; Lu, Y. Hyperspectral image classification using wavelet transform-based smooth ordering. Int. J. Wavelets Multiresolut. Inf. Process. 2019, 17, 1950050. [Google Scholar] [CrossRef]
  17. Guariglia, E.J.E. Harmonic sierpinski gasket and applications. Entropy 2018, 20, 714. [Google Scholar] [CrossRef] [Green Version]
  18. Zheng, X.; Tang, Y.Y.; Zhou, J. A framework of adaptive multiscale wavelet decomposition for signals on undirected graphs. IEEE Trans. Signal Process. 2019, 67, 1696–1711. [Google Scholar] [CrossRef]
  19. Guariglia, E.; Silvestrov, S. Fractional-Wavelet Analysis of Positive Definite Distributions and Wavelets on D′(ℂ). In Engineering Mathematics II; Springer: Berlin/Heidelberg, Germany, 2016; pp. 337–353. [Google Scholar]
  20. Mallat, S.G. A theory for multiresolution signal decomposition: The wavelet representation. In Fundamental Papers in Wavelet Theory; Springer: Berlin/Heidelberg, Germany, 1989; Volume 11, pp. 674–693. [Google Scholar]
  21. Jia, H.; Lang, C.; Oliva, D.; Song, W.; Peng, X. Dynamic harris hawks optimization with mutation mechanism for satellite image segmentation. Remote Sens. 2019, 11, 1421. [Google Scholar] [CrossRef] [Green Version]
  22. Liu, B.; Wang, L.; Jin, Y.-H. An effective PSO-based memetic algorithm for flow shop scheduling. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2007, 37, 18–27. [Google Scholar] [CrossRef]
  23. Cao, Y.; Zhang, H.; Li, W.; Zhou, M.; Zhang, Y.; Chaovalitwongse, W.A. Comprehensive learning particle swarm optimization algorithm with local search for multimodal functions. IEEE Trans. Evol. Comput. 2018, 23, 718–731. [Google Scholar] [CrossRef]
  24. Chen, J.; Qin, Z.; Liu, Y.; Lu, J. Particle swarm optimization with local search. In Proceedings of the 2005 International Conference on Neural Networks and Brain, Beijing, China, 13–15 October 2005. [Google Scholar]
  25. Chen, R.-M.; Shih, H.-F.J.A. Solving university course timetabling problems using constriction particle swarm optimization with local search. Algorithms 2013, 6, 227–244. [Google Scholar] [CrossRef]
  26. Javidi, M.M.; Emami, N. A hybrid search method of wrapper feature selection by chaos particle swarm optimization and local search. Turk. J. Electr. Eng. Comput. Sci. 2016, 24, 3852–3861. [Google Scholar] [CrossRef]
  27. Moslehi, G.; Mahnam, M. A Pareto approach to multi-objective flexible job-shop scheduling problem using particle swarm optimization and local search. Int. J. Prod. Econ. 2011, 129, 14–22. [Google Scholar] [CrossRef]
  28. Wan, C.; Wang, J.; Yang, G.; Gu, H.; Zhang, X. Wind farm micro-siting by Gaussian particle swarm optimization with local search strategy. Renew. Energy 2012, 48, 276–286. [Google Scholar] [CrossRef]
  29. Wang, L.; Singh, C. Reserve-constrained multiarea environmental/economic dispatch based on particle swarm optimization with local search. Eng. Appl. Artif. Intell. 2009, 22, 298–307. [Google Scholar] [CrossRef]
  30. Li, X.; Yin, M. A hybrid cuckoo search via Lévy flights for the permutation flow shop scheduling problem. Int. J. Prod. Res. 2013, 51, 4732–4754. [Google Scholar] [CrossRef]
  31. Liu, Y.-F.; Liu, S.-Y. A hybrid discrete artificial bee colony algorithm for permutation flowshop scheduling problem. Appl. Soft Comput. 2013, 13, 1459–1463. [Google Scholar] [CrossRef]
  32. Xie, Z.; Zhang, C.; Shao, X.; Lin, W.; Zhu, H. An effective hybrid teaching–learning-based optimization algorithm for permutation flow shop scheduling problem. Adv. Eng. Softw. 2014, 77, 35–47. [Google Scholar] [CrossRef]
  33. Li, X.; Yin, M. An opposition-based differential evolution algorithm for permutation flow shop scheduling based on diversity measure. Adv. Eng. Softw. 2013, 55, 10–31. [Google Scholar] [CrossRef]
  34. Abdel-Basset, M.; Manogaran, G.; El-Shahat, D.; Mirjalili, S. A hybrid whale optimization algorithm based on local search strategy for the permutation flow shop scheduling problem. Future Gener. Comput. Syst. 2018, 85, 129–145. [Google Scholar] [CrossRef] [Green Version]
  35. Mishra, A.; Shrivastava, D. A discrete Jaya algorithm for permutation flow-shop scheduling problem. Int. J. Ind. Eng. Comput. 2020, 11, 415–428. [Google Scholar] [CrossRef]
  36. Li, J.; Guo, L.; Li, Y.; Liu, C.; Wang, L.; Hu, H. Enhancing Whale Optimization Algorithm with Chaotic Theory for Permutation Flow Shop Scheduling Problem. Int. J. Comput. Intell. Syst. 2021, 14, 651–675. [Google Scholar] [CrossRef]
  37. He, L.; Li, W.; Zhang, Y.; Cao, Y. A discrete multi-objective fireworks algorithm for flowshop scheduling with sequence-dependent setup times. Swarm Evol. Comput. 2019, 51, 100575. [Google Scholar] [CrossRef]
  38. Zhang, Y.; Jin, Z.; Mirjalili, S. Generalized normal distribution optimization and its applications in parameter extraction of photovoltaic models. Energy Convers. Manag. 2020, 224, 113301. [Google Scholar] [CrossRef]
  39. Carlier, J. Ordonnancements a contraintes disjonctives. Rairo-Oper. Res. 1978, 12, 333–350. [Google Scholar] [CrossRef] [Green Version]
  40. Reeves, C.R. A genetic algorithm for flowshop sequencing. Comput. Oper. Res. 1995, 22, 5–13. [Google Scholar] [CrossRef]
  41. Heller, J. Some numerical experiments for an M× J flow shop and its decision-theoretical aspects. Oper. Res. 1960, 8, 178–184. [Google Scholar] [CrossRef]
  42. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M.; Chakrabortty, R.K.; Ryan, M.J. A Simple and Effective Approach for Tackling the Permutation Flow Shop Scheduling Problem. Mathematics 2021, 9, 270. [Google Scholar] [CrossRef]
  43. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  44. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  45. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  46. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
Figure 1. Depiction of the swap mutation operator.
Figure 2. The steps of the IHGNDO algorithm.
Figure 3. Parameter tuning under the Car01 instance.
Figure 4. Comparison in terms of BRE, ARE, and WRE on Carlier instances.
Figure 5. Comparison in terms of time, SD, and Avg on Carlier instances.
Figure 6. Boxplot for Car03 instance.
Figure 7. Boxplot for Car05 instance.
Figure 8. Boxplot for Car06 instance.
Figure 9. Comparison in terms of BRE, ARE, and WRE on Reeves instances.
Figure 10. Comparison in terms of time, SD, and Avg on Reeves instances.
Figure 11. Boxplot for reC01 instance.
Figure 12. Boxplot for reC03 instance.
Figure 13. Boxplot for reC05 instance.
Figure 14. Boxplot for reC07 instance.
Figure 15. Boxplot for reC09 instance.
Figure 16. Boxplot for reC11 instance.
Figure 17. Boxplot for reC13 instance.
Figure 18. Boxplot for reC15 instance.
Figure 19. Boxplot for reC17 instance.
Figure 20. Comparison in terms of BRE, ARE, and WRE on Heller instances.
Figure 21. Comparison in terms of time, SD, and Avg on Heller instances.
Figure 22. Boxplot for Hel1 instance.
Figure 23. Boxplot for Hel2 instance.
Table 1. Representation of the updated solution T_i^t and the job permutation obtained by the LRV rule.
Position:  0    1    2    3    4    5    6
T_i^t:     0.1  0.5  0.8  0.2  0.6  0.7  0.9
Job:       6    4    1    5    3    2    0
Table 2. Scramble mutation operator.
Table 3. Description of the Carlier, Heller, and Reeves instances.
Name | n | m | Z*   Name | n | m | Z*   Name | n | m | Z*   Name | n | m | Z*
Hel12010516Car07776590Rec1320151930Rec2923152287
Hel210010136Car08888366Rec1520151950Rec3150103045
Car011157038Rec012051247Rec1720151902Rec3350103114
Car021347166Rec032051109Rec1930102017Rec3550103277
Car031257312Rec052051242Rec2130102011Rec3775204951
Car041448003Rec0720101566Rec2330102011Rec3975205087
Car051067720Rec0920101537Rec2530152513Rec4175204960
Car06898505Rec01120101431Rec2730152373
Table 4. Comparison on the Carlier instances.
Instance | Algorithm | Z* | BRE | WRE | ARE | Z_Avg | Time (ms) | SD | Instance | Z* | BRE | WRE | ARE | Z_Avg | Time (ms) | SD
Car01IHGNDO70380.00000.00000.00007038.00000.00770.0000Car0477200.00000.00000.00007720.00000.05580.0000
HIGNDO0.00000.00000.00007038.00000.00780.00000.00000.01310.00147731.16670.063630.1575
HGNDO0.00000.00000.00007038.00000.00790.00000.00000.01310.00287741.90000.194936.3129
HMPA0.00000.00000.00007038.00000.01920.00000.00000.00390.00117728.40001.056910.1114
HWOA0.00000.00000.00007038.00000.00520.00000.00000.00390.00157731.43330.196111.6152
HEO0.00000.01950.00117045.90000.009129.94260.00000.01350.00487756.70000.239337.5501
HSCA0.00000.04560.00437068.26670.083076.61590.00000.04860.00767778.66670.4299104.3537
HSSA0.00000.06170.00487072.10000.0079107.69470.00000.04860.00787780.00000.066872.7736
HTSA0.00000.09970.03957315.66670.3338286.37070.00000.11240.03347977.83330.6940255.2532
HGA0.00000.01690.00187050.60000.010529.16920.00000.01530.00997796.10000.605240.0161
Car02IHGNDO71660.00000.00000.00007166.00000.02000.0000Car0585050.00000.00000.00008505.00000.01340.0000
HIGNDO0.00000.00000.00007166.00000.01510.00000.00000.00000.00008505.00000.01920.0000
HGNDO0.00000.00000.00007166.00000.01520.00000.00000.00760.00088511.50000.071319.5000
HMPA0.00000.02930.00107173.00000.317537.69620.00000.05400.00708564.43330.5601101.8279
HWOA0.00000.00000.00007166.00000.02240.00000.00000.00760.00038507.16670.055211.6679
HEO0.00000.02930.00787222.00000.098092.86550.00000.03960.00768569.70000.163278.3812
HSCA0.00000.11360.03477414.60000.2283344.79380.00000.07700.02238694.86670.5217190.9799
HSSA0.00000.12310.01837297.43330.0313262.85290.00000.03660.01098597.96670.049295.4919
HTSA0.00000.17490.07887730.53330.4812420.79190.00000.12500.04618897.40000.8095345.5301
HGA0.00000.02930.00637211.10000.196684.10880.00000.05820.00848576.16670.4249115.9747
Car03IHGNDO73120.00000.00000.00007312.00000.09530.0000Car0665900.00000.00000.00006590.00000.00670.0000
HIGNDO0.00000.00000.00007312.00000.04550.00000.00000.00000.00006590.00000.00510.0000
HGNDO0.00000.00740.00277331.80000.201826.02230.00000.00000.00006590.00000.00670.0000
HMPA0.00000.02540.00737365.20001.374542.63750.00000.04780.00896648.53330.117581.3858
HWOA0.00000.00740.00427342.60000.249626.75890.00000.00000.00006590.00000.02060.0000
HEO0.00000.01500.00907378.06670.200937.89190.00000.03470.00676634.33330.038063.7377
HSCA0.00000.10020.01467418.73330.4578174.98740.00000.03470.01256672.46670.451777.5735
HSSA0.00000.12650.01807443.30000.0630184.08280.00000.02470.00626631.16670.032953.6452
HTSA0.00000.15700.06317773.06670.6972401.45740.00000.09000.03136796.03330.7878180.7279
HGA0.00000.01500.00847373.66670.614638.45980.00000.04370.00986654.26670.613767.0020
Car04IHGNDO80030.00000.00000.00008003.00000.01390.0000Car0783660.00000.00000.00008366.00000.01110.0000
HIGNDO0.00000.00000.00008003.00000.00820.00000.00000.00000.00008366.00000.00630.0000
HGNDO0.00000.00000.00008003.00000.01460.00000.00000.00000.00008366.00000.00700.0000
HMPA0.00000.00140.00008003.36670.11111.97460.00000.02250.00098373.70000.104134.3581
HWOA0.00000.00000.00008003.00000.02050.00000.00000.00000.00008366.00000.00850.0000
HEO0.00000.01120.00048006.00000.047916.15550.00000.01350.00088372.80000.048425.6013
HSCA0.00000.06590.00688057.36670.1228151.01250.00000.06340.00928443.03330.2099146.8188
HSSA0.00000.09470.01158095.00000.0291212.06600.00000.00000.00008366.00000.01670.0000
HTSA0.00000.13690.04858390.76670.4935366.45140.00000.08650.02338560.83330.4741228.3998
HGA0.00000.00000.00008003.00000.10370.00000.00000.00690.00088372.96670.166317.8783
Table 5. Comparison on the Reeves instances (reC01–reC23).
Inst | Algorithm | Z* | BRE | WRE | ARE | Z_Avg | Time (ms) | SD | Inst | Z* | BRE | WRE | ARE | Z_Avg | Time (ms) | SD
reC01IHGNDO12470.00000.00160.00161248.93330.61320.3590reC1319300.00310.02490.01161952.33330.809912.3216
HIGNDO | 0.0000 | 0.0032 | 0.0015 | 1248.8667 | 0.6007 | 0.7180 | 0.0026 | 0.0212 | 0.0122 | 1953.5000 | 0.8224 | 10.1415
HGNDO | 0.0000 | 0.0144 | 0.0026 | 1250.2667 | 0.9136 | 3.3559 | 0.0052 | 0.0430 | 0.0196 | 1967.8000 | 0.9858 | 17.7696
HMPA | 0.0016 | 0.0265 | 0.0065 | 1255.1000 | 2.4415 | 8.9976 | 0.0093 | 0.0425 | 0.0191 | 1966.7667 | 2.6874 | 14.5709
HWOA | 0.0016 | 0.0465 | 0.0047 | 1252.8333 | 1.1080 | 10.3765 | 0.0026 | 0.0415 | 0.0166 | 1962.0000 | 1.4368 | 17.8419
HEO | 0.0016 | 0.0634 | 0.0133 | 1263.6333 | 0.3845 | 19.3503 | 0.0067 | 0.0974 | 0.0307 | 1989.1667 | 0.3930 | 31.0656
HSCA | 0.0000 | 0.1291 | 0.0112 | 1260.9667 | 0.8572 | 35.4847 | 0.0016 | 0.1528 | 0.0450 | 2016.8000 | 0.9266 | 93.9260
HSSA | 0.0016 | 0.1588 | 0.0401 | 1296.9667 | 0.1210 | 63.0611 | 0.0083 | 0.0383 | 0.0232 | 1974.7667 | 0.1342 | 15.1738
HTSA | 0.0016 | 0.1764 | 0.0640 | 1326.8000 | 1.0840 | 89.2007 | 0.0026 | 0.1741 | 0.0594 | 2044.6333 | 1.1585 | 125.2979
HGA | 0.0016 | 0.0634 | 0.0117 | 1261.5333 | 0.9869 | 19.5699 | 0.0088 | 0.0440 | 0.0231 | 1974.5667 | 1.2510 | 18.9520
reC03 | IHGNDO | 1109 | 0.0000 | 0.0018 | 0.0011 | 1110.2000 | 0.4757 | 0.9798 | reC15 | 1950 | 0.0056 | 0.0200 | 0.0118 | 1973.0667 | 0.8296 | 6.4028
HIGNDO | 0.0000 | 0.0027 | 0.0013 | 1110.4667 | 0.5128 | 1.0873 | 0.0067 | 0.0308 | 0.0125 | 1974.4667 | 0.8251 | 9.7151
HGNDO | 0.0000 | 0.0036 | 0.0013 | 1110.4000 | 0.7720 | 1.1431 | 0.0026 | 0.0400 | 0.0172 | 1983.6000 | 0.9996 | 18.6683
HMPA | 0.0000 | 0.0216 | 0.0035 | 1112.8333 | 2.4883 | 5.8085 | 0.0082 | 0.0426 | 0.0230 | 1994.9333 | 2.6970 | 23.7079
HWOA | 0.0000 | 0.0090 | 0.0013 | 1110.4333 | 0.7880 | 1.9093 | 0.0036 | 0.0426 | 0.0195 | 1988.1000 | 1.4496 | 21.4357
HEO | 0.0000 | 0.0911 | 0.0124 | 1122.7000 | 0.3519 | 19.8899 | 0.0118 | 0.0923 | 0.0298 | 2008.1667 | 0.3997 | 32.4860
HSCA | 0.0000 | 0.1623 | 0.0361 | 1149.0667 | 0.8191 | 57.8653 | 0.0051 | 0.1369 | 0.0295 | 2007.5000 | 0.9629 | 49.8108
HSSA | 0.0018 | 0.1587 | 0.0314 | 1143.8000 | 0.1230 | 55.4403 | 0.0108 | 0.1246 | 0.0314 | 2011.3000 | 0.1352 | 40.7015
HTSA | 0.0000 | 0.1659 | 0.0768 | 1194.2000 | 1.0001 | 67.7655 | 0.0056 | 0.1441 | 0.0634 | 2073.6667 | 1.1775 | 103.7662
HGA | 0.0000 | 0.0379 | 0.0091 | 1119.1000 | 0.8489 | 11.2141 | 0.0082 | 0.0508 | 0.0266 | 2001.8000 | 1.2887 | 25.2645
reC05 | IHGNDO | 1242 | 0.0024 | 0.0024 | 0.0024 | 1245.0000 | 0.6343 | 1.9746 | reC17 | 1902 | 0.0000 | 0.0484 | 0.0225 | 1944.7000 | 0.8176 | 17.6921
HIGNDO | 0.0024 | 0.0113 | 0.0027 | 1245.3667 | 0.6424 | 1.9746 | 0.0000 | 0.0389 | 0.0245 | 1948.5333 | 0.8119 | 15.7623
HGNDO | 0.0024 | 0.0113 | 0.0044 | 1247.5000 | 0.8638 | 3.8536 | 0.0079 | 0.0705 | 0.0319 | 1962.7000 | 1.0376 | 25.6621
HMPA | 0.0024 | 0.0217 | 0.0060 | 1249.4000 | 2.2647 | 6.5605 | 0.0105 | 0.0715 | 0.0337 | 1966.0667 | 2.9044 | 26.5806
HWOA | 0.0024 | 0.0113 | 0.0063 | 1249.8333 | 0.9428 | 4.3134 | 0.0000 | 0.0436 | 0.0302 | 1959.4333 | 1.3958 | 16.1734
HEO | 0.0024 | 0.0217 | 0.0102 | 1254.7000 | 0.3134 | 8.7527 | 0.0131 | 0.0615 | 0.0364 | 1971.2000 | 0.3743 | 20.4163
HSCA | 0.0024 | 0.0902 | 0.0178 | 1264.1333 | 0.7352 | 33.1831 | 0.0047 | 0.1456 | 0.0397 | 1977.4333 | 0.8843 | 57.9555
HSSA | 0.0024 | 0.1272 | 0.0241 | 1271.9000 | 0.1013 | 45.6350 | 0.0110 | 0.0589 | 0.0341 | 1966.9000 | 0.1288 | 22.1696
HTSA | 0.0024 | 0.1449 | 0.0401 | 1291.8000 | 0.9273 | 54.0724 | 0.0131 | 0.1887 | 0.0838 | 2061.4667 | 1.0997 | 108.3872
HGA | 0.0024 | 0.0250 | 0.0061 | 1249.6000 | 0.8465 | 7.5745 | 0.0179 | 0.0657 | 0.0369 | 1972.1333 | 1.1925 | 22.5961
reC07 | IHGNDO | 1566 | 0.0000 | 0.0115 | 0.0070 | 1576.9333 | 0.6078 | 8.4929 | reC19 | 2017 | 0.0436 | 0.0645 | 0.0514 | 2120.7000 | 1.6120 | 9.2273
HIGNDO | 0.0000 | 0.0115 | 0.0039 | 1572.0667 | 0.5225 | 8.4456 | 0.0407 | 0.0649 | 0.0515 | 2120.8000 | 1.5874 | 10.9891
HGNDO | 0.0000 | 0.0115 | 0.0053 | 1574.3667 | 0.6376 | 8.7349 | 0.0446 | 0.0709 | 0.0516 | 2121.0667 | 1.9258 | 11.9022
HMPA | 0.0000 | 0.0383 | 0.0112 | 1583.4667 | 2.3519 | 10.6356 | 0.0471 | 0.1715 | 0.0613 | 2140.7000 | 3.5589 | 43.3083
HWOA | 0.0000 | 0.0115 | 0.0053 | 1574.2333 | 0.9048 | 8.4565 | 0.0436 | 0.0704 | 0.0538 | 2125.4333 | 2.4811 | 14.5113
HEO | 0.0013 | 0.0383 | 0.0160 | 1591.1000 | 0.3607 | 15.4561 | 0.0471 | 0.0788 | 0.0655 | 2149.0333 | 0.5302 | 14.6708
HSCA | 0.0000 | 0.1277 | 0.0272 | 1608.5333 | 0.8516 | 60.5286 | 0.0456 | 0.2152 | 0.0820 | 2182.3000 | 1.2905 | 112.9936
HSSA | 0.0000 | 0.0383 | 0.0153 | 1590.0000 | 0.1187 | 13.4313 | 0.0545 | 0.2181 | 0.0767 | 2171.7000 | 0.2010 | 77.7321
HTSA | 0.0000 | 0.1750 | 0.0684 | 1673.1000 | 1.1179 | 97.7628 | 0.0491 | 0.2583 | 0.1413 | 2302.0667 | 1.5909 | 159.0463
HGA | 0.0000 | 0.0230 | 0.0110 | 1583.3000 | 1.1020 | 7.7981 | 0.0530 | 0.0912 | 0.0680 | 2154.2333 | 1.5769 | 19.2036
reC09 | IHGNDO | 1537 | 0.0000 | 0.0241 | 0.0068 | 1547.4000 | 0.5749 | 11.7774 | reC21 | 2011 | 0.0174 | 0.0224 | 0.0189 | 2049.0000 | 1.6123 | 2.2361
HIGNDO | 0.0000 | 0.0325 | 0.0065 | 1547.0667 | 0.5424 | 13.5153 | 0.0174 | 0.0194 | 0.0187 | 2048.5333 | 1.5865 | 1.9276
HGNDO | 0.0000 | 0.0390 | 0.0085 | 1550.1000 | 0.7766 | 14.3256 | 0.0174 | 0.0214 | 0.0185 | 2048.1333 | 2.0050 | 2.2470
HMPA | 0.0000 | 0.0410 | 0.0201 | 1567.9000 | 2.4736 | 16.1211 | 0.0174 | 0.0254 | 0.0192 | 2049.7000 | 3.7467 | 2.7221
HWOA | 0.0000 | 0.0416 | 0.0176 | 1564.0000 | 1.1298 | 16.4033 | 0.0104 | 0.0194 | 0.0186 | 2048.3333 | 2.5842 | 3.5056
HEO | 0.0085 | 0.0885 | 0.0251 | 1575.5667 | 0.3463 | 21.1624 | 0.0174 | 0.0363 | 0.0238 | 2058.7667 | 0.5330 | 10.2524
HSCA | 0.0065 | 0.1516 | 0.0387 | 1596.4333 | 0.8261 | 66.8265 | 0.0174 | 0.1914 | 0.0344 | 2080.1667 | 1.2679 | 92.3263
HSSA | 0.0072 | 0.1314 | 0.0308 | 1584.2667 | 0.1162 | 40.1355 | 0.0194 | 0.1875 | 0.0395 | 2090.3667 | 0.2040 | 91.4033
HTSA | 0.0000 | 0.1913 | 0.0501 | 1614.0333 | 1.0061 | 89.8719 | 0.0174 | 0.1994 | 0.0864 | 2184.6667 | 1.5538 | 156.4899
HGA | 0.0007 | 0.0416 | 0.0222 | 1571.1667 | 1.0296 | 15.5844 | 0.0174 | 0.0537 | 0.0266 | 2064.5333 | 1.5350 | 17.1692
reC11 | IHGNDO | 1431 | 0.0000 | 0.0210 | 0.0070 | 1441.0000 | 0.6339 | 8.4735 | reC23 | 2011 | 0.0050 | 0.0234 | 0.0120 | 2035.0333 | 1.6254 | 12.0706
HIGNDO | 0.0000 | 0.0356 | 0.0091 | 1444.0667 | 0.5742 | 13.0024 | 0.0045 | 0.0338 | 0.0131 | 2037.2667 | 1.5891 | 14.7827
HGNDO | 0.0000 | 0.0314 | 0.0153 | 1452.8333 | 0.8737 | 11.5126 | 0.0050 | 0.0264 | 0.0141 | 2039.4333 | 1.8821 | 14.7912
HMPA | 0.0000 | 0.0894 | 0.0175 | 1456.1000 | 2.1720 | 23.8039 | 0.0060 | 0.0363 | 0.0220 | 2055.2333 | 3.4993 | 14.7098
HWOA | 0.0000 | 0.0594 | 0.0176 | 1456.2333 | 1.1155 | 18.7664 | 0.0035 | 0.0318 | 0.0165 | 2044.1333 | 2.4370 | 16.3477
HEO | 0.0049 | 0.0587 | 0.0240 | 1465.3000 | 0.3542 | 21.4043 | 0.0154 | 0.0467 | 0.0298 | 2070.9667 | 0.5241 | 16.7381
HSCA | 0.0000 | 0.1600 | 0.0378 | 1485.0667 | 0.7505 | 72.6443 | 0.0050 | 0.1611 | 0.0278 | 2066.8667 | 1.2726 | 70.7327
HSSA | 0.0000 | 0.1593 | 0.0273 | 1470.0333 | 0.1148 | 49.8233 | 0.0104 | 0.1785 | 0.0418 | 2095.1333 | 0.1994 | 77.4277
HTSA | 0.0000 | 0.1824 | 0.0733 | 1535.9333 | 0.9973 | 108.4789 | 0.0080 | 0.2004 | 0.0755 | 2162.8667 | 1.5456 | 149.6422
HGA | 0.0000 | 0.0496 | 0.0249 | 1466.6000 | 0.9958 | 18.7183 | 0.0050 | 0.0383 | 0.0266 | 2064.5667 | 1.5332 | 16.6046
Table 6. Comparison on the Reeve instances (reC25-reC41).
Inst | Algorithm | Z* | BRE | WRE | ARE | Z_Avg | Time(MS) | SD | Inst | Z* | BRE | WRE | ARE | Z_Avg | Time(MS) | SD
reC25 | IHGNDO | 2513 | 0.0092 | 0.0342 | 0.0220 | 2568.2667 | 1.8688 | 16.9232 | reC35 | 3277 | 0.0000 | 0.0000 | 0.0000 | 3277.0000 | 1.1526 | 0.0000
HIGNDO | 0.0131 | 0.0390 | 0.0259 | 2577.9667 | 1.8352 | 17.1065 | 0.0000 | 0.0000 | 0.0000 | 3277.0000 | 0.9999 | 0.0000
HGNDO | 0.0123 | 0.0390 | 0.0240 | 2573.2667 | 2.0970 | 19.5276 | 0.0000 | 0.0000 | 0.0000 | 3277.0000 | 0.6110 | 0.0000
HMPA | 0.0139 | 0.0493 | 0.0319 | 2593.2000 | 3.8974 | 22.4179 | 0.0000 | 0.0034 | 0.0005 | 3278.7667 | 3.6242 | 3.7299
HWOA | 0.0064 | 0.0458 | 0.0270 | 2580.8667 | 3.0302 | 21.6791 | 0.0000 | 0.0000 | 0.0000 | 3277.0000 | 1.2182 | 0.0000
HEO | 0.0163 | 0.0505 | 0.0373 | 2606.6333 | 0.5912 | 21.3690 | 0.0000 | 0.0275 | 0.0026 | 3285.6333 | 0.9912 | 16.8750
HSCA | 0.0107 | 0.1675 | 0.0493 | 2637.0000 | 1.4448 | 118.6139 | 0.0000 | 0.1202 | 0.0047 | 3292.5333 | 1.6490 | 70.4134
HSSA | 0.0111 | 0.0517 | 0.0362 | 2604.0667 | 0.2289 | 23.8005 | 0.0000 | 0.1428 | 0.0148 | 3325.4000 | 0.4445 | 125.6247
HTSA | 0.0147 | 0.1823 | 0.0729 | 2696.1000 | 1.7504 | 155.5443 | 0.0000 | 0.1385 | 0.0607 | 3476.0333 | 2.6465 | 199.8649
HGA | 0.0171 | 0.0505 | 0.0348 | 2600.3333 | 1.8883 | 19.7017 | 0.0000 | 0.0284 | 0.0029 | 3286.6667 | 2.6794 | 16.4100
reC27 | IHGNDO | 2373 | 0.0088 | 0.0388 | 0.0220 | 2425.1667 | 1.8698 | 17.2937 | reC37 | 4951 | 0.0297 | 0.0562 | 0.0448 | 5172.7667 | 17.3840 | 31.6698
HIGNDO | 0.0097 | 0.0367 | 0.0191 | 2418.3333 | 1.8362 | 18.8279 | 0.0250 | 0.0578 | 0.0410 | 5154.2333 | 17.8260 | 34.6763
HGNDO | 0.0105 | 0.0641 | 0.0222 | 2425.6000 | 2.1352 | 28.8335 | 0.0218 | 0.0523 | 0.0384 | 5141.0000 | 20.0077 | 41.1671
HMPA | 0.0093 | 0.0426 | 0.0226 | 2426.5333 | 3.9037 | 17.9327 | 0.0382 | 0.0673 | 0.0495 | 5196.1333 | 13.1585 | 34.4710
HWOA | 0.0088 | 0.0396 | 0.0197 | 2419.6667 | 3.0528 | 19.2238 | 0.0315 | 0.0529 | 0.0426 | 5161.7333 | 27.3170 | 31.0987
HEO | 0.0147 | 0.0615 | 0.0303 | 2444.8667 | 0.5939 | 25.0941 | 0.0458 | 0.0766 | 0.0562 | 5229.4333 | 2.4868 | 35.6779
HSCA | 0.0097 | 0.1774 | 0.0279 | 2439.1333 | 1.4628 | 67.6307 | 0.0307 | 0.2135 | 0.0681 | 5287.9667 | 6.5152 | 262.3587
HSSA | 0.0139 | 0.1909 | 0.0364 | 2459.4667 | 0.2311 | 71.7378 | 0.0400 | 0.0755 | 0.0563 | 5229.8000 | 1.4355 | 42.7398
HTSA | 0.0122 | 0.2174 | 0.0778 | 2557.7333 | 1.7549 | 192.4243 | 0.0372 | 0.2127 | 0.0757 | 5325.9000 | 7.3935 | 288.5887
HGA | 0.0122 | 0.0590 | 0.0300 | 2444.2333 | 1.8881 | 24.9475 | 0.0404 | 0.0689 | 0.0562 | 5229.4667 | 7.4647 | 38.0059
reC29 | IHGNDO | 2287 | 0.0087 | 0.0468 | 0.0237 | 2341.1667 | 1.8542 | 22.1105 | reC39 | 5087 | 0.0179 | 0.0352 | 0.0255 | 5216.5333 | 17.2536 | 20.5065
HIGNDO | 0.0031 | 0.0704 | 0.0259 | 2346.3333 | 1.8314 | 33.3320 | 0.0090 | 0.0271 | 0.0188 | 5182.6667 | 18.4729 | 22.1891
HGNDO | 0.0057 | 0.0503 | 0.0241 | 2342.1333 | 2.1269 | 25.1869 | 0.0094 | 0.0297 | 0.0208 | 5192.7333 | 20.9996 | 26.3438
HMPA | 0.0144 | 0.0647 | 0.0319 | 2359.8667 | 3.8849 | 27.4733 | 0.0173 | 0.0472 | 0.0293 | 5235.8667 | 13.1493 | 39.1550
HWOA | 0.0092 | 0.0582 | 0.0260 | 2346.5667 | 3.0756 | 28.2756 | 0.0132 | 0.0299 | 0.0202 | 5189.8667 | 27.4727 | 17.3661
HEO | 0.0240 | 0.1552 | 0.0470 | 2394.4667 | 0.6039 | 54.5422 | 0.0283 | 0.0554 | 0.0406 | 5293.4000 | 2.5226 | 38.4106
HSCA | 0.0153 | 0.2239 | 0.0772 | 2463.5000 | 1.4498 | 174.4799 | 0.0155 | 0.1928 | 0.0554 | 5368.9667 | 6.6178 | 299.1470
HSSA | 0.0210 | 0.2147 | 0.0600 | 2424.1667 | 0.2285 | 125.7885 | 0.0348 | 0.2048 | 0.0575 | 5379.3667 | 1.4566 | 230.4488
HTSA | 0.0149 | 0.2317 | 0.0726 | 2453.1000 | 1.7457 | 185.6707 | 0.0161 | 0.2058 | 0.0817 | 5502.6333 | 7.2613 | 387.9788
HGA | 0.0162 | 0.0700 | 0.0359 | 2369.1000 | 1.8481 | 28.7557 | 0.0261 | 0.0499 | 0.0368 | 5274.1000 | 7.0018 | 28.4843
reC31 | IHGNDO | 3045 | 0.0085 | 0.0276 | 0.0207 | 3108.0000 | 4.8699 | 18.6744 | reC41 | 4960 | 0.0302 | 0.0569 | 0.0422 | 5169.2000 | 17.2578 | 31.5546
HIGNDO | 0.0039 | 0.0276 | 0.0131 | 3084.9667 | 4.7237 | 19.3278 | 0.0252 | 0.0466 | 0.0368 | 5142.5667 | 18.9444 | 30.6884
HGNDO | 0.0033 | 0.0276 | 0.0145 | 3089.2000 | 5.7295 | 24.3604 | 0.0204 | 0.0546 | 0.0359 | 5137.9333 | 20.3596 | 38.5650
HMPA | 0.0151 | 0.0309 | 0.0245 | 3119.5000 | 6.4491 | 12.0796 | 0.0310 | 0.0575 | 0.0425 | 5170.6667 | 13.1488 | 34.8715
HWOA | 0.0026 | 0.0348 | 0.0177 | 3098.9333 | 7.4365 | 26.6632 | 0.0236 | 0.0514 | 0.0373 | 5144.8667 | 27.8736 | 29.6533
HEO | 0.0197 | 0.0525 | 0.0334 | 3146.8000 | 1.0823 | 20.8477 | 0.0369 | 0.0722 | 0.0527 | 5221.5333 | 2.4905 | 36.3187
HSCA | 0.0154 | 0.1846 | 0.0547 | 3211.6000 | 2.6588 | 187.0270 | 0.0349 | 0.2119 | 0.0516 | 5216.0667 | 6.6015 | 151.4369
HSSA | 0.0187 | 0.2125 | 0.0695 | 3256.5333 | 0.5026 | 203.7153 | 0.0399 | 0.0704 | 0.0549 | 5232.0667 | 1.4595 | 36.8953
HTSA | 0.0138 | 0.1941 | 0.0663 | 3247.0000 | 3.1755 | 214.0903 | 0.0331 | 0.2407 | 0.0648 | 5281.2000 | 7.3321 | 276.2412
HGA | 0.0223 | 0.0693 | 0.0342 | 3149.2000 | 3.0078 | 27.9552 | 0.0411 | 0.0774 | 0.0547 | 5231.3333 | 7.0161 | 35.3802
reC33 | IHGNDO | 3114 | 0.0058 | 0.0109 | 0.0084 | 3140.1333 | 4.8495 | 2.1868
HIGNDO | 0.0000 | 0.0202 | 0.0085 | 3140.5000 | 4.7668 | 8.2735
HGNDO | 0.0013 | 0.0202 | 0.0078 | 3138.3333 | 5.6585 | 11.2497
HMPA | 0.0083 | 0.0202 | 0.0109 | 3147.9667 | 6.3048 | 13.6808
HWOA | 0.0083 | 0.0083 | 0.0083 | 3140.0000 | 7.0735 | 0.0000
HEO | 0.0071 | 0.0369 | 0.0160 | 3163.9000 | 1.0421 | 20.2227
HSCA | 0.0013 | 0.1532 | 0.0190 | 3173.2000 | 2.6069 | 98.5341
HSSA | 0.0039 | 0.1689 | 0.0250 | 3191.8667 | 0.4736 | 118.6611
HTSA | 0.0022 | 0.1811 | 0.0923 | 3401.2667 | 3.0636 | 240.2365
HGA | 0.0080 | 0.0466 | 0.0148 | 3160.0000 | 2.8183 | 24.0680
Table 7. Comparison on the Heller instances.
Inst | Algorithm | Z* | BRE | WRE | ARE | Z_Avg | Time(MS) | SD | Inst | Z* | BRE | WRE | ARE | Z_Avg | Time(MS) | SD
Hel1 | IHGNDO | 516 | −0.0019 | 0.0000 | −0.0005 | 515.7667 | 6.3275 | 0.4230 | Hel2 | 136 | 0.0000 | 0.0074 | 0.0040 | 136.5333 | 0.6642 | 0.4819
HIGNDO | −0.0019 | 0.0019 | −0.0001 | 515.9333 | 7.0816 | 0.3590 | 0.0000 | 0.0074 | 0.0040 | 136.5333 | 0.5195 | 0.4819
HGNDO | −0.0019 | 0.0058 | −0.0001 | 515.9667 | 13.8456 | 0.7520 | 0.0000 | 0.0147 | 0.0059 | 136.8000 | 0.7227 | 0.5416
HMPA | 0.0000 | 0.0058 | 0.0016 | 516.8333 | 12.3427 | 1.0355 | 0.0000 | 0.0294 | 0.0098 | 137.3333 | 2.3587 | 0.9428
HWOA | −0.0019 | 0.0000 | −0.0002 | 515.9000 | 7.5461 | 0.3000 | 0.0000 | 0.0147 | 0.0044 | 136.6000 | 0.7650 | 0.6110
HEO | −0.0019 | 0.0174 | 0.0045 | 518.3333 | 3.2206 | 2.0221 | 0.0000 | 0.0368 | 0.0154 | 138.1000 | 0.3749 | 1.3503
HSCA | −0.0019 | 0.1105 | 0.0180 | 525.2667 | 6.0083 | 20.0382 | 0.0000 | 0.1176 | 0.0137 | 137.8667 | 0.8138 | 3.5659
HSSA | 0.0000 | 0.1105 | 0.0155 | 524.0000 | 2.0489 | 15.3188 | 0.0000 | 0.1397 | 0.0213 | 138.9000 | 0.1299 | 3.3101
HTSA | −0.0019 | 0.1202 | 0.0470 | 540.2333 | 7.4086 | 27.0502 | 0.0000 | 0.1618 | 0.0551 | 143.5000 | 1.0492 | 8.4370
HGA | −0.0019 | 0.0078 | 0.0028 | 517.4667 | 7.9189 | 1.4314 | 0.0000 | 0.0515 | 0.0162 | 138.2000 | 1.0993 | 1.4697
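For readers who wish to recompute the error statistics reported in the tables above, the columns appear to follow the standard PFSSP convention in which each run's makespan Z is compared against the best-known makespan Z* through the relative error RE = (Z - Z*)/Z*; BRE, WRE, and ARE are then the best, worst, and average RE over the independent runs, reported alongside the average makespan (Z_Avg) and the standard deviation (SD). The negative BRE values on Hel1 are consistent with runs that improved on the reference value. The sketch below is illustrative only: the function name, the sample makespans, and the choice of population standard deviation are assumptions and are not taken from the paper's code.

```python
import statistics

def summarize_runs(makespans, z_star):
    """Summarize independent runs of one algorithm on one instance.

    Assumes the standard relative-error definition RE = (Z - Z*) / Z*,
    where Z* is the best-known makespan; negative values occur when a
    run improves on the reference value.
    """
    errors = [(z - z_star) / z_star for z in makespans]
    return {
        "BRE": min(errors),                   # best relative error
        "WRE": max(errors),                   # worst relative error
        "ARE": sum(errors) / len(errors),     # average relative error
        "Z_Avg": statistics.mean(makespans),  # average makespan over runs
        "SD": statistics.pstdev(makespans),   # spread of makespans across runs
    }

# Illustrative call with made-up makespans for a hypothetical instance:
print(summarize_runs([1109, 1110, 1111, 1112], z_star=1109))
```

Whether a population or sample standard deviation was used in the reported SD column is not stated in this excerpt, so the sketch simply picks one for concreteness.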
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
