1. Introduction
The permutation flow shop scheduling problem (PFSSP) is a critical problem that must be solved accurately and efficiently to minimize the makespan criterion. Solving it involves finding the near-optimal permutation of n jobs to be processed sequentially on a set of m machines so as to minimize the makespan, i.e., the completion time of the last job on the last machine [1]. This problem has significant applications in several fields, especially in industries such as computing design, procurement, and information processing. Because of its practical importance and its nature, which is normally classified as nondeterministic polynomial time (NP)-hard [1,2,3,4,5,6], several exact, heuristic, and meta-heuristic techniques have been extensively employed to solve it. Some of them are surveyed in the rest of this section.
Exact methods such as linear programming [7] and branch and bound [8] can attain the optimal value for small-scale problems, but for medium- and large-scale problems their performance degrades significantly and their computational cost grows exponentially. Therefore, heuristic algorithms have been designed to overcome this expensive computational cost and high dimensionality. Among the heuristic algorithms, the Nawaz-Enscore-Ham (NEH) algorithm proposed by Nawaz et al. [9] for solving the PFSSP may be the most effective, and its results are comparable with those of the meta-heuristic algorithms [10,11,12,13], which are used for solving several optimization problems in a reasonable time. Broadly speaking, the image segmentation problem is an indispensable process in image processing, so several image segmentation methods have been suggested, such as clustering, fractal-wavelet techniques [14,15,16,17,18,19,20], region growing, and thresholding. Among those techniques, threshold-based segmentation is the most effective, because metaheuristic algorithms can segment images based on this technique with high accuracy [21].
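To make the NEH heuristic concrete, the sketch below implements its two steps, ordering jobs by decreasing total processing time and inserting each job at the position that minimizes the partial makespan, on a made-up 3-job, 2-machine instance; the helper names and the instance are ours, not from the original paper.

```python
def makespan(perm, pt):
    """Completion time of the last scheduled job on the last machine.
    perm: job permutation; pt[job][machine]: processing times."""
    m = len(pt[0])
    c = [0] * m                      # c[j]: current completion time on machine j
    for job in perm:
        c[0] += pt[job][0]
        for j in range(1, m):
            c[j] = max(c[j], c[j - 1]) + pt[job][j]
    return c[-1]

def neh(pt):
    """NEH heuristic: sort jobs by decreasing total processing time, then
    insert each job at the position minimizing the partial makespan."""
    order = sorted(range(len(pt)), key=lambda job: -sum(pt[job]))
    seq = [order[0]]
    for job in order[1:]:
        candidates = (seq[:pos] + [job] + seq[pos:] for pos in range(len(seq) + 1))
        seq = min(candidates, key=lambda s: makespan(s, pt))
    return seq

pt = [[3, 2], [1, 4], [2, 1]]        # hypothetical 3 jobs x 2 machines
print(neh(pt), makespan(neh(pt), pt))
```

With n jobs, NEH performs O(n^2) insertions, each requiring a makespan evaluation, which is what makes it cheap enough to serve as an initializer inside meta-heuristics.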
The particle swarm optimization (PSO)-based memetic algorithm (MA) [22], namely PSOMA, has been proposed for tackling the PFSSP in an attempt to find the near-optimal job permutation that minimizes the maximum completion time. In detail, to adapt PSOMA for solving the PFSSP, the authors used a ranked-order rule to convert the continuous values produced by the standard algorithm into discrete ones. In addition, to improve the quality and diversity of the initial solutions, the NEH algorithm was used. Furthermore, to balance the exploration and exploitation operators, a local search operator was applied to some solutions selected by the roulette wheel mechanism with a specific probability. Ultimately, to avoid becoming stuck in local minima, PSOMA used simulated annealing with multiple neighborhood search strategies. It is worth mentioning that local search has been combined with PSO for tackling several optimization problems, which confirms that local search has a significant influence on performance after integration; some of those works are comprehensive learning PSO with a local search for multimodal functions [23], PSO with local search [24], and many others [2,25,26,27,28,29].
The cuckoo search-based memetic algorithm (HCS) [30] has been adapted using the largest ranked value rule for tackling the PFSSP. Besides, HCS used the NEH algorithm to initialize the population with better quality and diversity. Furthermore, this algorithm used a fast local search to accelerate the convergence speed in an attempt to improve its exploitation capability. It was compared with a number of optimization algorithms, namely a hybrid genetic algorithm (HGA), particle swarm optimization with variable neighborhood search, and the differential-evolution-based hybrid algorithm (HDE), on four benchmark instances to verify its efficacy.
The hybrid discrete artificial bee colony algorithm (HDABC) [31] has been adapted for tackling the PFSSP. In HDABC, the initialization step was performed using the Greedy Randomized Adaptive Search Procedure (GRASP) together with the NEH algorithm to obtain better quality and diversity. After that, discrete operators such as insert, swap, GRASP, and path relinking are used to generate new solutions. Ultimately, a local search strategy was applied to improve the quality of the best-so-far solution in an attempt to improve the search ability of HDABC. HDABC was extensively compared with a number of algorithms: the ant colony system (ACS), PSO embedded with variable neighborhood search (PSOVNS), and PSOMA.
Xie et al. [32] developed a hybrid teaching-learning-based optimization (HTLBO) for tackling the PFSSP. Due to the continuous nature of teaching-learning-based optimization, the largest ranked value rule is used to make it applicable to the PFSSP. In addition, HTLBO used simulated annealing as a local search to improve the quality of the obtained solutions. The differential evolution-based memetic algorithm (ODDE) [33] has been adapted using the largest ranked value rule for tackling the PFSSP. In the initialization step, ODDE used the NEH algorithm to initialize the solutions with a certain quality and diversity. In ODDE, an approach based on population diversity was used to tune the crossover rate, and opposition-based learning was used to accelerate the convergence speed. Finally, ODDE used a local search strategy to avoid becoming stuck in local minima by improving the best-so-far solution.
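The surveyed hybrids differ in the details of their local search, but a common shape is insertion-based improvement of a permutation: remove a job, reinsert it at the best position, and keep the move only if it improves the cost. The sketch below is our own illustration of that pattern, not the exact procedure of any paper above; the cost function in the demo is a stand-in for makespan.

```python
import random

def insertion_local_search(perm, cost, tries=50, rng=random):
    """Repeatedly remove a random job and reinsert it at its best
    position; accept the move only if it strictly improves the cost."""
    best = perm[:]
    best_cost = cost(best)
    for _ in range(tries):
        p = best[:]
        job = p.pop(rng.randrange(len(p)))
        candidates = [p[:k] + [job] + p[k:] for k in range(len(p) + 1)]
        cand = min(candidates, key=cost)
        if cost(cand) < best_cost:
            best, best_cost = cand, cost(cand)
    return best

# Demo with a toy cost (total displacement from the identity permutation).
rng = random.Random(1)
toy_cost = lambda s: sum(abs(v - i) for i, v in enumerate(s))
print(insertion_local_search([4, 3, 2, 1, 0], toy_cost, rng=rng))
```

Because a move is accepted only when it improves the cost, the returned permutation is never worse than the input, which is why such a step can safely be applied to the best-so-far solution each generation.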
The whale optimization algorithm (WOA), integrated with a local search applied to the best solution and with mutation operators, was suggested by Abdel-Basset et al. [34] as a new variant, namely HWA, for tackling the PFSSP. Broadly speaking, HWA used the NEH algorithm in the initialization step to create 10% of the population with a certain diversity and quality in an attempt to avoid becoming stuck in local minima and reach better outcomes. Afterward, to make WOA applicable to the PFSSP, the largest ranked value (LRV) rule was used to convert the solutions it generates into job permutations relevant to this problem. Furthermore, it was integrated with two operators to improve diversity and avoid the local minima problem: swap mutation and insert-reversed block. Finally, to accelerate the convergence speed toward the optimal solution, it was integrated with a fast local search strategy applied to the best-so-far solution.
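The LRV rule used by HWA and the algorithms below is simple to state: rank the components of a continuous position vector in descending order and read the resulting indices as a job permutation. A minimal sketch (the sample vector is made up):

```python
def lrv(position):
    """Largest ranked value rule: the job whose component is largest is
    scheduled first, the second largest second, and so on."""
    return sorted(range(len(position)), key=lambda i: -position[i])

print(lrv([0.31, 1.25, -0.46, 0.90]))   # job permutation [1, 3, 0, 2]
```

Since the mapping only depends on the ordering of the components, any continuous update rule (PSO, DE, GNDO, ...) can keep operating on real vectors while the schedule is decoded on demand.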
In [35], Mishra developed a discrete Jaya optimization algorithm for tackling the PFSSP. Because the standard Jaya algorithm was designed for continuous optimization problems, in contrast to the PFSSP, which is normally classified as a discrete one, the largest order value rule was used to convert the continuous values into discrete ones relevant to the PFSSP. This discrete Jaya algorithm was verified on a set of well-known benchmarks and compared extensively, under various statistical analyses, with the hybrid genetic algorithm (HGA, 2003), hybrid differential evolution (HDE, 2008), hybrid particle swarm optimization (HPSO, 2008), teaching-learning-based optimization (TLBO, 2014), and the hybrid backtracking search algorithm (HBSA, 2015), which are not up to date, so its performance against the optimization algorithms published over the last three years is unknown.
The whale optimization algorithm (WOA) [36], improved using chaos maps and then integrated with the NEH algorithm, has been proposed for tackling the PFSSP. In detail, the NEH algorithm and the largest ranked value rule are used in the initialization step of the chaos WOA (CWA) to initialize the solutions with better quality. After that, CWA used chaotic maps, assisted by two other operators, a cross operator and reversal-insertion, to avoid becoming stuck in local minima, accelerate the convergence speed, and improve its exploration capability. Ultimately, CWA used a local search strategy to improve the quality of the best-so-far solution and thereby its exploitation capability. This algorithm was evaluated on various benchmarks and compared with various optimization algorithms to check its superiority.
Further, a new discrete multiobjective approach based on the fireworks algorithm, abbreviated DMOFWA, has recently been proposed for solving the multi-objective flow shop scheduling problem with sequence-dependent setup times (MOFSP-SDST) [37]. In this approach, two machine learning techniques have been integrated: the first, opposition-based learning, was used to improve the exploration operator of the standard algorithm and avert entrapment in local minima, and the second, clustering analysis, was used to cluster the fireworks individuals.
To overcome the expensive computational cost and local minima problems from which most of the above-described algorithms might suffer, we developed a novel discrete optimization algorithm to tackle the PFSSP in a reasonable time compared to some existing techniques. Recently, a new optimization algorithm based on normal distribution theory, namely the generalized normal distribution optimization (GNDO) algorithm, was developed by Zhang [38] for tackling the parameter extraction problem of the single-diode and double-diode photovoltaic models. Due to its high ability to estimate the parameter values that minimize the error between the measured and estimated I-V curves, in this paper we investigate its performance for tackling the PFSSP. To make GNDO applicable to the PFSSP, which is a discrete problem in contrast to the continuous problems tackled by the standard GNDO, the largest ranked value (LRV) rule is used to convert the continuous values into job permutations suitable for the PFSSP. Furthermore, this discrete GNDO using the LRV rule is integrated with a local search strategy to avoid becoming stuck in local minima and reach better outcomes; this version is named hybrid GNDO (HGNDO). In another attempt to improve on HGNDO, it was integrated with a swap mutation operator applied to the best-so-far solution to promote the exploitation capability and reach better outcomes; this version is abbreviated HIGNDO. Finally, to further improve the quality of the solutions, the local search strategy is enhanced with the scramble mutation operator and then integrated with HIGNDO to produce a new version named IHGNDO. The proposed algorithms, HGNDO, HIGNDO, and IHGNDO, are verified on 41 well-known instances widely used in the literature and compared with a number of recent well-established algorithms under various performance metrics to verify their efficacy. The experimental results affirm the superiority of IHGNDO and HIGNDO over the other algorithms in terms of standard deviation, computational cost, and makespan. Generally, our contributions in this work include the following:
Develop a discrete GNDO using the LRV rule for the PFSSP.
Improve GNDO using the swap mutation operator to avoid becoming stuck in local minima.
Enhance the local search strategy using the scramble mutation operator to accelerate the convergence speed toward the near-optimal solution.
Integrate both the improved and the standard local search strategy with the improved GNDO and the discrete GNDO for tackling the PFSSP.
The experimental findings show that IHGNDO and HIGNDO are better in terms of standard deviation, computational cost, and final accuracy.
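The two mutation operators named in the contributions act on a job permutation as follows; this is a minimal sketch under our own naming, with the random generator passed in explicitly for reproducibility.

```python
import random

def swap_mutation(perm, rng):
    """Exchange the jobs at two randomly chosen positions."""
    p = perm[:]
    i, j = rng.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def scramble_mutation(perm, rng):
    """Shuffle the jobs inside a randomly chosen sub-sequence."""
    p = perm[:]
    i, j = sorted(rng.sample(range(len(p)), 2))
    block = p[i:j + 1]
    rng.shuffle(block)
    p[i:j + 1] = block
    return p

rng = random.Random(7)
print(swap_mutation([0, 1, 2, 3, 4], rng))
print(scramble_mutation([0, 1, 2, 3, 4], rng))
```

Swap perturbs exactly two positions, a small step suited to refining a good solution, while scramble can rearrange a whole sub-sequence, a larger step suited to escaping local minima; both always return a valid permutation.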
This work is organized as follows: Section 2 explains the PFSSP; Section 3 describes the standard generalized normal distribution optimization algorithm; Section 4 explains the proposed algorithms; Section 5 includes the results and discussion; and Section 6 presents our conclusions and future work.
2. Description of the Permutation Flow Shop Scheduling Problem
Assuming that n jobs run sequentially over m machines and the goal is the permutation that minimizes the makespan, this problem is known as the permutation flow shop scheduling problem (PFSSP). The makespan is measured in time units such as seconds or milliseconds. Therefore, to solve this problem, the best permutation π = (π_1, π_2, ..., π_n) that minimizes the completion time of the last job on the last machine must be accurately extracted. In general, the following points summarize the PFSSP: (1) on each machine, each job i could run just once, where n is the number of jobs; (2) only one job can be executed on a machine j at a time, with processing time PT(i, j), where m is the number of machines; (3) each job i has a completion time on a machine j, symbolized as c(i, j); (4) the processing time of each job comprises the set-up time of the machine and the running time; and (5) each job starts at time 0. Mathematically, the completion times of a permutation π could be modeled as follows:

c(π_1, 1) = PT(π_1, 1)
c(π_i, 1) = c(π_{i-1}, 1) + PT(π_i, 1),  i = 2, ..., n
c(π_1, j) = c(π_1, j-1) + PT(π_1, j),  j = 2, ..., m
c(π_i, j) = max(c(π_{i-1}, j), c(π_i, j-1)) + PT(π_i, j),  i = 2, ..., n; j = 2, ..., m

In our work, the objective function used by the suggested algorithms to evaluate each solution is described as follows:

C_max(π) = c(π_n, m)

where π is the job permutation of the ith solution. This objective function is used to evaluate each permutation extracted by the algorithms, and the one with the smaller makespan is considered the best.
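The recursion for c(i, j) translates directly into code. A minimal sketch that builds the full completion-time table for a made-up 3-job, 2-machine instance (the instance and helper name are ours):

```python
def completion_times(perm, pt):
    """Completion-time table c, where c[i][j] is the completion time of the
    ith scheduled job on machine j; pt[job][machine] are processing times."""
    n, m = len(perm), len(pt[0])
    c = [[0] * m for _ in range(n)]
    for i, job in enumerate(perm):
        for j in range(m):
            machine_free = c[i - 1][j] if i > 0 else 0   # machine j frees up
            job_arrives = c[i][j - 1] if j > 0 else 0    # job leaves machine j-1
            c[i][j] = max(machine_free, job_arrives) + pt[job][j]
    return c

pt = [[3, 2], [1, 4], [2, 1]]          # hypothetical 3 jobs x 2 machines
c = completion_times([0, 1, 2], pt)
print(c[-1][-1])                       # makespan C_max = c(pi_n, m) = 10
```

Evaluating one permutation thus costs O(n * m), which is the inner loop that every meta-heuristic for the PFSSP pays per candidate solution.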
5. Results and Comparisons
In our experiments, the proposed algorithms are extensively validated on three benchmarks commonly used in the literature: (1) the Carlier dataset, comprising eight instances with between 7 and 14 jobs and between 4 and 9 machines [39]; (2) the Reeves dataset, with 21 instances, where the number of jobs ranges between 20 and 75 and the number of machines between 5 and 20 [40]; and (3) the Heller dataset, which involves two instances with 20 and 100 jobs, respectively, and 10 machines [41]. Those datasets are taken from [42]; their characteristics, including the number of jobs and machines and the best-known makespan, are given in Table 3. Furthermore, the proposed algorithms are extensively compared with a number of well-established optimization algorithms: the sine cosine algorithm (SCA) [43], salp swarm algorithm (SSA) [44], whale optimization algorithm (WOA) [34], genetic algorithm (GA), equilibrium optimization algorithm (EOA) [45], marine predators optimization algorithm (MPA) [42], and a hybrid tunicate swarm algorithm (HTSA) [46], each integrated with the local search strategy to ensure a fair comparison, in terms of six performance metrics: best relative error (BRE), worst relative error (WRE), average relative error (ARE), average makespan (Avg), standard deviation (SD), and computational cost (time in milliseconds (ms)). BRE indicates how close the best-obtained makespan C_best is to the best-known one C* and is formulated as follows:

BRE = 100 * (C_best - C*) / C*

Meanwhile, WRE, calculated using the next equation, is a metric used to assess the remoteness of the worst-obtained makespan C_worst from the best-known one:

WRE = 100 * (C_worst - C*) / C*

Regarding ARE, it shows the relative error of the average makespan C_avg over 30 independent runs with respect to the best-known one. Mathematically, ARE is modeled as follows:

ARE = 100 * (C_avg - C*) / C*
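The three errors share one formula applied to the best, worst, and average makespan; a small sketch computing them from a list of repeated-run makespans (the sample values are made up):

```python
def relative_errors(makespans, best_known):
    """BRE, WRE and ARE (in %) of repeated-run makespans
    against the best-known makespan of the instance."""
    best, worst = min(makespans), max(makespans)
    avg = sum(makespans) / len(makespans)
    pct = lambda v: 100.0 * (v - best_known) / best_known
    return pct(best), pct(worst), pct(avg)

bre, wre, are = relative_errors([7038, 7100, 7166], 7038)
print(round(bre, 2), round(wre, 2), round(are, 2))   # 0.0 1.82 0.9
```

A BRE of 0 means the best-known makespan was reached at least once; WRE and ARE then summarize how far the remaining runs drift from it.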
After integrating the local search, the algorithms used in our experiments are named hybrid SCA (HSCA) [43], hybrid SSA (HSSA) [44], hybrid WOA (HWOA) [34], hybrid GA (HGA), hybrid EOA (HEOA) [45], hybrid MPA (HMPA) [42], and hybrid TSA (HTSA) [46]. The parameters of those algorithms were assigned after extensive experiments. The EOA has two parameters, a1 (exploration factor) and a2 (exploitation factor), which need to be accurately estimated; after several experiments to extract their optimal values, we noted that all observed values for a2 converged significantly, so it is set to 1 as in the standard algorithm, while a1, which is responsible for the exploration operator, is assigned a value of 2, estimated after several experiments, as pictured in Figure 3a. The SSA is a self-adaptive algorithm, since it has no parameters to be assigned before beginning the optimization process. On the other hand, the HSCA has one parameter, a, responsible for determining where the algorithm will search for the near-optimal solution; its value was set to 3, as shown in Figure 3b. The HMPA has one parameter, P, called the scaling factor, which is set in our experiments as cited in the standard algorithm, because we found that this parameter has no effect on the performance of the algorithm while solving this problem. Finally, the HTSA has two effective parameters, Pmin and Pmax, representing the initial and subordinate speeds for social interaction; they are assigned 1 and 2, respectively, as described in Figure 3c,d, which depict the outcomes of their tuning using various values. The HGA used values of 0.02 and 0.8 for the mutation and crossover probabilities, respectively, as recommended in [40]. All algorithms were executed under these parameters 30 independent times within the same environment, with a maximum number of iterations of 200 and a population size of 50.
5.1. Comparison under Carlier
This section validates the performance of the algorithms on the Carlier instances to show the efficacy of each one. Each algorithm is run 30 independent times on each of the eight instances of the Carlier dataset, and then the various performance metrics are calculated and presented in Table 4, which shows the superiority of IHGNDO, HIGNDO, and HGNDO on most test cases. Broadly speaking, IHGNDO could reach the best-known value for all instances, attaining a value of 0 for ARE, WRE, BRE, and SD, in addition to outperforming the others in the time metric on two instances. Meanwhile, HIGNDO could attain the best-known values of seven instances within all independent runs, while failing to reach the best-known value of the Car04 instance in all runs; in addition, HIGNDO was the best in the time metric on five instances. Generally, IHGNDO occupies the first rank for the makespan metric and the second rank, after HIGNDO, in terms of CPU time. Additionally, Figure 4 presents the averages of ARE, WRE, and BRE over all instances, which show that IHGNDO occupies the first rank for WRE and ARE, while it is competitive with the others in terms of BRE. Regarding the SD, average makespan, and time metrics depicted in Figure 5, HIGNDO comes first, before IHGNDO, for the time metric, while IHGNDO is the best for the SD and Avg metrics. Ultimately, Figure 6, Figure 7 and Figure 8 compare the makespan values obtained by the different algorithms using boxplots. Those figures show the superiority of IHGNDO in terms of the average makespan. From the above analysis, IHGNDO achieves positive outcomes in a reasonable time, which makes it a strong alternative to the existing algorithms developed for tackling the PFSSP.
5.2. Comparison under Reeves
In this subsection, the proposed algorithms are verified on the Reeves instances and compared with some state-of-the-art algorithms to show their superiority. After running the algorithms and computing the metrics, the various metric values are introduced in Table 5 and Table 6 to observe the performance of the algorithms. Those tables show the superiority of the proposed algorithms, IHGNDO, HIGNDO, and HGNDO, for most performance metrics on most test cases. To confirm that, Figure 9 and Figure 10 show the average of each performance metric over all instances of the Reeves benchmark; those figures elaborate the superiority of HIGNDO over the others in terms of BRE, ARE, and average makespan, while IHGNDO outperforms in terms of SD and comes in sixth rank for the time metric. Since the proposed algorithms outperform the others in terms of final accuracy in a reasonable time, they are a strong alternative to the existing algorithms adapted for tackling the same problem. In addition, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19 show the boxplots of the makespan values obtained by the various algorithms on the instances from reC01 to reC17, which confirm the superiority of IHGNDO and HIGNDO in comparison to the others.
5.3. Comparison under Heller
Here, the proposed algorithms are compared with the other algorithms on the Heller instances. Table 7 exposes various performance metric values, which show the superiority of IHGNDO in terms of ARE and Avg for the Hel1 instance and its competitiveness with HIGNDO on Hel2 in terms of WRE, ARE, Time, SD, and Avg. Furthermore, Figure 20 and Figure 21 show the averages of WRE, ARE, SD, Time, average makespan, and BRE; those figures show that IHGNDO is the best in terms of ARE, WRE, and average makespan; HIGNDO is superior for the Time and SD metrics; and all algorithms are competitive for the BRE metric. Figure 22 and Figure 23 depict the boxplots of the makespan values produced in 30 independent runs on Hel1 and Hel2 using the various optimization algorithms. From those figures, it is concluded that IHGNDO is the best.
6. Conclusions and Future Work
As a new attempt to produce an algorithm that can tackle the permutation flow shop scheduling problem (PFSSP), in this paper we investigate the performance of a novel optimization algorithm, namely generalized normal distribution optimization (GNDO), for solving this problem. Due to the continuous nature of GNDO and the discreteness of the PFSSP, the largest ranked value (LRV) rule is used to make GNDO applicable to this problem. In an attempt to improve the performance of the discrete GNDO, a new version, namely hybrid GNDO (HGNDO), is developed by applying a local search strategy to improve the quality of the best-so-far solution. In addition, GNDO is improved by also applying the swap mutation operator to the best-so-far solution to find better solutions, and this improvement is integrated with HGNDO to produce a new version, namely HIGNDO. Finally, the scramble mutation operator is integrated with the local search strategy to exploit each attempt made by this local search to improve the best-so-far solution as much as possible; this local search is used with the GNDO improved by the swap mutation operator to produce a strong version, abbreviated IHGNDO, for tackling the PFSSP. To validate the performance of the algorithms accurately, 41 common instances widely used in the literature are employed. Additionally, to check their superiority, the proposed algorithms are extensively compared with some well-established, recently published optimization algorithms using various performance metrics. The findings show that HIGNDO and IHGNDO are superior in terms of standard deviation, CPU time, and makespan. They also show that IHGNDO is better than HIGNDO for most performance metrics, which confirms the effectiveness of our improvement to the local search strategy. Our future work involves applying the proposed algorithms to other types of flow shop scheduling problems.